Instruction: Are your arteries older than your age? Abstracts: abstract_id: PUBMED:12242525 Distributions of diffuse intimal thickening in human arteries: preferential expression in atherosclerosis-prone arteries from an early age. Diffuse intimal thickening (DIT) is a thickened intima present in human arteries before atherosclerosis develops and is considered to be related to atherogenesis. The purpose of this study was to clarify the systemic and age distributions of DIT. Coronary, cerebral, carotid, subclavian, iliac and abdominal organ arteries and the aorta were examined in 72 autopsy cases (aged 36 weeks of gestation to 30 years at death). DIT was found in the coronary arteries and aorta from 36 weeks of gestation and the first year of life, respectively. The intima/media (I/M) ratio of coronary arteries showed an age-dependent increase and was much greater than that of other muscular arteries, i.e., intracranial and extraparenchymal cerebral arteries and abdominal organ arteries. The aorta also demonstrated age-dependent as well as site-dependent increases in I/M ratio; the more distal the segments, the greater the ratio. Consequently, the abdominal aorta had the largest I/M ratio within the aorta. Other elastic arteries, i.e., carotid, subclavian and iliac arteries, showed trends similar to the distal portions of the aorta. Thus, DIT was strongly expressed from an early age in arteries that are considered to be prone to atherosclerosis. These findings suggest that the development of atherosclerosis depends at least partly on the degree of DIT. abstract_id: PUBMED:37553781 Older and fearing new technologies? The relationship between older adults' technophobia and subjective age. Objective: This article aimed to identify different technophobia subgroups of older adults and examine the associations between these distinct subgroups and subjective age. Methods: A sample of 728 retired older adults over the age of 55 was recruited in China. Latent profile analysis was conducted to identify technophobia subgroups using three indicators: techno-anxiety, techno-paranoia and privacy concerns. Analysis of Variance was applied to determine whether a relationship exists between the identified technophobic subgroups and subjective ages (Feel-age, Look-age, Do-age and Interests-age). Results: Four technophobia types were identified: 'low-technophobia' (24.59%), 'high-privacy concerns' (26.48%), 'medium-technophobia' (28.38%), and 'high-technophobia' (20.55%). Privacy concerns play a major role for the older adults who belong to the 'high-privacy concerns' and 'high-technophobia' profiles (47.03%). A series of ANOVAs showed that older adults in the 'low-technophobia' profile were more likely to report younger subjective ages on the feel-age and interests-age dimensions. Conclusion: The majority of Chinese older adults do not suffer from high levels of technophobia, but do have concerns about privacy issues. The results also suggest that a younger subjective age might have a protective effect for older adults with technophobia. Future technophobia interventions should focus on breaking age stereotypes about older adults and technology. abstract_id: PUBMED:37975423 Commentary on: Increased stiffness of omental arteries from late pregnant women at advanced maternal age. Worldwide, pregnancy at age 35 or older, termed 'advanced maternal age (AMA)', is increasing exponentially.
As the incidence of pregnancy at AMA has increased, a growing body of evidence has suggested that AMA is also associated with increased risk for adverse maternal and fetal outcomes outside of genetic anomalies. Importantly, despite the mounting evidence and the increased global risk of adverse perinatal outcomes observed, few studies have examined the potential mechanisms underlying this elevated risk in pregnant people ≥35 years of age. Wooldridge and colleagues begin to address this gap in the literature. In their recent report, they examine vessel stiffness in omental resistance vessels obtained from pregnant individuals ≥35 years of age compared with pregnant individuals <35 years of age. Omental arteries were isolated and assessed via pressure myography (mechanical properties) and histological analysis for collagen and elastin content. Overall, the findings from this investigation indicate that maternal resistance arteries collected from women of AMA were less compliant and had less elastin than arteries obtained from women <35 years of age, suggesting that maternal resistance vessel stiffening in AMA may contribute to increased risk of adverse pregnancy outcomes. The authors should be commended for completing these studies in human resistance vessels, which now open new avenues for investigation and provoke a cascade of questions related to maternal cardiovascular adaptations to pregnancy in women ≥35 years of age. abstract_id: PUBMED:26898581 Side predilections of offending arteries in hemifacial spasm. The side predilections of various offending arteries in hemifacial spasm (HFS) have not been well studied. The relationship between clinical and radiological features of HFS and offending arteries was investigated in the present study. A retrospective analysis of 370 patients who underwent microvascular decompression for HFS was performed. The patients were divided into four groups based on the offending arteries, namely anterior inferior cerebellar artery (AICA), posterior inferior cerebellar artery (PICA), vertebral artery, and multiple offending arteries. Affected side, age at onset, presence of hypertension, and sigmoid sinus area and dominance were compared between groups. The mean age of patients with a left HFS was significantly greater than that of patients with a right HFS (P=0.009). The AICA affected primarily the right side and PICA and multiple offending arteries the left side (P<0.001). Side of sigmoid sinus dominance was significantly different among groups (P<0.001). The offending arteries in HFS may be related to these differences. AICA was associated with right-sided symptoms, younger age at onset, and presence of left dominant sigmoid sinus, while PICA was associated with left-sided symptoms, older age at onset, and smaller right sigmoid sinus area. abstract_id: PUBMED:11692003 Age-related effects on atherogenesis and scavenger enzymes of intracranial and extracranial arteries in men without classic risk factors for atherosclerosis. Background And Purpose: Atherosclerosis occurs later and is less extensive in intracranial arteries than in extracranial arteries. However, the mechanisms responsible are poorly understood. A previous study has suggested a better antioxidant protection of intracranial arteries.
Methods: To assess the influence of age on arterial activity of antioxidant enzymes and atherogenesis, we compared intracranial and extracranial arteries of humans of different ages who retrospectively lacked confounding classic risk factors (48 premature fetuses aged 6.4+/-0.8 months [mean+/-SD], 58 children aged 7.9+/-3.8 years, 42 adults aged 42.5+/-5.1 years, and 40 elderly subjects aged 71.8+/-3.4 years; all males). Lesions were quantified by computer-assisted imaging analysis of sections of the middle cerebral and basilar arteries, the left anterior descending coronary artery, the common carotid artery, and the abdominal aorta. Macrophages, apolipoprotein B, oxidized LDL, and matrix metalloproteinase-9 in lesions were determined by immunocytochemistry. The effect of aging on atherogenesis was then compared with that on the activity of 4 antioxidant enzymes in the arterial wall. Results: Atherosclerosis was 6- to 19-fold greater (P<0.01) in extracranial arteries than in intracranial arteries, and it increased linearly with age. Intracranial arteries showed significantly greater antioxidant enzyme activities than did extracranial arteries. However, the antioxidant protection of intracranial arteries decreased significantly in older age, coinciding with a marked acceleration of atherogenesis. An increase in matrix metalloproteinase-9 protein expression and in gelatinolytic activity consistent with the degree of intracranial atherosclerosis was also observed. Conclusions: These results suggest that a greater activity of antioxidant enzymes in intracranial arteries may contribute to their greater resistance to atherogenesis and that with increasing age intracranial arteries respond with accelerated atherogenesis when their antioxidant protection decreases relatively more than that of extracranial arteries. abstract_id: PUBMED:6116326 Age-related alterations in epicardial arteries of spontaneously hypertensive rats. Proximal portions of the left coronary arteries were examined microscopically in aging female normotensive Wistar-Kyoto (WKY) and spontaneously hypertensive rats (SHR). The age-related intimal alterations in SHR were largely limited to endothelial cells, which demonstrated a proliferation of organelles (most notably Weibel-Palade bodies). In the media, degenerative alterations appeared to be most marked near the medio-adventitial junction. Compared to WKY and increasing with age, the media of the SHR epicardial artery demonstrated an accumulation of extracellular elements that included basement membrane-like material, collagen fibers and debris. Complex carbohydrates, as determined with the silver methenamine reaction, were noted to accumulate in a lamellar fashion. Smooth muscle cells demonstrated an age-related tendency (which was exaggerated in SHR) toward development of invaginations and irregular profiles. These observations indicate that age-related structural alterations in epicardial arteries of SHR are progressive, and they support the concept that structural alterations in SHR coronary arteries may represent accelerated aging phenomena. abstract_id: PUBMED:18500453 Age-dependent increases of calcium, phosphorus, and sodium in the uterine arteries of Thai. To elucidate compositional changes of the uterine artery with aging, the authors investigated age-related changes of elements in the uterine arteries of Thai. After ordinary dissection by medical students at Chiang Mai University was finished, the uterine arteries were resected from Thai subjects. 
Thai subjects ranged in age from 27 to 86 years (average age = 63.3 +/- 17.7 years). The element content of the uterine arteries was analyzed by inductively coupled plasma-atomic emission spectrometry. It was found that the Ca, P, and Na contents increased progressively in the uterine arteries of Thai with aging. A significant accumulation of Ca and P in the uterine arteries of Thai was found in patients in their sixties, and the accumulation increased markedly in those in their seventies. Regarding the uterine arteries in subjects older than 60 years, the extent of accumulation of Ca and P in the uterine arteries of Thai was one half of that in the uterine arteries of Japanese. Regarding the relationships among elements, extremely significant direct correlations were found among the contents of Ca, P, Mg, Zn, and Na in the uterine arteries of Thai. As Ca increased in the uterine arteries of Thai, P, Mg, Zn, and Na increased simultaneously in the arteries. abstract_id: PUBMED:18285672 Normal blood flow velocities of basal cerebral arteries decrease with advancing age: a transcranial Doppler sonography study. Transcranial Doppler sonography (TCD) is a non-invasive diagnostic tool enabling evaluation of blood flow characteristics of basal intracerebral vessels via thin calvarian regions. Several factors may affect the normal values of cerebral hemodynamic parameters, and standard reference values for each laboratory are needed for precise interpretation of the results. The aims of this study were to determine normal values of flow velocities of basal cerebral arteries of our TCD laboratory, and to study the influence of age and gender on normal values. We studied 63 healthy volunteers (30 males and 33 females; age range, 5-69 years old) with TCD with a 2-MHz transcranial probe. The subjects were divided into 7 age groups: 5-10 years, 11-20 years, 21-30 years, 31-40 years, 41-50 years, 51-60 years and > 60 years. Mean velocity (V mean), peak systolic velocity (PSV), and end-diastolic velocities (EDV) were determined in middle, anterior and posterior cerebral arteries. No significant gender difference was found. However, there was a decrease in blood flow velocities in all vessels with advancing age, which was significant when subjects older than 40 years and ≤40 years old were compared. V mean, PSV and EDV values were highest in the age group of 5-10 years old and lowest in volunteers older than the age of 60 (p < 0.05). In conclusion, flow velocities in basal cerebral arteries range widely and are significantly age-related. Age matching of TCD data is a requirement for clinically relevant conclusions. abstract_id: PUBMED:37493195 Increased stiffness of omental arteries from late pregnant women at advanced maternal age. Advanced maternal age (≥35 years) is a risk factor for poor pregnancy outcomes. Pregnancy requires extensive maternal vascular adaptations, and with age, our blood vessels become stiffer and change in structure (collagen and elastin). However, the effect of advanced maternal age on the structure of human resistance arteries during pregnancy is unknown. As omental resistance arteries contribute to blood pressure regulation, assessing their structure in pregnancy may inform on the causal mechanisms underlying pregnancy complications in women of advanced maternal age. Omental fat biopsies were obtained from younger (<35 years) or advanced maternal age (≥35 years) women during caesarean delivery (n = 7-9/group).
Arteries (200-300 µm) were isolated and passive mechanical properties (circumferential stress and strain) assessed with pressure myography. Collagen (Masson's Trichrome) and elastin (Verhoeff) were visualized histologically and % positively-stained area was assessed. Median maternal age was 32 years (range 25-34) for younger, and 38 years (range 35-42) for women of advanced maternal age. Circumferential strain was lower in arteries from advanced maternal age versus younger women but circumferential stress was not different. Omental artery collagen levels were similar, while elastin levels were lower with advanced maternal age versus younger pregnancies. The collagen:elastin ratio was greater in arteries from advanced maternal age versus younger women. In conclusion, omental arteries from women of advanced maternal age were less compliant with less elastin compared with arteries of younger controls, which may affect how vascular stressors are tolerated during pregnancy. Understanding how vascular aging affects pregnancy adaptations may contribute to better pregnancy outcomes. abstract_id: PUBMED:9874513 Age-related differences in calcium accumulation in human arteries. To elucidate the accumulation of calcium in the human arteries, the calcium contents of the thoracic aorta, coronary, common carotid, basilar, internal thoracic, axillary, radial, femoral, popliteal, and dorsalis pedis arteries were analyzed by inductively coupled plasma atomic emission spectrometry (ICP-AES). The calcium content began to increase in both the thoracic aorta and femoral artery around the age of 50 years (yrs), in the popliteal artery at the age of 60 yrs, in the coronary, basilar and dorsalis pedis arteries at the age of 70 yrs, and in the common carotid artery at the age of 80 yrs. At the same time, the calcium content did not increase significantly in the internal thoracic and radial arteries. Accumulation of calcium in human arteries was classified into two groups: The first is an age-related increase of calcium content in arteries like the thoracic aorta, coronary, common carotid, basilar, axillary, femoral, popliteal and dorsalis pedis arteries. The second is non-age-related, such as the internal thoracic and radial arteries. To examine the localization of this calcium accumulation, the thoracic-aortic and femoral-arterial walls were separated into the three tunicae: intima, media and adventitia. In the case of the thoracic aorta, the accumulation of calcium and phosphorus occurred primarily in the tunica media of the aorta, secondarily in the tunica intima. With regard to the femoral artery, the accumulation of calcium and phosphorus occurred only in the tunica media, only in the tunica intima, or in both the tunicae media and intima. Therefore, the manner of accumulation of calcium and phosphorus in the femoral-arterial wall was different from that in the aortic wall. Comparing the upper and lower limb arteries, the calcium content was found to be higher in the femoral, popliteal, and dorsalis pedis arteries of the lower limb than in the axillary and radial arteries of the upper limb. Answer: The age of your arteries can be influenced by various factors, including the development of atherosclerosis, the presence of diffuse intimal thickening (DIT), and changes in the arterial structure and composition that occur with aging. Diffuse intimal thickening is a thickened intima present in human arteries before atherosclerosis develops and is considered to be related to atherogenesis.
It is found in the coronary arteries and aorta from 36 weeks of gestation and the first year of life, respectively, and shows an age-dependent increase in the intima/media (I/M) ratio, particularly in arteries prone to atherosclerosis (PUBMED:12242525). As individuals age, there are also age-related increases in the content of calcium, phosphorus, and sodium in the uterine arteries (PUBMED:18500453), and calcium accumulation in various arteries such as the thoracic aorta, coronary, and femoral arteries, which can begin to increase around the age of 50 years (PUBMED:9874513). Additionally, the normal blood flow velocities of basal cerebral arteries decrease with advancing age (PUBMED:18285672), and there are age-related structural alterations in epicardial arteries, which may represent accelerated aging phenomena (PUBMED:6116326). Furthermore, in pregnant women of advanced maternal age (≥35 years), omental arteries have been found to be less compliant and have less elastin than arteries obtained from younger women, suggesting that maternal resistance vessel stiffening in advanced maternal age may contribute to increased risk of adverse pregnancy outcomes (PUBMED:37975423; PUBMED:37493195). In summary, if your arteries exhibit advanced atherosclerosis, increased stiffness, higher calcium accumulation, or significant intimal thickening, they may be considered "older" than your chronological age. These changes can contribute to the risk of cardiovascular diseases and are influenced by both age and lifestyle factors.
Instruction: Do childhood cancer survivors with physical performance limitations reach healthy activity levels? Abstracts: abstract_id: PUBMED:23681516 Do childhood cancer survivors with physical performance limitations reach healthy activity levels? Background: The extent to which physical performance limitations affect the ability of childhood cancer survivors to reach healthy activity levels is unknown. Therefore, this study aims to describe the effect of different types of limitations on activity levels in survivors. Procedure: Within the Swiss Childhood Cancer Survivor Study we sent a questionnaire to all survivors (≥16 years) registered in the Swiss Childhood Cancer Registry, who survived >5 years and were diagnosed 1976-2005 aged <16 years. We measured healthy activity levels using international guidelines and assessed different kinds of performance limitations (visual impairment, weight and endurance problems, cardiorespiratory, musculoskeletal, and neurological problems, pain and fatigue syndromes). Results: The sample included 1,560 survivors (75% response rate), of whom 209 (13.5%) reported having performance limitations. Forty-two percent of survivors with limitations reached healthy activity levels, compared to 57% of survivors without limitations. Least active were survivors with vision impairments (25% active), weight and endurance problems (27.3%), cardiorespiratory problems (36.4%), and musculoskeletal problems (43.1%). After adjusting for socio-demographic variables and type of cancer, we found that survivors with limitations were 1.4 (95%CI 1.0-2.0; P = 0.047) times more likely to be inactive. Conclusions: Although many survivors with physical performance limitations maintain healthy activity levels, there is room for improvement. Adapted and targeted physical activity counseling for survivors with performance limitations might help them to raise their level of activity and pursue a healthy lifestyle. abstract_id: PUBMED:34133073 Accuracy of perceived physical activity and fitness levels among childhood cancer survivors. Background: Childhood cancer survivors do not engage in sufficient physical activity and have low fitness levels. Perceived physical activity and fitness levels may influence survivors' engagement in health behaviours. We aimed to investigate survivors' perceptions of physical activity and fitness levels and identify how accurate their perceptions were. We further explored survivors' attitudes toward physical activity, including perceived importance and desire to increase activity levels. Procedure: We recruited 116 childhood cancer survivors (8-18 years) and assessed their perceived physical activity levels using a questionnaire and Godin's Leisure Score Index. Accuracy of their perceptions was established by comparing their perceived physical activity levels with the recommended guidelines. Survivors reported their perceived fitness levels using the International Fitness Scale. We compared survivors' perceptions with their performance on the 6-minute walk test using weighted Cohen's kappa to determine interrater agreement between perceived and objectively measured fitness. Results: Most survivors did not meet the physical activity guidelines (<420 min/week). One-third incorrectly perceived whether their self-reported physical activity levels were appropriate (84% underestimated, while 16% overestimated). Survivors had average fitness and were inaccurate at perceiving their fitness level.
Survivors highly valued the importance of being able to do physical activity, and 89% reported a desire to increase their physical activity. Conclusions: Our results reveal that many survivors are not accurate when perceiving their physical activity and fitness levels. Emphasising the need for objective fitness assessments and patient education in clinical practice may support survivors to accurately perceive their physical activity and fitness levels, thus improving health behaviours. abstract_id: PUBMED:30993451 Physical activity and fitness among pediatric cancer survivors: a meta-analysis of observational studies. Purpose: The number of pediatric cancer survivors has increased dramatically over recent decades. Prior studies involving pediatric cancer survivors have reported reduced physical activity and fitness levels. Thus, the aim of this meta-analysis was to synthesize previous findings on physical activity and fitness levels of pediatric cancer survivors who had completed cancer treatment and were in complete remission, compared with age-matched, non-athletic healthy controls with no history of cancer diagnosis. Methods: Three electronic databases (PubMed, Web of Science, and EBSCO) were searched using a combination of 24 terms. Observational studies examining the post-treatment physical activity and/or fitness levels of pediatric cancer survivors compared with that of non-cancer controls and published in peer-reviewed, English-language journals before August 22, 2018, were eligible. Random-effect models were used in Comprehensive Meta-Analysis software for effect-size estimations of eight studies for physical activity and eight for fitness. Results: The studies included a total sample of 2628; 1413 pediatric cancer survivors and 1215 non-cancer controls. Both physical activity and fitness were significantly lower in childhood cancer survivors than in non-cancer controls (g = -0.889; 95% confidence interval [CI] = -1.648 to -0.130; p = 0.022) and (g = -1.435; 95% CI = -2.615 to -0.225; p = 0.017), respectively, with high heterogeneity. Conclusions: Sequelae of pediatric cancer and its treatment may limit survivors' participation in physical activity and fitness activities. Accentuating the need to incorporate physical activity and fitness into treatment protocols and post-treatment recommendations may improve pediatric cancer survivors' health and well-being. abstract_id: PUBMED:23082232 Physical performance limitations in adolescent and adult survivors of childhood cancer and their siblings. Purpose: This study investigates physical performance limitations for sports and daily activities in recently diagnosed childhood cancer survivors and siblings. Methods: The Swiss Childhood Cancer Survivor Study sent a questionnaire to all survivors (≥ 16 years) registered in the Swiss Childhood Cancer Registry, who survived >5 years and were diagnosed 1976-2003 aged <16 years. Siblings received similar questionnaires. We assessed two types of physical performance limitations: 1) limitations in sports; 2) limitations in daily activities (using SF-36 physical function score). We compared results between survivors diagnosed before and after 1990 and determined predictors for both types of limitations by multivariable logistic regression. Results: The sample included 1038 survivors and 534 siblings.
Overall, 96 survivors (9.5%) and 7 siblings (1.1%) reported a limitation in sports (odds ratio 5.5, 95%CI 2.9-10.4, p<0.001), mainly caused by musculoskeletal and neurological problems. Findings were even more pronounced for children diagnosed more recently (OR 4.8, CI 2.4-9.6 and 8.3, CI 3.7-18.8 for those diagnosed <1990 and ≥ 1990, respectively; p=0.025). Mean physical function score for limitations in daily activities was 49.6 (CI 48.9-50.4) in survivors and 53.1 (CI 52.5-53.7) in siblings (p<0.001). Again, differences tended to be larger in children diagnosed more recently. Survivors of bone tumors, CNS tumors and retinoblastoma and children treated with radiotherapy were most strongly affected. Conclusion: Survivors of childhood cancer, even those diagnosed recently and treated with modern protocols, remain at high risk for physical performance limitations. Treatment and follow-up care should include tailored interventions to mitigate these late effects in high-risk patients. abstract_id: PUBMED:22434746 Prospectively examining physical activity in young adult survivors of childhood cancer and healthy controls. Background: This study compares young adult survivors of childhood cancer (YASCC) and young adults without a history of serious illness/injury on physical activity levels and examines psychological predictors of physical activity in survivors over a 2-month period. Procedure: YASCC participants (n = 117) and healthy controls (n = 148), ages 18-30, recruited from cancer survivorship or primary care clinics, completed self-report measures of physical activity, health problems, psychological distress, and health beliefs (Health Perceptions, Satisfaction with Healthcare, Cognitive Competence, and Autonomy). Survivorship providers completed ratings of health problems and treatment intensity for survivors. Results: Survivors had significantly lower levels of physical activity than controls. Family income, survivor-reported health problems and less positive health beliefs were associated with lower rates of physical activity. Provider-reported survivor health problems and ratings of cancer treatment intensity were not related to survivor physical activity. Less positive survivor beliefs about their cognitive competence predicted survivor physical activity 2 months later after accounting for other pertinent demographic, medical and psychological variables. Conclusions: YASCC were significantly less active than healthy controls. YASCC with more self-identified health problems and negative beliefs about their cognitive competence were less physically active. Beliefs about their health and cognitive competencies may be viable areas for assessment and intervention in order to promote increased engagement in physical activity. abstract_id: PUBMED:32895869 Balance impairment in survivors of pediatric brain cancers: risk factors and associated physical limitations. Purpose: The study aims were to determine the prevalence of balance impairments in adult survivors of pediatric central nervous system (CNS) tumors, and to identify predictors of and limitations associated with balance impairments. Methods: Participants were adult survivors (N = 329) of pediatric CNS tumors. Balance was considered impaired among those with composite scores < 70 on the sensory organization test. Potential predictors of impaired balance were evaluated with generalized linear regression. Multivariable logistic regression was used to evaluate associations between balance impairment and function.
Results: Balance impairment was observed in 48% of survivors, and associated with infratentorial tumor location (OR = 4.0, 95% CI, 2.0-7.6), shunt placement (OR = 3.5, 95% CI, 1.8-6.7), increased body fat percentage (OR = 1.1, 95% CI, 1.0-1.1), hearing loss (OR = 11.1, 95% CI, 5.6-22.2), flexibility limitations (OR = 2.0, 95% CI, 1.0-3.9), peripheral neuropathy (OR = 2.4, 95% CI, 1.2-4.5), and cognitive deficits (OR = 2.2, 95% CI, 1.1-4.7). In adjusted models, impaired balance was associated with limitations in overall physical performance (OR = 3.6, 95% CI, 2.0-6.3), mobility (OR = 2.6, 95% CI, 1.5-4.4), diminished walking endurance (OR = 2.9, 95% CI, 1.7-5.0), and non-independent living (OR = 2.0, 95% CI, 1.0-4.3). Conclusions: Nearly half of adult survivors of pediatric CNS tumors have impaired balance, which is associated with mobility and physical performance limitations. Interventions to address the complex needs of this population should be prioritized. Implications For Cancer Survivors: Survivors with identified risk factors should be closely evaluated for presence of balance impairment. Interventions tailored to improve balance also can positively affect function and mobility in survivors. abstract_id: PUBMED:26973721 Do Childhood Cancer Survivors Meet the Diet and Physical Activity Guidelines? A Review of Guidelines and Literature. Despite advances in cancer treatment, childhood cancer survivors are at higher risk of developing chronic health conditions than peers who have not had cancer. Being overweight or obese adds to the already elevated risk of cardiovascular diseases and metabolic abnormalities. Diet and physical activity are modifiable behaviors that reduce obesity risk and have been shown to improve cancer survival in adult cancer survivors. Specific guidelines have been developed for cancer survivors that provide advice on nutrition, physical activity and weight management following cancer diagnosis and treatment. In this review, we report on existing nutrition and physical activity guidelines for cancer survivors, supplemented by available literature on diet and physical activity status of childhood cancer survivors and their associations with health-related outcomes. The 2012 American Cancer Society (ACS) and the 2008 Children's Oncology Group (COG) guidelines provide similar advice on diet but the ACS guidelines also offer specific advice on physical activity and weight management. Thirty-one observational studies and 18 intervention trials published prior to June 2012 that met the inclusion criteria were reviewed. Results suggest that a high proportion of childhood cancer survivors had poor adherence to dietary and physical activity guidelines. Although findings from existing intervention trials are preliminary due to small sample size, available evidence suggests that exercise intervention is safe and feasible for patients and survivors of childhood cancer. Childhood cancer survivors should be encouraged to engage in physical activity, adopt a healthy diet, and maintain a healthy weight throughout cancer survivorship. abstract_id: PUBMED:19332713 Physical performance limitations in the Childhood Cancer Survivor Study cohort. Physical performance limitations are one of the potential long-term consequences following diagnosis and treatment for childhood cancer. 
The purpose of this review is to describe the risk factors for and the participation restrictions that result from physical performance limitations among childhood cancer survivors who participated in the Childhood Cancer Survivor Study (CCSS). Articles previously published from the CCSS cohort related to physical performance limitations were reviewed and the results summarized. Our review showed that physical performance limitations are prevalent among childhood cancer survivors and may increase as they age. Host-based risk factors for physical disability include an original diagnosis of bone tumor, brain tumor, or Hodgkin's disease; female sex; and an income less than $20,000 per year. Treatment-based risk factors include radiation and treatment with a combination of alkylating agents and anthracyclines. Musculoskeletal, neurologic, cardiac, pulmonary, sensory, and endocrine organ system dysfunction also increase the risk of developing a physical performance limitation. In summary, monitoring of physical performance limitations in an aging cohort of childhood cancer survivors is important and will help determine the impact of physical performance limitations on morbidity, mortality, and caregiver burden. In addition, in developing restorative and preventive interventions for childhood cancer survivors, we must take into account the special needs of survivors with physical disability to optimize their health and enhance participation in daily living activities. abstract_id: PUBMED:37703544 Physical Activity, Fitness, and Cardiovascular Disease Risk in Adolescent Childhood Cancer Survivors Compared to Controls: The Physical Activity in Childhood Cancer Survivors Study. Purpose: Childhood cancer survivors have increased risk of cardiac late effects that can be potentially mitigated by physical activity and fitness. We aimed to (1) compare cardiovascular disease (CVD) risk between survivors and controls, and (2) examine whether the associations of moderate-to-vigorous physical activity (MVPA), cardiorespiratory fitness (CRF), and musculoskeletal fitness (MSF) with CVD risk factors differed between survivors and controls. Methods: Within the Physical Activity in Childhood Cancer Survivors (PACCS) study, we assessed CVD risk factors (android fat mass, systolic blood pressure [SBP], total cholesterol/high-density lipoprotein [HDL]-cholesterol, and glycosylated hemoglobin) in 157 childhood cancer survivors and 113 age- and sex-matched controls aged 9-18 years. We used multivariable mixed linear regression models to compare CVD risk factors between survivors and controls, and assess associations of MVPA, CRF, and MSF with CVD risk factors. Results: Compared with controls, survivors had more android fat mass (861 vs. 648 g, p = 0.001) and lower SBP (114 vs. 118 mmHg, p = 0.002). MVPA, CRF, and MSF were associated with lower levels of android fat mass and total cholesterol/HDL-cholesterol, and higher SBP in survivors. Associations of MVPA, CRF, and MSF with CVD risk factors were similar in survivors and controls (P for interaction > 0.05), except the associations of CRF and MSF with android fat mass, which were stronger in survivors than in controls (P for interaction ≤ 0.001). Conclusion: Owing to higher levels of android fat mass and its stronger association with physical fitness in childhood cancer survivors compared with controls, survivors should receive targeted interventions to increase fitness to reduce future risk of CVD.
abstract_id: PUBMED:33389553 Physical activity for pediatric cancer survivors: a systematic review of randomized controlled trials. Purpose: To collate evidence and evaluate the effects of physical activity interventions on physical activity level among pediatric cancer survivors who had completed active cancer treatment. Methods: Relevant published studies were identified in May 2020 via five databases and reference checking. Searches were limited to randomized controlled trials or controlled clinical trials, published in English, involving pediatric cancer survivors aged 18 years or below. Interventions were those intended to promote physical activity among the survivors. Included studies were assessed using the revised version of the Cochrane Risk of Bias Tool. Results: Eight randomized controlled trials (620 pediatric cancer survivors and 53 caregivers of pediatric cancer survivors) were included. All studies investigated interventions for pediatric cancer survivors to increase their physical activity level. The interventions used varied across the eight included studies: three used mHealth (medical and public health practice supported by mobile devices); two used eHealth (the use of information and communication technologies to improve health care); two used adventure-based training; and one used an educational program. Measures of physical activity level also varied: five used various objective measurements (i.e., accelerometer, pedometer, multisensory activity monitor); three used different self-reported questionnaires. Owing to the high variability of the interventions and measures, it was impossible to perform a meta-analysis. Overall, eHealth and mHealth interventions showed effectiveness and feasibility in promoting physical activity among pediatric cancer survivors. Conclusions: eHealth and mHealth interventions appear to be increasingly important strategies to promote physical activity among pediatric cancer survivors. Implications For Cancer Survivors: Future larger-scale studies using a core set of assessment tools are warranted to further promote regular physical activity in pediatric cancer survivors. Answer: Childhood cancer survivors with physical performance limitations face challenges in reaching healthy activity levels. A study within the Swiss Childhood Cancer Survivor Study found that 42% of survivors with limitations reached healthy activity levels, compared to 57% of survivors without limitations. Survivors with vision impairments, weight and endurance problems, cardiorespiratory problems, and musculoskeletal problems were among the least active. After adjusting for socio-demographic variables and type of cancer, survivors with limitations were 1.4 times more likely to be inactive (PUBMED:23681516). Another study showed that most childhood cancer survivors did not meet physical activity guidelines, with one-third incorrectly perceiving whether their self-reported physical activity levels were appropriate. Survivors had average fitness and were inaccurate at perceiving their fitness level, although they highly valued the importance of being able to do physical activity and 89% reported a desire to increase their physical activity (PUBMED:34133073). A meta-analysis of observational studies confirmed that both physical activity and fitness were significantly lower in childhood cancer survivors than in non-cancer controls, highlighting the need to incorporate physical activity and fitness into treatment protocols and post-treatment recommendations (PUBMED:30993451).
Furthermore, survivors of childhood cancer, even those diagnosed recently and treated with modern protocols, remain at high risk for physical performance limitations, which can affect their ability to engage in sports and daily activities (PUBMED:23082232). Overall, while many survivors with physical performance limitations maintain healthy activity levels, there is room for improvement. Adapted and targeted physical activity counseling for survivors with performance limitations might help them raise their level of activity and pursue a healthy lifestyle (PUBMED:23681516).
Instruction: Is genital mycosis associated with HIV risk behaviors among heterosexuals? Abstracts: abstract_id: PUBMED:8712269 Is genital mycosis associated with HIV risk behaviors among heterosexuals? ACSF Investigators. Analyse des comportements sexuels en France. Objectives: This study examined in both sexes whether genital mycosis is as strongly associated with human immunodeficiency virus (HIV) sexual risk behavior as are other sexually transmitted diseases. Methods: Heterosexuals in the French National Survey of Sexual Behavior who reported a history of either mycosis or another sexually transmitted disease in the previous 5 years were compared with heterosexuals who reported no such disease. Odds ratios were adjusted for age, education, place of residence, and number of sexual partners. Results: Among men, mycosis and other sexually transmitted diseases showed similar associations with age and number of sexual partners. Among women, mycosis was also associated with a high number of sexual partners, albeit to a lesser extent than other sexually transmitted diseases. Lifetime experience of heterosexual anal penetration was reported significantly more often by men with a history of mycosis. Conclusion: These results suggest that most male mycosis is sexually acquired and constitutes a self-reported indicator as good as other sexually transmitted diseases for high risk. Female mycosis may help define an intermediate HIV risk group. abstract_id: PUBMED:11486720 Evaluation of two self-care treatments for prevention of vaginal candidiasis in women with HIV. Vaginal candidiasis (VC) is a common concern for women living with HIV infection. The authors evaluated the effectiveness of two self-care approaches to prophylaxis of VC among HIV-infected women, weekly intravaginal application of Lactobacillus acidophilus or weekly intravaginal application of clotrimazole tablets, in a randomized, double-blind, placebo-controlled trial. VC was defined as a vaginal swab positive for Candida species in the presence of signs/symptoms of vaginitis and the absence of a diagnosis of Trichomonas vaginalis or bacterial vaginosis. Thirty-four episodes of VC occurred among 164 women followed for a median of 21 months. The relative risk of experiencing an episode of VC was 0.4 (95% CI = 0.2, 0.9) in the clotrimazole arm and 0.5 (95% CI = 0.2, 1.1) in the Lactobacillus acidophilus arm. The estimated median time to first episode of VC was longer for clotrimazole (p = .03, log rank test) and Lactobacillus acidophilus (p = .09, log rank test) compared with placebo. Vaginal yeast infections can be prevented with local therapy. Education about self-care for prophylaxis of VC should be offered to HIV-infected women. abstract_id: PUBMED:9742480 Factors associated with vaginal yeast infections in HIV-positive women. To better understand factors associated with symptomatic and asymptomatic vulvovaginal candidiasis, including the role of immune compromise and patient self-report, a cross-sectional analysis of factors associated with the isolation of yeast from vaginal swabs and clinical diagnosis of Candida vaginitis (CV) among 184 HIV-infected women was conducted. Sixty-four (35%) of the women had vaginal swabs positive for yeast. Nineteen (10%) women met the case definition for CV. In a logistic regression model, only CD4 ≤100 cells/mm3 was predictive of CV (odds ratio = 4.5; 95% confidence interval = 1.0, 20; p = .05). The predictive value of patient self-report of CV was only 12%.
This study demonstrates that all HIV-infected women should receive a regular and thorough gynecologic evaluation, regardless of self-reported symptoms. HIV-infected women will benefit from education about prevention and treatment of CV, and women whose CD4 counts are low may wish to consider prophylaxis for CV. abstract_id: PUBMED:28793015 Virulence factors and genetic variability of vaginal Candida albicans isolates from HIV-infected women in the post-highly active antiretroviral era. Vulvovaginal candidiasis (VVC) in HIV-infected (HIV+) women is a serious public health problem. However, little is known about the virulence mechanisms of vaginal Candida albicans from HIV+ women in the post-highly active antiretroviral therapy (HAART) era. Here, we report a comparative analysis of the expression of key virulence factors and genetic variability of 26 vaginal C. albicans strains isolated from HIV+ women undergoing HAART and 18 from HIV-uninfected (HIV-) women. In general, we observed that C. albicans from HIV+ women receiving HAART showed lower expression of virulence factors compared with C. albicans from HIV- women, except for the proteinase activity, which was highly expressed. The results in HIV- women further suggest that virulence factors appear to be expressed in response to yeast stress in the presence of an adequate immune response. Furthermore, the RAPD results showed a high heterogeneity among isolates from both groups of women. These findings in HIV+ women using HAART will help to improve the monitoring of vaginal yeast infections and the quality of life of patients. abstract_id: PUBMED:35910510 Prevalence and Risk Factors of Vulvovaginal Candidosis during Pregnancy: A Review. Vulvovaginal candidosis (VVC) is a symptomatic vaginal yeast infection, especially caused by Candida spp. Although VVC is common among reproductive-age women, prevalence studies have noted a rise in vaginal Candida colonization to 30% during pregnancy (by culture), especially in the last trimester. Recent studies have considered it a severe problem due to the emerging evidence showing the association of VVC with a higher chance of pregnancy-related complications (e.g., preterm labor, premature rupture of membranes, congenital cutaneous candidosis, and chorioamnionitis). In this review, we have reassessed and summarized the prevalence rate of VVC in expectant mothers and analyzed the association of several factors with the increased risk of VVC during pregnancy in different regions of the world. Altogether, these data collected from various studies showed the highest prevalence of VVC during pregnancy, mostly in Asian and African countries (90.38%, 62.2%, and 61.5% in Kenya, Nigeria, and Yemen, respectively). The prevalence rate of VVC during pregnancy was also found to differ with age, gestation period, parity, educational status, and socioeconomic level. Some pregnancy-related factors (e.g., weakened immunity; elevated level of sex hormones, glycogen deposition; low vaginal pH; decreased cell-mediated immunity) and several clinical and behavioral factors can be suggested as potential risk factors for candidosis during pregnancy. abstract_id: PUBMED:18450853 Prenatal exposure to maternal infections and epilepsy in childhood: a population-based cohort study. Objective: We estimated the association between prenatal exposure to maternal infections and the subsequent risk for epilepsy in childhood.
Methods: We included 90,619 singletons who were born between September 1997 and June 2003 in the Danish National Birth Cohort and followed them up to December 2005. Information on maternal infections during pregnancy (cystitis, pyelonephritis, diarrhea, coughs lasting >1 week, vaginal yeast infection, genital herpes, venereal warts, and herpes labialis) was prospectively reported by mothers in 2 computer-assisted telephone interviews in early and midgestation; information on maternal cystitis and pyelonephritis during the late period of pregnancy was also collected in a third interview after birth. Children who received a diagnosis of epilepsy as inpatients or outpatients were retrieved from the Danish National Hospital Register. We identified 646 children with a diagnosis of epilepsy during up to 8 years of follow-up time. Cox proportional hazards regression models were used to estimate incidence rate ratios and 95% confidence intervals. Results: Children who were exposed to some maternal infections (cystitis, pyelonephritis, diarrhea, coughs, and/or vaginal yeast infection) in prenatal life had an increased risk for epilepsy. Coughs lasting >1 week were associated with an increased risk for epilepsy only in the first year of life, as was vaginal yeast infection only in children who were born preterm. These associations remained unchanged for children without cerebral palsy, congenital malformation, or a low Apgar score at 5 minutes. Conclusions: Prenatal exposure to some maternal infections was associated with an increased risk for epilepsy in childhood. abstract_id: PUBMED:25162498 Vaginal yeast infections while deployed in Southwest/Central Asia, active component females, U.S. Armed Forces, 2008-2013. In field settings, female service members may not have adequate access to bathrooms, showers, laundry, or sanitary products necessary to maintain adequate feminine hygiene; therefore, service women may be at risk for vaginal yeast infections while deployed. During the 6-year surveillance period, nearly 3,000 U.S. military service women were diagnosed with at least one clinically significant yeast infection while supporting combat operations in Southwest/Central Asia. The crude overall incidence rate was 35.1 per 1,000 person-years (p-yrs). Overall incidence rates were higher among black, non-Hispanic service women, and among those in the Army and Air Force, in enlisted grades, and in communications/intelligence and motor transport occupations. The yearly rate of yeast infections was relatively stable from 2008 through 2010, then decreased in 2011 through 2013. Prior to deploying to austere operational settings, female service members should be provided practical and useful information, realistic training, and material support to decrease the risk of acquiring, and increase the effectiveness of treating, clinically and operationally significant yeast infections. abstract_id: PUBMED:27540323 Yeast Infection and Diabetes Mellitus among Pregnant Mother in Malaysia. Background: Vaginal yeast infection refers to irritation of the vagina due to the presence of opportunistic yeast of the genus Candida (mostly Candida albicans). About 75% of women will have at least one episode of vaginal yeast infection during their lifetime. Several studies have shown that pregnancy and uncontrolled diabetes increase the infection risk. Reproductive hormone fluctuations during pregnancy and elevated glucose levels characteristic of diabetes provide the carbon needed for Candida overgrowth and infection.
The goal of this study was to determine the prevalence of vaginal yeast infection among pregnant women with and without diabetes. Methods: This was a case-control study using case reports from Kepala Batas Health Clinic, Penang State, Malaysia from 2006 to 2012. In total, 740 pregnant women were chosen as the sample, of which 370 were diabetic and 370 were non-diabetic cases. Results: No relationship between diabetes and the occurrence of vaginal yeast infection in pregnant women was detected, and there was no significant association between infection and age group, race or education level. Conclusion: Within the scope of this study, vaginal yeast infection can occur randomly in pregnant women. abstract_id: PUBMED:10616356 Epidemiology of vulvar vestibulitis syndrome: an exploratory case-control study. Background: Vulvar vestibulitis syndrome (VVS) is a chronic, persistent syndrome characterised by vestibular pain, tenderness, and erythema. The aetiology of VVS is unknown and few of the hypothesised risk factors have been tested in controlled studies. Methods: Using a matched case-control study design, medical, sexual, health behaviour, and diet history of 28 women with VVS were compared with 50 friend controls without VVS to identify possible causal factors. Results: Cases were more likely than controls to report every vaginal and urinary symptom measured at the time of interview, particularly vaginal soreness or pain (60.7%) and pain during intercourse (64.3%). There were no significant differences between cases and controls with respect to sexual behaviour. Cases were more likely than controls to report a self reported history of physician diagnosed bacterial vaginosis (OR = 22.2, 95% CI = 2.8, 177.2, p value = 0.0001), vaginal yeast infections (OR = 4.9, 95% CI = 1.4, 18.0, p value = 0.01), and human papillomavirus (OR = 7.1, 95% CI = 0.6, 81.2, p value = 0.08). There were no differences between cases and controls with respect to dietary intake of oxalate. Cases were more likely than controls to report poor health status (OR = 5.7, 95% CI = 1.1, 28.7, p value = 0.02) and a history of depression for 2 weeks or more during the past year (OR = 4.4, 95% CI = 1.6, 12.3, p value = 0.002). Conclusion: Self reported history of bacterial vaginosis, yeast infections, and human papillomavirus were strongly associated with VVS. An infectious origin for VVS should be pursued in larger controlled studies, using questionnaire and laboratory measures. abstract_id: PUBMED:33213078 Efficacy and Safety of SGLT-2 Inhibitors for Treatment of Diabetes Mellitus among Kidney Transplant Patients: A Systematic Review and Meta-Analysis. Background: The objective of this systematic review was to evaluate the efficacy and safety profiles of sodium-glucose co-transporter 2 (SGLT-2) inhibitors for treatment of diabetes mellitus (DM) among kidney transplant patients. Methods: We conducted electronic searches in Medline, Embase, Scopus, and Cochrane databases from inception through April 2020 to identify studies that investigated the efficacy and safety of SGLT-2 inhibitors in kidney transplant patients with DM. Study results were pooled and analyzed utilizing a random-effects model. Results: Eight studies with 132 patients (baseline estimated glomerular filtration rate (eGFR) of 64.5 ± 19.9 mL/min/1.73m2) treated with SGLT-2 inhibitors were included in our meta-analysis.
SGLT-2 inhibitors demonstrated significantly lower hemoglobin A1c (HbA1c) (WMD = -0.56% [95%CI: -0.97, -0.16]; p = 0.007) and body weight (WMD = -2.16 kg [95%CI: -3.08, -1.24]; p < 0.001) at the end of the study compared to baseline. There were no significant changes in eGFR, serum creatinine, urine protein creatinine ratio, and blood pressure. By subgroup analysis, empagliflozin demonstrated a significant reduction in body mass index (BMI) and body weight. Canagliflozin revealed a significant decrease in HbA1C and systolic blood pressure. In terms of safety profiles, fourteen patients had urinary tract infection. Only one had genital mycosis, one had acute kidney injury, and one had cellulitis. There were no reported cases of euglycemic ketoacidosis or acute rejection during the treatment. Conclusion: Among kidney transplant patients with excellent kidney function, SGLT-2 inhibitors for treatment of DM are effective in lowering HbA1C, reducing body weight, and preserving kidney function without reports of serious adverse events, including euglycemic ketoacidosis and acute rejection. Answer: Yes, genital mycosis is associated with HIV risk behaviors among heterosexuals. A study examining the association between genital mycosis and HIV sexual risk behavior in both sexes found that among men, mycosis and other sexually transmitted diseases showed similar associations with age and number of sexual partners. Among women, mycosis was also associated with a high number of sexual partners, albeit to a lesser extent than other sexually transmitted diseases. Additionally, men with a history of mycosis were significantly more likely to report a lifetime experience of heterosexual anal penetration. These results suggest that most male mycosis is sexually acquired and constitutes a self-reported indicator as good as other sexually transmitted diseases for high risk. Female mycosis may help define an intermediate HIV risk group (PUBMED:8712269).
Instruction: Do patients with Pierre Robin sequence have worse outcomes after cleft palate surgery? Abstracts: abstract_id: PUBMED:23676521 Do patients with Pierre Robin sequence have worse outcomes after cleft palate surgery? Background: Pierre Robin sequence (PRS) is a congenital condition characterized by micrognathia, glossoptosis, airway obstruction, and often clefting of the soft and hard palate. The purpose of this study was to compare oronasal fistula rates, long-term speech outcomes, and rates of secondary surgery for velopharyngeal insufficiency (VPI) after primary cleft palate repair using the modified Furlow technique between nonsyndromic patients with and without PRS. This study also sought to determine if type of airway management in infancy correlated with speech outcomes in patients with PRS. Methods: A retrospective review was performed of all nonsyndromic patients with clefts of the palate only who underwent repair between 1981 and 2006 at The Children's Hospital of Philadelphia using the modified Furlow technique. Patient outcomes were evaluated by the rate of postoperative oronasal fistula, speech scores at a minimum of age 5 years using the Pittsburgh Weighted Values for Speech Symptoms Associated with VPI, and the need for secondary pharyngeal surgery for VPI. Results: Fifty-five patients with PRS and 129 without PRS were included in this study. There was no significant difference in oronasal fistula rate between groups. Speech outcomes were worse in patients with PRS with 52.7% demonstrating a competent velopharyngeal mechanism, and 30.9% a borderline mechanism, compared to 72.1% and 19.4% in the non-PRS group, respectively (P = 0.035). In addition, only 76.3% of patients with PRS had no or mild hypernasality compared to 91.5% of patients without PRS (P = 0.01). The rates of nasal emission, articulation errors associated with VPI, and secondary surgery for VPI did not differ between groups. Of the patients with PRS, 36 were managed in infancy with positioning alone and 15 underwent surgical intervention for airway obstruction, and there were no significant differences in speech scores or rates of secondary surgery for VPI between these subgroups. Conclusions: Nonsyndromic patients with PRS had worse speech outcomes after modified Furlow cleft palate repair, but no significant differences were seen in the rates of secondary surgery for VPI or postoperative oronasal fistula. In addition, initial airway management in patients with PRS did not correlate with speech outcomes. abstract_id: PUBMED:33136861 A Comparison of Outcomes After Palatoplasty in Patients With Non-Syndromic Pierre Robin Sequence Versus Patients With Non-Syndromic Isolated Cleft Palate. The purpose of this retrospective study was to compare outcomes of cleft palate repair in patients with non-syndromic Pierre Robin sequence (NS-PRS) versus those with non-syndromic isolated cleft palate (NS-ICP). Pierre Robin sequence (PRS) was defined as a diagnosis of the triad of microretrognathia, glossoptosis, and cleft palate, and the severity of PRS was assessed based on the presence of respiratory and feeding problems. All patients underwent palatoplasty between January 2000 and December 2011. The authors examined age at palatoplasty, type of cleft palate, fistula rate, velopharyngeal (VP) function, nasal emission, hypernasality, and need for secondary speech surgery, in addition to PRS severity in the NS-PRS patients. A total of 15 NS-PRS patients and 40 NS-ICP patients were reviewed. 
The incidence of VP dysfunction, nasal emission, hypernasality, and secondary speech surgery was not significantly different between the NS-PRS patients and NS-ICP patients. Age at palatoplasty was significantly different between the 2 groups (P = 0.012) but type of CP was not (P = 1.00). Only 2 NS-PRS patients were classified as category III (severe), and all of the NS-PRS patients who had VP insufficiency were classified as PRS severity category I (not severe). The findings of this study indicate that NS-PRS patients may not have worse outcomes than NS-ICP patients. abstract_id: PUBMED:26910675 Outcomes of Cleft Palate Repair in Patients with Pierre Robin Sequence: A Matched Case-Control Study. Background: Pierre Robin sequence results from a cascade of events that occur during embryologic development and frequently presents with cleft palate. Some studies have shown speech outcomes to be worse in patients with Pierre Robin sequence after cleft palate repair. Methods: A cohort of Pierre Robin sequence patients who all required an airway intervention and nasogastric feeding in the neonatal period was identified, and speech outcomes were assessed at 5 years of age. A cleft- and sex-matched non-Pierre Robin sequence, cleft palate-only comparison group was also identified from the same institution and study period. Results: A total of 24 patients with Pierre Robin sequence who required airway and nutritional support in the neonatal period were matched for age, sex, and cleft type to a group of 24 non-Pierre Robin sequence cleft patients. There was no significant difference in the incidence of oronasal fistula between the groups. Secondary surgery for velopharyngeal incompetence was significantly more frequent (p = 0.017) in the Pierre Robin sequence group, who also had significantly greater nasality (p = 0.031) and cleft speech characteristic (p = 0.023) scores. Conclusions: The authors hypothesize that other factors may exist in Pierre Robin sequence that may lead to poor speech outcomes. The authors would suggest counseling parents of children with Pierre Robin sequence who have required a neonatal airway intervention that speech development may be poorer than in other children with cleft palate, and that these children will have a significantly higher incidence of secondary speech surgery. Clinical Question/level Of Evidence: Risk, II. abstract_id: PUBMED:36576825 Speech Outcomes following Mandibular Distraction Osteogenesis for Robin Sequence: Midchildhood Results. Background: The aim of this study was to compare midchildhood speech outcomes in patients with nonsyndromic Robin sequence with cleft palate (RSCP) treated with mandibular distraction osteogenesis (MDO) to patients with nonsyndromic Veau class I and Veau class II cleft palate (CP). Methods: The authors performed a retrospective review of patients with nonsyndromic Robin sequence from 2000 to 2017, comparing those who underwent MDO to patients with nonsyndromic CP. Demographics, operative details, length of hospital stay, complications, and Pittsburgh Weighted Speech Scale scores were collected. Results: Thirty-three patients met inclusion criteria in the MDO group with 127 patients as controls.
Despite similar median age (RSCP, 4.5 years; CP only, 4.6 years) and Veau cleft type at early evaluation, there was a significant increase in composite Pittsburgh Weighted Speech Scale score within the MDO cohort (P ≤ 0.002); specifically, with worse visible nasal emission (P ≤ 0.007), hypernasality (P ≤ 0.001), and compensatory articulation (P ≤ 0.015). However, these differences were not present at age-matched midchildhood evaluation (median, RSCP, 6.5; CP only, 7.1; P ≥ 0.092). Median age-matched follow-up was 6.4 years in the MDO group and 7.1 years in the control group (P ≥ 0.136). There was also no difference in the rate of secondary speech surgery at midchildhood evaluation (P ≥ 0.688). Conclusions: The authors' retrospective comparison of speech outcomes in RSCP versus CP only demonstrates no difference in midchildhood speech, conflicting with recent reports. Although patients with Robin sequence treated with MDO had worse visible nasal emission, hypernasality, and compensatory articulation in early childhood, this appears to have resolved in the interim without additional intervention. Longitudinal follow-up is needed to fully understand the speech ramifications of RSCP. Clinical Question/level Of Evidence: Risk, II. abstract_id: PUBMED:34670591 Quality of life and phonatory and morphological outcomes in cognitively unimpaired adolescents with Pierre Robin sequence: a cross-sectional study of 72 patients. Background: Pierre Robin sequence (PRS) is a heterogeneous condition involving retro(micro)gnathia, glossoptosis and upper airway obstruction, very often with posterior cleft palate. Patients with PRS, either isolated or associated with Stickler syndrome, have a good intellectual prognosis. Nevertheless, the quality of life in adolescence and the phonatory and morphological outcomes are rarely analysed. We assessed the phonatory and morphological outcomes of 72 cognitively unimpaired adolescents with PRS, studied their oral (COHIP-SF19), vocal (VHI-9i) and generic quality of life (QoL; KIDSCREEN-52), and searched for determinants of these outcomes. Results: Two-thirds of our adolescents retained low or moderate phonation difficulties, but risk factors were not identified. For 14%, morphological results were considered disharmonious, with no link to neonatal retrognathia severity. Only one- vs. two-stage surgery seemed to affect final aesthetic results. The oral QoL of these adolescents was comparable to that of control patients and was significantly better than that of children with other craniofacial malformations (COHIP-SF19 = 17.5, 15.4 and 25.7, respectively). The oral QoL of the adolescents with non-isolated PRS was significantly worse (COHIP-SF19 = 24.2) than that of control patients and close to that of children with other craniofacial malformations. The vocal QoL of the adolescents (mean [SD] VHI-9i = 7.5 [5.4]) was better than that of patients with other voice pathologies and better when phonation was good. The generic QoL of the adolescents was satisfactory but slightly lower than that of controls, especially in dimensions concerning physical well-being, relationships and autonomy. QoL results were lower for adolescents with non-isolated than isolated PRS. Only non-isolated PRS and low oral QoL affected generic QoL. Conclusion: Morphological or phonatory impairments are not rare in adolescents with PRS but do not seem to be directly responsible for altered QoL.
These adolescents, especially those with non-isolated PRS, show fragility in self-confidence and social relations. We must focus on long-term functional and psychological results for PRS patients and improve therapy protocols and follow-up, notably those affecting the oral aspects of the disease. abstract_id: PUBMED:37533341 Patient Factors Influencing Speech Outcomes in Velopharyngeal Function Following Initial Cleft Palate Repair: A Systematic Review and Meta-Analysis. Objective: Identification of patient factors influencing velopharyngeal function for speech following initial cleft palate repair. Design: A literature search of relevant databases from inception until 2018 was performed using medical subject headings and keywords related to cleft palate, palatoplasty and speech assessment. Following three-stage screening, data extraction was performed. Setting: Systematic review and meta-analysis of relevant literature. Patients/participants: Three hundred and eighty-three studies met the inclusion criteria, comprising data on 47 658 participants. Interventions: Individuals undergoing initial palatoplasty. Main Outcome Measures: Studies including participants undergoing initial cleft palate repair where the frequency of secondary speech surgery and/or velopharyngeal function for speech was recorded. Results: Patient factors reported included cleft phenotype (95% of studies), biological sex (64%), syndrome diagnosis (44%), hearing loss (28%), developmental delay (16%), Robin Sequence (16%) and 22q11.2 microdeletion syndrome (11%). Meta-analysis provided strong evidence that rates of secondary surgery and velopharyngeal dysfunction varied according to cleft phenotype (Veau I best outcomes, Veau IV worst outcomes), Robin Sequence and syndrome diagnosis. There was no evidence that biological sex was associated with worse outcomes. Many studies were poor quality with minimal follow-up. Conclusions: Meta-analysis demonstrated the association of certain patient factors with speech outcome; however, the quality of the evidence was low. Uniform, prospective, multi-centre documentation of preoperative characteristics and speech outcomes is required to characterise risk factors for post-palatoplasty velopharyngeal insufficiency for speech. Systematic Review Registration: Registered with PROSPERO CRD42017051624. abstract_id: PUBMED:31734563 A retrospective study of patients with Robin sequence: Patient characteristics and their impact on clinical outcomes. Introduction: Robin sequence (RS) is a congenital set of abnormalities of the head and neck, consisting of a hypoplastic mandible (micrognathia), a tongue that is displaced posteriorly (glossoptosis), and obstruction of the airway. A clear set of diagnostic criteria for this complex condition has recently been established, but there is still no consensus in the literature with respect to managing the associated airway, feeding, and hearing difficulties. The objectives of the study are: 1) to describe the management of airway and feeding issues in children with RS; and 2) to evaluate the impact of airway and feeding strategies on important clinical outcomes. Methods: A retrospective chart review was conducted of all pediatric patients diagnosed with RS at London Health Sciences Centre from January 1995 until September 2017. The frequencies of all airway and feeding interventions were collected.
Data were collected on several clinical outcomes including initial admission length, enteral feeding duration, tympanostomy tube insertion frequency, and hearing thresholds. Statistical analyses to evaluate the impact of airway and feeding strategies on clinical outcomes were carried out using independent samples t-tests and Chi-square tests, where appropriate. Results: Twenty-four patients were identified. Five patients (20.8%) required airway surgery and 18 patients (75.0%) required enteral feeding. Airway surgery was significantly associated with a longer ICU admission (15.8 vs. 4.3 days, p < 0.05), a longer overall hospital admission (73.0 vs. 25.2 days, p < 0.05), a delay in introducing oral feeds (222.8 vs. 11.5 days, p < 0.05), and a higher frequency of tympanostomy tube insertions (80% vs. 23.5% requiring ≥2 insertions, p < 0.05). Enteral feeding was significantly associated with a longer ICU admission (8.8 vs. 0 days, p < 0.05) and a longer overall hospital admission (43.9 vs. 5.6 days, p < 0.05). Conclusions: This study demonstrates the substantial impact that airway and feeding difficulties have on RS patient morbidity, particularly with respect to length of hospital and ICU admission. This information has prognostic value and may be helpful in generating a management algorithm for this complex patient population. abstract_id: PUBMED:37431904 Feeding Outcomes After Mandibular Distraction for Airway Obstruction in Infants. Robin sequence is a congenital issue resulting in airway obstruction, difficulty feeding, and failure to thrive. Mandibular distraction osteogenesis is used to improve airway obstruction in these patients, but little data exist characterizing feeding outcomes following surgery. This study aims to evaluate feeding outcomes and weight gain following mandibular distraction for airway correction in infants. A single-center retrospective chart review was conducted, and patients under 12 months old who underwent mandibular distraction between December 2015 and July 2021 were included in the study. The presence of cleft palate, distance of distraction, and polysomnography results were recorded. The primary outcomes were the length of distraction, need for nasogastric tube or G-tube at discharge, time elapsed to achieve full oral feeds, and weight gain (kg). Ten patients met the criteria. Of those 10 patients, 4 were syndromic, 7 had a cleft palate, and 4 had a congenital cardiac diagnosis. The average length of stay postsurgery was 28 days. Eight patients achieved full oral feeds in an average of 65.6 days. Five patients required nasogastric tube or G-tube at discharge, with 3 of these patients later transitioning to full oral feeds. All patients gained weight 3 months postsurgery with an average of 0.521 kg/mo. Patients who achieved full oral feeds gained an average of 0.549 kg/mo. Patients with supplementation gained an average of 0.454 kg/mo. All patients demonstrated improvement in airway obstruction with an average postoperative apnea hypopnea index of 1.64. Further investigation is necessary to identify challenges seen in feeding after mandibular distraction osteogenesis and improve care. abstract_id: PUBMED:18788872 Clinical outcomes of the Furlow Z-plasty for primary cleft palate repair. Objective: To review the clinical outcomes following the Furlow Z-plasty for primary cleft palate repair.
The primary objective was to determine if the presence of an associated sequence or syndrome (e.g., Pierre Robin sequence), age at palate repair, cleft type, or surgeon experience influenced speech outcomes after a Furlow Z-plasty. Design: The outcomes of 140 patients who underwent palate repair were analyzed retrospectively. Speech evaluations were performed to score the severity of hypernasality, nasal escape, articulation errors, and velopharyngeal insufficiency. Results: The mean age at latest evaluation was 4 years 9 months (range, 2 years to 12 years 4 months). Of the 140 patients, 83% had no evidence of hypernasality, 91% had no nasal escape, and 69% had no articulation errors. Overall, 84% of patients had no evidence of velopharyngeal insufficiency. Secondary posterior pharyngeal flap to correct velopharyngeal insufficiency was required in only 2.1% of patients. The formation of an oronasal fistula occurred in only 3.6% of patients. Nonsyndromic patients with Pierre Robin sequence and syndromic patients did just as well as purely nonsyndromic patients in terms of velopharyngeal insufficiency, hypernasality, and nasal escape. Syndromic patients were more likely to make mild-to-moderate articulation errors. In addition, age at palate repair, cleft type, and surgeon experience had no statistically significant effect on speech results. Conclusions: The Furlow Z-plasty yielded excellent speech results in our patient population with minimal and acceptable rates of fistula formation, velopharyngeal insufficiency, and the need for additional corrective surgery. abstract_id: PUBMED:33587554 Robin Sequence: 5-Year Speech Outcomes-A Case-Control Study. Background: Pierre Robin sequence (Robin sequence) is defined as the triad of micrognathia, glossoptosis, and airway obstruction. It is frequently associated with palatal clefting. In recent years, increased interest in speech outcomes of cleft patients diagnosed with Robin sequence has been shown. Methods: Speech outcomes of cleft patients with Robin sequence were assessed at age 5 in comparison with a cleft palate-only cohort. Speech parameters were evaluated according to the Cleft Audit Protocol for Speech-Augmented and analyzed using the National Audit Standards for Speech (United Kingdom). All patients were treated in the same institution during the same period (2005 to 2012). Subjects who needed nasopharyngeal airway support and those whose airway was managed by positioning only were eligible. Results: Fifty-one cleft patients diagnosed with Robin sequence were included in this study. Outcomes were compared to those of 128 nonsyndromic cleft palate-only patients. Patients with Robin sequence were shown to present with a significantly higher rate of cleft speech characteristics in comparison to the reference cohort (p = 0.001). Furthermore, it was shown that Robin sequence is associated with a significantly higher rate of secondary speech surgery for velopharyngeal dysfunction before the age of 5 (p = 0.016). Robin sequence patients with a nasopharyngeal airway presented with a higher rate of cleft speech characteristics compared to Robin sequence patients managed with positioning only. Conclusion: Cleft patients with Robin sequence are more likely to need further surgery to correct velopharyngeal dysfunction before the age of 5 and are more prone to present with cleft speech characteristics at the age of 5. Clinical Question/level Of Evidence: Risk, II.
Answer: Patients with Pierre Robin sequence (PRS) appear to have worse outcomes after cleft palate surgery compared to those without PRS in some aspects, particularly concerning speech outcomes. Studies have shown that nonsyndromic patients with PRS had worse speech outcomes after modified Furlow cleft palate repair, with a lower percentage demonstrating a competent velopharyngeal mechanism and fewer presenting with no or only mild hypernasality compared to patients without PRS (PUBMED:23676521). Another study found that patients with PRS who required airway and nutritional support in the neonatal period had a significantly higher incidence of secondary speech surgery and greater nasality and cleft speech characteristic scores (PUBMED:26910675). However, the outcomes are not uniformly worse across all studies. One study indicated that non-syndromic PRS patients may not have worse outcomes than non-syndromic isolated cleft palate patients, with no significant difference in the incidence of velopharyngeal dysfunction, nasal emission, hypernasality, or secondary speech surgery between the two groups (PUBMED:33136861). Another study found that while patients with Robin sequence treated with mandibular distraction osteogenesis had worse visible nasal emission, hypernasality, and compensatory articulation in early childhood, these differences were not present at age-matched midchildhood evaluation, suggesting that the speech issues may resolve without additional intervention (PUBMED:36576825). A systematic review and meta-analysis also demonstrated that rates of secondary surgery and velopharyngeal dysfunction varied according to cleft phenotype, Robin Sequence, and syndrome diagnosis, indicating that PRS is a factor associated with speech outcomes (PUBMED:37533341). In summary, while some studies suggest that patients with PRS have worse speech outcomes following cleft palate surgery, the evidence is not entirely consistent, and some patients with PRS may have comparable outcomes to those without the condition. Long-term functional and psychological results for PRS patients are important, and therapy protocols and follow-up should be improved, especially those affecting the oral aspects of the disease (PUBMED:34670591).
Instruction: Pediatric myringoplasty: does previous adenoidectomy improve the likelihood of perforation closure? Abstracts: abstract_id: PUBMED:19692939 Pediatric myringoplasty: does previous adenoidectomy improve the likelihood of perforation closure? Objective: To determine whether perforation closure after pediatric myringoplasty is more likely to be achieved in patients who have undergone a previous adenoidectomy and to evaluate the impact of other demographic and clinical parameters on operative success. Study Design: Retrospective case review. Setting: Teaching hospital. Patients: Children younger than 16 years. Intervention: Primary or revision myringoplasty undertaken between 1999 and 2007. Main Outcome Measure: The integrity of the tympanic membrane was assessed by otoscopy or tympanometry during postoperative follow-up. Success was defined as an intact tympanic membrane with no evidence of middle ear disease. Results: Two hundred thirteen consecutive myringoplasties were performed. Patients' mean age was 12.1 years (range, 4-15 yr) and mean length of follow-up was 16 months (range, 2-99 mo). The overall perforation closure rate was 71.8%. The adenoidectomy group comprised 79 patients (37.1%). Perforation closure was achieved in 79.1% of these patients compared with 67.4% in the nonadenoidectomy group (p = 0.157; an illustrative odds-ratio computation for these closure rates follows this record). Older age (p = 0.032), smaller perforations (p = 0.042), nonanterior perforations (p = 0.022), and operations performed by a consultant (p = 0.047) all resulted in significantly higher rates of perforation closure. Conclusion: This study suggests that adenoidectomy before pediatric myringoplasty may not improve the likelihood of a successful outcome. abstract_id: PUBMED:16650485 Pediatric adenoidectomy: what is the effect of obstructive symptoms on the likelihood of future surgery? Objective: Determine if pediatric patients undergoing adenoidectomy for obstruction have an increased likelihood of undergoing eventual tonsillectomy and/or second adenoidectomy over extended follow-up. Methods: Retrospective cohort study within the ambulatory surgery component of a tertiary children's hospital. Exclusion criteria consisted of the presence of significant co-morbid conditions (obesity, craniofacial syndrome, cerebral palsy, etc.) and less than 1 year of post-adenoidectomy follow-up. The study population included 100 children (mean age=3.9 years, range 0.9-15 years) randomly selected from a financial billing database undergoing adenoidectomy either with pre-operative obstructive symptoms (OB n=52) or without pre-operative obstructive symptoms (NOB n=48) with a minimum of 1 year of retrospective follow-up. Electronic records were searched for subsequent surgery (tonsillectomy and/or revision adenoidectomy). Results: Overall, 29 of 100 children (29%) underwent subsequent surgery (tonsillectomy and/or revision adenoidectomy) over an average of 3.46 years (range 1.0-6.59 years) retrospective follow-up. Children in the OB group were three times more likely (age- and sex-adjusted odds ratio = 3.03, 95% confidence interval = 1.18-7.78, p = 0.021) than children in the NOB group to require tonsillectomy or second adenoidectomy. Age less than 2 years at time of initial adenoidectomy also suggested an increased likelihood of subsequent surgery. Conclusion: Children undergoing adenoidectomy with obstructive symptoms are more likely to require eventual tonsillectomy and/or second adenoidectomy than those undergoing adenoidectomy without obstruction.
However, the low absolute likelihood of future surgery precludes a generalized recommendation for tonsillectomy in these children. abstract_id: PUBMED:31732314 Impact of perforation size and predictive factors for successful pediatric gelfoam myringoplasty. Objectives: Rates of success with pediatric myringoplasty range from 35 to 95%. The objective of this study is to evaluate the impact of perforation size on successful closure of the tympanic membrane (TM) after gelfoam myringoplasty. We also aim to identify variables that affect perforation closure rates and define predictive factors for successful TM closure. Methods: A retrospective chart review of all patients that underwent gelfoam myringoplasty by a single surgeon from August 2008 through January 2015 was performed. Results: One hundred fifty-nine patients met inclusion criteria and underwent a total of 219 procedures. Overall, gelfoam myringoplasty had an 83.1% rate of successful closure. Average perforation size was 15.31%. Classification tree analysis separated our cohort into three groups based on perforation size: Group 1 (<16.25%) had a 91% closure rate, group 2 (16.25% to <31.25%) had a 66.0% closure rate and group 3 (≥31.25%) had a 30.0% closure rate. Smaller perforations (P ≤ 0.001) were associated with increased success rates. Other factors associated with successful closure of the TM included younger age at the time of myringoplasty (P ≤ 0.001), fewer number of prior tympanostomy tubes (P = 0.016), and lesser duration of tube retention (P = 0.003). Conclusion: Gelfoam myringoplasty provides good overall TM closure rates and may be considered as a potential first-line option for repair of perforations, including those involving up to 40% of the TM. Younger patients with smaller perforations, fewer sets of tubes, and shorter length of tube retention are more likely to have successful closure of the tympanic membrane. abstract_id: PUBMED:36452765 Pediatric Adenoidectomy: A Comparative Study Between Cold Curettage and Coblation Technique. The objective of the study was to assess endoscopic coblation adenoidectomy and conventional cold curettage adenoidectomy in terms of safety and efficacy in pediatric patients. The study included 40 pediatric patients, aged 4-17 years. Twenty patients underwent cold curettage adenoidectomy and 20 underwent Coblation adenoidectomy. The 2 procedures were compared on various parameters like duration of surgery, intra-operative blood loss, and post-operative pain. To further the comparison, follow-up nasal endoscopy was done after 1 week and after 1 month to assess for injury to peripheral tissues and completeness of removal of adenoids. There was a statistically significant difference, favouring Coblation adenoidectomy, in terms of lesser intra-operative blood loss (mean blood loss of 19 mL vs 28.5 mL) and lesser post-operative pain measured on the Visual Analogue Scale (median VAS score of 2 vs 2.67). Shorter duration of surgery (mean operative time of 10.3 min vs 15.5 min) was the only parameter in favour of the conventional cold curettage method. Injury to peripheral tissue and residual adenoid were seen in patients who underwent curettage adenoidectomy. The overall advantages of Coblation adenoidectomy when compared with cold curettage adenoidectomy are less intra-operative bleeding, less post-operative pain, and more complete and precise adenoid removal with minimal injury to adjacent tissues. For these reasons, Coblation adenoidectomy should be the standard technique adopted for adenoidectomy.
abstract_id: PUBMED:32431976 Traumatic Tympanic Membrane Perforation in Children in the Emergency Department: Comparison of Spontaneous Closure and Paper Patch. Objective There are different clinical approaches for traumatic tympanic membrane perforation in the pediatric age group. The purpose of this study was to compare spontaneous recovery and the paper patch procedure and to analyze the factors that play a role in recovery. Materials and methods Pediatric patients who were admitted to the Emergency Department between January 2014 and June 2019 due to traumatic tympanic membrane perforation were investigated. Among these patients, medical records of cases followed by the otorhinolaryngology clinic for spontaneous closure or paper patch procedure were retrospectively examined. Medical records of a total of 71 patients aged 2-16 years (33 females, 38 males) were analyzed. Results The overall closure rate was 89.75% (64/71). Although there was no difference between the groups of small- and medium-size perforations in terms of closure rates, the closure rate in large perforations was 90.9% in the paper patch group and 63.6% in the spontaneous closure group, and the difference between these two groups was statistically significant (p < 0.05). Additionally, the closure rate in the paper patch group (91.6%) was found to be significantly higher than that in the spontaneous closure group (58.3%) in the presence of a perforation contacting the malleus (p < 0.05). Conclusion Both procedures can be employed in pediatric cases of traumatic tympanic membrane perforation with high success rates. However, in case of a large perforation or a perforation contacting the malleus that is detected in the Emergency Department, it is necessary to refer the patients to the otorhinolaryngology clinic immediately for the paper patch procedure instead of following the patients up for spontaneous closure. abstract_id: PUBMED:32499985 Incidence and Characteristics of Revision Adenoidectomy Among Pediatric Patients at King Abdulaziz University Hospital in Saudi Arabia. Background Adenoidectomy is the most commonly performed pediatric operation worldwide and one of the most frequent otorhinolaryngological procedures. It is a safe procedure with a low risk of complications. However, after a successful adenoidectomy, few patients experience symptoms of nasal obstruction, suggestive of adenoid regrowth. Because of various risk factors, patients require a revision adenoidectomy. This study aimed to determine the incidence of revision adenoidectomy at King Abdulaziz University Hospital (KAUH). Moreover, we aimed to identify the characteristics and factors that present a risk of revision adenoidectomy in pediatric patients. Materials and Methods We retrospectively reviewed the medical records of 680 pediatric patients (age below 18 years) of Saudi and non-Saudi descent who underwent a prior adenoidectomy with or without tonsillectomy, as well as those who underwent a revision adenoidectomy. The data from 2015 to 2018 were obtained from the hospital medical records using a data collection sheet. The data were entered into a Microsoft Excel sheet, and descriptive statistical analysis was performed using IBM SPSS software V21 (IBM Corp., Armonk, NY). Results The incidence of revision adenoidectomy at our center was 2.79%.
We found significant relationships between comorbidities and revision adenoidectomy (p=0.014), initial adenoidectomy without tonsillectomy and revision adenoidectomy (p=0.001), and a young age at initial surgery and revision adenoidectomy. The mean age at initial adenoidectomy was 2.5 years (standard deviation [SD], ±0.607 years), whereas that at revision adenoidectomy was 1.89 years (SD, ±0.737 years). The mean interval between primary and revision adenoidectomies was 42.32 months (range, 9-86 months). Conclusion The incidence of revision adenoidectomy at KAUH was 2.79%. Moreover, only adenoidectomy without a tonsillectomy presented a high risk of adenoid regrowth necessitating a revision adenoidectomy. Therefore, we recommend counseling patients to undergo an adenoidectomy with tonsillectomy to reduce the risk of revision adenoidectomy. abstract_id: PUBMED:38400707 Predictors of Success of Adenoidectomy in the Treatment of Pediatric Chronic Rhinosinusitis. Objectives: There is consensus for adenoidectomy as treatment of chronic rhinosinusitis (CRS) in children younger than 6 years but less consensus for those aged 6 years and older. The present clinical research study investigated predictors of adenoidectomy success for pediatric CRS. Methods: A retrospective chart review was conducted for 107 children who presented to an otolaryngology clinic and had an adenoidectomy for CRS after failure of maximal medical therapy. Results: The majority of the sample was male (N = 63; 58.9%) with a mean age of 4.88 years (SD = 2.54) and mean Lund-Mackay score of 7.76 (SD = 3.87). Patients who had success with adenoidectomy were older on average [mean (M) = 5.46; SD = 2.83] than patients who failed the procedure (M = 4.36; SD = 2.40; F = 4.06, P = .047). This equated to 67.9% of children aged 6 years and older and 39.2% of children younger than 6 years succeeding with the surgery. Conclusions: Adenoidectomy is a good first-line surgery for children aged 12 years and younger, especially in the 6- to 12-year group, which had a high rate of success, with 68% experiencing remission of symptoms after surgery. abstract_id: PUBMED:7567008 Pediatric tympanoplasty: the role of adenoidectomy. A chronic dry perforation in a child presents a dilemma with regard to timing of intervention or whether intervention is appropriate at all. Many studies have looked at elements associated with eustachian tube function in hopes of finding prognostic factors. Adenoidectomy has been shown to be very effective in the treatment of chronic and recurrent otitis media. Intuitively, it would seem to play a role in pediatric tympanoplasty; however, no study has thoroughly investigated this issue to date. To evaluate the role of adenoidectomy in pediatric tympanoplasty, we performed a retrospective review of all patients younger than 18 years who had a simple dry perforation and underwent a Wullstein type I tympanoplasty. Thirty-six patients were identified for review during the 7-year study period. The patients were grouped into those who had previous adenotonsillectomy (n = 12), those who had prior adenoidectomy alone (n = 10), and those who had neither (n = 14). Initial success of tympanoplasty was noted to be high in all three groups. However, at 6-month follow-up, the success for the group who had not had prior adenoidectomy or tonsillectomy dropped dramatically (14.3%), whereas the other two groups maintained success rates in excess of 75% (p = 0.002). This relationship remained fairly constant throughout the 2 years of follow-up.
Although our population under study is somewhat small, the results support a potentially advantageous role of adenoidectomy for pediatric tympanoplasty. These results and their implications will be discussed. abstract_id: PUBMED:26730034 Laparoscopic Peptic Ulcer Perforation Closure: the Preferred Choice. Peptic ulcer perforation is a common life-threatening emergency needing immediate intervention. Laparoscopic closure of perforation is now widely practiced over conventional open closure. This study aimed to compare laparoscopic peptic ulcer perforation closure with conventional open closure in terms of operative time, postoperative analgesia, complications, hospital stay, and return to routine activities. This unicentric, nonblinded, prospective, randomized study was carried out in 50 patients with peptic ulcer perforation who were randomly allocated to undergo either laparoscopic closure or open closure surgery with 25 patients in each group. The mean operative time (60 vs 90 min) was less in the laparoscopic group (p < 0.05). Postoperative analgesia requirements (1 vs 6 days) were also less in laparoscopic patients (p < 0.05). Complications (nil vs 6; p < 0.05) and hospital stay (3 vs 8 days) were less in laparoscopic patients (p < 0.05). Patients returned to normal activities earlier after laparoscopic perforation closure than after open closure (5 vs 10 days; p < 0.05). Our study has shown better outcomes and lesser morbidities with the laparoscopic approach in terms of shorter operative time, shorter hospital stay, lower analgesic requirements, and fewer wound infections. Patients also return to routine activities earlier with the laparoscopic approach. It is a safe alternative to open surgery and should be a preferred choice when there are no contraindications to laparoscopy. abstract_id: PUBMED:28550876 Neonatal and pediatric esophageal perforation. Esophageal perforation (EP) is a rare complication that is often iatrogenic in origin. In contrast with adult patients in whom surgical closure of the defect is preferred, nonoperative treatment has become a common therapeutic approach for EP in neonates and children. Principles of management of pediatric EP include rapid diagnosis, appropriate hemodynamic monitoring and support, antibiotic therapy, total parenteral nutrition, control of extraluminal contamination, and restoration of luminal integrity either through time or operative approaches. Answer: The study described in abstract PUBMED:19692939 aimed to determine whether a previous adenoidectomy improves the likelihood of perforation closure after pediatric myringoplasty. The retrospective case review included children younger than 16 years who underwent primary or revision myringoplasty between 1999 and 2007. The main outcome measure was the integrity of the tympanic membrane assessed during postoperative follow-up. The study found that the overall perforation closure rate was 71.8%. The adenoidectomy group, which comprised 79 patients (37.1%), achieved perforation closure in 79.1% of cases, compared with 67.4% in the nonadenoidectomy group. However, the difference was not statistically significant (p = 0.157). Other factors such as older age, smaller perforations, nonanterior perforations, and operations performed by a consultant resulted in significantly higher rates of perforation closure. The conclusion of the study suggests that adenoidectomy before pediatric myringoplasty may not improve the likelihood of a successful outcome.
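To make the headline comparison from PUBMED:19692939 concrete: the 79.1% vs. 67.4% closure rates correspond to an unadjusted odds ratio just under 2 whose 95% confidence interval crosses 1, consistent with the reported p = 0.157. The Python sketch below reconstructs approximate counts from the quoted percentages and group sizes (79 adenoidectomy patients out of 213 total), so treat it as a demonstration of the arithmetic, not a reanalysis of the study's data.

import math

# Approximate 2x2 counts reconstructed from the reported percentages;
# the exact study counts may differ slightly due to rounding.
closed_adeno, failed_adeno = 63, 16        # about 79.1% of 79 patients
closed_other, failed_other = 90, 44        # about 67.4% of 134 patients

odds_ratio = (closed_adeno * failed_other) / (failed_adeno * closed_other)

# Wald 95% confidence interval on the log-odds-ratio scale
se_log_or = math.sqrt(1 / closed_adeno + 1 / failed_adeno
                      + 1 / closed_other + 1 / failed_other)
log_or = math.log(odds_ratio)
low = math.exp(log_or - 1.96 * se_log_or)
high = math.exp(log_or + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {low:.2f} to {high:.2f})")

With these counts the interval runs from roughly 1.0 to 3.7, so the point estimate favours adenoidectomy but does not exclude no effect, which is exactly the pattern the abstract describes.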
Instruction: Do we need a new classification for radical hysterectomy? Abstracts: abstract_id: PUBMED:34283057 A New Anatomic and Staging-Oriented Classification of Radical Hysterectomy. The current understanding of radical hysterectomy is centered more on the uterus, and little is discussed about resection of the vaginal cuff and the paracolpium as an essential part of this procedure. This is because the current classifications of radical hysterectomy are based only on the lateral extent of resection. This approach is easier to understand but does not reflect the anatomical and surgical conception of radical hysterectomy or the three-dimensional ways in which tumours spread, nor does it meet the need to adjust radicality according to the different stages of the FIGO classification, which depends, at least in the early stages, on tumour volume and infiltration of the vagina (but not on direct spread into the parametrium). The new classification presented in this paper is no longer based solely on the lateral extent of resection but also on the depth of resection in the small pelvis and the extent of the resected vaginal vault, with or without its three-dimensional paracolpium. This classification takes into account tumour size, stage, localization and infiltration of the vaginal vault and may offer the optimal tool to adjust and tailor the surgery according to these important variables. abstract_id: PUBMED:38145104 A new membrane anatomy-oriented classification of radical surgery for rectal cancer. For patients with different clinical stages of rectal cancer, tailored surgery is urgently needed. Over the past 10 years, our team has conducted numerous anatomical studies and proposed the "four fasciae and three spaces" theory to guide rectal cancer surgery. Enlightened by the anatomical basis of the radical hysterectomy classification system of Querleu and Morrow, we proposed a new classification system of radical surgery for rectal cancer based on membrane anatomy. This system categorizes the surgery into four types (A-D) and incorporates corresponding subtypes based on the preservation of the autonomic nerve. Our surgical classification unifies the pelvic membrane anatomical terminology, validates the feasibility of classifying rectal cancer surgery using the theory of "four fasciae and three spaces," and lays the theoretical groundwork for the future development of a unified and standardized classification of radical pelvic tumor surgery. abstract_id: PUBMED:37788719 International expert consensus on the surgical anatomic classification of radical hysterectomies. Background: The anatomic descriptions and extents of radical hysterectomy often vary across the literature and operative reports worldwide. The same nomenclature is often used to describe varying procedures, and different nomenclature is often used to describe the same procedure despite the availability of guideline and classification systems. This makes it difficult to interpret retrospective surgical reports, analyze surgical databases, understand technique descriptions, and interpret the findings of surgical studies. Objective: In collaboration with international experts in gynecologic oncology, the purpose of this study was to establish a consensus in defining and interpreting the 2017 updated Querleu-Morrow classification of radical hysterectomies.
Study Design: The anatomic templates of type A, B, and C radical hysterectomy were documented through a set of 13 images taken at the time of cadaver dissection. An online survey related to radical hysterectomy nomenclature and definitions or descriptions of the associated procedures was circulated among international experts in radical hysterectomy. A 3-step modified Delphi method was used to establish consensus. Image legends were amended according to the experts' responses and then redistributed as part of a second round of the survey. Consensus was defined by a yes response to a question concerning a specific image. Anyone who responded no to a question was welcome to comment and provide justification. A final set of images and legends was compiled to anatomically illustrate and define or describe a lateral, ventral, and dorsal excision of the tissues surrounding the cervix. Results: In total, there were 13 questions to review, and 29 experts completed the whole process. Final consensus exceeded 90% for all questions except 1 (86%). Questions with relatively lower consensus rates concerned the definitions of types A and B2 radical hysterectomy, which were the main innovations of the 2017 updated version of the 2008 Querleu-Morrow classification. Questions with the highest consensus rates concerned the definitions of types B1 and C, which are the most frequently performed radical hysterectomies. Conclusion: The 2017 version of the Querleu-Morrow classification proved to be a robust tool for defining and describing the extent of radical hysterectomies with a high level of consensus among international experts in gynecologic oncology. Knowledge and implementation of the exact definitions of hysterectomy radicality are imperative in clinical practice and clinical research. abstract_id: PUBMED:17727931 Do we need a new classification for radical hysterectomy? Insights in surgical anatomy and local tumor spread from human embryology. Objective: Current surgical treatment of cervical carcinoma is based on the assumption of undirected intra- and transcervical local tumor propagation and is executed by tailored excision of the paracervical tissues. We have recently demonstrated that cervical carcinoma spreads for extended phases during its malignant progression within the permissive compartment of the Müllerian morphogenetic unit (Lancet Oncol 2005;6:751-56) and proposed Müllerian compartment resection as the new principle for surgical treatment of cervical cancer. Do we need a new classification of radical hysterectomy? Methods: The therapeutic index of the surgical treatment of cervical carcinoma FIGO stages IB1-IIB by extirpation of the Müllerian compartment through total mesometrial resection (TMMR) without adjuvant radiation is evaluated by an ongoing controlled prospective trial at the University of Leipzig. Results: From 7/1998 to 12/2006, 163 patients with cervical carcinoma, FIGO stages IB1 (n=94), IB2 (n=21), IIA (n=14) and IIB (n=34) were treated with TMMR and nerve-sparing therapeutic lymph node dissection. Twenty-five patients received (neo)adjuvant chemotherapy. No patient underwent adjuvant radiotherapy although 95 patients (58%) would have needed this additional modality in case of conventional radical hysterectomy because of their high-risk histopathological tumor features. At a median follow-up time of 45 months (3-104 months), recurrence-free and disease-specific overall survival rates are 93% and 96%, respectively.
Maximum treatment-related morbidity according to the Franco-Italian score was grade 2 in 12 patients (8%). Conclusions: The developmental view of local tumor spread and surgical anatomy holds a great promise for improving the therapeutic index of surgical cervical cancer therapy and challenges both the classification of radical hysterectomy based on tailored paracervical resection and the indication for adjuvant radiation. abstract_id: PUBMED:18308255 Classification of radical hysterectomy. Since the first publications about surgery for cervical cancer, many radical procedures that accord with different degrees of radicality have been described and performed. Here, we propose a basis for a new and simple classification for cervical-cancer surgery, taking into account the curative effect of surgery and adverse effects, such as bladder dysfunction. The international anatomical nomenclature is used where it applies. For simplification, the classification is based only on lateral extent of resection. We describe four types of radical hysterectomy (A-D), adding when necessary a few subtypes that consider nerve preservation and paracervical lymphadenectomy. Lymph-node dissection is considered separately: four levels (1-4) are defined according to corresponding arterial anatomy and radicality of the procedure. The classification applies to fertility-sparing surgery, and can be adapted to open, vaginal, laparoscopic, or robotic surgery. In the future, internationally standardised description of techniques for communication, comparison, clinical research, and quality control will be a basic part of every surgical procedure. abstract_id: PUBMED:35913623 Clavien-Dindo classification and risk prediction model of complications after robot-assisted radical hysterectomy for cervical cancer. Although significant progress has been made with surgical methods, the incidence of complications after minimally invasive surgery in patients with cervical cancer remains high. Established as a standardized system, the Clavien-Dindo classification (CDC) has been applied in a variety of surgical fields. This study is designed to evaluate the complications after robot-assisted radical hysterectomy (RRH) for cervical cancer using CDC and further establish a prediction model. This is a study on the development of a prediction model based on retrospective data. Patients with cervical cancer who received RRH treatment in our hospital from January 2016 to April 2019 were invited to participate in the study. The demographic data, laboratory and imaging examination results and postoperative complications were collected, and the logistic regression model was applied to analyze the risk factors possibly related to complications to establish a prediction model. In total, 753 patients received RRH. The overall incidence of complications was 32.7%, most of which were grade I and grade II (accounting for 30.6%). The results of multivariate analysis showed that preoperative neoadjuvant chemotherapy (OR = 1.693, 95%CI: 1.210-2.370, P = 0.002), preoperative ALT (OR = 1.028, 95%CI: 1.017-1.039, P < 0.001), preoperative urea nitrogen (OR = 0.868, 95%CI: 0.773-0.974, P = 0.016), preoperative total bilirubin (OR = 0.958, 95%CI: 0.925-0.993, P = 0.018), and preoperative albumin (OR = 0.937, 95%CI: 0.898-0.979, P = 0.003) were related to the occurrence of postoperative complications.
The area under the receiver-operating characteristic (ROC) curve for the prediction model of postoperative complications after RRH based on these five factors was 0.827 (95% CI, 0.794-0.860; an illustrative sketch of this kind of model appears at the end of this record). In patients undergoing robot-assisted radical hysterectomy for cervical cancer, preoperative ALT level, urea nitrogen level, total bilirubin level, albumin level, and neoadjuvant chemotherapy were significantly related to the occurrence of postoperative complications. The regression prediction model established on this basis showed good predictive performance and has potential value for clinical application and reference. abstract_id: PUBMED:35195838 Radical Hysterectomy After the LACC Trial: Back to Radical Vaginal Surgery. Opinion Statement: Classical radical vaginal hysterectomy, first performed by Anton Pawlik in Prague in 1888 and popularized by Frederic Schauta, is now a historical technique virtually abandoned due to painful perineal incision, a high rate of urinary dysfunction, and the inability to perform lymph node assessment. However, the heritage of this approach is still used and taught in a few centers beyond its Austrian birthplace. A combined vaginal and laparoscopic approach was developed in the 1990s by French and German surgeons who designed diverse surgical techniques for which a novel classification is proposed. All these techniques are different from the so-called laparoscopically assisted radical vaginal hysterectomy (LARVH), a term widely used for laparoscopic radical hysterectomies with vaginal extraction of the specimen. Interestingly, after the publication of the LACC trial (Laparoscopic Approach to Cervical Cancer), the radical vaginal approach has found a very timely application. The creation of a vaginal cuff before performing radical laparoscopic hysterectomy, described in 2007 by Leblanc as the "Schautheim" operation, can be used as a protective maneuver to avoid tumor spillage and potentially overturn the negative outcome of minimally invasive surgery in early-stage cervical cancer. As a result, the combination of radical vaginal and laparoscopic steps of surgery is one possible evolution after the LACC trial that needs further investigation. The forgotten vaginal surgical technique needs a specific learning curve. The creation of a vaginal cuff should be mastered by every gynecological oncologist. abstract_id: PUBMED:21592548 New classification system of radical hysterectomy: emphasis on a three-dimensional anatomic template for parametrial resection. Objective: The international acceptance of a universal classification system for radical hysterectomy is one of the important challenges in gynecologic oncology. The recently published classification system by Querleu and Morrow is a relevant proposal that has been well received by the professional community. However, it does not include a description of parametrial resection in three dimensions, which mostly determines post-operative morbidity. Methods: The intention of this follow-up paper was to further develop the classification system based on the four proposed types of radical hysterectomy (A-D) into a three-dimensional model using standard anatomical landmarks for definition of resection margins in longitudinal and transverse dimensions and to demonstrate it with photographs. Results: Resection margins were defined in longitudinal and transverse dimensions for each suggested type of radical hysterectomy on all three parts of the parametria.
Besides precise description using stable anatomical landmarks, all resection lines have been shown on intra-operative photographs. Conclusion: Four types of radical hysterectomy can be precisely defined on a three-dimensional anatomical template, including the nerve-sparing procedure. Our paper should contribute to better standardization (including nomenclature) of radical hysterectomy, enhancing harmonization of clinical practice in gynecological oncology. abstract_id: PUBMED:18021216 Classification of radical hysterectomy adopted by the Gynecological Cancer Group of the European Organization for Research and Treatment of Cancer. The Piver classification of radical hysterectomy for the treatment of cervical cancer is outdated and misused. The Surgery Committee of the Gynecological Cancer Group of the European Organization for Research and Treatment of Cancer (EORTC) produced, approved, and adopted a revised classification. It is hoped that at least within the EORTC participating centers, a standardization of procedures is achieved. The clinical indications of the new classification are discussed. abstract_id: PUBMED:25408722 Types of radical hysterectomies: From Thoma Ionescu and Wertheim to present day. The treatment of cervical cancer is a complex, multidisciplinary issue, with treatment applied according to the stage of the disease. The elective surgical treatment of cervical cancer is radical abdominal hysterectomy. Over time, many surgeons refined this surgical technique, most notably Thoma Ionescu and Ernst Wertheim. There are many varieties of radical hysterectomy performed abdominally, and some performed by vaginal or combined approaches. Each method employed has advantages and disadvantages. At present, three classifications of radical hysterectomy are used to simplify surgical protocols: the Piver-Rutledge-Smith classification, which is the oldest; the GCG-EORTC classification; and the Querleu-Morrow classification. The last is the most recent and most refined; its techniques can be adapted for conservative operations and for different types of surgical approaches: abdominal, vaginal, laparoscopic or robotic.
Moreover, the concept of Müllerian compartment resection has been proposed as a new principle for the surgical treatment of cervical cancer, challenging the classification of radical hysterectomy based on tailored paracervical resection (PUBMED:17727931). This developmental view of local tumor spread and surgical anatomy suggests that there may be room for improvement in the therapeutic index of surgical cervical cancer therapy. In light of these discussions and proposals, it appears that there is a need for a new or further refined classification system for radical hysterectomy that takes into account the latest anatomical and clinical insights, as well as the need for tailored surgery based on individual patient characteristics and tumor behavior. Such a classification would aim to standardize procedures, facilitate communication and comparison among professionals, and improve clinical research and quality control (PUBMED:18308255, PUBMED:25408722).
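For readers unfamiliar with how a complication-risk model like the one in PUBMED:35913623 is built and evaluated, the sketch referenced earlier in this record fits a logistic regression on five preoperative factors and reports a ROC AUC, mirroring the study's workflow. Everything in it is synthetic: the data are randomly generated stand-ins and the variable layout merely echoes the reported predictors, so the printed AUC has no relation to the published 0.827.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 753  # same cohort size as the study; the contents are synthetic

# Columns mirror the five reported predictors (all values are invented).
X = np.column_stack([
    rng.integers(0, 2, n),    # neoadjuvant chemotherapy (yes/no)
    rng.normal(25, 10, n),    # preoperative ALT
    rng.normal(5.0, 1.5, n),  # preoperative urea nitrogen
    rng.normal(12, 4, n),     # preoperative total bilirubin
    rng.normal(40, 4, n),     # preoperative albumin
])

# Synthetic outcome loosely tied to the predictors so the fit is non-trivial.
logit = (-1.5 + 0.5 * X[:, 0] + 0.03 * (X[:, 1] - 25)
         - 0.10 * (X[:, 2] - 5) - 0.04 * (X[:, 3] - 12) - 0.06 * (X[:, 4] - 40))
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"training AUC = {auc:.3f}")

In practice the AUC would be estimated on held-out data or by cross-validation rather than on the training set; the in-sample value here is shown only to illustrate where a figure such as 0.827 (95% CI, 0.794-0.860) comes from.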
Instruction: Does resident experience affect outcomes in complex abdominal surgery? Abstracts: abstract_id: PUBMED:24726058 Impact of podiatry resident experience level in hallux valgus surgery on postoperative outcomes. Background: Despite modern advancements in transosseous fixation and operative technique, hallux valgus (i.e., bunion) surgery is still associated with a higher than usual amount of patient dissatisfaction and is generally recognized as a complex and nuanced procedure requiring precise osseous and capsulotendon balancing. It stands to reason then that familiarity and skill level of trainee surgeons might impact surgical outcomes in this surgery. The aim of this study was to determine whether podiatry resident experience level influences midterm outcomes in hallux valgus surgery (HVS). Methods: Consecutive adults who underwent isolated HVS via distal metatarsal osteotomy at a single US metropolitan teaching hospital from January 2004 to January 2009 were contacted and asked to complete a validated outcome measure of foot health (Manchester-Oxford Foot Questionnaire) regarding their operated foot. Resident experience level was quantified using the surgical logs for the primary resident of record at the time of each case. Associations were assessed using linear and logistic regression analyses. Results: A total of 102 adult patients (n = 102 feet) agreed to participate with a mean age of 46.8 years (standard deviation 13.1, range 18-71) and an average length of follow-up of 6.2 years (standard deviation 1.4, range 3.6-8.6). Level of trainee experience was not associated with postoperative outcomes in either the univariate (odds ratio 0.99 [95% confidence interval, 0.98-1.01], P = 0.827) or multivariate analyses (odds ratio 1.00 [95% confidence interval, 0.97-1.02], P = 0.907). Conclusions: We conclude that podiatry resident level of experience in HVS does not contribute appreciably to postoperative clinical outcomes. abstract_id: PUBMED:24222321 Does resident experience affect outcomes in complex abdominal surgery? Pancreaticoduodenectomy as an example. Objectives: Understanding the factors contributing to improved postoperative patient outcomes remains paramount. For complex abdominal operations such as pancreaticoduodenectomy (PD), the influence of provider and hospital volume on surgical outcomes has been described. The impact of resident experience is less well understood. Methods: We reviewed perioperative outcomes after PD at a single high-volume center between 2006 and 2012. Resident participation and outcomes were collected in a prospectively maintained database. Resident experience was defined as postgraduate year (PGY) and number of PDs performed. Results: Forty-three residents and four attending surgeons completed 686 PDs. The overall complication rate was 44%; PD-specific complications (defined as pancreatic fistula, delayed gastric emptying, intraabdominal abscess, wound infection, and bile leak) occurred in 28% of patients. The overall complication rates were similar when comparing PGY 4 to PGY 5 residents (55.3 vs. 43.0%; p > 0.05). On univariate analysis, there was a difference in PD-specific complications between PGY 4 and PGY 5 residents (44 vs. 27%, respectively; p = 0.016). However, this was not statistically significant when adjusted for attending surgeon. Logistic regression demonstrated that as residents perform more cases, PD-specific complications decrease (OR = 0.97; p < 0.01).
For a resident's first PD case, the predicted probability of a PD-specific complication is 27 %; this rate decreases to 19 % by resident case number 15. Conclusions: Complex cases, such as PD, provide unparalleled learning opportunities and remain an important component of surgical training. We highlight the impact of resident involvement in complex abdominal operations, demonstrating for the first time that as residents build experience with PD, patient outcomes improve. This is consistent with volume-outcome relationships for attending physicians and high-volume hospitals. Maximizing resident repetitive exposure to complex procedures benefits both the patient and the trainee. abstract_id: PUBMED:24950796 Resident participation and postoperative outcomes in adrenal surgery. Background: The changing paradigm of surgical residency training has raised concerns about the effects on the quality of training. The purpose of this study is to identify if resident participation in laparoscopic adrenalectomy (LA) and open adrenalectomy (OA) cases is associated with deleterious outcomes. Materials And Methods: This is a retrospective study using the American College of Surgeons National Surgical Quality Improvement Program database. Data from patients undergoing LA and OA from 2005 to 2010 were queried. Preoperative variables as well as intra- and post-operative outcomes for each procedure were evaluated. Multivariate logistic regression was used to analyze if resident participation was associated with significant differences in outcomes, compared with no resident participation. Subset analysis was done to determine possible differences in outcomes based on the level of the participating resident, divided into junior (Post Graduate Year [PGY]1-3), senior (PGY4-5), or fellow (≥PGY6) levels. Results: A total of 3219 adrenalectomies were performed. Of these, 735 (22.8%) were OAs and 2484 (77.2%) were LAs. Residents were involved in 2582 (80.2%) surgeries, which comprised 1985 (76.9%) LAs and 597 (23.1%) OAs. Senior residents or fellows performed the majority of the cases (85.2%). Mean operative time was significantly higher with resident participation in both the LA (P < 0.0001) and OA (P < 0.0001) groups. On multivariate analysis, resident participation was not associated with significant differences in the operative outcomes of 30-d mortality or postoperative complications after laparoscopic or open adrenalectomy. Conclusions: Although resident participation does increase operative time in LA and OA, this does not appear to be clinically significant and does not result in adverse patient outcomes. abstract_id: PUBMED:37675800 Impact of resident participation on surgical outcomes in laparoscopically assisted vaginal hysterectomy. Objective: To compare surgical outcomes in patients with benign diseases who underwent laparoscopically assisted vaginal hysterectomy (LAVH) to determine the association between surgical outcomes and resident participation in the gynecologic field. Methods: A single-center retrospective study was conducted of patients diagnosed with benign gynecologic diseases who underwent LAVH between January 2010 and December 2015. Clinicopathologic characteristics and surgical outcomes were compared between the resident involvement and non-involvement groups. The primary endpoint was the 30-day postoperative morbidity. Observations were propensity matched for 17 covariates for resident involvement or non-involvement.
Results: Of the 683 patients involved in the study, 165 underwent LAVH with resident involvement and 518 underwent surgery without resident involvement. After propensity score matching (157 observations), 30-day postoperative morbidity occurred in 6 (3.8%) and 4 (2.5%) patients in the resident involvement and non-involvement groups, respectively (P = 0.501). The length of hospital stay differed significantly between the two groups: 5 days in the resident involvement group and 4 days in the non-involvement group (P < 0.001). On multivariate analysis, Charlson Comorbidity Index >2 (odds ratio [OR] 8.01, 95% confidence interval [CI] 2.68-23.96; P < 0.001), operative time (OR 1.02, 95% CI 1.01-1.03; P < 0.001), and estimated blood loss (OR 1.00, 95% CI 1.00-1.00; P < 0.001) were significantly associated with 30-day morbidity, but resident involvement was not statistically significant. Conclusion: There was no significant difference in the 30-day morbidity rate when residents participated in LAVH. These findings suggest that resident participation in LAVH may be a viable approach to ensure both residency education and patient safety. abstract_id: PUBMED:29656619 Impact of Resident Involvement in Neurosurgery: An American College of Surgeons' National Surgical Quality Improvement Program Database Analysis of 33,977 Patients. Objective: There is conflicting and limited literature on the effect of intraoperative resident involvement on surgical outcomes. Our study assessed effects of resident involvement on outcomes in patients undergoing neurosurgery. Methods: We identified 33,977 adult neurosurgical cases from 374 hospitals in the 2006-2012 National Surgical Quality Improvement Program, a prospectively collected national database with established reproducibility and validity. Outcomes were compared according to resident involvement before and after 1:1 matching on procedure and perioperative risk factors. Results: Resident involvement was documented in 13,654 cases. We matched 10,170 resident-involved cases with 10,170 attending-alone. In the matched sample, resident involvement was associated with increased surgery duration (average, 34 minutes) and slight increases in odds for prolonged hospital stay (odds ratio, 1.2; 95% confidence interval [CI], 1.2-1.3) and complications (odds ratio, 1.2; 95% CI, 1.1-1.3) including infections (odds ratio, 1.4; 95% CI, 1.2-1.7). Increased risk for infections persisted after controlling for surgery duration (odds ratio, 1.3; 95% CI, 1.1-1.5). The majority of cases were spine surgeries, and resident involvement was not associated with morbidity or mortality for malignant tumor and aneurysm patients. Training level of residents was not associated with differences in outcomes. Conclusion: Resident involvement was more common in sicker patients undergoing complex procedures, consistent with academic centers undertaking more complex cases. After controlling for patient and intraoperative characteristics, resident involvement in neurosurgical cases continued to be associated with longer surgical duration and slightly higher infection rates. Longer surgery duration did not account for differences in infection rates. abstract_id: PUBMED:24725294 A case for international neurosurgical experience: US resident experiences with pediatric spinal dysraphism cases. 
Introduction: In the National Resident Report of Neurological Surgery Case Logs prepared by the department of Applications and Data Analysis of the Accreditation Council for Graduate Medical Education (ACGME), the Resident Review Committee for Neurological Surgery conducted a survey of the total experience of residents completing programs in 2008-2011. Methods: Review of case numbers from the Resident Review Committee survey and at an East African hospital with visiting US neurosurgical trainees was conducted. Results: US residents reported their experiences with the ACGME index cases categorized as 'dysraphism - pediatric cases'. The national resident average of pediatric spinal dysraphism cases totaled for all years was 8.1 cases with a standard deviation of 11. In contrast, more than four spinal dysraphism cases per week were reported per resident during a rotation at Kijabe Hospital in Kenya (mean duration of rotation: 2.4 weeks). The majority of these cases were myelomeningocele cases. The pediatric neurosurgeon in residence at Kijabe has logged over 300 spinal dysraphism cases per year since summer of 2010. Conclusion: A 4-week international resident rotation could potentially provide experience with an average of four spinal dysraphism cases per week. Over one month, this could eclipse the 90th percentile of US resident experience for an entire residency. This is borne out by individual experiences of residents who have visited Kijabe, Kenya, over the past several years. The role and value of international neurosurgical experiences for residency training can be significant, especially for treatment of pathologies rarely encountered in many areas of the USA. abstract_id: PUBMED:36335032 Declining Resident Surgical Autonomy and Improving Surgical Outcomes: Correlation Does Not Equal Causality. Objective: The volume of cases that residents perform independently has decreased, leaving graduating chief residents less prepared for independent practice. Outcomes are not worse when residents are given autonomy with appropriate supervision; however, it is unknown if outcomes are worsening with decreasing operative autonomy experience. We hypothesize that resident autonomous cases parallel the improving outcomes in surgical care over time but are less complex and involve lower-acuity patients. Design: Retrospective study utilizing the Veterans Affairs Surgical Quality Improvement Program (VASQIP) database. Setting: Operative cases performed on teaching services within the VASQIP database from July 1, 2004 to September 30, 2019, were included. Participants: All adult patients who underwent a surgical procedure from July 1, 2004, to September 30, 2019, at a VA hospital on a service that included residents were initially included. After inclusions and exclusions, there were 1,346,461 cases. Cases were divided into 3 sequential 5-year eras (ERA 1: 2004-2008 n = 415,614, ERA 2: 2009-2013 n = 478,528, and ERA 3: 2014-2019 n = 452,319). The main exposure of interest was level of resident supervision, coded at the time of procedure as: attending primary surgeon (AP), attending and resident (AR), or resident primary with the attending supervising but not scrubbed (RP). We compared 30-day all-cause mortality, composite morbidity, work relative value unit (wRVU), hospital length of stay, and operative time between each ERA for RP cases, as well as within each ERA for RP cases compared to AR and AP cases.
Results: There was a progressive decline in the rate of RP cases in each successive ERA (ERA 1: 58,249 (14.0%) vs ERA 2: 47,891 (10.0%) vs ERA 3: 35,352 (7.8%), p < 0.001). For RP cases, patients were progressively getting older (60 yrs [53-71] vs 63 yrs [54-69] vs 66 yrs [57-72], p < 0.001) and sicker (ASA 3 58.7% vs 62.5% vs 66.2% and ASA 4/5 8.4% vs 9.6% vs 10.0%, p < 0.001). Odds of mortality decreased in each ERA compared to the previous (aOR 0.71 [0.62-0.80] ERA 2 vs ERA 1 and 0.82 [0.70-0.97] ERA 3 vs ERA 2) as did morbidity (0.77 [0.73-0.82] ERA 2 vs ERA 1 and 0.72 [0.68-0.77] ERA 3 vs ERA 2). Operative time and length of stay also decreased while wRVU stayed unchanged. When comparing RP cases to AP and AR within each ERA, RP cases tended to be on younger and healthier patients with a lower wRVU, particularly compared to AR cases. Mortality and morbidity in RP cases were no different from, or better than, AR and AP cases. Conclusions: Despite resident autonomy decreasing, outcomes in cases where residents are afforded autonomy are improving over time. This is despite RP cases being on sicker and older patients and involving roughly the same complexity of cases. These cases also continue to fare no worse than cases with higher levels of supervision. Efforts to increase surgical resident operative autonomy are still needed to improve readiness for independent practice. abstract_id: PUBMED:35987044 Resident assistant training level is not associated with patient spinal fusion outcomes. Objective: A hallmark of surgical training is resident involvement in operative procedures. While resident-assisted surgeries have been deemed generally safe, few studies have rigorously isolated the impact of resident post-graduate year (PGY) level on post-operative outcomes in a neurosurgical patient population. The objective of this study is to evaluate the relationship between resident training level and outcomes following single-level, posterior-only lumbar fusion, after matching on key patient demographic/clinical characteristics and attending surgeon. Patients And Methods: This coarsened-exact matching (CEM) study analyzed 2338 consecutive adult patients who underwent single-level lumbar fusion with a resident assistant surgeon at a multi-hospital university health system from 2013 to 2019. Primary outcomes were 30-day and 90-day readmissions, Emergency Department (ED) visits, reoperations, surgical complications, and mortality. First, univariate logistic regression examined the relationship between PGY level and outcomes. Then, CEM was used to control for key patient characteristics - such as race and comorbid status - and supervising attending surgeon, between the most junior (PGY-2)-assisted cases and the most senior (PGY-7)-assisted cases, thereby isolating the relationship between training level and outcomes. Results: Among all patients, resident training level was not associated with risk of adverse post-surgical outcomes. Similarly, between exact-matched cohorts of PGY-2- and PGY-7-assisted cases, no significant differences in adverse events or discharge disposition were observed. Patients with the most senior resident assistant surgeons demonstrated longer length of stay (mean 100.5 vs. 93.8 h, p = 0.022) and longer duration of surgery (mean 173.5 vs. 159.8 min, p = 0.036). Conclusion: Training level of the resident assistant surgeon did not impact adverse outcomes for patients in the setting of single-level, posterior-only lumbar fusion.
These findings suggest that attending surgeons appropriately manage cases with resident surgeons at different levels of training. abstract_id: PUBMED:27776758 It depends on your perspective: Resident satisfaction with operative experience. Introduction: Resident satisfaction is a key performance metric for surgery programs; we studied factors influencing resident satisfaction in operative cases, and the concordance of faculty and resident perceptions on these factors. Methods: Residents and faculty were separately queried on satisfaction immediately following operative cases. Statistical significance of the associations between resident and faculty satisfaction and case-related factors was tested by chi-square or Fisher's exact test. Results: Residents and faculty were very satisfied in 56/87 (64%) and 36/87 (41%) of cases, respectively. Resident satisfaction was associated with their perceived role as surgeon (p < 0.04), performing >50% of the case (p < 0.01), autonomy (p < 0.03), and PGY 4-5 (p < 0.02). Faculty taking over the case was associated with both resident and faculty dissatisfaction. Faculty satisfaction was associated with resident preparation (p < 0.01), faculty perception of resident autonomy (p < 0.01), and faculty familiarity with resident's skills (p < 0.01). Conclusions: Resident and faculty satisfaction are associated with the resident's competent performance of the case, suggesting that interventions to optimize resident preparation for a case or faculty's ability to facilitate resident autonomy will improve satisfaction with the OR experience. abstract_id: PUBMED:27234526 The Impact of Resident Participation in Outpatient Plastic Surgical Procedures. Introduction: Ensuring patient safety along with a complete surgical experience for residents is of utmost importance in plastic surgical training. The effect of resident participation on the outcomes of outpatient plastic surgery procedures remains largely unknown. We assess the impact of resident participation on surgical outcomes using a prospective, validated, national database. Methods: We identified all outpatient procedures performed by plastic surgeons between 2007 and 2012 in the American College of Surgeons National Surgical Quality Improvement Program database. Multivariate regression models assessed the impact of resident participation when compared to attendings alone on 30-day wound complications, overall complications, and return to the operating room (OR). Results: A total of 18,641 patients were identified: 12,414 patients with an attending alone and 6227 with residents participating. The incidence of overall complications, wound complications, and return to OR was increased with resident participation. When confounding variables were controlled for in multivariate analysis, resident participation was no longer associated with increased risk of wound complications. When stratified by year, incidence of overall complications, wound complications, and return to OR in the resident participation group are trending down and fail to be significantly different in 2011 and 2012. Multivariate analysis shows a similar trend. Conclusions: Resident participation is no longer independently associated with increased complications in outpatient plastic surgery in recent years, suggesting that plastic surgical training is successfully continuing to improve in both outcomes and safety.
Additional prospective studies that characterize patient outcomes with resident seniority and the degree of resident participation are warranted. Level Of Evidence II: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266. Answer: Yes, resident experience does affect outcomes in complex abdominal surgery. A study examining perioperative outcomes after pancreaticoduodenectomy (PD) at a single high-volume center found that as residents perform more PD cases, PD-specific complications decrease (PUBMED:24222321). The study showed that for a resident's first PD case, the predicted probability of a PD-specific complication is 27%, which decreases to 19% by the resident's 15th case. This suggests that resident experience is associated with improved patient outcomes in the context of complex abdominal surgery such as PD. The study highlights the importance of maximizing resident repetitive exposure to complex procedures to benefit both the patient and the trainee (PUBMED:24222321).
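The 27% to 19% decline quoted above is consistent with the reported per-case odds ratio of 0.97. As a minimal arithmetic sketch (assuming a simple one-predictor logistic model, which the abstract does not fully specify, so this is a plausibility check rather than a reproduction of the authors' adjusted model), converting the baseline probability to odds, compounding the odds ratio over the additional cases, and converting back reproduces the reported figure:

```python
# Plausibility check of the volume-outcome arithmetic in PUBMED:24222321.
# Assumption: each additional PD case multiplies the odds of a PD-specific
# complication by OR = 0.97; the study's covariate adjustments are not shown.

def predicted_probability(p_baseline: float, odds_ratio: float, extra_cases: int) -> float:
    """Convert probability to odds, apply the per-case OR, convert back."""
    odds = p_baseline / (1.0 - p_baseline)   # odds at the first case
    odds *= odds_ratio ** extra_cases        # compound the OR over added cases
    return odds / (1.0 + odds)               # back to a probability

p_first = 0.27                               # reported risk at case 1
p_case_15 = predicted_probability(p_first, 0.97, extra_cases=14)
print(f"Predicted risk at case 15: {p_case_15:.1%}")  # ~19%, matching the abstract
```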
Instruction: Is there an increased risk of complications with laparoscopy looking for a contralateral patent processus vaginalis? Abstracts: abstract_id: PUBMED:22870947 Is there an increased risk of complications with laparoscopy looking for a contralateral patent processus vaginalis? Background: In young children with a unilateral congenital inguinal hernia, the relatively high incidence of an occult contralateral patent processus vaginalis (CPPV) has led to the practice of laparoscopic contralateral exploration. The effect on postoperative complications such as surgical site infection from performing the laparoscopy has not been previously reported. Patients And Methods: A retrospective review was conducted on all patients who underwent a unilateral inguinal hernia repair from January 1, 2000 to March 1, 2010. We compared those children who underwent laparoscopic evaluation of the contralateral inguinal ring with those who did not. Patient demographics and operative data outcomes were evaluated. Student's t test was used to compare continuous variables, and the chi-squared test with Yates's correction was used for discrete variables. Results: There were 1164 patients who underwent a unilateral inguinal hernia repair during the 10-year study period, and laparoscopy was used in 1010 patients. There were no intraoperative complications from the laparoscopy. In the group who underwent laparoscopy, the mean age was 4.0±3.6 years old, and 88% were male. At laparoscopic exploration, 315 (31%) patients were found to have a CPPV. There were 10 patients (1.0%) who developed a surgical site infection. Infection developed in the side used for laparoscopic exploration in 9 patients and in the contralateral side in 1 patient. All patients with surgical site infections were treated initially with oral antibiotics. Abscesses developed in 2 patients, requiring incision and drainage. No patient required hospital admission or reoperation. In the 154 patients who did not undergo laparoscopy, mean age was 4.3±4.4 years (P=.35), and 85.8% were male (P=.54). There was one wound infection identified in this control group (0.6%) (P=1.00). There was no difference in rate of recurrence (control group, 0%; exploration group, 0.6%; P=.72). Conclusions: There is minimal risk of infection or recurrence following unilateral inguinal hernia repair, and this risk is not increased with the use of contralateral exploration using laparoscopy. abstract_id: PUBMED:25574209 Risk factors for contralateral patent processus vaginalis determined by transinguinal laparoscopic examination. Concurrent contralateral inguinal exploration in children with unilateral hernia or hydrocele is a subject of debate. The aim of the present study was to investigate the incidence of contralateral patent processus vaginalis (CPPV) using transinguinal laparoscopy (inguinoscopy). In addition, the risk factors of CPPV were evaluated in order to facilitate the selection of appropriate candidates for contralateral examination. A total of 119 patients who presented with unilateral hydrocele, inguinal hernia or cryptorchidism between 2001 and 2008 underwent inguinoscopy during the ipsilateral surgery. All data were collected prospectively. The incidence of CPPV was investigated and the risk factors affecting the presence of CPPV were analyzed. Among these patients, 29 individuals (24.4%) had CPPV confirmed by inguinoscopy. No surgical complications were observed during the inguinoscopy. 
Cases with suspicious ultrasound findings were at a higher risk of CPPV than cases with normal findings (odds ratio, 13.800; P=0.004). A history of contralateral disease was also found to be a significant risk factor (odds ratio, 4.008; P=0.019). The present study identified that the significant risk factors for CPPV were suspicious findings on ultrasound examination and a history of contralateral disease. Therefore, it is concluded that performing inguinoscopy in children with these risk factors is beneficial. abstract_id: PUBMED:18631911 A long-term prospective analysis of pediatric unilateral inguinal hernias: should laparoscopy or anything else influence the management of the contralateral side? Purpose: To prospectively determine if children who present with a unilateral inguinal hernia can be identified as at risk for developing a metachronous inguinal hernia (MIH) based on risk factors and laparoscopic findings of the contralateral internal ring. Materials And Methods: Between April 2000 and October 2004, 299 patients with a unilateral inguinal hernia were followed prospectively. Laparoscopy was attempted in each child. Bilateral repair was only performed in those with contralateral swelling or crepitus during laparoscopic evaluation. All other children were followed regardless of laparoscopic findings. Risk factors, including premature delivery, family history and increased abdominal pressure, were recorded. Clinical follow up and annual phone interviews were performed to determine the development of a MIH. Results: Thirteen patients underwent initial bilateral inguinal hernia repair. Of the remaining 286 patients (272 boys, 14 girls; ages 54±50.8 months), laparoscopy revealed 127 closed, 48 cleft and 67 open (contralateral patent processus vaginalis) contralateral internal rings, and in 44 laparoscopy was not possible due to a small hernia. Of 222 patients followed for 53.2 months (30.1-82.5 months), 15 (6.8%) developed a MIH. When comparing age, gender, laterality, laparoscopic findings, family history, premature birth and intra-abdominal pressure, only family history exhibited a significant risk for MIH (33% vs 7.7%). However, 16/21 children with a family history never developed a MIH, and 47/53 children with a contralateral patent processus vaginalis have yet to develop one. Conclusions: Risk factors and laparoscopic findings failed to predict the few children who would develop a MIH. The contralateral side should not be routinely explored by any methodology. abstract_id: PUBMED:30729303 Independent risk factors for contralateral patent processus vaginalis undetected by pre-operative ultrasonography in boys with unilateral inguinal hernia. Purpose: Many trials have been conducted to establish the probability of metachronous contralateral hernia (MCH) and contralateral patent processus vaginalis (CPPV). However, the necessity of contralateral exploration is still under debate. The aim was to investigate risk factors that should prompt consideration of contralateral examination during the operation. Materials And Methods: The study was designed retrospectively. Patients with unilateral inguinal hernia from January 2010 to May 2015 were enrolled. Pre-operative ultrasonography was done in all patients. Patients with obvious contralateral side hernia on pre-operative US were excluded. The presence of CPPV was evaluated by transinguinal laparoscopy during the operation. Results: In univariate analysis, only hernial sac size showed a significant difference (P value: 0.001).
The other factors (location of the hernia, age at surgery, gestational age, low birth weight and parents' age) did not show statistically significant differences. Multivariate analysis also demonstrated that CPPV is more common in patients with a large hernial sac (Odds ratio: 2.727, 95% confidence interval 1.495-4.974, P value: 0.001). Conclusion: We propose that surgeons should consider contralateral evaluation during the operation in cases with a large ipsilateral hernial sac, even when CPPV was not detected by pre-operative US. abstract_id: PUBMED:9840905 Diagnostic laparoscopy for contralateral patent processus vaginalis and nonpalpable testes. Diagnostic laparoscopy can be a valuable adjunct for the surgical approach to surgical conditions in children. Two frequently employed indications for diagnostic laparoscopy in children include (1) the search for a contralateral patent processus vaginalis in a child with a known inguinal hernia and (2) localization and management in boys with a nonpalpable testis. Laparoscopy to investigate a possible contralateral patent processus vaginalis is easily performed using a 3-mm, 70 degrees telescope through the known inguinal hernia sac and requires only 5 minutes for completion. Diagnostic laparoscopy in boys with a nonpalpable testis is performed through a 5-mm cannula placed in the umbilicus and takes less than 10 minutes to accomplish. Depending on the laparoscopic findings, ligation of the testicular vessels is possible at laparoscopy in boys with an abdominal testis who appear best managed by a two-staged Fowler-Stephens operation. In addition, laparoscopic orchiectomy may be performed in teenage boys who have an atrophic testis. abstract_id: PUBMED:9800892 Flexible transinguinal laparoscopy to assess the contralateral ring in pediatric inguinal hernias. The incidence of contralateral patent processus vaginalis (CPPV) is >50% in infants with clinical unilateral inguinal hernia (CUIH) and decreases only slowly with advancing age. Laparoscopy through the hernia sac (transinguinal laparoscopy) to detect suspected CPPV is a safe and efficient way to minimize routine contralateral inguinal exploration, but can be technically difficult. We used flexible urologic instruments and/or angled cystoscopic lenses to make transinguinal laparoscopy easier. Over a 3-year period, 37 patients (34 boys and 3 girls) ranging in age from 4 months to 12 years (mean age 59 months) with CUIH underwent ipsilateral groin exploration and diagnostic transinguinal laparoscopy. Laparoscopy was performed with flexible 17F cystoscopes (26 cases), flexible 9F ureteroscopes (2 cases), and rigid 70 degrees cystoscope lenses (9 cases). We detected eight CPPV (22%) in our series of 20 right and 17 left inguinal hernias, in a mean transinguinal laparoscopy time of 4.5 minutes. At 26-month mean follow-up, no patient whose contralateral inguinal ring was deemed closed had developed a hernia. Flexible fiberoptic urologic scopes and/or angled cystoscope lenses make transinguinal laparoscopy easy and efficacious in the pediatric population. Our series represents the longest longitudinal study of transinguinal laparoscopy for the diagnosis of CPPV. abstract_id: PUBMED:29195832 Variation in the use of laparoscopy with inguinal hernia repairs in a sample of pediatric patients at children's hospitals. Introduction/background: Metachronous contralateral inguinal hernias (MCH) occur in approximately 10% of pediatric patients following unilateral inguinal hernia repairs (UIHR).
Laparoscopic evaluation of the contralateral internal ring is a method of identifying high-risk individuals for prophylactic contralateral exploration and repair. Objective: The objective of this study was to assess variation in utilization of diagnostic laparoscopy, and report costs associated with the evaluation of a contralateral patent processus vaginalis during hernia repair in pediatric hospitals. Study Design: The Pediatric Health Information System database was searched to identify outpatient surgical encounters for pediatric patients with a diagnosis of inguinal hernia during a 1-year period (2014). Records were identified that contained diagnostic codes for unilateral or bilateral inguinal hernia in combination with a procedure code for open hernia repair with or without diagnostic laparoscopy. Results: After exclusions there were 3952 hernia repairs performed at 30 hospitals; median age was 4 years (IQR 1-7), 78.8% were male, and 64.9% were Caucasian. Three-quarters (76.7%) had UIHR, 8.6% had unilateral repairs with laparoscopy (UIHRL), 12.2% had bilateral inguinal hernia repairs (BIHR), and 2.4% had bilateral repairs with laparoscopy (BIHRL). Where laparoscopy was used, 78% resulted in a unilateral repair and 22% in a bilateral procedure. The percent of patients undergoing laparoscopy varied from 0 to 57% among hospitals, and 0-100% among surgeons. Pediatric surgeons were more than three times more likely to perform a diagnostic laparoscopy compared with pediatric urologists. Median adjusted costs were $2298 (IQR 1659-2955) for UIHR, $2713 (IQR 1873-3409) for UIHRL, $2752 (IQR 2230-3411) for BIHR, and $2783 (IQR 2233-3453) for BIHRL. Median costs varied over two-fold among hospitals ($1310-4434), and over four-fold among surgeons ($948-5040). Discussion: Data suggested that <10% of patients with clinically unilateral inguinal hernias developed MCH. After a negative diagnostic laparoscopy, only 0.9-1.31% developed MCH. However, up to 30% of patients underwent contralateral exploration/repair when diagnostic laparoscopy was used. The current study found increased costs associated with the use of laparoscopy, with considerable variation in costs among surgeons and hospitals. These data elucidate competing financial and clinical consequences associated with the use of diagnostic laparoscopy with clinically unilateral hernias. Conclusions: Variation existed in the use of laparoscopy during inguinal hernia repairs and associated costs within the current sample from children's hospitals in the United States. The additional costs of laparoscopic evaluation must be considered against the clinical utility and therapeutic consequences of identifying individuals with a higher risk of metachronous contralateral inguinal hernia.
The symptomatic side was explored in a conventional manner and laparoscopy was performed through the opened hernia sac. Results: A contralateral processus vaginalis was found in 6 children: 4 boys and 2 girls. These results were confirmed by exploring the opposite groin. We did not explore the contralateral side if the laparoscopic examination was within normal limits. There was one false-positive result in a female patient. Conclusions: Intraoperative non-puncture laparoscopy utilizing a flexible laparoscope through the hernia opening is an uncomplicated, reliable and precise method for identifying a patent contralateral processus vaginalis. It may represent a satisfactory alternative to routine bilateral inguinal exploration. Also, use of the flexible laparoscope may be more beneficial than use of a rigid laparoscope passed through the umbilicus or hernia sac. abstract_id: PUBMED:9916590 The value of intraoperative laparoscopic examination of the contralateral inguinal ring during hernia repair in children. Approximately 40% of children with a clinical unilateral inguinal hernia display a patent processus vaginalis on the contralateral side; half of these children subsequently develop an inguinal hernia. The management of this problem is still controversial. Different strategies to identify patients who profit from an open contralateral exploration have been applied (diagnostic pneumoperitoneum, herniography, ultrasound). The purpose of this study was to evaluate the usefulness of intraoperative laparoscopy in a no-puncture technique through the opened hernia sac. In 75 children (age 6 months to 7 years) with a clinical unilateral hernia, laparoscopy was performed (5-mm laparoscope, 30 and 70 degrees); in cases of a wide-open contralateral internal inguinal ring (Type III), an open surgical exploration was performed. Twelve patients (17%) fulfilled the laparoscopic criteria of a Type III ring. The diagnosis was confirmed during open contralateral exploration. Seven children (8%) showed a patent but small processus vaginalis with a shallow internal ring. These patients were treated conservatively. In the follow-up period (median 6 months) one subsequent hernia developed. There was no technical failure, and no associated complications were seen. The median time for laparoscopy was 6 minutes. Intraoperative laparoscopy during unilateral hernia repair allows the identification of patients who profit from bilateral open surgery during the same operation with little additional operating time, and so far with no resulting complications. Application of this method may avoid a second hospitalization and operation as well as unnecessary routine bilateral open exploration. As an additional source of information through excellent visualization of the abdominal cavity, this method proved helpful to examine the incarcerated bowel after repositioning. abstract_id: PUBMED:16362231 A continuous debate on contralateral processus vaginalis: evaluation technique and approach to patency. We present our experience with transinguinal diagnostic laparoscopy and discuss its efficacy in evaluating the contralateral side in unilateral inguinal hernias. The possible influence of the initial side of hernia, age and gender on recognized bilateralism was also evaluated in two study groups (diagnostic laparoscopy vs clinical diagnosis). In this retrospective study, we evaluated 36 bilateral, 158 left-sided and 303 right-sided consecutive inguinal hernia cases.
A total of 211 out of 461 unilateral cases underwent hernia repair and transinguinal laparoscopic evaluation of the contralateral side. Complications and difficulties of the technique, the mean duration of laparoscopy and operative times were additionally analyzed from operation charts. In this study, bilateralism was determined by transinguinal laparoscopy in Group 1 (children with a contralateral patent processus vaginalis) and clinically in Group 2 (those who had a metachronous hernia). The patients were also analyzed according to the side of the inguinal hernia, age and gender in both groups. We found an overall positive contralateral patency in 41 cases (19.4%). We failed to perform a successful diagnostic laparoscopy in six cases (2.7%). No anesthetic or surgical complications were noted. Transinguinal laparoscopy did not add any considerable time to the mean operation time. The incidence of bilateralism in Group 1 was higher in girls than in boys. Clinically detected bilateralism was significantly higher in the 0-6 month age group, and contralateral patency detected via laparoscopy was high in all other age groups. We may conclude that transinguinal diagnostic laparoscopy is a feasible technique in children. We advocate its use as a tool with minimal complication risk even in the hands of a novice. Answer: Based on the provided abstracts, there does not appear to be a significantly increased risk of complications associated with the use of laparoscopy to look for a contralateral patent processus vaginalis (CPPV). A retrospective review of patients who underwent unilateral inguinal hernia repair, with 1010 out of 1164 patients undergoing laparoscopic evaluation of the contralateral inguinal ring, reported no intraoperative complications from the laparoscopy. The rate of surgical site infection was low (1.0%) and was not increased by the use of contralateral exploration using laparoscopy (PUBMED:22870947). Another study investigating the incidence of CPPV using transinguinal laparoscopy (inguinoscopy) found no surgical complications during the inguinoscopy (PUBMED:25574209). Similarly, a long-term prospective analysis indicated that risk factors and laparoscopic findings failed to predict which children would develop a metachronous inguinal hernia (MIH), suggesting that routine exploration of the contralateral side by any methodology, including laparoscopy, may not be necessary (PUBMED:18631911). Additional studies support the safety and reliability of laparoscopic evaluation for CPPV. One study using flexible transinguinal laparoscopy reported no complications and found it to be an easy and efficacious method (PUBMED:9800892). Another study found that intraoperative laparoscopy using a flexible scope was an uncomplicated, reliable, and precise method for identifying a patent contralateral processus vaginalis (PUBMED:9876675). Furthermore, a study evaluating the use of laparoscopy in pediatric inguinal hernia repairs found variation in its use among hospitals and surgeons, with increased costs associated with laparoscopy, but did not specifically report an increased risk of complications (PUBMED:29195832). Another study concluded that intraoperative laparoscopy during unilateral hernia repair allows the identification of patients who may benefit from bilateral open surgery with little additional operating time and no resulting complications (PUBMED:9916590). In summary, the abstracts provided do not indicate an increased risk of complications with the use of laparoscopy for evaluating CPPV.
Instead, they suggest that laparoscopy is a safe and effective method for assessing the contralateral inguinal ring in children with unilateral inguinal hernias.
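The central safety comparison in PUBMED:22870947 (10 infections among 1010 laparoscopy patients vs. 1 among 154 controls, reported P=1.00) can be recomputed from the counts given in the abstract. A minimal sketch using the chi-squared test with Yates's correction, the method the abstract names for discrete variables, with Fisher's exact test added here as a common alternative for such small counts:

```python
# Re-deriving the surgical-site-infection comparison from PUBMED:22870947.
# Counts come directly from the abstract; the choice to also run Fisher's
# exact test is an addition for small expected cell counts.
from scipy.stats import chi2_contingency, fisher_exact

table = [[10, 1000],   # laparoscopy group: infected / not infected (10 of 1010)
         [1, 153]]     # no-laparoscopy group: infected / not infected (1 of 154)

chi2, p_yates, dof, expected = chi2_contingency(table, correction=True)
oddsratio, p_fisher = fisher_exact(table)

print(f"Yates-corrected chi-squared p = {p_yates:.2f}")  # ~1.00, as reported
print(f"Fisher's exact p = {p_fisher:.2f}")              # also non-significant
```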
Instruction: Is nurse-patient agreement of importance to cancer nurses' satisfaction with care? Abstracts: abstract_id: PUBMED:20423392 Is nurse-patient agreement of importance to cancer nurses' satisfaction with care? Aim: This paper is a report of a study of situational (nurse-patient agreement), personal and occupational factors of potential importance to oncology nurses' satisfaction with care provided and general work satisfaction. Background: Nurses have a general tendency to attribute to patients with cancer more problems and suffering than patients themselves report. However, little is known about whether dis/agreement between oncology nurses and patients with cancer concerning perceptions of patients' situation is of importance to nurses' satisfaction with their work. Methods: The study had a comparative and prospective design. Data were collected in 2005 using self-administered questionnaires with 81 consecutively recruited nurse-patient pairs. Data were analysed with non-parametric tests (for comparison between subgroups) and with multiple regression analyses (for identifying predictors). Results: Initial nurse-patient agreement concerning patients' emotional distress, coping resources and quality of life did not appear to be important to nurses' subsequent satisfaction with the care directed at a specific patient. However, higher satisfaction with care provided as well as general work satisfaction was reported by nurses with more experience of cancer care and with a lower workload. Conclusion: To improve oncology nurses' opportunities to provide high quality cancer care, novice nurses and advanced beginners in particular should receive support and nurses' working conditions must be improved. Further research is needed to examine whether there are other aspects of the nurse-patient relationship that contribute to oncology nurses' satisfaction with the care provided to specific patients. abstract_id: PUBMED:26071199 Nurse-led outpatient clinics in oncology care - Patient satisfaction, information and continuity of care. Purpose: The aims of the present study were to investigate patients' satisfaction with nurse-led clinics, patients' perception of received information and associations between continuity of care and satisfaction with information. Methods: Questionnaires on patient satisfaction were sent to consecutive samples of patients after they attended a nurse-led clinic at the Department of Oncology, Karolinska University Hospital in 2007, 2009, 2011 and 2013. Patients' perceptions of received information were evaluated in 2011 and 2013 using the EORTC QLQ-INFO25. Data on registered continuity of care were retrieved from the patients' medical records. Results: A total of 962 patients responded (79%) to one of the four surveys. Patients' satisfaction with nurse-led clinics was stable over time. More than 90% rated nurses' interpersonal manners and the care at the clinic as "good", the waiting time as "acceptable", and the length of appointments as "sufficient". Over 90% responded that it was important to meet the same nurse, and 62% reported they actually did so, and 52% stated they were assigned a named nurse navigator. More than 75% rated the information at their latest visit at a nurse-led clinic as "completely" sufficient. However, 48% expressed a wish for more information "during the current disease". No statistically significant associations were found between "satisfaction with information" and continuity of care.
Conclusions: Patients' satisfaction with nurse-led clinics was stable over time, with generally high figures, with the exception of continuity of care and information, areas in which improvements are needed. The wide variety in information needs might require a person-centred approach. abstract_id: PUBMED:37883314 Patient satisfaction of specialist nurse-led renal cancer follow-up. Background: For many patients with locally advanced renal cancer, management is surgery followed by surveillance imaging for several years. There are no concrete data regarding specialist nurse follow-up for patients after they have had surgery. Aims: The authors' aim was to assess patient satisfaction with specialist nurse follow-up for patients with renal cancer who have undergone surgery. Methods: A questionnaire was used to measure patient satisfaction with the specialist nurse consultation. One hundred patients who had been followed up in the nurse-led clinic between January 2020 and May 2021 were invited and contacted by phone to complete the questionnaire. Findings: The audit revealed that in terms of general satisfaction, 97.1% of patients felt totally satisfied with their visit/consultation at the specialist nurse clinic. Furthermore, 94.2% of patients felt that they had good continuity of care with specialist nurse follow-up. Conclusion: Nurse-led follow-up was found to have high levels of patient satisfaction. abstract_id: PUBMED:30543568 Factors Affecting the Levels of Satisfaction With Nurse-Patient Communication Among Oncology Patients. Background: Nurse-patient communication is of the utmost importance to oncology patients. Objective: The aim of this study was to investigate the factors influencing the level of satisfaction with nurse-patient communication among oncology patients during their daily routine procedures. Methods: In this observational study, 25 registered nurses and 94 patients were recruited from an oncology unit. The patients were asked to complete a concern checklist before and after each admission, administration of medication, and wound-dressing procedure. Nonverbal cues expressed by the nurses and patients were noted on the checklist during these nursing procedures. Results: The results showed that patients with stage 3 cancer and patients receiving admission procedures were significantly more satisfied with their communication with nurses compared with those who were more concerned about the impact of the disease on self-care. Conclusions: Cancer stage, time of admission, and psychological concern related to self-care ability are the potential factors affecting the satisfaction level of nurse-patient communication. Implications For Nursing: Emphasis could be placed on utilizing the time of admission to interact with patients in a busy environment to strive for better communication with oncology patients. The results underscore the importance of applying the self-care enhancement approach for hospitalized cancer patients. abstract_id: PUBMED:31328422 Impact of Structured Nurse Leader Rounds on Satisfaction With Nursing Care Among Patients With Cancer. Background: The patient satisfaction rate is considered a challenge for nurse leaders, especially among patients with cancer, due to the complexity of the disease, diagnostic procedures, and treatment. Aim: The purpose of this study was to evaluate the impact of structured nurse leader rounds (NLRs) on satisfaction with nursing care among patients with cancer.
Moreover, we assessed the relationship among NLRs, patient satisfaction, and demographic variables. Methods: A two-group posttest design was used in four adult inpatient oncology units in a specialized oncology center. A stratified random sampling technique was utilized to select 169 patients for the experimental group (80 patients) and control group (89 patients). Structured leader rounds were conducted by nurse leaders using a scripted nurse leader tool to standardize the rounds in the experimental group, while the unstructured leader round was conducted in the control group. Patient satisfaction was measured using the Patient Satisfaction with Nursing Care Quality Questionnaire. Results: The findings revealed that there was a significant difference in the total score of patients' satisfaction between study groups (t = -9.213, p < .001). Moreover, the structured leader round had a significant impact on the patient's experience with nurse concern and caring (t = -2.054, p = .042). Conclusions: Adoption of a structured NLR has a significant impact on improving patient satisfaction in an oncology setting where patients receive care from many disciplines, such as surgical, medical, radiotherapy, and radiology. Clinical Relevance: Efficient structured NLRs of nursing professionals are important for enhancing the quality of nursing care and patient satisfaction. abstract_id: PUBMED:15533085 Nurse-patient relationships in palliative care. Aim: The aim of this paper is to report a study exploring aspects of nurse-patient relationships in the context of palliative care. Background: Although there are numerous studies addressing nurse-patient relationships, little research has focused on these in the context of palliative care. Furthermore, no previous study has examined the relationship in the Chinese context. Methods: Qualitative data were collected from 10 hospice nurses and 10 terminally ill patients by means of open-ended, unstructured interviews. Respondents were asked to reflect on practices and incidents that would allow an understanding of the meaning of nurse-patient relationships in palliative care. Results: Four major categories emerged from the perspectives of patients and nurses: (1) forming a relationship of trust; (2) being part of the family; (3) refilling with fuel along the journey of living and dying; and (4) enriched experiences. Responses revealed that a relationship of trust is formed, and that nurses are not only regarded as health professionals, but also become part of the family or a good friend. Nurses who develop trusting relationships demonstrate a holistic approach to caring, show their understanding of patients' suffering, are aware of their unvoiced needs, provide comfort without actually being asked, and are reliable, proficient, competent and dedicated in their care. Conclusion: Trust, the achievement of the goals of patients and nurses, caring and reciprocity are important elements of nurse-patient relationships in palliative care. Such relationships not only improve patients' physical and emotional state, but also facilitate their adjustment to their illness, ease pain and can ultimately lead to a good death experience. It is nurses' personal qualities and skills, which are embedded in these relationships, that constitute excellence in nursing care. Nurses also derive satisfaction and are enriched through the relationships. abstract_id: PUBMED:19686271 Are cancer patients whose problems are overestimated by nurses less satisfied with their care?
The main aim of the present study was to investigate whether patient-nurse dis/agreement concerning cancer patients' situation was of importance to patients' satisfaction with care. Another aim was to describe cancer patients' satisfaction with care and to investigate its relationship to cancer patients' emotional distress. A consecutive sample of individual patient-nurse pairs (n = 82) was recruited and followed for 3 days. Each pair consisted of a cancer patient newly admitted to an oncological/haematological ward and a nurse responsible for that patient's care. The known phenomenon of nurse overestimation of cancer patients' problems did not appear to be of importance to patients' satisfaction with care. However, patients whose depressive problems were underestimated by nurses were significantly less satisfied with the care they received. Furthermore, anxious and depressed patients were less satisfied with some aspects of the care they received than were the remaining patients. Although the patients' ratings and experiences of received care indicated a high degree of satisfaction, the patients also expressed negative experiences of care. To improve the quality of cancer care, nurses need to improve their ability to identify cancer patients' emotional distress if they are to satisfy patients' needs. abstract_id: PUBMED:38016857 Compassion Fatigue Resiliency Program Effects on Oncology-Hematology Nurses' Professional Quality of Life, Stress Levels, and Patients' Care Satisfaction: Nurse, Nurse Manager, and Patient Perspectives, a Mixed-Methods Study. Objectives: This study aimed to evaluate the effects of the Compassion Fatigue Resiliency Program applied to oncology-hematology nurses on the professional quality of life and stress levels of nurses, on the satisfaction of cancer patients, and on the perspectives of nurse managers. Data Sources: An experimental embedded mixed-methods design was conducted between December 20, 2022, and February 20, 2023. The study included 15 oncology-hematology nurses, 19 cancer patients, and 6 nurse managers. Qualitative interviews were conducted with patients and pre-tests were applied to patients and nurses. The Compassion Fatigue Resiliency Program was implemented for the nurses. Then qualitative interviews were repeated with the same patients; focus group interviews were conducted with the nurse managers; post-tests were applied to patients and nurses who participated in the training; and narrative feedback was collected. Quantitative data analysis was carried out using the Wilcoxon signed-rank test. Qualitative data were processed using an inductive approach to thematic analysis. Conclusion: Compassion satisfaction decreased after the Compassion Fatigue Resiliency Program. Qualitative results showed that the training program improved nurses' effective communication skills and ability to cope with stress. The program improved nurses' approach to patients and communication, and patients' care satisfaction levels increased. Implications For Nursing Practice: The Compassion Fatigue Resiliency Program should be included in hospital in-service training programs and implemented and sustained. With this, compassion fatigue of nurses can be reduced or prevented, which will help to mitigate the nurse shortages that already exist and are increasing. abstract_id: PUBMED:34533519 Nurse Attitudes: A Descriptive Study of Families' Importance in Inpatient Nursing Care.
Background: Family support and patient outcomes are linked to nurses' attitudes toward families' importance in patient care. Objectives: The purpose of this study was to characterize inpatient oncology nurses' attitudes toward families' importance in nursing care and determine characteristics related to these attitudes. Methods: A descriptive, cross-sectional design was used with a single set of measurements from inpatient oncology nurses at a comprehensive cancer center in the southeastern United States. Nurse characteristics were summarized using frequencies and percentages. Time variables were summarized using medians and interquartile ranges. There were five primary outcomes. Findings: Nurses considered the role of the family important, but the level varied by FINC-NA item. Characteristics related to family importance overall included hospital unit and general approach to care. abstract_id: PUBMED:28605117 Patient Perspectives on Nurse Practitioner Care in Oncology in Canada. Purpose: The purpose of this study was to add to what is known about patient satisfaction with nurse practitioner (NP) care, from the perspective of breast cancer patients who were followed by an NP. Methods And Design: This study utilized Interpretive Description, a qualitative method aimed at making sense of the experiential aspects of health care and developing practical knowledge for improved care. Nine patients receiving NP-led care in an outpatient breast cancer clinic were interviewed about their perspectives on and experiences with NP-led care. Interview transcripts were thematically analyzed. Findings: The NP role has long been regarded as a way of addressing many contemporary health system problems, although there continue to be barriers to the effective utilization of the role, including public and patient misunderstandings. This study revealed that, despite persistent traditional role understandings about health professionals, the patient participants appreciated the benefits of NP care and were highly satisfied with both the physical care and holistic support they received during the course of their treatment. Conclusions And Clinical Relevance: Today's healthcare system is characterized by accessibility issues, unmet patient need, workforce issues, and funding pressures. This research supports and enriches what is known about the benefits and usefulness of NP-provided care from the viewpoint of those receiving the care. The findings offer guidance to NPs in the clinical setting regarding patient needs and optimal care strategies. Answer: Nurse-patient agreement does not appear to be a significant factor in oncology nurses' satisfaction with the care provided to specific patients. A study with a comparative and prospective design involving 81 nurse-patient pairs found that initial nurse-patient agreement concerning patients' emotional distress, coping resources, and quality of life was not important to nurses' subsequent satisfaction with the care directed at a specific patient (PUBMED:20423392). Instead, factors such as nurses' experience in cancer care and workload were more influential on their satisfaction with care provided and general work satisfaction. Therefore, it seems that other aspects of the nurse-patient relationship or working conditions may contribute more to oncology nurses' satisfaction with the care they provide than the level of agreement with patients on their condition or needs.
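Several of the studies above rest on two-group comparisons of satisfaction scores, for example the structured vs. unstructured nurse leader rounds in PUBMED:31328422 (80 vs. 89 patients, independent-samples t-test). A minimal sketch of that style of analysis is shown below; the score arrays are entirely hypothetical, since patient-level data are not published in the abstracts:

```python
# Sketch of a two-group posttest comparison (cf. PUBMED:31328422).
# The satisfaction scores below are hypothetical, generated only to
# illustrate the test; they are not the study's data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
structured = rng.normal(loc=85, scale=8, size=80)    # hypothetical experimental group
unstructured = rng.normal(loc=75, scale=8, size=89)  # hypothetical control group

# Passing the control group first yields a negative t when its mean is lower,
# mirroring the sign convention of the reported statistic.
t_stat, p_value = ttest_ind(unstructured, structured)
print(f"t = {t_stat:.3f}, p = {p_value:.3g}")
```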
Instruction: The darkening cloud of diabetes: do trends in cardiovascular risk management provide a silver lining? Abstracts: abstract_id: PUBMED:18840767 The darkening cloud of diabetes: do trends in cardiovascular risk management provide a silver lining? Objective: We aimed to evaluate the changes in cardiovascular-related health care utilization (drug therapies, hospitalizations) and mortality for the diabetic population during a 9-year period in Saskatchewan, Canada. Research Design And Methods: We identified annual diabetes prevalence rates for people aged ≥30 years between 1993 and 2001 from the administrative databases of Saskatchewan Health. Annual rates of evidence-based drug therapies (antihypertensives, ACE inhibitors, beta-blockers, calcium channel blockers, 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors [statins]), hospitalizations for cerebrovascular and cardiac events, and all-cause mortality were estimated. Rates were directly age- and sex-standardized using the 2001 Canadian population, and trends over time were assessed using Joinpoint regression. Results: From 1993 to 2001, diabetes prevalence increased 34% (4.7-6.5%, P < 0.001) with the highest rates in men and those aged ≥65 years. The rate of increase in diabetes prevalence appeared to slow in those aged <65 years (P < 0.01 for trend). Significantly increased use of evidence-based drug therapies was observed (41% increase in antihypertensive agents, 97% increase in ACE inhibitors, 223% increase in statin therapies; all P < 0.05 for trend). During this period, both cerebrovascular and cardiac-related hospitalizations declined by 36% (9.5 vs. 6.1 per 1,000) and 19% (38.0 vs. 30.6 per 1,000) (P < 0.05 for trends), respectively, with similar reductions regardless of sex. No change in all-cause mortality was observed (17.7 vs. 17.8 deaths per 1,000; P > 0.05). Conclusions: During our period of study, there was an increase in the utilization of evidence-based drug therapies in people with diabetes and reductions in cardiovascular-related hospitalizations. Despite this, we observed no change in all-cause mortality. abstract_id: PUBMED:20470469 Cardiovascular risk management in the hypertensive patient Hypertension is a major cardiovascular risk factor. The treatment of the individual with hypertension aims to reduce cardiovascular mortality and morbidity. With this aim, the management of hypertension goes beyond arterial pressure lowering and involves the treatment of thrombotic risk with antiplatelet therapy, treatment of hyperlipidemia with lifestyle measures and statins, and diabetes prevention with lifestyle measures. For the hypertensive patient with concomitant cardiovascular disease or diabetes, pharmacological treatment is consensual. However, in primary prevention the institution of drug therapy must be judicious to guarantee that benefits surpass costs. abstract_id: PUBMED:18073451 Cardiovascular risk and risk management in HIV-infected patients. Patients with HIV infection are at risk of cardiovascular disease from the same factors posing risk in the general population, eg, smoking, dyslipidemia, hypertension, obesity, and diabetes. HIV infection itself and antiretroviral therapy pose additional risk, but available data indicate that the relative rate of myocardial infarction is low and declining in the HIV-infected population.
Cardiovascular risk should be addressed before initiation of antiretroviral therapy and frequently during follow-up, and decisions to alter therapy on the basis of adverse changes in metabolic risk factors should be made on an individual basis. Virologic control is the primary goal for HIV-infected persons with cardiovascular risk, and is the primary consideration in determining when to start antiretroviral therapy and when to change regimens. This article summarizes a presentation on cardiovascular risk and risk management in HIV-infected persons made by Oluwatoyin Adeyemi, MD, at an International AIDS Society-USA Continuing Medical Education course in Chicago in May 2007. abstract_id: PUBMED:23904862 Management of cardiovascular risk in patients with rheumatoid arthritis: evidence and expert opinion. The risk of cardiovascular morbidity and mortality is increased in rheumatoid arthritis. The classical cardiovascular risk factors, including smoking, hypertension, dyslipidaemia, insulin resistance and diabetes mellitus, obesity and physical inactivity do not appear to explain the excess cardiovascular risk in rheumatoid arthritis, although they do contribute, albeit in a different way or to a lesser extent, to rheumatoid arthritis in comparison with the general population. A very important link between rheumatoid arthritis and cardiovascular disease is inflammation as it plays a key role in all stages of atherosclerosis: from endothelial dysfunction to plaque rupture and thrombosis. It also has an influence on and accentuates some traditional cardiovascular risk factors, such as dyslipidaemia, obesity and insulin resistance. To date, the exact pathophysiologic mechanism by which this relation between cardiovascular disease and rheumatoid arthritis can be explained is not completely clear. Cardiovascular risk management in rheumatoid arthritis is mandatory. Unfortunately, the way this should be done remains a point of discussion. In this review, issues regarding cardiovascular risk in rheumatoid arthritis and its management will be addressed, according to evidence presented in the latest studies and our own experience-based opinion. abstract_id: PUBMED:35800190 A Review of National Level Guidelines for Risk Management of Cardiovascular and Diabetic Disease. Cardiovascular diseases and diabetes are among the leading preventable causes of morbidity and mortality. Cardiovascular disease risk reduction aims to address the significant modifiable risk factors, including lifestyle-related risk factors, hypertension, hyperlipidemia, and diabetes. Given the severity and disease burden, many insurers, including Medicare, cover annual counseling for risk reduction of cardiovascular disease. Although numerous national-level guidelines are available for managing these conditions, most of them focus on disease management. Given the broad areas covered in these recommendations, a concise review summarizing the measures addressing the preventive approach in these conditions is not readily available. Herewith, we review and outline the currently available guidelines from national-level publications with principal attention to the primary prevention measures to provide a broad overview and assist providers with risk reduction counseling. abstract_id: PUBMED:30981542 Standards for global cardiovascular risk management arteriosclerosis. One of the main goals of the Spanish Society of Arteriosclerosis is to contribute to a wider and greater knowledge of vascular disease, its prevention and treatment.
Cardiovascular diseases are the leading cause of death in our country and also lead to a high degree of disability and health expenditure. Arteriosclerosis is a multifactorial disease, which is why its prevention requires a global approach that takes into account the different risk factors with which it is associated. Thus, this document summarizes the current level of knowledge and integrates recommendations and procedures to be followed for patients with established cardiovascular disease or high vascular risk. Specifically, this document reviews the main symptoms and signs to be evaluated during the clinical visit, the laboratory and imaging procedures to be routinely requested or those in special situations. It also includes the estimation of vascular risk, the diagnostic criteria of the different entities that are cardiovascular risk factors, and presents general and specific recommendations for the treatment of the different cardiovascular risk factors and their final objectives. Finally, the document includes aspects that are not often mentioned in the literature, such as the organisation of a vascular risk consultation. abstract_id: PUBMED:34352986 Management of Cardiovascular Risk in Perimenopausal Women with Diabetes. Cardiovascular disease is the primary cause of mortality in women and men with diabetes. Due to age and worsening of risk factors over the menopausal transition, risk of coronary heart disease events increases in postmenopausal women with diabetes. Randomized studies have conflicted regarding the beneficial impact of estrogen therapy upon intermediate cardiovascular disease markers and events. Therefore, estrogen therapy is not currently recommended for indications other than symptom management. However, for women at low risk of adverse events, estrogen therapy can be used to minimize menopausal symptoms. The risk of adverse events can be estimated using risk engines for the calculation of cardiovascular risk and breast cancer risk in conjunction with screening tools such as mammography. Use of estrogen therapy, statins, and anti-platelet agents can be guided by such calculators, particularly for younger women with diabetes. Risk management remains focused upon lifestyle behaviors and achieving optimal levels of cardiovascular risk factors, including lipids, glucose, and blood pressure. Use of pharmacologic therapies to address these risk factors, particularly specific hypoglycemic agents, may provide some additional benefit for risk prevention. The minimal benefit for women with limited life expectancy and risk of complications with intensive therapy should also be considered. abstract_id: PUBMED:23693011 Branch retinal vein occlusion: high time for cardiovascular risk management Cardiovascular risk management is common in patients suffering from manifest cardiovascular disease, hypertension, hyperlipidaemia and diabetes mellitus. It is generally accepted that medication is the most effective treatment for reducing cardiovascular morbidity and mortality in these patients. Remarkably, cardiovascular risk management is rare in patients suffering from branch retinal vein occlusion. This common retinal vascular disorder in middle-aged and elderly people occurs where a thickened retinal artery compresses the affected vein. Although thrombosis is involved, procoagulant disorders are only present in selected cases.
On the other hand, prior diabetes, hypertension, and peripheral artery disease are associated with an increased risk of developing branch retinal vein occlusion up to a decade later, and retinal vascular disorder is associated with an increased risk of subsequently developing hypertension, diabetes, congestive heart failure, and cerebrovascular disease. Therefore, branch retinal vein occlusion is a marker of developing cardiovascular disease and warrants adequate cardiovascular risk management. abstract_id: PUBMED:34782254 Cardiovascular risk management in type-2 diabetes: Is the patients' cardiovascular risk adequately considered in type-2 specialist diabetes care? Introduction And Objectives: More than 50% of type-2 diabetes (T2DM) mortality is due to cardiovascular disease. Current treatment guidelines recommend an increasingly differentiated and comprehensive management of cardiovascular risk factors. This study aims to measure the extent to which T2DM care is currently adjusted for cardiovascular risks in clinical practice. Methods: This observational study included 123 T2DM patients of nine outpatient diabetology specialist clinics in Southern Germany. Guideline adherence was measured based on selected aspects of the joint guideline of the European Society of Cardiology (ESC) and the European Society for the Study of Diabetes (EASD). The proportion of patients with treatment target adherence, monitoring adherence, and therapy adherence for the areas of blood pressure and lipid metabolism management was determined, and the correlation between adherence and treatment outcome was assessed by comparing treatment target achievement rates between the group that received guideline-adherent care and the group that did not. Results: The combined adherence rates were 39% for blood pressure management and 10% for lipid management. 70% of the participants with adherent blood pressure management and 56% with non-adherent blood pressure management achieved the blood pressure target (p=0.165). 50% of the patients with guideline-adherent lipid management and 17% with non-guideline-adherent lipid management achieved the LDL cholesterol target (p=0.032). Discussion: Less than half of the study population received care that was adequately adjusted for cardiovascular risks. Participants who received risk-adjusted care achieved their treatment target for blood pressure and LDL cholesterol more often. In order to validate methods and results, the study should be repeated with routine care data from a larger study population. Conclusion: Guideline-adherent care pays off: cardiovascular risk-adjusted care, especially following the more differentiated ESC/EASD guideline 2019, increases the chance for T2DM patients to avoid or delay cardiovascular complications. abstract_id: PUBMED:26879568 High-grade carotid artery stenosis: A forgotten area in cardiovascular risk management. Background: Patients with high-grade (≥70%) carotid artery stenosis (CAS) rank in the highest risk category for future cardiovascular (CV) events, but the quality of cardiovascular risk management in this patient group is unknown. Design: Cross-sectional retrospective study. Methods: Data were collected for all patients diagnosed with high-grade CAS in Östergötland county, Sweden between 1 January 2009 and 31 July 2012 regarding the quality of cardiovascular risk management, co-morbidity and outcomes during the 2-year follow-up period after a diagnosis of CAS with a carotid ultrasound scan.
Patients were included regardless of whether they underwent carotid endarterectomy (CEA). Results: A total of 393 patients with CAS were included in the study; 133 (33.8%) underwent CEA and 260 (66.2%) were assigned to a conservative management (CM) group. In both groups of patients the prescription of platelet inhibitors, statins and antihypertensive drugs increased significantly (p < 0.001) after diagnosis. However, treatment targets were not met in the majority of patients, and the low-density lipoprotein level was on target in only 13.5% of patients. During follow-up, low-density lipoprotein levels were not measured in 19.8% of patients who underwent CEA and 44.2% of patients in the CM group (p < 0.001); HbA1c was not measured in 24.4% of patients with diabetes in the CEA group and in 18.8% of patients in the CM group (p = 0.560). There was no documentation of counselling on diet, exercise, smoking cessation or adherence to medication. The combined clinical event rate (all-cause mortality, cardiovascular mortality and non-fatal cardiovascular events) was high in both groups (CEA 36.8% and CM 36.9%; p = 1.00) with no difference in the occurrence of ipsilateral ischaemic stroke. Conclusions: The clinical event rate was high in patients with high-grade CAS and the management of cardiovascular risk was deficient in all aspects. Answer: The study conducted in Saskatchewan, Canada, from 1993 to 2001 observed a 34% increase in diabetes prevalence, with the highest rates in men and those aged ≥65 years. Alongside this rise, there was a significant increase in the use of evidence-based drug therapies, such as antihypertensives, ACE inhibitors, and statins, which led to a decline in cardiovascular-related hospitalizations for both cerebrovascular and cardiac events by 36% and 19%, respectively. However, no change in all-cause mortality was observed during this period (PUBMED:18840767). This suggests that while the prevalence of diabetes, a major cardiovascular risk factor, is increasing, the trends in cardiovascular risk management, particularly the increased use of evidence-based drug therapies, may provide a silver lining by reducing the rate of cardiovascular-related hospitalizations.
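The Saskatchewan analysis in this record (PUBMED:18840767) reports rates that were directly age- and sex-standardized to the 2001 Canadian population. For readers unfamiliar with that step, the following is a minimal sketch of direct standardization; every count and rate below is a hypothetical stand-in, not data from the study.

```python
# Minimal sketch of direct age- and sex-standardization, the step used in
# the Saskatchewan study (PUBMED:18840767). All numbers are hypothetical;
# the study used the 2001 Canadian population as the standard.

# Stratum-specific event rates in the study population, per 1,000
study_rates = {
    ("30-64", "M"): 25.0,
    ("30-64", "F"): 18.0,
    ("65+", "M"): 60.0,
    ("65+", "F"): 48.0,
}

# Standard population sizes per stratum (hypothetical)
standard_pop = {
    ("30-64", "M"): 6_500_000,
    ("30-64", "F"): 6_600_000,
    ("65+", "M"): 1_600_000,
    ("65+", "F"): 2_100_000,
}

total = sum(standard_pop.values())

# Weight each stratum rate by the standard population's share of that stratum
standardized_rate = sum(
    rate * standard_pop[stratum] / total for stratum, rate in study_rates.items()
)
print(f"Directly standardized rate: {standardized_rate:.1f} per 1,000")
```

Comparing rates standardized this way removes differences that are due only to shifts in the age-sex mix of the population, which is what makes the abstract's year-over-year comparisons meaningful.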
Instruction: Is obesity an inflammatory disease? Abstracts: abstract_id: PUBMED:28256934 Clinical impact of exercise in patients with peripheral arterial disease. Increasing prevalence, high morbidity and mortality, and decreased health-related quality of life are hallmarks of peripheral arterial disease. About one-third of peripheral arterial disease patients have intermittent claudication with deleterious effects on everyday activities, such as walking. Exercise training improves peripheral arterial disease symptoms and is recommended as first-line therapy for peripheral arterial disease. This review examines the effects of exercise training beyond improvements in walking distance, namely on vascular function, parameters of inflammation, activated hemostasis and oxidative stress, and quality of life. Exercise training not only increases walking distance and physiologic parameters in patients with peripheral arterial disease, but also improves the cardiovascular risk profile by helping patients achieve better control of hypertension, hyperglycemia, obesity and dyslipidemia, thus further reducing cardiovascular risk and the prevalence of coexistent atherosclerotic diseases. American guidelines suggest supervised exercise training, performed for a minimum of 30-45 min, at least three times per week, for at least 12 weeks. Walking is the most studied exercise modality and its efficacy in improving cardiovascular parameters in patients with peripheral arterial disease has been extensively proven. As studies have shown that supervised exercise training improves walking performance, cardiovascular parameters and quality of life in patients with peripheral arterial disease, it should be encouraged and more often prescribed. abstract_id: PUBMED:32095935 Effects of green tea extract supplementation and endurance training on irisin, pro-inflammatory cytokines, and adiponectin concentrations in overweight middle-aged men. Purpose: Green tea extract (GTE) supplementation has been proposed to possess anti-inflammatory properties. This study assessed the effects of GTE on endurance training (ET)-induced changes in irisin, pro-inflammatory cytokines, adiponectin and anthropometric indices in overweight middle-aged males. Methods: Participants were randomly assigned to three groups (n = 15): endurance training + placebo (ET + P), endurance training + green tea extract supplementation (ET + GTE), and no endurance training + placebo (P). The ET intervention consisted of an 8-week training program that included circuit training, fast walking or jogging performed three times/week at a moderate intensity (40-59% of the heart rate reserve). Participants received 500 mg/day GTE using a green tea capsule. Serum concentrations of interleukin 6 (IL-6), tumor necrosis factor-alpha (TNF-α), irisin, adiponectin, and high-sensitivity C-reactive protein (hs-CRP) were measured prior to and after the 8-week training intervention. Results: Both exercise interventions decreased IL-6 and hs-CRP (p < 0.05), and increased adiponectin (p < 0.01) levels; changes in these variables were greater in the ET + GTE group compared to the ET + P and P groups (p < 0.01). Irisin concentrations increased only in the ET + GTE group and were different from the ET + P and P groups (p < 0.01). There were no changes in TNF-α concentrations in any of the groups.
Both exercise interventions (ET + GTE and ET + P) decreased bodyweight, body mass index (BMI), body fat percentage (BFP), and visceral fat area (VFA) (p < 0.05), with greater changes in these variables occurring in the ET + GTE group compared to ET + P and P groups (p < 0.01). Conclusion: The combination of GTE supplementation and ET produces beneficial anti-inflammatory and metabolic effects, which were greater than those produced by ET alone. abstract_id: PUBMED:26416876 Hemodynamic Correlates of Abnormal Aortic Root Dimension in an Adult Population: The Strong Heart Study. Background: We evaluated the relationship of aortic root dimension (ARD) with flow output and both peripheral and central blood pressure, using multivariable equations predicting ideal sex-specific ARD at a given age and body height. Methods And Results: We measured echocardiographic diastolic ARD at the sinuses of Valsalva in 3160 adults (aged 42±16 years, 61% women) from the fourth examination of the Strong Heart Study who were free of prevalent coronary heart disease, and we compared measured data with the theoretically predicted value to calculate a z score. Central blood pressure was estimated by applanation tonometry of the radial artery in 2319 participants. ARD z scores were divided into tertiles representing small, normal, and large ARD. Participants with large ARD exhibited greater prevalence of central obesity and higher levels of inflammatory markers and lipids (P values from <0.05 to <0.0001). Stroke volume, heart rate, and both cuff and central diastolic blood pressure were progressively greater from small to large ARD (all P<0.0001). Pulse pressure was higher in small ARD (P<0.0001). In multivariable analysis, ARD z score was related positively to stroke volume, either cuff or central diastolic blood pressure, and negatively to pulse pressure. Large ARD was also independently correlated to higher waist circumference and percentages of neutrophils and plasminogen activator inhibitor-1 (all P<0.01). Conclusions: Aortic root dilatation is associated with high diastolic blood pressure, high stroke volume, central fat distribution, and inflammatory status. In contrast, at a given diastolic blood pressure and stroke volume, aortic root dilatation is associated with lower pulse pressure and systolic blood pressure. abstract_id: PUBMED:35641589 β-elemene regulates M1-M2 macrophage balance through the ERK/JNK/P38 MAPK signaling pathway. Macrophages are classified into classically activated M1 macrophages and alternatively activated M2 macrophages, and the two phenotypes of macrophages are present during the development of various chronic diseases, including obesity-induced inflammation. In the present study, β-elemene, which is contained in various plant substances, is predicted to treat high-fat diet (HFD)-induced macrophage dysfunction based on the Gene Expression Omnibus (GEO) database and experimental validation. β-elemene impacts the imbalance of M1-M2 macrophages by regulating pro-inflammatory cytokines in mouse white adipose tissue both in vitro and in vivo. In addition, the RAW 264 cell line, which consists of macrophages derived from mouse ascites, is used to identify the effects of β-elemene on inhibiting bacterial endotoxin lipopolysaccharide (LPS)-induced phosphorylation of mitogen-activated protein kinase (MAPK) pathways. These pathways both induce and are activated by pro-inflammatory cytokines, and they also participate in the process of obesity-induced inflammation.
The results highlight that β-elemene may represent a possible macrophage-mediated therapeutic agent. abstract_id: PUBMED:22869041 Metabolic, hormonal and immunological associations with global DNA methylation among postmenopausal women. DNA methylation is an epigenetic modification essential for the regulation of gene expression that has been implicated in many diseases, including cancer. Few studies have investigated the wide range of potential predictors of global DNA methylation, including biomarkers. Here, we investigated associations between DNA methylation and dietary factors, sex-steroid hormones, metabolic, lipid, inflammation, immune and one-carbon biomarkers. Data and baseline biomarker measurements were obtained from 173 overweight/obese postmenopausal women. Global DNA methylation in lymphocyte DNA was measured using the pyrosequencing assay for LINE-1 repeats. We used correlations and linear regression analyses to investigate associations between continuous data and DNA methylation, while t-tests were used for categorical data. Secondary analyses stratified by serum folate levels and multivitamin use were also conducted. There was little variability in LINE-1 methylation (66.3-79.5%). Mean LINE-1 methylation was significantly higher among women with elevated glucose levels. Mean LINE-1 methylation was also higher among women with high CD4+/CD8+ ratio, and lower among women with elevated vitamin B6, but neither reached statistical significance. In analyses stratified by folate status, DNA methylation was negatively associated with sex hormone concentrations (estrone, estradiol, testosterone and sex hormone binding globulin) among women with low serum folate levels (n = 53). Conversely, among women with high serum folate levels (n = 53), DNA methylation was positively associated with several immune markers (CD4/CD8 ratio, NK1656/lymphocytes and IgA). Results from this screening suggest that global DNA methylation is generally stable, with differential associations for sex hormones and immune markers depending on one-carbon status. abstract_id: PUBMED:23803416 A prospective analysis of body size during childhood, adolescence, and adulthood and risk of non-Hodgkin lymphoma. The etiology of non-Hodgkin lymphoma (NHL) is poorly understood. Obesity is associated with inflammation, a cytokine milieu conducive to lymphocyte proliferation, and has been associated with NHL risk in some epidemiologic studies. To prospectively examine NHL risk in relation to adult and earlier life obesity, we documented 635 incident NHL diagnoses among 46,390 men in the Health Professionals Follow-up Study and 1,254 diagnoses among 116,794 women in the Nurses' Health Study over 22 to 32 years of follow-up. Using multivariable Cox proportional hazards models, we estimated cohort-specific incidence rate ratios (RR) and 95% confidence intervals (CI) for risk of NHL and major histologic subtypes associated with cumulative average middle and young adult (ages, 18-21 years) body mass index (BMI) and adolescent and childhood somatotype. NHL risk was modestly increased in men (but not women) with a cumulative average middle adult BMI ≥ 30 kg/m² (vs. 15-22.9 kg/m²; RR, 1.28; 95% CI, 0.92-1.77; Ptrend = 0.05). In meta-analyses across cohorts, higher young adult BMI was associated with increased risk of all NHL (pooled RR per 5 kg/m², 1.19; 95% CI, 1.05-1.37), diffuse large B-cell lymphoma (DLBCL), and follicular lymphoma (all Ptrend ≤ 0.02).
Adolescent somatotype was also positively associated with all NHL, DLBCL, and follicular lymphoma in pooled analyses (all Ptrend ≤ 0.03), whereas childhood somatotype was positively associated with NHL overall among women only (Ptrend < 0.01). These findings in two large prospective cohorts provide novel evidence that larger body size in childhood, adolescence, and young adulthood predicts increased risk of NHL, and particularly of DLBCL and follicular lymphoma. abstract_id: PUBMED:30505291 Particulate Matter Air Pollution: Effects on the Cardiovascular System. Air pollution is a complex mixture of gaseous and particulate components, each of which has detrimental effects on human health. While the composition of air pollution varies greatly depending on the source, studies from across the world have consistently shown that air pollution is an important modifiable risk factor for significantly increased morbidity and mortality. Moreover, clinical studies have generally shown a greater impact of particulate matter (PM) air pollution on health than the gaseous components. PM has wide-ranging deleterious effects on human health, particularly on the cardiovascular system. Both acute and chronic exposure to PM air pollution is associated with increased risk of death from cardiovascular diseases including ischemic heart disease, heart failure, and ischemic/thrombotic stroke. Particulate matter has also been shown to be an important endocrine disrupter, contributing to the development of metabolic diseases such as obesity and diabetes mellitus, which themselves are risk factors for cardiovascular disease. While the epidemiological evidence for the deleterious effects of PM air pollution on health is increasingly accepted, newer studies are shedding light on the mechanisms by which PM exerts its toxic effects. A greater understanding of how PM exerts toxic effects on human health is required in order to prevent and minimize the deleterious health effects of this ubiquitous environmental hazard. Air pollution is a growing public health problem and mortality due to air pollution is expected to double by 2050. Here, we review the epidemiological evidence for the cardiovascular effects of PM exposure and discuss current understanding of the biological mechanisms by which PM exerts toxic effects on the cardiovascular system to induce cardiovascular disease. abstract_id: PUBMED:32564738 Interleukin-4 Restores Insulin Sensitivity in Insulin-Resistant Osteoblasts by Increasing the Expression of Insulin Receptor Substrate 1. Obesity and latent inflammation can give rise to insulin resistance and type 2 diabetes. Here we established an insulin resistance model of osteoblasts to explore the restoration effect of anti-inflammatory interleukin-4 (IL-4) on insulin sensitivity and its mechanism. We found that IL-4 inhibited cell proliferation in a concentration- and time-dependent manner. Insulin resistance significantly reduced the phosphorylation levels of the insulin receptor substrate 1 (IRS1; Tyr612), Akt (Ser473), and AS160 (Ser318) proteins. The addition of IL-4 to the insulin resistance model led to a dose-dependent stimulation of the phosphorylation of IRS1, Akt, and AS160. IL-4 fully restored the activation of the insulin cascade in insulin-resistant cells at the concentration of 50 ng/ml. Additionally, IL-4 promoted the expression of IRS1 in a time-dependent manner. We conjecture that IL-4 restores insulin sensitivity in osteoblasts by upregulating the expression of IRS1.
It was also found that IL-4 promoted the expression of osteoprotegerin depending on the time of exposure. This effect may play an important role in the regulation of the energy metabolism in the whole body. abstract_id: PUBMED:28513571 Fat, Sugar, and Bone Health: A Complex Relationship. As the population ages, osteoporosis is expected to increase notably. Nutritional status is a relatively easily modified risk factor, associated with many chronic diseases, and is involved in obesity, diabetes, and coronary heart disease (CHD), along with osteoporosis. Nutrients, such as fats, sugars, and proteins, play a primary function in bone metabolism and maintaining bone health. In Western nations, diets are generally high in saturated fats; in China, by contrast, the currently dominant nutritional patterns remain high in carbohydrates from starch, cereals, and sugars. Moreover, high fat or high sugar (fructose, glucose, or sucrose) intake has a significant impact on bone structural integrity. Because diet is modifiable, demonstrating the effects of nutrition on bone health can provide an approach for osteoporosis prevention. Most researchers have reported that high-fat diet consumption is associated with bone mineral density (BMD) and that, as bone strength diminishes, adverse microstructural changes occur in the cancellous bone compartment, involving disordered lipid metabolism, alteration of the bone marrow environment, and an increased inflammatory environment. Some studies, however, demonstrated that a high-fat diet contributes to achieving peak bone mass, along with microstructure, at a younger age. Contrary to these results, others have shown that high-fructose diet consumption leads to stronger bones with a superior microarchitecture compared with a high-glucose diet and, at the same time, research indicated that a high-fat diet usually deteriorates cancellous bone parameters, and that the incorporation of fructose into a high-fat diet did not aggravate bone mass loss. High-fat/high-sucrose diets have shown both beneficial and detrimental influences on bone metabolism. Combined, these studies showed that nutrition exerts different effects on bone health. Thus, a better understanding of the relationship between dietary nutrition and bone health might provide a basis for the development of strategies to improve bone health by modifying nutritional components. abstract_id: PUBMED:18025411 In vivo effects of insulin and free fatty acids on matrix metalloproteinases in rat aorta. Objective: Obesity is associated with insulin resistance, hyperinsulinemia, elevated plasma free fatty acid (FFA), and increased risk for atherosclerotic vascular disease (ASVD). A part of this increased risk may be due to enhanced activation of matrix metalloproteinases (MMPs). Here, we have examined the effects of physiologically elevated levels of insulin and FFA on three MMPs and their inhibitors (tissue inhibitors of MMP [TIMPs]) in aortic tissue of male rats during euglycemic-hyperinsulinemic clamping. Research Design And Methods: Four-hour euglycemic-hyperinsulinemic clamps with infusion of saline/glycerol, lipid/heparin, or insulin with or without lipid/heparin were performed in alert unrestrained male rats.
Results: Hyperinsulinemia increased MMP-2 (~6-fold), MMP-9 (~13-fold), membrane type 1-MMP (MT1-MMP; ~8-fold) (all Western blots), and gelatinolytic activity (zymography) of MMP-2 (2-fold), while not affecting TIMP-1 and TIMP-2. Insulin increased IRS-1-associated PI 3-kinase (PI3K), extracellular signal-regulated kinases 1/2 (ERK1/2), and c-jun NH2-terminal kinase (JNK) (by Western blots with phospho-specific antibodies). FFA augmented the insulin-mediated increases in MMP-2 (from ~6- to ~11-fold), MMP-9 (from ~3- to ~23-fold), MT1-MMP (from ~8- to ~20-fold), MMP-2 gelatinolytic activity (from 2- to 3-fold), and JNK and p38 mitogen-activated protein kinase (MAPK) activities but decreased insulin-mediated activation of PI3K and ERK1/2. Raising FFA without raising insulin affected neither MMPs nor TIMPs. Conclusions: FFA augmented insulin stimulation of the MMP/TIMP balance of three proatherogenic MMPs and increased activities of two MAPKs (JNK and p38 MAPK), both of which are known to stimulate the production of proinflammatory cytokines. This may, over time, increase degradation of extracellular matrix and together with inflammatory changes promote development of ASVD. Answer: Yes, obesity is considered an inflammatory disease. This is supported by various studies that have identified the presence of inflammation in obesity. For instance, obesity-induced inflammation is characterized by an imbalance of macrophage phenotypes, with an increase in pro-inflammatory M1 macrophages and a decrease in anti-inflammatory M2 macrophages in adipose tissue. β-elemene has been shown to impact this imbalance by regulating pro-inflammatory cytokines in mouse white adipose tissue, suggesting that obesity is associated with an inflammatory state (PUBMED:35641589). Additionally, obesity is associated with inflammation, a cytokine milieu conducive to lymphocyte proliferation, and has been linked to an increased risk of non-Hodgkin lymphoma, further indicating the inflammatory nature of obesity (PUBMED:23803416). Moreover, obesity is related to elevated levels of inflammatory markers and lipids, and it can contribute to the development of metabolic diseases such as diabetes mellitus, which are also associated with inflammation (PUBMED:26416876). The inflammatory status in obesity is also evident from the increased levels of pro-inflammatory cytokines and the impact of dietary patterns high in fats and sugars on bone health, which involves an inflammatory environment (PUBMED:28513571). Furthermore, air pollution, which is an endocrine disrupter contributing to obesity, has been shown to have wide-ranging deleterious effects on human health, particularly on the cardiovascular system, and is associated with inflammation (PUBMED:30505291). Insulin resistance, a common feature of obesity, has been linked to latent inflammation, and interleukin-4 (IL-4), an anti-inflammatory cytokine, has been found to restore insulin sensitivity in insulin-resistant osteoblasts, indicating the role of inflammation in obesity-related insulin resistance (PUBMED:32564738). In summary, the evidence from these studies supports the concept that obesity is an inflammatory disease, characterized by an altered immune response, increased levels of pro-inflammatory cytokines, and the presence of inflammation-related complications.
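The NHL analysis in this record (PUBMED:23803416) pooled cohort-specific relative risks across the two cohorts. A standard way to do this is fixed-effect inverse-variance pooling on the log scale; the sketch below illustrates the arithmetic. The first estimate (RR 1.28, 95% CI 0.92-1.77) is the men's figure from the abstract; the second is a hypothetical stand-in, since the women's cohort-specific estimate is not reported there.

```python
import math

# Sketch of fixed-effect inverse-variance pooling of cohort relative risks,
# the kind of meta-analysis reported in PUBMED:23803416. The second set of
# numbers is hypothetical.
cohorts = [
    {"rr": 1.28, "lo": 0.92, "hi": 1.77},  # men, from the abstract
    {"rr": 1.15, "lo": 0.98, "hi": 1.35},  # women, hypothetical stand-in
]

z = 1.96  # normal quantile for a 95% CI
weighted_sum = weight_total = 0.0
for c in cohorts:
    log_rr = math.log(c["rr"])
    se = (math.log(c["hi"]) - math.log(c["lo"])) / (2 * z)  # SE from CI width
    weight = 1.0 / se**2                                    # inverse-variance weight
    weighted_sum += weight * log_rr
    weight_total += weight

pooled_log = weighted_sum / weight_total
pooled_se = math.sqrt(1.0 / weight_total)
print(
    f"Pooled RR {math.exp(pooled_log):.2f} "
    f"(95% CI {math.exp(pooled_log - z * pooled_se):.2f}-"
    f"{math.exp(pooled_log + z * pooled_se):.2f})"
)
```

Because pooling happens on the log scale, a reported "RR per 5 kg/m²" can also be rescaled to another increment as RR^(Δ/5), which is how such coefficients are usually reinterpreted.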
Instruction: The family in family practice: is it a reality? Abstracts: abstract_id: PUBMED:25366953 Family caregiver education: an Italian experience The role of family caregivers is essential for optimal treatment of patients with chronic diseases, since it ensures compliance with treatment, continuity of care, and emotional and social support. Despite the significant value of informal caregiving, relatives report that they lack the knowledge and skills needed to provide ongoing support to a family member with chronic disease and therefore need to be educated in their role as caregivers. Reliance on family caregivers who are unprepared for the daily management of the patient's illness may threaten the patient's quality of life and, moreover, increases the caregiver's burden of care by exposing them to the risk of psychosocial distress. The Nursing Department of Campus Bio-Medico of Rome has addressed this need by promoting training courses for family caregivers since 2011, with the goal of supporting families in acquiring specific skills related to the health care of patients with chronic conditions. The UCBM educational program was developed to convey knowledge, particularly on the role and challenges of the caregiver and on the health services available to support patients and family members. The program is structured as lessons that examine in depth the chronic diseases related to different areas of medical intervention, and as practical sessions guided by a nurse specialist, in order to support the family in acquiring the skills and competencies needed for the care management of patients at home. The positive feedback received from users who attended the two past editions of the UCBM educational program demonstrates the importance of acquiring technical knowledge and practical tools that are essential to manage all aspects of patient care independently. It is also important to note that learning these skills will support the family in managing the patient in both the short and the long term, which is highly relevant given the temporal characteristics of chronic disease. abstract_id: PUBMED:22515460 Family and family therapy in Russia. This article presents information about family and family therapy in the context of culture, traditions and contemporary social changes in Russia. The legislation on family rights is outlined with reference to the provisions on marriage and family in the Constitution, Civil Code and Family Code of the Russian Federation, which have changed in recent years. The definition of family and the description of family structure are given through the prism of the current demographic situation; the dynamics of marriage and divorce rates, mental disorders and disabilities; and phenomena such as social abandonment. The article notes the current curriculum, the teaching of family therapy and its shortcomings, the system of continuing education and supervision, and the initiatives of the Institute of Integrative Family Therapy to improve the preparation of specialists who can provide qualified psychosocial assistance to families according to the actual needs of society. The directions of state and private practice in family counselling and therapy, for both psychiatric and medical patients, and for adults and children within a family systemic approach, are highlighted, with an indication of the spectrum of techniques and methods used by Russian professionals.
The main obstacles to and prospects for the development of family therapy in Russia are summarized. abstract_id: PUBMED:32400014 The relationships of job and family demands and job and family resources with family caregivers' strain. Background: Because of an expected increase in the number of family caregivers, there is growing public and scientific interest in family caregiving and, more specifically, in the combination of family care with paid employment. It is important to gain insight into family caregivers' strain and its determining factors in the job and family domains. Aim: The aim of the study was to examine the associations of job and family demands and job and family resources with indicators of caregivers' psychological strain, that is, caregiver burden, work-related emotional exhaustion and general ill mental health. In our research, we focused on individuals who combine paid employment with family caregiving. Methods: A cross-sectional design was used. The study sample was derived in 2011 from a Dutch financial organisation and a healthcare organisation. A digital fully structured questionnaire was used. The sample consisted of 187 employees who identified themselves as family caregivers. Descriptive statistics and hierarchical linear regression analysis were performed. Results: Job demands (i.e. workload, work-family conflict) and family demands (i.e. family care hours and family-work conflict) were significantly positively associated with all three domain-specific indicators of strain. The resources of work-family and family-work enrichment and autonomy did not contribute to a reduction in experienced strain. More supervisor and colleague support was associated with lower ill mental health. Conclusion: Our study showed that job demands (workload, work-family conflict) and family demands (family care hours, family-work conflict) were clearly associated with caregiver strain, while associations for job and family resources were not evident. It remains necessary to pay attention to the demanding aspects of the dual roles of family caregivers, but also to investigate the resources they have available at work as well as in their home situation and to explore their potential reducing effect on family caregivers' strain. abstract_id: PUBMED:14129833 "THE FAMILY DOCTOR" N/A abstract_id: PUBMED:22515464 Family and family therapy in the Netherlands. This article describes how families are functioning in the Netherlands, and how family therapy is used in mental healthcare. In the open Dutch society, new ideas are easily incorporated, as exemplified by the rapid introduction and growth of family therapy in the 1980s. In recent decades, however, family therapy has lost ground to other treatment models that are more individually orientated, and adhere to stricter protocols. This decline of family therapy has been exacerbated by recent budget cuts in mental healthcare. In regular healthcare institutes family therapy now has a marginal position at best, although family treatment models are used in specific areas such as forensic treatments. In addition, the higher trained family therapists have found their own niches to work with couples and families. We argue that a stronger position of family therapy would be beneficial for patients and for families, in order to counteract the strong individualization of Dutch society. abstract_id: PUBMED:26050457 "Thinking family" N/A abstract_id: PUBMED:11449813 Variations in family composition. Implications for family therapy.
The American family is truly diverse, with many forms and changes between and among forms common. Each variation in family composition presents its own unique challenges and possesses its own unique possibilities for success, building on its own unique strengths. Each family in trouble expects and demands respectful, patient, and committed assistance from the family therapist. The family therapist must appreciate the uniqueness of an individual family and the features that that family has in common with similar families and with all families. It is the commitment of parental figures to take responsibility for their children and participate actively with them in their development that creates successful family life. The family therapist's acceptance, curiosity, and respect for the family and its members encourages change and enlists strengths. This approach permits the family therapist to participate in meaningful and effective lives for each family. abstract_id: PUBMED:29897013 Family relations and behavioral-emotional problems in adolescents - an analysis with the adolescent version of the Family Relations Test for Children and Adolescents. Objectives: So far, hardly any instruments are available in German-speaking countries that reliably assess family relations from the perspective of young people. Moreover, the relationship between family relations as perceived by young people and behavioral problems has rarely been investigated. Method: Based on the Family Relations Test, which was originally developed for children, the Family Relations Test for Children and Adolescents was developed to assess family relations from the perspective of adolescents (94 items, 44% newly developed). A clinical sample (n = 152) and a field sample (n = 132) were tested with this instrument, and in addition the behavioural problems of the adolescents were rated by the parents and the adolescents themselves. Results: The two-factor solution of the principal component analysis resulted in a clear distinction between two factors describing positive and negative family relations. The internal consistencies (Cronbach's alpha) of the scales describing positive and negative relations are between .91 and .93. On these total scores, young people from the clinical sample describe overall stronger negative relations in their families compared with young people in the field sample. Within the clinical sample, moderate correlations could be found between the extent of young people's mental problems as rated by themselves and by their parents. Conclusions: Positive and negative relationships of young people to individual family members and to the family as a whole can be assessed reliably and with factorial validity. As expected, significant correlations between negative family relations and mental problems were found. The adolescent version of the Family Relations Test for Children and Adolescents proves to be a useful tool to assess family relationships from the perspective of young people and thus to identify possible factors maintaining mental disorders in young people.
Background: We aimed to clarify what aspects of family function are measured by the Family APGAR by examining its correlations with the fourth edition of the Family Adaptability and Cohesion Evaluation Scale at Kwansei Gakuin (FACESKG IV). Furthermore, we sought to confirm the usefulness of the Family APGAR in general practice. Methods: We recruited 250 patients (aged 13-76 years) from the general medicine outpatient clinic in a Japanese hospital between July 1999 and February 2000. We employed a cross-sectional design and administered the Family APGAR and the FACESKG IV-16 (i.e., the short version). The scores on the questionnaires were compared using correlation and multiple regression analyses. We then analyzed relationships between the questionnaires and family issues measures using chi-square, Mann-Whitney U, and logistic regression analyses. Results: The Family APGAR partially evaluates the Cohesion dimension of family functioning as measured by the FACESKG IV-16. Furthermore, we could measure family disengagement using the resolve and partnership items of the Family APGAR. Family dysfunction (excessive or impoverished Adaptability or Cohesion) was not related to the presence of family issues. Nevertheless, there was a significant relationship between scores on the Resolve item and the family issues measure (χ² = 6.305, p = 0.043). Conclusions: The Family APGAR, especially the Resolve item, has the potential for use in treating patients with family issues. Interventions could be developed according to the simple Family APGAR responses. abstract_id: PUBMED:6631349 The family and family medicine: should this marriage be saved? Although there have been many noteworthy attempts to integrate a family focus into family medicine, there is little evidence that this integration has occurred in either residency education or community practice. When the specialty was founded, a family emphasis may have been politically useful as a way to differentiate the new family physician from the old general practitioner. Now, however, it is unclear what specific family-related material should be taught or who should teach it since few family practice faculty are trained in understanding families. If a practicing family physician actually wants to see a family, practical problems concerning time, space, and money arise. Furthermore, the medicolegal system is structured to protect the confidential relationship between one patient and one physician. Other obstacles to the integration include the difficult epistemological shift required to apply systems theory, the current chaos in the family field, and family medicine's need to gain professional stature by being proficient in traditional medicine. It may be time for the family and family medicine to reconsider their well-intended but ill-advised relationship.
The specialty of family medicine initially adopted a family emphasis as a means to differentiate the new family physician from the old general practitioner, but it remains unclear what specific family-related material should be taught and who should teach it, given that few family practice faculty are trained in understanding families (PUBMED:6631349). Practical challenges also exist when a practicing family physician attempts to see a family as a unit, including issues related to time, space, and financial considerations. Additionally, the medicolegal system is designed to protect the confidential relationship between an individual patient and their physician, which can be at odds with a family-based approach (PUBMED:6631349). Despite these challenges, there is recognition of the importance of family dynamics in patient care. For instance, the Family APGAR, which measures family function, has been shown to be useful in general practice, particularly the Resolve item, which can help identify family issues (PUBMED:27158242). However, the decline of family therapy in mental healthcare due to budget cuts and a shift towards more individually oriented treatment models with stricter protocols has been noted in the Netherlands (PUBMED:22515464). In summary, while the concept of family in family practice is foundational, the practical application of a family-centered approach faces significant barriers and has not been fully realized in practice. The specialty continues to grapple with how to effectively incorporate family dynamics into patient care and education, suggesting that the family in family practice is more of an aspiration than a consistent reality.
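Because the Family APGAR (PUBMED:27158242) figures prominently in this record's answer, a scoring sketch may help. It assumes the classic Smilkstein five-item version (Adaptation, Partnership, Growth, Affection, Resolve), each item scored 0-2, and the commonly cited interpretive bands; confirm against the validated version in use before any clinical application.

```python
# Minimal sketch of Family APGAR scoring under the assumptions stated above.
# The 0-2 item scoring and the 7-10 / 4-6 / 0-3 bands are the commonly
# cited convention, not taken from the abstract itself.

ITEMS = ("adaptation", "partnership", "growth", "affection", "resolve")

def family_apgar(responses: dict) -> tuple:
    if set(responses) != set(ITEMS):
        raise ValueError(f"Expected scores for exactly these items: {ITEMS}")
    if any(score not in (0, 1, 2) for score in responses.values()):
        raise ValueError("Each item must be scored 0, 1, or 2")
    total = sum(responses.values())
    if total >= 7:
        band = "highly functional"
    elif total >= 4:
        band = "moderately dysfunctional"
    else:
        band = "severely dysfunctional"
    return total, band

# A respondent scoring low only on Resolve, the item the study singled out
total, band = family_apgar(
    {"adaptation": 2, "partnership": 2, "growth": 2, "affection": 2, "resolve": 0}
)
print(total, band)  # 8 highly functional
```

The example also illustrates the study's point: a low Resolve item can hide inside an overall "functional" total, which is why the item-level signal, rather than the total score alone, was clinically informative.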
Instruction: High-level disinfection of gastrointestinal endoscopes: are current guidelines adequate? Abstracts: abstract_id: PUBMED:10364023 High-level disinfection of gastrointestinal endoscopes: are current guidelines adequate? Objective: For a germicide to obtain a high level disinfection (HLD) claim, FDA requires demonstration of a 6-log reduction of mycobacterial inoculum under worst case conditions. The purpose of this study was to assess the adequacy of current guidelines for high level disinfection of GI endoscopes using alkaline glutaraldehyde in simulated-use testing. Methods: Various gastrointestinal endoscopes were contaminated with Mycobacterium chelonae in 46 experiments. Quantitative cultures were obtained from each endoscope channel separately after each step: inoculation, standardized manual cleaning, immersion in 2% glutaraldehyde (Cidex) for 10, 20, or 45 min at room temperature, 70% isopropanol rinse, and drying. Results: Manual cleaning alone achieved a 4-log reduction. After 10 min of glutaraldehyde exposure, but before alcohol rinse, two of 10 experiments failed to achieve a 6-log reduction. However, after alcohol rinse, all 10 experiments achieved HLD. After 20 min of glutaraldehyde exposure, but before alcohol rinse, one of 18 experiments failed to achieve a 6-log reduction. After alcohol rinse, all 18 experiments achieved HLD. After 45 min of glutaraldehyde exposure, but before alcohol rinse, one of 18 experiments failed to achieve a 6-log reduction. After alcohol rinse, all 18 experiments achieved HLD. Thus, if the entire reprocessing protocol including manual cleaning, glutaraldehyde exposure, alcohol rinse, and drying was taken into account, the required 6-log reduction of mycobacteria was achieved with a minimum of 10 min of glutaraldehyde exposure at room temperature. Conclusions: Current guidelines for high level disinfection using glutaraldehyde are appropriate. Alcohol rinse is a valuable adjunctive step for drying and for its bactericidal effects. abstract_id: PUBMED:25699232 High-level disinfection of gastrointestinal endoscope reprocessing. High level disinfection (HLD) of the gastrointestinal (GI) endoscope is not simply a slogan, but rather a form of experimental monitoring-based medicine. By definition, the GI endoscope is a semicritical medical device. Hence, such medical devices require major quality assurance for disinfection. And because many of these items are temperature sensitive, low-temperature chemical methods, such as liquid chemical germicide, must be used rather than steam sterilization. In summarizing guidelines for infection prevention and control for GI endoscopy, there are three important steps that must be highlighted: manual washing, HLD with an automated endoscope reprocessor, and drying. Strict adherence to current guidelines is required because, compared to any other medical device, the GI endoscope is associated with more outbreaks linked to inadequate cleaning or disinfecting during HLD. Both experimental evaluation of surveillance bacterial cultures and in-use clinical results have shown that monitoring of the stringent processes to prevent and control infection is an essential component of the broader strategy to ensure the delivery of safe endoscopy services, because endoscope reprocessing is a multistep procedure involving numerous factors that can interfere with its efficacy.
Based on our years of experience in surveillance culture monitoring of endoscope reprocessing, we aim in this study to carefully describe what details require attention in GI endoscope disinfection and to share our experience so that patients can be provided with high-quality and safe medical care. Quality management encompasses all aspects of pre- and post-procedural care including the efficiency of the endoscopy unit and reprocessing area, as well as the endoscopic procedure itself. abstract_id: PUBMED:31241788 Guidelines for standardizing cleansing and disinfection of gastrointestinal endoscopes. As part of the activities toward standardizing endoscopy procedures, the Japan Gastroenterological Endoscopy Society has prepared guidelines for cleansing and disinfection of gastrointestinal endoscopes. The environment of gastrointestinal endoscopy differs between Japan and advanced Western countries. In advanced Western countries, gastrointestinal endoscopy is performed almost exclusively at specialized facilities, where strict provisions are observed for cleansing and disinfecting endoscopes. In Japan, however, gastrointestinal endoscopy is performed even in small clinics, and the annual number of gastrointestinal endoscopy cases is enormous. In addition, the method for cleansing and disinfecting endoscopes differs among hospitals. Although there is a distinct lack of evidence for how gastrointestinal endoscopes are cleaned and disinfected, it is necessary to standardize the method for doing so to advance the field of endoscopic medicine. abstract_id: PUBMED:21269894 Current status of cleaning and disinfection for gastrointestinal endoscopy in China: a survey of 122 endoscopy units. Background: Adequate compliance with the existing guidelines for cleaning and disinfection of gastrointestinal endoscopes and accessories is necessary to obtain high-level disinfection and prevent pathogen transmission. Aim: To investigate cleaning and disinfection practice in China. Methods: A questionnaire with 21 questions concerning gastrointestinal endoscopy reprocessing was sent by e-mail to 189 endoscopy units in China. Results: One hundred and twenty-two (80.39%) of the 189 units responded. Compared with the low-workload units (<5000 procedures/year), the high-workload units (≥5000 procedures/year) had a significantly higher number of gastrointestinal endoscopes (25.8 ± 3.6 vs. 4.7 ± 1.8, p < 0.01) and a higher rate of possessing automated endoscope reprocessors (43.9% vs. 3.1%, p<0.01). Glutaraldehyde was the most commonly employed disinfectant (88.5%) in all the units. In 23/122 (18.8%) units, the exposure time to glutaraldehyde was <45 min in the case of infectious disease patients. Eighty-six of 122 (70.5%) units reused disposable materials, of which 21/86 (24.4%) reused disposable forceps and disposable polypectomy hooks, and 2/86 (1.6%) reused disposable injection needles intermittently. Conclusion: Although gastrointestinal endoscopy has developed rapidly in China in the past decade, there is still room for improvement in the practice of endoscopy reprocessing, especially in middle-sized and small cities. abstract_id: PUBMED:30265392 Guideline Implementation: Manual Chemical High-Level Disinfection: 1.5 www.aornjournal.org/content/cme. The Spaulding system recommends sterilization or, at minimum, high-level disinfection for semicritical medical items (ie, items that come into contact with mucous membranes or nonintact skin).
High-level disinfection deactivates all types of microorganisms except bacterial spores and prions. If high-level disinfection is not performed correctly, however, contaminated medical or surgical devices could transmit pathogens to patients. Reusable semicritical items may be processed using manual methods when manual chemical high-level disinfection is the only processing method recommended by the manufacturer. The AORN "Guideline for manual chemical high-level disinfection" provides guidance to health care personnel for performing effective manual chemical high-level disinfection and preventing injury associated with the use of high-level disinfectant chemicals. This article focuses on key points of the guideline that address preparation of items for high-level disinfection, use of high-level disinfectants, health care personnel safety, and documentation. Perioperative RNs should review the complete guideline for additional information and for guidance when writing and updating policies and procedures. abstract_id: PUBMED:11100996 Disinfection of gastrointestinal endoscopes and accessories. A worldwide concern has emerged with regard to endoscope disinfection, and many gastrointestinal endoscopy associations have developed guidelines for proper disinfection of endoscopes and endoscopic accessories. A working party was convened to formulate guidelines for the Asia-Pacific region, pertaining to any setting in which gastrointestinal endoscopy is performed. Endoscope reprocessing that meets the established standard of practice helps to ensure a microbe-free endoscope for all patients, reduces the risk of disease transmission, and helps to prolong the life of the endoscope. The recommendations included mechanical cleaning as the first and most important step, followed by immersion in 2% glutaraldehyde for a minimum period of 10 min. Automated disinfectors have been recommended for busy endoscopy centres to ensure better compliance. Reuse of endoscopic accessories meant for 'single use' remains a controversial issue. Strict quality assurance programmes are essential to prevent non-compliance with these guidelines. abstract_id: PUBMED:26473118 Recent Update on Microbiological Monitoring of Gastrointestinal Endoscopes after High-Level Disinfection. Endoscopy-related infections are important contributors to nosocomial infections. Endoscope reprocessing according to standard guidelines ensures high-level disinfection and prevents endoscopy-related infections. Microbiological surveillance may help in monitoring the effectiveness of gastrointestinal endoscope disinfection. The process involves microbial cultures and non-culture methods such as bioburden assays, adenosine triphosphate (ATP) bioluminescence, and quantitative polymerase chain reactions (PCRs). Surveillance culturing to monitor endoscopes after reprocessing has been recommended by a majority of organizations. Bioburden assays, ATP bioluminescence, and quantitative PCRs provide rapid and reliable measures. Each institution will have to establish its own surveillance guidelines. abstract_id: PUBMED:22529696 Swab culture monitoring of automated endoscope reprocessors after high-level disinfection. Aim: To conduct a bacterial culture study for monitoring decontamination of automated endoscope reprocessors (AERs) after high-level disinfection (HLD). Methods: From February 2006 to January 2011, the authors conducted randomized consecutive sampling each month for 7 AERs.
The authors collected a total of 420 swab cultures, including 300 cultures from 5 gastroscope AERs and 120 cultures from 2 colonoscope AERs. Swab cultures were obtained from the residual water from the AERs after a full reprocessing cycle. Samples were cultured to test for aerobic bacteria, anaerobic bacteria, and Mycobacterium tuberculosis. Results: The positive culture rate of the AERs was 2.0% (6/300) for gastroscope AERs and 0.8% (1/120) for colonoscope AERs. All the positive cultures, including 6 from gastroscope and 1 from colonoscope AERs, showed monofloral colonization. Of the gastroscope AER samples, 50% (3/6) were colonized by aerobic bacteria and 50% (3/6) by fungi. Conclusion: A full reprocessing cycle of an AER with HLD is adequate for disinfection of the machine. Swab culture is a useful method for monitoring AER decontamination after each reprocessing cycle. Fungal contamination of AERs after reprocessing should also be kept in mind. abstract_id: PUBMED:14738207 Maintenance of gastrointestinal endoscopes. Endoscopes are used for visualization and biopsy of gastrointestinal lesions, as well as therapeutic procedures, such as foreign body retrieval. In the past, they were primarily used in large institutional settings where specialized personnel could focus on their maintenance. Today, they are becoming increasingly common in general practice. The maintenance of gastrointestinal endoscopes in the veterinary setting involves many challenges, including safe handling, reprocessing, and storage. Meeting these challenges requires well-trained personnel and strict protocols. Reprocessing, which includes cleaning and disinfection, offers the greatest challenge. The complex structure of flexible endoscopes, particularly the long, narrow channels, makes them difficult to clean. Gastrointestinal endoscopes operate in a contaminated environment, exposing them to high levels of organic matter and bacteria. High-level disinfection is necessary for infection control, but the liquid germicides available for endoscope disinfection can be toxic to medical personnel and patients. abstract_id: PUBMED:11100998 Survey of gastrointestinal endoscope disinfection and accessory reprocessing practices in the Asia-Pacific region. Concern has been raised about the possibility of transmission of infection by gastrointestinal endoscopes. This concern may be related to the reliability of the disinfection techniques or to compliance with the guidelines laid down for disinfection. There have been very few recent surveys examining the disinfection practices for gastrointestinal endoscopes and their accessories. This survey was undertaken to understand such practices in the Asia-Pacific region. A questionnaire was sent by post or electronic mail to 356 randomly selected endoscopy centres in the Asia-Pacific region. Responses were received from 38.7% of the centres. The survey showed that there were significant aberrations in the application of disinfection procedures. One-third of the respondents did not practise disinfection at the start of the day's session, 2.9% of the centres were not using a high-level disinfectant, and 34.7% of the centres used a soak time in 2% glutaraldehyde of less than 10 min. At 40% of the centres, forced air or alcohol was not used to dry the endoscopes at the end of the day's session. Reuse of accessories meant for 'single use' was widely practised.
Answer: Based on the provided abstracts, it appears that current guidelines for high-level disinfection (HLD) of gastrointestinal endoscopes are generally considered adequate when properly followed. A study assessing the adequacy of current guidelines using alkaline glutaraldehyde in simulated-use testing found that the required 6-log reduction of mycobacteria was achieved with a minimum of 10 minutes of glutaraldehyde exposure at room temperature, especially when combined with manual cleaning, alcohol rinse, and drying (PUBMED:10364023). This suggests that the guidelines for HLD using glutaraldehyde are appropriate, with the alcohol rinse serving as a valuable adjunctive step. Furthermore, the importance of strict adherence to guidelines is emphasized due to the association of GI endoscopes with more outbreaks linked to inadequate cleaning or disinfection during HLD compared to any other medical device (PUBMED:25699232). The guidelines for infection prevention and control for GI endoscopy highlight three important steps: manual washing, HLD with an automated endoscope reprocessor, and drying. In Japan, the Japan Gastroenterological Endoscopy Society has prepared guidelines for cleansing and disinfection of gastrointestinal endoscopes to standardize the method and advance the field of endoscopic medicine, recognizing the need for standardization even in settings with a large number of procedures and varying practices (PUBMED:31241788). A survey in China indicated that there is room for improvement in the practice of endoscopy reprocessing, especially in middle-sized and small cities, despite the rapid development of gastrointestinal endoscopy in the country (PUBMED:21269894). The AORN "Guideline for manual chemical high-level disinfection" provides guidance for performing effective manual chemical HLD and preventing injury associated with the use of high-level disinfectant chemicals (PUBMED:30265392). Overall, the abstracts suggest that while the guidelines are adequate, the effectiveness of HLD is contingent upon proper implementation and adherence to the guidelines, as well as ongoing monitoring and quality assurance programs to ensure compliance (PUBMED:11100996, PUBMED:26473118, PUBMED:22529696, PUBMED:14738207, PUBMED:11100998).
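A note on the arithmetic behind the 6-log standard cited above: log reductions are defined on a base-10 scale, so the reductions achieved by successive reprocessing steps add. The definition, with a worked reading of the figures reported in PUBMED:10364023, is:

\[ \text{log reduction} = \log_{10}\!\left(\frac{N_{\text{before}}}{N_{\text{after}}}\right) \]

A 6-log reduction therefore means that no more than one organism in 10^6 survives the protocol. Because manual cleaning alone achieved roughly 4 logs in that study, the chemical steps needed to contribute only about 2 further logs for the protocol as a whole to meet the threshold:

\[ \underbrace{4}_{\text{manual cleaning}} + \underbrace{\ge 2}_{\text{glutaraldehyde exposure and alcohol rinse}} \ge 6 \]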
Instruction: Does choosing a treatment depend on making a diagnosis? Abstracts: abstract_id: PUBMED:37828732 The impact of COVID-19 on Australian clinicians' decision making in line with the principles of Choosing Wisely. Aim: To explore clinicians' perspectives on their decision-making processes and considerations, in line with the Choosing Wisely principles, during the first wave of the COVID-19 pandemic. Design: An exploratory qualitative approach was used. Methods: Data were collected via semi-structured interviews to encourage participants to discuss their own experience in making clinical decisions during the COVID-19 pandemic. A total of 12 clinicians from across disciplines were interviewed to reach data saturation. Interview data were analysed considering the Choosing Wisely principles. Results: Five main themes relating to clinician decision-making emerged: prioritizing care and treatment, uncertainty regarding best practice as a result of rapidly changing guidelines, organizational challenges to clinical decision-making, the use of telehealth, and enabling consumer engagement with health services. Conclusion: Despite the disruption caused by COVID-19, clinicians were mindful of necessary care and worked to ensure that core care was not compromised during the first wave of the pandemic. The need for clinicians to protect both their own safety and that of their colleagues arose as an additional factor that influenced clinicians' decision-making process during the COVID-19 pandemic. abstract_id: PUBMED:31059674 Shared Decision-Making With Choosing Wisely. Choosing Wisely is a national health care improvement campaign promoting conversations between women and their health care professionals about selecting high-value health care practices. It disseminates lists of recommendations and downloadable educational materials from professional societies on its website. In November 2018, we searched for and categorized Choosing Wisely recommendations pertinent to women's health care. Of 78 recommendations, 28 (36%) were related to perinatal care, 22 (28%) were related to gynecologic care, and 28 (36%) were related to women's health and general care. Twelve recommendations (17.6%) were related to antenatal care, 10 (14.7%) to intrapartum and postpartum care, and 10 (14.7%) to cervical cancer screening. These free resources can help frame the shared decision-making process in clinical practice. abstract_id: PUBMED:29306825 Selecting appropriate wastewater treatment technologies using a choosing-by-advantages approach. Selecting the most sustainable wastewater treatment (WWT) technology among possible alternatives is a very complex task because the choice must integrate economic, environmental, and social criteria. Traditionally, several multi-criteria decision-making approaches have been applied, the most often used being the analytic hierarchy process (AHP). However, AHP allows users to offset poor environmental and/or social performance with low cost. To overcome this limitation, our study examines a choosing-by-advantages (CBA) approach to rank seven WWT technologies for secondary WWT. CBA results were compared with results obtained by using the AHP approach. The rankings of WWT alternatives differed depending on whether the CBA or AHP approach was used, which highlights the importance of the method used to support decision-making processes, particularly ones that rely on subjective interpretations by experts.
This paper uses a holistic perspective to demonstrate the benefits of using the CBA approach to support a decision-making process when a group of experts must come to a consensus in selecting the most suitable WWT technology among several available. abstract_id: PUBMED:30930391 "Choosing Wisely" Improves Shared Decision Making. "Choosing Wisely" is a campaign promoting awareness of the dangers of providing medical tests, drugs, and procedures that are not supported by firm scientific evidence and that may harm patients. The American Society of Health-System Pharmacists has released a recommendation on polypharmacy. Stakeholders such as patients, health professionals, hospital executives, industry, and mass media may all contribute to treatment decision making. The patient-centric "five questions you need to ask your doctor" are a useful trigger for better dialogue between patients and health professionals and could contribute to appropriate drug prescription in which benefit always exceeds the potential for harm. abstract_id: PUBMED:35434292 Integrating the Choosing Wisely 5 Questions into Family Meetings in the Intensive Care Unit: A Randomized Controlled Trial Investigating the Effect on Family Perceived Involvement in Decision-Making. Family members often act as surrogate decision makers for patients in the intensive care unit (ICU). The use of printed prompts may help families feel empowered to fulfill this role. This was a prospective, randomized controlled trial in 3 ICUs in Western Australia. In the intervention arm, families received the Choosing Wisely 5 questions as printed prompts prior to a family meeting; the control arm did not receive prompts. The primary outcome was family perceived involvement in decision-making. Outcomes were measured using a survey. Sixty families participated in the study. The majority of families (87.1% control, 79.3% intervention; P = .334) reported feeling "very included" in decision-making. There was no difference in secondary outcomes, and uptake of the questions by the intervention arm was minimal. This is the first randomized trial evaluating the use of a decision-making tool for families in the ICU. Despite ceiling effects in the outcome measures, these results suggest room for future study of the Choosing Wisely 5 questions in the ICU. abstract_id: PUBMED:36597681 Choosing Wisely in Patients with Polypharmacy. Polypharmacy and potentially inappropriate medication have a negative impact on health. For reducing or stopping medication (deprescribing), patient benefit is crucial. The following stepwise approach has turned out to be successful: a. ask patients to bring along all their medications and compare them with the current medication list; b. offer shared decision making; c. evaluate every drug for indication, balance between benefit and harm, side effects, and dose; d. prioritize benefit and harm according to the values, preferences, and goals of the patient; e. decide together about deprescribing; f. track changes in the medication plan and arrange a follow-up consultation. We illustrate this approach by the example of an older, frail female patient with polypharmacy. Deprescribing is just as important for patients' well-being as prescribing! abstract_id: PUBMED:31467057 Semistructured interviews regarding patients' perceptions of Choosing Wisely and shared decision-making: an Australian study.
Objectives: This study aimed to examine how patients perceive shared decision-making regarding CT scan referral and the use of the five Choosing Wisely questions with their general practitioner (GP). Design: This is a qualitative exploratory study using semistructured interviews. Setting: This study was conducted in a large metropolitan public healthcare organisation in urban Australia. Participants: Following purposive sampling, 20 patients and 2 carers participated. Patient participants aged 18 years or older were eligible if they were attending the healthcare organisation for a CT scan and had been referred by their GP. Carers/family were eligible to participate when they were in the role of an unpaid carer and were aged 18 years or older. Participants were required to speak English sufficiently to provide informed consent. Participants with cognitive impairment were excluded. Findings: Eighteen interviews were conducted with the patient only. Two interviews were conducted with the patient and the patient's carer. Fourteen participants were female. Five themes resulted from the thematic analysis: (1) needing to know, (2) questioning doctors is not necessary, (3) discussing scans is not required, (4) uncertainty about questioning, and (5) valuing the Choosing Wisely questions. Participants reported that they presented to their GP with a health problem that they needed to understand and address. Participants accepted their GP's decision to prescribe a CT scan to identify the nature of their problem. They reported ambivalence about engaging in shared decision-making with their doctor, although many participants reported valuing the Choosing Wisely questions. Conclusions: Shared decision-making is an important principle underpinning Choosing Wisely. Practice implementation requires understanding patients' motivations to engage in shared decision-making, with a focus on attitudes, beliefs, knowledge, and emotions. Systems-level support and education for healthcare practitioners in effective communication are important. However, this needs to emphasise communication with patients who have varying degrees of motivation to engage in shared decision-making and Choosing Wisely. abstract_id: PUBMED:12365481 Does choosing a treatment depend on making a diagnosis? US and French physicians' decision making about acute otitis media. Background: The classic sequential processing model of clinical decision making, in which the treatment choice follows and depends on the diagnostic judgment, may in some cases be replaced by a processing model in which the treatment choice depends on an independent assessment of the diagnostic and other cues. The aim of this study was to determine which processing model would better describe physicians' treatment choices in a simulated clinical task. Methods: Seventy-five US and French primary care physicians were presented twice, in a different order, with the same set of 46 scenarios of 15-month-old children suspected of having acute otitis media (AOM). In one set they rated the probability of AOM, and in the other set they indicated whether they would treat the child with antibiotics (and how confident they felt in their decision). Linear regression analyses revealed each individual's two judgment policies. Hierarchical discriminant analysis was used to analyze the variance explained in the treatment choice by, first, the diagnostic judgment; second, the cues specific to treatment; and third, the cues specific to diagnosis.
Results: Even when choosing treatment, the participants placed the greatest weight on diagnostic cues, especially the ear findings. Only 28% used the cues that reflected parental issues. For 36%, the diagnostic cues had an effect on the treatment choice independent of the effect (if any) of the diagnostic judgment. Conclusion: In deciding how to treat AOM, the majority of the participating US and French primary care physicians followed the classic sequential processing model, but a substantial minority used instead an independent processing model. abstract_id: PUBMED:29153357 Choosing Wisely - Klug Entscheiden: conceptual and ethical considerations. Choosing Wisely and Klug Entscheiden (KE) are recent physician-triggered campaigns that aim at identifying and reducing blatant over-treatment as well as (a particular focus of KE) under-treatment. This paper provides some conceptual and normative analyses of these campaigns' goals and justifications. abstract_id: PUBMED:34741561 Engaging consumers and health professionals in collaborative decision-making to optimize care. Shared communication and collaborative decision-making between consumers and health professionals are essential in optimizing the quality of consumer care. However, the consumers' ability to ask questions and seek answers, as well as health professionals' communication skills to engage with the consumer, are necessary considerations for the collaborative decision-making process. This quality improvement initiative, adapted from the international Choosing Wisely program, sought to understand the context of collaborative decision-making from the perspective of consumers and health professionals. Findings indicated that health professionals acknowledged a problem with unnecessary use and overuse of tests, treatments, and procedures. Consumers suggested they were confident asking questions about their health and care. The findings of this study highlight collaborative decision-making as a worthwhile and beneficial undertaking. Answer: Choosing a treatment often depends on making a diagnosis, as the classic sequential processing model of clinical decision-making suggests that the treatment choice follows and is contingent upon the diagnostic judgment (PUBMED:12365481). This model was observed in a study where US and French primary care physicians were presented with scenarios of children suspected of having acute otitis media (AOM). The study found that the majority of physicians followed this classic model, placing the greatest weight on diagnostic cues, particularly ear findings, when deciding on treatment. However, it also noted that a substantial minority of physicians used an independent processing model, where the treatment choice depended on an independent assessment of the diagnostic and other cues (PUBMED:12365481).
In summary, while the traditional approach to clinical decision-making often involves making a diagnosis before selecting a treatment, there is a growing emphasis on shared decision-making that incorporates patient preferences and values, which may not strictly depend on a formal diagnosis but rather on a holistic assessment of the patient's situation.
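The judgment-analysis design in PUBMED:12365481 lends itself to a compact computational illustration of how the two processing models can be told apart. The Python sketch below is a toy reconstruction under stated assumptions: the synthetic data, variable names, and effect sizes are invented for illustration, and ordinary least squares on a continuous treatment rating stands in for the hierarchical discriminant analysis the study applied to a binary antibiotic choice.

import numpy as np

rng = np.random.default_rng(0)
n = 46  # number of scenarios per physician in the study

# Illustrative cues and judgments for one simulated physician.
ear_findings = rng.normal(size=n)     # diagnostic cue
parental_issues = rng.normal(size=n)  # treatment-specific cue
diagnosis = 0.8 * ear_findings + 0.2 * rng.normal(size=n)  # diagnostic judgment
treatment = 0.7 * diagnosis + 0.3 * parental_issues + 0.1 * rng.normal(size=n)

def r2(X, y):
    """R-squared of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: variance explained by the diagnostic judgment alone.
r2_step1 = r2(diagnosis[:, None], treatment)
# Step 2: add the treatment-specific cue.
r2_step2 = r2(np.column_stack([diagnosis, parental_issues]), treatment)
# Step 3: add the diagnostic cue itself; a non-trivial gain at this step is
# the signature of the independent processing model, because the cue then
# influences treatment beyond its effect on the diagnostic judgment.
r2_step3 = r2(np.column_stack([diagnosis, parental_issues, ear_findings]), treatment)

print(f"R2 by step: {r2_step1:.2f} -> {r2_step2:.2f} -> {r2_step3:.2f}")

In this toy setup the diagnostic cue acts only through the diagnosis, so the step-3 gain is essentially zero, the pattern the study reports for the majority of physicians who followed the sequential model.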
Instruction: Surgical outcomes of degenerative lumbar spondylolisthesis with anterior vacuum disc: can the intervertebral cage overcome intradiscal vacuum phenomenon and enhance posterolateral fusion? Abstracts: abstract_id: PUBMED:25104604 Surgical outcomes of degenerative lumbar spondylolisthesis with anterior vacuum disc: can the intervertebral cage overcome intradiscal vacuum phenomenon and enhance posterolateral fusion? Background: The vacuum phenomenon within an intervertebral disc is not an uncommon radiographic finding in the elderly. However, no reports in the English literature have focused on the effect of an anterior vacuum disc on the surgical outcome of same-segment spondylolisthesis. We hypothesized that instrumented posterolateral fusion is not adequate in this situation and that additional interbody fusion with cages would provide better radiographic and clinical outcomes. Methods: The medical records of 72 patients who underwent instrumented fusion for one-segment degenerative lumbar spondylolisthesis with a vacuum disc were reviewed. Thirty-three patients were placed in the noncage group and 39 in the cage group, based on whether or not augmentation with an intervertebral cage was carried out. Radiographic parameters (disc height, translation, intradiscal angle, segmental angle, and fusion) on preoperative, postoperative, and final radiographs were compared between groups. The Oswestry Disability Index (ODI) and Brosky criteria were used to evaluate clinical outcomes. Results: Blood loss and operation time were greater in the cage group, but without a significant difference (p = 0.271 and p = 0.108, respectively). Preoperative radiographic data were similar between groups, but the cage group had more intradiscal lordosis, less translation, and a higher disc height after surgery, and maintained these advantages at the final follow-up. In addition, the posterolateral fusion rate was significantly higher in the cage group (92.3% vs. 56.1%, p < 0.001), and they obtained more ODI improvement (30.62 vs. 26.39, p = 0.369) and a higher final satisfaction rate (79.5% vs. 57.6%, p = 0.044). There was no decrease in the incidence of developing adjacent segment disease in the noncage group. Conclusions: The vacuum sign at the spondylolisthesis segment should be regarded as another sign of instability. We suggest that instrumented posterolateral fusion combined with intervertebral fusion with a cage can overcome this situation. abstract_id: PUBMED:27982763 Effects of intradiscal vacuum phenomenon on surgical outcome of lateral interbody fusion for degenerative lumbar disease. Objective: The authors investigated whether the presence of intradiscal vacuum phenomenon (IVP) results in greater correction of disc height and restoration of segmental lordosis (SL). Methods: A retrospective chart review was performed on every patient at the University of South Florida's Department of Neurosurgery treated with lateral lumbar interbody fusion between 2011 and 2015. From these charts, preoperative plain radiographs and CT images were reviewed for the presence of IVP. Preoperative and postoperative posterior disc height (PDH), anterior disc height (ADH), and SL were measured at disc levels with IVP and compared with those at disc levels without IVP using the t-test. Linear regression was used to evaluate the factors that predict changes in PDH, ADH, and SL. Results: One hundred forty patients with 247 disc levels between L-1 and L-5 were treated with lateral lumbar interbody fusion.
Among all disc levels treated, the mean PDH increased from 3.69 to 6.66 mm (p = 0.011), the mean ADH increased from 5.45 to 11.53 mm (p < 0.001), and the mean SL increased from 9.59° to 14.55° (p < 0.001). Significantly increased PDH was associated with the presence of IVP, addition of pedicle screws, and lack of cage subsidence; significantly increased ADH was associated with the presence of IVP, anterior longitudinal ligament (ALL) release, addition of pedicle screws, and lack of subsidence; and significantly increased SL was associated with the presence of IVP and ALL release. Conclusions: IVP in patients with degenerative spinal disease remains grossly underreported. The data from the present study suggest that the presence of IVP results in increased restoration of disc height and SL. abstract_id: PUBMED:25277044 The effects of anterior vacuum disc on surgical outcomes of degenerative versus spondylolytic spondylolisthesis: at a minimum two-year follow-up. Background: The vacuum phenomenon within the intervertebral disc usually represents disc degeneration. There are no reports in the English literature that focus on the effect of an anterior vacuum disc on the surgical outcome of same-segment spondylolisthesis. Methods: Patients with degenerative spondylolisthesis (DS) or isthmic spondylolisthesis (IS) who underwent spinal surgery between January 2005 and December 2006 were reviewed. Patients who met certain criteria, including (1) only mono-segment spondylolisthesis, (2) gas within the disc space of the spondylolisthesis segment on preoperative radiographs, (3) having received posterior decompression, posterior pedicle screw fixation, and posterolateral fusion, and (4) at least 12 months of follow-up radiographs available to define the posterolateral fusion rate, were enrolled in the study. Four radiographic parameters (disc height, translation, intradiscal angle, segmental angle) were assessed. Two-year postoperative radiographs were used to determine whether the posterolateral segment was fused or not. Clinical outcome and complications during the follow-up period were documented. Results: The incidence of the disc vacuum phenomenon was significantly higher in the IS group than in the DS group (p < 0.001). The IS group had more listhesis and a narrower disc height on preoperative static radiographs; however, the DS group had a more prominent angle and listhesis change in preoperative dynamic variables. The posterolateral fusion rate was significantly higher in the IS group (p = 0.019). The preoperative Oswestry Disability Index (ODI) score, the final ODI, and the ODI difference were similar between groups. More excellent and good results were seen in the IS group. In addition, better final ODI scores and results were seen in the bilateral fusion group than in the nonfusion group. Conclusion: The disc vacuum phenomenon does not absolutely equate to anterior instability. Determination of stability or instability in a vacuum disc should be based on a combination of dynamic radiographs. In the present study, vacuum discs in the DS group showed more instability and a higher posterolateral pseudoarthrosis rate. abstract_id: PUBMED:36697335 Does vacuum phenomenon at non-fused discs affect the postoperative course after transforaminal lumbar interbody fusion in patients showing a positive value of difference in lumbar lordosis? Introduction: Preoperative difference in lumbar lordosis (DiLL) has been associated with surgical outcomes after single-level transforaminal lumbar interbody fusion (TLIF).
Patients with DiLL>0 (DiLL (+)) tended to show worse clinical outcomes and greater postoperative restoration of lumbar lordosis (LL). However, some patients with DiLL (+) showed relatively good outcomes and no postoperative LL restoration. This study aimed to elucidate whether the lumbar intervertebral disc vacuum phenomenon (VP) influences the clinical course after single-level TLIF in patients with DiLL (+) and DiLL (-). Methods: Patients with lumbar spinal stenosis and degenerative spondylolisthesis treated with single-level TLIF were included. Pre- and postoperative LL were measured, and postoperative LL improvement was calculated. Preoperative DiLL was calculated as preoperative supine LL minus standing LL. Severity of VP at the non-fused discs (SVP (non-FS)) was evaluated using preoperative reconstructed computed tomography imaging. Clinical outcomes were assessed using the Oswestry Disability Index, the visual analogue scale (VAS) for low back pain (LBP), lower-extremity pain, and numbness, and the Japanese Orthopaedic Association Back Pain Evaluation Questionnaire. Patients were stratified by the median preoperative SVP (non-FS) score into severe and mild VP groups within the DiLL (+) and DiLL (-) strata, and their surgical outcomes were compared. Results: Overall, 89 patients were included. In patients with DiLL (+) (n = 37), those with severe VP showed worse clinical outcomes, particularly for LBP, and DiLL (+) patients with mild VP showed greater LL improvement (6.5° ± 10.0°). In patients with DiLL (-) (n = 52), those with severe VP showed worse clinical outcomes, particularly for LBP, and no differences in preoperative LL, postoperative LL, or LL improvement were observed between the two groups. Conclusion: Patients with DiLL (+) and DiLL (-) showed different clinical courses depending on VP severity at the non-fused discs after single-level TLIF. abstract_id: PUBMED:37651562 Severe Intervertebral Vacuum Phenomenon is Associated With Higher Preoperative Low Back Pain, ODI, and Indication for Fusion in Patients With Degenerative Lumbar Spondylolisthesis. Study Design: Retrospective study of prospectively collected data. Objective: To analyze the association between intervertebral vacuum phenomenon (IVP) and clinical parameters in patients with degenerative spondylolisthesis. Summary Of Background Data: IVP is a sign of advanced disc degeneration. The correlation between IVP severity and low back pain in patients with degenerative spondylolisthesis has not been previously analyzed. Methods: We retrospectively analyzed patients with degenerative spondylolisthesis who underwent surgery. The vacuum phenomenon was measured on computed tomography scans and classified as mild, moderate, or severe. A lumbar vacuum severity (LVS) scale was developed based on vacuum severity. The associations of IVP at L4/5 and the LVS scale with preoperative and postoperative low back pain, as well as with the Oswestry Disability Index, were assessed. The association of IVP at L4/5 and the LVS scale with surgical decision-making, defined as decompression alone or decompression and fusion, was assessed through univariable logistic regression analysis. Results: A total of 167 patients (52.7% female) were included in the study. The median age was 69 years (interquartile range 62-72). Overall, 100 (59.9%) patients underwent decompression and fusion and 67 (40.1%) underwent decompression alone.
The univariable regression demonstrated a significantly increased odds ratio (OR) for back pain in patients with more severe IVP at L4/5 [OR = 1.69 (95% CI 1.12-2.60), P = 0.01]. The univariable regressions demonstrated a significantly increased OR for increased disability with more severe L4/L5 IVP [OR = 1.90 (95% CI 1.04-3.76), P = 0.04] and with an increased LVS scale [OR = 1.17 (95% CI 1.02-1.35), P = 0.02]. IVP severity at L4/L5 was associated with a higher rate of indication for fusion surgery. Conclusion: Our study showed that in patients with degenerative spondylolisthesis undergoing surgery, the severity of the vacuum phenomenon at L4/L5 was associated with greater preoperative back pain and a worse Oswestry Disability Index. Patients with severe IVP were more likely to undergo fusion. abstract_id: PUBMED:31150862 Vacuum Facet Phenomenon in Computed Tomography Imaging: A Sign of Instability in Degenerative Spondylolisthesis? Objective: The aim of the present study was to investigate the relationship between vacuum facet phenomena and lumbar instability in patients with degenerative spondylolisthesis (DS) at L4-L5. Methods: Patients with L4-L5 DS who had both lumbosacral flexion-extension radiographs and computed tomography (CT) scans available for review from January 2016 to December 2017 were eligible for the present study. The dynamic motion index (DMI) of each patient was used to represent the percentage of slippage of L4 on L5 on the flexion radiographs minus the percentage on the extension radiographs. The facet vacuum index refers to the average width of the facet vacuum on the left and right sides. Results: A total of 67 patients with L4-L5 DS were included in the present study. Of the 67 patients, 35 had a vacuum facet phenomenon on their CT scan and 32 patients did not. The incidence of lumbar instability in the patients with a vacuum facet phenomenon was significantly greater than that in the patients without a vacuum facet phenomenon (P = 0.015). The mean DMI for the patients with a vacuum facet phenomenon was significantly greater than that for the patients without a vacuum facet phenomenon (P < 0.001). A positive linear correlation was found between the facet vacuum index and DMI for patients with a vacuum facet phenomenon (Pearson correlation coefficient, 0.597; P < 0.001). Conclusions: A linear correlation was found between the degree of segmental motion and the width of the vacuum facet phenomenon in patients with DS at L4-L5. Our study has shown that vacuum facet phenomena detected on the CT images of patients with DS are highly predictive of segmental instability. abstract_id: PUBMED:38416194 Intradiscal vacuum phenomenon matches lumbar spine degeneration patterns in an ageing population. Purpose: The intradiscal vacuum phenomenon (IDVP) is a well-recognised, yet poorly visualised and poorly understood, radiological finding in disc degeneration, particularly with regard to its role in spinal alignment. This CT analysis of the lumbar spine in an ageing population aims to identify patterns associated with IDVP, including lumbopelvic morphology and associated spinal diagnoses. Methods: An analysis was performed of an over-60s population sample of 2020 unrelated abdominal CT scans, without acute spinal presentations. Spinal analysis included sagittal lumbopelvic reconstructions to assess for IDVP and pelvic incidence (PI).
Subjects with degenerative pathologies, including previous vertebral fractures, auto-fusion, transitional vertebrae, and listhesis, were also selected out and analysed separately. Results: The prevalence of lumbar spine IDVP was 50.3% (955/1898) and increased with age (125 exclusions). IDVP increased in severity towards the lumbosacral junction (L1L2 8.3%, L2L3 10.9%, L3L4 11.5%, L4L5 23.9%, and L5S1 46.3%). A lower PI yielded a higher incidence of IDVP, particularly at L5S1 (p < 0.01). A total of 292 patients had IDVP with additional degenerative pathologies, which were more likely to occur at the level of an isthmic spondylolisthesis, adjacent to a previous fracture, or suprajacent to a lumbosacral transitional vertebra (p < 0.05). Conclusions: This study identified the prevalence and severity of IDVP in an ageing population. Sagittal patterns that influence the pattern of IDVP, such as pelvic incidence and degenerative pathologies, provide novel insights into the function of ageing spines. abstract_id: PUBMED:27627708 Evolution of Design of Interbody Cages for Anterior Lumbar Interbody Fusion. Anterior lumbar interbody fusion (ALIF) is one of the surgical procedures for the relief of chronic back pain, radiculopathy, and neurogenic claudication in patients with degenerative lumbar spine disease refractory to conservative therapy, low-grade spondylolisthesis, and pseudarthrosis. Over the past half century, both the surgical techniques and the instrumentation required for ALIF have changed significantly. In particular, the designs of ALIF cages and the materials used have evolved dramatically, the common goal being to improve fusion rates and optimize clinical outcomes. The increasing popularity of ALIF is reflected by the increasing abundance of published studies reporting clinical outcomes, surgical techniques, and grafting options for ALIF. Developments in cage design include cylindrical Bagby and Kuslich, cylindrical Ray, cylindrical mesh, lumbar-tapered, polyetheretherketone (PEEK), and integral fixation cages. Biologic implants include bone dowels and femoral ring allografts. Methods for optimization of cage design have included cage dimensions, use of novel composite cage materials, and integral fixation technologies. However, the historical development and evolution of cages used for ALIF have not been extensively documented. This article therefore aims to provide an overview of the historical basis for the anterior approach, the evolution in design of ALIF cage implants, and potential future research directions. abstract_id: PUBMED:29732430 Effect of interbody fusion cage on clinical and radiological outcome of surgery in L4-L5 lumbar degenerative spondylolisthesis. Background: Lumbar degenerative spondylolisthesis (LDS) is a degenerative slippage of a lumbar vertebra relative to the adjacent vertebra below that can be clinically symptomatic. We aimed to compare the radiological and clinical outcomes of surgery for L4-L5 LDS with or without an interbody fusion cage. Methods: We studied 60 patients (mean age, 51.3±13.2 years; follow-up, 53.3±14.6 months) with L4-L5 LDS who had been treated by decompression and pedicle screw and rod instrumentation with posterolateral fusion (PLF), without and with transforaminal lumbar interbody fusion (TLIF), in equally divided groups A and B, respectively.
Intraoperative parameters were recorded, and outcome assessment was carried out using the visual analog scale (VAS), the Oswestry Disability Index (ODI) questionnaire, subjective satisfaction, radiologic union, and loss of reduction. Differences were assessed with t-tests and cross-tabulations. Results: Using an interbody fusion cage was associated with a significant increase in intraoperative blood loss, operative time, and instrument cost. Improvements in VAS and ODI were significant in both groups but comparable between them. The satisfaction rate was higher in group A, although not significantly so (P = 0.52). Fusion rate and loss of correction were more favorable in group B, but these differences were not significant. Conclusions: In the surgical treatment of patients with L4-L5 LDS, an interbody fusion cage probably does not significantly improve the radiologic and clinical outcomes and may also be associated with more complications and morbidity. abstract_id: PUBMED:24353968 A radiological comparison of anterior fusion rates in anterior lumbar interbody fusion. Aim: To compare anterior fusion in standalone anterior lumbar interbody fusion (ALIF) using cage and screw constructs and anterior cage-alone constructs with posterior pedicle screw supplementation but without posterior fusion. Methods: Eighty-five patients underwent single- or two-level ALIF procedures for degenerative disk disease or lytic spondylolisthesis (SPL). Posterior instrumentation was performed without posterior fusion in all cases of lytic SPL and when the anterior cage used did not have anterior screw-through-cage fixation. Results: Seventy (82%) patients had adequate radiological follow-up at a mean of 19 months. Forty patients had anterior surgery alone (24 single level and 16 two levels) and 30 had front-back surgery (15 single level and 15 two levels). Anterior locked pseudarthrosis was only seen in the anterior surgery-alone group when using the STALIF cage (Surgicraft, Worcestershire, UK) (37 patients). This occurred in five of the single-level surgeries (5/22) and nine of the two-level surgeries (9/15). Fusion was achieved in 100% of the front-back group and only 65% (26/40) of the anterior surgery-alone group. Conclusion: Posterior pedicle screw supplementation without posterolateral fusion improves the fusion rate of ALIF when using anterior cage and screw constructs. We would recommend supplementary posterior fixation, especially in cases where more than one level is being operated on. Answer: The surgical outcomes of degenerative lumbar spondylolisthesis with anterior vacuum disc suggest that the use of an intervertebral cage can indeed overcome the intradiscal vacuum phenomenon and enhance posterolateral fusion. A study reviewed the medical records of 72 patients who underwent instrumented fusion for one-segment degenerative lumbar spondylolisthesis with a vacuum disc. The patients were divided into two groups: those who received augmentation with an intervertebral cage (cage group) and those who did not (noncage group). The results showed that the cage group had more intradiscal lordosis, less translation, and a higher disc height after surgery, and these advantages were maintained at the final follow-up. Moreover, the posterolateral fusion rate was significantly higher in the cage group (92.3% vs. 56.1%), and they obtained more improvement in the Oswestry Disability Index (ODI) and a higher final satisfaction rate.
These findings led to the conclusion that the vacuum sign at the spondylolisthesis segment should be considered a sign of instability and that instrumented posterolateral fusion with intervertebral fusion using a cage can effectively address this issue (PUBMED:25104604). Additionally, other studies have found that the presence of the intradiscal vacuum phenomenon (IVP) can result in greater correction of disc height and restoration of segmental lordosis when treated with lateral lumbar interbody fusion (PUBMED:27982763). Furthermore, the effects of an anterior vacuum disc on surgical outcomes were found to differ between degenerative and spondylolytic spondylolisthesis, with vacuum discs in the degenerative group showing more instability and a higher posterolateral pseudoarthrosis rate (PUBMED:25277044). In summary, the evidence suggests that the use of an intervertebral cage in the surgical treatment of degenerative lumbar spondylolisthesis with an anterior vacuum disc can overcome the challenges posed by the intradiscal vacuum phenomenon and improve posterolateral fusion rates and clinical outcomes.
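For readers who want to see how a fusion-rate contrast like the 92.3% vs. 56.1% above reaches p < 0.001, a minimal two-proportion z-test in Python is sketched below. The cage-group counts (36/39) reproduce the reported 92.3%; the abstract does not state the denominator behind 56.1%, so the 23/41 used here is an illustrative assumption chosen only to match that percentage, and the published analysis may have used a different test.

from math import sqrt, erfc

def two_proportion_z(k1, n1, k2, n2):
    """Pooled two-proportion z statistic with a two-sided normal p-value."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))

# 36/39 = 92.3% (reported); 23/41 = 56.1% (illustrative counts).
z, p = two_proportion_z(36, 39, 23, 41)
print(f"z = {z:.2f}, two-sided p = {p:.1g}")  # prints a p well below 0.001

With samples this small, an exact test (e.g., Fisher's) would be the more defensible choice in practice; the z-test is used here only because it makes the arithmetic transparent.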
Instruction: Can lay people deliver asthma self-management education as effectively as primary care based practice nurses? Abstracts: abstract_id: PUBMED:18281394 Can lay people deliver asthma self-management education as effectively as primary care based practice nurses? Objectives: To determine whether well trained lay people could deliver asthma self-management education with comparable outcomes to those achieved by primary care based practice nurses. Design: Randomised equivalence trial. Setting: 39 general practices in West London and North West England. Participants: 567 patients with asthma who were on regular maintenance therapy. 15 lay educators were recruited and trained to deliver asthma self-management education. Intervention: An initial consultation of up to 45 min offered either by a lay educator or a practice-based primary care nurse, followed by a second, shorter face-to-face consultation and telephone follow-up for 1 year. Main Outcome Measures: Unscheduled need for healthcare. Secondary Outcome Measures: Patient satisfaction and need for courses of oral steroids. Results: 567 patients were randomised to care by a nurse (n = 287) or a lay educator (n = 280), and 146 and 171, respectively, attended the first face-to-face educational session. During the first two consultations, management changes were made in 35/146 patients seen by a practice nurse (24.0%) and in 56/171 patients (32.7%) seen by a lay educator. For 418/567 patients (73.7%), we have 1 year data on use of unscheduled healthcare. Under an intention-to-treat approach, 61/205 patients (29.8%) in the nurse-led group required unscheduled care compared with 65/213 (30.5%) in the lay-led group (90% CI for difference -8.1% to 6.6%; 95% CI for difference -9.5% to 8.0%). The 90% CI contained the predetermined equivalence region (-5% to +5%), giving an inconclusive result regarding the equivalence of the two approaches. Although all patients had been prescribed regular maintenance therapy, 122/418 patients (29.2%) required courses of steroid tablets during the course of 1 year. Patient satisfaction following the initial face-to-face consultation was similar in both groups. Conclusions: It is possible to recruit and train lay educators to deliver a discrete area of respiratory care, with comparable outcomes to those seen by nurses. abstract_id: PUBMED:22958541 Nurse-led versus lay educators support for those with asthma in primary care: a costing study. Background: Regular review and support for asthma self-management is promoted in guidelines. A randomised controlled trial suggested that unscheduled health care usage was similar when patients were offered self-management support by lay-trainers or practice nurses. Methods: Following the RCT, a costing study was undertaken using the trial data to account for the cost of delivery of the service under both strategies and the resulting impact on unscheduled healthcare (the measure of effectiveness) in this trial. Results: One-year data (n = 418) showed that 29% (61/205) of the nurse group required unscheduled healthcare (177 events) compared with 30.5% (65/213) for lay-trainers (178 events). The training costs for the lay-trainers were greater than for nurses (£36 versus £18 per patient, respectively; p < 0.001); however, the consultation costs for lay-trainers were lower than for nurses (£6 per patient versus £24, p < 0.001).
If the costs of unscheduled healthcare are accounted for, then the cost for nurses is £161, versus £135 for lay-trainers (mean difference £25 [95% CI -£97 to £149], p = 0.681). The total costs (delivery and unscheduled healthcare) were £202 per patient for nurses versus £178 for lay-trainers (mean difference £24 [95% CI -£100 to £147], p = 0.707). Conclusions: There were no significant differences in the cost of training and healthcare delivery between nurse and lay trainers, and no significant difference in the cost of unscheduled health care use. abstract_id: PUBMED:32024406 Psychosocial issues need more attention in COPD self-management education. Objective: To find out how regularly the contents of patient education regarded as essential for COPD patients' self-management are provided by healthcare professionals in specialised healthcare (SHC) and primary healthcare (PHC) in Finland. Design: A cross-sectional study based on an e-questionnaire with 42 items on the content of self-management education of COPD patients. Setting: The study sample included all public SHC units with pulmonary outpatient clinics (n = 29) and nine out of 160 health centres in Finland. Subjects: 83 doctors and 162 nurses. Main outcome measures: The respondents' answers on how regularly they included the contents regarded as essential for COPD patients' self-management in their education of COPD patients. Results: COPD patients were educated regularly on medical issues regarding COPD treatment, such as smoking cessation, exercise, and pharmacological treatment. However, issues vital for coping with the disease, such as psychological well-being, stress management, or fatigue, were often ignored. Patient education in SHC seemed to be more systematic than education in PHC. The education provided by the asthma/COPD nurses (n = 70) was more systematic than the education provided by the other nurses (n = 84). Conclusion: Healthcare professionals' continuing education should cover not only the medical but also the psychosocial aspects of coping with COPD. The role of doctors and nurses should be considered to ensure that there is no gap in COPD patients' education. Training asthma/COPD nurses and promoting specialised nurse-led asthma/COPD clinics in primary care could be beneficial while improving practices of patient education that enhance patients' ability to cope with the disease. Key points: Issues vital for coping with chronic obstructive pulmonary disease (COPD), such as psychological well-being, stress, and fatigue, are irregularly included in self-management education both in primary and specialised healthcare. Patient education provided by asthma/COPD nurses is more regular than patient education provided by other nurses. The distribution of work between doctors and nurses should be considered to ensure that there is no gap in COPD patients' education. abstract_id: PUBMED:29753036 Explaining the Amount and Consistency of Medical Care and Self-Management Support in Asthma: A Survey of Primary Care Providers in France and the United Kingdom. Background: The quality of asthma primary care may vary between countries, health care practices, and health care professionals (HCPs). Identifying and explaining such differences is critical for health services improvement. Objectives: To examine the quality of asthma primary care in France and the United Kingdom, and to identify within-country and between-country predictors amenable to intervention.
Methods: An online questionnaire to capture asthma medical care and self-management support, practice characteristics, and psychosocial determinants was completed by 276 HCPs. Mokken scaling analyses were used to examine item structure and consistency. Hierarchical regression analyses were used to identify predictors of the amount (number of asthma care activities HCPs delivered) and consistency (the degree to which HCPs deliver similar care) of asthma medical care and self-management support. Results: On average, HCPs reported delivering 74.2% of guideline-recommended care. Consistency of medical care and self-management support was lower among HCPs delivering a lower amount of care (r = 0.58 and r = 0.57, P < .001). UK HCPs provided more, and more consistent, asthma self-management support (but not medical care) than French HCPs, which was explained by the presence of practice nurses in the United Kingdom. More training, positive social norms, and higher behavioral control explained better quality of care across all HCPs. Conclusions: Using carefully developed questionnaires and advanced psychometric analyses, this study suggests that involving practice nurses, making social expectations visible, and providing more training to enhance skills and confidence in asthma care delivery could enhance the amount and consistency of asthma primary care. This needs to be corroborated in a future intervention trial. abstract_id: PUBMED:27566627 Internet-based self-management support for adults with asthma: a qualitative study among patients, general practitioners and practice nurses on barriers to implementation. Objectives: The aim of this study was to explore barriers among patients, general practitioners (GPs) and practice nurses to implementing internet-based self-management support, as provided by PatientCoach, for asthma in primary care. Setting: Primary care within South Holland, the Netherlands. Participants: Twenty-two patients (12 women, mean age 38 years), 21 GPs (6 women, mean age 52 years) and 13 practice nurses (all women, mean age 41 years). Design: A qualitative study using focus groups and interviews. Outcomes: Barriers as perceived by patients, GPs and practice nurses to implementation of PatientCoach. Methods: 10 focus groups and 12 interviews were held to collect data: 4 patient focus groups, 4 GP focus groups, 2 practice nurse focus groups, 2 patient interviews, 5 GP interviews and 5 practice nurse interviews. A prototype of PatientCoach that included modules for coaching, personalised information, asthma self-monitoring, a medication treatment plan, feedback, e-consultations and a forum was demonstrated. A semistructured topic guide was used. Directed content analysis was used to analyse the data. Reported barriers were classified according to a framework by Grol and Wensing. Results: A variety of barriers emerged among all participant groups. Barriers identified among patients included a lack of a patient-professional partnership in using PatientCoach and a lack of perceived benefit in improving asthma symptoms. Barriers identified among GPs included a low sense of urgency towards asthma care and current work routines. Practice nurses identified a low level of structured asthma care and a lack of support by colleagues as barriers. Among all participant groups, insufficient ease of use of PatientCoach, a lack of financial arrangements, and patient characteristics such as a lack of asthma symptoms were reported as barriers.
Conclusions: We identified a variety of barriers to the implementation of PatientCoach. An effective implementation strategy for internet-based self-management support in asthma care should focus on these barriers. abstract_id: PUBMED:17662568 Lay educators in asthma self management: reflections on their training and experiences. Objective: To capture the experiences and feelings of lay educators in an asthma self-management programme to aid understanding of optimal methods of recruitment, training and retention, and to enhance their value within the programme. Methods: This was a multi-site randomised controlled equivalence trial of asthma educators and primary care practice-based nurses, during which the educators were asked to keep a diary of their experience. A qualitative thematic analysis of these diaries was undertaken. Results: Eight lay educators supplied diaries. From these diaries emerged personal reasons for involvement in the programme, constructive comments on the training programme, a need for preparation for the realities of clinical practice, and a need for significant ongoing support and training. Conclusion: Lay educators are a potential resource for giving self-management education to patients with long-term conditions such as asthma. However, there are some considerations that need to be taken into account regarding contracts, retention and continual support. Practice Implications: Lay educators need a flexible but comprehensive training programme, contracts, and on-site mentoring and support. They seem most contented when welcomed by health professionals and treated as part of the team. abstract_id: PUBMED:28720751 Exploring the perspectives of clinical professionals and support staff on implementing supported self-management for asthma in UK general practice: an IMP2ART qualitative study. Despite an overwhelming evidence base, supported self-management of asthma is poorly implemented into routine practice. Strategies for implementation must address organisational routines, as well as provide resources for patients and training to improve professionals' skills. We aimed to explore the priority that primary care practices attach to asthma self-management, to describe their existing asthma management routines, and to generate innovative implementation strategies. We recruited 33 participants (23 general practitioners, seven nurses, and three administrative staff) from 14 general practices. The 12 interviews and three focus groups were transcribed, coded and analysed thematically. Supported self-management was largely a nurse-led task within clinic-based annual reviews. Barriers included poor attendance at asthma clinics, lack of time, demarcation of roles, limited access to a range of tailored resources, and competing agendas in consultation, often due to multimorbidity. Suggestions for initiatives to improve the provision of supported self-management included emphasising the evidence for benefit (to influence prioritisation), improving teamwork (including team-based education), and organisational strategies (including remote consulting), which need to fit within existing practice routines. Technology offers some potential solutions (e.g., improved templates, 'app'-based plans), but must be integrated with the practice information technology systems.
Building on these insights, we will now develop a theoretically based implementation strategy that will address patient, professional, and organisational buy-in, provide team-based education, and offer a range of practical options and tools, which can be adapted and integrated within the existing routines of individual practices. OVERCOMING THE ORGANISATIONAL BARRIERS TO IMPLEMENTING ASTHMA SELF-MANAGEMENT: Understanding the routines of primary care practices can suggest strategies to implement supported self-management in general practice. Supported self-management of asthma, including the provision of individual action plans, improves patient health and reduces the burden on healthcare services, but is not well implemented in routine practice. As part of a large-scale programme to implement self-management into UK general practice, Hilary Pinnock at the University of Edinburgh and co-workers conducted interviews and focus groups with 33 participants from 14 general practices to explore the organisational routines that hinder or enable professionals to provide supported asthma self-management. Poor attendance at asthma clinics, demarcation of roles, lack of time and limited access to tailored resources were identified as specific barriers. Suggested improvements included better teamwork between doctors and other healthcare professionals, comprehensive training, and improvements to IT systems.
In our experience, there were important benefits of working together as an educationalist/researcher team. The education programme, a core component of the implementation strategy, is now being tested in the IMP2ART UK-wide cluster randomised controlled trial. abstract_id: PUBMED:33959682 Theory-Based Advanced Nursing Practice: A Practice Update on the Application of Orem's Self-Care Deficit Nursing Theory. Introduction: Many researchers have commended the self-care deficit nursing theory (SCDNT) developed by Orem as a means of improving patients' health outcomes through nurses' contributions. However, experimental research has investigated specific aspects of SCDNT, such as self-care agency and self-care requisites, rather than how the construct is practiced and understood as a whole. The current research presents a case study in which an advanced practice nurse (APN) used SCDNT-led practice within a primary healthcare setting, illustrating how the theory is applied to case management. Methods: A case study was conducted by observing an APN during her work in the asthma clinic of a public hospital in Hong Kong. A comparison was made between the case management of the APN under observation and the nursing processes stipulated by the SCDNT across four key operations: diagnostic, prescriptive, treatment or regulatory, and case management. Conclusion: During the observed consultation, the APN applied the four key operations. In SCDNT, the role of the APN is to apply practical nursing knowledge by determining how a patient can best undertake self-care within the circumstances of their living arrangements and support facilities. The case study also demonstrated that SCDNT-based nursing practice has strengths and limitations in a primary healthcare setting. The study concluded that Orem's SCDNT serves as an appropriate theoretical framework for nursing practice within primary healthcare settings. One practical consequence of using SCDNT is that it enables APNs to use nurse-sensitive indicators when evaluating their clinical practice. This study offers a practice update to increase the accountability of nursing practice for nurse-led healthcare services. abstract_id: PUBMED:29634337 Supporting self-management of asthma through patient education. Asthma affects people worldwide. In developed countries 1 in 12 individuals suffer from asthma, while in Ireland this ratio is closer to 1 in 10. Managing asthma symptoms and triggers reduces the potential for exacerbations and asthma attacks. This article identifies the importance of asthma management, triggers, inhaler techniques and self-management for optimal health. Education by nurses and health professionals can make a significant contribution to asthma care and self-management. The purpose of patient education for self-management of asthma is twofold: to raise awareness of effective inhaler technique and to support self-management of asthma triggers for health and symptom control. Answer: The research suggests that lay people can deliver asthma self-management education with comparable outcomes to those achieved by primary care-based practice nurses. A randomized equivalence trial involving 567 patients with asthma who were on regular maintenance therapy compared the effectiveness of lay educators to practice nurses in delivering asthma self-management education. The study found that during the first two consultations, management changes were made in a similar proportion of patients seen by practice nurses and lay educators.
Additionally, the use of unscheduled healthcare was nearly equivalent between the nurse-led group and the lay-led group; the confidence intervals contained the predetermined equivalence region, so the result was inconclusive regarding the strict equivalence of the two approaches. Patient satisfaction following the initial face-to-face consultation was similar in both groups, indicating that lay educators can be recruited and trained to deliver respiratory care effectively (PUBMED:18281394). A costing study using data from the same trial showed no significant differences in the cost of training and healthcare delivery between nurse and lay trainers, and no significant difference in the cost of unscheduled health care use. This suggests that lay educators could be a cost-effective alternative to practice nurses for delivering asthma self-management support (PUBMED:22958541). In conclusion, the evidence indicates that with proper recruitment, training, and support, lay people can deliver asthma self-management education as effectively as primary care-based practice nurses.
Instruction: Health services use of Manitoba First Nations people: is it related to underlying need? Abstracts: abstract_id: PUBMED:15686152 Health services use of Manitoba First Nations people: is it related to underlying need? Background: To compare health status and health services use of Registered First Nations to all other Manitobans (AOM). If the Canadian health care system is meeting underlying need, those experiencing the greatest burden of morbidity and mortality should show the highest rates of health service use. Methods: Registered First Nations' (n = 85,959) hospitalization and physician visit rates were compared to rates of all other Manitobans (n = 1,054,422) for fiscal year 1998/99. The underlying "need" for health care was measured using premature mortality (PMR), an age- and sex-adjusted rate of death before age 75. Data were derived from Manitoba's Population Health Research Data Repository, linked to federal Status Verification System files to determine Registered First Nations status. Results: Registered First Nations' PMR was double the rate of all other Manitobans (6.61 vs. 3.30 deaths per thousand, p < 0.05). Registered First Nations ambulatory physician visit rates (6.13 vs. 4.85 visits per person, p < 0.05), hospital separation rates (0.348 vs. 0.156 separations per person, p < 0.05) and total days of hospital care (1.75 vs. 1.05 days per person, p < 0.05) were higher than AOM rates. Consultation rates (first visit to a specialist) were slightly higher for Registered First Nations (0.29 vs. 0.27 visits per person, p < 0.05), and overall specialist visit rates were lower (0.895 vs. 1.284 visits per person, p < 0.05) compared with AOM. Conclusion: Although hospitalization and ambulatory physician visit rates for First Nations reflect their poorer health status, consult and specialist rates do not reflect the underlying need for health care services. abstract_id: PUBMED:15686146 Building a health research relationship between First Nations and the University in Manitoba. This paper describes the emergence of a formal partnership between Manitoba First Nations and researchers in the Department of Community Health Sciences at the University of Manitoba. This partnership reflects two decades of a working relationship in Manitoba involving university researchers and First Nations communities, as well as new and innovative approaches to developing organizations, training initiatives and projects that strengthen First Nations principles of governance. The emerging trust that has developed between the Manitoba First Nations and the University has made it possible to extend this partnership into building further research capacity and evidence-based decision-making among First Nations. Discussions between the Assembly of Manitoba Chiefs and the Northern Health Research Unit resulted in the development of a Manitoba First Nations Centre for Aboriginal Health Research. Its mission is to initiate, coordinate and support research activities designed to assist First Nations and Aboriginal communities and organizations in their efforts to promote healing, wellness and improved health services in their communities. Much of the health research described in this journal was facilitated through this partnership, which demonstrated the value of partnerships and new funding opportunities to better address the health information needs of First Nations communities, particularly at a time when Aboriginal communities were highly skeptical of the value of academic research. 
abstract_id: PUBMED:29018072 Association between home visiting interventions and First Nations families' health and social outcomes in Manitoba, Canada: protocol for a study of linked population-based administrative data. Introduction: First Nations people are descendants of Canada's original inhabitants. In consequence of historical and ongoing structural injustices, many First Nations families struggle with challenging living conditions, including high rates of poverty, poor housing conditions, mental illness and social isolation. These risk factors impede caregivers' abilities to meet their children's basic physical and psychosocial needs. Home visiting programmes were developed to support child developmental health in families facing parenting challenges. However, whether home visiting is an effective intervention for First Nations families has not been examined. We are evaluating two home visiting programmes in Manitoba, Canada, to determine whether they promote nurturing family environments for First Nations children. Methods And Analysis: This research builds on new and established relationships among academic researchers, government decision-makers and First Nations stakeholders. We will link health, education and social services data from the Manitoba Population Research Data Repository to data from two home visiting programmes in Manitoba. Logistic regression modelling will be used to assess whether programme participation is associated with improved child developmental health, better connections between families and social services, reduced instances of child maltreatment and of children being taken into out-of-home care by child welfare, and reduced inequities for First Nations families. Non-participating individuals with similar sociodemographic characteristics will serve as comparators. We will use an interrupted time series approach to test for differences in outcomes before and after programme implementation and a propensity score analysis to compare differences between participants and non-participants. Ethics And Dissemination: Approvals were granted by the Health Information Research Governance Committee of the First Nations Health and Social Secretariat of Manitoba and the University of Manitoba Health Research Ethics Board. Our integrated knowledge translation approach will involve disseminating findings through government and community briefings, developing lay summaries and infographics, presenting at academic conferences and publishing in scientific journals.
Results: Hospitalization rates for acute, chronic, vaccine-preventable, and mental health-related ACSCs have decreased over time in First Nation communities, yet remain significantly higher in First Nations and remote non-First Nations communities as compared with other Manitobans. When comparing different models of care, hospitalization rates were historically higher in communities served by health centres/offices, whether or not supplemented by itinerant medical services. These rates have significantly declined over the past two decades. Conclusion: Local access to a broader complement of PHC services is associated with lower rates of avoidable hospitalization in First Nation communities. The lack of these services in many First Nation communities demonstrates the failure of the current Canadian healthcare system to meet the need of First Nation peoples. Improving access to PHC in all 63 First Nation communities can be expected to result in a reduction in ACSC hospitalization rates and reduce healthcare costs. abstract_id: PUBMED:11428225 The utilization of antenatal services in remote Manitoba First Nations communities. The purpose of this study was to describe the utilization of antenatal services by First Nations women in four northern Manitoba communities between January 1996 and December 1996, and to explore possible relationships between the women's behaviors and antenatal clinic attendance. This study indicated that First Nations women received an optimal level of antenatal service. On average, the women first came to the nursing station in the ninth week of pregnancy and saw the health care provider, generally a nurse, ten times for routine visits before maternal evacuation. A minority of women, however, had fewer than five visits. The frequency of routine antenatal clinic attendance was modelled using linear multiple regression. A higher number of past pregnancies was associated with a decrease in the number of routine antenatal visits, while the number of pregnancy losses predicted the number of visits, after other variables had been taken into account. Being married was a positive predictor of early initiation of care, after past pregnancies and risk score had been taken into account. abstract_id: PUBMED:32720215 Manitoba First Nation peoples' use of hospital-based mental health services: trends and solutions. Objectives: The objective of this article is to document patterns and trends of in-hospital mental health service use by First Nations (FN) living in rural and remote communities in the province of Manitoba. Methods: Our sample included all Manitoba residents eligible under the Manitoba Health Services Insurance Plan living on FN reserves and those living in rural and remote communities from 1986 to 2014. Using administrative claims data, we developed multi-level models that describe hospitalization for mental health conditions shown to be responsive to primary healthcare interventions. We aggregated the results by First Nation Tribal Councils and remoteness to derive rates of hospitalization episodes, length of stay and readmission rates. Results: Rates of hospitalization for mental health are increasing for FN males and females. This is particularly evident for those affiliated with the Island Lake and Keewatin Tribal Councils. The length of stay has increased. Changes in rates of readmissions were not statistically significant.
FNs are admitted for mental health conditions at a younger age when compared with other Manitobans, and trends show that FNs' average age at admission is decreasing. Conclusions: Our results raise serious concerns about the responsiveness of community-based mental health services for FNs in Manitoba, because of both increasing rates of episodes of hospitalization and the decreasing age at admission. Given the documented lack of mental health services accessible on-reserve, the levels of social distress associated with a history of oppressive policies, and the continued lack of infrastructure, current trends are alarming. abstract_id: PUBMED:33438942 Early childhood oral health promotion for First Nations and Métis communities and caregivers in Manitoba. Introduction: Early childhood caries is a public health concern, and the considerable burden exhibited by Indigenous children highlights the oral health inequities across populations in Canada. Barriers include lack of access to oral health care and lack of culturally appropriate oral health promotion. The purpose of this study was to determine where and how First Nations and Métis parents, caregivers and community members learn about caring for young children's oral health, and what ideas and suggestions they have on how to disseminate information and promote early childhood oral health (ECOH) in Indigenous communities. Methods: Sharing circles and focus groups engaged eight groups of purposively sampled participants (n = 59) in four communities in Manitoba. A grounded theory approach guided thematic analysis of audio-recorded and transcribed data. Results: Participants said that they learned about oral health from parents, caregivers and friends, primary care providers, prenatal programs, schools and online. Some used traditional medicines. Participants recommended sharing culturally appropriate information through community and prenatal programs and workshops; schools and day care centres; posters, mailed pamphlets and phone communication (calls and text messages) to parents and caregivers; and via social media. Distributing enticing and interactive oral hygiene products that appeal to children was recommended as a way to encourage good oral hygiene. Conclusion: Evidence-based oral health information and resources tailored to First Nations and Métis communities could, if strategically provided, reach more families and shift the current trajectory for ECOH. abstract_id: PUBMED:20554364 Have investments in on-reserve health services and initiatives promoting community control improved First Nations' health in Manitoba? The objective of this study was to document the relationship between First Nations' community characteristics and the rates of hospitalization for Ambulatory Care Sensitive Conditions (ACSC) in the province of Manitoba, Canada. A population-based time trend analysis of selected ACSC was conducted using the de-identified administrative data housed at the Manitoba Centre for Health Policy, including vital statistics and health information. The study population included all Manitoba residents eligible under the universal Manitoba Health Services Insurance Plan and living on First Nation reserves between 1984/85 and 2004/05. Twenty-nine ACSC, defined using 3-, 4- and 5-digit ICD-9-CM and ICD-10-CM codes, permitted cross-sectional and longitudinal comparison of hospitalization rates. The analysis used Generalized Estimating Equation (GEE) modeling.
Two variables were significant in our model: the level of access to primary health care on-reserve and the level of local autonomy. Communities with local access to a broader complement of primary health care services showed a lower rate of hospitalization for ACSC. We also examined whether there was a significant trend in the rates of hospitalization for ACSC over time following the signature of an agreement increasing local autonomy over resource allocation. We found that the rates of hospitalization for ACSC decreased with each year following the signature of such an agreement. This article demonstrates that communities with better local access to primary health care consistently show lower rates of ACSC. Secondly, the longer community health services have been under community control, the lower the community's ACSC rate. abstract_id: PUBMED:27417420 Characteristics and residence of First Nations patients and their use of health care services in Saskatchewan, Canada: informing First Nations and Métis health services. Rationale: In Canada, Aboriginal people, particularly First Nations, experience significant health disparities and a lower health status compared with the non-Aboriginal population. Aims And Objectives: The purpose of this study was to examine the demographic and clinical characteristics of First Nations patients admitted to hospitals in Saskatchewan, and the acute care services used by First Nations compared with non-First Nations, specifically hospital length of stay. The primary residence of patients was also mapped to determine geographic patterns that would inform health service provision. Methods: A retrospective medical chart audit was carried out to collect data on patient demographics and clinical characteristics of 203 First Nations and 200 non-First Nations patients admitted to two urban hospitals in Saskatchewan from 2012 to 2014. Results: The most common reason for admission of First Nations and non-First Nations patients was infection (24.6%) and cardiology conditions (19.5%), respectively. There was no significant difference in mean length of stay for First Nations (10.44 days) compared with non-First Nations (10.57 days). After adjusting for age, the mean length of stay for First Nations was nearly 3 days longer than for non-First Nations. First Nations patients' residences spanned the broad geography of Saskatchewan, with 45% living in rural communities or on reserve. Conclusions: The outcomes of this study are clinically meaningful and support the need for research, in collaboration with Aboriginal patients and families, to further examine and improve the care experience in order to decrease health disparities for Aboriginal patients in Canada. abstract_id: PUBMED:29439924 Canada's evacuation policy for pregnant First Nations women: Resignation, resilience, and resistance. Background: Aboriginal peoples in Canada comprise First Nations, Métis, and Inuit. Health care services for First Nations who live on rural and remote reserves are mostly provided by the Government of Canada through the federal department, Health Canada. One Health Canada policy, the evacuation policy, requires all First Nations women living on rural and remote reserves to leave their communities between 36 and 38 weeks gestational age and travel to urban centres to await labour and birth.
Although there are a few First Nations communities in Canada that have re-established community birthing and Aboriginal midwifery is growing, most First Nations communities are still reliant on the evacuation policy for labour and birthing services. In one Canadian province, Manitoba, First Nations women are evacuated to The Pas, Thompson, or Winnipeg, but most, including all women with high-risk pregnancies, go to Winnipeg. Aim: To contribute scholarship that describes First Nations women's and community members' experiences and perspectives of Health Canada's evacuation policy in Manitoba. Methods: We applied intersectional theory to data collected through 12 semi-structured interviews with seven women and five community members (four females, one male) in Manitoba who had experienced the evacuation policy. The data were analyzed thematically, which revealed three themes: resignation, resilience, and resistance. Findings: The theme of resignation was epitomized by the quote, "Nobody has a choice." The ability to withstand and endure the evacuation policy, despite poor or absent communication and loneliness, informed the theme of resilience. Resistance was demonstrated by women who questioned the necessity and requirement of evacuation for labour and birth. In one instance, resistance took the form of a planned homebirth with Aboriginal registered midwives. Conclusion: There is a pressing need to improve the maternity care services that First Nations women receive when they are evacuated out of their communities, particularly when understood in light of the specific legal and constitutional position of First Nations women in Manitoba. Answer: The health services use of Manitoba First Nations people does appear to be related to underlying need. The study comparing Registered First Nations' health status and health services use to all other Manitobans (AOM) found that Registered First Nations' premature mortality rate (PMR) was double the rate of AOM, indicating a greater burden of morbidity and mortality. Correspondingly, Registered First Nations had higher rates of ambulatory physician visits, hospital separations, and total days of hospital care compared to AOM. However, although consultation rates (first visit to a specialist) were only slightly higher, overall specialist visit rates were lower for Registered First Nations than for AOM, suggesting that while hospitalization and ambulatory physician visit rates reflect poorer health status, consult and specialist rates do not fully reflect the underlying need for health care services (PUBMED:15686152). Additionally, the relationship between local access to primary healthcare (PHC) and hospitalization rates for ambulatory care sensitive conditions (ACSC) in First Nations communities was assessed. The study found that local access to a broader complement of PHC services is associated with lower rates of avoidable hospitalization in First Nations communities. The lack of these services in many First Nations communities demonstrates the failure of the current Canadian healthcare system to meet the need of First Nation peoples (PUBMED:33125638). Furthermore, investments in on-reserve health services and initiatives promoting community control have been shown to improve First Nations' health in Manitoba. Communities with better local access to primary health care consistently show lower rates of ACSC, and the longer community health services have been under community control, the lower the community's ACSC rate (PUBMED:20554364).
In summary, the use of health services by Manitoba First Nations people is indeed related to underlying need, as evidenced by higher rates of use of certain health services corresponding to a greater burden of disease. However, there are disparities in access to certain types of care, such as specialist services, and improvements in local access to PHC have been associated with better health outcomes.
Instruction: Transforming growth factor β1 inhibits bone morphogenic protein (BMP)-2 and BMP-7 signaling via upregulation of Ski-related novel protein N (SnoN): possible mechanism for the failure of BMP therapy? Abstracts: abstract_id: PUBMED:22958403 Transforming growth factor β1 inhibits bone morphogenic protein (BMP)-2 and BMP-7 signaling via upregulation of Ski-related novel protein N (SnoN): possible mechanism for the failure of BMP therapy? Background: Bone morphogenic proteins (BMPs) play a key role in bone formation. Consequently, it was expected that topical application of recombinant human (rh)BMP-2 and rhBMP-7 would improve the healing of complex fractures. However, up to 36% of fracture patients do not respond to this therapy. There are hints that a systemic increase in transforming growth factor β1 (TGFβ1) interferes with beneficial BMP effects. Therefore, in the present work we investigated the influence of rhTGFβ1 on rhBMP signaling in primary human osteoblasts, with the aim of more specifically delineating the underlying regulatory mechanisms. Methods: BMP signaling was detected by adenoviral Smad-binding-element-reporter assays. Gene expression was determined by reverse transcription polymerase chain reaction (RT-PCR) and confirmed at the protein level by western blot. Histone deacetylase (HDAC) activity was determined using a test kit. Data sets were compared by one-way analysis of variance. Results: Our findings showed that Smad1/5/8-mediated rhBMP-2 and rhBMP-7 signaling is completely blocked by rhTGFβ1. We then investigated expression levels of genes involved in BMP signaling and regulation (for example, Smad1/5/8, TGFβ receptors type I and II, noggin, sclerostin, BMP and activin receptor membrane-bound inhibitor (BAMBI), v-ski sarcoma viral oncogene homolog (Ski), Ski-related novel protein N (SnoN) and Smad ubiquitination regulatory factors (Smurfs)) and confirmed the expression of regulated genes at the protein level. Smad7 and SnoN were significantly induced by rhTGFβ1 treatment, while expression of Smad1, Smad6, TGFβRII and activin receptor-like kinase 1 (Alk1) was reduced. Elevated SnoN expression was accompanied by increased HDAC activity. Addition of an HDAC inhibitor, namely valproic acid, fully abolished the inhibitory effect of rhTGFβ1 on rhBMP-2 and rhBMP-7 signaling. Conclusions: rhTGFβ1 effectively blocks rhBMP signaling in osteoblasts. As a possible mechanism, we postulate an induction of SnoN that increases HDAC activity and thereby reduces the expression of factors required for efficient BMP signaling. Thus, inhibition of HDAC activity may support bone healing during rhBMP therapy in patients with elevated TGFβ serum levels. abstract_id: PUBMED:16786327 Clinical experience with bone morphogenetic protein 7 (BMP 7) in nonunions of long bones. Background: Nonunions of long bone fractures are a considerable and growing therapeutic and economic problem. Basic surgical options are autogenous cancellous bone grafting, rod dynamization, reamed nailing, plate fixation with compression, and bone transport techniques. If these methods fail, alternative treatment options are needed. Material And Methods: Since May 2001, treatment with recombinant human (rh) bone morphogenic protein 7 [BMP-7 or osteogenic protein 1 (OP-1)] in combination with a type I collagen carrier has gained interest. BMP 7 induces the formation of new bone by differentiation of stem cells, thereby initiating the reaction cascade of osteogenesis.
Nonunions persisting for more than 9 months and previously unsuccessful bone grafting defined the indication. Results: We report our experience with 21 patients with nonunions of long bone fractures. Between July 2002 and June 2004, 23 units of BMP 7 were implanted. The implantation sites were 7 femora, 12 tibias, 2 humeri, and 2 forearms. In ten cases, BMP 7 was combined with a new osteosynthesis and bone grafting, and in five patients with bone grafting alone. In contrast, in eight patients BMP 7 was applied as a single procedure without any bone grafting or any change of the osteosynthesis. Conclusion: There were no peri- or postoperative complications. Follow-up was obtained for a minimum of 6 months. Of 23 implantations, 22 were successful, with bony healing revealed by clinical and radiological evaluation. In one patient, no bony consolidation of the nonunion has been seen so far. In summary, based on our results we propose BMP 7 as an additional innovative therapy for long bone nonunions. abstract_id: PUBMED:24099773 The Groucho protein Grg4 suppresses Smad7 to activate BMP signaling. Groucho-related genes encode transcriptional repressor proteins critical for normal developmental processes. The bone morphogenetic proteins belong to the transforming growth factor-β (TGF-β) superfamily and play important signaling roles in development and disease. However, the regulation of BMP signaling, especially within cells, is largely unknown. In this report, we show that expression of the Groucho-related gene Grg4 robustly activates the expression of a BMP reporter gene, as well as enhancing and sustaining the upregulation of the endogenous Id1 gene induced by BMP7. BMP7 administration did not affect the endogenous level of Grg4, nor did it enhance the phosphorylation of receptor-activated Smad proteins. Rather, Grg4 expression reduced the levels of the endogenous inhibitory Smad7, thus increasing the transcriptional responses mediated by BMP-responsive sequences. The data point to a novel mechanism for attenuating BMP signaling through altering the ratio of activating versus inhibitory Smad proteins. abstract_id: PUBMED:22298955 TGF-β and BMP signaling in osteoblast differentiation and bone formation. Transforming growth factor-beta (TGF-β)/bone morphogenic protein (BMP) signaling is involved in the vast majority of cellular processes and is fundamentally important throughout life. TGF-β/BMPs have widely recognized roles in bone formation during mammalian development and exhibit versatile regulatory functions in the body. Signal transduction by TGF-β/BMPs occurs through both canonical Smad-dependent pathways (TGF-β/BMP ligands, receptors and Smads) and non-canonical Smad-independent signaling pathways (e.g. the p38 mitogen-activated protein kinase pathway, MAPK). Following TGF-β/BMP induction, both the Smad and p38 MAPK pathways converge at the Runx2 gene to control mesenchymal precursor cell differentiation. The coordinated activity of Runx2 and TGF-β/BMP-activated Smads is critical for formation of the skeleton. Recent advances in molecular and genetic studies using gene targeting in mice enable a better understanding of TGF-β/BMP signaling in bone and in the signaling networks underlying osteoblast differentiation and bone formation. This review summarizes the recent advances in our understanding of TGF-β/BMP signaling in bone from studies of genetic mouse models and human diseases caused by the disruption of TGF-β/BMP signaling.
This review also highlights the different modes of cross-talk between TGF-β/BMP signaling and the signaling pathways of MAPK, Wnt, Hedgehog, Notch, and FGF in osteoblast differentiation and bone formation. abstract_id: PUBMED:33935961 microRNA-214-3p Suppresses Ankylosing Spondylitis Fibroblast Osteogenesis via BMP-TGFβ Axis and BMP2. Recent investigations suggest microRNAs (miRs) exert functions in fibroblast osteogenesis in ankylosing spondylitis (AS), an inflammatory rheumatic disease. However, the mechanism of miR-214-3p in osteogenic differentiation in AS remains poorly understood. In this study, fibroblasts were obtained from the capsular ligament of patients with AS and patients with femoral neck fracture, cultured for osteogenic induction, and characterized. The roles of miR-214-3p and bone morphogenic protein 2 (BMP2) in AS fibroblast osteogenesis were assessed via gain- and loss-of-function experiments, alizarin red S staining, and alkaline phosphatase (ALP) detection. Levels of miR-214-3p, BMP2, osteogenic differentiation-related proteins, and BMP-TGFβ axis-related proteins were further measured. miR-214-3p was downregulated in AS fibroblasts, which showed enhanced ALP activity and calcium nodule formation; these effects were reversed by miR-214-3p overexpression. BMP2 was a target gene of miR-214-3p and promoted AS fibroblast osteogenesis by activating the BMP-TGFβ axis, while miR-214-3p inhibited AS fibroblast osteogenesis by targeting BMP2. Together, these results indicate that miR-214-3p can prevent AS fibroblast osteogenic differentiation by targeting BMP2 and blocking the BMP-TGFβ axis. This study may offer a novel insight into AS treatment. abstract_id: PUBMED:30602563 High-throughput screens for agonists of bone morphogenetic protein (BMP) signaling identify potent benzoxazole compounds. Bone morphogenetic protein (BMP) signaling is critical in renal development and disease. In animal models of chronic kidney disease (CKD), re-activation of BMP signaling is reported to be protective by promoting renal repair and regeneration. Clinical use of recombinant BMPs, however, requires harmful doses to achieve efficacy and is costly because of BMPs' complex synthesis. Therefore, alternative strategies are needed to harness the beneficial effects of BMP signaling in CKD. Key aspects of the BMP signaling pathway can be regulated by both extracellular and intracellular molecules. In particular, secreted proteins like noggin and chordin inhibit BMP activity, whereas kielin/chordin-like proteins (KCP) enhance it and attenuate kidney fibrosis or CKD. Clinical development of KCP, however, is precluded by its size and complexity. Therefore, we propose an alternative strategy to enhance BMP signaling by using small molecules, which are simpler to synthesize and more cost-effective. To address our objective, here we developed a small-molecule high-throughput screen (HTS) with human renal cells having an integrated luciferase construct highly responsive to BMPs. We demonstrate the activity of a potent benzoxazole compound, sb4, that rapidly stimulated BMP signaling in these cells. Activation of BMP signaling by sb4 increased the phosphorylation of key second messengers (SMAD-1/5/9) and also increased expression of direct target genes (inhibitors of DNA binding, Id1 and Id3) in canonical BMP signaling. Our results underscore the feasibility of utilizing HTS to identify compounds that mimic key downstream events of BMP signaling in renal cells and have yielded a lead BMP agonist.
abstract_id: PUBMED:25401122 Bone Morphogenetic Protein (BMP) signaling in development and human diseases. Bone Morphogenetic Proteins (BMPs) are a group of signaling molecules that belongs to the Transforming Growth Factor-β (TGF-β) superfamily of proteins. Initially discovered for their ability to induce bone formation, BMPs are now known to play crucial roles in all organ systems. BMPs are important in embryogenesis and development, and also in maintenance of adult tissue homeostasis. Mouse knockout models of various components of the BMP signaling pathway result in embryonic lethality or marked defects, highlighting the essential functions of BMPs. In this review, we first outline the basic aspects of BMP signaling and then focus on genetically manipulated mouse knockout models that have helped elucidate the role of BMPs in development. A significant portion of this review is devoted to the prominent human pathologies associated with dysregulated BMP signaling. abstract_id: PUBMED:30414136 Pharmacologic Strategies for Assaying BMP Signaling Function. The bone morphogenetic protein (BMP) signaling pathway, a subset of the transforming growth factor β (TGF-β) signaling family, consists of structurally diverse receptors and ligands whose combinatorial specificity encodes autocrine, paracrine, and endocrine signals essential for regulating tissue growth, differentiation, and survival during embryonic patterning and postnatal tissue remodeling. Aberrant signaling of these receptors and ligands is implicated in a variety of inborn and acquired diseases. The roles of various receptors and their ligands can be explored using small molecule inhibitors of the BMP receptor kinases. Several BMP type I receptor kinase inhibitor tool compounds have been described that exhibit sufficient selectivity to discriminate BMP receptor signaling in vitro or in vivo, with various trade-offs in selectivity, potency, cell permeability, and pharmacokinetics. Several methods for assaying BMP function via pharmacologic inhibition are presented. Two in vitro methods, an In-Cell Western assay of BMP-mediated SMAD1/5/8 phosphorylation and an alkaline phosphatase osteogenic differentiation assay, represent efficient high-throughput methodologies for assaying pharmacologic inhibitors. Two in vivo methods are described for assaying the effects of BMP signaling inhibition in embryonic zebrafish and mouse development. Small molecule inhibitors of BMP receptor kinases represent an important complementary strategy to genetic gain- and loss-of-function and ligand-trap approaches for targeting this signaling system in biology and disease. abstract_id: PUBMED:25893161 ABCC6 deficiency is associated with activation of BMP signaling in liver and kidney. Mutations in ABCC6 (ATP-binding cassette, subfamily C, member 6), an orphan transporter expressed in the liver, are the cause of pseudoxanthoma elasticum. Since ABCC6 was reported to affect matrix Gla protein (MGP), an inhibitor of bone morphogenetic proteins (BMPs), we studied BMP signaling and expression in various tissues of mice with and without functional ABCC6. Enhanced BMP signaling was found in all examined tissues in the absence of ABCC6. Despite this, the expression of particular BMP proteins varied widely between tissues. Interestingly, the expression of most BMP proteins in the liver moved in the opposite direction to the same BMP proteins in kidneys in response to ABCC6 alterations.
Thus, ABCC6 deficiency stimulates BMP signaling by acting on the expression of multiple BMPs. abstract_id: PUBMED:11121043 Ski represses bone morphogenic protein signaling in Xenopus and mammalian cells. The bone morphogenic proteins (BMPs) play important roles in vertebrate development. In Xenopus, BMPs act as epidermal inducers and also as negative regulators of neurogenesis. Antagonism of BMP signaling results in neuralization. BMPs signal through cell-surface receptors and downstream Smad molecules. Upon stimulation with BMP, Smad1, Smad5, and Smad8 are phosphorylated by the activated BMP receptors, form a complex with Smad4, and translocate into the nucleus, where they regulate the expression of BMP target genes. Here, we show that the Ski oncoprotein can block BMP signaling and the expression of BMP-responsive genes in both Xenopus and mammalian cells by directly interacting with and repressing the activity of BMP-specific Smad complexes. This ability to antagonize BMP signaling results in neuralization by Ski in the Xenopus embryo and blocking of osteoblast differentiation of murine W-20-17 cells. Thus, Ski is able to repress the activity of all receptor-associated Smads and may regulate vertebrate development by modulating the signaling activity of transforming growth factor-beta family members. Answer: Transforming growth factor β1 (TGFβ1) has been found to inhibit bone morphogenic protein (BMP) 2 and BMP 7 signaling through the upregulation of Ski-related novel protein N (SnoN). This upregulation of SnoN is accompanied by increased histone deacetylase (HDAC) activity. The addition of an HDAC inhibitor, such as valproic acid, can fully abolish the inhibitory effect of TGFβ1 on BMP 2 and BMP 7 signaling. This suggests that the induction of SnoN, which increases HDAC activity, reduces the expression of factors required for efficient BMP signaling. Therefore, the inhibition of HDAC activity may support bone healing during BMP therapy in patients with elevated TGFβ serum levels, providing a possible mechanism for the failure of BMP therapy in some patients (PUBMED:22958403). Clinical experience has shown that BMP 7 can be an effective treatment for long bone nonunions, with successful bony healing in the majority of cases. However, the failure of BMP therapy in up to 36% of fracture patients suggests that there may be factors interfering with BMP signaling, such as the systemic increase in TGFβ1 (PUBMED:16786327).
Instruction: Can visco-elastic phase separation, macromolecular crowding and colloidal physics explain nuclear organisation? Abstracts: abstract_id: PUBMED:17430588 Can visco-elastic phase separation, macromolecular crowding and colloidal physics explain nuclear organisation? Background: The cell nucleus is highly compartmentalized with well-defined domains, it is not well understood how this nuclear order is maintained. Many scientists are fascinated by the different set of structures observed in the nucleus to attribute functions to them. In order to distinguish functional compartments from non-functional aggregates, I believe is important to investigate the biophysical nature of nuclear organisation. Results: The various nuclear compartments can be divided broadly as chromatin or protein and/or RNA based, and they have very different dynamic properties. The chromatin compartment displays a slow, constrained diffusional motion. On the other hand, the protein/RNA compartment is very dynamic. Physical systems with dynamical asymmetry go to viscoelastic phase separation. This phase separation phenomenon leads to the formation of a long-lived interaction network of slow components (chromatin) scattered within domains rich in fast components (protein/RNA). Moreover, the nucleus is packed with macromolecules in the order of 300 mg/ml. This high concentration of macromolecules produces volume exclusion effects that enhance attractive interactions between macromolecules, known as macromolecular crowding, which favours the formation of compartments. In this paper I hypothesise that nuclear compartmentalization can be explained by viscoelastic phase separation of the dynamically different nuclear components, in combination with macromolecular crowding and the properties of colloidal particles. Conclusion: I demonstrate that nuclear structure can satisfy the predictions of this hypothesis. I discuss the functional implications of this phenomenon. abstract_id: PUBMED:34206440 Liquid-Liquid Phase Separation in the Presence of Macromolecular Crowding and State-dependent Kinetics. Biomolecular condensates formed via liquid-liquid phase separation (LLPS) are increasingly being shown to play major roles in cellular self-organization dynamics in health and disease. It is well established that macromolecular crowding has a profound impact on protein interactions, particularly those that lead to LLPS. Although synthetic crowding agents are used during in vitro LLPS experiments, they are considerably different from the highly crowded nucleo-/cytoplasm and the effects of in vivo crowding remain poorly understood. In this work, we applied computational modeling to investigate the effects of macromolecular crowding on LLPS. To include biologically relevant LLPS dynamics, we extended the conventional Cahn-Hilliard model for phase separation by coupling it to experimentally derived macromolecular crowding dynamics and state-dependent reaction kinetics. Through extensive field-theoretic computer simulations, we show that the inclusion of macromolecular crowding results in late-stage coarsening and the stabilization of relatively smaller condensates. At a high crowding concentration, there is an accelerated growth and late-stage arrest of droplet formation, effectively resulting in anomalous labyrinthine morphologies akin to protein gelation observed in experiments. 
These results not only elucidate the crowder effects observed in experiments, but also highlight the importance of including state-dependent kinetics in LLPS models, and may help in designing further experiments to probe the intricate roles played by LLPS in self-organization dynamics of cells. abstract_id: PUBMED:38331392 Macromolecular Crowding, Phase Separation, and Homeostasis in the Orchestration of Bacterial Cellular Functions. Macromolecular crowding affects the activity of proteins and functional macromolecular complexes in all cells, including bacteria. Crowding, together with physicochemical parameters such as pH, ionic strength, and the energy status, influences the structure of the cytoplasm and thereby, indirectly, macromolecular function. Notably, crowding also promotes the formation of biomolecular condensates by phase separation, initially identified in eukaryotic cells but more recently discovered to play key functions in bacteria. Bacterial cells require a variety of mechanisms to maintain physicochemical homeostasis, in particular in environments with fluctuating conditions, and the formation of biomolecular condensates is emerging as one such mechanism. In this work, we connect physicochemical homeostasis and macromolecular crowding with the formation and function of biomolecular condensates in the bacterial cell and compare the supramolecular structures found in bacteria with those of eukaryotic cells. We focus on the effects of crowding and phase separation on the control of bacterial chromosome replication, segregation, and cell division, and we discuss the contribution of biomolecular condensates to bacterial cell fitness and adaptation to environmental stress. abstract_id: PUBMED:38139331 Macromolecular Crowding and DNA: Bridging the Gap between In Vitro and In Vivo. The cellular environment is highly crowded, with up to 40% of the volume fraction of the cell occupied by various macromolecules. Most laboratory experiments take place in dilute buffer solutions; by adding various synthetic or organic macromolecules, researchers have begun to bridge the gap between in vitro and in vivo measurements. This is a review of the reported effects of macromolecular crowding on the compaction and extension of DNA, the effect of macromolecular crowding on DNA kinetics, and protein-DNA interactions. Theoretical models related to macromolecular crowding and DNA are briefly reviewed. Gaps in the literature are discussed, including the use of biologically relevant crowders, the simultaneous use of multi-sized crowders, and empirical connections between macromolecular crowding and liquid-liquid phase separation of nucleic materials. abstract_id: PUBMED:34717122 An improved macromolecular crowding sensor CRONOS for detection of crowding changes in membrane-less organelles under stressed conditions. Membrane-less organelles (MLOs) formed by liquid-liquid phase separation (LLPS) play pivotal roles in biological processes. During LLPS, proteins and nucleotides are extremely condensed, resulting in changes in their conformation and biological functions. Disturbed LLPS homeostasis in MLOs is thought to be associated with fatal diseases such as amyotrophic lateral sclerosis. Therefore, it is important to detect changes in the degree of crowding in MLOs. However, this has not been well investigated, owing to the lack of an appropriate method.
To address this, we developed a genetically encoded macromolecular crowding sensor CRONOS (crowding sensor with mNeonGreen and mScarlet-I) that senses the degree of macromolecular crowding in MLOs using a fluorescence resonance energy transfer (FRET) system. CRONOS is a bright biosensor with a wide dynamic range and successfully detects changes in the macromolecular volume fraction in solution. By fusing it to the scaffold protein of each MLO, we delivered CRONOS to the MLO of interest and detected previously undescribed differences in the degree of crowding in each MLO. CRONOS also detected changes in the degree of macromolecular crowding in the nucleolus induced by environmental stress or inhibition of transcription. These findings suggest that CRONOS can be a useful tool for the determination of molecular crowding and detection of pathological changes in MLOs in live cells. abstract_id: PUBMED:24380594 Phase separation as a possible means of nuclear compartmentalization. The nucleus is perhaps the most familiar organelle within eukaryotic cells, serving as a compartment to house the genetic material. The nuclear volume is subdivided into a variety of functional and dynamic nuclear bodies not separated from the nucleoplasm by membranes. It has been hypothesized that aqueous phase separation brought about by macromolecular crowding may be in part responsible for these intranuclear compartments. This chapter discusses macromolecular solution chemistry with regard to several common types of phase separation in polymer solutions as well as to recent evidence that suggests that cytoplasmic and nuclear bodies may exist as liquid phases. We then examine the functional significance of phase separation and how it may serve as a means of compartmentalizing various nuclear activities, and describe recent studies that have used simple model systems to generate coexisting aqueous phase compartments, concentrate molecules within them, and perform localized biochemical reactions. abstract_id: PUBMED:37257544 Macromolecular crowding impact on anti-CRISPR AcrIIC3/NmeCas9 complex: Insights from scaled particle theory, molecular dynamics, and elastic network models. The coupling of Cas9 and its inhibitor AcrIIC3, both from the bacterium Neisseria meningitidis (Nme), forms a homodimer of the (NmeCas9/AcrIIC3)2 type. This coupling was studied to assess the impact of their interaction with the crowders in the following environments: (1) homogeneous crowded, (2) heterogeneous, and (3) microheterogeneous cytoplasmic. For this, statistical thermodynamic models based on scaled particle theory (SPT) were used that consider the attractive and repulsive protein-crowder contributions; the stability of spherocylindrical homodimer formation and the effects of changes in the size of spherical dimers were estimated. Studies of NmeCas9/AcrIIC3 complex formation based on molecular dynamics, elastic network models, and statistical potentials, using PEG as the crowding agent, support the predictions from SPT. Macromolecular crowding stabilizes the formation of the dimers, an effect that is more significant when the attractive protein-crowder interactions are weaker and the crowders are smaller. Crowding (through excluded-volume effects) favors the coupling towards the formation of compact, spherical dimers, and the thermodynamic stability of the dimers is markedly dependent on the size of the crowders. These results support the experimental mechanistic proposal of inhibition of NmeCas9 mediated by AcrIIC3.
abstract_id: PUBMED:36108633 Condensed-phase signaling can expand kinase specificity and respond to macromolecular crowding. Phase separation can concentrate biomolecules and accelerate reactions. However, the mechanisms and principles connecting this mesoscale organization to signaling dynamics are difficult to dissect because of the pleiotropic effects associated with disrupting endogenous condensates. To address this limitation, we engineered new phosphorylation reactions within synthetic condensates. We generally found increased activity and broadened kinase specificity. Phosphorylation dynamics within condensates were rapid and could drive cell-cycle-dependent localization changes. High client concentration within condensates was important but not the main factor for efficient phosphorylation. Rather, the availability of many excess client-binding sites together with a flexible scaffold was crucial. Phosphorylation within condensates was also modulated by changes in macromolecular crowding. Finally, the phosphorylation of the Alzheimer's-disease-associated protein Tau by cyclin-dependent kinase 2 was accelerated within condensates. Thus, condensates enable new signaling connections and can create sensors that respond to the biophysical properties of the cytoplasm. abstract_id: PUBMED:34313292 The kinetics of islet amyloid polypeptide phase-separated system and hydrogel formation are critically influenced by macromolecular crowding. Many protein misfolding diseases (e.g. type II diabetes and Alzheimer's disease) are characterised by amyloid deposition. Human islet amyloid polypeptide (hIAPP, involved in type II diabetes) spontaneously undergoes liquid-liquid phase separation (LLPS) and a kinetically complex hydrogelation, both catalysed by hydrophobic-hydrophilic interfaces (e.g. air-water interface and/or phospholipids-water interfaces). Gelation of hIAPP phase-separated liquid droplets initiates amyloid aggregation and the formation of clusters of interconnected aggregates, which grow and fuse to eventually percolate the whole system. Droplet maturation into irreversible hydrogels via amyloid aggregation is thought to be behind the pathology of several diseases. Biological fluids contain a high volume fraction of macromolecules, leading to macromolecular crowding. Despite crowding agent addition in in vitro studies playing a significant role in changing protein phase diagrams, the mechanism underlying enhanced LLPS, and the effect(s) on stages beyond LLPS remain poorly or not characterised. We investigated the effect of macromolecular crowding and increased viscosity on the kinetics of hIAPP hydrogelation using rheology and the evolution of the system beyond LLPS by microscopy. We demonstrate that increased viscosity exacerbated the kinetic variability of hydrogelation and of the phase separated-aggregated system, whereas macromolecular crowding abolished heterogeneity. Increased viscosity also strengthened the gel meshwork and accelerated aggregate cluster fusion. In contrast, crowding either delayed cluster fusion onset (dextran) or promoted it (Ficoll). Our study highlights that an in vivo crowded environment would critically influence amyloid stages beyond LLPS and pathogenesis. abstract_id: PUBMED:18723053 Macromolecular crowding and its potential impact on nuclear function. It is well established that biochemical reactions are dependent on pH, ionic strength, temperature and the concentration of reactants.
However, the steric repulsion among bulky components of biological systems also affects biochemical behavior: the 'excluded volume effect of macromolecular crowding' drives bulky components into structurally compact organizations, increases their thermodynamic activities and slows down diffusion. The very special composition of the cell nucleus, which is packed with high-molecular-weight chromatin, ribonucleo-particles and associated proteins, suggests that crowding effects are part of nuclear functionality. Realizing that many nuclear processes, notably gene transcription, hnRNA splicing and DNA replication, use macromolecular machines, and taking into account that macromolecular crowding provides a cooperative momentum for the assembly of macromolecular complexes, we here elaborate why macromolecular crowding may be functionally important in supporting the statistical significance of nuclear activities. Answer: Yes, viscoelastic phase separation, macromolecular crowding, and colloidal physics can explain nuclear organization. The cell nucleus is compartmentalized into well-defined domains, and the maintenance of this nuclear order is not fully understood. However, the hypothesis is that nuclear compartmentalization can be explained by the viscoelastic phase separation of dynamically different nuclear components, combined with macromolecular crowding and the properties of colloidal particles (PUBMED:17430588). The chromatin compartment in the nucleus displays slow, constrained diffusional motion, while the protein/RNA compartment is very dynamic. Physical systems with such dynamical asymmetry can undergo viscoelastic phase separation, leading to the formation of a long-lived interaction network of slow components (chromatin) within domains rich in fast components (protein/RNA). Additionally, the high concentration of macromolecules in the nucleus, around 300 mg/ml, leads to volume exclusion effects that enhance attractive interactions between macromolecules, a phenomenon known as macromolecular crowding. This crowding favors the formation of compartments (PUBMED:17430588). Moreover, macromolecular crowding has been shown to have a profound impact on protein interactions and the formation of biomolecular condensates via liquid-liquid phase separation (LLPS), which are crucial for cellular self-organization dynamics (PUBMED:34206440). Crowding, along with other physicochemical parameters, influences the structure of the cytoplasm and indirectly affects macromolecular function. It also promotes the formation of biomolecular condensates by phase separation, which is a mechanism for maintaining physicochemical homeostasis and is key in bacterial cell fitness and adaptation to environmental stress (PUBMED:38331392). In summary, the principles of viscoelastic phase separation, macromolecular crowding, and colloidal physics provide a biophysical framework for understanding the complex and dynamic organization of the cell nucleus. These phenomena contribute to the formation and maintenance of nuclear compartments, which are essential for the proper functioning of eukaryotic cells.
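Several abstracts in this block quantify crowding with ratiometric FRET sensors such as CRONOS. As a rough illustration of that readout, and not the authors' actual calibration pipeline, the sketch below converts donor quenching into a FRET efficiency and an apparent donor-acceptor distance; the Förster radius and intensities are made-up numbers:

```python
import numpy as np

R0_NM = 5.5  # assumed Forster radius for a green/red pair; illustrative only

def fret_efficiency(f_donor_with_acceptor: np.ndarray, f_donor_only: np.ndarray) -> np.ndarray:
    """Donor-quenching estimate: E = 1 - F_DA / F_D."""
    return 1.0 - f_donor_with_acceptor / f_donor_only

def distance_from_efficiency(e: np.ndarray, r0_nm: float = R0_NM) -> np.ndarray:
    """Invert E = 1 / (1 + (r/R0)^6) for the donor-acceptor distance."""
    return r0_nm * ((1.0 - e) / e) ** (1.0 / 6.0)

# Toy readout: crowding compacts the sensor, raising E and shrinking the distance
f_d = np.array([100.0, 100.0, 100.0])   # donor-only reference intensities
f_da = np.array([60.0, 50.0, 40.0])     # donor emission at increasing crowding
e = fret_efficiency(f_da, f_d)
print(e, distance_from_efficiency(e))
```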
Instruction: Serrated lesions of the appendix: do they differ from their colorectal counterparts? Abstracts: abstract_id: PUBMED:35457279 Serrated Colorectal Lesions: An Up-to-Date Review from Histological Pattern to Molecular Pathogenesis. Until 2010, colorectal serrated lesions were generally considered harmless lesions and reported as hyperplastic polyps (HPs) by pathologists and gastroenterologists. However, recent evidence showed that they may bear the potential to develop into colorectal carcinoma (CRC). Therefore, the World Health Organization (WHO) classification has identified four categories of serrated lesions: hyperplastic polyps (HPs), sessile serrated lesions (SSLs), traditional serrated adenomas (TSAs) and unclassified serrated adenomas. SSLs with dysplasia and TSAs are the most common precursors of CRC. CRCs arising from serrated lesions originate via two different molecular pathways, namely sporadic microsatellite instability (MSI) and the CpG island methylator phenotype (CIMP), the latter being considered the major mechanism that drives the serrated pathway towards CRC. Unlike CRCs arising through the adenoma-carcinoma pathway, APC-inactivating mutations are rarely shown in the serrated neoplasia pathway. abstract_id: PUBMED:36304580 The relationship between serum lipid levels and colorectal serrated lesions: A systematic review and meta-analysis. Objective: To clarify the relationship between colorectal serrated lesions and serum lipid levels, and provide a scientific basis for the identification and early clinical prevention and treatment of populations that are at risk for colorectal serrated lesions. Methods: Studies comparing serum lipid levels in patients with colorectal serrated lesions and controls were searched in PubMed, Embase, Web of Science, the Cochrane Library, China Biomedical Literature Database, CNKI, Wanfang Database, and VIP Database. Relevant literature was screened according to the inclusion and exclusion criteria. The mean and standard deviation of the serum lipid levels in patients and controls were extracted from the included literature. The combined weighted mean difference (WMD) and 95% confidence intervals (CIs) were calculated using Review Manager 5.0 software to evaluate the relationship between serum lipid levels and colorectal serrated lesions. Publication bias of the included studies was evaluated by the Egger test. Results: Twenty-three studies were included, comprising 2,063 patients and 63,909 controls. The serum high-density lipoprotein cholesterol (HDL-C) levels in the case group were significantly lower than in the control group (WMD = -0.122 mmol/L, 95% CI: -0.170 to -0.073). Total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), and serum triglyceride levels in the case group were significantly higher than in the control group, and the WMDs were 0.180 mmol/L (95% CI: 0.061-0.299), 0.155 mmol/L (95% CI: 0.038-0.273), and 0.241 mmol/L (95% CI: 0.181-0.302), respectively. Conclusion: Colorectal serrated lesions may be related to blood lipid levels. Hyperlipidemia might be a risk factor for colorectal serrated lesions. abstract_id: PUBMED:29805602 Endoscopic analysis of colorectal serrated lesions with cancer. Serrated lesions, including hyperplastic polyps (HPs), traditional serrated adenomas (TSAs) and sessile serrated adenomas/polyps (SSA/Ps), are important contributors to colorectal carcinogenesis.
The aim of the present study was to analyze the potential of conventional endoscopy and advanced endoscopic imaging techniques to delineate the characteristic features of serrated lesions with cancer. The present study was a retrospective analysis of the data of 168 patients who had undergone colonoscopy, and a total of 228 serrated lesions (77 HPs, 58 TSAs, 84 SSA/Ps, 9 SSA/P plus TSAs) were identified in these patients. A cancer component was identified in 2.6% of HPs, 13.8% of TSAs and 10.7% of SSA/Ps, but in none of the SSA/P plus TSAs. Compared with the lesions without cancer, the lesions with cancer exhibited a larger size (HP, TSA and SSA/P), a reddish appearance (SSA/P), a two-tier raised appearance (HP and SSA/P), a central depression (HP, TSA and SSA/P), the type V pit pattern (HP, TSA and SSA/P), and/or the type III capillary pattern (TSA and SSA/P). Deep invasion was identified in 50.0% of HPs, 12.5% of TSAs and 55.6% of SSA/Ps with cancer. The Ki-67 proliferative zone was distributed diffusely within the area of the cancer, but partially within the non-cancer area of HPs, TSAs and SSA/Ps. The lesion types were also analyzed on the basis of mucin phenotype. The present study suggested that a detailed endoscopic analysis of serrated lesions with cancer is useful for delineating characteristic features, and the analysis aids treatment selection. abstract_id: PUBMED:37124132 Current progress on the endoscopic features of colorectal sessile serrated lesions. Along with the discovery and refinement of serrated pathways, the World Health Organization amended the classification of digestive system tumors in 2019, recommending the renaming of sessile serrated adenomas/polyps to sessile serrated lesions (SSLs). Given the particularity of their endoscopic appearance, SSLs can easily be overlooked and missed in colonoscopy screening, a major contributor to the occurrence of interval colorectal cancer. Existing literature has found that adequate bowel preparation, reasonable withdrawal time, and awareness of colorectal SSLs have improved the quality and accuracy of detection. More particularly, with the continuous advancement and development of endoscopy technology, equipment, and accessories, a potent auxiliary tool is provided for accurate observation and immediate diagnosis of SSLs. High-definition white light endoscopy, chromoendoscopy, and magnifying endoscopy have distinct roles in the detection of colorectal SSLs and are valuable in identifying the size, shape, character, risk degree, and potential malignant tendency. This article delves into the relevant factors influencing the detection rate of colorectal SSLs, reviews their characteristics under various endoscopic techniques, and expects to attract the attention of colonoscopists. abstract_id: PUBMED:34294939 The histomorphological and molecular landscape of colorectal adenomas and serrated lesions. The 2019 WHO classification of digestive system tumors significantly reformed the classificatory definition of serrated lesions of the colorectal mucosa and added new essential diagnostic criteria for both conventional adenomas and hereditary gastrointestinal polyposis syndromes. Histopathological examination of colorectal adenocarcinoma precursor lesions represents an important segment of daily clinical practice in a pathology department and is essential for the implementation of current colorectal adenocarcinoma secondary prevention strategies.
This overview will focus on a schematic histopathological and molecular classification of precursor lesions arising within the colorectal mucosa. abstract_id: PUBMED:34276195 Underwater endoscopic mucosal resection of adenomas and colorectal serrated lesions: a prospective clinical study. Background: Underwater endoscopic mucosal resection (UEMR) without submucosal injection has been described as an alternative technique for the endoscopic resection of adenomas and colorectal serrated lesions. We aimed to assess the feasibility, safety, and efficacy of UEMR in a Brazilian setting. Methods: This was a prospective observational study of consecutive patients who underwent UEMR between January and July 2019, in a single tertiary care center. Inclusion criteria were lesions without endoscopic stigmata of deep submucosal invasion in patients referred for endoscopic resection of colorectal adenomas, and serrated lesions detected in a previous colonoscopy. The following features were assessed: complete resection rate, en bloc resection rate, resection time, adverse events, and resection infeasibility. Results: A total of 36 patients underwent UEMR for 51 colorectal lesions. The mean/median lesion size was 16.24/13 mm and the mean/median resection time was 16.97/9.19 min. Histopathology revealed the following: tubular adenoma (43.1%), tubulovillous adenoma (13.7%), serrated lesions (41.2%), and intramucosal adenocarcinoma (2%). Complete resection was achieved in 86.3% of cases; 52.9% of the lesions were removed en bloc, while 47.1% were resected in a piecemeal fashion. UEMR was feasible in 96.1% of cases and failed on 2 occasions, requiring conversion to standard endoscopic mucosal resection. Minor intraoperative bleeding occurred in 5 patients (9.8%) and only 1 presented with delayed bleeding (2%), all controlled endoscopically. Conclusion: UEMR for removal of adenomas and colorectal serrated lesions was demonstrated to be feasible, safe and effective. abstract_id: PUBMED:34984965 Colorectal Serrated Lesions: A Current View on Clinical, Morphological, Molecular, and Genetic Diagnostic Criteria. Objective: Serrated colorectal lesions are a group of colonic lesions with a serrated (saw-tooth) profile of the surface epithelium and crypts, and peculiar molecular and genetic developmental mechanisms that are incompletely understood. These formations cause concern due to their premalignant potential. Aim: The review is dedicated to serrated lesions of the colon and appendix. We focused on modern classification, role in carcinogenesis, as well as new approaches to morphological diagnosis. Methods: A literature search was performed using PubMed, Scopus, ResearchGate, Google, MEDLINE, and ScienceDirect databases to find studies of serrated colorectal lesion-related cancer published between 2000 and 2020 that address epidemiological risk factors and underlying pathophysiological mechanisms, enabling our review of these factors as well as molecular, genetic, and structural patterns. Results: Serrated colorectal lesions account for one third of all benign neoplasms of the colon in the pathologist's practice. The active study of serrated lesions began in the 1900s. Terminology and diagnostic criteria changed in the updated classification in 2019. Morphological criteria, immunohistochemical and molecular profile, endoscopic and clinical characteristics are reviewed.
Conclusion: Although significant efforts were made in an attempt to improve our understanding and diagnostic criteria of serrated polyps of the colorectum, very little has changed since the original morphologic description of preneoplastic serrated lesions in the early 2000s. There remains a need for more research in order to develop more definitive immunophenotypic and molecular biomarkers in order to distinguish between non-neoplastic and neoplastic serrated lesions. abstract_id: PUBMED:34629791 Serrated lesions: A challenging enemy. The serrated pathway accounts for 30%-35% of colorectal cancer (CRC). Unlike hyperplastic polyps, both sessile serrated lesions (SSLs) and traditional serrated adenomas are premalignant lesions, yet SSLs are considered to be the principal serrated precursor of CRCs. Serrated lesions represent a challenge in detection, classification, and removal, contributing to post-colonoscopy cancer. Therefore, it is of the utmost importance to characterize these lesions properly to ensure complete removal. A retrospective cohort study developed a diagnostic scoring system for SSLs to facilitate their detection endoscopically and subsequent removal. From the study, it can be ascertained that both indistinct border and mucus cap are essential in both recognizing and diagnosing serrated lesions. The proximal colon poses technical challenges for some endoscopists, which is why high-quality colonoscopy plays such an important role. The indistinct border of some SSLs poses another challenge due to difficult complete resection. Overall, it is imperative that gastroenterologists use the key features of mucus cap, indistinct borders, and size of at least five millimeters along with a high-quality colonoscopy and a good bowel preparation to improve the SSL detection rate. abstract_id: PUBMED:29491757 Serrated Colorectal Lesions in Patients With Inflammatory Bowel Disease. The serrated neoplasia pathway is thought to account for up to 30% of sporadic colorectal cancer, but its role in inflammatory bowel disease (IBD)-related colorectal cancer is still not well elucidated. Hyperplastic polyps are not thought to impart an increased risk of colorectal cancer; however, sessile serrated adenomas/polyps and traditional serrated adenomas may have malignant potential. From the limited research currently available, this appears to hold true for IBD patients as well. IBD patients do not seem to be at a higher risk of typical serrated colorectal lesions than the general population, but it is still not known if they have a quicker progression to colorectal cancer. Serrated epithelial change is a newly described finding in patients with longstanding colitis that may increase the risk of colorectal cancer in IBD patients. Overall, serrated lesions are not uncommon in the IBD population, and further research is needed to understand the role that serrated lesions play in the development of colorectal cancer. abstract_id: PUBMED:38129291 Serum DNA methylome of the colorectal cancer serrated pathway enables non-invasive detection. The clinical relevance of the colorectal cancer serrated pathway is evident, but the screening of serrated lesions remains challenging. We aimed to characterize the serum methylome of the serrated pathway and to evaluate circulating cell-free DNA (cfDNA) methylomes as a potential source of biomarkers for the non-invasive detection of serrated lesions.
We collected serum samples from individuals with serrated adenocarcinoma (SAC), traditional serrated adenomas, sessile serrated lesions, hyperplastic polyps and individuals with no colorectal findings. First, we quantified cfDNA methylation with the MethylationEPIC array. Then, we compared the methylation profiles with tissue and serum datasets. Finally, we evaluated the utility of serum cfDNA methylation biomarkers. We identified a differential methylation profile able to distinguish high-risk serrated lesions from no serrated neoplasia, showing concordance with tissue methylation from SAC and sessile serrated lesions. Serum methylation profiles are pathway-specific, clearly separating serrated lesions from conventional adenomas. The combination of ninjurin 2 (NINJ2) and glutamate-rich 1 (ERICH1) methylation discriminated high-risk serrated lesions and SAC with 91.4% sensitivity (64.4% specificity), while zinc finger protein 718 (ZNF718) methylation reported 100% sensitivity for the detection of SAC (96% specificity). This is the first study exploring the serum methylome of serrated lesions. Differential methylation of cfDNA can be used for the non-invasive detection of colorectal serrated lesions. Answer: The provided abstracts do not directly address the specific differences between serrated lesions of the appendix and their colorectal counterparts. However, they do provide a general understanding of serrated lesions within the colorectal region and their significance in colorectal carcinogenesis. Serrated lesions, including hyperplastic polyps (HPs), traditional serrated adenomas (TSAs), and sessile serrated lesions/polyps (SSLs/SSA/Ps), are recognized as important contributors to colorectal carcinogenesis (PUBMED:29805602). The World Health Organization (WHO) classification has identified four categories of serrated lesions: HPs, SSLs, TSAs, and unclassified serrated adenomas, with SSLs with dysplasia and TSAs being the most common precursors of colorectal carcinoma (CRC) (PUBMED:35457279). These lesions can develop into CRC via two different molecular pathways, namely sporadic microsatellite instability (MSI) and the CpG island methylator phenotype (CIMP) (PUBMED:35457279). The abstracts also discuss the challenges in detecting and classifying serrated lesions, which contribute to post-colonoscopy cancer (PUBMED:34629791). The detection and accurate diagnosis of serrated lesions are crucial, as they can be premalignant and may progress to CRC (PUBMED:34984965). Advanced endoscopic imaging techniques are useful in delineating the characteristic features of serrated lesions with cancer and aid in treatment selection (PUBMED:29805602). While the abstracts provide valuable information on colorectal serrated lesions, they do not offer specific insights into how serrated lesions of the appendix might differ from those in the colorectal region. To answer the question about the differences between serrated lesions of the appendix and their colorectal counterparts, additional literature that specifically addresses the histological, molecular, and clinical characteristics of appendiceal serrated lesions would be required.
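The serum-lipid meta-analysis in this block reports pooled weighted mean differences computed with Review Manager. The underlying fixed-effect, inverse-variance pooling is simple enough to sketch in Python; the three study summaries below are hypothetical HDL-C figures standing in for the per-study means, SDs, and sample sizes extracted during such a review:

```python
import math

def study_wmd(m1, sd1, n1, m2, sd2, n2):
    """Per-study weighted mean difference (cases minus controls) and its variance."""
    wmd = m1 - m2
    var = sd1 ** 2 / n1 + sd2 ** 2 / n2
    return wmd, var

def pooled_wmd(studies):
    """Fixed-effect inverse-variance pooling with a 95% confidence interval."""
    weights, estimates = [], []
    for m1, sd1, n1, m2, sd2, n2 in studies:
        wmd, var = study_wmd(m1, sd1, n1, m2, sd2, n2)
        weights.append(1.0 / var)
        estimates.append(wmd)
    w_sum = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / w_sum
    se = math.sqrt(1.0 / w_sum)
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical HDL-C summaries (mean mmol/L, SD, n) for cases vs controls
studies = [
    (1.10, 0.30, 120, 1.22, 0.28, 300),
    (1.05, 0.25,  80, 1.18, 0.30, 250),
    (1.15, 0.35, 150, 1.24, 0.32, 400),
]
print(pooled_wmd(studies))  # (pooled WMD, lower 95% bound, upper 95% bound)
```

A negative pooled WMD with an entirely negative interval corresponds to the reported finding that HDL-C runs lower in patients with serrated lesions than in controls.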
Instruction: Do caregiving burden and satisfaction predict loneliness in older care recipients? Abstracts: abstract_id: PUBMED:25782044 Do caregiving burden and satisfaction predict loneliness in older care recipients? Objectives: The goal of this study was to assess the extent to which caregiving burden and satisfaction of primary family caregivers in Israel predict loneliness among their older care recipients. Method: The study included a convenience sample of 325 dyads of respondents. The short version of the Zarit Caregiving Burden Interview and the Lawton Caregiving Appraisal measures were used to assess caregiving burden and satisfaction; the de Jong Gierveld Loneliness Scale was used to assess care recipients' loneliness. Results: The results showed that greater caregiving burden was significantly correlated with increased loneliness, whereas greater caregiving satisfaction was significantly correlated with lower levels of loneliness. Conclusion: The literature has broadly addressed the association between caregiving burden and satisfaction and the caregivers' well-being, whereas this association with regard to care recipients' well-being has been barely examined. The multivariate analyses showed that caregiving burden was insignificant in explaining loneliness, whereas greater caregiving satisfaction was found significant in explaining lower levels of loneliness. The results and implications for further research and practice are discussed. abstract_id: PUBMED:30260715 Positive Caregiving Characteristics as a Mediator of Caregiving Burden and Satisfaction With Life in Caregivers of Older Adults. Background: Satisfaction with life is recognized to be a factor in alleviating burden in stressful caregiving duties. However, the mechanism underlying this relationship is indistinct. Positive aspects of caregiving (PAC) may help to regulate caregiving burden among caregivers of older adults. The study aims to examine whether positive caregiving characteristics mediate the relationship between satisfaction with life and burden of care. Methods: Participants were 285 caregivers of older adults (aged 60 and above) in Singapore and were recruited in a cross-sectional, self-report study (mean [M] = 47.0 years; 64.6% females). Measures included in the study were the Zarit Burden Interview (ZBI), Positive Aspects of Caregiving (PAC), and the Satisfaction with Life Scale (SWLS). Mediation analyses were used to study the indirect effects of life satisfaction on caregiver burden through features of PAC. Results: Mean scores for the ZBI, PAC, and SWLS scales were M = 23.15 (standard deviation [SD] = 15.98), M = 34.55 (SD = 8.19), and M = 23.56 (SD = 6.62) respectively. Results from the mediation analysis revealed that the association between life satisfaction and caregiving burden was significantly mediated by the PAC (P < .001). Discussion: Positive aspects of caregiving may be a mechanism that links satisfaction with life and caregiver burden. Findings may represent attempts to manage caregiving duties as well as maintaining a positive attitude toward their responsibilities. abstract_id: PUBMED:24840915 Profiles of Loneliness in the Caregiving Unit. Purpose Of The Study: The study evaluated profiles (a typology) of loneliness within the caregiving unit, which was composed of an older care recipient with functional impairment, a family member, and a home care worker. Design And Methods: Overall, 223 complete caregiving units completed the 3-item Revised University of California, Los Angeles (R-UCLA) Loneliness Scale.
Latent profile analysis was used to identify profiles of loneliness within the caregiving unit. Subsequently, latent profile membership was used as a between-subject variable to examine correlates of the latent profiles. Results: A 2-profile solution was deemed most plausible. This classification consisted of a large (174 caregiving units; 78%) more favorable profile in terms of loneliness and a smaller (49 caregiving units; 22%) lonelier profile. Profile classification was associated with a variety of quality of life, well-being, social relations, and sociodemographic indicators of the 3 members of the caregiving unit. Implications: The study provides a needed recognition of the potential interdependence among members of the caregiving unit and calls for research and practice that go beyond the individual level. The assessment of loneliness at the caregiving unit can provide valuable information about at-risk units as well as about the potential effectiveness of interventions that target the entire caregiving unit. abstract_id: PUBMED:26898819 Influence of home care services on caregivers' burden and satisfaction. Aims And Objectives: To examine the factors affecting the burden and satisfaction of family caregivers, focusing on the beneficial impacts of home care service use. Background: Long-term care for older patients is a multifaceted process that brings both burden and satisfaction to family caregivers. It is expected that home care services offered by the Korean long-term care insurance may contribute to decreasing the burden of family caregivers and enhancing their satisfaction by assisting with practical caregiving tasks. Design: A cross-sectional study. Methods: A convenience sample of 157 family caregivers was recruited from five home care service agencies in South Korea. Information about the caregivers, caregiving history, older care recipients and use of home care services was assessed. The effects of home care service use on caregiving burden and satisfaction were tested using hierarchical multiple regression analyses after adjusting for the characteristics of the caregivers, caregiving history and older care recipients. Results: There was no significant influence of home care service use on reducing caregiving burden or on increasing caregivers' satisfaction. Although several factors were associated with caregiving burden and satisfaction, family functioning was the most unique factor to significantly affect both caregiving burden and satisfaction. Conclusions: Home care services might not automatically have a positive impact on caregivers' burden and satisfaction, but maintaining healthy family functioning is an important issue for family caregivers. The findings highlight the important need to reconsider ways to provide home care services and to develop nursing interventions to reinforce supportive family functioning. Relevance To Clinical Practice: Practical strategies for providing home care services should be developed through a concrete assessment of the family dynamics and the needs of family caregivers. Health professionals should play a pivotal role in performing the assessment and in developing interventions to strengthen supportive family functioning. abstract_id: PUBMED:30328702 Impact of informal caregiving on loneliness and satisfaction with leisure-time activities. Findings of a population-based longitudinal study in Germany. Objectives: Recent longitudinal studies have shown that informal caregiving increases depressive symptoms.
However, little is known about the relation between informal caregiving and social ties, or about the evaluation of leisure-time activities, from a longitudinal perspective. Consequently, the aim of the current study was to investigate whether informal caregiving affects social ties as well as the perception of leisure-time activities. Method: Data were drawn from a population-based sample of community-dwelling individuals (≥40 years) in Germany from 2002 to 2014 (n = 21,762 observations). Social ties were assessed twofold: the number of important people in regular contact, and a six-item scale by de Jong Gierveld and van Tilburg. Furthermore, using a 5-point rating scale, satisfaction with leisure-time activities was quantified. Individuals were asked whether they provide informal care regularly. Results: Fixed effects regressions revealed that the onset of informal caregiving was not associated with changes in loneliness in the total sample or in either sex. Besides, the onset of informal caregiving was associated with an increase in the number of individuals in regular contact in the total sample and in men, whereas it was associated with a decrease in satisfaction with leisure-time activities in the total sample and in both sexes. Conclusion: Data suggest that changes in informal caregiving were not associated with changes in loneliness. While the onset of informal caregiving was associated with an increase in the number of individuals in regular contact, it was associated with negative changes in satisfaction with leisure-time activities, emphasizing the variety of consequences that may unfold. abstract_id: PUBMED:36330754 Lonely in a Crowd: Social Isolation Profiles and Caregiver Burden Among Family Caregivers of Community-Dwelling Older Adults With Cognitive Impairment. Objectives: This study identified distinct social isolation profiles among caregivers, each formed by varied combinations of social disconnectedness and loneliness, and examined if and how the profiles were associated with caregiver burden. Methods: Latent class analysis and multivariable regression were applied to data from 266 caregivers of community-dwelling older Singaporeans with cognitive impairment. Results: Two caregiver social isolation profiles were identified: strongly connected, not lonely (86%), and moderately connected, lonely (14%). Moderately connected and lonely caregivers tended to perceive a higher level of burden than strongly connected and not lonely caregivers. Moderately connected and lonely caregivers were also more likely to be burdened by their care recipients' poor health than their connected and not lonely counterparts. Discussion: Caregivers who feel "lonely in a crowd" are vulnerable to caregiving stress and burden. Tailored interventions, such as cognitive behavioral therapy, are needed to reduce the loneliness of moderately connected caregivers. abstract_id: PUBMED:34831857 Informal Caregiving, Loneliness and Social Isolation: A Systematic Review. Background: Several empirical studies have shown an association between informal caregiving for adults and loneliness or social isolation. Nevertheless, a systematic review synthesizing the studies that have investigated these associations is lacking. Therefore, our purpose was to give an overview of the existing evidence from observational studies. Materials and Methods: Three electronic databases (Medline, PsycINFO, CINAHL) were searched in June 2021.
Observational studies investigating the association between informal caregiving for adults and loneliness or social isolation were included. In contrast, studies examining grandchild care or private care for chronically ill children were excluded. Data extractions covered study design, assessment of informal caregiving, loneliness and social isolation, the characteristics of the sample, the analytical approach and key findings. Study quality was assessed based on the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies. Each step (study selection, data extraction and evaluation of study quality) was conducted by two reviewers. Results: In sum, twelve studies were included in our review (seven cross-sectional studies and five longitudinal studies); all included studies were either from North America or Europe. The studies mainly showed an association between providing informal care and higher loneliness levels. The overall study quality was fair to good. Conclusion: Our systematic review mainly identified associations between providing informal care and higher loneliness levels. This is of great importance in assisting informal caregivers in avoiding loneliness, since it is associated with subsequent morbidity and mortality. Moreover, high loneliness levels of informal caregivers may have adverse consequences for informal care recipients. abstract_id: PUBMED:29764607 The impact of loneliness and relationship quality on life satisfaction: A longitudinal dyadic analysis in persons with physical disabilities and their partners. Objective: Persons with physical disabilities and their caregiving partners are at an increased risk of experiencing reduced life satisfaction. One potential explanation for this trend may be the potentially harmful effects of loneliness and poor relationship quality which this population often experiences. To date, little is known about how the perceptions of loneliness and relationship quality affect life satisfaction in the disability and caregiving setting; furthermore, the directionality of effect is not well understood. In this study, we investigate the actor and partner effects, and the reciprocal effects of loneliness and relationship quality on life satisfaction. Methods: The analyses are based on longitudinal dyadic data from a Swiss community survey of persons with spinal cord injury (SCI) and their partners (n = 246). We employed mixed effects modelling to explore standardized (β) and unstandardized (B) actor and partner effects, and used cross-lagged path analysis to explore reciprocal effects. Results: Loneliness was more prevalent in persons with SCI than in their caregiving partners. In caregiving partners, we found significant negative actor effects of loneliness (β = -0.20 (-0.31, -0.10)) and positive actor effects of relationship quality (β = 0.15 (0.04, 0.26)) on life satisfaction, and significant partner effects of relationship quality on wellbeing. In persons with SCI, only the negative actor effect of loneliness was significant (β = -0.30 (-0.41, -0.18)). Over time, loneliness demonstrated reciprocal associations with life satisfaction. Conclusion: The findings of our study highlight the importance of reducing loneliness and strengthening relationship quality to improve life satisfaction in partnerships of persons coping with disability. abstract_id: PUBMED:27438060 Predictors of caregiving burden: impact of subjective health, negative affect, and loneliness of octogenarians and centenarians.
Objectives: This study aimed (1) to determine whether octogenarian and centenarian care recipients' self-reports on physical, social, and emotional status differ from caregivers' reports, (2) to assess associations between octogenarian and centenarian care recipients' poor physical, social, and emotional status and caregiver burden, and (3) to determine which report, the care recipients' self-report or the caregivers' report, about the participants' physical and emotional status more accurately predicted levels of caregiver burden. Method: Self-ratings and caregiver informant ratings were obtained from 309 participants of the Georgia Centenarian Study. Care recipients' health, negative affect, and loneliness were reported by both the caregivers and care recipients for the analyses. Differences between care recipients' and caregivers' reports were assessed by t-test. Blockwise multiple regression analysis was computed to assess predictors of caregiver burden. Results: Caregivers' reports on the three measures were significantly higher than self-reports. Caregivers' negative affect and loneliness, not physical health, reported by caregivers predicted higher caregiver burden. Care recipients' reports did not predict caregiver burden. Conclusion: Caregivers perceived care recipients' social and emotional status more negatively, and caregivers' negative perceptions of care recipients' well-being status were an important predictor of caregiver burden. abstract_id: PUBMED:36251518 Transcultural adaptation of the revised caregiving appraisal scale (RCAS) in the Spanish population. Aim: To develop a transcultural adaptation of the Revised Caregiving Appraisal Scale among Spanish caregivers of dependent older people and to test the psychometric properties of the scale. Design: Cross-sectional study. Methods: The Revised Caregiving Appraisal Scale was transculturally adapted to the Spanish population following the methodology of direct and back translation. The Spanish version of the Revised Caregiving Appraisal Scale was administered to a total of 182 family caregivers of older dependent people. The study began in January 2016 and ended in December of the same year. The construct validity was studied by means of the scree plot and parallel analysis. An exploratory factor analysis was carried out, and the correlation between factors was studied. To verify the reliability of the process, Cronbach's alpha and homogeneity were calculated by the corrected total item correlation. The validity of the convergent criterion was studied by means of the Pearson correlation coefficient, using the Zarit Caregiver Burden Interview and the Family Satisfaction Scale as the gold standard. Results: The construct validity revealed three factors: 'Subjective Burden' (15 items), 'Satisfaction' (7 items) and 'Competence' (3 items). The Cronbach alpha was .86 for 'Subjective Burden', .74 for 'Satisfaction' and .74 for 'Competence'. The corrected total item correlation was greater than .25. The validity of the convergent criterion of the 'Subjective Burden' and 'Competence' factors with the 'Zarit Caregiver Burden Interview' presented a very high statistically significant correlation, unlike 'Satisfaction', which presented a low positive correlation with the 'Family Satisfaction Scale'. Conclusion: The Spanish version of the Revised Caregiving Appraisal Scale is a valid and reliable scale according to the tests performed on a random sample of family caregivers of older dependent people in Spain.
Impact: This scale will enable the simultaneous assessment of negative ('Subjective Burden' and 'Competence') and positive ('Satisfaction') perceptions among family caregivers of older dependent people. Answer: Yes, caregiving burden and satisfaction do predict loneliness in older care recipients. A study conducted in Israel with a sample of 325 dyads of respondents found that greater caregiving burden was significantly correlated with increased loneliness, while greater caregiving satisfaction was significantly correlated with lower levels of loneliness among older care recipients (PUBMED:25782044). However, in multivariate analyses, caregiving burden was insignificant in explaining loneliness, whereas greater caregiving satisfaction was found significant in explaining lower levels of loneliness. This suggests that while the burden of caregiving itself may not directly predict loneliness, the satisfaction derived from caregiving relationships can be a protective factor against loneliness in older adults receiving care.
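The PAC study above (PUBMED:30260715) rests on a mediation analysis: life satisfaction affects burden through positive aspects of caregiving. A minimal sketch of the product-of-coefficients approach with a percentile bootstrap is given below; the data are simulated to loosely match the reported scale means and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """Slope from a simple OLS fit of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(x, m, y):
    """Product of coefficients a*b: x -> m (a), then m -> y controlling for x (b)."""
    a = ols_slope(x, m)
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Simulated data in the spirit of the PAC mediation model (assumed effect sizes)
n = 285
satisfaction = rng.normal(23.6, 6.6, n)
pac = 20 + 0.6 * satisfaction + rng.normal(0, 5, n)
burden = 50 - 0.8 * pac + rng.normal(0, 10, n)

boot = [indirect_effect(*(arr[idx] for arr in (satisfaction, pac, burden)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
print(np.percentile(boot, [2.5, 50, 97.5]))  # bootstrap CI for the indirect effect a*b
```

A confidence interval that excludes zero is the usual evidence for mediation in this framework.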
Instruction: Does infertility cause marital benefit? Abstracts: abstract_id: PUBMED:37034295 Investigating the role of defense mechanisms on marital adjustment in infertile couples based on the cause of infertility: A cross-sectional study. Background: Infertility affects individuals' and couples' adjustment. Defense mechanisms and marital adjustment are mental processes that play a key role in infertile couples' lives. Objective: This study aims to investigate the role of defense mechanisms on marital adjustment in infertile couples based on the cause of infertility. Materials And Methods: This cross-sectional study was conducted on 400 infertile couples at Royan Institute (a referral center for infertility, Tehran, Iran). Infertile couples were divided into 4 groups based on the cause of infertility (female, male, both, and unknown). A demographic questionnaire, the revised dyadic adjustment scale, and the defense styles questionnaire were used for data collection. Results: Results showed that the mean score of marital adjustment of men based on the cause of infertility (female, male, and both) was significantly higher in comparison with their wives (p = 0.04, p ≤ 0.001, p ≤ 0.001, respectively). However, no difference was observed between the mean scores of women, men, and couples (women and husbands) in defense mechanisms. But marital adjustment has a significant positive correlation with mature defense mechanisms and a negative correlation with immature ones (p < 0.05). Conclusion: The mean score of marital adjustment is higher in men than in their wives in all groups. So, counseling infertile couples in the field of marital adjustment and training in the use of mature defense mechanisms is recommended, especially for women. abstract_id: PUBMED:16310331 Does infertility cause marital benefit? An epidemiological study of 2250 women and men in fertility treatment. Objective: To investigate (i) marital benefit, e.g., that infertility has strengthened the marriage and brought the partners closer together among people beginning fertility treatment and (ii) communication and coping strategies as predictors of marital benefit 12 months later. Methods: A prospective cohort design including 2250 people beginning fertility treatment and a 12-month follow-up. Data were based on self-administered questionnaires measuring marital benefit, communication, and coping strategies. The analyses of predictors were based on the sub-cohort (n=816) who had not achieved a delivery after fertility treatment. Results: 25.9% of women and 21.1% of men reported high marital benefit. Among men, medium use of active-confronting coping (e.g., letting feelings out, asking others for advice) and use of meaning-based coping were significant predictors for high marital benefit. Keeping the infertility a secret, difficult marital communication, and using active-avoidance coping (e.g., avoid being with pregnant women or children, turning to work to take mind off things) were significant predictors for low marital benefit among men. No significant predictors were identified among women. Conclusion: Fertility patients frequently experience marital benefit. Practice Implications: The study provides information about where to intervene with male fertility patients in order to increase their marital benefit after medically unsuccessful treatment. abstract_id: PUBMED:28658854 Marital Intimacy and Predictive Factors Among Infertile Women in Northern Iran.
Introduction: Infertility is a stressful state that may decrease attachment between spouses. Marital intimacy is a real need in infertile women. Aim: The aim of this study was to evaluate marital intimacy and predictive factors among infertile women in Northern Iran. Materials and Methods: This cross-sectional study was conducted at Fatemeh Zahra Infertility and Reproductive Health Center of Babol Medical Sciences University in 2014. A total of 221 infertile women participated in this study. The instrument used in this research was the Marital Intimacy Need Questionnaire (MINQ). Statistical analyses were performed using linear and logistic regression, with p<0.05 indicating statistical significance. Results: Overall, 88.5% of infertile women had good marital intimacy. The mean and standard deviation of marital intimacy was 349.11±49.26, with the following domain scores: emotional (42.28±7.23), psychological (41.84±7.59), intellectual (42.56±7.46), sexual (42.90±7.41), physical (43.59±6.96), spiritual (51.61±8.06), aesthetic (42.66±6.75), and social intimacy (42.59±6.89). The highest mean among the marital intimacy domains was related to spirituality in infertile women. The physical and sexual domains also had high means in infertile women. The lowest mean among the marital intimacy domains was psychological intimacy. There was a significant correlation between the domains of marital intimacy. The strongest correlation was between physical and sexual intimacy (r=0.85). There was a significant inverse association between marital intimacy and the age difference of spouses (p<0.01) and tobacco use (p<0.02). There was a statistically significant association between marital intimacy and husband's occupation and cause of infertility (p<0.02). Conclusion: Early screening and psychosocial intervention strategies are suggested in the setting of female infertility to identify and prevent the predictive factors that may cause marital conflict. abstract_id: PUBMED:28868842 Effect of Marital Relationship Enrichment Program on Marital Satisfaction, Marital Intimacy, and Sexual Satisfaction of Infertile Couples. Background: Infertile couples only think of having children during their sexual intercourse, and their constant concern about this issue increases their stress level. Psychosocial and social stress leads to decreased life satisfaction, increased marital problems, and reduced sexual confidence. This study aims to determine the effect of an enrichment program on marital and sexual satisfaction as well as marital intimacy among infertile couples. Materials and Methods: This randomized controlled clinical trial was conducted on 50 infertile couples in 2013 in Hamedan. The marital relationship enrichment program was taught to the experimental group during seven 90-minute sessions. The Enrich marital satisfaction, Linda Berg sexual satisfaction, and marital intimacy questionnaires were completed by both groups in three steps: at pretest, immediately after the end of the training sessions, and 8 weeks later. The results were analyzed in STATA11 software using t test, Chi-square, ANCOVA, RM-ANOVA, and Bonferroni post-hoc test. To check the data normality, the Kolmogorov-Smirnov test was used. P<0.05 was considered significant. Results: Comparison of the pretest mean scores with those obtained immediately after the intervention and 8 weeks later showed that the marital relationship enrichment program significantly increased marital and sexual satisfaction (P<0.001).
Also, the mean score of marital intimacy immediately after the test (P=0.04) and 8 weeks after the test (P<0.001) significantly increased in comparison with the pretest under the influence of the program. Conclusion: Enrichment training can increase marital intimacy and also marital and sexual satisfaction in infertile couples (Registration Number: IRCT201604299014N97). abstract_id: PUBMED:28101115 Marital Satisfaction and Its Influencing Factors in Fertile and Infertile Women. Objective: To determine marital satisfaction and its influencing factors among fertile and infertile women in Shahroud. Materials and methods: In this comparative study, 1528 participants (511 infertile and 1017 fertile women) were evaluated using the Enrich Marital Satisfaction Scale. Data were analyzed using chi-square and t-test. Results: A total of 1402 participants (78.7%) had high marital satisfaction. The results show that no significant differences exist between marital satisfaction, marital communication, conflict resolution and idealistic distortion in fertile and infertile women. However, a significant difference was observed between marital satisfaction and job, spouse's job and income in fertile and infertile groups, but the place of residence, education, spouse's education and fertility status showed no significant difference. Conclusion: Results showed that infertility does not reduce marital satisfaction. Since marital satisfaction is moderate in both groups, sex education for people bound to marry and sexual counseling for couples can lead to improved sexual satisfaction. abstract_id: PUBMED:37101648 Complex factors related to marital and sexual satisfaction among couples undergoing infertility treatment: A cross-sectional study. Objectives: This study assesses complex factors related to marital and sexual satisfaction among couples undergoing infertility treatment. Methods: This cross-sectional study was conducted with 140 couples who visited fertility centers in Iran, between September 2015 and July 2016. Data were collected using the Marital and Sexual Satisfaction Questionnaires and analyzed with IBM SPSS 26 software. Results: There was a significant difference between wives and husbands in the MSQ total scores (p = 0.027). However, there was no significant difference between wives and husbands in the SSQ total scores (p = 0.398). Sexual satisfaction and decision-maker in life among wives and husbands were significant predictors of MSQ. The kind of treatment, cause of infertility and BMI among wives and kinds of treatment, cause of infertility, and decision-maker among husbands were also significant predictors of SSQ. Conclusion: The findings of this study showed that there is a difference between the understanding of marital and sexual satisfaction in wives and their husbands. Healthcare providers need to pay more attention to these differences. abstract_id: PUBMED:30089956 Bipolar affective disorder and its impact on various aspects of marital relationship. Bipolar disorder (BD) is a severe mental disorder, which not only has negative consequences on the life of the patient, but also affects the immediate family members. As it often starts early in life, patients who get married often suffer from many negative consequences in their marital life due to the illness. This review focuses on studies which have evaluated marital rates, rate of marital failure, divorce, infertility, and procreation among patients with BD.
In addition, this review evaluates the existing literature pertaining to marital adjustment, marital satisfaction, sexual dysfunction, and sexual satisfaction among patients with BD and their partners/spouses. Data suggest that a very high proportion of patients with BD get married and marital rates are higher for patients with BD when compared with those suffering from schizophrenia. In terms of divorce rates, studies suggest that patients with BD have higher rates of divorce. In terms of fertility rates, studies suggest that compared to those without the illness, the fertility rates among patients with BD are lower. In terms of marital adjustment, results are mixed, with some studies suggesting poorer marital adjustment among patients and their spouses too. In terms of sexual dysfunction, studies suggest the presence of sexual dysfunction among one-third to half of the patients receiving lithium. To conclude, this review suggests that patients with BD experience multiple negative marital and sexual consequences. abstract_id: PUBMED:26972768 The relations between marital quality, social support, social acceptance and coping strategies among the infertile Iranian couples. Objective: Researchers aimed to assess marital quality among the infertile couples undergoing assisted reproductive treatments and their coping strategies, social support and social acceptance. Methods: In a cross-sectional study, 133 infertile couples undergoing assisted reproductive treatments were assessed for marital satisfaction, conflict resolution and marital communication, and coping strategies, using a self-report questionnaire. Also, the level of perceived social support and social acceptance as moderator variables were measured. Data were analyzed using independent t test, Pearson correlation coefficient, and linear regressions, after adjusting for age, cause of infertility and the duration of infertility. Results: No significant difference was found in the use of various coping strategies between couples. The correlations of marital satisfaction, marital communication and conflict resolution with the use of certain coping strategies and the level of perceived social acceptance were significantly positive among women. Also, marital relationships had a positive and significant correlation with the level of perceived social support for men. However, unlike women, the couples' scales were significant for the perceived social support. The use of different coping strategies by men and women had a positive correlation with their perceived social support. Conclusion: Coping strategies used by the infertile couples had an important role in different aspects of their married life in search of marital satisfaction, mediated by the perceived social acceptance for women and social support for men. abstract_id: PUBMED:30847733 Relationship Between Marital Quality, Social Support and Mental Health During Pregnancy. The aim of this study was to investigate the relationship between marital quality and mental health during pregnancy. This cross-sectional study was conducted on 300 Iranian pregnant women. The ENRICH marital satisfaction subscales, levels of domestic violence, perceived social support, as well as depression and anxiety levels were assessed using a questionnaire. AMOS path analysis was used to explore the causal relationship and the mediating effect of social support among the variables of marital quality subscales and mental health.
The results showed that, adjusted for age, the history of infertility and level of anxiety were related to marital satisfaction, while marital relationships and level of depression were related to the level of domestic violence. Sexual satisfaction and the level of perceived social support had a mediating role in these relationships. The study confirmed that marital quality is an important predictor of mental health. Improving marital quality may lead to improvements in mental health. abstract_id: PUBMED:37822291 Mediating role of relational aggression in the association between infertility stigma and marital relationship distress in infertile women. Infertility is a multifaceted problem that can cause significant impairments with emotional, social, and psychological consequences, including problems in the marital relationship. This study aimed to examine the mediating role of relational aggression in the association between infertility stigma and marital relationship distress in infertile women. The research method was descriptive, using path analysis. The statistical population of the study consisted of all infertile women who were referred to infertility centers and obstetrics and gynecology medical centers in Tehran in 2021 (July to October); due to lack of full access to them, 300 women were selected by convenience (available) sampling and participated in the research through an online questionnaire. Data were collected using the Marital Self-Reporting Questionnaire, Infertility Stigma, and Relational Aggression Questionnaire. Data analysis was performed using structural equation modeling. The results showed that the causal model of the relationship between infertility stigma, relational aggression, and marital relationship distress in infertile women was confirmed based on different fit indices. Infertility stigma and relational aggression directly affect the marital distress of infertile women. On the other hand, infertility stigma indirectly affects infertile women's marital distress through relational aggression (P < .05). Therefore, infertility stigma and relational aggression play an important role in marital distress in infertile women, and targeting these two components in psychological therapies can effectively reduce marital distress. Answer: The relationship between infertility and marital benefit is complex and can vary among couples. Some studies suggest that infertility can indeed lead to marital benefit, which is defined as the strengthening of the marriage and bringing partners closer together. For instance, a study found that 25.9% of women and 21.1% of men reported high marital benefit when beginning fertility treatment. Among men, certain coping strategies such as medium use of active-confronting coping and meaning-based coping were significant predictors for high marital benefit. Conversely, keeping infertility a secret, difficult marital communication, and active-avoidance coping were predictors for low marital benefit (PUBMED:16310331).
On the other hand, some research indicates that infertility does not necessarily reduce marital satisfaction, as both fertile and infertile groups showed moderate levels of marital satisfaction (PUBMED:28101115). Moreover, complex factors such as sexual satisfaction, decision-making in life, and the cause of infertility were found to be significant predictors of marital satisfaction among couples undergoing infertility treatment (PUBMED:37101648). In summary, while infertility can lead to marital benefit for some couples, likely due to improved communication and coping strategies, it is not a universal outcome. The impact of infertility on marital benefit appears to be influenced by various factors, including coping strategies, communication, and interventions aimed at enriching the marital relationship.
Instruction: Is there a threshold level of maternal education sufficient to reduce child undernutrition? Abstracts: abstract_id: PUBMED:26297004 Is there a threshold level of maternal education sufficient to reduce child undernutrition? Evidence from Malawi, Tanzania and Zimbabwe. Background: Maternal education is strongly associated with young child nutrition outcomes. However, the threshold level of maternal education that reduces undernutrition in children is not well established. This paper investigates the threshold level of maternal education that influences child nutrition outcomes using Demographic and Health Survey data from Malawi (2010), Tanzania (2009-10) and Zimbabwe (2005-06). Methods: The total number of children (weighted sample) was 4,563 in the Malawi, 4,821 in the Tanzania, and 3,473 in the Zimbabwe Demographic and Health Surveys. Using three measures of child nutritional status (stunting, wasting and underweight), we employ a survey logistic regression to analyse the influence of various levels of maternal education on child nutrition outcomes. Results: In Malawi, 45% of the children were stunted, 42% in Tanzania and 33% in Zimbabwe. Twelve percent of children were underweight in Malawi and Zimbabwe and 16% in Tanzania. The level of wasting was 6% of children in Malawi, 5% in Tanzania and 4% in Zimbabwe. Stunting was significantly (p values < 0.0001) associated with mother's educational level in all three countries. Higher levels of maternal education reduced the odds of child stunting, underweight and wasting in the three countries. The maternal threshold for stunting is more than ten years of schooling. Wasting and underweight have lower threshold levels. Conclusion: These results imply that free primary education in the three African countries may not be sufficient, and policies to keep girls in school beyond primary school hold more promise of addressing child undernutrition. abstract_id: PUBMED:35242991 Maternal education and sibling inequalities in child nutritional status in Ethiopia. In many societies child nutritional status varies between siblings because of parental gender and birth order preferences and differential intra-household resource allocation. While more educated women have been found to improve children's nutrition overall, it is unclear whether they also buffer sibling inequalities in nutritional status. We study the interplay between parental preferences, maternal education, and sibling inequalities in child nutritional status in Ethiopia, the second most populous country in sub-Saharan Africa, with high rates of malnutrition, rapid socio-economic change, urban fertility decline, and low, but increasing, female education. We base our analysis on a pooled sample of the 2011/12, 2013/14, and 2015/16 waves of the Ethiopian Socioeconomic Survey using 8275 observations from 4402 children between the age of six months and 9 years old nested in 1687 households. Results from multilevel and fixed effects models show sizable gender and birth order differences in nutritional status. Boys had a better nutritional status than girls, and earlier born children had a better nutritional status than later born children, both in terms of height-for-age and weight-for-age. More educated mothers buffered sibling inequalities in nutritional status according to birth order, but not according to gender.
The height penalty of being a higher-order child disappeared for children whose mothers had about eight years of education or more (primary school finished/some secondary school). The beneficial impact of maternal education, counteracting some within-family inequalities, calls for continued investments in girls' and women's education. abstract_id: PUBMED:25613979 The post-2015 agenda: staying the course in maternal and child survival. In this article, we draw on available evidence from Countdown to 2015 and other sources to make the case for keeping women and children at the heart of the next development agenda that will replace the Millennium Development Goal (MDG) framework after 2015. We provide a status update on global progress in achieving MDGs 4 and 5 (reduce child mortality and improve maternal health, respectively), showing that although considerable mortality reductions have been achieved, many more women's and children's lives can be saved every day through available, cost-effective interventions. We describe key underlying determinants of poor maternal and child health outcomes and the need for well-coordinated, comprehensive approaches for addressing them, such as introducing a combination of nutrition-specific and nutrition-sensitive interventions to reduce pervasive malnutrition, targeting interventions to the underserved to reduce inequities in access to care, and increasing women's social status through improved access to education and income-earning opportunities. In the wake of population momentum and emergencies such as the recent Ebola outbreak and other humanitarian crises, health systems must be strengthened to be able to respond to these pressures. In conclusion, we underscore that the unfinished business of women's and children's health must be prioritized in the days ahead, and that ending preventable maternal and child deaths is not only a moral obligation but is achievable and essential to sustainable development moving forward. abstract_id: PUBMED:25720451 The role of maternal education in the 15-year trajectory of malnutrition in children under 5 years of age in Bangladesh. Malnutrition in children under 5 years of age (U5s) is a serious public health problem in low- and middle-income countries including Bangladesh. Improved maternal education can contribute effectively to reducing child malnutrition. We examined the long-term impact of maternal education on the risk of malnutrition in U5s and quantified the level of education required for mothers to reduce the risk. We used pooled data from five nationwide demographic and health surveys conducted in 1996-1997, 1999-2000, 2004, 2007 and 2011 in Bangladesh involving 28,941 U5s. A log-binomial regression model was used to examine the association between maternal education (no education, primary, secondary or more) and malnutrition in children, measured by stunting, underweight and wasting, controlling for survey time, maternal age, maternal body mass index, maternal working status, parity, paternal education and wealth quintile. An overall improvement in maternal educational attainment was observed between 1996 and 2011. The prevalence of malnutrition, although decreasing, was consistently high among children of mothers with lower education compared with those of mothers with higher education.
In adjusted models incorporating time effects, children of mothers with secondary or higher education were at lower risk of childhood stunting [risk ratio (RR): 0.86, 95% confidence interval (CI): 0.81, 0.89], underweight (RR: 0.83, 95% CI: 0.78, 0.88) and wasting (RR: 0.82, 95% CI: 0.74, 0.91) compared with children of mothers with no education. We demonstrated the importance of promoting women's education at least up to the secondary level as a means to tackle malnutrition in Bangladesh. abstract_id: PUBMED:21959924 Prioritizing maternal and child health in independent South Sudan. With its independence secured on 9th July 2011, the Republic of South Sudan faces a daunting task in improving public health and primary care in one of the poorest countries in the world. Very high maternal and child mortality rates must be a major concern for the new national government and for the many international agencies working in the country. Poor maternal health outcomes are primarily due to poor prenatal, delivery and postnatal care services in health facilities, coupled with low literacy, widespread poverty, and poor nutrition among the general population. Child mortality is the result of widespread malnutrition, pneumonia, malaria, vaccine-preventable diseases and diarrheal diseases. National responses to HIV and AIDS with international assistance have been encouraging, with relatively low rates of infection. This paper explores barriers and identifies opportunities available to work toward achieving the targets for Millennium Development Goals (MDGs) 5 and 4, to reduce maternal mortality from its current rate of 2,054 deaths per 100,000 live births and child mortality (currently 135 deaths per 1,000 live births), respectively, in the new nation. National and international organizations have a social responsibility to mobilize efforts to focus on maternal and child health and nutrition issues, targeting the worst-affected regions to improve access to primary care and obstetrical services. Initiatives are needed to build up community access to primary care with a well-supervised community health worker program, as well as to train mid-level management capacity, with higher levels of funding from national and international sources to promote public health than are currently available in the new republic. abstract_id: PUBMED:16122090 Child nutritional status and maternal factors in an urban slum in Nairobi, Kenya. Objective: To assess the relationship between maternal factors and child nutritional status among children aged 6-36 months. Design: Cross-sectional descriptive survey. Setting: Urban slum settlement in Nairobi, Kenya. Subjects: This study included a random sample of 369 households of mothers with children aged 6-36 months at the time of the study. Results: Maternal factors that showed a significant positive association with at least one of the three child nutritional status indicators (height for age, weight for age and weight for height) were birth spacing, parity, maternal education level and mother's marital status. Child spacing and parity emerged as the most important predictors of stunting among study children. Maternal nutritional status was also shown to be positively associated with child nutritional status. Maternal ill health had a negative effect on child nutritional status. Conclusion: Maternal factors are an underlying cause of childhood malnutrition.
abstract_id: PUBMED:27501820 The association of parental education with childhood undernutrition in low- and middle-income countries: comparing the role of paternal and maternal education. Background: Most existing research on the association of parental education with childhood undernutrition focuses on maternal education and often ignores paternal education. We systematically investigate differences in maternal and paternal education and their association with childhood undernutrition. Methods: One hundred and eighty Demographic and Health Surveys from 62 countries performed between 1990 and 2014 were analysed. We used linear-probability models to predict childhood undernutrition prevalences, measured as stunting, underweight and wasting, for all combinations of maternal and paternal attainment in school. Models were adjusted for demographic and socio-economic covariates for the child, mother and household, country-level fixed effects and clustering. Additional specifications adjust for local area characteristics instead of country fixed effects. Results: Both higher maternal and paternal education levels are associated with lower childhood undernutrition. In regressions adjusted for child age and sex as well as country-level fixed effects, the association is stronger for maternal education than for paternal education when their combined level of education is held constant. In the fully adjusted models, the observed differences in predicted undernutrition prevalences are strongly attenuated, suggesting a similar importance of maternal and paternal education. These findings are confirmed by the analysis of composite schooling indicators. Conclusions: We find that paternal education is similarly important for reducing childhood undernutrition as maternal education and should therefore receive increased attention in the literature. abstract_id: PUBMED:37444102 Maternal Employment and Child Malnutrition in Ecuador. Background: This paper estimates the causal impact of maternal employment on childhood malnutrition status in Ecuador to understand the trade-off between the time mothers devote to work and the time they dedicate to child-caring activities. Methods: We use the instrumental variables (IV) approach and exogenous cantonal variation in maternal labor market conditions to account for the potential endogeneity of mothers' employment. The analysis employs the Ecuadorian National Health and Nutrition Survey 2018 and the Living Conditions Survey 2014. Results: The IV estimations indicate that maternal employment increases the probability of having stunted children by between 4.2 and 18.1 percent, while no significant effect is found in the case of children suffering from wasting, being underweight, or being overweight. The effect of maternal employment on stunting is stronger among mothers with high education and living in high-income households. Inconclusive effects of mothers' overweight status are reported. The results are robust to several robustness checks. Conclusions: Overall, our findings suggest that the additional income that a working mother may obtain (the income effect) does not offset the loss of time available for direct childcare (the time constraint) in terms of child health status, and this effect is even more apparent for more affluent and more educated mothers. 
Government interventions, including effective conditional cash transfers and/or in-kind family policies, intended to reduce the cost of raising children among vulnerable families, appear to be aligned with our findings. abstract_id: PUBMED:35784199 An Evolutionary Model of "Sexual Conflict" Over Women's Age at Marriage: Implications for Child Mortality and Undernutrition. Background: Early women's marriage is associated with adverse outcomes for mothers and their offspring, including reduced human capital and increased child undernutrition and mortality. Despite preventive efforts, it remains common in many populations and is often favored by cultural norms. A key question is why it remains common, given such penalties. Using an evolutionary perspective, a simple mathematical model was developed to explore women's optimal marriage age under different circumstances, if the sole aim were to maximize maternal or paternal lifetime reproductive fitness (surviving offspring). Methods: The model was based on several assumptions, supported by empirical evidence, regarding relationships between women's marital age and parental and offspring outcomes. It assumes that later marriage promotes women's autonomy, enhancing control over fertility and childcare, but increases paternity uncertainty. Given these assumptions, optimal marriage ages for maximizing maternal and paternal fitness were calculated. The basic model was then used to simulate environmental changes or public health interventions, including shifts in child mortality, suppression of women's autonomy, or promoting women's contraception or education. Results: In the basic model, paternal fitness is maximized at a lower women's marriage age than is maternal fitness, with the paternal optimum worsening child undernutrition and mortality. A family planning intervention delays marriage age and reduces child mortality and undernutrition, at a cost to paternal but not maternal fitness. Reductions in child mortality favor earlier marriage but increase child undernutrition, whereas ecological shocks that increase child mortality favor later marriage but reduce fitness of both parents. An education intervention favors later marriage and reduces child mortality and undernutrition, but at a cost to paternal fitness. Efforts to suppress maternal autonomy substantially increase fitness of both parents, but only if other members of the household provide compensatory childcare. Conclusion: Early women's marriage maximizes paternal fitness despite relatively high child mortality and undernutrition, by increasing fertility and reducing paternity uncertainty. This tension between the sexes over the optimal marriage age is sensitive to ecological stresses or interventions. Education interventions seem most likely to improve maternal and child outcomes, but may be resisted by males and their kin as they may reduce paternal fitness. abstract_id: PUBMED:26744152 Pakistan and the Millennium Development Goals for Maternal and Child Health: progress and the way forward. The world has made substantial progress in reducing maternal and child mortality, but many countries are projected to fall short of achieving their Millennium Development Goals (MDGs) 4 and 5 targets. The major objective of this paper is to examine progress in Pakistan in reducing maternal and child mortality and malnutrition over the last two decades.
Data from recent national and international surveys suggest that Pakistan lags behind on all of its MDGs related to maternal and child health and, for some indicators especially related to nutrition, the situation has worsened from the baseline of 1990. Progress in addressing key social determinants such as poverty, female education and empowerment has also been slow, and unregulated population growth has further compromised progress. There is a need to integrate the various sectors and programmes to achieve the desired results effectively and efficiently, as many of the determinants and influencing factors are outside the health sector. Answer: The research suggests that there is indeed a threshold level of maternal education that is associated with reduced levels of child undernutrition. In Malawi, Tanzania, and Zimbabwe, higher levels of maternal education were found to reduce the odds of child stunting, underweight, and wasting, with the threshold for stunting being more than ten years of schooling, while wasting and underweight had lower threshold levels (PUBMED:26297004). Similarly, in Ethiopia, the height penalty of being a higher-order child disappeared for children whose mothers had about eight years of education or more (PUBMED:35242991). In Bangladesh, maternal education up to the secondary level was shown to be important in reducing the risk of malnutrition in children under 5 years of age (PUBMED:25720451). However, the relationship between maternal education and child nutritional status is complex and influenced by various factors. For instance, maternal employment in Ecuador was found to increase the probability of having stunted children, suggesting that the additional income from a working mother may not offset the loss of time available for direct childcare (PUBMED:37444102). Moreover, the role of paternal education should not be overlooked, as it has been found to be as important for reducing childhood undernutrition as maternal education (PUBMED:27501820). Overall, the evidence indicates that promoting women's education, particularly beyond primary school, is crucial for addressing child undernutrition. However, the specific threshold level of education may vary depending on the context and other social determinants such as poverty, female empowerment, and access to health services (PUBMED:16122090, PUBMED:21959924, PUBMED:26744152).
Instruction: Class A1 gestational diabetes: a meaningful diagnosis? Abstracts: abstract_id: PUBMED:8336875 Class A1 gestational diabetes: a meaningful diagnosis? Objective: To compare pregnancy outcomes in women diagnosed as having class A1 gestational diabetes with those of a group with a normal 3-hour glucose tolerance test (GTT) to assess morbidities attributable to glucose intolerance. Methods: A selective 50-g glucose screening test identified pregnant women who then received a 3-hour GTT. Over a 16-month period, 159 women were diagnosed as having class A1 gestational diabetes according to the National Diabetes Data Group criteria. During the latter 12 months of this time period, 151 women who had a normal GTT result were identified for comparison. Results: There were statistically significant differences in age and the development of peripartum hypertension in women with class A1 gestational diabetes compared with the normal 3-hour GTT group. There were no significant differences in any neonatal outcome variable, including the percentage delivering large for gestational age (LGA) neonates, in women with A1 diabetes compared to controls. Overall, 111 (36%) of the 310 neonates were classified as LGA, a rate more than double that in the singleton population in our hospital. Maternal weight, parity, and a history of a previous macrosomic infant were significantly associated with LGA outcome. Mean maternal weight was the same in the two GTT groups, implying an independent effect on fetal size. Obstetric interventions were not significantly different between the groups, so differences in intervention could not account for the lack of difference in outcome variables. The impact of dietary counseling in the class A1 diabetic women is also an unlikely explanation for the lack of differences in outcome. Within the normal-GTT group, women with one abnormal 3-hour value had a frequency of LGA infants similar to that of women with all normal 3-hour GTT values. These results suggest that there is a selection effect of screening for glucose intolerance that may relate more to other risk factors for LGA outcome than to glucose intolerance. Maternal obesity is an independent and more potent risk factor for large infants than is glucose intolerance. Conclusion: The diagnosis of class A1 gestational diabetes is not significantly associated with obstetric and perinatal morbidities. A nondiscriminating diagnostic test undermines the validity of population screening for glucose intolerance. abstract_id: PUBMED:34774643 Gestational weight gain and the risk of gestational diabetes mellitus: A latent class trajectory analysis using birth cohort data. Aims: To explore trajectories of gestational weight gain (GWG) before diagnosis and their association with risk of gestational diabetes mellitus (GDM). Methods: A population-based retrospective cohort study including 37,060 women with live singleton births was conducted between 2013 and 2019 in China. A latent class trajectory model (LCTM) was used to identify GWG trajectories, and Poisson regression with robust error estimates was used to estimate the risk ratio (RR) of GDM. Results: Among the 37,060 participants, 25.47% of women developed GDM. Two trajectories of GWG before diagnosis of GDM were identified: non-excessive weight gain (94.31%) and excessive weight gain (5.69%). Women with an excessive GWG trajectory before diagnosis had a significantly increased risk of developing GDM (32.8% higher; aRR = 1.328, 95% CI: 1.252-1.409, P < 0.001) compared with the non-excessive GWG trajectory.
Women with an excessive GWG trajectory also had a higher risk of macrosomia (aRR = 1.476, 95% CI: 1.307-1.666, P < 0.001) and cesarean delivery (aRR = 1.126, 95% CI: 1.081-1.174, P < 0.001). The impact of an excessive GWG trajectory on GDM was greater among pre-pregnancy normal-weight women compared with overweight/obese or underweight women. Conclusion: Women with an excessive GWG trajectory before diagnosis had a significantly higher risk of GDM and GDM-related adverse outcomes, and pre-pregnancy normal-weight women with an excessive GWG trajectory should also be of concern. abstract_id: PUBMED:38264000 Perinatal outcomes in women with class IV obesity compared to women in the normal or overweight body mass index categories: A population-based cohort study in Qatar. Background: The prevalence of childhood and adult obesity is rising exponentially worldwide. Class IV obesity (body mass index, BMI ≥50 kg/m2) is associated with a higher risk of adverse perinatal outcomes. This study compared these outcomes between women with class IV obesity and women in the normal or overweight categories during pregnancy. Methods: A retrospective cohort study was performed in Qatar, including women having singleton live births beyond 24 weeks of gestation, classified into two groups: class IV obesity and normal/overweight (BMI between 18.5 and 30.0 kg/m2). The outcome measures included the mode of delivery, development of gestational diabetes and hypertension, fetal macrosomia, small-for-dates babies, preterm birth and neonatal morbidity. Adjusted odds ratios (aOR) with 95% confidence intervals (95% CI) were determined using multivariable logistic regression models. Results: A total of 247 women with class IV obesity were compared with 6797 normal/overweight women. Adjusted analysis showed that women with class IV obesity had 3.2 times higher odds of cesarean delivery (aOR: 3.19, CI: 2.26-4.50), 3.4 times higher odds of gestational diabetes (aOR: 3.39, CI: 2.55-4.50), 4.2 times higher odds of gestational hypertension (aOR: 4.18, CI: 2.45-7.13) and neonatal morbidity (aOR: 4.27, CI: 3.01-6.05), and 6.5 times higher odds of macrosomia (aOR 6.48, CI 4.22-9.99). Conclusions: Class IV obesity is associated with more adverse perinatal outcomes compared with the normal or overweight BMI categories. The study results emphasized the need for specialized antenatal obesity clinics to address the associated risks and reduce complications. abstract_id: PUBMED:16836615 The presence of gestational diabetes is associated with increased detection of anti-HLA-class II antibodies in the maternal circulation. Problem: Gestational diabetes (GD) may be associated with temporarily reduced immune tolerance toward alloantigens for the time of pregnancy. The aim of this study was to assess anti-HLA-class I and -II antibodies as markers for an aberrant immunostimulation in women with GD. Method Of Study: The percentage of anti-HLA-class I and -II antibodies was estimated in women with GD, normal term delivery and fetal distress, the latter confirmed by demonstrating low cord blood pH for this patient group. These antibodies may cross the placental barrier and cause interleukin-6 (IL-6) release from fetal monocytes by cross-linking monocytes with antibody-loaded cells. Therefore, we estimated the percentage of IL-6-positive monocytes in the fetal circulation of these three patient groups. Results: We found a significantly increased percentage of anti-HLA-class II antibodies in the circulation of women with GD.
In comparison with women with normal term delivery, a significantly increased percentage of IL-6-positive monocytes was detected for women with GD and for women with fetal distress. Significantly decreased cord blood pH values were detected for neonates born in the presence of fetal distress but not for neonates born in the presence of GD. Conclusions: Our results suggest that GD is associated with an increased humoral immune response against HLA-class II antigens. abstract_id: PUBMED:2006934 Histocompatibility antigen subtypes in black women with class A1 or class GB diabetes mellitus. Insulin-dependent diabetes mellitus is associated with an increased frequency of certain histocompatibility antigens located on chromosome six, the most common types being B-8, B-15, DR-3, DR-4, and DR-7. We therefore theorized that screening for these subtypes may allow the identification of those women with gestational diabetes who will remain euglycemic on dietary modification (class A1) compared with those who will require insulin to achieve euglycemia (class GB). From 1982 to 1987, 228 black women with gestational diabetes were screened for the above histocompatibility antigens. As theorized, certain histocompatibility antigen subtypes were more common in women with class GB gestational diabetes mellitus: DR-2 (41.8% versus 23.7%, p = 0.015), B-15 (p = 0.07), and DR-3 (p = 0.08). However, because of the low sensitivity (42%), specificity (75%), and positive predictive value (36%), this test is impractical in the clinical management of women with gestational diabetes mellitus. abstract_id: PUBMED:37747284 Trajectories of cognitive reactivity and its predictive value on postpartum depression in Chinese women: a latent class growth modeling analysis. Many women experience postpartum depression (PPD) after giving birth. How to recognize and intervene in high-risk PPD women early and effectively remains unknown. Our objective is to describe the latent trajectory groups of cognitive reactivity (CR) in perinatal women and their relationship to demographic and disease-related factors, as well as to investigate the associations with PPD. Data came from 321 perinatal women who were evaluated in urban tertiary hospitals in China at three time points: 32-35 weeks of pregnancy, 1 week postpartum, and 6 weeks postpartum. Latent class growth modeling was used to identify the trajectory patterns of CR, and logistic regression was used to explore the association between demographic and disease-related factors, CR trajectories, and depression. Three trajectory groups were identified: the continuing deterioration group (17.2%), the postpartum deterioration group (22.1%), and the consistent resilient group (60.7%). Participants with a bachelor's degree or higher and with a gestational diabetes diagnosis were more likely to be in the continuing deterioration group. Those who were from only-child families were more likely to be in the postpartum deterioration group. Women in the continuing deterioration group and postpartum deterioration group were more likely to experience PPD. Targeted interventions should be developed based on the trajectory group of CR. abstract_id: PUBMED:2006933 Autoantibodies in black women with class A1 or class GB diabetes mellitus. No marker except repeated fasting glucose determinations has proven useful to ascertain prospectively which women with gestational diabetes mellitus will remain euglycemic by diet modification or will require insulin therapy.
We screened 183 black women with gestational diabetes mellitus to determine if the presence of islet cell, mitochondrial, nuclear, DNA, parietal cell, smooth muscle, thyroid microsomal, thyroid thyroglobulin autoantibodies, or rheumatoid factor predicted the need for insulin therapy to maintain euglycemia in women with gestational diabetes mellitus. One hundred forty-two women maintained normal fasting plasma glucose levels with dietary modifications, and 41 required institution of split-dose insulin therapy. We found no significant differences in the prevalence of these autoantibodies in black women with Class GB versus Class A1 diabetes mellitus. We conclude that screening for autoantibodies in women with gestational diabetes mellitus is not useful in determining which patients will subsequently require insulin therapy during their pregnancies. abstract_id: PUBMED:29016493 Early Pregnancy Diabetes Screening and Diagnosis: Prevalence, Rates of Abnormal Test Results, and Associated Factors. Objective: To evaluate the prevalence of early diabetes screening in pregnancy, rates of abnormal diabetes test results before 24 weeks of gestation, and factors associated with early diabetes screening. Methods: This was a retrospective cohort study of all singleton deliveries from 2012 to 2014 among diverse clinical practices at a large academic medical center. We assessed rates of early (less than 24 weeks of gestation) and routine (at or beyond 24 weeks of gestation) diabetes screening, with abnormal test results defined using the Carpenter-Coustan criteria, a 50-g glucose challenge test result greater than 200 mg/dL, or a hemoglobin A1C level greater than 6.5%. Univariate and multivariate analyses were used to evaluate clinical and demographic determinants of screening and diagnosis. Results: Overall, 1,420 of 11,331 (12.5%) women underwent early screening. Increasing body mass index (BMI) category, race, public insurance, history of gestational diabetes mellitus, a family history of diabetes, and chronic hypertension were associated with early screening. Early screening rates rose with increasing BMI category, but only 268 of 551 (48.6%) women with class III obesity underwent early screening. Among those screened early, 2.0% of normal-weight women, 4.0% of overweight women, 4.2% of class I obese women, 3.8% of class II obese women, and 9.0% of class III obese women had abnormal early test results (P<.001). Conclusion: Early diabetes screening is used inconsistently, and many women with risk factors do not undergo early screening. A significant proportion of women with class III obesity will test positive for gestational diabetes mellitus before 24 weeks of gestation, and studies are urgently needed to assess the effect of early diabetes screening and diagnosis on perinatal outcomes in high-risk women. abstract_id: PUBMED:15603871 Different HLA-DR-DQ and MHC class I chain-related gene A (MICA) genotypes in autoimmune and nonautoimmune gestational diabetes in a Swedish population. The genetic susceptibility for gestational diabetes (GDM) was estimated by comparisons of genotypes within human leukocyte antigen (HLA) and major histocompatibility complex class I chain-related gene A (MICA) in 199 women with GDM and 213 healthy women.
At least one of ICA, glutamic acid decarboxylase antibodies, or islet cell antigen-2 antibodies/tyrosine phosphatase antibodies was found in 6.0% (12/199) of the women with GDM; these cases were considered autoimmune GDM, whereas the remaining 187 were considered nonautoimmune GDM. HLA genotyping was done with polymerase chain reaction and sequence-specific oligonucleotides. MICA polymorphism was determined with polymerase chain reaction and fragment size determination. HLA-DR3-DQ2/x or DR4-DQ8/x and MICA5.0/5.1 were more frequent in autoimmune GDM compared with controls (92% versus 46% and 42% versus 13%, respectively) and conferred increased risk (odds ratio [OR] = 13; 95% confidence interval [CI] 1.7-104 and OR = 4.7; 95% CI 1.4-16, respectively). Four other genotypes were more frequent in nonautoimmune GDM compared with controls: HLA-DR7-DQ2/y, 24% versus 14%; DR9-DQ9/y, 9.6% versus 1.9%; DR14-DQ5/y, 7.5% versus 0.94%; and MICA5.0/z, 24% versus 13%; these conferred increased risk: OR = 2.0; 95% CI 1.2-3.4, OR = 5.6; 95% CI 1.8-17, OR = 8.5; 95% CI 1.9-38, and OR = 2.0; 95% CI 1.2-3.4, respectively. We concluded that autoimmune diabetes with onset during pregnancy is associated with the type 1 diabetes-associated genotypes and also with MICA5.0/5.1, whereas DR7-DQ2/y, DR9-DQ9/y, DR14-DQ5/y, and MICA5.0/z are risk factors for nonautoimmune GDM. abstract_id: PUBMED:10404800 Predictive value of human leukocyte antigen class II typing for the development of islet autoantibodies and insulin-dependent diabetes postpartum in women with gestational diabetes. Gestational diabetes mellitus (GDM) is a risk factor for the development of insulin-dependent diabetes mellitus (IDDM) and noninsulin-dependent diabetes mellitus postpartum. To evaluate whether there is any association of human leukocyte antigen (HLA) class II alleles (DR and DQ) with GDM and the postpartum development of IDDM, we analyzed 184 women with GDM from Germany for HLA class II alleles, islet autoantibodies [islet cell autoantibodies (ICA), glutamic acid decarboxylase autoantibodies (GADA), and protein tyrosine phosphatase IA-2 autoantibodies (IA-2A)], and the postpartum development of diabetes. No elevation in the frequency of any HLA class II alleles was observed in GDM patients compared to 254 nondiabetic unrelated subjects. DR3 allele frequency was significantly increased in 43 women with islet autoantibodies [corrected P value (Pc) = 0.02], in particular in those with GADA (Pc = 0.002), or in the 24 women who developed IDDM postpartum (Pc = 0.005). In women with GADA, DR4 and DQB1*0302 were significantly elevated (Pc = 0.009). Twenty-five (59.5%) islet antibody-positive women and 17 (74%) women who developed IDDM postpartum had a DR3- or DR4-containing genotype. The cumulative risk of developing IDDM within 2 yr postpartum in GDM women with either DR3 or DR4 was 22% compared to 7% in women without those alleles (P = 0.02) and rose to 50% in the DR3- or DR4-positive women who had required insulin during pregnancy (P = 0.006). Combining the determination of susceptible HLA alleles (DR3, DR4) with islet autoantibody measurement increased the sensitivity of identifying GDM women developing postpartum IDDM to 92%, but did not improve risk assessment above that achieved using GADA measurement alone, which was the strongest predictor of IDDM.
These results indicate that women with GDM who have islet autoantibodies at delivery or develop IDDM postpartum have HLA alleles typical of late-onset type 1 diabetes, and that both HLA typing and islet antibodies can predict the development of postpartum IDDM. Answer: The diagnosis of class A1 gestational diabetes (GDM) has been questioned regarding its significance in predicting obstetric and perinatal morbidities. A study comparing pregnancy outcomes between women with class A1 GDM and those with a normal 3-hour glucose tolerance test (GTT) found no significant differences in neonatal outcomes, including the percentage of large for gestational age (LGA) neonates. The study suggested that maternal obesity is a more potent risk factor for large infants than glucose intolerance, and the diagnosis of class A1 GDM was not significantly associated with obstetric and perinatal morbidities (PUBMED:8336875). In contrast, another study found that excessive gestational weight gain (GWG) before the diagnosis of GDM was associated with a significantly increased risk of developing GDM and related adverse outcomes, such as macrosomia and cesarean delivery. This study emphasized that even pre-pregnancy normal weight women with excessive GWG should be monitored for GDM (PUBMED:34774643). Furthermore, a study on women with class IV obesity (BMI ≥50 kg/m2) showed that they had significantly higher odds of adverse perinatal outcomes, including gestational diabetes, compared to women in the normal or overweight BMI categories. This indicates that obesity is a significant risk factor for GDM and its complications (PUBMED:38264000). Other studies have explored the immunological and genetic aspects of GDM. One study found an increased presence of anti-HLA-class II antibodies in the maternal circulation of women with GDM, suggesting an aberrant immunostimulation (PUBMED:16836615). Genetic studies have identified certain HLA-DR-DQ and MICA genotypes associated with autoimmune and nonautoimmune GDM, indicating a genetic predisposition to the condition (PUBMED:15603871; PUBMED:10404800). In summary, while the diagnosis of class A1 GDM may not be significantly associated with certain obstetric and perinatal morbidities, other factors such as excessive GWG and obesity, particularly class IV obesity, are important risk factors for GDM and its complications. Additionally, immunological and genetic factors may play a role in the development of GDM, suggesting that the condition is multifactorial and that class A1 GDM may still be a meaningful diagnosis when considered alongside other risk factors.
Instruction: Role of male factor in early recurrent embryo loss: do antioxidants have any effect? Abstracts: abstract_id: PUBMED:18077859 Early embryo death: does the male factor play a role? Objective: To discuss the possible role of the male factor in early embryo death. Method: A detailed bibliographic review has been put together to establish which alterations in spermatozoa can be associated with early embryo death. Results: Before the fusion between plasma membranes of the sperm and the oocyte occurs, both germ cells must undergo a maturation process that allows successful fertilization and embryo development. The study of couples with early embryo loss is usually approached from the side of the woman due to the obvious relationship that exists between the female and the developing embryo. However, it is not illogical to suppose that a genetic or epigenetic alteration of the sperm could have important consequences for these losses, given the necessary contribution of the male gamete not only to embryonic but also to placental development. On the other hand, spermatozoa have certain characteristics, such as highly compacted DNA; they undergo apoptosis, and the seminal plasma contains antioxidants that protect the structural and functional integrity of the germ cell. These factors assure fertilization and embryo development. Nevertheless, epigenetic alterations of the sperm such as altered chromatin packing, mistakes in imprinting, absence or alteration of the centrosome, telomeric shortening and absence of sperm RNA could affect functions, leading to early embryo loss. Conclusions: Knowledge concerning the sperm's contribution prior to embryo development will provide the basis for better understanding and for possible diagnosis and treatment of diverse reproductive alterations in men that could impede embryo development. abstract_id: PUBMED:18829003 Role of male factor in early recurrent embryo loss: do antioxidants have any effect? Objective: To evaluate whether increasing antioxidant intake in men with high levels of DNA damage or lipid peroxidation improves gestational results in couples with a history of recurrent embryo loss. Design: Descriptive study (case series). Setting: Early recurrent embryo loss program at the University of Antioquia, Medellín, Colombia. Patient(s): Seventeen men whose spouses had a history of two or more embryo losses before 12 weeks of gestation. Intervention(s): Male partners with an increased DNA fragmentation index (%DFI) or high thiobarbituric acid reactive substances (TBARS) were instructed to consume a diet rich in antioxidants or commercial multivitamins containing beta-carotene, vitamin C, vitamin E, and zinc for at least 3 months. Main Outcome Measure(s): Pregnancy outcome was recorded in the spouses of men with increased %DFI or TBARS who received antioxidant supplementation. Results: Of the 17 men, 9 (53%) presented with an increased %DFI or TBARS. They were started on an antioxidant supplementation regimen. The spouses of six of these nine men became pregnant. All couples whose male partners accepted antioxidant supplementation achieved a successful pregnancy. Conclusions: Our study demonstrates the benefits of an increased intake of antioxidant-rich food or antioxidant supplements by men who show high levels of sperm DNA fragmentation or lipid peroxidation, which could result in an improvement in gestational outcomes in couples with a history of recurrent embryo losses.
abstract_id: PUBMED:2426377 Clinical significance of early pregnancy factor. We studied the clinical significance of Early Pregnancy Factor (EPF), which was detected at a very early stage after fertilization in maternal serum, using a new stable assay system. The results obtained were as follows: EPF was useful for cyesiognosis at the earliest stage, as EPF was detected earlier than beta-hCG. Therefore, the detection of EPF makes it possible to supply the first information concerning pregnancy. EPF was also useful in judging the potential for fertilization of gametes, especially spermatozoa, and various other factors. Differentiation between obstruction of fertilization and that of implantation was made possible by measuring EPF. A high incidence of embryo loss was suggested. We also suggested the importance of the serum progesterone level in the preimplantation period as one of the many factors involved in embryo loss. EPF was also able to contribute to the prognostic diagnosis of abortion in early pregnancy, for it was a very sensitive marker of a viable embryo. These findings showed that EPF supplied new and very useful information as a marker of fertilization and a viable embryo, and it possessed clinical significance in the treatment of sterility. abstract_id: PUBMED:11076092 Early pregnancy factor activity in serum of patients with surgical abortion. Problem: The presence of early pregnancy factor (EPF) has been repeatedly confirmed as indispensable to successful pregnancy. However, little has as yet been reported about how surgical abortion would affect EPF activity, owing to the induced embryo loss. The aim of this study was to pursue this among a large number of patients available in the People's Republic of China. Method: Sera from aborters were collected before surgical abortion and again on the 3rd, 5th and 7th day after treatment. EPF activity was detected by the rosette inhibition assay. Results: Before surgical abortion, the mean level of EPF in pregnancy sera was about the same as that of the positive control. After surgical abortion, the EPF level declined rapidly for the first 3 days and then dropped gradually within the negative control range after 5-7 days. Discussion: Quantitative study of EPF activity over time after surgical abortion further supports the use of EPF activity and its rate of change as a reliable index for monitoring embryonic care and the development of normal pregnancy. abstract_id: PUBMED:29556954 Role of chemerin/CMKLR1 in the maintenance of early pregnancy. Chemerin is a cytokine that has attracted much attention in the study of reproduction. This study aimed to explore the effects of chemerin and its receptor chemokine-like receptor 1 (CMKLR1) on the maintenance of early pregnancy. The expression levels of chemerin and CMKLR1 in the decidua tissues of 20 early normal pregnant women and 20 early spontaneous abortion women were examined by Western blot and real-time polymerase chain reaction analyses. A CMKLR1 receptor antagonist (α-NETA) was then injected intrauterinely into a normal pregnant mouse model to assess its effect on the outcome of pregnancy and the phosphorylation rate of ERK1/2 in decidua tissues. We found that the expression level of chemerin in women who had experienced early spontaneous abortion was lower than in those who had experienced normal early pregnancy (P < 0.01); conversely, CMKLR1 expression was higher in the former than in the latter (P < 0.01).
In a pregnant-mouse model, the embryo resorption rate of the α-NETA group was higher than that in the negative control group (61.5% vs. 10.8%) (P < 0.001). Compared with the control group, ERK1/2 phosphorylation in decidua tissues decreased in the α-NETA-treated group (P < 0.01). These results suggested that the inhibition of the chemerin/CMKLR1 signaling pathway can lead to the abortion of mouse embryos, and that chemerin/CMKLR1 may play an important role in the maintenance of early pregnancy, possibly by regulating ERK1/2 phosphorylation. abstract_id: PUBMED:37033974 PlGF/FLT-1 deficiency leads to reduced STAT3-C/EBPβ signaling and aberrant polarization in decidual macrophages during early spontaneous abortion. Introduction: Dysregulated macrophage polarization (excessive M1-like or limited M2-like macrophages) in the early decidua contributes to allogeneic fetal rejection and thus early spontaneous abortion. However, the modulators of M1/M2 balance at the early maternal-fetal interface remain mostly unknown. Methods: First-trimester decidual tissues were collected from normal pregnant women undergoing elective pregnancy terminations and patients with spontaneous abortion. We measured the expression of placental growth factor (PlGF) and Fms-like-tyrosine-kinase receptor 1 (FLT-1), and characterized the profiles of macrophages in decidua. Notably, we investigated the effect of recombinant human PlGF (rhPlGF) on decidual macrophages (dMφs) from normal pregnancy and revealed the underlying mechanisms both in vitro and in vivo. Results: The downregulated expression of PlGF/FLT-1 may result in spontaneous abortion by inducing the M1-like deviation of macrophages in human early decidua. Moreover, the CBA/J×DBA/2 abortion-prone mice displayed a lower FLT-1 expression in uterine macrophages than did CBA/J×BALB/c control pregnant mice. In in vitro models, rhPlGF treatment was found to drive the M2-like polarization of dMφs via the STAT3/CEBPB signaling pathway. These findings were further supported by a higher embryo resorption rate and uterine macrophage dysfunction in Pgf knockout mice, in addition to the reduced STAT3 transcription and C/EBPβ expression in uterine macrophages. Discussion: PlGF plays a key role in early pregnancy maintenance by skewing dMφs toward an M2-like phenotype via the FLT-1-STAT3-C/EBPβ signaling pathway. Excitingly, our results highlight a rationale for PlGF as a promising target to prevent early spontaneous abortion. abstract_id: PUBMED:9196793 A study of early pregnancy factor activity in preimplantation. Problem And Method: Early pregnancy factor (EPF), an immunosuppressive substance, which appears in pregnant women's sera 48 h after fertilization, is a kind of pregnancy-specific protein. To determine whether EPF activity could be a super early indicator of pregnancy, we used the rosette inhibition assay to detect EPF activity in sera collected from 70 women intending to conceive, 2-7 days after ovulation as monitored by ultrasonography. Simultaneously, we selected 40 non-pregnant sera and 12 early-pregnant sera as negative control and positive control, respectively. Results: The results of this study demonstrated that EPF activity was detected in the sera of 35 of the 70 women within 2-7 days after ovulation; follow-up showed that 28 of these 35 were pregnant, while 7 were not, possibly due to either false positive results or embryo loss because of preimplantation failure, thus resulting in no pregnancy.
The other 35 of the 70 had no EPF activity; follow-up showed that 34 of them were not pregnant, but one case became pregnant, which was a false negative result. Our study showed that diagnosis of super early pregnancy could be made by detecting EPF activity in maternal serum within the time of preimplantation. The accuracy of pregnancy diagnosis by this method is 88.6%, with a false negative rate of 3.4% and a false positive rate of 17.1%. The beta-HCG level was measured from the above 70 women's sera for comparison with EPF activity. All of the sera collected 2-6 days following ovulation indicated that there were lower beta-HCG values in very early pregnancy (<5 mIU/ml). On the seventh day after ovulation, EPF activity was detected in 11 out of 15 sera, with only 2 of them showing a beta-HCG level that reached or slightly surpassed the early pregnancy diagnostic level (5 mIU/ml and 5.4 mIU/ml, respectively). This demonstrated that beta-HCG is not the earliest signal of pregnancy; rather, EPF activity appears 2-6 days earlier than beta-HCG. We measured the progesterone level of the 48 sera from the 70 collected above within 2-7 days postovulation and found that most of them reached the level of progesterone in the luteal phase (7.5-98.3 nmol/L). This indicated that ovulation had taken place in these women, which was in accordance with observations by ultrasonography. Conclusions: Our study showed that super early pregnancy could be diagnosed with an accuracy of 88.6% by detecting EPF activity in maternal serum within 2-7 days after ovulation. This offers a basis for pregnancy diagnosis for women who attempt to terminate their pregnancy safely or who conceive unexpectedly, and it contributes to family planning. abstract_id: PUBMED:1526259 Early pregnancy factor (EPF) as a marker for the diagnosis of subclinical embryonic loss. The validation of EPF as a possible correlate of early fertilization has made it possible to study and detect fertilization of the ovum in normal fertile women (during the luteal phase) and also in women with infertility, where the fertilization of the ovum may not be affected but there may be impairment in early embryonic development, which results in early embryo loss or subclinical embryo loss. Our results have suggested that using EPF as a marker, we could detect subclinical embryonic loss in 57.8% of the infertile women where more than one menstrual cycle was studied and the blood was collected 4-7 days after ovulation. After the missed period, 80% of the patients who were negative for EPF but positive for hCG had spontaneous abortions. It would be interesting to study how EPF behaves as a marker to detect subclinical embryonic loss in diverse pathological situations such as recurrent abortions, parental age and translocation carrier parents. abstract_id: PUBMED:19559676 Knockout of the regulatory factor X1 gene leads to early embryonic lethality. The biological function of regulatory factor X1 (RFX1), the prototype member of the transcription factor RFX family, is not clear. We used a gene trap technique to disrupt the expression of RFX1 in mice. Although heterozygous RFX1(+/-) mice appear normal and fertile, homozygous RFX1(-/-) embryos died at an early stage (most likely before embryonic day 2.5). Our results indicate that RFX1 regulates expression of genes that are essential for early embryonic development/survival and that RFX1 function cannot be compensated for by other RFX family members.
abstract_id: PUBMED:7799317 Effects of sensory stimuli on the incidence of fetal resorption in a murine model of spontaneous abortion: the presence of an alien male and postimplantation embryo survival. Pregnancy outcome may be altered by both genetic and environmental factors. The mating of CBA/J female mice with DBA/2 males normally results in pregnancies characterized by a relatively high incidence of early embryo loss compared with most other syngeneic or allogeneic matings. This study addressed the role of normal laboratory stress in the induction of early embryo loss. The previously studied 'Bruce effect' describes the total loss of preimplantation embryos (pregnancy block) that is apparently caused by the stress induced by the presence of an alien male and mediated by neuroimmunological effects on prolactin activity. To determine whether this effect could be responsible for the high incidence of postimplantation embryo losses in the CBA/J x DBA/2 model, the original DBA/2 male was replaced on day 6 of gestation by another DBA/2 male, a CBA/J, a C57Bl/6 or a BALB/c male. The relatively high incidence of embryo loss was not affected by removing the original DBA/2 male or introducing another DBA/2 or a CBA/J male, indicating that stress induced by an alien male did not increase the postimplantation losses in this model. Furthermore, the introduction of a DBA/2 male to a CBA/J female that had been mated with a BALB/c male did not elicit early embryo loss. However, the replacement of the original DBA/2 male by a BALB/c male dramatically reduced the incidence of early embryo loss in pregnant CBA/J female mice. The introduction of a C57Bl/6 male also reduced embryo loss but to a lesser extent. (ABSTRACT TRUNCATED AT 250 WORDS) Answer: The role of the male factor in early recurrent embryo loss has been investigated, and there is evidence suggesting that antioxidants may have a beneficial effect. A study described in PUBMED:18829003 found that increasing antioxidant intake in men with high levels of DNA damage or lipid peroxidation improved gestational outcomes in couples with a history of recurrent embryo loss. Specifically, male partners with increased DNA fragmentation index (%DFI) or high thiobarbituric acid reactive substances (TBARS) were instructed to consume a diet rich in antioxidants or commercial multivitamins containing beta-carotene, vitamin C, vitamin E, and zinc for at least 3 months. The study reported that all couples whose male partners accepted antioxidant supplementation achieved a successful pregnancy, demonstrating the benefits of increased intake of antioxidants in men with high levels of sperm DNA fragmentation or lipid peroxidation. This finding supports the hypothesis that genetic or epigenetic alterations in sperm, such as altered chromatin packing, mistakes in imprinting, absence or alteration of the centrosome, telomeric shortening, and absence of sperm RNA, could affect functions leading to early embryo loss (PUBMED:18077859). Therefore, interventions that target the male factor, such as antioxidant supplementation, could potentially improve the chances of a successful pregnancy by protecting the structural and functional integrity of spermatozoa, which is crucial for fertilization and embryo development.
Instruction: Does percent body fat predict outcome in anorexia nervosa? Abstracts: abstract_id: PUBMED:17541059 Does percent body fat predict outcome in anorexia nervosa? Objective: The goal of this study was to investigate the relationship of body composition and neuroendocrine levels with clinical outcome in women with anorexia nervosa in a relapse-prevention trial. Method: Body composition and fasting cortisol and leptin levels were assessed before random assignment in 32 weight-recovered subjects with anorexia nervosa from the New York site of the Fluoxetine to Prevent Relapse in Women With Anorexia Nervosa trial. Clinical outcome at the end of study participation was defined using modified Morgan-Russell criteria (full, good, fair, poor), then dichotomized into treatment "success" or "failure." Results: In a binary logistic regression model examining the effect of percent body fat, body mass index, anorexia nervosa subtype, waist-to-hip ratio, and serum cortisol and leptin levels on treatment outcome, only percent body fat was significantly associated with outcome. Conclusions: In recently weight-restored women with anorexia nervosa, lower percent body fat was associated with poor long-term outcome. abstract_id: PUBMED:20127939 Percent body fat is a risk factor for relapse in anorexia nervosa: a replication study. Objective: We aimed to replicate and extend our previous findings of an association between percent body fat in recently weight-restored patients with Anorexia Nervosa (AN) and clinical outcome in the year following treatment. Method: Twenty-two hospitalized, weight-restored women with AN underwent whole body MRI to determine percent adipose tissue. Following hospital discharge, patients were contacted regularly, and at the end of the year, clinical outcome was determined using modified Morgan-Russell (MR) criteria: full, good, fair or poor. Identical to our previous study, outcome was dichotomized into "full, good or fair" and "poor" groups. Results: Data from 21 subjects were available for analysis. Percent body fat was significantly lower in the poor outcome group (22% ± 5%) compared to the "full, good or fair" outcome group (27% ± 4%) (p < 0.035). To further examine the relationship, we combined data from the current study with data from the 26 subjects on whom we had previously reported. Univariate analysis of variance (ANOVA) demonstrated a significant difference in percent adipose tissue across the four MR outcome groups (F = 3.416, df = 3, p < 0.03). Discussion: Lower percent adipose tissue after short-term weight normalization is associated with poor clinical outcome in the year following inpatient treatment. These findings may be important in the assessment of risk for relapse in patients with AN. abstract_id: PUBMED:26419195 Body mass index, body fat and risk factor of relapse in anorexia nervosa. Background/objectives: Preliminary studies have reported an association between poor long-term outcome and lower total body fat percentage in weight-restored inpatients treated for anorexia nervosa (AN). A possible link between poor long-term outcome and higher trunk fat percentage has also been hypothesized. The aim was to assess the association between percentage and distribution of body fat at inpatient discharge and the maintenance of normal weight at 1-year follow-up in a sample of weight-restored females with AN.
Subjects/methods: Fifty-four short-term weight-restored (body mass index (BMI; in kg/m²) ≥18.5) adult females with AN treated in a specialist inpatient unit underwent dual-energy X-ray absorptiometry to determine total body fat and trunk fat percentages. Patients were contacted regularly following discharge, and at the end of the year clinical outcome was dichotomized as either 'full, good or fair' (a group that includes individuals with a BMI ≥18.5 kg/m²) or 'poor' (BMI < 18.5 kg/m²), using the modified Morgan-Russell criteria. Results: No significant differences were found between 'full, good or fair' and 'poor' outcome groups in either total body fat or trunk fat percentages. Only lower BMI at inpatient discharge was associated with poor clinical outcome in the year following inpatient treatment. Conclusions: In short-term weight-restored adult females with AN, BMI, but not body fat percentage or distribution, at inpatient discharge is associated with long-term normal weight maintenance. abstract_id: PUBMED:33185933 The association between leptin and weight maintenance outcome in anorexia nervosa. Objective: Relapse after weight restoration in anorexia nervosa (AN) is a critical problem. Higher body fat percentage after weight gain has been shown to predict better weight maintenance outcome. Leptin, a fat-derived hormone, has been associated with progress during weight gain, but its association with weight maintenance is unknown. This study aims to determine whether leptin levels after weight restoration in AN are associated with weight maintenance. Method: Participants were 41 women with AN hospitalized for inpatient treatment. Participants were evaluated 2-4 weeks after weight restoration to body mass index (BMI) ≥ 19.5 kg/m² for plasma leptin and body composition. Weight maintenance outcome was defined by whether a participant maintained a BMI of at least 18.5 kg/m² at the end of 1 year following hospital discharge. Results: Twenty (48.8%) out of 41 patients maintained their weight at 1 year. Percent body fat and leptin were significantly higher in the group who maintained weight (body fat, p = .004, Hedges' g = 0.944; log-leptin, p = .010, Hedges' g = 0.821), but there were no differences in predischarge BMI, duration of illness, and duration of amenorrhea. Using regression modeling, only higher log-leptin (pWald = .021) and percent body fat (pWald = .010), as well as fat-adjusted leptin (pWald = .029), independently predicted weight maintenance at 1 year. Discussion: Our findings suggest that for acutely weight-restored women with AN, higher predischarge leptin measurements are associated with better outcome in the year following treatment. Prospective studies examining leptin as well as other parameters of metabolic health could offer insights into biomarkers that may improve clinical outcomes. abstract_id: PUBMED:30528301 Predicting Menstrual Recovery in Adolescents With Anorexia Nervosa Using Body Fat Percent Estimated by Bioimpedance Analysis. Objective: To identify the threshold of total body fat percentage (TBF%) required for the resumption of menses (ROM) in hospitalized female adolescents with anorexia nervosa (AN) using bioimpedance analysis (BIA). Methods: All female adolescents hospitalized with AN in our medical center were evaluated in a longitudinal prospective study during the years 2012-2017. Anthropometric data, body fat measured by BIA, and hormonal determinants were collected periodically, in addition to routine medical and gynecological assessments.
Results: Sixty-two participants presented with secondary amenorrhea, of whom 20 remained with amenorrhea and 42 had ROM during hospitalization. At discharge, participants with ROM regained significantly more weight and had higher mean body mass index (BMI), BMI standard deviation scores, and TBF% than those who remained with amenorrhea. Receiver operating characteristic analysis identified that a TBF% of 21.2% had the highest discriminative ability for ROM (sensitivity = 88%, specificity = 85%, positive predictive value = 93%). Compared with the anthropometric parameters, TBF% had the highest area under the curve (AUC = .895), which significantly differed from that of BMI standard deviation scores (AUC = .643, p = .007) and body weight (AUC = .678, p = .03). Conclusions: BIA is a safe and relatively simple method to assess the TBF% required for the return of balanced menstrual cycles in female adolescents with AN. The TBF% with the highest discriminative ability for menstrual resumption as assessed by BIA is 21.2%. abstract_id: PUBMED:25251296 Anorexia nervosa and body fat distribution: a systematic review. The aim of this paper was to conduct a systematic review of body fat distribution before and after partial and complete weight restoration in individuals with anorexia nervosa. Literature searches, study selection, method development and quality appraisal were performed independently by two authors, and data were synthesized using a narrative approach. Twenty studies met the inclusion criteria and were consequently analyzed. The review had five main findings. First, during anorexia nervosa adolescent females lose more central body fat, while adult females lose more peripheral fat. Second, partial weight restoration leads to greater fat mass deposition in the trunk region than other body regions in adolescent females. Third, after short-term weight restoration, whether partial or complete, adults show a central adiposity phenotype with respect to healthy age-matched controls. Fourth, central fat distribution is associated with increased insulin resistance, but does not adversely affect eating disorder psychopathology or cause psychological distress in female adults. Fifth, the abnormal central fat distribution seems to normalize after long-term maintenance of complete weight restoration, indicating that preferential central distribution of body fat is a transitory phenomenon. However, a discrepancy in the findings has been noted, especially between adolescents and adults; besides age and gender, these discrepancies appear to be related to differences in the methodology and timing of body composition assessments. The PROSPERO Registry-Anorexia Nervosa and Body Fat Distribution: A Systematic Review (CRD42014008738). abstract_id: PUBMED:10441243 Body fat, caloric intake, and plasma leptin levels in women with anorexia nervosa. Objective: To measure serum leptin concentrations in women with anorexia nervosa (AN) and to identify independent predictors of leptin levels, thereby allowing hormone levels to be modeled using regression analysis. Methods: A cross-sectional study of 15 AN inpatients and 16 healthy control subjects. Age, height, weight, percent body fat, total caloric intake, fat intake, and fasting plasma leptin levels were recorded. Stepwise forward regression analysis was carried out to identify predictors of leptin levels. Results: Circulating leptin concentrations in AN were 3.5 ± 0.5 versus 7.6 ± 1.2 ng/ml in control subjects.
Percent body fat correlated best with leptin levels in anorexic and control subjects (r = .63; p = .0002), with caloric intake showing only a moderate correlation (r = .47; p = .008). Only percent body fat was a significant predictor of plasma leptin levels in our regression model. Discussion: Leptin levels are proportionately lower in AN as compared to healthy controls. It appears that physiological regulation of plasma levels with respect to percent body fat is maintained in individuals with little body fat. abstract_id: PUBMED:24613094 Percentage body fat by dual-energy X-ray absorptiometry is associated with menstrual recovery in adolescents with anorexia nervosa. Purpose: To evaluate mediators of resumption of menses (ROM) in adolescents with anorexia nervosa (AN). Methods: Anthropometrics, body composition by dual-energy X-ray absorptiometry, hormonal studies, and responses to mental health screens were obtained at 6-month intervals for 18 months in 37 adolescents with AN randomized to the placebo arm of a double-blind treatment trial. Outcomes were compared between subjects with menstrual recovery and those without. Results: Twenty-four subjects (65%) had ROM. Higher percentage body fat was associated with ROM (odds ratio, 1.19; 95% confidence interval, 1.06, 1.33; p < .01), as were body mass index and percent median body weight. Estradiol ≥30 ng/mL alone did not predict menses (p = .08) but was associated with ROM when coupled with percent mean body weight (odds ratio, 2.49; 95% confidence interval, 1.09, 5.65; p = .03). Changes in leptin, cortisol, and mental health were not associated with return of menses. Conclusions: Percentage body fat may be an additional, useful clinical assessment to follow in caring for adolescents with AN. abstract_id: PUBMED:31904201 The Importance of Restoring Body Fat Mass in the Treatment of Anorexia Nervosa: An Expert Commentary. Anorexia nervosa is a severe mental disorder that is characterised by dietary restriction, low weight and widespread endocrine abnormalities. Whilst the importance of weight restoration has been recognised in recent guidelines, the significance of normalising body fat mass has received less attention. A recent systematic review and meta-analysis found that a minimum of 20.5% body fat mass is necessary for regular menses in women with anorexia nervosa of reproductive age. This has significant implications for both treatment and research. It is important to help the patient and carers understand that a certain level of body fat percentage is essential for optimal health, such as the return of menstruation. Further research is needed into how best to use this information to help motivation to change as part of treatment. The benefit of the return of menstruation goes beyond improved fertility: it signals the normalisation of sexual hormones, which have a widespread impact on the body and multiple pathways in the brain. Given the complex functions of adipocytes in various organs of the body, the metabolic effects of the normal body fat tissue should not be underestimated. Further research is needed to elucidate the mechanisms behind the link between minimum body fat mass, menstruation, bone and brain health in anorexia nervosa. abstract_id: PUBMED:10728155 Randomized trial of measures of body fat versus body weight in the treatment of anorexia nervosa.
Objective: To determine whether treatment goals based on body weight or body fat assessed by anthropometrics caused a larger reduction in the Eating Disorder Examination (EDE) over 3 months during the treatment of anorexia nervosa (AN). Methods: Randomized trial comparing the use of body weight versus measures of body fat by anthropometrics, with the EDE as the outcome measure. Of 23 patients enrolled, 11 subjects who met diagnostic criteria for AN (DSM-IV) and received outpatient treatment from the Eating Disorders clinic of a university teaching hospital completed the study. Results: Comparison of the 2 methods by independent t-tests, verified by Mann-Whitney and Sign tests, suggests no difference (p = 0.33). Due to the small sample size, statistical significance could not be assured. A cross-sectional survey of 48 clinic users found the majority of patients preferred to be followed by skinfold measurements and felt they were more helpful than weight. Discussion: Regardless of which measurement is used, if the clinic staff firmly believe that a method is more helpful and more useful, it is likely that the patients will find it acceptable. Answer: Yes, percent body fat does predict outcome in anorexia nervosa. Several studies have found a significant association between percent body fat and clinical outcomes in individuals with anorexia nervosa. For instance, a study found that in recently weight-restored women with anorexia nervosa, lower percent body fat was associated with poor long-term outcomes (PUBMED:17541059). This finding was replicated in another study where lower percent adipose tissue after short-term weight normalization was associated with poor clinical outcomes in the year following inpatient treatment (PUBMED:20127939). Moreover, a study that assessed the association between percentage and distribution of body fat at inpatient discharge and the maintenance of normal weight at 1-year follow-up found that BMI, but not body fat percentage or distribution, at inpatient discharge is associated with long-term normal weight maintenance (PUBMED:26419195). However, another study indicated that higher predischarge leptin measurements, which are associated with higher body fat percentage, are associated with better outcomes in the year following treatment for acutely weight-restored women with anorexia nervosa (PUBMED:33185933). Additionally, research has shown that a certain threshold of total body fat percentage is required for the resumption of menses in hospitalized female adolescents with anorexia nervosa, suggesting that body fat percentage is an important factor in recovery (PUBMED:30528301). A systematic review also highlighted that body fat distribution normalizes after long-term maintenance of complete weight restoration, indicating that the distribution of body fat is a transitory phenomenon during the course of anorexia nervosa (PUBMED:25251296). In summary, percent body fat is a significant predictor of clinical outcomes in anorexia nervosa, with lower percent body fat being associated with poorer outcomes, and higher percent body fat, as well as higher leptin levels, being associated with better weight maintenance and recovery indicators such as the resumption of menses (PUBMED:17541059, PUBMED:20127939, PUBMED:26419195, PUBMED:33185933, PUBMED:30528301, PUBMED:25251296).
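As a check on the menstrual-recovery figures above (PUBMED:30528301), the reported positive predictive value of the 21.2% body-fat threshold follows from sensitivity, specificity, and the cohort's prevalence of resumption of menses. Below is a minimal sketch, assuming the stated cohort split of 42 ROM and 20 persistent amenorrhea; the expected cell counts are derived, not taken from the paper.

```python
# Bayes-style PPV check for the 21.2% TBF% threshold (PUBMED:30528301).
sens, spec = 0.88, 0.85      # reported sensitivity and specificity
n_rom, n_amen = 42, 20       # cohort: resumption of menses vs. amenorrhea

tp = sens * n_rom            # expected true positives above the threshold
fp = (1 - spec) * n_amen     # expected false positives above the threshold
ppv = tp / (tp + fp)

print(f"PPV = {ppv:.1%}")    # ~92.5%, i.e. the reported 93% once the
                             # rounded sensitivity/specificity are accounted for
```

The small gap to the published 93% comes from the abstract reporting sensitivity and specificity only to whole percentage points.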
Instruction: Do surgeons have a role as infectious disease consultants? Abstracts: abstract_id: PUBMED:8790171 Do surgeons have a role as infectious disease consultants? Background: Antibiotic drug restriction policies have become widespread in North American hospitals, mandating medical infectious disease consultation for use of routine antimicrobial agents. Objective: To determine physician and institutional attitudes regarding the credibility of surgeons in the area of infectious disease. Design: A survey of the Surgical Infectious Disease Society (SIS) membership. Participants: Members of the SIS practicing in the United States and Canada (n = 464) were sent a simple opinion poll regarding surgical infectious disease specialists. After receiving the survey, the SIS members were given approximately 4 weeks to anonymously complete the questionnaire and return it to our office. Results: Responses were received from 198 SIS members (43%). Most were from large (> 400 bed) institutions (63%), and the majority of them were from university centers (76%). Predictably, 86% of the respondents were in full-time academic practice. Some SIS respondents (21%) had undergone special training in infectious disease, and of these, 63% actually had completed a formal surgical infectious disease fellowship. Antibiotic restriction policies were nearly universal and required in 87% of institutions. Only 44% of surgeons in these hospitals, however, were privileged to release antibiotic drugs. Medical infectious disease physicians recognized surgical expertise in infectious disease in few instances (32%). The great majority of responding SIS members (81%) believed that an examination or certification in infectious disease for surgeons was not warranted. Conclusions: Antibiotic drug restriction is prevalent in North America, and medical infectious disease specialists do not generally recognize surgical expertise in the area of infectious disease. Despite this environment, responding SIS members believe that special credentialing of surgeons in the area of infectious disease is unnecessary. abstract_id: PUBMED:15034823 Role of infectious diseases consultants in management of antimicrobial use in hospitals. The Infectious Diseases Society of America Emerging Infections Network (EIN) surveyed its members to characterize antimicrobial restriction policies in their hospitals and the involvement of infectious diseases consultants in this process. Of the 502 respondents (73%), 250 (50%) indicated that their hospital pharmacies would not dispense certain antimicrobials without approval of infectious diseases consultants. Moreover, 89% agreed that infectious diseases consultants need to be directly involved in the approval process. At hospitals with control policies, commonly restricted agents included lipid formulations of amphotericin B, carbapenems, fluoroquinolones, piperacillin-tazobactam, and vancomycin. Only 46 EIN members (18%) reported remuneration of infectious diseases consultants for participation in the approval process. Pediatric infectious diseases consultants were more likely to practice in hospitals with restriction policies than were adult infectious diseases consultants (64% vs. 45%; P<.001). Similarly, teaching hospitals were more likely to have antimicrobial-control policies than were nonteaching facilities (60% vs. 17%; P<.001).
abstract_id: PUBMED:37170315 The lasting impact of COVID-19 on surgical training from the perspective of surgical residents and consultants in Saudi Arabia: a nationwide cross-sectional study. Background: Since the start of the COVID-19 pandemic, many precautionary measures have been set to curb the transmission of the virus. That has led to changes, most notably in surgical education, such as reduced surgical exposure and clinical activity. However, the question of how great an impact the COVID-19 pandemic has had on surgical education remains unanswered. Materials & Methods: An electronic survey was distributed among surgical residents and consultants from all over Saudi Arabia, from 6 to 21 July 2021. Descriptive statistics were presented using counts and proportions (%). Study subjects' perspectives during the COVID-19 pandemic were compared using the Chi-square test. A p-value cut-off point of 0.05 at 95% CI was used to determine statistical significance. Results: A total of 243 out of 500 surgical residents and consultants responded to the survey, giving a response rate of 48.6%. The majority were general surgeons (50.5%) and cardiothoracic surgeons (21.8%). Nearly 66% of surgeons, both residents and consultants, strongly agreed on the importance of training for infectious disease outbreaks. 44.7% of the consultants and 48% of the residents showed their willingness to respond to the pandemic regardless of its severity. Over 70% of surgeons agreed that developing clinical skills was compromised by the COVID-19 pandemic, and 40% expected a negative impact of COVID-19 on their operative skills. Simulation was ranked best for disaster medicine training by over 77% of the respondents. The most common concern among surgeons during the COVID-19 pandemic was their family's health and safety. Regarding virtual curriculum components, online practice questions and surgical videos were preferred by surgical consultants and residents, respectively. Conclusions: Although the COVID-19 pandemic has impacted surgical education, it has also highlighted the pressing need to adopt new training components. For surgical training programs, we recommend improving the virtual curriculum, incorporating disaster medicine training, providing psychological services, and prioritizing immunization and treatment access for surgeons' families. abstract_id: PUBMED:8228314 The present and future role of the consultant physician with an interest in infection. Council of the British Society for the Study of Infection. The Council of the British Society for the Study of Infection (BSSI) has considered the present and future role of the Consultant Physician with an interest in infection. The BSSI is anxious to set standards and improve the quality of care delivered to patients with infection; to give advice on the prevention and management of infection and to provide such services economically to assist Health Managers in the purchase of infection-related medical services appropriate to the needs of the community they serve. New approaches to the care of infected patients are reviewed where the Clinical Specialist, Microbiologist and Consultant in Communicable Disease Control (CCDC)--in Scotland, Consultant in Public Health Medicine (CPHM)--work together in a team. As the U.K.
moves closer to its European partners in 1992, it seems timely for each specialty to make a clear statement about its training programme and the requirements expected of a fully trained Consultant in each discipline. Comparisons of the ratio of Consultants with a special interest in infection to the population are made, where known, between the U.K. and other European countries. A model job description for the U.K. clinical specialist is therefore being made available; the present inadequate numbers of consultants are stated, and recommendations are made to urgently expand the number of Consultant Physicians with an interest in infection. abstract_id: PUBMED:12971666 Impact of national consultants on successful expansion of effective tuberculosis control in India. Setting: India, during a period of rapid expansion of DOTS services. DOTS expansion has been slow in many countries. Objective: To document use of consultants to expand DOTS effectively. Design: Staff were contracted to monitor DOTS expansion and implementation. To estimate the impact of these staff, we compared areas with and without consultants, and individual areas before and after consultants were assigned. Consultants were preferentially assigned to the more difficult areas; the temporary absence of consultants reflected non-availability of candidates. Results: Areas with consultants met pre-defined criteria and began DOTS service delivery faster (median 9 vs. 18 months of preparation) than areas without consultants. Rates of sputum conversion (87% vs. 83%, P < 0.001) and treatment success (83% vs. 78%, P < 0.001) were significantly higher in areas with consultants present. Conclusion: Assignment of consultants resulted in much more rapid implementation of the DOTS strategy, and better quality performance. Continued effective performance in these areas will rely on many factors, but the need for consultants appears to be decreasing, suggesting that they have provided sustainable improvements. The effectiveness of local consultants may have important implications for efforts to scale up public health interventions for tuberculosis, malaria, AIDS and other diseases in developing countries. abstract_id: PUBMED:28949838 The interactions of Canadian ethics consultants with health care managers and governing boards during times of crisis. Background: Health care organizations can be very complex, and are often the setting for crisis situations. In recent years, Canadian health care organizations have faced large-scale systemic medical errors, a nation-wide generic injectable drug shortage, iatrogenic infectious disease outbreaks, and myriad other crises. These situations often have an ethical component that ethics consultants may be able to address. Organizational leaders such as health care managers and governing boards have responsibilities to oversee and direct the response to crisis situations. This study investigates the nature and degree of involvement of Canadian ethics consultants in such situations. Methods: This qualitative study used semi-structured interviews with Canadian ethics consultants to investigate the nature of their interactions with upper-level managers and governing board members in health care organizations, particularly in times of organizational crisis. We used a purposive sampling technique to identify and recruit ethics consultants throughout Canada. Results: We found variability in the interactions between ethics consultants and upper-level managers and governing boards.
Some ethics consultants we interviewed did not participate in managing organizational crisis situations. Most ethics consultants reported that they had assisted in the management of some crises and that their participation was usually initiated by managers. Some ethics consultants reported the ability to bring issues to the attention of upper-level managers and indirectly to their governing boards. The interactions between managers and ethics consultants were characterized by varying degrees of collegiality. Ethics consultants reported participating in or chairing working groups, participating in incident management teams, and developing decision-making frameworks. Conclusions: Canadian ethics consultants tend to believe that they have valuable skills to offer in the management of organizational crisis situations. Most of the ethics consultants we interviewed believed that they play an important role in this regard. abstract_id: PUBMED:17581155 Bacterial infections associated with blood transfusion: experience and perspective of infectious diseases consultants. Background: On March 1, 2004, the AABB adopted a new standard that requires member blood banks and transfusion services to implement measures to limit and detect bacterial contamination in all platelet (PLT) components. The AABB has since developed several guidelines to assist blood transfusion services and blood banks in this area, some of which are relevant to clinical practice. Knowledge and experience among clinicians (including infectious disease consultants, who can play an important role in managing patients with sepsis) concerning risk of bacterial infections associated with transfusion, however, are unknown. Study Design And Methods: Experience concerning management and prevention of transfusion-associated bacterial infection, including knowledge of the AABB standard requiring bacterial screening of PLTs, was assessed through an Infectious Diseases Society of America Emerging Infections Network (IDSA/EIN) survey. Results: Overall, 405 (47%) EIN members responded to the survey; of those responding, 12 percent had encountered transfusion reactions potentially due to bacterial contamination in the prior 10 years, 36 percent were aware of the transmission risk of bacteria through blood transfusion, and 20 percent were aware of the new AABB standard for bacterial screening of PLTs. Conclusions: Understanding by EIN infectious disease consultants of the significance of transfusion-associated bacterial infection and associated AABB standards and guidelines may indicate lack of other clinicians' awareness of these issues. Improving awareness of the risk of bacterial contamination of PLTs appears warranted to improve clinical management of infected blood donors or recipients, particularly when follow-up for transfusion of a culture-positive PLT unit is needed. abstract_id: PUBMED:17051494 Experience of infectious diseases consultants with outpatient parenteral antimicrobial therapy: results of an emerging infections network survey. Background: Despite the increasing use of outpatient parenteral antimicrobial therapy (OPAT), little is known about the role of infectious diseases consultants in the process or their perceptions of OPAT. Methods: In May 2004, the Infectious Diseases Society of America Emerging Infections Network (EIN) surveyed its members to characterize their involvement and experiences with OPAT.
Results: Of the 454 respondents (54%) who completed the questionnaire, 426 (94%) indicated that patients in their primary inpatient facility were "frequently" discharged while receiving OPAT, estimating that, on average, 19 patients are discharged from their hospitals while receiving OPAT each month. Although 86% of EIN members stated that they personally order OPAT for some patients, 18% indicated that they have no involvement, and 37% stated they only rarely or occasionally oversee OPAT. EIN members involved in OPAT estimated that approximately 90% of their patients on OPAT received therapy at home, and the members described variable monitoring and oversight methods. Collectively, 68% of the responding providers estimated that they had encountered 1951 infectious and serious noninfectious complications of OPAT in the past year. The most frequently used antibiotics included vancomycin, ceftriaxone, and cefazolin, most commonly used for bone and joint infections. Conclusions: These results testify to the pervasive use of OPAT in today's health care system, the variable role of infectious diseases consultants, and the heterogeneity in oversight and management practices. The widespread use of OPAT and its frequent complications indicate the need for additional studies to establish optimal methods of delivery and management to ensure the quality and safety of the process. abstract_id: PUBMED:32683969 An International Group of Hepatopancreaticobiliary Surgeons Respond to the COVID-19 Pandemic. An informal workgroup of 9 hepatopancreaticobiliary (HPB) surgeons from 6 different countries on 4 continents shifted the focus of their quarterly tumor board discussions to their responses to the current COVID-19 pandemic. Just as they had done when discussing HPB cases, they shared their experiences and ideas in dealing with the outbreak that faced their hospitals and communities. Their efforts to improve care proved that professionalism in surgery, like the global pandemic, has no boundaries. abstract_id: PUBMED:32505456 A nationwide survey of UK cardiac surgeons' view on clinical decision making during the coronavirus disease 2019 (COVID-19) pandemic. Background: No firm recommendations are currently available to guide decision making for patients requiring cardiac surgery during the coronavirus disease 2019 (COVID-19) pandemic. Systematic appraisal of senior surgeons' consensus can be used to generate interim recommendations until data from clinical observations become available. Hence, we aimed to collect and quantitatively appraise nationwide UK consultants' opinions on clinical decision making for patients requiring cardiac surgery during the COVID-19 pandemic. Methods: We E-mailed a Web-based questionnaire to all consultant cardiac surgeons through the Society for Cardiothoracic Surgery in Great Britain and Ireland mailing list on April 17, 2020, and we predetermined to close the survey on April 21, 2020. This survey was primarily designed to gather information on UK surgeons' opinions using 12 items. Results: A total of 86 consultant surgeons undertook the survey. All UK cardiac units were represented by at least 1 consultant.
Strong consensus was achieved for the following key questions: (1) before any hospital admission for cardiac surgery, nasopharyngeal swab, polymerase chain reaction, and computed tomography of the chest should be performed; (2) the use of full personal protective equipment should be adopted in every case by the theater team regardless of the patient's COVID-19 status; (3) the risk of COVID-19 exposure for patients undergoing heart surgery should be considered moderate to high and likely to increase mortality if it occurs; and (4) cardiac procedures should be decided based on a rapidly convened multidisciplinary team discussion for every patient. The majority believed that both aortic and mitral surgery should be considered in selected cases. The role of coronary artery bypass graft surgery during the pandemic was controversial. Conclusions: In this unprecedented pandemic period, this survey provides information for generating interim recommendations until data from clinical observations become available. Answer: The role of surgeons as infectious disease consultants is a topic that has been explored in various studies. According to a survey of the Surgical Infectious Disease Society (SIS) membership, antibiotic drug restriction policies are widespread in North American hospitals, often requiring medical infectious disease consultation for the use of routine antimicrobial agents. However, medical infectious disease physicians recognized surgical expertise in infectious disease in only a few instances (32%). Despite this, the majority of responding SIS members (81%) believed that special credentialing of surgeons in the area of infectious disease is unnecessary (PUBMED:8790171). In contrast, the Infectious Diseases Society of America Emerging Infections Network (EIN) survey indicated that infectious diseases consultants play a crucial role in the management of antimicrobial use in hospitals, with 50% of respondents stating that their hospital pharmacies would not dispense certain antimicrobials without approval from infectious diseases consultants. Furthermore, 89% agreed that infectious diseases consultants need to be directly involved in the approval process (PUBMED:15034823). The COVID-19 pandemic has also highlighted the importance of training for infectious disease outbreaks, with 66% of surgeons in Saudi Arabia strongly agreeing on its importance. The pandemic has affected surgical education and training, emphasizing the need for incorporating disaster medicine training and improving virtual curricula (PUBMED:37170315). In the UK, the British Society for the Study of Infection has considered the role of the Consultant Physician with an interest in infection, emphasizing the need to set standards and improve the quality of care for patients with infection (PUBMED:8228314). In summary, while surgeons may not traditionally be seen as infectious disease consultants, there is recognition of their role in antimicrobial management and the importance of infectious disease training, particularly in light of the COVID-19 pandemic. However, the extent to which surgeons are involved in infectious disease consultation varies, and there is a call for improved recognition and involvement of surgeons in this area.
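Several results in this item rest on comparisons of two proportions, e.g. the DOTS evaluation above reporting sputum conversion of 87% vs. 83% (P < 0.001, PUBMED:12971666). Below is a minimal sketch of such a test; the group sizes are hypothetical, chosen only for illustration, since the abstract does not report denominators.

```python
from scipy.stats import chi2_contingency

# Hypothetical denominators (not given in PUBMED:12971666); the reported
# 87% vs. 83% sputum conversion rates are applied to these group sizes.
n_with, n_without = 5000, 5000
conv_with = round(0.87 * n_with)
conv_without = round(0.83 * n_without)

table = [
    [conv_with, n_with - conv_with],           # areas with consultants
    [conv_without, n_without - conv_without],  # areas without consultants
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")       # p << 0.001 at these sample sizes
```

The same machinery covers the chi-square comparisons mentioned in the Saudi training survey (PUBMED:37170315); whether a 4-percentage-point difference reaches P < 0.001 depends entirely on the unreported sample sizes.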
Instruction: Central pancreatectomy for benign pancreatic pathology/trauma: is it a reasonable pancreas-preserving conservative surgical strategy alternative to standard major pancreatic resection? Abstracts: abstract_id: PUBMED:17054548 Central pancreatectomy for benign pancreatic pathology/trauma: is it a reasonable pancreas-preserving conservative surgical strategy alternative to standard major pancreatic resection? Background: The aim of this study was to assess the technical feasibility, safety and outcome of central pancreatectomy (CP) with pancreaticogastrostomy or pancreaticojejunostomy in appropriately selected patients with benign central pancreatic pathology/trauma. Benign lesions/trauma of the pancreatic neck and proximal body pose an interesting surgical challenge. CP is an operation that allows resection of benign tumours located in the pancreatic isthmus that are not suitable for enucleation. Methods: Between January 2000 and December 2005, eight central pancreatectomies were carried out. There were six women and two men with a mean age of 35.7 years. The cephalic pancreatic stump is oversewn and the distal stump is anastomosed end-to-end with a Roux-en-Y jejunal loop in two patients and with the stomach in six. The indications for CP were: non-functional islet cell tumours in two patients, traumatic pancreatic neck transection in two, and one each for insulinoma, solid pseudopapillary tumour, splenic artery pseudoaneurysm and pseudocyst. Pancreatic exocrine function was evaluated by a questionnaire method. Endocrine function was evaluated by blood glucose level. Results: Morbidity rate was 37.5% with no operative mortality. Mean postoperative hospital stay was 10.5 days. None of the patients developed a pancreatic fistula or required reoperation or interventional radiological procedures. At a mean follow-up of 26.4 months, no patient had evidence of endocrine or exocrine pancreatic insufficiency, and all the patients were alive and well without clinical and imaging evidence of disease recurrence. Conclusion: When technically feasible, CP is a safe, pancreas-preserving pancreatectomy for non-enucleable benign pancreatic pathology/trauma confined to the pancreatic isthmus that allows cure of the disease without loss of a substantial amount of normal pancreatic parenchyma, with preservation of exocrine/endocrine function and without interruption of enteric continuity. abstract_id: PUBMED:30386905 Indications and outcomes of duodenum-preserving resection of the pancreatic head in children. Aim Of Study: Duodenum-preserving resection of the pancreatic head (DPRPH) with Roux-en-Y pancreatojejunostomy is a procedure used to remove focal pathological lesions of the pancreatic head. Although predominantly used in adult patients, it is both safe and effective in children. The aim of this study was to review our experience with this procedure, with focus on its indications, complications and long-term outcomes. Methods: A retrospective analysis of pediatric patients who underwent DPRPH between 1994 and 2015 was performed. Patient files were reviewed for demographic, diagnostic, operative and histological details, and postoperative complications. Patients were contacted telephonically and sent questionnaires to determine long-term outcomes. Results: The study cohort consists of 21 patients, 14 girls and 7 boys, with an average age of 11.72 years (range 3 months to 18.6 years), who underwent DPRPH with end-to-end anastomosis of the jejunum to the pancreatic body (Roux-en-Y anastomosis).
In four cases the head and also part of the body of the pancreas were resected. In the remaining 17 cases, only the head of the pancreas was resected. Indications for DPRPH were solid pseudopapillary tumor of the pancreas (n = 10), trauma (n = 8), pancreas divisum (n = 1), focal congenital hyperinsulinism (n = 1) and pancreatic cyst (n = 1). The length of follow-up ranged from 1 to 22 years (average 9.66 years). One patient developed a biliary fistula, which closed spontaneously within 2 weeks after stent insertion. A recurrence of abdominal pain was reported in two patients, occurring at 7 months after the operation in one patient and at 1 year in the other. Pancreatic endocrine insufficiency did not occur in any of the 21 patients. Seven patients currently require a low-fat diet, five of whom need pancreatic enzyme supplementation. An additional two patients need enzyme supplementation without dietary restriction. Conclusion: DPRPH is a safe and effective procedure for the treatment of large focal pathological lesions of the pancreatic head in children. As a less invasive procedure than pancreatoduodenectomy, it is more appropriate for the developing child. abstract_id: PUBMED:17448780 Central pancreatectomy with pancreaticogastrostomy in children. Children requiring surgical intervention for pancreatic disease may be at risk long term for exocrine insufficiency and glucose intolerance. Pediatric surgeons must balance the need to perform adequate surgical resection while preserving as much normal pancreatic parenchyma as possible. Neoplasms of the middle pancreatic segment with low malignant potential and isolated trauma to the pancreatic body or neck represent 2 conditions where extensive pancreatic resection is unnecessary. Central pancreatectomy for such lesions is well described in adults. Reconstruction of the distal pancreatic remnant is traditionally performed via Roux-en-Y pancreaticojejunostomy. Pancreaticogastrostomy is an alternative approach that has been used to reconstruct the distal pancreas in adults. Pancreaticogastrostomy offers several technical advantages over pancreaticojejunostomy. Because children may be uniquely susceptible to the long-term consequences of excessive pancreatic resection, 2 cases using this technique of central pancreatectomy with pancreaticogastrostomy are described. abstract_id: PUBMED:28124001 Conservative and surgical management of pancreatic trauma in adult patients. Background: The management of pancreatic trauma is complex. The aim of this study was to report our experience in the management of pancreatic trauma. Methods: All patients hospitalized between 2005 and 2013 for pancreatic trauma were included. Traumatic injuries of the pancreas were classified according to the American Association for Surgery of Trauma (AAST) into five grades. Mortality and morbidity were analyzed. Results: A total of 30 patients were analyzed (mean age: 38±17 years). Nineteen (63%) patients had blunt trauma and 12 (40%) had pancreatic injury ≥ grade 3. Fifteen patients underwent exploratory laparotomy and the other 15 patients had nonoperative management (NOM). Four (13%) patients had a partial pancreatectomy [distal pancreatectomy (n=3) and pancreaticoduodenectomy (n=1)]. Overall in-hospital mortality was 20% (n=6). Postoperative mortality was 27% (n=4/15). Mortality in the NOM group was 13% (n=2/15); in both cases death was due to severe head injury.
Among the patients who underwent NOM, three patients had injury ≥ grade 3, one patient had a stent placement in the pancreatic duct and two patients underwent endoscopic drainage of a pancreatic pseudocyst. Conclusions: Operative management of pancreatic trauma leads to a higher mortality. This is not necessarily related to the pancreatic injury alone; associated injuries, including liver, spleen and vascular trauma, may impair outcome more than the pancreatic injury itself. abstract_id: PUBMED:34364681 Resection versus drainage in the management of patients with AAST-OIS grade IV blunt pancreatic injury: A single trauma centre experience. Introduction: We aimed to compare outcomes of pancreatic resection with those of peripancreatic drainage for American Association for the Surgery of Trauma-Organ Injury Scale (AAST-OIS) grade IV blunt pancreatic injury in order to determine the optimal treatment method. Materials And Methods: Nineteen surgical patients with AAST-OIS grade IV blunt pancreatic injury between 1994 and 2016 were retrospectively studied. Results: Among the 19 patients, 14 were men and 5 were women (median age: 33 years). Twelve patients underwent pancreatic resection (spleen-sacrificing distal pancreatectomy, n = 6; spleen-preserving distal pancreatectomy, n = 3; and central pancreatectomy with Roux-en-Y anastomosis, n = 3), and seven underwent peripancreatic drainage. After comparing these two groups, no significant differences were found in terms of gender, shock at triage, laboratory data, injury severity score, associated injury, length of hospital stay, and complication. The only significant difference was that the duration from injury to surgery was longer in the drainage group than in the resection group (median, 48 hours vs. 24 hours; P = 0.036). In the drainage group, three patients required reoperation, and another three required further pancreatic duct stent therapy. Conclusions: In surgery for grade IV blunt pancreatic injury, pancreatic resection is warranted in early, conclusive MPD injury; if surgery is delayed or MPD injury has not been clearly assessed, peripancreatic drainage is an alternative method. However, peripancreatic drainage alone is not adequate, and further pancreatic duct stenting or reoperation is required. Further studies should be conducted to confirm our conclusions. abstract_id: PUBMED:29589453 Multidisciplinary approach to surgical disorders of the pancreas in children. Introduction: Surgical diseases of the pancreas in children are not common and may be associated with significant morbidity and potential mortality. A multidisciplinary approach is essential for correct diagnosis, surgical strategy and postoperative as well as follow-up care. Method: Retrospective analysis of patients operated on due to a pathological lesion of the pancreas focused on diagnostics, operating procedures, postoperative complications, and long-term results. Between 1991 and 2016, eighty-nine children were treated in our department for a pathologic lesion of the pancreas. Of these, 39 were boys and 50 were girls. Results: Mean age of the patients was 9.3 years (1 month-18.4 years). Patients were followed from the operation to the age of 19, after which they were referred for follow-up to adult specialists.
The indications for surgery were trauma in 34 children, solid pseudopapillary tumor in 23 children, biopsy in 10, hyperinsulinism in 8, chronic pancreatitis in 4, pancreatic cyst in 3, insulinoma in 3, carcinoma in 2, and serous cystadenoma and pancreas divisum in one patient. The most frequent procedures performed on the pancreas were distal pancreatectomy in 35 cases, duodenum-preserving pancreatic head resection in 23 cases, pseudocystogastroanastomosis in 11 cases, 90-95% pancreatic resection in 5 cases, Whipple operation in two cases, Puestow procedure in one case, tumor enucleation in one case, and tumor biopsy for cancer in one case. In 5 patients after major pancreatic injury, ERCP and papillotomy with insertion of a stent into the pancreatic duct was performed. Three patients died: one after polytrauma with severe pancreatic injury and two with pancreatic cancer. Conclusion: Pancreatic surgery in children is not a common operation, and individual as well as institutional experience remains limited. After more than 20 years of experience with pancreatic surgery, we believe that close cooperation with surgeons, pediatric gastroenterologists, radiologists, anesthesiologists, intensivists, pathologists and ERCP specialists is necessary for successful diagnosis and treatment of pancreatic disease in children. Key words: pancreas; pancreatic surgery in children; duodenum-preserving head resection of the pancreas. abstract_id: PUBMED:31164222 Lessons learned from isolated blunt major pancreatic injury: Surgical experience in one trauma centre. Introduction: The aim of this study was to present our surgical experience of isolated blunt major pancreatic injury (IBMPI), and to compare its characteristic outcomes with those of multi-organ injury. Materials And Methods: From 1994-2015, 31 patients with IBMPI and 54 patients with multi-organ injury, who underwent surgery, were retrospectively studied. Results: Of the 31 patients with IBMPI, 22 were male and 9 were female. The median age was 30 years (interquartile range, 20-38). Twenty-one patients were classified as American Association for the Surgery of Trauma-Organ Injury Scale Grade III, and 10 patients as Grade IV. Patients with IBMPI had significantly lower shock-at-triage rates, lower injury severity scores, longer injury-to-surgery time, and shorter length of hospital stay than those with multi-organ injury. There were no statistically significant differences in sex, age, trauma mechanism, laboratory data, surgical procedures, and complications between the two groups. Eight patients with IBMPI underwent endoscopic retrograde pancreatography, and 5 patients with complete major pancreatic duct (MPD) disruption eventually underwent pancreatectomy. The remaining 3 patients had partial MPD injury, and two of them received a pancreatic duct stent for the treatment of an existing postoperative pancreatic fistula. Spleen-sacrificing distal pancreatectomy (SSDP) was performed in 13 patients with IBMPI, followed by spleen-preserving distal pancreatectomy (n = 12), peripancreatic drainage (n = 4), and central pancreatectomy with Roux-en-Y reconstruction (n = 2). The overall complication rates, related to the SSDP, SPDP, peripancreatic drainage, and central pancreatectomy, were 10/13 (77%), 4/12 (33%), 3/4 (75%), and 2/2 (100%), respectively. Three patients died, resulting in a 10% mortality rate, and 16 patients developed intra-abdominal complications, resulting in a 52% morbidity rate.
In the subgroup analysis of the 25 patients who underwent distal pancreatectomy, SPDP was associated with a shorter injury-to-surgery time than SSDP. Conclusions: Patients with IBMPI have longer injury-to-surgery times compared to those with multi-organ injury. Of the distal pancreatectomy patients, the time interval from injury to surgery was a significant factor associated with preserving or sacrificing the spleen. abstract_id: PUBMED:6881731 Staple closure of the pancreas. Since modern surgical stapling instruments have found a wide range of usefulness in gastrointestinal surgery, we have attempted to establish the feasibility of staple rather than suture closure of the transected pancreas. Distal pancreatic resection has been performed in eight dogs utilizing the Auto Suture Model TA-55 surgical staple instrument for primary closure of the transected margin of the remaining pancreas. Serial serum amylase determinations for each of the animals remained within the normal range. Each animal was sacrificed one week after the initial surgery with examination of the remaining pancreas. None of the animals demonstrated any evidence of pancreatic fistula or abscess. A similar technique for distal pancreatic resection has been utilized in 12 patients. Five of the resections were performed for malignancies, four for trauma, and three for benign pancreatic disease. In each case, the residual pancreas was closed with the TA-55 surgical staple instrument. Although two patients died during the postoperative period of causes unrelated to the pancreas, none developed either fistula or other evidence of difficulty in healing of the pancreas. It is our conclusion that staple closure of the transected margin of the pancreas is a reasonable and safe alternative to standard suture closure technique. abstract_id: PUBMED:14254965 PANCREATIC DISEASE IN INFANCY AND CHILDHOOD. SURGICAL IMPLICATIONS. Surgeons whose practice involves many infants and children should be acquainted with all abnormalities of pancreatic malformation and function. Conditions amenable to surgical treatment are few, but serious. Trauma to the pancreas in childhood is most commonly diagnosed by fever, leukocytosis, rectus spasm and elevated serum amylase. Drainage of the lesser sac and debridement of devitalized tissue may prevent the sequelae of pseudocyst formation which seems to follow the untreated injury. True congenital cysts are characterized by an epithelial lining. Mucoviscidosis complicated by meconium ileus remains a challenging disease of the newborn that requires early operation. Ten per cent of infants with cystic fibrosis may be threatened by intestinal obstruction from this cause. Some children surviving the newborn period go on to develop obstruction later. Annular malformation of the pancreas may produce upper intestinal (duodenal) obstructive symptoms immediately after birth. Surgical correction by duodenojejunostomy should be postponed only long enough to correct severe fluid or electrolyte imbalances. Idiopathic spontaneous hypoglycemia has the most serious prognosis if convulsions are allowed to recur. Increased metabolic rates in infants increase the need for control of blood sugar levels by either administration of cortisone or pancreatic resection. If adenoma is the cause, a conservative resection of the tumor suffices.
If serial frozen section fails to reveal either tumor or hypertrophy of insulin-producing cells, blind pancreatectomy may be indicated, for irreversible brain damage develops early in uncontrolled hypoglycemia. abstract_id: PUBMED:31299958 A rare case of central pancreatectomy for isolated complete pancreatic neck transection trauma. Background: Pancreatic trauma accounts for only 0.2% of blunt trauma and 1-12% of penetrating injuries. Injuries to other organs, such as spleen, liver, or kidney, are associated with 50.5% of the cases. The isolated complete traumatic transection of the pancreatic neck is rare. In the past, pancreatoduodenectomy or distal pancreatectomy and splenectomy was the standard of care for patients with traumatic transection of the pancreatic head, duodenum or distal pancreas, and pancreatic neck. However, only a limited number of cases of central pancreatectomy for pancreatic neck injuries have been reported. We present a rare case of a 21-year-old male patient who received central pancreatectomy for isolated complete traumatic transection of the pancreatic neck. Case Presentation: A 21-year-old male patient, who had mild abdominal pain and showed no apparent abnormality on initial abdominal computed tomography (CT), was brought to the local hospital's emergency department after a traffic accident. The patient's abdominal pain became progressively worse during in-hospital observation, which led to referral to our hospital. The patient's vital signs were stable, and a physical examination revealed marked tenderness and rebound pain throughout the abdomen. The patient's white blood cells were increased; the serum amylase and lipase levels were elevated. Abdominal computed tomography revealed pancreatic neck parenchymal discontinuity, peripancreatic effusion, and hemorrhage. The patient underwent exploratory laparotomy. Intraoperative examination identified that the neck of the pancreas was completely ruptured, and no apparent abnormalities were observed in the other organs. The patient underwent central pancreatectomy and Roux-en-Y pancreaticojejunostomy. The patient was treated with antibiotics, acid inhibition and nutritional support for 10 days after surgery. The patient's symptoms were significantly relieved, and white blood cells, serum amylase, and lipase levels returned to normal. The patient underwent follow-up examinations for 6 months with no evidence of exocrine or endocrine insufficiency. Conclusions: Central pancreatectomy is an effective pancreas parenchyma-preserving procedure and may be a promising alternative to distal pancreatectomy and splenectomy for this complex pancreatic trauma in hemodynamically stable patients. Patient selection and surgeon experience are crucial in the technical aspects of this procedure. Answer: Central pancreatectomy (CP) is considered a reasonable pancreas-preserving conservative surgical strategy alternative to standard major pancreatic resection for benign pancreatic pathology/trauma. This is supported by several studies that have assessed the technical feasibility, safety, and outcomes of CP in patients with benign central pancreatic pathology or trauma. One study found that CP, with either pancreaticogastrostomy or pancreaticojejunostomy, was a safe and effective procedure for non-enucleable benign pancreatic pathology/trauma confined to the pancreatic isthmus.
The study reported no operative mortality, and at a mean follow-up of 26.4 months, no patient had evidence of endocrine or exocrine pancreatic insufficiency, and all patients were alive and well without clinical or imaging evidence of disease recurrence (PUBMED:17054548). Another study on children who underwent duodenum-preserving resection of the pancreatic head (DPRPH) with Roux-en-Y pancreatojejunostomy showed that it was a safe and effective procedure for the treatment of large focal pathological lesions of the pancreatic head. The study concluded that DPRPH is more appropriate for the developing child than pancreatoduodenectomy, which is more invasive (PUBMED:30386905). Additionally, a case report presented a rare case of a 21-year-old male patient who received central pancreatectomy for isolated complete traumatic transection of the pancreatic neck. The patient underwent CP and Roux-en-Y pancreaticojejunostomy, and at 6-month follow-up, there was no evidence of exocrine or endocrine insufficiency. The report concluded that CP could be a promising alternative to distal pancreatectomy and splenectomy for complex pancreatic trauma in hemodynamically stable patients (PUBMED:31299958). In summary, CP is a viable option for preserving pancreatic function in patients with benign central pancreatic pathology or trauma, offering a conservative alternative to more extensive resections. It allows for the cure of the disease without loss of substantial amounts of normal pancreatic parenchyma and preserves exocrine and endocrine functions. Patient selection and surgeon experience are crucial for the success of this procedure.
Instruction: Is contact allergy to disperse dyes and related substances associated with textile dermatitis? Abstracts: abstract_id: PUBMED:24348384 Allergic contact dermatitis induced by textile necklace. Allergic contact dermatitis to textile dyes is considered to be a rare phenomenon. A recent review reported a prevalence of contact allergy to disperse dyes between 0.4 and 6.7%. The relevance of positive patch testing was not reported in all studies. Textile dye allergy is easily overlooked and is furthermore challenging to investigate as textile dyes are not labelled on clothing. In this report, we present a case of allergic contact dermatitis to a textile necklace. The patch test showed strong reactions to the necklace and the azo dyes Disperse Orange 1 and Disperse Yellow 3. Despite the European legislation and the reduced use of disperse dyes in Third World countries, disperse azo dyes still induce new cases of allergic contact dermatitis. abstract_id: PUBMED:19067698 Is contact allergy to disperse dyes and related substances associated with textile dermatitis? Background: Disperse dyes (DDs) are the most common sensitizers among textile dyes, but there is little knowledge of the clinical relevance of positive patch test reactions. Objective: To investigate if patient-reported textile-related skin problems can be explained by contact allergy to eight different DDs and/or to chemically related substances, by occupation or by atopic constitution, and if the skin problems are influenced by age or sex. Methods: A questionnaire on textile-related skin problems was answered by 858 of 982 consecutively patch tested patients in Malmö, Sweden and in Leuven, Belgium. The baseline series used for patch testing was supplemented with a textile dye mix (TDM) consisting of the eight DDs and with the separate dyes. The association between textile-related skin problems and contact allergy to the DDs and other risk factors was investigated using multiple logistic regression analysis. Results: Eighteen per cent of the patients suspected textiles as a cause of their skin problems. Atopic constitution and female sex were risk factors for skin reactions. Synthetic materials were the most common textiles to give skin problems. A significant association was found between self-reported textile-related skin problems and contact allergy to para-phenylenediamine (PPD) [adjusted odds ratio (OR) 2.1; 95% confidence interval (CI) 1.0-4.3]. A similar, but more imprecise, adjusted OR was found for TDM (OR 1.9; 95% CI 0.57-5.6). Contact allergy to black rubber mix was too rare to be evaluated. Conclusions: Contact allergy to PPD was a more prevalent indicator for skin reactions to textiles than the TDM used in this study. abstract_id: PUBMED:28233329 Contact sensitization in patients with suspected textile allergy. Data of the Information Network of Departments of Dermatology (IVDK) 2007-2014. Background: Textile dyes, rubber, elements or textile resins carry the risk of inducing allergic contact sensitization. Objectives: To assess clinical data and patch test results for dermatitis patients with suspected textile allergy. Methods: A retrospective analysis of Information Network of Departments of Dermatology data of the years 2007-2014 of patients patch tested because of suspected textile allergy was performed. Results: Patients of the study group (n = 3207) suffered more frequently from leg, trunk and generalized dermatitis than patients of the control group (n = 95210). 
Among the allergens of the textile dye series, the highest frequency of positive reactions was observed for p-aminoazobenzene (5.1%) and p-phenylenediamine (PPD) (4.5%), followed by Disperse Orange 3 (3.1%), Disperse Blue 124 (2.3%), Disperse Blue 106 (2.0%), Disperse Red 17 (1.1%), and Disperse Yellow 3 (1.1%), partly with concomitant reactions. Patch testing with the patients' own textiles was performed in 315 patients, with positive reactions in 18 patients. These were mostly elicited by blue or black textiles with tight skin contact. Only 2 of these patients also reacted to textile dyes from the German Contact Dermatitis Research Group series. Conclusions: For the comprehensive diagnosis of contact sensitization in patients with suspected textile dermatitis, combined patch testing is indicated, with (i) PPD and a textile dye series and (ii) patients' own clothing. abstract_id: PUBMED:36112512 Exclusion of Disperse Orange 3 is possible from the textile dye mix present in the Swedish baseline patch test series. A study by the Swedish Contact Dermatitis Research Group. Background: The textile dye mix (TDM) 6.6% in petrolatum contains Disperse Blue (DB) 35, Disperse Yellow 3, Disperse Orange (DO) 1 and 3, Disperse Red 1 and 17, and DB 106 and 124. The most frequent allergen in TDM-positive patients is DO 3. Around 85% of para-phenylenediamine (PPD)-allergic dermatitis patients have been positive to DO 3. There has been a discussion to exclude DO 3 from TDM 6.6% because of strong simultaneous reactions to TDM and PPD. Objectives: To study if DO 3 can be excluded from TDM 6.6%. Methods: Patch tests were performed on 1481 dermatitis patients with TDM 6.6%, TDM 7.0% (without DO 3 but the other disperse dyes at 1.0% each), DO 3 1.0%, and PPD 1.0% pet. Results: Contact allergy to TDM 6.6% was 3.6% and to TDM 7.0% was 3.0%. All 26 DO 3-positive patients were positive to PPD. The 44 patients positive to TDM 7.0% plus the 13 positive to PPD and TDM 6.6% but negative to TDM 7.0% totalled 57, outnumbering the 53 positive to TDM 6.6%. Conclusion: TDM 7.0% can replace TDM 6.6% in the Swedish baseline series, since TDM 7.0% together with PPD 1.0% will detect patients with textile dye allergy. abstract_id: PUBMED:25925831 Recommendation to include a textile dye mix in the European baseline series. Background: Disperse dyes are common sensitizers, but are not currently included in the European baseline series. Objectives: To justify the inclusion of a textile dye mix in the European baseline patch test series. Methods: A survey of the reported frequencies of contact allergy to textile dyes and textile dye mixes was performed by searching PubMed (http://www.ncbi.nlm.gov.ludwig.lub.lu.se/pubmed, last accessed 14 December 2014). The results from a multicentre study performed within the European Environmental Contact Dermatitis Research Group (EECDRG) and published in 2014 formed the final basis for the recommendation. Results: The EECDRG study performed in Europe and the United States showed that 2.1-6.9% of consecutively tested dermatitis patients reacted to a 6.6% wt/wt textile dye mix consisting of eight disperse dyes. The clinical relevance was ascertained in >30% of the positive cases. A high frequency of simultaneous sensitivity to Disperse Orange 3 (DO 3) and p-phenylenediamine (PPD) was seen. Active sensitization to the mix was not noted.
Conclusions: It is recommended to include a 6.6% textile dye mix consisting of eight disperse dyes in the European baseline series, even though one component, DO 3, may be superfluous, owing to its frequent cross-reactivity with PPD. Removal of DO 3 from the mix would need further study. abstract_id: PUBMED:23289879 Contact allergy from disperse dyes in textiles: a review. Several disperse dyes (DDs) are still considered to be the most important allergens in textile dermatitis, but there are sparse data about their current use in textiles. The aim of this review was to evaluate published studies and reports on contact allergy to DDs published in PubMed during the last 22 years (1990-2012). Prevalence data are provided by study and by dye, as well as by the described clinical peculiarities of DD dermatitis. We reviewed 54 studies. In total, 26 DDs were tested. The average prevalence in screening studies was >1% for Disperse Blue 106, Disperse Blue 124, and Disperse Orange 3. There is a lack of data on patch testing with Disperse Blue 26, Disperse Blue 102, Disperse Orange 37, Disperse Orange 149, Disperse Yellow 23 and Disperse Yellow 49, which are listed as allergens by the EU Commission. It is necessary to check the purity and identity of dyes used for patch testing, confirm the clinical relevance of positive reactions by patch testing with suspected textiles, and, if the results are positive, determine the culprit dye. abstract_id: PUBMED:24815318 Patch testing with a textile dye mix--a multicentre study. Background: Disperse dyes are well-known contact sensitizers. However, they are not included in the majority of commercially available baseline patch test series. Objectives: To investigate the outcome of patch testing with a textile dye mix (TDM) consisting of eight disperse dyes at dermatology clinics in various countries. Patients/materials/methods: Two thousand nine hundred and seven consecutive dermatitis patients at 12 dermatology clinics representing nine countries were tested with a TDM at 6.6%, consisting of Disperse Blue 35, Disperse Yellow 3, Disperse Orange 1 and 3, and Disperse Red 1 and 17, all at 1.0%, and Disperse Blue 106 and Disperse Blue 124, each at 0.3%, provisionally included in the baseline series. Eighty-seven per cent of the patients allergic to the TDM were also tested with the eight separate dyes. Results: Contact allergy to TDM was found in 108 patients (3.7%). The frequency of contact allergy varied from 2.1% to 6.9% in different centres. Simultaneous reactivity to p-phenylenediamine was found in 57 of the TDM-positive patients (53%). The most frequent dye allergen among the TDM-positive patients was Disperse Orange 3. The contact allergy could have explained or contributed to the dermatitis in approximately one-third of the patients for whom clinical relevance of the TDM contact allergy was recorded. Conclusions: The TDM should be considered for inclusion in the European baseline series. abstract_id: PUBMED:32666533 Patch testing with a textile dye mix with and without Disperse Orange 3. Background: The textile dye mix (TDM) 6.6% pet. contains Disperse Blue (DB) 35, Disperse Yellow 3, Disperse Orange (DO) 1 and 3, Disperse Red 1 and 17, and DB 106 and 124. The most frequent allergen in TDM-positive patients is DO 3. Around 85% of p-phenylenediamine (PPD)-allergic dermatitis patients have shown positive patch test reactions to DO 3. There has been a discussion to exclude DO 3 from TDM 6.6% because of frequent, strong reactions to TDM 6.6% and PPD. 
Objectives: To study if DO 3 can be omitted from a TDM. Methods: Patch tests were performed on 2250 dermatitis patients with TDM 6.6%, TDM 5.6% pet., TDM 7.0% pet., and PPD 1.0% pet.; 122 patients were also patch tested with DO 3 1.0% pet. Results: Among the 2250 patients patch tested, contact allergy prevalence to TDM 6.6% was 2.4%, to TDM 5.6% 1.8%, and to TDM 7.0% 2.0%. Of the 54 TDM 6.6%-positive patients, 55.6% reacted to PPD; as many as 42.2% of PPD-allergic women and 50% of PPD-allergic men reacted to TDM 6.6%. Of the 17 DO 3-positive patients, 94.1% showed a positive reaction to PPD. Conclusion: Results indicate that DO 3 can probably be omitted from TDM, but patch testing with TDM 6.6%, TDM 7.0%, DO 3 1.0%, and PPD 1.0% simultaneously is needed to finally decide whether it is possible or not. abstract_id: PUBMED:26172486 Patch testing to a textile dye mix by the International Contact Dermatitis Research Group. Background: Disperse dyes are well-known contact sensitizers not included in the majority of commercially available baseline series. Objective: To investigate the outcome of patch testing to a textile dye mix (TDM) consisting of 8 disperse dyes. Methods: Two thousand four hundred ninety-three consecutive dermatitis patients in 9 dermatology clinics were patch tested with a TDM 6.6%, consisting of Disperse (D) Blue 35, D Yellow 3, D Orange 1 and 3, D Red 1 and 17, all 1.0% each, and D Blue 106 and D Blue 124, each 0.3%. 90 reacted positively to the TDM. About 92.2% of the patients allergic to the TDM were also tested with the 8 separate dyes. Results: Contact allergy to TDM was found in 3.6% (1.3-18.2). Simultaneous reactivity to p-phenylenediamine was found in 61.1% of the TDM-positive patients. Contact allergy to TDM and not to other p-amino-substituted sensitizers was diagnosed in 1.2%. The most frequent dye allergen in the TDM-positive patients was D Orange 3. Conclusions: Over 30% of the TDM-allergic patients would have been missed if only the international baseline series had been tested. Contact allergy to TDM could explain or contribute to dermatitis in over 20% of the patients. Textile dye mix should be considered for inclusion into the international baseline series. abstract_id: PUBMED:14738722 Disperse dyes in fabrics of patients patch-test-positive to disperse dyes. Background: Little evidence exists that dyes to which individuals are patch test positive are in those garments that they suspect cause their dermatitis, which makes diagnosis and management of colored-textile allergic contact dermatitis difficult. Objective: We determined whether the disperse dyes to which a patient with suspected colored garment-dye contact allergy was patch test positive were present in the garment that the patient suspected to be the cause of his or her skin lesions. Methods: Each patient was patch-tested with 12 disperse dyes in a commercial patch test series. Disperse dyes in the submitted garment(s) were identified. The dyes to which each patient was patch test positive were compared with the dyes identified in that patient's submitted fabric(s). When a dye appeared in both lists, a correlation was found. Results: Twenty-two of 32 garments received contained disperse dyes. In all, 35 different disperse dyes were identified. Twelve dyes elicited a reaction in at least one patient. Only nine patients were patch test positive to at least one dye identified in the fabric submitted. Dyes appearing in both lists were Blue 106 (8 times), Blue 124 (2 times), Yellow 3 (once), and Red 1 (once).
Conclusion: Dyes to which a patient was patch test positive were infrequently identified in the fabric suspected to be the cause of the skin lesions, which means that the next step will be to patch-test with fabric swatches and extracts of dye from the submitted garments. Answer: Contact allergy to disperse dyes and related substances is indeed associated with textile dermatitis. Disperse dyes are recognized as common sensitizers among textile dyes, and they are the most frequent allergens in cases of textile dermatitis (PUBMED:19067698, PUBMED:28233329, PUBMED:23289879). The prevalence of contact allergy to disperse dyes has been reported to be between 0.4 and 6.7%, with the highest frequency of positive reactions observed for specific dyes such as p-aminoazobenzene and p-phenylenediamine (PPD) (PUBMED:24348384, PUBMED:28233329). A significant association has been found between self-reported textile-related skin problems and contact allergy to PPD, which was a more prevalent indicator for skin reactions to textiles than the textile dye mix (TDM) used in one study (PUBMED:19067698). Moreover, a multicentre study by the International Contact Dermatitis Research Group found that contact allergy to a TDM consisting of eight disperse dyes was present in 3.6% of tested patients, with simultaneous reactivity to PPD found in 61.1% of TDM-positive patients (PUBMED:26172486). Patch testing with patients' own textiles has also been recommended for a comprehensive diagnosis of contact sensitization in patients with suspected textile dermatitis, as positive reactions were mostly elicited by blue or black textiles with tight skin contact (PUBMED:28233329). Furthermore, studies have suggested that the inclusion of a TDM in the European baseline series could improve the detection of textile dye allergies (PUBMED:25925831, PUBMED:24815318). In summary, there is a clear association between contact allergy to disperse dyes and related substances and textile dermatitis, with clinical data and patch test results supporting this link. The presence of such allergens in textiles can lead to allergic contact dermatitis, and patch testing with a TDM and individual dyes is a valuable tool for diagnosing and managing this condition.
Instruction: The VIVA urethral plug: a sensible expansion of the spectrum for conservative therapy of urinary stress incontinence? Abstracts: abstract_id: PUBMED:8674956 The VIVA urethral plug: a sensible expansion of the spectrum for conservative therapy of urinary stress incontinence? Objective: To evaluate applicability, acceptance, side effects and complications and the possible curative effect on female stress urinary incontinence of the treatment with the urethral plug VIVA (Braun Melsungen, Germany). Design: Part I: Prospective clinical study. Part II: Ongoing prospective-longitudinal study. Additionally two case reports. Subjects: Part I: 156 consecutive patients of the urodynamic outpatient clinic. Part II: 21 patients with urinary stress incontinence (SUI). Main Outcome Measures: Part I: Ability to hold the plug while walking and standing, acceptance rate of plug therapy. Part II: Subjective improvement of SUI, pad-weighing test, cough test, urinary tract infections, other complications during four months of plug treatment. Results: Part I: 62% of 156 patients were able to hold the plug. 40% of 53 patients with SUI accepted the plug treatment. Part II: 14 patients completed the study. 4 patients were subjectively and objectively cured, 3 were improved, 7 unchanged. The cured patients all had a low-grade SUI with a urine loss of 3 g in the pad-weighing test before treatment. 12/21 patients showed 1-2 urinary tract infections and 1 patient dropped out of the study because of recurrent urinary tract infections. In one patient a plug migrated into the bladder and was removed cystoscopically. In two case reports the possibility of symptomatic plug treatment for patients with severe stress incontinence after surgical and radiological treatment of cervical cancer is demonstrated. Conclusions: Plug treatment is a broadening of the spectrum of conservative treatment of SUI as a symptomatic treatment in patients with contraindications to anti-incontinence surgery and as a curative treatment in low-grade SUI. abstract_id: PUBMED:31066956 Complications of urethral bulking therapy for female stress urinary incontinence. Aims: To review, report, and discuss the complications associated with urethral bulking therapy in female stress urinary incontinence. Methods: An extensive nonsystematic literature review on complications associated with injectable bulking agents used in clinical practice was conducted. We reviewed articles published in English and indexed in the PubMed, Embase, and Google Scholar databases. Original articles, case reports, and case series were taken into consideration. Data regarding the safety of injectable bulking agents and the complications associated with their utility within the context of urethral bulking therapy for female stress urinary incontinence were extracted and discussed. Results: Approximately 1/3 of patients experience some type of complication after urethral bulking therapy. The majority of these complications are of low grade, transient, do not necessitate additional surgical intervention, and are amenable to treatment with conservative measures such as clean intermittent catheterization and antibiotics. However, more serious complications such as abscess formation, delayed hypersensitivity reactions, and vaginal erosion have been reported. Some of the injectable bulking agents have been withdrawn from the market because of their unfavorable adverse effect profile. Conclusions: Urethral bulking therapy can be considered a low-risk procedure.
However, it is not without complications, which can be severe in rare instances. The search for the ideal urethral bulking agent is ongoing and future comparative studies assessing the safety and efficacy of these compounds in randomized controlled settings are warranted. abstract_id: PUBMED:29675556 Patients' perspectives on urethral bulk injection therapy and mid-urethral sling surgery for stress urinary incontinence. Introduction And Hypothesis: The aim of this study was to identify all treatment decision factors that determined the preference for peri-urethral bulk injection therapy (PBI) or mid-urethral sling (MUS) surgery in patients with primary stress urinary incontinence (SUI). Second, we explored what patients expect from treatment for SUI and whether patients would consider PBI as a primary treatment option. Methods: In a qualitative design, 20 semi-structured, face-to-face interviews were conducted in women with primary SUI. Exclusion criteria were: previous PBI or MUS surgery; predominating urgency. Interviews were guided by three open-ended questions and a topic list. PBI treatment and MUS surgery were described in detail, and the efficacy was stated as 70% and 90%, respectively. Data saturation was reached when no new treatment decision factors were identified in three consecutive interviews. Interviews were audiotaped and fully transcribed. Thematic analysis by a coding process was done independently by two researchers. Results: Sixteen procedural, personal, professional, social and external treatment decision factors were identified. Regarding expectations about treatment for SUI, women believed 'becoming dry' was wishful thinking. The majority of patients accepted a small degree of persistent urinary incontinence after treatment. Regardless of their treatment preference, patients indicated that women should be informed about PBI as a primary treatment option. Conclusion: Patients with primary SUI are open to PBI as an alternative treatment option even with lower cure rates compared with MUS surgery performed under general or spinal anesthesia. Patients indicated that women with primary SUI seeking treatment should be informed about PBI as a treatment option. abstract_id: PUBMED:29406061 Penile and Urethral Reconstructive Surgery. Penile and urethral reconstructive surgical procedures are used to treat a variety of urologic diagnoses. Urethral stricture disease can lead to progressive lower urinary tract symptoms and may require multiple surgical procedures to improve patients' symptoms. Male stress urinary incontinence is associated with intrinsic sphincter deficiency, oftentimes associated with radical prostatectomy. Men suffering from urethral stricture disease and stress urinary incontinence should be referred to a urologist because multiple treatment options exist to improve their quality of life. abstract_id: PUBMED:12534925 Transabdominal vesical sonography of urethral syndrome and stress incontinence. Background: Transabdominal ultrasonography was used to study the bladder neck morphology in women with urethral syndrome or stress urinary incontinence, in order to determine the ultrasonographic findings of these conditions. Methods: A total of 210 female patients with a normal bladder, asymptomatic trigonitis, urethral syndrome, and stress incontinence were studied.
The mucosal thickness around the bladder neck, the length of the anterior base plate of the bladder, and the anteroposterior vesical wall angle (APVA) at the bladder neck were measured on sagittal transabdominal vesical ultrasonograms with the patient in the supine position. Results: Patients with asymptomatic trigonitis or urethral syndrome had thicker mucosa around the bladder neck than the subjects with a normal bladder, and the subjects with stress incontinence had normal mucosa. The APVA was 158 +/- 17 (mean +/- SD) degrees in the subjects with a normal bladder. It was smaller in symptomatic patients and decreased to 109 +/- 10 degrees in those with conservative therapy-resistant incontinence. The anterior edge of the vesical base plate was visible approximately 2 cm from the bladder neck in subjects without incontinence, while it tended to be absent in patients with incontinence and an APVA of less than 126 degrees. Conclusion: A small APVA appears to reflect bladder neck descent, while a small APVA without a visible anterior base plate edge may reflect hypotonia of the vesical base plate. Transabdominal vesical ultrasonography with the patient in the supine position provides useful information and can be carried out as a routine examination in female patients with micturition disorders. abstract_id: PUBMED:15098765 Cell therapy for urethral sphincter incompetence by muscle precursor cell autograft. This article reviews cell therapy for muscle diseases and describes the development of a treatment for urethral sphincter insufficiency by muscle precursor cell (MPC) autograft. These studies were conducted in several successive steps, comprising: 1) comparative study of the cellular mechanisms of regeneration of skeletal striated muscle and urethral striated sphincter and development of a method of extraction of MPC; 2) creation of an animal model of sphincter injury reproducing chronic denervation lesions and fibrosis responsible for sphincter insufficiency in humans; 3) study of the biology of intrasphincteric transplantation of MPC extracted from peripheral muscle, taking into account the interactions between these cells and the peripheral nervous system. abstract_id: PUBMED:8582498 Conservative therapy of female genuine stress incontinence with vaginal cones. Objective: The objective was to analyse urodynamic data before and after conservative treatment with vaginal cones. Study Design: The design was an open clinical study and was carried out at the Urogynaecology Unit of the University Women's Hospital. Eighteen women with genuine urinary stress incontinence were treated with vaginal cones for 6 weeks. Cystometry was performed before and after conservative therapy. The patients' subjective improvement and the urodynamic data have been compared. The Mann-Whitney U-test was used for statistical analysis. Results: Of eighteen women with cone therapy, eight were continent after 6 weeks and showed a significant increase of the dynamic urethral closure pressure. Seven patients reported a subjective improvement, and in three women no change of stress incontinence was observed. Colposuspension was performed at a later date in these three cases. Conclusion: Vaginal cone therapy is a successful method to cure mild female stress incontinence and has the advantage of avoiding an incontinence operation. Therapeutic success can be assessed by urodynamic evaluation. abstract_id: PUBMED:26872643 The promise of urethral pressure reflectometry: an update.
Introduction And Hypothesis: The urethral closure mechanism in women is incompletely understood. Measuring closure function in a collapsed urethral tube without distending the tube and changing its natural shape is impossible with conventional methods. Over the last decade, studies with urethral pressure reflectometry (UPR), a method that measures pressure and the corresponding cross-sectional area along the entire length of the urethra continuously by means of acoustic reflectometry, have been performed. The studies have mainly been performed on patients with stress urinary incontinence (SUI). Methods: Our aim was to provide an overview of the studies conducted with UPR, establishing whether the method may be used in clinical practice. We reviewed all literature published on UPR. Results: Urethral pressure reflectometry is easily performed with limited bother for the patient. The catheter consists of a 45-cm-long PVC tube connected to a thin and highly flexible polyurethane bag, which is placed in the urethra. When inserted, the bag only occupies 0.4 mm² of the urethra, respecting the natural shape and orientation of the urethra and, most importantly, respecting the laws of physics when measuring urethral pressure. UPR can discriminate patients with SUI from continent women, and separate assessment of the sphincter function and support system is possible. Also, UPR has revealed statistically significant differences in urethral pressures after drug therapy for SUI. Conclusions: We conclude that UPR has a place in clinical practice. The method has shown groundbreaking potential in the understanding of urethral dysfunction in SUI patients. abstract_id: PUBMED:6974921 Urethral syndrome or urinary tract infection? Urodynamic studies of 25 women between eleven and forty-five years of age whose history included frequent urinary tract infections disclosed urethral activity different from that of women with pure stress incontinence. Often the former had associated symptoms of urge, although the latter did not. Not all symptoms of infections were documented by culture, hence could have been related to tension in the pelvic floor rather than to bacteriuria. Urodynamic studies showed spasticity of the urethral musculature. The women with frequent urinary tract infections uniformly had hyperactive urethral striated muscle, those with the most severe form of urge showing varying resting tension in the urethral sphincter. In all cases, discomfort or urge was proportional to the elevation in urethral sphincter pressure and dramatically diminished with a drop in closure tonus. In many cases relief of urge occurred only with voiding, when relaxation of the pelvic floor musculature could be induced by the patient. Urethral symptoms result from physiologic imbalance in urethral activity and may predispose to urinary tract infection. Thus, physicians treating urge complaints in young women should be aware that both entities have to be treated. abstract_id: PUBMED:409856 Urethral pressure profile: current concepts. The urethral pressure profile is an artifactual assessment of urethral response to physiologic distension. It is a reproducible clinical tool used to evaluate numerous disease states. Examples of its usefulness are presented to illustrate the wide variety of clinical uses. The urethral pressure profile assists the clinician in the selection of the proper form of management in difficult cases of urinary incontinence or obstruction.
Answer: The VIVA urethral plug can be considered a sensible expansion of the spectrum for conservative therapy of urinary stress incontinence (SUI). In a prospective clinical study and an ongoing prospective-longitudinal study of the plug, 62% of 156 patients were able to hold the plug, and 40% of 53 patients with SUI accepted the plug treatment (PUBMED:8674956). In the second part of the study, 14 patients completed the study, with 4 patients being subjectively and objectively cured, 3 improved, and 7 unchanged. The cured patients had a low grade of SUI with a urine loss of 3 g in the pad-weighing test before treatment. However, there were some side effects and complications, such as urinary tract infections in 12 out of 21 patients and one case of plug migration into the bladder, which required cystoscopic removal (PUBMED:8674956). The plug treatment is seen as a broadening of the spectrum of conservative treatment of SUI, serving as a symptomatic treatment in patients with contraindications to anti-incontinence surgery and as a curative treatment in low-grade SUI (PUBMED:8674956). This suggests that the VIVA urethral plug could be a viable option for certain patients, particularly those with mild symptoms or those who are not candidates for more invasive procedures. It is important to note that while urethral bulking therapy can be considered a low-risk procedure, it is not without complications, which can be severe in rare instances (PUBMED:31066956). Patients' perspectives also indicate openness to peri-urethral bulk injection therapy as an alternative treatment option, even with lower cure rates compared to mid-urethral sling surgery (PUBMED:29675556). Therefore, the VIVA urethral plug could be part of a range of conservative treatments available to patients, with the understanding that there are potential risks and varying degrees of effectiveness.
Instruction: Is an ileal pouch an alternative for patients requiring surgery for Crohn's proctocolitis? Abstracts: abstract_id: PUBMED:37601738 Restorative pouch surgery following proctocolectomy for inflammatory bowel disease: past experience and future direction. Major advancements in surgery for patients suffering proctocolitis from ulcerative colitis (UC) or selected patients with Crohn's disease (CD) have emerged in a relatively short time. Historically, patients underwent a proctocolectomy with end ileostomy; however, a restorative proctocolectomy with an ileal pouch-anal anastomosis (IPAA) was introduced in the late 1970s in the United Kingdom and gave patients the opportunity to avoid a permanent stoma. The pouch was initially designed as a hand-sewn "S" shaped pouch; with the invention of the linear stapler, a "J" shaped pouch was described in Japan, and subsequent advances in the United States largely contributed to the pelvic pouch's evolution to the standard of care in the management of patients with inflammatory bowel disease (IBD). The procedure was then divided into different stages depending on the medical condition of the patient, and minimally invasive techniques (laparoscopic & robotic surgery) have continued to advance the success of the operation. Unfortunately, pouch complications occur, and seem to be occurring at an increasing frequency with the adoption of minimally invasive surgery. The field of reoperative pouch surgery has emerged to offer patients the opportunity to restore their quality of life (QOL) without the need for a permanent ostomy. Many patients with signs of pouch failure such as pouchitis, fistulae, pain and obstruction are diagnosed with Crohn's of the pouch, but many have mechanical complications that can be corrected with surgery, rather than offering pouch excision with a permanent ostomy (continent or traditional). Patients with Crohn's may be offered an IPAA but they will not have success if they, like patients with UC, have mechanical complications leading to their pouch failure. Patients who undergo reoperative pouch surgery do well with an acceptable QOL. abstract_id: PUBMED:16328125 Is an ileal pouch an alternative for patients requiring surgery for Crohn's proctocolitis? Background: Most surgeons consider Crohn's colitis to be an absolute contraindication for a continent ileostomy, due to high complication and failure rates. This opinion may, however, be erroneous. The results may appear poor when compared with those after pouch surgery in patients with ulcerative colitis (UC), but the matter may well appear in a different light if the pouch patients are compared with Crohn's colitis patients who have had a proctocolectomy and a conventional ileostomy. Methods: We assessed the long-term outcomes in a series of patients with Crohn's colitis who had a proctocolectomy and a continent ileostomy (59 patients) or a conventional ileostomy (57 patients). The median follow-up time was 24 years for the first group and 27 years for the second group. Results: The outcomes in the two groups of patients were largely similar regarding both mortality and morbidity; the rates of recurrent disease and reoperation with loss of small bowel were also similar between groups. Conclusions: The possibility of having a continent ileostomy, thereby avoiding a conventional ileostomy (even if only for a limited number of years), may be an attractive option for young, highly motivated patients. abstract_id: PUBMED:10210930 Emergency surgery in inflammatory bowel disease.
Experience in 26 cases. Background: The emergency surgical treatment of chronic inflammatory bowel diseases is closely related to the classification of patients according to their symptoms and clinical conditions, as well as possible surgical options. In our study, a practical set of criteria is proposed for classifying the severity of symptoms in relation to patient conditions and the applicable surgical strategies. Methods: A retrospective evaluation of the outcome of the disease was performed in 26 patients undergoing emergency treatment in our hospital, with at least 5 years of follow-up. Fourteen patients were affected by ulcerative rectocolitis and 11 underwent subtotal colectomy with ileostomy; the remaining 12 were affected by Crohn's disease and were treated with colonic or ileal local resection. Results: Among the 11 operated patients with RCU, 6 cases of toxic megacolon, 4 of severe colitis and one perforation (postoperative death) were diagnosed. Recanalization was possible in 8 patients. On the other hand, proctectomy was necessary in the remaining 3 patients as the final operation. In the MC patients, 6 occlusions, 4 cases of severe colitis, one multiple perineal fistulization and one perforation of an occult right colon tumor were diagnosed. We performed 5 right colectomies, 4 jejuno-ileal resections, one Hartmann's operation, one emergency colostomy and one multiple bypass. Because of recurrence, one right colectomy required a subsequent total colectomy, and two jejuno-ileal resections required right colectomy soon after. Three jejuno-ileal resections were performed with a conservative purpose in patients treated by right colectomy. Conclusions: The results indicate that in ulcerative rectocolitis total colectomy permits adequate control of the disease, as well as satisfactory management of the rectal stump, with subsequent recanalization in the majority of cases, whereas in Crohn's disease the frequency of recurrence is higher and it seems more advisable to opt for radical resection (chiefly right colectomy), though with a conservative purpose. abstract_id: PUBMED:2844477 Emergency colectomy for cytomegalovirus ileocolitis in patients with the acquired immune deficiency syndrome. The charts of all patients with the acquired immune deficiency syndrome (AIDS) who underwent emergency intra-abdominal surgery between January 1981 and July 1987 were reviewed. Eleven AIDS patients underwent 13 emergency laparotomies. Seven of these patients (64 percent) had cytomegalovirus (CMV) ileocolitis as the pathologic process requiring emergent surgical intervention. Four patients had hemorrhagic CMV proctocolitis and three had perforations of CMV ulcers of the ileum or rectosigmoid. The operations performed included three subtotal colectomies, two segmental resections, and two diverting stomas. The postoperative mortality rate in the CMV group was 28 percent at one day, 71 percent at one month, and 86 percent at six months. Furthermore, CMV ileocolonic pathology was directly responsible for 70 percent of the deaths in AIDS patients who underwent emergent exploratory laparotomy. abstract_id: PUBMED:1914729 Fate of the rectum after colectomy and ileostomy for Crohn's colitis. Eighty-four patients had colectomy with ileostomy and oversewing of the rectum for Crohn's colitis. Seventy-two patients were operated on because of intractable disease, colitis in combination with rectal fistulas, and toxic megacolon.
The operative mortality was 6 percent, and neither emergency surgery nor treatment with steroids correlated with operative morbidity. After a median 7.7 years of follow-up, 25 ileorectal anastomoses had been undertaken, 16 of which were successful. Twenty-nine proctectomies were performed; the resulting 10-year cumulative risk of proctectomy was 50 percent. While the risk of proctectomy was significantly less among patients with a normal rectum at colectomy compared with patients with proctitis, the initial macroscopic degree of proctitis did not correlate with the risk of subsequent proctectomy. The 5-year cumulative ileal resection rate in 29 patients with a rectum in situ but out of circuit was 29 percent. The possibility of a future ileorectal anastomosis should still be considered in patients with proctocolitis. abstract_id: PUBMED:8245664 The differential diagnosis and comorbidity of hidradenitis suppurativa and perianal Crohn's disease. Over the last 8 years, 61 patients with hidradenitis suppurativa (HS) have been treated at this institution. Twenty-four have also had a diagnosis of Crohn's disease (38%). This dual pathology is examined in detail in this retrospective review. There were 11 males and 13 females with a mean age of 39 years (range 18 to 75 years). The Crohn's disease was ileal in 1 patient, ileocolic in 4, and affected the large bowel only in 19. The diagnosis of Crohn's disease predated that of HS by an average of 3.5 years. At the time of review, 22 patients had a stoma, 23 had undergone laparotomy and 17 had lost their rectum. Hidradenitis suppurativa occurred in the perineal or perianal area in all patients but involved other sites in 20 cases. Skin grafting had been done in 9 and local procedures in 19 patients. Granulomas were found in excised skin in 6 cases but this finding was not associated with a poor outcome. At a mean follow-up of 3.2 years from the most recent surgery for HS (range 1 to 11 years), 11 were asymptomatic for HS, 11 had symptoms and no follow-up was available in 2. These data show that HS may coexist with Crohn's proctocolitis, complicating the diagnosis and management of patients in whom it occurs. An increased appreciation of the possibility is recommended. abstract_id: PUBMED:8763061 Proctitis of diversion on an excluded rectum in patients with hemorrhagic rectocolitis. Objectives: Diversion colitis is characterized by inflammatory lesions affecting the colon or rectum excluded from the fecal stream for varied etiologies. These lesions may mimic both ulcerative colitis and Crohn's disease. The aim of our study was to examine the excluded rectum in patients with ulcerative colitis, and to study the evolution of the pathological lesions after ileo-rectal anastomosis. Methods: Eighteen patients with ulcerative colitis treated by total colectomy before ileo-rectal anastomosis were studied. The pathological features, i.e. glandular alteration, inflammatory infiltrate and mucosal ulceration or fissure, were studied during 3 periods: initial colectomy, excluded rectum at surgery for anastomosis and rectal biopsies after anastomosis. Results: In the excluded rectum, we observed follicular lymphoid hyperplasia (18 cases), granulomas with giant cells (9 cases), and mucosal fissures (9 cases). The inflammation extended to the submucosa in all cases and was occasionally transmural. These lesions disappeared after the anastomosis and thus seemed to be connected with the rectal diversion.
Conclusions: The pathological changes of diversion proctitis, which include mucosal fissures, granulomas with giant cells or transmural inflammation, may lead to an erroneous diagnosis of Crohn's disease. Review of the previous colectomy specimen is therefore mandatory to confirm the initial diagnosis of ulcerative colitis. These lesions disappear after anastomosis. abstract_id: PUBMED:19533065 Therapy of refractory proctocolitis and Crohn's disease. Incisionless laparoscopic proctocolectomy with a Brooke ileostomy. Background: Laparoscopic colorectal surgery is nowadays also widely used in surgery for inflammatory bowel disease. With the correct indications, laparoscopic surgery is an attractive cosmetic alternative for these predominantly young patients. Refractory fistulizing Crohn's proctocolitis is a very severe disease with a severe limitation of quality of life. Proctocolectomy with a Brooke ileostomy represents a very effective option for these patients. The laparoscopic technique can at least spare the patients a salvage laparotomy incision. We report our preliminary experience. Patients And Method: A total of 8 patients (mean age 25 years, range 19-31 years, female:male ratio 5:3) were operated on. The mean preoperative time course of the disease was 28 months (range 12-156 months). All patients had received long-term prednisolone therapy of >15 mg, 2 patients received azathioprine medication, 2 underwent anti-TNF-alpha therapy and 6 received 5-aminosalicylic acid (5-ASA). The mean preoperative BMI was 19 (range 15-21). All patients suffered from Crohn's pancolitis with anorectal fistulas. Laparoscopic proctocolectomy was performed using 4 trocars placed in a semicircular fashion. The resected tissue was salvaged transanally and the Brooke ileostomy was brought out via the right lateral trocar site. The terminal exit of the rectum was performed transanally with preservation of the pelvic floor and the anal sphincter, and the anal fistulas were separated. The small pelvis was filled by a transanally fixed omentum. Results: The median time for surgery was 236.5 mins (range 220-330 mins). Postoperative bowel paralysis occurred as a complication in two patients, and 4 patients could be discharged problem-free according to the fast-track concept. Cosmetic results were excellent in all cases. Perianal and perirectal manifestations healed completely after a median of 4 weeks. Conclusions: Incisionless proctocolectomy represents a good and realizable alternative to open surgery. The main advantages are excellent cosmetic results and a better preservation of the external integrity of the abdomen. abstract_id: PUBMED:21243793 p-ANCA prevalence in ulcerative colitis patients and first-degree relatives. Unlabelled: Perinuclear anti-neutrophil cytoplasmic antibodies (p-ANCA) are more common in patients with ulcerative colitis (UC) than in patients with Crohn's disease (CD), but their prevalence depends on the population being studied and the method employed for their detection. Aim: To determine the prevalence of p-ANCA in ulcerative colitis (UC) patients and their first-degree relatives in the NE Romania area. Material And Method: In this study, we investigated the prevalence of p-ANCA, as detected by ELISA, in the serum of 44 patients with UC and 22 first-degree relatives. We also correlated the presence of this antibody with disease activity and extent, extraintestinal complications and therapy. Twenty-six healthy individuals comprised the control group.
Results: p-ANCA was detected in 11% of the patients with UC. Of the 5 p-ANCA-positive UC patients, 1 was found to have proctosigmoiditis, 1 had left-sided colitis and 3 had pancolitis. There was no correlation between the presence of this antibody and any of the studied clinical variables (disease extent, complications, family history, surgery). No first-degree relative or member of the control group presented a positive test. Conclusions: The prevalence of p-ANCA in patients with UC from NE Romania (11%) is lower than that in Western populations. The negativity of p-ANCA in all first-degree relatives of UC patients from NE Romania should be further elucidated. abstract_id: PUBMED:814826 Hyperalimentation in inflammatory bowel disease. The effect of total parenteral nutrition on a group of thirty-four patients with inflammatory bowel disease over the past three and a half years was reviewed. Only patients in whom medical management had failed were included. Patients in whom the decision for surgery had been made and who were treated with total parenteral nutrition in an effort to prepare them for surgery were excluded. Of the group with Crohn's disease, those patients with small bowel involvement appeared to fare best; surgery was avoided in approximately 70 per cent of these admissions. Crohn's disease with colonic involvement had a less favorable prognosis, and 43 per cent of these patients underwent operation. Parenteral nutrition does not appear to affect the course of ulcerative colitis, as almost all patients in the group were treated by colectomy. Answer: An ileal pouch, specifically an ileal pouch-anal anastomosis (IPAA), has been considered an alternative for patients requiring surgery for Crohn's proctocolitis, although traditionally, Crohn's colitis has been viewed as a contraindication for a continent ileostomy due to high complication and failure rates (PUBMED:16328125). However, this perspective may be challenged when comparing outcomes of Crohn's colitis patients with a continent ileostomy to those with a conventional ileostomy. Long-term outcomes in a series of patients with Crohn's colitis who had a proctocolectomy and either a continent ileostomy or a conventional ileostomy showed largely similar results regarding mortality and morbidity, suggesting that a continent ileostomy could be an attractive option for young, highly motivated patients (PUBMED:16328125).
Instruction: Do Prior Abdominal Surgeries Increase Complications in Abdominally Based Breast Reconstructions? Abstracts: abstract_id: PUBMED:24691317 Do Prior Abdominal Surgeries Increase Complications in Abdominally Based Breast Reconstructions? Background: A large proportion of patients presenting for autologous breast reconstruction have a history of prior abdominal surgeries such as obstetric, gynecologic, or general surgical procedures. The impact of prior abdominal wall violation on the ability to perform free tissue transfer from the abdomen needs to be explored and characterized. The purpose of this study was to assess the impact of prior abdominal surgery on perfusion-related complications and donor-site morbidity in free tissue abdominal transfer. Methods: All patients who underwent abdominally based free tissue transfer for breast reconstruction from 2005 to 2011 at the Hospital of the University of Pennsylvania were included. In addition to collecting data on standard patient past medical histories, comorbidities, and case characteristics, we also noted specific types of prior abdominal surgeries, number of prior surgeries, and if the rectus sheath was violated. Outcomes assessed included thrombotic complications, flap loss, major surgical complications, minor surgical complications, delayed wound healing, and subsequent hernia formation. Results: Eight hundred twelve patients underwent 1257 free flap breast reconstructions during the study period. Four hundred seventeen (51.4%) women had undergone prior abdominal surgery. The most common prior abdominal surgeries included total abdominal hysterectomy and/or bilateral salpingo-oophorectomy (35.7%), cesarean delivery (33.8%), and appendectomy (12.7%). No significant differences were noted in the number of major intraoperative complications (P = 0.68), total thrombotic events (P = 0.339), or flap losses (P = 0.53). Patients who had undergone prior rectus sheath violation were found to experience a greater amount of delayed healing of the donor site (22.7% vs 16.5%, P = 0.03). Additionally, a higher rate of postoperative hernia formation was noted in patients who had undergone prior hernia repairs (13.6% vs 3.3%, P = 0.04). Conclusions: A significant portion of patients presenting for breast reconstruction have had prior abdominal surgeries. This study demonstrates that these prior procedures represent an acceptable level of risk; although this issue should still be addressed during preoperative patient counseling. In patients with prior hernia repairs, however, additional care should be given to the fascial closure as these patients may be at higher risk for subsequent hernia formation after abdominally based breast reconstruction. abstract_id: PUBMED:24572867 The effect of prior abdominal surgery on abdominally based free flaps in breast reconstruction. Background: The abdomen has long remained the preferred donor site in breast reconstruction. Over time, the flap has evolved to limit morbidity with reduced muscular harvest. Previous abdominal operations, however, may limit the ability to perform a muscle- or fascia-sparing flap. The purpose of this study was to evaluate outcomes in women who had prior abdominal operations and underwent abdominally based autologous breast reconstruction. Methods: All patients who underwent abdominally based breast free flap reconstruction between 2004 and 2009 were reviewed. A study group of patients with previous open abdominal surgery were compared to patients with no prior abdominal surgery. 
Patient demographics, operative details, and flap and donor-site complications were analyzed. Results: A total of 539 patients underwent abdominally based breast free flap reconstruction. The study group consisted of 268 patients (341 flaps) and the control group consisted of 271 patients (351 flaps). Prior abdominal surgery led to greater muscular harvest, as 19.9 percent in the study group versus 12.0 percent required muscle-sparing 1-type harvest (p < 0.01). Both groups presented similar overall complications, with the exception of lower partial flap loss and increased wound healing complications in the study group (p < 0.05). Abdominal wall laxity became less frequent with an increasing number of prior abdominal operations. Conclusions: Abdominally based flaps for breast reconstruction, including muscle-sparing 3 (deep inferior epigastric perforator) flaps, can be performed safely in patients with prior abdominal surgery. These patients should be informed, however, of an increased chance of muscular harvest and wound healing complications. Clinical Question/level Of Evidence: Risk, III. abstract_id: PUBMED:33648009 The Impact of Prior Abdominal Surgery on Complications of Abdominally Based Autologous Breast Reconstruction: A Systematic Review and Meta-Analysis. Background: Approximately half of all patients presenting for autologous breast reconstruction have abdominal scars from prior surgery, the presence of which is considered by some a relative contraindication for abdominally based reconstruction. This meta-analysis examines the impact of prior abdominal surgery on the complication profile of breast reconstruction with abdominally based free tissue transfer. Methods: Literature search was conducted using PubMed, Scopus, and Web of Science. Included studies examined patients with a history of prior abdominal surgery who then underwent abdominally based free flap breast reconstruction. Prior liposuction patients and those with atypical flap designs were excluded. The Newcastle-Ottawa Scale was used to assess study quality. Flap complications included total and partial flap loss, fat necrosis, infection, and reoperation. Donor-site complications included delayed wound healing, infection, seroma, hematoma, and abdominal wall morbidity (hernia, bulge, laxity). Relative risk and 95% confidence intervals (CIs) between groups were calculated. Forest plots, I² statistic heterogeneity assessments, and publication bias funnel plots were produced. Publication bias was corrected with a trim-and-fill protocol. Overall effects were assessed by fixed-effects and random-effects models. Results: After inclusion and exclusion criteria were applied, 16 articles were included for final review. These included 14 cohort and 2 case-control studies, with 1,656 (46.3%) patients and 2,236 (48.5%) flaps having undergone prior surgery. Meta-analysis showed patients with prior abdominal surgery were significantly more likely to experience donor-site delayed wound healing, with a risk ratio of 1.27 (random 95% CI [1.00; 1.61]; I² = 4) after adjustment for publication bias. No other complications were statistically different between groups. Conclusion: In patients with a history of prior abdominal surgery, abdominally based free tissue transfer is a safe and reliable option. Abdominal scars may slightly increase the risk of delayed donor-site wound healing, a finding that can aid the surgeon in preoperative counseling.
abstract_id: PUBMED:37994593 Patterns of major cutaneous surgeries and reconstructions in patients with cutaneous squamous cell carcinoma in the USA. Aim: Since use of major cutaneous surgeries/reconstructions among patients with cutaneous squamous cell carcinoma (CSCC) is not well described, we sought to quantify major cutaneous surgeries/reconstructions among patients with CSCC who were newly diagnosed and for those treated with systemic therapy, stratified by immune status. Methods: We used the Optum® Clinformatics® Data Mart database (2013-2020) and Kaplan-Meier estimators to assess risk of surgeries/reconstructions. Results: 450,803 patients were identified with an incident CSCC diagnosis, including 4111 patients with CSCC who initiated systemic therapy. The respective 7-year risks of major cutaneous surgeries/reconstructions were 10.9% (95% CI: 10.7-11.0) and 21.8% (95% CI: 17.6-25.8). Overall risk of major cutaneous surgeries/reconstructions was higher in patients who were immunocompromised than those who were immunocompetent. Conclusion: Approximately one in nine patients with CSCC will undergo ≥1 major cutaneous surgeries/reconstructions within 7 years of diagnosis; the risk increases in patients who initiate systemic therapy and among those who are immunocompromised. abstract_id: PUBMED:32434696 Increasing abdominal wall thickness predicts complications in abdominally based breast reconstruction: A review of 106 consecutive patients. Background: Body mass index (BMI) has long been the proxy for patient selection in obese patients presenting for abdominally based breast reconstruction. BMI, however, fails to accurately reflect the distribution of abdominal adipose tissue. This study aims to quantify the effect of abdominal wall thickness on the incidence of post-operative complications and contrast abdominal wall thickness and BMI as predictors of post-operative morbidity. Methods: We performed a retrospective review of 106 consecutive patients who underwent abdominally based breast reconstruction. Abdominal wall thickness was quantified using preoperative CT angiograms. Primary outcomes included delayed wound healing (abdomen and/or breast), flap fat necrosis, return to OR in 30 days, infection, and flap loss. Results: Patients experiencing delayed abdominal wound healing (n=38), delayed breast wound healing (n=27), and flap fat necrosis (n=24) had significantly thicker abdominal wall measurements (p<0.0015). Of the 24 patients with palpable fat necrosis, 11 required excision. Increasing abdominal wall thickness significantly increased the odds of delayed abdominal wound healing (p=0.0005), delayed breast wound healing (p=0.0009), flap fat necrosis (p=0.0028), and infection (p=0.0198). Compared to BMI, abdominal wall thickness proved to be a more accurate predictor of delayed breast wound healing, any delayed wound healing, flap fat necrosis, and infection. Conclusions: Our data indicate that as abdominal wall thickness increases, so does the risk of postoperative morbidity. Abdominal wall thickness outperformed BMI as a predictor of postoperative morbidity in several areas. This suggests that objective data obtained from preoperative CT scans may allow more accurate, individualized perioperative risk assessment. abstract_id: PUBMED:24005365 Comprehensive analysis of donor-site morbidity in abdominally based free flap breast reconstruction. 
Background: This study aimed to provide a comprehensive analysis of factors that might contribute to abdominal donor-site morbidity after abdominally based free flap breast reconstruction. Methods: The authors performed a retrospective analysis of all abdominally based free flap breast reconstructions performed from January of 2000 through December of 2010 at their institution. Results: Overall, 89 of 1507 patients developed an abdominal bulge/hernia (unilateral: 57 of 1044; bilateral: 32 of 463). A unilateral transverse rectus abdominis musculocutaneous (TRAM) flap was significantly more likely to develop an abdominal bulge/hernia than was a muscle-sparing TRAM flap or a deep inferior epigastric perforator (DIEP) flap (9.9 percent versus 3.7 percent versus 5.9 percent; p = 0.004). However, there was no difference in the risk of developing an abdominal bulge/hernia between a muscle-sparing TRAM and a DIEP flap (p = 0.36). Patients who underwent bilateral reconstructions were 1.35 times more likely to develop an abdominal bulge/hernia than patients who underwent unilateral reconstruction, but the difference was not significant. Harvesting more fascia, as occurs when both medial and lateral rows are used, was significantly associated with need for mesh (p < 0.0001). Overall, placement of mesh for fascia closure reduced the odds of occurrence of bulge/hernia by 70 percent compared with primary fascia closure. Conclusions: There was no significant difference in the risk of developing abdominal bulge/hernia between bilateral versus unilateral breast reconstruction. For abdominally based free flap breast reconstruction, the extent of the fascia harvested, how it is repaired, and the amount of muscle preserved might play an important role in donor-site morbidity. Clinical Question/level Of Evidence: Therapeutic, III. abstract_id: PUBMED:23945529 Optimizing the fascial closure: an analysis of 1261 abdominally based free flap reconstructions. Background: Donor-site morbidity continues to be a significant complication in patients undergoing abdominally based breast reconstruction. The purposes of our study were to critically examine abdominal donor-site morbidity and to present our algorithm for optimizing donor site closure to reduce these complications. Methods: We performed a retrospective cohort study examining all patients undergoing abdominally based free tissue transfer for breast reconstruction from 2005 to 2011 at our institution. Data were analyzed for overall donor site morbidity, as defined by hernia/bulge or reoperation for debridement and/or mesh removal and for hernia/bulge alone. Results: A total of 812 patients underwent 1261 free tissue transfers. Fifty-three patients (6.5%) experienced donor-site morbidity, including 27 hernias/bulges (3.3%). No significant difference in overall abdominal morbidity was found between unilateral and bilateral reconstructions (P = 0.39) or the use of muscle in the flap (P = 0.11 unilateral msfTRAM, P = 0.76 bilateral). Prior lower abdominal surgery was associated with higher rates of donor-site morbidity (P = 0.04); hypertension (P = 0.012) and multiple medical comorbidities (P < 0.001) were also significantly more common in these patients. Obesity was the only patient characteristic associated with higher rates of hernia/bulge (P = 0.04). Delayed abdominal wound healing was associated with hernia/bulge (P < 0.001); these patients were significantly more likely to develop this complication (odds ratio = 6.3, P < 0.001).
Conclusions: Particular attention must be paid to donor-site closure in obese patients and those with hypertension and multiple medical comorbidities. Low rates of abdominal wall morbidity result from meticulous fascial reconstruction and reinforcement and careful attention to tension-free soft tissue closure. abstract_id: PUBMED:32673425 The effect of sarcopenia on perioperative complications in abdominally based free-flap breast reconstruction. Purpose: The identification of patient-specific risk factors that predict morbidity following abdominally based microvascular breast reconstruction is difficult. Sarcopenia is a proxy for patient frailty and is an independent predictor of complications in a myriad of surgical disciplines. We predict that sarcopenic patients will be at higher risk for surgical complications following abdominally based microvascular breast reconstruction. Methods: A retrospective study of all patients who underwent delayed abdominally based autologous breast reconstruction following postmastectomy radiation therapy from 2007 to 2013 at a single institution was conducted. Univariate and multiple logistic regression models were used to assess the effect of sarcopenia on postoperative outcomes. Results: Two hundred and eight patients met the inclusion criteria, of which 30 met criteria for sarcopenia (14.1%). There were no significant differences in demographics between groups. There were no significant differences in minor (36.7% vs 44.4%; P = .43) or major (16.7% vs 25.3%; P = .36) complications between groups as well as hospital length of stay. Multivariable logistic regression demonstrated that a staged reconstruction with the use of a tissue expander was the only consistent variable that predicted major complications (OR, 2.24; 95% CI, 1.18-4.64; P = .015). Conclusions: Sarcopenia does not predispose to minor or major surgical complications in patients who undergo abdominally based microsurgical breast reconstruction. abstract_id: PUBMED:35832166 The Impact of Abdominal Liposuction on Abdominally Based Autologous Breast Reconstruction: A Systematic Review. Prior abdominal liposuction can be viewed as a relative or absolute contraindication to abdominally based autologous breast reconstruction given concerns for damaged perforators and scarring complicating intraoperative dissection. This systematic review aims to explore the outcomes of abdominally based breast reconstruction in patients with a history of abdominal liposuction. A Preferred Reporting Items for Systematic Reviews and Meta-Analyses-guided literature search was conducted using PubMed, Scopus, and Web of Science from the earliest available date through June 2020. Deep inferior epigastric perforator, muscle-sparing transverse rectus abdominis musculocutaneous (TRAM), superficial inferior epigastric artery, and pedicled TRAM flaps were included for evaluation. Complications included total or partial flap loss, fat necrosis, seroma, delayed wound healing, and donor site complications. After inclusion criteria were applied, 336 nonduplicate articles were screened, yielding 11 for final review, representing 55 flaps in 43 patients. There was no instance of total flap loss, eight (14.5%) flaps developed partial loss or fat necrosis, three (5.4%) flaps had delayed wound healing, and two (4.6%) patients had donor site complications. Most authors (8/11) utilized some type of preoperative imaging.
Doppler ultrasonography was the most used modality, and these patients had the lowest rate of partial flap loss or flap fat necrosis (8%), followed by those without any preoperative imaging (10%). In conclusion, this review supports that patients undergoing abdominally based autologous breast reconstruction with a history of abdominal liposuction are not at an increased risk of flap or donor site complications. Although preoperative imaging was common, it did not reliably decrease complications. Further prospective studies are needed to address the role of imaging in improving outcomes. abstract_id: PUBMED:38481916 Prophylactic Intra-abdominal Drains in Major Elective Surgeries: A Comprehensive Review. This comprehensive review explores the use of prophylactic intra-abdominal drains in major elective surgeries, offering a retrospective analysis of their historical evolution, current evidence, and implications for clinical practice. The definition and rationale for drain placement are elucidated, emphasizing their role in preventing postoperative complications. The review synthesizes conflicting evidence, acknowledging the efficacy of drains in specific surgical contexts while addressing concerns and criticisms regarding associated complications. The implications for clinical practice underscore the importance of a nuanced and individualized approach, incorporating shared decision-making between healthcare providers and patients. Looking ahead, areas for future research are identified, including the refinement of patient selection criteria, determination of optimal timing and duration of drain use, and exploration of innovative alternatives. This review contributes to the ongoing discourse as surgical practices evolve, providing insights that may shape evidence-based recommendations and refine best practices in major elective surgeries. Answer: Prior abdominal surgeries do not significantly increase the risk of major complications in abdominally based breast reconstructions. Studies have shown that patients with a history of prior abdominal surgery can safely undergo abdominally based free flap breast reconstruction, with no significant differences in major intraoperative complications, total thrombotic events, or flap losses compared to those without prior surgeries (PUBMED:24691317). However, patients with prior abdominal surgeries may experience a greater amount of delayed healing of the donor site and a higher rate of postoperative hernia formation, particularly in those who had undergone prior hernia repairs (PUBMED:24691317). Additionally, prior abdominal surgery may lead to greater muscular harvest, and these patients should be informed of an increased chance of muscular harvest and wound healing complications (PUBMED:24572867). A systematic review and meta-analysis also indicated that abdominal scars from prior surgery may slightly increase the risk of delayed donor-site wound healing, but no other complications were statistically different between groups (PUBMED:33648009). Furthermore, increasing abdominal wall thickness has been identified as a predictor of postoperative morbidity, suggesting that objective data from preoperative CT scans may allow more accurate, individualized perioperative risk assessment (PUBMED:32434696). The extent of fascia harvested, how it is repaired, and the amount of muscle preserved might play an important role in donor-site morbidity (PUBMED:24005365). 
Meticulous fascial reconstruction and reinforcement, as well as careful attention to tension-free soft tissue closure, can result in low rates of abdominal wall morbidity (PUBMED:23945529). In summary, while prior abdominal surgeries may slightly increase the risk of certain complications such as delayed wound healing and hernia formation, they do not significantly increase the risk of major complications in abdominally based breast reconstructions. Proper patient counseling and careful surgical planning can mitigate these risks (PUBMED:24691317; PUBMED:24572867; PUBMED:33648009; PUBMED:32434696; PUBMED:24005365; PUBMED:23945529).
Instruction: Multi-slice CT angiography of small cerebral aneurysms: is the direction of aneurysm important in diagnosis? Abstracts: abstract_id: PUBMED:38429470 Diagnostic value of multi-slice spiral computed tomography angiography for intracranial aneurysm. Objective: To investigate the clinical value of multi-slice spiral computed tomography angiography (MSCTA) in the diagnosis of intracranial aneurysms (ICA). Methods: The imaging data of 68 patients suspected of having ICA who were examined in the Department of Radiology of the First Affiliated Hospital of Nanjing Medical University from March 2018 to March 2021 were retrospectively analyzed. MSCTA and digital subtraction angiography (DSA) were performed on each patient, and the MSCTA imaging analysis was compared with DSA. Results: The accuracy of DSA in the diagnosis of ICA was 98.53% (67/68), while the accuracy of MSCTA in the diagnosis of ICA was 97.06% (66/68), with no significant difference in diagnostic accuracy (P > 0.05). There were no significant differences in the diameter of the aneurysm, the width of the aneurysm neck, or the location of the aneurysm in the comparison of the ICA image between DSA and MSCTA (P > 0.05). Conclusion: MSCTA offers high accuracy and has favorable clinical value in the diagnosis of ICA. It is worth promoting as a recommended examination method in clinical practice. abstract_id: PUBMED:37066985 Application effect of multi-slice spiral CT angiography combined with MRI in the diagnosis of cerebral aneurysm. To investigate the clinical value of multi-slice spiral computed tomography (CT) angiography (MSCTA) combined with MRI in the diagnosis of cerebral aneurysm. A total of 90 patients with cerebral aneurysms diagnosed by DSA were selected as the subjects of this study. Another 30 patients with cerebral infarction were selected as negative controls (NC). Before diagnosis, all patients underwent comprehensive examination using MSCTA and MRI. The results of the comparison and the clinical data of all patients were retrospectively analyzed. MSCTA and MRI examinations can clearly show the specific location, shape, size and anatomical relationship with surrounding tissues of cerebral aneurysms. MSCTA diagnosed 82 patients and missed or misdiagnosed 8 patients in the 90 patients with cerebral aneurysm. The diagnostic sensitivity and accuracy of MSCTA were 91.1% (82/90) and 89.2% (107/120), respectively. MRI examination diagnosed 87 patients and missed or misdiagnosed 3 patients in the 90 patients with cerebral aneurysm. The diagnostic sensitivity and accuracy of MRI were 96.7% (87/90) and 96.7% (116/120), respectively. The sensitivity and accuracy of MSCTA combined with MRI were 100.0% (90/90) and 99.2% (119/120), respectively. MSCTA combined with MRI can not only display the whole picture of brain tissue, but also display the size, shape and relationship with the parent vessel of the aneurysm. The combination of MSCTA and MRI has high sensitivity and accuracy in diagnosing intracranial aneurysms, which provides a promising diagnostic protocol for patients with aneurysms. abstract_id: PUBMED:15741020 Multi-slice CT angiography of small cerebral aneurysms: is the direction of aneurysm important in diagnosis? Purpose: Multi-slice CT (MSCT) has great potential in evaluation of vascular structures.
Our purpose was to investigate if there is any difference in detection of superiorly, inferiorly and horizontally directed small cerebral aneurysms (<5 mm) on MSCTA compared to digital subtraction angiography (DSA) or surgery. Materials And Methods: One hundred and three consecutive patients who underwent MSCTA and DSA or surgery were included in the study. MSCTA and DSA results were evaluated independently by two different neuroradiologists who performed aneurysm detection, quantitation, and characterization using 2D multiplanar reconstructions, 3D maximum intensity projection and volume-rendered techniques. Findings: MSCTA detected 49 small cerebral aneurysms (<5 mm) in 37 (36%) of 103 patients. The overall sensitivity, specificity, and accuracy of MSCTA for detecting small aneurysms were 0.85, 0.65, and 0.79, respectively. There was moderate agreement between MSCTA and DSA/surgery for detecting small aneurysms (kappa: 0.51). The sensitivity of detecting small aneurysms directed superiorly, inferiorly and horizontally was 0.94, 0.84, and 0.75, respectively. There was no statistically significant difference in detection between small aneurysms directed superiorly, inferiorly and horizontally on MSCTA (P > 0.05). Conclusion: The direction of small cerebral aneurysms is not important in diagnosis on multi-slice CT scanners, although the detection of small cerebral aneurysms with superior or inferior direction is slightly easier than the detection of horizontally directed aneurysms. abstract_id: PUBMED:14661595 Multi-slice CT angiography in the diagnosis and characterization of cerebral aneurysms. Purpose: To assess the diagnostic accuracy of multislice computed tomography angiography in the detection and characterization of cerebral aneurysms. Materials And Methods: Multislice CT angiography and DSA were performed in 41 patients suspected to have intracranial aneurysms. A volume rendering method was used to produce three-dimensional CT angiograms. The diameter, location, neck and branching pattern of aneurysms were evaluated with CT angiography and DSA and compared with each other. Results: Of 41 patients, DSA depicted 44 aneurysms in 33 patients. Multislice CT angiography depicted 44 aneurysms in 34 patients. One aneurysm demonstrated with DSA was not detected at multislice CT angiography. One aneurysm demonstrated with multislice CT angiography and confirmed at surgery was not demonstrated with DSA. There was no false positive aneurysm in multislice CT angiography. There were 4 aneurysms smaller than 3 mm and multislice CT angiography demonstrated all these aneurysms. In 9 aneurysms, the neck and branching pattern could only be shown with multislice CT angiography. Sensitivity and specificity of multislice CT angiography were 97.7% and 87.5%, respectively. Conclusion: Multislice CT angiography has a high sensitivity and specificity in the diagnosis of intracranial aneurysms and is superior to DSA in the delineation of the aneurysm neck and branching pattern. Multislice technology with submillimeter collimation improves the detection of small aneurysms and the delineation of the neck of the aneurysms.
Background And Objective: The aim of this study was to demonstrate the utility of volume rendered multi-slice helical three-dimensional CT angiography in patients with cerebral aneurysm when compared with single slice CT angiography and formal digital subtraction angiography. Methods: A prototype Toshiba Aquilon multi-slice CT scanner was employed with the following scan conditions: voltage 135 kV; current 300 mA; slice thickness 0.8 mm; scan speed 0.75 sec/cycle; couch speed 1 mm/sec; range 50 mm from foramen magnum; scan pitch 3; three dimensional images were reconstructed using multiple image projections and integral volume rendering algorithms on an Xlink/Xtension workstation. 80 cases of multi-slice CTA for cerebral aneurysm carried out at Fujita University from January 1999 to January 2001 were reviewed. Results: The advantages of multi-slice imaging are illustrated with representative cases of cerebral aneurysm - good demonstration of three dimensional anatomy, appreciation of perforators down to 1 mm in size, delineation of the vessels around the aneurysm complex, relationship between the aneurysm and skull base, information on calcification, thrombus and blebs in the wall and eleven routine views for perusal. Conclusion: Multi-slice CTA is a significant improvement over single slice CTA for cerebral aneurysms. It is our experience that the superior and precise images produced by multi-slice technology display anatomical information not readily available from standard DSA. Multislice 3D-CTA is relatively non-invasive and provides better and adequate detail for surgical planning. The basis of multi-slice CT angiography is described. Multi-slice CTA is changing the way cerebral aneurysms are being managed nowadays. New advances in the technology of multi-slice CTA resulting in increased image resolution are outlined. abstract_id: PUBMED:15502126 Multi-section CT angiography for detection of cerebral aneurysms. Background And Purpose: Multi-section CT has great potential for use in vascular studies. Our purpose was to determine the accuracy of multi-section CT angiography in detecting cerebral aneurysms compared with digital subtraction angiography or surgery. Methods: One hundred consecutive patients who underwent multi-section CT angiography and either digital subtraction angiography or surgery were included in the study. Multi-section CT angiography and digital subtraction angiography results were evaluated independently by different neuroradiologists who performed aneurysm detection, quantitation, and characterization by using 2D multiplanar reconstructions, 3D maximum intensity projection, and volume-rendered techniques. Results: When using intra-arterial digital subtraction angiography or surgery, 113 aneurysms (true positives and false negatives) were detected in 83 of the 100 patients. A total of 106 aneurysms (true positives) were confirmed by using digital subtraction angiography or surgery, or both. Seven aneurysms were missed when using multi-section CT angiography. Eight aneurysms were not confirmed by digital subtraction angiography and were considered to be false positive evaluations. The sensitivity for detecting aneurysms < 4 mm, 4 to 10 mm, and > 10 mm on a per-aneurysm basis was 0.84 (95% confidence interval: 0.72, 0.92), 0.97 (95% confidence interval: 0.91, 0.99), and 1.00 (95% confidence interval: 0.88, 1.00), respectively.
The sensitivity, specificity, and accuracy of multi-section CT angiography for detecting aneurysms on a per-patient basis were 0.99 (95% confidence interval: 0.96, 1.00), 0.88 (95% confidence interval: 0.69, 0.94), and 0.98 (95% confidence interval: 0.95, 1.00), respectively. Conclusion: Multi-section CT angiography has a high sensitivity in detecting aneurysms (especially aneurysms > 3 mm). However, CT angiography is currently not sensitive enough to replace digital subtraction angiography. abstract_id: PUBMED:12070447 Efficiency of CT-angiography in the diagnosis of intracranial aneurysms. Background: CT-angiography is a non-invasive alternative to cerebral angiography in the diagnosis of intracranial aneurysms. The aim of the study was the assessment of the efficiency of CT-angiography in a large patient series, mostly verified with angiography and/or surgery. Material/methods: CT-angiography was performed in 196 patients with intracranial bleeding. Seventy-three (73) patients also underwent cerebral angiography (CA), 121 were operated on, 74 of them solely on the basis of CT-angiography. A spiral CT scanning (2 mm slices, 1-1.5 pitch, 1 mm-gap reconstruction) combined with an injection of 120 mL of contrast medium (5 mL/sec) was used. Subsequently, three-dimensional MIP reconstructions were obtained, supplemented in some cases with SSD, VRT and MPR. Results: CT-angiography revealed 128 aneurysms in 106 patients, located mostly in the anterior communicating, middle cerebral and internal carotid arteries. The size of most aneurysms was 6-10 mm, but nearly one-third of them did not exceed 5 mm. In the group of patients who underwent CA, there was only 1 false negative CT-angiography result, while in 7 patients CT-angiography allowed for the diagnosis of small aneurysms which were not clearly visible on CA. CT-angiography was judged superior to CA in the assessment of aneurysm morphology by 3 independent radiologists in 17 out of 22 cases. CT-angiography findings were confirmed during surgery in all 74 patients. Conclusions: CT-angiography is highly effective in the diagnosis of intracranial aneurysms and the evaluation of their morphology. In our opinion, in most cases, CT-angiography is precise enough to plan a surgical or endovascular procedure. abstract_id: PUBMED:23954458 Can CT angiography rule out aneurysmal subarachnoid haemorrhage in CT scan-negative subarachnoid haemorrhage patients? Current management guidelines for CT scan-negative subarachnoid haemorrhage (SAH) patients recommend cerebral digital subtraction angiography (DSA). We aimed to investigate the utility of CT angiography (CTA) as a substitute for DSA in these patients. We included patients who presented with SAH confirmed by spectrophotometric xanthochromia analysis of cerebrospinal fluid (CSF) whereby the CT scan was negative. Electronic records were reviewed to collect data on non-contrast CT scan, CTA and DSA results. Patients without DSA or with other explanations for CSF xanthochromia were excluded. Sixty-three patients with CT scan-negative SAH were included. The diagnosis of SAH was confirmed by CSF analysis. All 63 patients underwent both DSA and CTA. Using DSA as the benchmark, CTA demonstrated a negative predictive value, positive predictive value, sensitivity and specificity of 98%, 82%, 90% and 96%, respectively, for the detection of intracranial aneurysms.
CTA correctly identified patients in whom there were no underlying aneurysms responsible for SAH, with one patient with suspected dissection referred for further evaluation using MRI and DSA. abstract_id: PUBMED:12917124 Intracanalicular aneurysm of the anterior inferior cerebellar artery revealed by multi-detector CT angiography. A 62-year-old woman had sudden-onset headache and posterior neck pain, and a subarachnoid hemorrhage was revealed by unenhanced CT. Both multi-detector CT angiography and digital subtraction angiography were performed and revealed a small intracanalicular aneurysm of the left anterior inferior cerebellar artery. The patient underwent successful retrosigmoid craniectomy and trapping of the aneurysm. This case shows the ability of multi-detector CT angiography to indicate bony landmarks that can alter the surgical approach. abstract_id: PUBMED:14699177 Detection of intracranial aneurysms: multi-detector row CT angiography compared with DSA. Purpose: To prospectively compare the effectiveness of multi-detector row computed tomographic (CT) angiography with that of conventional intraarterial digital subtraction angiography (DSA) used to detect intracranial aneurysms in patients with nontraumatic acute subarachnoid hemorrhage. Materials And Methods: Thirty-five consecutive adult patients with acute subarachnoid hemorrhage were recruited into the institutional review board-approved study and gave informed consent. All patients underwent both multi-detector row CT angiography and DSA no more than 12 hours apart. CT angiography was performed with a multi-detector row scanner (four detector rows) by using collimation of 1.25 mm and pitch of 3. Images were interpreted at computer workstations in a blinded fashion. Two radiologists independently reviewed the CT images, and two other radiologists independently reviewed the DSA images. The presence and location of aneurysms were rated on a five-point scale for certainty. Sensitivity and specificity were calculated independently for image interpretation performed by the two CT image readers and the second DSA image reader by using the first DSA reader's interpretation as the reference standard. Results: A total of 26 aneurysms were detected at DSA in 21 patients, and no aneurysms were detected in 14 patients. Sensitivity and specificity for CT angiography were, respectively, 90% and 93% for reader 1 and 81% and 93% for reader 2. The mean diameter of aneurysms detected on CT angiographic images was 4.4 mm, and the smallest aneurysm detected was 2.2 mm in diameter. Aneurysms that were missed at initial interpretation of CT angiographic images were identified at retrospective reading. Conclusion: Multi-detector row CT angiography has high sensitivity and specificity for detection of intracranial aneurysms, including small aneurysms, in patients with nontraumatic acute subarachnoid hemorrhage. Answer: The direction of small cerebral aneurysms does not appear to be a significant factor in their diagnosis using multi-slice CT angiography (MSCTA). A study aimed at investigating the detection of small cerebral aneurysms (<5 mm) directed superiorly, inferiorly, and horizontally on MSCTA compared to digital subtraction angiography (DSA) or surgery found that the direction of the aneurysms was not important in diagnosis.
The sensitivity of detecting small aneurysms directed superiorly, inferiorly, and horizontally was 0.94, 0.84, and 0.75, respectively, and there was no statistically significant difference in detection between small aneurysms directed in different orientations on MSCTA (P > 0.05). This suggests that MSCTA is capable of detecting small cerebral aneurysms regardless of their directional orientation (PUBMED:15741020).
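As a companion to the accuracy figures quoted in the abstracts and answer above, the short sketch below shows how sensitivity, specificity, accuracy, and predictive values fall out of a 2x2 confusion matrix. The counts are reconstructed from the MSCTA arm of PUBMED:37066985 (90 aneurysm cases, 30 negative controls, 82 true positives); splitting the 30 controls into 25 true negatives and 5 false positives is an inference from the reported accuracy of 107/120, not a figure stated in the abstract.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),                 # detected cases / all true cases
        "specificity": tn / (tn + fp),                 # correct negatives / all true negatives
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),                         # positive predictive value
        "npv": tn / (tn + fn),                         # negative predictive value
    }

# MSCTA arm of PUBMED:37066985: tp and fn are reported; tn and fp are inferred
# from the stated accuracy of 107/120 and are therefore an assumption.
for name, value in diagnostic_metrics(tp=82, fp=5, fn=8, tn=25).items():
    print(f"{name}: {value:.3f}")

Running this reproduces the published sensitivity of 91.1% and accuracy of 89.2%; the predictive values depend on the 90:30 case mix in that study and would shift with disease prevalence.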
Instruction: Does violence in the emotional relationships make STD/AIDS prevention more difficult? Abstracts: abstract_id: PUBMED:14513135 Does violence in the emotional relationships make STD/AIDS prevention more difficult? Objectives: To identify violent situations in the daily life of adolescents and young people of low-income communities; to establish a relation between the use of drugs and STD/AIDS risk behavior; and to verify if violence in the emotional relationships between adolescents and young people makes STD/AIDS prevention more difficult. Method: Epidemiological study with adolescents and young people of two neighborhoods in the city of Rio de Janeiro, based on the results obtained from a structured questionnaire that dealt with subjects' profile, information about the family, use of drugs, daily violent situations, sexual experience, among others. For the present article, only the variables that dealt with aggressiveness, use of drugs, sexual risk behavior and violence in the emotional relationships were analyzed. The association between the variable "I used a condom the last time I had sex" and the questions that indicated violent or non-violent attitudes in emotional relationships received special attention. Results: A total of 1,041 young people aged 14-22 years took part in the study; 53.6% were female. A statistically significant relation was observed between not using condoms (p<0.05) and the categorical variables revealing aggressiveness in emotional relations. Conclusion: The study showed that there is an important association between violence in the emotional relationships and the inconsistent use of condoms in the group studied. Therefore, preventive measures should be taken regarding youth behavior that involves the risk of STD/AIDS infection, associated with campaigns against violence. abstract_id: PUBMED:22909921 Sexual violence and associated factors among women in HIV discordant and concordant relationships in Uganda. HIV serodiscordance is a sexual partnership in which one partner is infected with HIV while the other is not. Managing emotional and sexual intimacy in HIV serodiscordant unions can be difficult due to concerns about HIV transmission and the challenge of initiating and maintaining safe sex. In situations where couples are jointly aware of their HIV status, women in serodiscordant unions may face increased risk of partner violence. We conducted an investigation to assess risk factors for HIV serodiscordance and determine if HIV serodiscordance is associated with incident sexual violence among a cohort of women attending HIV post-test club services at three AIDS Information Centers (AICs) in Uganda. Using a prospective study of 250 women, we elicited information about sexual violence using structured face-to-face interviews. Sexual violence and risk factors were assessed and compared among HIV positive women in HIV discordant unions, HIV negative women in discordant unions, and HIV negative women in negative concordant unions. Multivariable logistic regression was used to assess the association between participants' serostatus and sexual violence. HIV negative women in serodiscordant relationships (36.1±11.1 years, range: 19-65 years) were significantly older than either HIV positive women in serodiscordant relationships (32.2±9.0 years, range: 18-56 years) or HIV negative women in concordant relationships (32.3±11.0 years, range: 18-62 years) (p=0.033).
Early age at sexual debut was associated with a 2.4-fold increased risk of experiencing sexual violence (OR 2.4, 95% CI 1.27-4.65). Based on unadjusted analysis, HIV positive women in discordant relationships were at highest risk for sexual violence compared to HIV negative women in discordant unions, and HIV negative women in negative concordant unions. HIV negative women in discordant relationships and those in concordant negative relationships showed no increased risk for sexual violence. However, couples' HIV serostatus was not significantly related to incident sexual violence after controlling for potential confounding covariates. Nevertheless, the results were able to elucidate the sexual violence risk factor profile of participants based on couples' HIV serostatus. Couple counseling protocols at HIV voluntary counseling and testing centers in Uganda should identify those at risk for sexual violence and develop interventions to reduce its incidence. abstract_id: PUBMED:8042060 Using STD occurrence to monitor AIDS prevention. Monitoring the effects of AIDS prevention programmes is increasingly important but methodologically difficult. The use of surveillance derived measures of the occurrence of sexually transmitted diseases as indicators of high risk sexual behaviour, or of HIV incidence, has been widely recognized as a possible approach. This paper first examines the theoretical and empirical basis for this strategy, and highlights, using examples, some of the pitfalls in the interpretation of trends in sexually transmitted disease occurrence. Problems arising in the interpretation of the types of STD surveillance data currently available in countries in Western Europe are discussed. Ways in which STD surveillance systems might be developed so as to enhance their value in monitoring AIDS prevention are proposed. The paper goes on to identify areas of clinical and epidemiological research which might improve our ability to interpret such enhanced STD surveillance data. abstract_id: PUBMED:37161437 Understanding the effect of gender-based violence on uptake and utilisation of HIV prevention, treatment, and care services among transgender women: a qualitative study in the greater Kampala metropolitan area, Uganda. Background: Transwomen (also known as transgender women) are disproportionately affected by all forms of gender-based violence (GBV). The high prevalence of physical, sexual and emotional violence not only predisposes transwomen to HIV infection but also limits the uptake/access to HIV prevention, care, and treatment services. Despite the high prevalence of HIV infection and GBV among transwomen, there is limited evidence on how GBV affects the uptake and utilisation of HIV prevention, care, and treatment services. Therefore, this qualitative study explored how GBV affects uptake and utilisation of HIV prevention, treatment, and care services among transwomen in the Greater Kampala Metropolitan Area (GKMA), Uganda. Methods: This participatory qualitative study was conducted among transwomen in the GKMA. A total of 20 in-depth interviews, 6 focus group discussions, and 10 key informant interviews were conducted to explore how GBV affects the uptake and utilisation of HIV prevention, treatment, and care services among transwomen. Data were analysed using a thematic content analysis framework. Data were transcribed verbatim, and NVivo version 12 was used for coding.
Results: At the individual level, emotional violence suffered by transwomen led to fear of disclosing their HIV status and other health conditions to intimate partners and healthcare providers respectively; inability to negotiate condom use; and non-adherence to antiretroviral therapy (ART). Sexual violence compromised the ability of transwomen to negotiate condom use with intimate partners, clients, and employers. Physical and emotional violence at the community level led to fear among transwomen traveling to healthcare facilities. Emotional violence suffered by transwomen in healthcare settings led to the limited use of pre-exposure prophylaxis and HIV testing services, denial of healthcare services, and delays in receiving appropriate care. The fear of emotional violence also made it difficult for transwomen to approach healthcare providers. Fear of physical violence such as being beaten while in healthcare settings made transwomen shun healthcare facilities. Conclusion: The effects of GBV on the uptake and utilisation of HIV prevention, care, and treatment services were observed in individual, community, and healthcare settings. Across all levels, physical, emotional, and sexual violence suffered by transwomen led to the shunning of healthcare facilities, denial of healthcare services, delays in receiving appropriate care, and the low utilisation of post-exposure prophylaxis and HIV testing services. Given its effects on HIV transmission, there is a need to develop and implement strategies/interventions targeting a reduction in GBV. Interventions should include strategies to sensitize communities to accept transwomen. Healthcare settings should provide an enabling environment for transwomen to approach any healthcare provider of their choice without fear of experiencing GBV. abstract_id: PUBMED:14666225 Violent relationship in young people and STD/AIDS risk. To verify whether affective relationships involving violence are associated with increased risk of sexually transmitted diseases (STDs), including AIDS, we conducted a survey among youth 14 to 22 years of age residing in two low-income communities in Rio de Janeiro, Brazil. We used a qualitative methodology with focus groups and individual interviews. Violence is part of the routine among these youth in both their community and families. The following factors were associated with violence in interpersonal relations: lack of money, unemployment, drug and alcohol use, jealousy, and infidelity. The young people reported that condom use is not negotiated with violent partners, resulting in increased risk of STD/AIDS. The results indicate that violence is multi-factorial, and when present in interpersonal relationships it intervenes negatively in relation to protection against STD/AIDS. abstract_id: PUBMED:27398257 Adapting an Evidence-Based HIV-Prevention Intervention for Women in Domestic Violence Shelters. Objective: Despite the documented intersection of intimate partner violence and HIV, there is a paucity of evidence-based HIV prevention interventions for female survivors of intimate partner violence in the United States. This paper describes the adaptation of an effective HIV prevention intervention, Sisters Informing Sisters about Topics on AIDS (SISTA), for women in domestic violence shelters and the steps taken to improve the adapted intervention's implementation.
Method: The adaptation process was guided by the ADAPT-ITT framework and data collected from directors, direct client service providers, and residents of two domestic violence shelters located in urban areas, as well as topical experts. Results: Eleven of 12 shelter staff (92%) reported that HIV interventions had never been implemented at their shelter and 64% reported they had not provided residents with educational brochures about HIV prevention. Changes made to adapt SISTA for this population and enhance the implementation of the intervention included reducing the intervention's duration; adding education about the intersection of intimate partner violence, substance use, and HIV; and adding an HIV risk assessment and safety plan. Conclusions: Next steps will include implementing the adapted intervention and evaluating its perceived acceptability and efficacy, and assessing whether contextual factors influence the intervention's implementation. abstract_id: PUBMED:35076797 Intimate Partner Violence and Preferences for Pre-exposure Prophylaxis (PrEP) Modes of Delivery Among A Sample of Gay, Bisexual, and Other Men Who Have Sex with Men. While there is strong evidence that the experience of intimate partner violence (IPV) shapes PrEP use among heterosexual women, evidence for similar relationships among gay, bisexual and other men who have sex with men (GBMSM) is scant. In this paper we analyze baseline data from a large randomized controlled trial (RCT) of an HIV prevention intervention for GBMSM recruited from three cities (Atlanta, Detroit and New York City) to examine how the recent experience of IPV shapes their rankings of PrEP delivery options. Men were asked to rank, from 1 to 8, eight HIV prevention options: PrEP taken by daily pill, event-based pill, injection, anal suppository (before sex), suppository (after sex), gel (penile or rectal) (before sex), gel (after sex), and condoms. The analysis sample is 694 HIV-negative, sexually active GBMSM. Analysis considers an ordinal outcome measuring participants' ranked preferences for their future use of eight HIV prevention options. Men who experienced physical IPV preferred PrEP in pill form, while men who experienced partners monitoring their behaviors (monitoring IPV) preferred PrEP by injection. Men who experienced emotional IPV ranked PrEP by pill lower than other methods. Sexual and controlling IPV were not significantly associated with PrEP modality ranking. As more modes of PrEP delivery become available, providers should be encouraged to screen GBMSM seeking PrEP for IPV, and to provide men with the necessary information to facilitate an informed choice when deciding on a PrEP modality that will work for them and their relationship context. abstract_id: PUBMED:9138771 Verbal intercourse and sexual communication: impediments to STD prevention. This article explores the problems of risky sexual behavior by examining the ways people verbally negotiate sexual interaction regarding sexually transmitted diseases. Based on in-depth interviews with 124 adults (ages 21-63) who are infected with genital herpes, the article shows that knowledge about sexually transmitted diseases is not necessarily related to the action needed for their prevention. People are more afraid of being rejected by a partner than they are of contracting an STD (except, of course, AIDS).
By examining difficulties people have with sexual health discourse and showing how these difficulties are related to both the problem of communicating politeness and the problem of representing the self and the sexual other, this article demonstrates that STD prevention programs overlook a very important emotional and communicative issue: the lack of a culturally sanctioned language with which to discuss sexual health with partners. abstract_id: PUBMED:28299090 Risk Factors for Physical Domestic Violence in a High-Prevalence HIV Setting: Findings from Project Accept Baseline Data (HPTN-043). Zimbabwe faces an acute generalized HIV/AIDS epidemic combined with rapidly deteriorating economic and political conditions, under which levels of domestic violence are on the rise. We aimed to determine possible demographic and behavioral factors associated with physical domestic violence in a rural setting in order to better inform both national and local domestic violence and HIV prevention policies. Using the Project Accept baseline data set, we selected demographic, socio-economic, and behavioral variables that might be associated with physical domestic violence based on a review of the literature. Univariate and multivariate analyses were carried out, and odds ratios (OR) were computed using logistic regression. Women reporting physical domestic violence were significantly more likely to report (i) a history of childhood domestic violence (OR=2.96, P<0.001), (ii) two or more lifetime partners (OR=1.94, P<0.001), (iii) some form of sexual abuse as a child (OR=1.82, not significant), and (iv) low or medium socio-economic status as measured by type of homestead (OR=1.4, P=0.04) than women who reported no experience of physical domestic violence. Married women were less likely to experience physical domestic violence than unmarried women (OR=0.65, P=0.011). Women at greatest risk of domestic violence include those with a personal history of violence or sexual abuse, multiple lifetime partners, and low or medium socio-economic status. Risk assessments and joint interventions for both domestic violence reduction and HIV prevention should target these population groups, which are effective both on the public health and global health diplomacy levels. abstract_id: PUBMED:18007273 STD/HIV prevention practices among primary care clinicians: risk assessment, prevention counseling, and testing. Objective: To describe current practices of primary care (PC) clinicians for STD/HIV control services: risk assessment, prevention counseling, and offering tests. Study Design: We identified clinical strategies through qualitative interviews. We then surveyed by mail a random sample of Washington State family physicians, general internists, obstetrician-gynecologists, nurse practitioners, and certified nurse midwives. We identified characteristics of clinicians and their practices associated with each strategy and universal provision of each service. Results: We report on 519 clinicians (80% adjusted response rate). Clinicians provided services to selected patients they considered high risk. Universal practices were less common: risk assessment (56%), prevention counseling (60%), STD tests (30%), and HIV tests (19%). Universal services were more common among nurses, those recently trained, and those seeing more STD patients. Conclusion: Different types of PC clinicians use widely differing clinical strategies and many use selective rather than universal approaches to STD/HIV control services.
Further research is needed to develop tailored interventions to improve provision of these services. Answer: Yes, violence in emotional relationships does make STD/AIDS prevention more difficult. Studies have shown that violence in emotional relationships is associated with inconsistent condom use and increased risk of sexually transmitted diseases (STDs), including AIDS. A study conducted among adolescents and young people in low-income communities in Rio de Janeiro found a statistically significant relationship between not using condoms and aggressiveness in emotional relations, suggesting that violence in emotional relationships can interfere with the consistent use of condoms and thus make STD/AIDS prevention more challenging (PUBMED:14513135). Additionally, gender-based violence (GBV), including physical, sexual, and emotional violence, has been found to affect the uptake and utilization of HIV prevention, care, and treatment services among transgender women, indicating that violence can create barriers to accessing necessary health services (PUBMED:37161437). Furthermore, research has indicated that women in violent relationships may not negotiate condom use with their partners, which can lead to an increased risk of STD/AIDS (PUBMED:14666225). Therefore, preventive measures and interventions should address the issue of violence in emotional relationships as part of comprehensive strategies to reduce the risk of STD/AIDS infection.
Instruction: Symptom-free women at increased risk of ectopic pregnancy: should we screen? Abstracts: abstract_id: PUBMED:9166305 Screening for ectopic pregnancy in symptom-free women at increased risk. Objective: To determine the ectopic pregnancy rate among symptom-free women at increased risk undergoing a screening program involving transvaginal sonography and serum hCG testing. Methods: Consecutive symptom-free women at increased risk for ectopic pregnancy were studied prospectively by transvaginal sonography and serum hCG measurement to detect ectopic pregnancy before the onset of symptoms. Results: Between September 1993 and May 1996, 143 symptom-free pregnant women with pregnancies of a gestational age of less than 7 weeks were screened. Eight had ectopic pregnancies, 129 had intrauterine pregnancies, and six had trophoblast in regression. Among the eight women with ectopic pregnancies, one was initially diagnosed as having an intrauterine pregnancy. This women returned 1 week later with abdominal pain, and an ectopic pregnancy with intra-abdominal bleeding was found. Ectopic pregnancies were present in 5.6% (95% confidence interval 2.5%, 10.7%) of the women screened. This was significantly lower than reported in a previous study. Conclusion: The ectopic pregnancy rate in the population that was offered screening was low. Thus, it is questionable whether the possible benefits (prevention of complications and reassurance of the woman) outweigh possible detriments (false-positive diagnosis, financial costs, and emotional stress that could be induced by screening. abstract_id: PUBMED:12190842 Symptom-free women at increased risk of ectopic pregnancy: should we screen? Background: Transvaginal sonography, serum human chorionic gonadotrophin (hCG) measurement, and serum progesterone measurement provide the possibility to screen symptom-free women at increased risk of ectopic pregnancy. The objective of the present study was to evaluate the cost-effectiveness of screening for ectopic pregnancy, by addressing the expected benefits and costs of screening for ectopic pregnancy. Methods: Screening programs incorporating transvaginal sonography, serum hCG measurement, and serum progesterone measurement were compared with a 'watchful waiting' strategy. Data were extracted from the literature. The strategies were compared on the expected number of prevented tubal ruptures, the expected number of false-positive diagnoses, and expected costs. Results: The cost-effectiveness of screening appeared to be strongly dependent on the prevalence of ectopic pregnancy. At a prevalence of ectopic pregnancy of 6%, a screening program with transvaginal sonography and serum hCG measurement would reduce the number of patients with ruptured ectopic pregnancy from 2.1 to 0.61 per 100 screened women. Screening was expected to cost approximately Euro 933 per prevented tubal rupture, whereas the number of expected false-positive diagnoses was 0.64 per prevented tubal rupture. Conclusion: We conclude that screening for ectopic pregnancy reduces the number of patients with tubal rupture, but only at the expense of a large false-positive rate. Although sonography in symptom-free women at risk of ectopic pregnancy might be justified for psychological reasons, the medical and economic benefits of such a policy seem to be limited. abstract_id: PUBMED:7906763 Early screening for ectopic pregnancy in high-risk symptom-free women. 
We screened 225 symptom-free pregnant women at increased risk for ectopic pregnancy with transvaginal sonography and human chorionic gonadotropin (hCG) assays. Among the 55 women (24.4%) who proved to have an ectopic pregnancy, 46 (84%) cases were diagnosed at the initial screening at a median of 37 days of gestation, and the rest at repeated scans. The false-positive rate was 1.2%. Early diagnosis prevented tubal rupture, substantial haemorrhage, and the need for emergency care, allowing elective treatment. Such early surveillance reduced the risk of complications and facilitated treatment, and should be offered to at-risk women. abstract_id: PUBMED:29034324 A Review of Epidemiologic Studies of the Health of Gulf War Women Veterans. Introduction: In the 25 years since the 1990-1991 Gulf War (GW), studies have evaluated Gulf War Illness (GWI), sometimes referred to as medically unexplained multi symptom illness, and other medical and neurological conditions in women GW veterans. Materials And Methods: In this article, we review epidemiologic studies of the health of women who served in the 1990-1991 GW based upon bibliographic searches in PubMed and CINAHL with relevant search terms through September 2015. Results: A total of 56 articles were identified in the bibliographic searches. By screening abstracts or full-text articles, a total of 21 relevant studies were identified. Results from some studies, but not all, suggest that GWI is more common in women GW veterans than in their male counterparts. Few studies of GW veterans focused on women's health. A small number of studies suggested excess rates of women's health problems, e.g., breast cysts, abnormal Papanicolaou (Pap) smears, yeast infections, and bladder infections. Several studies have identified significantly elevated rates of birth defects and adverse reproductive outcomes among GW veterans. However, findings have varied with different study designs and sample sizes, with some studies showing elevated risks of stillbirths, miscarriages, and/or birth defects and others not. In some studies, participants reported increased risks of ectopic pregnancies and spontaneous abortions. Conclusion: Further research is needed to provide a comprehensive picture of the health of women GW veterans and to examine a broad range of women's health issues including adverse reproductive outcomes. Some deployment-related health problems only become apparent decades later and other conditions may worsen or improve over time. Assessments are needed of current health status, changes in health symptoms and conditions over time, and possible differences in health outcomes associated with specific experiences and exposures during the war. Future studies would be strengthened by assessing GWI symptom patterns that may be specific to women veterans, examining diagnosed medical conditions among women veterans, and evaluating changes in women's health over time, including changes potentially associated with menopause and age. abstract_id: PUBMED:35595197 Increased versus standard gonadotrophin dosing in predicted poor responders of IVF: an open-label randomized controlled trial. Study Question: Does an increased dosing of FSH improve the live birth rate as compared to standard FSH dosing in expected poor responders who undergo IVF? Summary Answer: In this trial, women with an expected poor response allocated to increased FSH dosing did not have a statistically significant increase in cumulative live births as compared to a standard FSH dose.
What Is Known Already: Poor ovarian reserve leads to worse IVF outcomes owing to the low number and quality of oocytes. Clinicians often individualize the FSH dose using ovarian reserve tests, including antral follicle count (AFC), and basal plasma FSH or anti-Müllerian hormone level. However, the evidence that increased FSH dosing improves fertility outcomes in women with an expected poor response is lacking. Study Design, Size, Duration: We performed a parallel, open-label randomized controlled trial between March 2019 and October 2021 in an assisted reproduction centre. Participants/materials, Setting, Methods: Women <43 years of age with AFC <10 referred for their first IVF cycle were randomized for increased or standard FSH dosing. In participants allocated to increased FSH dosing, women with AFC 1-6 started with 300 IU/day, while women with AFC 7-9 started with 225 IU/day. In participants allocated to standard care, women started with 150 IU/day. The primary outcome was cumulative live birth attributable to the first IVF cycle, including fresh and subsequent frozen-thawed cycles, within 18 months of randomization. Live birth was defined as the delivery of one or more living infants ≥24 weeks' gestation. This trial was powered to detect an 11% difference in live birth attributable to the first IVF cycle. Outcomes were evaluated from an intention-to-treat perspective. Main Results And The Role Of Chance: We randomized 661 women to start FSH at increased dosing (n = 328) or standard dosing (n = 333). The primary outcome cumulative live birth occurred in 162/328 (49.4%) women in the increased group versus 141/333 (42.3%) women in the standard group [risk ratio (RR) 1.17 (95% CI, 0.99-1.38), risk difference 0.07 (95% CI, -0.005, 0.15), P = 0.070]. The live birth rate after the first embryo transfer in the increased versus standard group was 125/328 (38.1%) versus 117/333 (35.1%), respectively [RR 1.08 (95% CI, 0.83-1.33), P = 0.428]. Cumulative clinical pregnancy rates were 59.1% versus 57.1% [RR 1.04 (95% CI, 0.91-1.18), P = 0.586] with miscarriage rates of 9.8% versus 14.4% [RR 0.68 (95% CI, 0.44-1.03), P = 0.069] in the increased versus standard group, respectively. Other secondary outcomes, including biochemical pregnancy, ongoing pregnancy, multiple pregnancy and ectopic pregnancy, were not significantly different between the two groups, for both the first and cumulative embryo transfers. Limitations, Reasons For Caution: As this study is open-label, potential selective cancelling and small dose adjustments could have influenced the results. Wider Implications Of The Findings: In women with predicted poor response, we did not find evidence that increased FSH dosing improves live birth rates. A standard dose of 150 IU/day is recommended at the start of IVF in these women to reduce potential adverse effects and costs. Study Funding/competing Interest(s): This study was funded by the General Projects of Social Development in Shaanxi Province (No. 2022SF-565). B.W.M. is supported by NHMRC (GNT1176437). B.W.M. reports personal fees from ObsEva, and funding from Merck and Ferring outside the submitted work. Trial Registration Number: Registered at Chinese clinical trial registry (www.chictr.org.cn). Registration number ChiCTR1900021944. Trial Registration Date: 17 March 2019. Date Of First Patient’s Enrolment: 20 March 2019. abstract_id: PUBMED:37043745 Smoking in women.
Smoking carries numerous increased health risks for women, including risks of cancer, cardiovascular disease, chronic obstructive pulmonary disease, gastric and duodenal ulcers, reduced fertility, ovulatory dysfunction, ectopic pregnancy, spontaneous abortion, sudden infant death, and earlier menopause. Such health risks, smoking cessation therapy, and unique obstacles to smoking cessation in women are the focus of this article. abstract_id: PUBMED:16740648 Presence of cell-free fetal DNA in plasma of women with ectopic pregnancies. Background: The quantity of cell-free fetal DNA in the plasma of pregnant women changes during pregnancy and seems to be different in normal and pathologic pregnancies. We investigated the possible diagnostic applications of the detection and measurement of cell-free fetal DNA by comparing quantities found in women with ectopic (EP) or intrauterine (IUP) pregnancies. Methods: We collected blood samples from 58 women who had positive pregnancy tests and specific complaints and sonographic findings suggestive of EP and from 45 women with confirmed IUP. We performed quantitative real-time PCR analysis of the sex-determining region Y (SRY) gene to detect and measure the amount of cell-free fetal DNA. The diagnosis of EP was confirmed by histologic examination. Results: SRY was detected in 15 EP and 14 IUP cases. The mean (SD) amount of cell-free fetal DNA was significantly higher (P<0.005) in women with EP [565 (136) genome-equivalents (GE)/mL] than in women with IUP [72 (19) GE/mL] at the same gestational age. Conclusions: Our results confirm that cell-free fetal DNA is present in plasma of women with EP. The finding of higher amounts of cell-free fetal DNA in EP cases than in IUP cases suggests that this method might be useful for early diagnosis of EP. abstract_id: PUBMED:20331888 Female reproductive tract infections: understandings and care seeking behaviour among women of reproductive age in Lagos, Nigeria. Background: Reproductive tract infections (RTI's) are endemic in developing countries and entail a heavy toll on women. If untreated, RTI's can lead to adverse health outcomes such as infertility, ectopic pregnancy and increased vulnerability to transmission of the human immunodeficiency virus. They are also associated with adverse pregnancy outcomes. While RTI's and their sequelae abound in Nigeria, there is a paucity of publications on the subject in the country. This study assessed the understandings and care seeking behavior with regards to RTI's among women of reproductive age in Lagos, Nigeria, with the aim of improving awareness on the subject. Methods: A descriptive cross-sectional survey of women attending the gynaecological outpatient and family planning clinics of the Lagos State University Teaching Hospital was carried out between 1st June 2008 and 31st August 2008 using a pre-tested questionnaire. Data were analysed using the Epi-Info 3.5 statistical software of the Centers for Disease Control and Prevention, Atlanta, U.S.A. Results: Most of the respondents (77.2%) had heard of RTI's. Toilet was the most perceived mode of contracting RTI's (44.6%), followed by sexual intercourse and poor hygiene. Vaginal discharge was the commonest symptom of RTI's named, while inability to get pregnant was the commonest named complication. The majority of the respondents demonstrated poor overall knowledge of the symptoms and complications of RTI's. 37.4% of the respondents had experienced symptoms of RTI's in the preceding six months.
Vaginal discharge was the commonest symptom reported (21.8%), and the majority of those who reported symptoms sought medical treatment. Government health centres were the most visited health facilities for treatment. Conclusion: Even though most of the respondents had heard of RTI's and sought treatment when symptomatic, they demonstrated poor overall understanding of the subject. There is a need to educate women on preventive strategies, as RTI's are often asymptomatic. abstract_id: PUBMED:21421213 Conscientious commitment to women's health. Conscientious commitment, the reverse of conscientious objection, inspires healthcare providers to overcome barriers to delivery of reproductive services to protect and advance women's health. History shows social reformers experiencing religious condemnation and imprisonment for promoting means of birth control, until access became popularly accepted. Voluntary sterilization generally followed this pattern to acceptance, but overcoming resistance to voluntary abortion calls for courage and remains challenging. The challenge is aggravated by religious doctrines that view treatment of ectopic pregnancy, spontaneous abortion, and emergency contraception not by reference to women's healthcare needs, but through the lens of abortion. However, modern legal systems increasingly reject this myopic approach. Providers' conscientious commitment is to deliver treatments directed to women's healthcare needs, giving priority to patient care over adherence to conservative religious doctrines or religious self-interest. The development of in vitro fertilization to address childlessness further illustrates the inspiration of conscientious commitment over conservative objections. abstract_id: PUBMED:38269079 Family building and pregnancy experiences of cisgender sexual minority women. Background: Although 10% to 20% of cisgender women aged 18 to 40 years have a sexual minority identity (e.g., bisexual, lesbian, and queer), there is limited research on the family building and pregnancy experiences of sexual minority cisgender women. Improving our understanding of the family building and pregnancy experiences of cisgender sexual minority women is critical for improving the perinatal health of this population. Objective: This study aimed to compare the mode of family building, past pregnancy experiences, and future pregnancy intentions among cisgender sexual minority women by sexual orientation. Study Design: This is an observational study which was conducted using cross-sectional data collected in 2019 from a national sample of 1369 cisgender sexual minority women aged 18 to 45 years. Results: Most participants (n=794, 58%) endorsed multiple sexual orientations, most commonly queer (n=641, 47%), lesbian (n=640, 47%), and/or bisexual (n=583, 43%). There were 243 (18%) cisgender sexual minority women who were parents. Pregnancy was used by 74% (181/243) of women to build their families. Among participants who used pregnancy, 60% (108/181) became pregnant through sexual activity with another parent of the child, whereas 27% (64/243) of women used donor sperm. An additional 10% (n=24) became parents through second-parent adoption, 10% (n=25) through adoption, and 14% (n=35) through step-parenting. Bisexual women more often used sexual activity to become parents (61/100, 61%) compared with queer (40/89, 45%) and lesbian women (40/130, 31%).
In contrast, lesbian (50/130, 39%) and queer (25/89, 27%) women more often used donor sperm to become parents compared with bisexual women (11/100, 11%). Among the 266 (19%) cisgender sexual minority women who had ever been pregnant, there were 545 pregnancies (mean, 2.05 pregnancies per woman). Among those pregnancies, 59% (n=327) resulted in live birth, 23% (n=126) resulted in miscarriage, 15% (n=83) resulted in abortion, and 2% (n=9) resulted in ectopic pregnancy. A quarter of women had future pregnancy intentions, with no differences by sexual orientation. Overall, few participants (16%) reported that all of their healthcare providers were aware of their sexual orientation. Conclusion: Cisgender sexual minority women primarily built their families through pregnancy and a quarter have future pregnancy desires. In addition, there were important differences in family building methods used by sexual orientation. Providers should be aware of the pregnancy and family-building patterns, plans, and needs of cisgender sexual minority women. Answer: The question of whether symptom-free women at increased risk of ectopic pregnancy should be screened is a complex one, with various studies providing insights into the potential benefits and drawbacks of such screening programs. One study found that among symptom-free pregnant women at increased risk for ectopic pregnancy, the rate of ectopic pregnancies was 5.6%, which was significantly lower than reported in a previous study (PUBMED:9166305). The authors concluded that the benefits of screening, such as prevention of complications and reassurance for the woman, might not outweigh the detriments, which include false-positive diagnoses, financial costs, and potential emotional stress induced by screening. Another study evaluated the cost-effectiveness of screening for ectopic pregnancy using transvaginal sonography, serum hCG measurement, and serum progesterone measurement. The study concluded that while screening reduces the number of patients with tubal rupture, it does so at the expense of a high false-positive rate. The medical and economic benefits of such a policy were considered limited, although sonography in symptom-free women at risk of ectopic pregnancy might be justified for psychological reasons (PUBMED:12190842). A different study that screened high-risk symptom-free women early in pregnancy found that early diagnosis through transvaginal sonography and hCG assays prevented tubal rupture, substantial hemorrhage, and the need for emergency care, allowing elective treatment. This study suggested that early surveillance reduced the risk of complications and facilitated treatment, and should be offered to at-risk women (PUBMED:7906763). In summary, the decision to screen symptom-free women at increased risk of ectopic pregnancy should be weighed against the potential for false-positive results, the costs involved, and the emotional impact on the women being screened. While early screening can prevent serious complications and facilitate elective treatment, the overall benefits may be limited, and screening programs should be carefully considered on a case-by-case basis.
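As a quick numerical check of the screening figures above, the 5.6% ectopic pregnancy rate and its confidence interval from PUBMED:9166305 can be reproduced from the raw counts (8 ectopic pregnancies among 143 women screened). The short Python sketch below assumes a Clopper-Pearson exact binomial interval; the abstract does not state which method the authors used, so this is an illustrative reconstruction rather than their calculation.

# Reproduce the screening prevalence and 95% CI reported in PUBMED:9166305
# (8 ectopic pregnancies among 143 symptom-free women screened).
# Assumes a Clopper-Pearson exact interval; the original method is not stated.
from scipy.stats import beta

k, n, alpha = 8, 143, 0.05
lower = beta.ppf(alpha / 2, k, n - k + 1)        # lower bound (0.0 when k == 0)
upper = beta.ppf(1 - alpha / 2, k + 1, n - k)    # upper bound (1.0 when k == n)

print(f"prevalence = {k / n:.1%}")               # 5.6%
print(f"95% CI = ({lower:.1%}, {upper:.1%})")    # approximately (2.5%, 10.7%)

Other interval methods (e.g., the Wilson score interval) give broadly similar, though not identical, bounds, so the reported interval is not strongly sensitive to the choice of method.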
Instruction: Contraception, adolescence, and ethics in Brazil: are we prepared? Abstracts: abstract_id: PUBMED:19230453 Contraception, adolescence, and ethics in Brazil: are we prepared? Purpose: To assess knowledge of pediatric residents attending adolescents, including ethical issues on contraceptive advice. Methods: A prospective study was conducted involving 75 residents in order to assess their knowledge, and the ethical issues involved, in the clinical care of adolescents. The SPSS 13 software package was used for data analysis. Results: 100% of interviewees stated that adolescents have the right to privacy during medical appointments. Yet, when asked from what age this privacy should be granted, 62% reported at any age. When questioned on whether an adolescent could be examined alone, without another person with them in the room, 50% of physicians answered yes and 50% no. 91.7% of interviewees felt that adolescents had the right to confidentiality and medical secrecy, but only 84.8% knew the contents of the Code of Medical Ethics regarding medical secrecy. A further 79.2% felt prepared to give advice on contraception to adolescents. Conclusions: The attitudes held by pediatric residents on privacy, confidentiality and medical secrecy are highly ambiguous. The doctors interviewed were ill-prepared to deal with the issues of contraception in adolescence on both practical and legal levels. abstract_id: PUBMED:2614792 Islamic medical ethics in the 20th century. While the practice of Western medicine is known today to doctors of all ethnic and religious groups, its standards are subject to the availability of resources. The medical ethics guiding each doctor is influenced by his/her religious or cultural background or affiliation, and that is where diversity exists. Much has been written about Jewish and Christian medical ethics. Islamic medical ethics has never been discussed as an independent field of ethics, although several selected topics, especially those concerning sexuality, birth control and abortions, have been more discussed than others. Islamic medical ethics in the 20th century will be characterised on the basis of Egyptian fatawa (legal opinions) issued by famous Muslim scholars and several doctors. Some of the issues discussed by Islamic medical ethics are universal: abortions, organ transplants, artificial insemination, cosmetic surgery, doctor-patient relations, etc. Other issues are typically Islamic, such as impediments to fasting in Ramadan, diseases and physical conditions that cause infringement of the state of purity, medicines containing alcohol, etc. Muslims' attitudes to both types of ethical issues often prove that pragmatism prevails and the aim is to seek a compromise between Islamic heritage and the achievements of modern medicine, as long as basic Islamic dogma is not violated. abstract_id: PUBMED:35677872 Stable Anxiety and Depression Trajectories in Late Adolescence for Oral Contraceptive Users. Background: The use of oral contraceptives (OCs) has been associated with increased incidences of anxiety and depression, to which adolescents seem to be particularly vulnerable. Rather than looking at singular outcomes, we examined whether OC use is associated with depressive and anxiety symptom trajectories from early adolescence into early adulthood. Materials And Methods: Data from 178 girls were drawn from the Research on Adolescent Development and Relationships (RADAR-Y) younger cohort study. We used assessments on 9 waves from age 13 until 24.
Developmental trajectories of ratings on the Reynolds Adolescent Depression Scale (RADS-2) and the Screen for Child Anxiety Related Emotional Disorders (SCARED) were compared between never and ever users of OCs. Results: Never users showed increases in depressive and anxiety symptoms in late adolescence, whereas OC users showed a stable level of symptoms throughout adolescence. This effect remained after adjusting for baseline differences between groups in romantic relationships, sexual debut, educational level, smoking, drinking, and drug use. Age of OC use onset did not significantly predict symptom development. Conclusions: OC use in adolescence was related to an altered developmental trajectory of internalizing symptoms, in which OC users did not show an increase in depressive and anxiety symptoms in late adolescence, whereas never users did. The question remains whether this altered symptom trajectory can be considered a protective effect of OC use on psychopathology. Additional research is needed to improve our understanding of the long-term consequences of OC use on mental health. abstract_id: PUBMED:11189965 Pregnancy in adolescence. Consequences and considerations. Many authors have pointed out that sexual activity is starting at an increasingly early age in young adolescents. In 1990, pregnancies in under 15-year-olds in the United States accounted for 3% of the total, a figure that has increased by 13% over the past decade. Moreover, pregnancy in adolescence may result in a number of complications, including pre-term birth, PIH, and sexually transmitted diseases. The aim of this study was to evaluate the incidence of adolescent pregnancy in our catchment area and to analyse the clinical evolution of these cases. The sample included 61 girls aged between 15.4 and 17.9 years, mean age 16.7. Of these, 65.6% were students. The majority opted for voluntary abortion (85.2%). Those who chose to continue the pregnancy came from small towns with less than 1000 inhabitants (77.7%). They subsequently married their partners and continued to live with their parents. From an obstetric point of view, only one case of pre-term birth was recorded, at week 26, along with two cases of IUGR. The fact that the percentage of pregnancies in adolescence has remained unchanged over the years in spite of the numerous health and contraception campaigns represents a strong stimulus to investigate the countless facets of this problem. abstract_id: PUBMED:35256036 Developing and piloting a context-specified ethics framework for health technology assessment: the South African Values and Ethics for Universal Health Coverage approach. Objectives: While ethics has been identified as a core component of health technology assessment (HTA), there are few examples of practical, systematic inclusion of ethics analysis in HTA. Some attribute the scarcity of ethics analysis in HTA to debates about appropriate methodology and the need for ethics frameworks that are relevant to local social values. The "South African Values and Ethics for Universal Health Coverage" (SAVE-UHC) project models an approach that countries can use to develop HTA ethics frameworks that are specific to their national contexts. Methods: The SAVE-UHC approach consisted of two phases. In Phase I, the research team convened and facilitated a national multistakeholder working group to develop a provisional ethics framework through a collaborative, engagement-driven process.
In Phase II, the research team refined the model framework by piloting it through three simulated HTA appraisal committee meetings. Each simulated committee reviewed two case studies of sample health interventions: opioid substitution therapy and either a novel contraceptive implant or seasonal influenza immunization for children under five. Results: The methodology was fit-for-purpose, resulting in a context-specified ethics framework and producing relevant findings to inform application of the framework for the given HTA context. Conclusions: The SAVE-UHC approach provides a model for developing, piloting, and refining an ethics framework for health priority-setting that is responsive to national social values. This approach also helps identify key facilitators and challenges for integrating ethics analysis into HTA processes. abstract_id: PUBMED:19485140 Medical ethics and ethical dilemmas. Background: Ethical problems routinely arise in hospital and outpatient practice settings, and dilemmas occur in which practitioners and patients find themselves at a crossroads where choices and decisions become ethically difficult. This paper attempts a synopsis of the basic principles of medical ethics, identifies some ethical dilemmas that doctors often encounter, discusses some strategies to address them, and emphasizes the need for enhanced ethics education for both physicians and patients, particularly in Nigeria. Method: Literature and computer databases (MEDLINE and PsycINFO) were searched for relevant information. Result: The search showed that the fundamental principles suggested by ethicists to assist doctors in evaluating the ethics of a situation while making a decision include respect for autonomy, beneficence, non-maleficence and justice. Although the above principles do not give answers as to how to handle a particular situation, they serve as a guide to doctors on what principles ought to apply to actual circumstances. The principles sometimes conflict with each other, leading to ethical dilemmas when applied to issues such as abortion, contraception, euthanasia, professional misconduct, confidentiality, truth telling, professional relationships with relatives, religion, traditional medicine and business concerns. Resolution of dilemmas demands the best of the doctor's knowledge of relevant laws and ethics, his training and experience, his religious conviction and moral principles, as well as his readiness to benefit from ethics consultation and the advice of his colleagues. Conclusion: Ethics education should begin at an impressionable age at home, continue in medical school, and continue after graduation to ensure that doctors develop good ethical practices and acquire the ability to effectively handle ethical dilemmas. Also, education of patients and sanction of unethical behaviour will reduce ethical dilemmas. abstract_id: PUBMED:37344185 Sustainability, population and reproductive ethics. Introduction: The paper explores the links between sustainability, population and reproductive ethics, because sustainability goals and population matters both imply ethical commitments. Materials And Methods: This article is based on a critical analysis of current scientific and philosophical literature on sustainability, population and reproductive ethics.
Results: The idea of sustainability, as enshrined in the United Nations Sustainable Development Goals, aims to protect the environment, strengthen human communities and foster prosperity; in other words, to create a world in which all can thrive and prosper. However, humanity is moving quickly in the opposite direction. The main causes of unsustainability are excessive human numbers and the excessive human economic activity to which they lead. Sustainability is achievable, but it requires a sustainable human population. According to the latest studies, that is somewhere around three billion humans. Reaching this goal requires targeting all four reachable roots of population growth. Supportive measures, such as voluntary family planning, education and empowerment, combat (1) unwanted fertility and (2) coerced fertility. However, (3) population momentum and (4) wanted fertility also must be addressed. Conclusion: The latter two can be approached through promotion of a reproductive ethics of small families, ideally one-child families, as a new global ethical norm. abstract_id: PUBMED:33309606 Evaluation of the ethics of pharmaceutical practice: A literature review. Introduction: Ethics is at the heart of pharmacy practice. Method: This is a literature review. The main objective is to carry out a review of studies relating to the evaluation of ethics in pharmacy practice. Documentary research on PubMed was carried out from 1990 to 2020. All studies relating to pharmaceutical ethics that may be applied to pharmacy practice in pharmacies or in health establishments, and that conducted a qualitative or quantitative evaluation of pharmaceutical ethics (e.g. surveying pharmacists or pharmacy students, using a measurement tool, quantifying or qualifying a perception or behaviour), were included. Results: The studies come mainly from Anglo-Saxon countries (29/38). Studies related to pharmacy practice mainly target pharmacists (n=27) and pharmacy students (n=16), and more often have a quantitative component (n=28). The main ethical dilemmas observed dealt with emergency oral contraception (EOC), voluntary termination of pregnancy with drugs, euthanasia, commercial practices of the pharmaceutical industry, refusal to dispense a prescription in a broader context, and plagiarism or dishonesty in academic settings. Conclusion: Few studies have been conducted on the ethics of pharmacy practice from 1990 to 2020. Pharmacy students and practicing pharmacists are regularly exposed to ethical dilemmas and respond to these dilemmas with due consideration, taking into account six principles, eight values and other factors. Exposure to these dilemmas creates inaction, stress and distress. A few possible solutions have been identified. abstract_id: PUBMED:19838589 Risk factors for pregnancy in adolescence in a teaching maternity hospital in Paraíba: a case-control study. Purpose: To identify factors associated with gestation in adolescence in a state in northeast Brazil. Methods: A case-control study in the ratio of one 10 to 19-year-old adolescent (case) to two 20 to 35-year-old women (controls), with a total of 168 cases and 337 controls. The variables analyzed were: schooling, marital status, origin, family income per capita in Brazilian currency, paid job, mother's schooling, and presence of the adolescent's father at home.
Reproductive variables such as age at first intercourse, mother's history of adolescent pregnancy, gynecological appointments before the pregnancy, and knowledge of, access to and use of contraceptive methods were also included in the analysis. Results: The following variables were associated with gestation in adolescence: schooling lower than eight years, lack of a regular mate, and maternal history of adolescent gestation. Also, the age at first intercourse was significantly lower among the adolescents, and they had a lower rate of gynecological appointments. Knowledge of hormonal methods and access to contraceptive methods were also less frequent among the adolescents. After the multiple logistic regression analysis, risk factors for pregnancy in adolescence were: low schooling (OR=2.3; 95% CI=1.3-3.8), age at first intercourse lower than 15 years (OR=3.6; 95% CI=2.2-5.7), and maternal history of pregnancy in adolescence (OR=2.6; 95% CI=1.7-3.4). A history of previous gynecological appointments (OR=0.3; 95% CI=0.2-0.4) and the use of hormonal methods (OR=0.6; 95% CI=0.4-0.9) were protective factors. Conclusions: The main factors associated with pregnancy in adolescence were: the adolescent's low schooling, maternal history of adolescent gestation, lack of previous gynecological appointments and lack of access to contraceptive methods. abstract_id: PUBMED:11654505 Nature and grace: the paradox of Catholic ethics. Roman Catholic bioethics seems to be caught in a paradox. On the one hand it is committed to the natural law tradition and the power of reason to understand the structures of creation and the moral law. On the other hand there is a greater and greater appeal to Scripture and revelation. The tradition maintains that reason is capable of understanding the rational structures of reality and that ethics is properly built on metaphysics. In this way ethics, including bioethics, is non-sectarian. However, the tradition also recognizes the effects of Original Sin on the will and intellect and the broad cultural changes that have affected our understanding of metaphysics. The appeal to Revelation is a corrective to many contemporary trends in ethics and bioethics. This article will examine the interplay of reason and revelation in the Church's teaching on sexuality (particularly contraception and in vitro fertilization), suffering, and death. Catholic bioethics is in the end prophetic and ecumenical and not gnostic and non-ecumenical. Answer: The study assessing the preparedness of pediatric residents in Brazil to address contraception and ethical issues in adolescent patients revealed that while all interviewees acknowledged adolescents' right to privacy during medical appointments, there was ambiguity regarding the age from which this privacy should be granted, with 62% stating it should be at any age. Physicians were evenly split on whether an adolescent could be examined alone, with half answering yes and half no. Although a high percentage of interviewees believed in the right to confidentiality and medical secrecy, only 84.8% were familiar with the contents of the Code of Medical Ethics regarding this matter. Furthermore, 79.2% felt prepared to give advice on contraception to adolescents, but the overall conclusion was that the doctors interviewed were not well-prepared to handle contraception issues in adolescence, both practically and legally (PUBMED:19230453).
This lack of preparedness in Brazil reflects a broader challenge in integrating ethical considerations into medical practice, which is influenced by cultural and religious backgrounds. For instance, Islamic medical ethics, which includes discussions on sexuality, birth control, and abortions, seeks a compromise between Islamic heritage and modern medicine without violating basic Islamic dogma (PUBMED:2614792). Similarly, Catholic ethics grapples with the paradox of relying on both natural law tradition and revelation, particularly in teachings on sexuality and contraception (PUBMED:11654505). The ethical dilemmas faced by healthcare providers extend beyond Brazil and are influenced by various factors, including religious convictions, legal knowledge, and personal moral principles (PUBMED:19485140). Addressing these dilemmas requires comprehensive ethics education for both physicians and patients, starting from an early age and continuing throughout medical practice (PUBMED:19485140). In conclusion, the study suggests that there is a need for improved education and training for pediatric residents in Brazil to adequately prepare them to address contraception and associated ethical issues in adolescent patients. This preparation should encompass a thorough understanding of legal requirements, ethical principles, and cultural sensitivities to ensure that adolescents receive appropriate and respectful medical care (PUBMED:19230453).
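One quantitative detail in the Paraíba case-control abstract above (PUBMED:19838589) is worth unpacking: the risk factors are reported as adjusted odds ratios with 95% confidence intervals from a multiple logistic regression. The Python sketch below illustrates the arithmetic linking a fitted coefficient to such an odds ratio, using a coefficient and standard error back-calculated from the published OR of 2.3 (95% CI 1.3-3.8) for low schooling; these are illustrative reconstructions, not values reported in the paper.

# Arithmetic behind an adjusted odds ratio such as OR = 2.3 (95% CI 1.3-3.8)
# for low schooling in PUBMED:19838589. The coefficient and SE below are
# back-calculated for illustration; they are not reported in the paper.
import math

beta_hat = math.log(2.3)                                 # ~0.833
se_hat = (math.log(3.8) - math.log(1.3)) / (2 * 1.96)    # ~0.274

or_point = math.exp(beta_hat)
ci = (math.exp(beta_hat - 1.96 * se_hat), math.exp(beta_hat + 1.96 * se_hat))
print(f"OR = {or_point:.1f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# -> OR = 2.3, 95% CI = (1.35, 3.93); close to, but not exactly, the
#    published (1.3, 3.8) because the printed interval is rounded.

The round-trip does not recover the published interval exactly, which is expected: a Wald interval is symmetric on the log-odds scale, and rounding in the published figures breaks that symmetry.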
Instruction: Contact lenses vs spectacles in myopes: is there any difference in accommodative and binocular function? Abstracts: abstract_id: PUBMED:21104269 Contact lenses vs spectacles in myopes: is there any difference in accommodative and binocular function? Background: Theoretically, the accommodative and vergence demands are different between single-vision contact lenses and spectacle lenses. The aim of the present study was to determine whether these differences exist when these two correction methods are used in clinical practice. For this, different visual parameters that characterize the accommodative (accommodation amplitude, accommodative facility, and accommodative response) and binocular function (near and distance horizontal and vertical dissociated phorias, near and vertical associated phorias, near and distance negative and positive fusional vergence, vergence facility, near point of convergence, negative and positive relative accommodation, stimulus AC:A ratio and stereoacuity) were evaluated in a student population when their myopia was corrected with either spectacles or soft contact lenses (SCL). Methods: All parameters were measured on two separate occasions in 30 myopic habitual contact lens and spectacle wearers of mean age 19 ± 2.4 years. Some parameters, such as accommodation amplitude, accommodative response, and stimulus AC:A ratio, were measured using two measurement methods which are commonly used in clinical practice. Three measurements were taken for each parameter and averaged. For the comparative statistical analysis, we used the Student's t-test (p value < 0.05). Results: The following statistically significant differences were found with the use of SCL in comparison to spectacles: higher accommodative lags, higher negative relative accommodation, more esophoric near horizontal dissociated phoria, and lower negative fusional vergence in near vision. Conclusions: The results found in this study show a definite trend towards poorer accommodative and vergence function with the use of contact lenses in comparison to glasses. This downward trend, though not statistically significant in accommodative function (lower PRA values and less lens amplitude of accommodation), might suggest that temporal insufficiency in the accommodation process could be occurring while contact lenses are used, thereby possibly creating a lag in accommodation to reduce associated overconvergence. This would be manifested in more esophoric values being found in the vergence function. The higher accommodative lags found in this study with SCL could indicate that prolonged use of SCL in near tasks may provoke a continuous hyperopic retinal defocus, a risk factor for the onset and progression of myopia, as indicated in numerous studies. abstract_id: PUBMED:38324957 Binocular vision parameters and visual performance in bilateral keratoconus corrected with spectacles versus rigid gas-permeable contact lenses. Purpose: To analyze binocular vision of individuals aged 18 to 35 years diagnosed with keratoconus, utilizing spectacles and rigid gas-permeable (RGP) contact lenses. Research was led by the Universidad Autónoma de Aguascalientes, México and Fundación Universitaria del Área Andina Pereira, Colombia. Methods: A single-center, prospective, non-randomized, comparative, interventional, open-label study of the differences in binocular vision performance with spectacles and RGP contact lenses was carried out from December 2018 to December 2019.
Sampling was performed consecutively on cases with keratoconus that met the inclusion criteria until the proposed sample size was reached. Results: Rigid gas-permeable (RGP) contact lenses notably enhanced distance and near visual acuity in keratoconus patients compared to spectacles. Visual alignment analysis showed exophoria at both distances, slightly greater with RGP contact lenses. The difference was statistically significant (p<0.05), with 82.5% presenting compensated phoria with spectacles and only 42.50% with RGP contact lenses. Stereoscopic vision improved while wearing RGP contact lenses (42.59%), although accommodation and accommodative flexibility remained within normal ranges. Conclusions: Patients with keratoconus fitted with RGP contact lenses have improved binocular vision skills such as visual acuity, stereopsis, and accommodative flexibility. However, even when the vergence and motor system is decompensated with respect to normal ranges, the range between break and recovery points for both fusional reserves and the near point of convergence (NPC) improves with the use of RGP contact lenses, giving indications of an adaptive condition of the motor system from the medium to the long term. abstract_id: PUBMED:30196481 Binocular and accommodative function in the controlled randomized clinical trial MiSight® Assessment Study Spain (MASS). Purpose: To evaluate the binocular and accommodative function in children wearing dual focus (DF) MiSight® contact lenses (CLs) for myopia control compared with children wearing single-vision (SV) spectacles. Methods: This was a randomized, controlled clinical trial involving subjects aged 8 to 12, with myopia ranging from -0.75 to -4.00D and astigmatism < 1.00D, allocated to the MiSight® study CL group or to a control group wearing SV spectacles. Binocular and accommodative function was determined at baseline, 12-, and 24-month visits, assessed by the following sequence of tests: distance and near horizontal phoria, accommodative convergence/accommodation (AC/A) ratio, stereopsis, accommodative amplitude (AA), and accommodative response (AR) at 33, 25 and 20 cm. Results: Seventy-four children completed the study: 41 in the CL group and 33 in the SV group. The CL group did not show any significant differences in binocular and accommodative measurements throughout the study. In the control group, distance and near phoria, stereopsis, AC/A and AR at 20 cm did not show any significant change, but AA, AR at 33 cm and AR at 25 cm were greater at the 24-month visit compared with baseline (p < 0.05). Conclusions: DF lenses do not change the binocular and accommodative function in children wearing dual focus CLs. Trial Registration: NCT01917110. abstract_id: PUBMED:36282205 Visual Performance and Binocular/Accommodative Function of S.T.O.P. Contact Lenses Compared With MiSight. Objectives: The objective of this study was to compare the visual performance and binocular/accommodative function of two novel S.T.O.P. design (F2 and DT) contact lenses against MiSight when worn by myopic young adults. Method: This was a prospective, randomized, cross-over, single-masked study. Each lens was worn on a daily-wear basis with overnight peroxide disinfection for approximately 7 days.
Visual performance was assessed with subjective ratings (0-100): clarity of vision and lack of ghosting (far away, intermediate, and near), vision when driving, overall vision satisfaction, and with monocular high-contrast and low-contrast visual acuity (HCVA/LCVA) at 6 m, binocular HCVA (6 m, 70 cm, 50 cm, and 40 cm), binocular LCVA (6 m and 70 cm). Binocular function was assessed with heterophorias (3 m and 40 cm). Accommodative function was assessed with monocular accommodative facility (AF: 40 cm) and dynamic monocular accommodative response (AR: 6 m, 70 cm, and 40 cm). Results: F2 was rated higher than MiSight for clarity of vision (near and intermediate) and lack of ghosting (P < 0.001), while MiSight was rated higher than DT for clarity of vision (near, P < 0.001). MiSight was better than F2 and DT for monocular HCVA (6 m) and binocular HCVA (6 m and 40 cm, P ≤ 0.02), but the maximum difference was ≤2 letters. There were no differences between designs for heterophoria (P = 0.61), nor were there any differences between DT and MiSight for any accommodative measure (P > 0.1). F2 was higher for monocular AF (P = 0.007) and lower for AR (70 cm and 40 cm; P ≤ 0.007) compared with MiSight. Conclusions: The visual performance and binocular/accommodative function of S.T.O.P. designs F2 and DT were comparable with MiSight. F2 outperformed MiSight in some aspects of subjective visual performance and monocular accommodative function. abstract_id: PUBMED:34674953 Dynamics of the accommodative response and facility with dual-focus soft contact lenses for myopia control. Objective: To assess the impact of using dual-focus soft contact lenses for myopia control on the dynamics of the accommodative response and facility. Methods: 24 young adult myopes were fitted with dual-focus soft contact lenses for myopia control (MiSight®) and single-vision soft contact lenses (Proclear®). The WAM-5500 open-field autorefractor was used to measure the dynamics of the accommodative response (magnitude and variability) in binocular conditions, with accommodative data being gathered from the dominant eye, at three viewing distances (500 cm, 40 cm, and 20 cm) for 90 s. Also, the binocular accommodative facility was assessed with the WAM-5500 autorefractor. All participants performed the same experimental protocol with the dual-focus (MiSight) and single-vision (Proclear) soft contact lenses, with both experimental sessions being carried out on two different days and following a counterbalanced order. Results: This study showed greater lags of accommodation with the MiSight than the Proclear lenses at near distances (40 cm: 1.27 ± 0.77 vs. 0.68 ± 0.37 D, corrected p-value = 0.002, Cohen-d = 0.90; and 20 cm: 1.47 ± 0.84 vs. 1.01 ± 0.52 D, corrected p-value = 0.007, Cohen-d = 0.75), whereas a higher variability of accommodation was observed with the dual-focus than the single-vision lenses at 500 cm (0.53 ± 0.11 vs. 0.23 ± 0.10 D), 40 cm (0.82 ± 0.31 vs. 0.68 ± 0.37 D), and 20 cm (1.50 ± 0.56 vs. 1.15 ± 0.39 D) (corrected p-value < 0.001 in all cases, and Cohen-ds = 0.67-2.33). Also, a worse quantitative (27.75 ± 8.79 vs. 34.29 ± 10.08 cycles per minute, p = 0.029, Cohen-d = 0.48) and qualitative (23.68 ± 7.12 vs. 28.43 ± 7.97 score, p = 0.039, Cohen-d = 0.45) performance was observed with the MiSight when compared to the Proclear lenses. Conclusions: The use of dual-focus soft contact lenses for myopia control alters the dynamics of accommodative response and facility in the short-term.
Although this optical design has demonstrated its effectiveness for myopia control, eye care specialists should be aware of the acute effects of these lenses on accommodation performance. abstract_id: PUBMED:22831805 A comparative study of visual function of young myopic adults wearing contact lenses vs. spectacles. Purpose: To evaluate the impact of contact lens wear vs. spectacle wear on the visual function of young adults with mild to moderate myopia. Methods: 57 students (27 male and 30 female) with a mean age of 23 years participated in the study. Their mean best-corrected visual acuity was 10/10 binocularly; all suffered from mild to moderate myopia (-3.75 sph/SD 1.25 sph), and all were both contact lens and spectacle wearers. The VF-14 questionnaire was administered to assess the impact of contact lens wear vs. spectacle wear on general daily living among the young individuals. It was administered in a Greek translation, following the 'translation-back translation' procedure. Results: The mean VF-14 score among spectacle and contact lens wearers was 100 and 86.78 (SD 4.08), respectively. Although there was a significant difference between the two groups (p<0.05), both scores corresponded to satisfactory functional vision for daily living. Contact lens wearers faced difficulty especially while driving at night, seeing steps, and doing fine handwork (e.g., sewing, knitting or carpentry). Conclusions: The use of both spectacles and contact lenses provides satisfactory visual functioning for daily activities in young individuals suffering from mild to moderate myopia. However, spectacle wear showed superior personal satisfaction compared with contact lenses. abstract_id: PUBMED:35449200 The influence of contact lenses with different optical designs on the binocular vision and visual behavior of young adults. Theoretically, contact lenses change the accommodation and vergence demands of the eyes and directly or indirectly influence binocular vision. The purpose of this study was to investigate the influence of different optical designs of contact lenses on binocular vision and visual behavior among young adults. In this randomized, single-blinded experimental study, visual behavioral performance (VBP) and Ocular Surface Disease Index questionnaires were used for preliminary screening. Nineteen myopic subjects aged 20-26 years (21.59 ± 1.40 years) participated in the study. Baseline values for subjective refraction and binocular visual functions were evaluated. All the subjects were randomly grouped to wear different optical designs of single-vision, progressive, and bifocal contact lenses. Eye examinations were performed on the first day (CL baseline), 2nd week, and 4th week of wearing the lenses. In addition, subjects completed the VBP and visual quality questionnaire again at the end of the examination. Single-vision contact lenses (Lens 1) increased accommodative lag, negative relative accommodation, and distant and near vergence; however, vergence facilities, accommodative facilities, heterophoria, and the comfort and posture balance dimensions in the VBP questionnaire were improved. Progressive contact lenses (Lens 2) reduced the lag of accommodation and near vergence; in addition, vergence facilities and accommodative facilities were also improved. Bifocal contact lenses (Lens 3) affected negative and positive relative accommodation, but vergence facilities and accommodative facilities also progressed.
Furthermore, both progressive (Lens 2) and bifocal (Lens 3) contact lenses enhanced overall visual behavioral performance. In terms of visual quality, single-vision contact lenses (Lens 1) were the most comfortable; progressive and bifocal contact lenses reduced distance visual acuity and stability; progressive contact lenses (Lens 2) drew more complaints about halos at night; and bifocal contact lens (Lens 3) users were more likely to have double vision. Compared with single-vision contact lenses, progressive and bifocal contact lenses relaxed accommodation, reduced the lag of accommodation, and improved visual behavioral performance. Although the vergence function showed a significant change, it did not show worse trends when wearing contact lenses. Contact lenses with different optical designs have a great influence on binocular vision and visual behavioral performance. abstract_id: PUBMED:17688417 High AC/A accommodative esotropia strabismus treated with contact lenses: a single case design (N=1) study. Background And Purpose: The purpose was to determine the efficacy of two types of contact lenses (spherical disposable and aplanatic) as treatment in a patient with esotropia with a high Accommodative Convergence/Accommodation Ratio (AC/A). Due to the possibility of the appearance of accommodative insufficiency in this kind of patient (i.e., following many years of bifocal glasses use), the elimination of the plus addition lens is advisable. Nevertheless, in some patients, this change leads to the appearance of a residual angle of esodeviation in near vision. It was expected that monofocal aplanatic contact lenses could achieve, due to their optical characteristics, an accurate and orthotropic binocular alignment, without aggravating an undesirable manifestation of the accommodative insufficiency. Methods: An experimental design of a single case (N=1) was used in which the subject acted as his own control. With bifocal glasses the subject had displayed stability in his binocular and accommodative system at every distance of vision for the previous three years. We compared the efficacy of two different types of hydrophilic contact lenses to control the angle of deviation, both at distance and at near vision. Results: Neither of the two contact lenses produced the results of stability and the correct binocular alignment that had been achieved with bifocal glasses. This subject experienced a worse manifest esodeviation in distance vision with aplanatic lenses than with the disposable ones. Conclusions: These monofocal contact lenses did not create acceptable binocular alignment and stability in a subject with a high AC/A accommodative esotropia. abstract_id: PUBMED:29554011 LogMAR and Stereoacuity in Keratoconus Corrected with Spectacles and Rigid Gas-permeable Contact Lenses. Significance: This study showed an improvement in three-dimensional depth perception of subjects with bilateral and unilateral keratoconus with rigid gas-permeable (RGP) contact lens wear, relative to spectacles. This novel information will aid clinicians to consider RGP contact lenses as a management modality in keratoconic patients complaining of depth-related difficulties with their spectacles. Purpose: The aim of this study was to systematically compare changes in logMAR acuity and stereoacuity from best-corrected spherocylindrical spectacles to RGP contact lenses in bilateral and unilateral keratoconus vis-à-vis age-matched control subjects.
Methods: Monocular and binocular logMAR acuity and random-dot stereoacuity were determined in subjects with bilateral (n = 30; 18 to 24 years) and unilateral (n = 10; 18 to 24 years) keratoconus and 20 control subjects using standard psychophysical protocols. Results: Median (25th to 75th interquartile range) monocular (right eye) and binocular logMAR acuity and stereoacuity improved significantly from spectacles to RGP contact lenses in the bilateral keratoconus cohort (P < .001). Only monocular logMAR acuity of the affected eye and stereoacuity improved from spectacles to RGP contact lenses in the unilateral keratoconus cohort (P < .001). There was no significant change in the binocular logMAR acuity from spectacles to RGP contact lenses in the unilateral keratoconus cohort. The magnitude of improvement in binocular logMAR acuity and stereoacuity was also greater for the bilateral compared with the unilateral keratoconus cohort. All outcome measures of cases with RGP contact lenses remained poorer than those of control subjects (P < .001). Conclusions: Binocular resolution and stereoacuity improve from spectacles to RGP contact lenses in bilateral keratoconus, whereas only stereoacuity improves from spectacles to RGP contact lenses in unilateral keratoconus. The magnitude of improvement in visual performance is greater for the bilateral compared with the unilateral keratoconus cohort. abstract_id: PUBMED:30601291 Vision Performance and Accommodative/Binocular Function in Children Wearing Prototype Extended Depth-of-Focus Contact Lenses. Objective: To assess two prototype contact lenses (CLs) that extend depth of focus through deliberate manipulation of multiple spherical aberration terms (extended depth-of-focus [EDOF]) for visual performance, accommodative and binocular function, and objective static near refraction against a single-vision (SV) CL. Method: This was a prospective, randomized, cross-over, single-masked (participant) clinical trial in which 16 myopic children wore 2 prototype CLs (EDOFL/EDOFH) designed for presbyopes and a SV CL, each for one week. Measurements comprised monocular and binocular high-contrast visual acuity (HCVA: 6 m, 40 cm), binocular low-contrast visual acuity (LCVA: 6 m), contrast sensitivity (CS: 6 m), phorias (3 m, 33 cm), monocular accommodative facility (33 cm), and objective static refraction (spherical equivalent M) at zero, -3, and -5 D vergences. Measurements were taken 10 min after lens insertion. Subjective response was assessed using take-home questionnaires comprising vision clarity (distance/intermediate/near), vision quality (haloes at night/ghosting), vision stability when moving (playing sport/using stairs), and comfort. Results: Single vision was significantly better than both EDOF CLs for monocular HCVA, LCVA, and CS (6 m); vision clarity (distance), ghosting (P≤0.040), and EDOFL for binocular HCVA (6 m, P=0.047). M was significantly closer to the ideal objective static refraction at -3 and -5 D vergences (P≤0.004) with both EDOF compared with SV CLs. There were no differences between CLs for any other variable (P≥0.169). Conclusion: Extended depth-of-focus CLs caused minimal disruption to the accommodative and binocular system compared with SV CLs when worn by myopic children. Future EDOF designs for children should reduce the difference between SV for distance vision and vision quality while maintaining the same performance for intermediate and near.
Answer: Yes, there are differences in accommodative and binocular function when comparing contact lenses to spectacles in myopes. Studies have shown various effects depending on the type of contact lenses used and the visual condition being corrected. For instance, one study found that when myopia was corrected with soft contact lenses (SCL), there were higher accommodative lags, higher negative relative accommodation, more esophoric near horizontal dissociated phoria, and lower negative fusional vergence in near vision compared to spectacles. This suggests a trend towards poorer accommodative and vergence function with the use of contact lenses (PUBMED:21104269). In the case of keratoconus patients, rigid gas-permeable (RGP) contact lenses notably enhanced distance and near visual acuity compared to spectacles. However, these patients showed exophoria at both distances, to a slightly greater degree with RGP contact lenses. Despite some decompensation in the vergence and motor system, the use of RGP contact lenses improved stereoscopic vision, accommodative flexibility, and the range between break and recovery points for both fusional reserves and the near point of convergence (NPC), indicating an adaptive condition of the motor system (PUBMED:38324957). For children wearing dual focus (DF) MiSight® contact lenses for myopia control, no significant differences in binocular and accommodative measurements were observed throughout the study compared to those wearing single-vision (SV) spectacles (PUBMED:30196481). Another study comparing novel S.T.O.P. design contact lenses against MiSight in myopic young adults found that the visual performance and binocular/accommodative function of the S.T.O.P. designs were comparable with MiSight, with one design (F2) outperforming MiSight in some aspects of subjective visual performance and monocular accommodative function (PUBMED:36282205). Additionally, the use of dual-focus soft contact lenses for myopia control was shown to alter the dynamics of accommodative response and facility in the short-term, with greater lags of accommodation and higher variability of accommodation observed compared to single-vision soft contact lenses (PUBMED:34674953). In summary, while contact lenses can provide improved visual acuity and stereopsis in certain conditions like keratoconus, they may also lead to changes in accommodative and binocular function, such as increased accommodative lag and altered vergence, compared to spectacles. The impact of these changes on visual function and comfort can vary depending on the individual and the type of contact lens used.
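Several of the dual-focus comparisons above are reported with Cohen's d effect sizes, for example the accommodative lag of 1.27 ± 0.77 D with MiSight versus 0.68 ± 0.37 D with Proclear at 40 cm (Cohen-d = 0.90 in PUBMED:34674953). The Python sketch below applies the textbook pooled-SD form of Cohen's d to those summary statistics; it yields roughly 0.98 rather than the published 0.90, a discrepancy most plausibly explained by the study's paired, within-subject design, for which d is often computed differently.

# Pooled-SD Cohen's d for the accommodative-lag comparison at 40 cm in
# PUBMED:34674953 (MiSight vs. Proclear). The paper's paired design likely
# used a different variant of d, so this simple form only approximates
# the reported value of 0.90.
import math

mean_dual, sd_dual = 1.27, 0.77        # accommodative lag (D), dual-focus lens
mean_single, sd_single = 0.68, 0.37    # accommodative lag (D), single-vision lens

pooled_sd = math.sqrt((sd_dual**2 + sd_single**2) / 2)
d = (mean_dual - mean_single) / pooled_sd
print(f"Cohen's d (pooled-SD form) = {d:.2f}")   # ~0.98 vs. the reported 0.90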
Instruction: Axonal numbers in the greater auricular nerve: is there enough to go round? Abstracts: abstract_id: PUBMED:19225442 Axonal numbers in the greater auricular nerve: is there enough to go round? Hypothesis: Does the greater auricular nerve (GAN) have an appropriate number of myelinated axons for use as an interposition graft for the facial nerve? Background: Previous studies have suggested that the GAN has a suitable cross-sectional and fascicular area for use as an interposition graft for the facial nerve. In this study, counts of myelinated axons in the GAN have been undertaken to assess, at a microscopic level, its suitability as such a graft. Methods: Six GANs were examined, and the total myelinated axonal counts and axonal density per square millimeter of fascicular area were calculated. These counts were compared with previous studies on the facial nerve. Results: Axonal density per square millimeter was comparable between greater auricular and facial nerves. However, there were significantly lower numbers of myelinated axons in the GAN compared with the published facial nerve data. Conclusion: The findings support the concept of "doubling over" the GAN to act as an appropriate interpositional facial nerve graft when there is a size mismatch between the two. abstract_id: PUBMED:21072149 Greater auricular nerve neuropraxia with beach chair positioning during shoulder surgery. Neuropraxia of the greater auricular nerve is an uncommon complication of shoulder surgery with the patient in the beach chair position. The greater auricular nerve, a superficial branch of the cervical plexus, is vulnerable to neuropraxia due to its superficial anatomical location. In this case series, we present three cases of neuropraxia associated with direct compression by a horseshoe headrest, used in routine positioning for uncomplicated shoulder surgery. We outline the risk of using devices of this nature and, given the potential complications, discourage the use of similar headrest devices that exert pressure on the posterior auricular area to maintain head position during surgery. abstract_id: PUBMED:32336591 Efficacy of greater auricular and auriculotemporal nerve blocks performed in rabbits. Objective: To evaluate the efficacy, duration and safety of greater auricular and auriculotemporal nerve blocks in rabbits. Study Design: Prospective, randomized, crossover, experimental study. Animals: A total of 11 healthy adult Dutch-belted rabbits. Methods: The rabbits underwent general anesthesia and injections of the greater auricular and auriculotemporal nerves using either bupivacaine (0.3 mL/kg, 0.5%) or the same volume of saline. After anesthesia, the efficacy and duration of nerve blocks were assessed using analgesiometry forceps on the pinna and cotton-tipped applicators within the vertical ear canal. Rabbits were monitored for abnormal carriage of the ear and auriculopalpebral nerve block of the ipsilateral eye. Body weight, food intake, fecal output and fecal pellet diameter were measured daily for 6 days after the nerve blocks were performed and compared with baseline to assess short-term effects. Results: The greater auricular nerve was successfully blocked in 12/16 (75%) ears for a mean ± standard deviation duration of 88 ± 52 minutes. In successfully blocked ears, altered ear position was noted in 5/16 (31%) cases. The auriculotemporal nerve was blocked in 1/16 ears for 120 minutes. The auriculopalpebral nerve was inadvertently blocked in 3/16 ears.
Food intake and fecal output decreased significantly during the treatment day but returned to pretreatment values after 24 hours. There was no change in body weight or fecal pellet diameter for either treatment. Conclusions And Clinical Relevance: The results suggest that the greater auricular nerve block with 0.5% bupivacaine was short-lived in the live rabbit. Auriculotemporal nerve block was only achieved in one ear; therefore, further studies are warranted to evaluate the contribution of these blocks in the pain management of rabbits undergoing auricular surgery. abstract_id: PUBMED:26229249 Greater auricular nerve masquerading as lymph node. Hansen's disease is on the verge of being eliminated from India and is often missed by clinicians due to a low index of suspicion. We present an unusual case in which greater auricular nerve thickening masqueraded as an enlarged lymph node in the neck. The patient was referred for fine needle aspiration cytology, which revealed epithelioid cell granulomas suggestive of Hansen's disease. Further clinical examination and investigations including the skin biopsy confirmed the disease, highlighting the role of the pathologist in the management of such an unusual presentation of a common disease. abstract_id: PUBMED:35464802 18F-FDG PET, contrast CT and MRI to comprehensively diagnose and assess rare perineural spread of squamous cell carcinoma to the greater auricular nerve. We present a case of a 78-year-old male with a primary parotid squamous cell carcinoma which spread via the left facial, trigeminal and greater auricular nerves. The patient presented with left facial droop and paraesthesia. Initial MRI scans demonstrated involvement of the trigeminal and facial nerves with no sign of a primary lesion. Abnormal enhancement within the left parotid substance on FDG PET-CT demonstrated the primary malignancy, which was confirmed on histology by core biopsy. There was also focal avidity along the course of the left greater auricular nerve consistent with perineural infiltration, extending from the posterior aspect of the parotid to the left cervical plexus at C2/C3. To our knowledge, this is the second case of squamous cell carcinoma perineural spread to the greater auricular nerve imaged on FDG PET-CT scanning. This case highlights the importance of multimodality imaging correlation in the workup of primary head and neck malignancies and associated perineural spread, which is essential in adjuvant radiation therapy planning to reduce local recurrence and improve prognosis and overall survival. abstract_id: PUBMED:36185881 The Not-So-Merry-Go-Round: Traumatic Inferior-Anterior Hip Dislocation in a 9-Year-Old. Merry-go-rounds are not as innocuous as they may seem. Pediatric hip anterior-inferior dislocations are very rare and can be associated with low-energy trauma. Prompt recognition of pediatric hip dislocations is vital, and this should be treated as a time-sensitive orthopedic emergency. Closed reduction within 6 hours minimizes the risk of avascular necrosis (AVN). We present a case of a 9-year-old boy with an inferior-anterior hip dislocation following low-energy trauma while playing on a merry-go-round. The patient was emergently brought to the theatre for closed reduction under general anesthesia within 6 hours. At his 12-month follow-up, he has a full range of motion without any pain. abstract_id: PUBMED:23766239 Preservation of greater auricular nerve during parotidectomy: sensation, quality of life, and morbidity issues. A systematic review.
Background: Our objectives were to assess the evidence of preservation of the greater auricular nerve in parotidectomy with regard to morbidity and quality of life. Methods: This was a systematic review. Inclusion criteria were: English literature, prospective and retrospective studies. Exclusion criteria were: single case reports, "teaching" reviews. Outcome measures were: tactile sensation, pain, thermal sensitivity, and quality of life. Results: Although quality of life does not seem to be adversely affected when the greater auricular nerve is sacrificed, preservation of the posterior branch was recommended in 8 studies. When preserving the nerve, the incremental operative time increase is no more than 10 to 15 minutes after a rapid learning curve. Conclusions: There is level Ib evidence that preservation of the greater auricular nerve minimizes the postoperative sensory disturbance and should be considered whenever tumor clearance is not compromised. There is no evidence that overall quality of life is affected when the greater auricular nerve is sacrificed. abstract_id: PUBMED:32029354 Injection techniques for auricular nerve blocks in the rabbit cadaver. Objective: To describe the landmarks for localization and to determine the methodology and volume of methylene blue dye to adequately stain the auricular nerves in rabbit cadavers. Study Design: Prospective, randomized, cadaveric study. Animals: A total of 26 rabbit cadavers (Dutch-Belted and New Zealand White breeds). Methods: Part I: anatomical dissections were performed to identify the sensory auricular nerves and to establish the ideal injection approach and volume of dye required for nerve staining. Part II: a single injection technique using 0.1 mL kg-1 dye was evaluated for staining the greater auricular nerve and two techniques (perpendicular and angled needle approaches) using 0.075 mL kg-1 dye were evaluated for the auriculotemporal nerve. Dye spread was evaluated through cadaveric dissections and nerve staining graded using a 0-2 point scale. Injections were considered successful if the nerve was stained circumferentially. Cadavers were assessed for staining of the mandibular nerve owing to the close proximity to the auriculotemporal nerve. Fisher's exact test and mixed effects logistic regression model were used for statistical analysis. Results: The greater auricular nerve was stained in 24/27 (88.9%) injections. The auriculotemporal nerve was stained in 7/12 injections (58.3%) with the perpendicular needle approach; staining success increased to 80% (12/15 injections) with the angled needle approach; however, this difference was not statistically significant (p = 0.228). Mandibular nerve staining occurred on seven auriculotemporal injections with no statistically significant difference in the incidence of nerve staining between techniques. Conclusions And Clinical Relevance: Results suggest that the auricular nerves in rabbit cadavers can be successfully located and stained using anatomic landmarks and the described injection techniques. abstract_id: PUBMED:27478586 Ultrasound-Guided Greater Auricular Nerve Block as Sole Anesthetic for Ear Surgery. A greater auricular nerve (GAN) block was used as the sole anesthetic for facial surgery in an 80-year-old male patient with multiple comorbidities which would have made general anesthesia challenging. The GAN provides sensation to the ear, mastoid process, parotid gland, and angle of the mandible.
In addition to anesthesia for operating room surgery, the GAN block can be used for outpatient or emergency department procedures without the need for a separate anesthesia team. Although this nerve block has been performed using landmark-based techniques, the ultrasound-guided version offers several potential advantages. These advantages include increased reliability of the nerve block, as well as prevention of inadvertent vascular puncture or blockade of the phrenic nerve, brachial plexus, or deep cervical plexus. The increasing access to ultrasound technology for medical care providers outside the operating room makes this ultrasound-guided block an increasingly viable alternative. abstract_id: PUBMED:32902032 Impact of greater auricular nerve sacrifice during parotidectomy on quality of life. Background: Greater auricular nerve (GAN) sacrifice can alter sensation; however, the impact on quality of life (QoL) is unclear. Objective: The aim of this study was to evaluate patients' QoL following GAN sacrifice versus preservation. Methods: Patients who underwent parotidectomy from 2010 to 2019 at a single institution completed two QoL surveys. Results between GAN-sacrificed versus preserved cohorts and short (≤ 1 year) versus long-term (> 1 year) follow-up were compared. Results: Of 404 patients, 67 with known GAN status completed the surveys. GAN-sacrificed patients more frequently experienced sensation loss of the lobule and concha than GAN-preserved patients (P = .042 and P = .041, respectively). Interference with daily activities or amount of regained sensation was not different based on GAN status. Short-term follow-up patients had a higher number of symptoms and affected areas than long-term follow-up patients (P = .014 and P = .002, respectively). Conclusion: GAN sacrifice leads to sensation loss of the lobule and concha; however, it does not significantly impair QoL. Answer: The study examining the axonal numbers in the greater auricular nerve (GAN) aimed to determine if the GAN has an appropriate number of myelinated axons for use as an interposition graft for the facial nerve. The results showed that while the axonal density per square millimeter was comparable between the greater auricular and facial nerves, the GAN had significantly lower numbers of myelinated axons compared to the published data on the facial nerve. Despite this, the findings support the concept of "doubling over" the GAN to act as an appropriate interpositional facial nerve graft when there is a size mismatch between the two nerves (PUBMED:19225442). This suggests that while the GAN may not have a one-to-one match in axonal numbers compared to the facial nerve, it can still be used effectively as a graft by doubling it over to compensate for the lower number of axons.
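The "doubling over" reasoning in this answer can be made concrete with a short calculation. The sketch below is illustrative only: the axon counts are hypothetical placeholders, not figures from the cited study, which reports only that GAN counts fall short of facial nerve counts.

```python
import math

def strands_needed(target_axons: int, axons_per_strand: int) -> int:
    """Smallest number of parallel graft strands whose combined
    myelinated-axon count meets or exceeds the target."""
    return math.ceil(target_axons / axons_per_strand)

# Hypothetical counts for illustration only; these exact numbers are
# placeholders, not values from the paper.
facial_nerve_axons = 7000  # assumed facial nerve myelinated axon count
gan_axons = 4000           # assumed GAN myelinated axon count

print(strands_needed(facial_nerve_axons, gan_axons))  # -> 2: "doubling over"
```

With any plausible pair of counts in which the GAN holds more than half the facial nerve's axons, the calculation returns 2, which is the doubled-over cable graft the answer describes.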
Instruction: Short-term outcomes of acute knee injuries: does the provider make a difference? Abstracts: abstract_id: PUBMED:37593689 Cohort study of short-term outcomes after ACL-reconstruction using autograft with internal brace reinforcement versus isolated autograft demonstrating no significant difference. Background: The gold standard treatment for Anterior Cruciate Ligament injury is reconstruction (ACL-R). Graft failure is the concern, and ensuring a durable initial graft with rapid integration is crucial. Graft augmentation with implantable devices (internal brace reinforcement) is a technique purported to reduce the risk of rupture and hasten recovery. Few studies have examined these techniques, in particular when compared to non-augmented grafts. This study assesses the short-term outcome of ACL-R using augmented and non-augmented hamstring tendon autografts. Methods: This was a retrospective cohort study comparing augmented and non-augmented ACL-R. All procedures were performed in a single centre using the same technique. The Knee injury and Osteoarthritis Outcome Score [KOOS] was used to assess patient-reported outcomes. Results: There were 70 patients in the augmented and 111 patients in the control group. Mean graft diameter in the augmented group was 8.82 mm versus 8.44 mm in the non-augmented. A six-strand graft was achievable in 73.5% of the augmented group compared to 33% in the non-augmented group. Two graft failures were reported in the non-augmented group and none in the augmented group. Patient satisfaction rates were higher in the augmented group. There was a statistically insignificant improvement in the postoperative KOOS in the augmented group compared to the non-augmented group (p = 0.6). Irrespective of augmentation status, no correlation was found between the functional score and age or femoral tunnel width. Conclusion: No statistically significant difference was demonstrated in the short-term functional outcome of ACL reconstruction using an augmented or non-augmented hamstring graft. Augmented ACL-R may achieve superior graft diameters, failure rates and patient-reported outcomes when compared to non-augmented ACL-R. Prospective trials are needed to examine this further. abstract_id: PUBMED:38387768 Short-Term (6 Months or Less) Pain in Cemented Versus Cementless Total Knee Arthroplasty: A Systematic Review and Meta-Analysis. Background: Cementless total knee arthroplasty (TKA) has increased in popularity to potentially improve survivorship. Radiostereometric studies demonstrate increased component migration during the first 3 to 6 months in cementless constructs, generating concern for increased post-operative pain during early osseointegration. The purpose of this study was to evaluate short-term (≤ 6 months) pain and function in cemented versus cementless TKA. We hypothesize that cementless TKA patients report increased pain during the short-term (≤ 6 months) post-operative period. Methods: The MEDLINE, EMBASE, CINAHL, and Cochrane Libraries were searched for studies evaluating short-term (≤ 6 months) outcomes of cemented versus cementless primary TKA. Studies involving hybrid fixation were excluded. A meta-analysis was performed using standardized mean difference (SMD) for primary outcomes (early post-operative pain) and weighted mean difference (WMD) for secondary outcomes (early post-operative function). Results: There were eleven studies included.
There was no significant difference in acute post-operative pain between cemented and cementless TKA within six months of index TKA (SMD 0.08 in favor of cemented TKA; P = 0.10). Early post-operative Forgotten Joint Scores (WMD 0.81; P = 0.81) and Knee Injury and Osteoarthritis Outcome Scores for Joint Replacement (WMD 0.80 in favor of cemented TKA; P = 0.14) were also similar between groups. Conclusion: There is no difference in short-term (≤ 6 months) pain or early function between patients receiving cemented versus cementless TKA. This suggests that surgeons may utilize cementless TKA without fear of increased pain due to micromotion within 6 months of index arthroplasty. However, additional studies with uniform assessment methods are needed to further inform differences in short-term pain and early functional outcomes between cemented and cementless TKA. abstract_id: PUBMED:37959381 Static Baropodometry for Assessing Short-Term Functional Outcome after Unilateral Total Knee Arthroplasty: Exploring Correlation between Static Plantar Pressure Measurements and Self-Reported Outcomes. This study aimed to investigate the association between objective baropodometric and radiological measurements and patient self-reported functional outcomes, assessed through the Knee Injury and Osteoarthritis Outcome Score (KOOS). Additionally, it sought to evaluate the effectiveness of static baropodometry in predicting short-term KOOS results following unilateral total knee arthroplasty (TKA). We conducted a prospective single-center study involving 32 patients who underwent unilateral TKA for knee osteoarthritis (KOA). Patients were evaluated both preoperatively and six months postoperatively, utilizing objective measurements derived from static baropodometric analysis in a normal, relaxed, bipedal standing position using a multi-platform Plantar Pressure Analysis System (PPAS), radiographic measurements of the femorotibial angle (FTA), and subjective assessments through the nationally validated version of the KOOS. The study found an insignificant average correction of -0.69° ± 4.12° in the preoperative FTA at the sixth month after TKA. Moreover, there were no significant differences in the KOOS based on different types of knee alignment (KA) both pre- and postoperatively (p > 0.05). No significant correlations were observed between the KOOS and total average affected and unaffected plantar pressures (TAAPP and TAUPP) pre- and postoperatively, as well as KA pre- and postoperatively. However, significant changes were observed in TAAPP and TAUPP measurements after unilateral TKA. TAAPP demonstrated a significant increase postoperatively (mean change (SD) = 18.60 (47.71); p = 0.035). In conclusion, this study found no significant correlation between KA, static baropodometric measurements, including pre- and postoperative differences, and KOOS outcomes. Therefore, static plantar pressure measurements alone might not serve as a reliable predictor of short-term clinical outcomes after unilateral TKA, as reported by patients. abstract_id: PUBMED:12209480 Short-term outcomes of acute knee injuries: does the provider make a difference? Objective: To compare the short-term outcomes of acute knee injuries treated by specialists and generalists. Methods: Using patient logs, 168 adults with acute knee injuries were identified; 131 (78%) completed a questionnaire 3 months after initial presentation. Results: The mean age of the 77 male and 54 female responders was 34.6 years (range 18-73 years).
The injuries were classified as mild (n = 35), moderate (n = 75), or severe (n = 21). Most responders were satisfied with their care and outcome, but 22% noted some functional limitations. The 59 patients seeing an orthopedist were more likely to have had a severe injury, more physician visits, activity limitations, lost time from work or recreation, and more pain when compared with the 72 patients who never saw an orthopedist. Excluding surgical patients, however, satisfaction was not significantly different by provider. After multivariate modeling (adjusting for age, sex, injury severity, and diagnosis), there was no significant association between having seen an orthopedist and either treatment success or satisfaction. Conclusion: With the exception of time lost for recuperation, in our community there is little difference in short-term outcome for patients with acute knee injury not undergoing surgery, regardless of the specialty of the treating physician. abstract_id: PUBMED:38087214 Short-term clinical results of bicruciate-retaining total knee arthroplasty using personalized alignment. Background: The bicruciate-retaining (BCR) prosthesis has been introduced to recreate normal knee movement by preserving both the anterior and posterior cruciate ligaments. However, the use of BCR total knee arthroplasty (TKA) is still debatable because of several disappointing reports. We have been performing BCR TKAs with personalized alignment (PA). This study aimed to reveal the limb alignment and soft tissue balance of PA-BCR TKAs and compare the clinical outcomes of PA-BCR TKAs with those of unicompartmental knee arthroplasty (UKA). Methods: Fifty BCR TKAs and 58 UKAs were included in this study. The joint component gaps of BCR TKA were evaluated intraoperatively, and the postoperative hip-knee-ankle (HKA) angle, medial proximal tibial angle (MPTA), and lateral distal femoral angle (LDFA) were measured using full-length standing radiography. The short-term clinical outcomes of BCR TKAs were compared with those of UKA using the 2011 Knee Society Score (KSS) and the knee injury and osteoarthritis outcome score (KOOS) at an average of 2 years postoperatively (1-4 years). Results: The coronal alignment values of PA-BCR TKA were as follows: HKA angle, 177.9° ± 2.3°; MPTA, 85.4° ± 1.9°; and LDFA, 87.5° ± 1.9°. The joint component gaps at flexion angles of 10°, 30°, 60°, and 90° were 11.1 ± 1.2, 10.9 ± 1.4, 10.7 ± 1.3, and 11.2 ± 1.4 mm for the medial compartment and 12.9 ± 1.5, 12.6 ± 1.8, 12.5 ± 1.8 and 12.5 ± 1.7 mm for the lateral compartment, respectively. The patient expectation score and maximum extension angle of PA-BCR TKA were significantly better than those of UKAs. Conclusions: The short-term clinical outcomes of PA-BCR TKA were comparable to, or slightly superior to, those of UKAs. abstract_id: PUBMED:32742419 Patient-reported and Functional Outcomes of Bicondylar Tibial Plateau Fractures Managed by Internal Screw Fixation in Combination with An Ilizarov Fixator: A Case Series of 22 Patients with Long-term Follow-up. Introduction: The objective of this case series was to report the long-term patient-reported and functional outcomes of complex bicondylar tibial plateau fractures in patients treated with internal fixation in combination with an Ilizarov fixator. Materials And Methods: A retrospective series of cases. Patient-reported, radiological and functional outcomes were obtained with a mean of 9.4 years' follow-up. Results: Twenty-two patients completed the follow-up.
At follow-up, the mean age was 52.2 years, ranging from 26 to 69 years. The gender distribution was 14 males and 8 females. The mean knee injury and osteoarthritis outcome score (KOOS) was pain 72.5, symptoms 62.7, ADL 75.9, sport 35.4, and knee-related quality of life (QOL) 56.4. Compared to a reference population, the present study reported worse outcome for the subscales: symptoms, sport, and QOL. The maximal isometric muscle strength for knee extension was 354 N for the noninjured leg and 325 N for the injured leg (p = 0.27). Assessment of gait functions showed a gait speed of 122.7 cm/second and a cadence of 112.7 steps/minute. Gait speed and cadence showed no significant difference when compared to a reference population. Radiological examination of knee osteoarthritis (OA) showed 6 patients presented with no OA, 13 patients with mild OA, and 3 patients with severe OA. Conclusion: Complex bicondylar tibial plateau fractures are associated with long-term decreased knee structure-specific patient-reported outcome (KOOS). In contrast, most patients presented with satisfactory long-term radiological and functional outcomes. More research is needed to understand the complex association between patient-reported outcomes and radiological and functional outcomes. Level Of Evidence: IV. Series of cases. How To Cite This Article: Larsen P, Traerup J, Mikuzis M, et al. Patient-reported and Functional Outcomes of Bicondylar Tibial Plateau Fractures Managed by Internal Screw Fixation in Combination with An Ilizarov Fixator: A Case Series of 22 Patients with Long-term Follow-up. Strategies Trauma Limb Reconstr 2019;14(2):85-91. abstract_id: PUBMED:35080125 Short-term pain trajectories in patients with knee osteoarthritis. Aim: It is unknown if pain in knee osteoarthritis (KOA) follows distinct patterns over the short term. Therefore, the aim of this study was to identify whether persons with a previous history of KOA pain fluctuations have distinct trajectories of pain over 90 days and to examine associations between baseline characteristics and pain trajectories. Method: People with a previous history of KOA were selected from a web-based longitudinal study. Baseline variables were sex, age, being obese/overweight, years of KOA, knee injury, knee buckling, satisfactory Lubben Social Support Score, pain and stress scales, Intermittent Constant Osteoarthritis Pain Score (ICOAP), medication use, and physical activity. Participants completed a Knee Injury and Osteoarthritis Outcomes Score (KOOS) pain subscale (KOOS-p, rated 0 = extreme to 100 = no knee problems) at 10-day intervals for 90 days. Short-term KOOS-p trajectories were identified using latent growth mixture modeling and the baseline risk factors for these pain trajectories were examined. Results: Participants (n = 313) had a mean age of 62.2 (SD ± 8.1) years and a body mass index of 29.8 (SD ± 6.6) kg/m². The three-class latent growth mixture modeling quadratic model with best fit indices was chosen (based on lowest sample-size-adjusted Bayesian Information Criterion, high probability of belonging, interpretability). Three distinct pain trajectory clusters (over 90 days) were identified: low-moderate pain at baseline with large improvement (n = 11), minimal change in pain over 90 days (n = 248), and moderate-high pain with worsening (n = 46). Higher ICOAP (intermittent scale), perceived stress, negative affect score, and knee buckling at baseline were associated with a worse knee pain trajectory (P < 0.05).
Conclusions: Persons with KOA showed unique short-term pain trajectories over 90 days, with distinct characteristics at baseline associated with each trajectory. abstract_id: PUBMED:36527357 Short term clinical outcomes of a Prochondrix® thin laser-etched osteochondral allograft for the treatment of articular cartilage defects in the knee. Objective: The purpose of this study is to evaluate the short-term clinical outcomes of the novel Prochondrix® thin, laser-etched osteochondral allograft on isolated articular cartilage defects. Methods: Eighteen patients with isolated, symptomatic, full-thickness articular cartilage lesions were treated with marrow stimulation followed by placement of a T-LE allograft. Demographic and intra-operative data were recorded as well as pre- and post-operative International Knee Documentation Committee (IKDC), Short Form-36 (SF-36), Knee Injury and Osteoarthritis Outcome Score (KOOS), Visual Analogue Scale (VAS) and Tegner scores. Pre- and post-operative data were compared at 6, 12, 24 and 36 months postoperatively. Failures requiring reoperation were also recorded. Results: At a mean follow-up of 2.5 years (6-43 months), VAS decreased from 6.55 to 2.55 (p = .02) and subjective IKDC scores increased from 37.61 to 59.65 (p = .02). Statistically significant increases were also seen in KOOS Function-Sports and Recreational Activities (+26.04, p = .04) and KOOS QOL (+18.76, p = .007) as well as in SF-36 Physical Functioning (+25.20, p = .04), Energy/Fatigue (+16.50, p = .02), Social Functioning (+11.79, p = .04), and Bodily Pain (+25.18, p = .04). There were two failures requiring reoperation: one conversion to a patellofemoral arthroplasty (PFA), and one graft dislodgement which required removal. Conclusion: Treatment of articular cartilage lesions of the knee with ProChondrix® has demonstrated sustained positive results out to a mean follow-up of two and a half years in this prospective case series, with a low rate of failures requiring reoperation (2 patients). These results are comparable to the short-term results of other cartilage restoration procedures currently in use today. A meta-analysis of osteochondral allografting demonstrated a mean 86.7% survival rate at 5 years with significant improvements in clinical outcome scores reaching MCID values. abstract_id: PUBMED:32994108 Functional Outcomes are Comparable Between Posterior Stabilized and Cruciate-Substituting Total Knee Arthroplasty Designs at Short-Term Follow-up. Background: Posterior stabilized (PS) polyethylene inserts have been shown to have excellent long-term functional results following total knee arthroplasty (TKA). A cruciate-substituting (CS) design has been introduced to minimize bony resection and eliminate concerns regarding wear on the PS post. The purpose of this study is to compare the outcomes of patients who underwent TKA using either a PS or CS insert. Methods: We reviewed a consecutive series of 5970 patients who underwent a cruciate-sacrificing TKA and received either a PS (3,314) or CS (2,656) polyethylene liner. We compared demographics, Knee Injury and Osteoarthritis Outcome Score Jr (KOOS Jr), Short-Form 12 (SF-12) scores, and revision rates between the groups at a minimum of 2 years of follow-up. A multivariate regression was performed to identify the independent effect of design on functional outcomes. Results: Revision rates between the groups were comparably low (0.35% for PS vs 0.51% for CS, P = .466) at an overall mean follow-up of 43 months.
Patients in the PS cohort had statistically higher KOOS Jr scores at 2 years (69.8 vs 72.9, P < .001). Multivariate regression analysis found CS patients to have lower postoperative KOOS Jr scores (estimate -2.26, P = .003), and less overall improvement in KOOS Jr scores (estimate -2.42, P = .024) than PS patients, but neither was a clinically significant difference. Conclusion: Patients who undergo TKA with a CS polyethylene insert have comparable functional outcomes and revision rates to those with a PS design at short-term follow-up. Longer follow-up is needed to determine whether CS can match the outstanding track record of PS TKA. abstract_id: PUBMED:26996450 Preoperative physiotherapy and short-term functional outcomes of primary total knee arthroplasty. Introduction: Physiotherapy is an important part of rehabilitation following arthroplasty, but the impact of preoperative physiotherapy on functional outcomes is still being studied. This randomised controlled trial evaluated the effect of preoperative physiotherapy on the short-term functional outcomes of primary total knee arthroplasty (TKA). Methods: 50 patients with primary knee osteoarthritis who underwent unilateral primary TKA were randomised into two groups: the physiotherapy group (n = 24), whose patients performed physical exercises for six weeks immediately prior to surgery, and the nonphysiotherapy group (n = 26). All patients went through a similar physiotherapy regime in the postoperative rehabilitation period. Functional outcome assessment using the algofunctional Knee Injury and Osteoarthritis Outcome Score (KOOS) scale and range of motion (ROM) evaluation was performed preoperatively, and postoperatively at six weeks and three months. Results: Both groups showed a significant difference in all algofunctional KOOS subscales (p < 0.001). The mean score difference at six weeks and three months was not significant in the sports and recreational activities subscale for both groups (p > 0.05). Significant differences were observed in the time-versus-treatment analysis between groups for the symptoms (p = 0.003) and activities of daily living (p = 0.025) subscales. No significant difference in ROM was found when comparing preoperative measurements and those at three months following surgery, as well as in time-versus-treatment analysis (p = 0.928). Conclusion: Six-week preoperative physiotherapy showed no significant impact on short-term functional outcomes (KOOS subscales) and ROM of the knee following primary TKA. Answer: According to the study (PUBMED:12209480), the short-term outcomes of acute knee injuries do not significantly differ based on the specialty of the treating physician. The study compared the outcomes of acute knee injuries treated by specialists and generalists and found that, with the exception of time lost for recuperation, there was little difference in short-term outcome for patients with acute knee injury not undergoing surgery, regardless of the specialty of the treating physician. This suggests that the provider's specialty may not make a significant difference in the short-term outcomes of acute knee injuries.
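The answer above rests on a multivariate model that adjusts the provider effect for age, sex, injury severity, and diagnosis. As a rough illustration of that kind of adjusted analysis (not the study's actual data or code; the dataset below is simulated and the variable names are hypothetical), one could fit a logistic regression:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 131  # matches the number of questionnaire responders; data are simulated

df = pd.DataFrame({
    "satisfied": rng.integers(0, 2, n),        # 1 = satisfied with outcome
    "saw_orthopedist": rng.integers(0, 2, n),  # 1 = ever saw an orthopedist
    "age": rng.normal(34.6, 12.0, n),
    "sex": rng.integers(0, 2, n),              # 0 = male, 1 = female
    "severity": rng.integers(0, 3, n),         # 0 = mild, 1 = moderate, 2 = severe
})

# Adjusted model: association between provider type and satisfaction,
# controlling for age, sex, and injury severity.
model = smf.logit(
    "satisfied ~ saw_orthopedist + age + sex + C(severity)", data=df
).fit(disp=0)
print(model.summary())
```

A non-significant coefficient on saw_orthopedist in such a model is what the study reports: once case mix is accounted for, provider specialty adds little.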
Instruction: Is exercise adherence associated with clinical outcomes in patients with advanced heart failure? Abstracts: abstract_id: PUBMED:20424399 Is exercise adherence associated with clinical outcomes in patients with advanced heart failure? Background: There is limited research to support the effect of exercise adherence on clinical outcomes in patients with heart failure (HF). This secondary analysis was conducted on the intervention arm of an exercise training study in patients with HF to assess whether adherence and the dose of treatment exposure were associated with clinical outcomes, functional performance (maximum oxygen consumption [VO2], anaerobic threshold, and workload), and quality of life (QOL). Methods: Seventy-one patients (average age, 54 ± 12.5 years; male, 66%; Caucasian, 66%; married, 61%; New York Heart Association class II-III, 97.2%; and average ejection fraction, 26.4% ± 6.5%) were included in the current study. Patients with an increase ≥18% in the amount of exercise from baseline to 6 months, as measured by pedometers, were categorized as adherers (n = 38); patients who had no change or an increase in the amount of exercise of <18% were categorized as nonadherers (n = 33). Results: The 2 groups were significantly different in the composite endpoint of all-cause hospitalization, emergency room admissions, and death/urgent transplantation (hazard ratio, 0.31; 95% confidence interval, 0.159-0.635; P < 0.001). Adherers had greater improvements in functional performance and QOL compared with nonadherers (P < 0.001). Conclusion: These findings suggest that among patients with advanced HF, adherence to exercise is associated with more favorable clinical outcomes. There is also a positive dose-response relationship between the amount of exercise performed and improvement in functional performance and QOL. abstract_id: PUBMED:32861888 Long-term exercise adherence in patients with heart failure: A qualitative study. Background: Regular exercise is associated with numerous benefits; yet up to 91% of patients with heart failure (HF) do not exercise regularly. This study explored factors supporting long-term exercise adherence in patients with HF. Design: This study was a qualitative multiple-case study. Methods: Data were collected via semi-structured interviews from patients with HF at completion of an 18-month exercise intervention in a health care exercise facility and analyzed using a directed content analysis approach. Results: Participants (n = 22) identified perceived health status, scheduling/making exercise part of a routine, social support from coaches and family, knowledge about exercise, and motivation/perseverance as supports for long-term exercise adherence. Conclusions: Perceived health status may be key when promoting exercise adherence in this population. Knowledge, social support from coaches and family members, and motivation should be considered for long-term exercise adherence. abstract_id: PUBMED:26578668 Psychosocial Factors, Exercise Adherence, and Outcomes in Heart Failure Patients: Insights From Heart Failure: A Controlled Trial Investigating Outcomes of Exercise Training (HF-ACTION). Background: Psychosocial factors may influence adherence with exercise training for heart failure (HF) patients. We aimed to describe the association between social support and barriers to participation with exercise adherence and clinical outcomes.
Methods And Results: Of patients enrolled in Heart Failure: A Controlled Trial Investigating Outcomes of Exercise Training (HF-ACTION), 2279 (97.8%) completed surveys to assess social support and barriers to exercise, resulting in the perceived social support score (PSSS) and barriers to exercise score (BTES). Higher PSSS indicated higher levels of social support, whereas higher BTES indicated more barriers to exercise. Exercise time at 3 and 12 months correlated with PSSS (r = 0.09 and r = 0.13, respectively) and BTES (r = -0.11 and r = -0.12, respectively), with higher exercise time associated with higher PSSS and lower BTES (all P < 0.005). For cardiovascular death or HF hospitalization, there was a significant interaction between the randomization group and BTES (P = 0.035), which corresponded to a borderline association between increasing BTES and cardiovascular death or HF hospitalization in the exercise group (hazard ratio 1.25, 95% confidence interval 0.99-1.59), but no association in the usual care group (hazard ratio 0.83, 95% confidence interval 0.66-1.06). Conclusions: Poor social support and high barriers to exercise were associated with lower exercise time. PSSS did not impact the effect of exercise training on outcomes. However, for cardiovascular death or HF hospitalization, exercise training had a greater impact on patients with lower BTES. Given that exercise training improves outcomes in HF patients, assessment of perceived barriers may facilitate individualized approaches to implement exercise training therapy in clinical practice. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier: NCT00047437. abstract_id: PUBMED:27671166 Adherence to recommended exercise guidelines in patients with heart failure. Exercise training has been shown to be beneficial in patients with heart failure, and its effectiveness is connected to adherence to the exercise program. Nonetheless, adherence to exercise in these patients remains a concern. Heart failure patients can be considered adherent to an exercise program if they meet 80% of the recommended dose. We summarize exercise recommendations for patients with heart failure, identify exercise prescription methodologies used in studies that have reported exercise adherence, identify strategies and tools used to improve adherence, and examine whether these strategies were developed using a theoretical platform with the primary aim to change behavior and improve adherence to exercise. Factors that may also impact adherence, such as exercise setting, intensity and length of participation, gender, race, New York Heart Association functional class, and heart failure with preserved or reduced ejection fraction, were also investigated. Finally, recommendations for future studies for improving adherence to exercise in patients with heart failure are provided. abstract_id: PUBMED:29557821 Patient-Selected Strategies for Post Cardiac Rehabilitation Exercise Adherence in Heart Failure. Purpose: The aim of this study was to evaluate the use of patient-selected exercise adherence strategies following cardiac rehabilitation (CR). Design: Twenty patients with heart failure (HF) were recruited and randomly assigned to the intervention or control group at completion of CR. Methods: The intervention included the use of six adherence strategies (logs, graphs, pedometers, phone follow-up, education, and a letter from CR staff), which were provided for 6 weeks post CR and during home-based exercise.
After 6 weeks, the intervention group selected strategies to continue, and only those were provided for the last 6 weeks. At 12 weeks, patients were retested. Findings: Patients with HF demonstrated improvement in distance walked and fewer HF symptoms and adhered to exercise at levels recommended during CR. Conclusion: Inclusion of patient-selected adherence strategies supports continued exercise and helps to sustain physiological improvements. Clinical Relevance: Results from this study have implications for CR programs serving HF patients and provide insight into adherence strategies. abstract_id: PUBMED:34534664 The HEART Camp Exercise Intervention Improves Exercise Adherence, Physical Function, and Patient-Reported Outcomes in Adults With Preserved Ejection Fraction Heart Failure. Background: Despite exercise being one of few strategies to improve outcomes for individuals with heart failure with preserved ejection fraction (HFpEF), exercise clinical trials in HFpEF are plagued by poor interventional adherence. Over the last 2 decades, our research team has developed, tested, and refined Heart failure Exercise And Resistance Training (HEART) Camp, a multicomponent behavioral intervention to promote adherence to exercise in HF. We evaluated the effects of this intervention designed to promote adherence to exercise in HF focusing on subgroups of participants with HFpEF and heart failure with reduced ejection fraction (HFrEF). Methods And Results: This randomized controlled trial included 204 adults with stable, chronic HF. Of those enrolled, 59 had HFpEF and 145 had HFrEF. We tested adherence to exercise (defined as ≥120 minutes of moderate-intensity [40%-80% of heart rate reserve] exercise per week validated with a heart rate monitor) at 6, 12, and 18 months. We also tested intervention effects on symptoms (Patient-Reported Outcomes Measurement Information System-29 and dyspnea-fatigue index), HF-related health status (Kansas City Cardiomyopathy Questionnaire), and physical function (6-minute walk test). Participants with HFpEF (n = 59) were a mean of 64.6 ± 9.3 years old, 54% male, and 46% non-White with a mean ejection fraction of 55 ± 6%. Participants with HFpEF in the HEART Camp intervention group had significantly greater adherence compared with enhanced usual care at both 12 (43% vs 14%, phi = 0.32, medium effect) and 18 months (56% vs 0%, phi = 0.67, large effect). HEART Camp significantly improved walking distance on the 6-minute walk test (η² = 0.13, large effect) and the Kansas City Cardiomyopathy Questionnaire overall (η² = 0.09, medium effect), clinical summary (η² = 0.16, large effect), and total symptom (η² = 0.14, large effect) scores. In the HFrEF subgroup, only patient-reported anxiety improved significantly in the intervention group. Conclusions: A multicomponent, behavioral intervention is associated with improvements in long-term adherence to exercise, physical function, and patient-reported outcomes in adults with HFpEF and anxiety in HFrEF. Our results provide a strong rationale for a large HFpEF clinical trial to validate these findings and examine interventional mechanisms and delivery modes that may further promote adherence and improve clinical outcomes in this population. Clinical Trial Registration: URL: https://clinicaltrials.gov/. Unique identifier: NCT01658670. abstract_id: PUBMED:23720399 Psychological responses and adherence to exercise in heart failure.
Purpose: The purpose of this study was to describe psychological effects and exercise adherence during a multicomponent exercise training intervention. Methods: A sample of 42 patients with heart failure was randomized into an exercise (INV) group (n = 22) and an attention control (AC) group (n = 20). The exercise protocol included two 12-week phases, a structured phase and a self-managed phase. The psychological responses assessed were mood states and exercise self-efficacy. To meet the second purpose of the study, the exercise group was dichotomized based on the number of sessions completed to create two adherence subgroups. Findings: Results indicate self-efficacy improved for the INV group and was maintained during the self-management phase. The adherence subgroups demonstrated different patterns for weekly exercise. Depression and confusion scores improved for the high-adherence group in contrast to worsening for the low-adherence group. Conclusions: Results suggest a need for further study of the psychological responses of exercise adherence for patients with heart failure. Clinical Relevance: The study indicates the importance of continuous assessment of exercise participation and longer-term adherence support for patients with heart failure. abstract_id: PUBMED:12875144 Effects of an exercise adherence intervention on outcomes in patients with heart failure. The purpose of this study was to determine how an exercise adherence intervention affects the physiological, functional, and quality of life outcomes of patients with heart failure (HF). Sixteen HF patients were randomly assigned to an exercise-only group (n = 8) or to an exercise-with-adherence group (n = 8). Two of the 16 people died from non-exercise-related causes during the study and were not included in the analysis. The intervention was tested over a 24-week period in which patients participated in a 12-week supervised exercise program (Phase 1) followed by 12 weeks of unsupervised home exercise (Phase 2). The intervention format was one of individualized graphic feedback on exercise goals and participation and problem-solving support by nurses. Results indicate that patients who received the intervention exercised more frequently and experienced improved outcomes during both phases. The adherence intervention may encourage HF patients to continue to exercise and thereby maintain the health benefits gained in both phases of an exercise program. abstract_id: PUBMED:32761093 Patient-reported outcomes and medication adherence in patients with heart failure. Aims: Patient-reported outcome measures (PROMs) may predict poor clinical outcome in patients with heart failure (HF). It remains unclear whether PROMs are associated with subsequent adherence to HF medication. We aimed to determine whether health-related quality of life, anxiety, and depression were associated with long-term medication adherence in these patients. Methods And Results: A national cohort study of Danish patients with HF with 3-year follow-up (n = 1464). PROMs included the EuroQol five-dimensional, five-level questionnaire (EQ-5D-5L), the HeartQoL, and the Hospital Anxiety and Depression Scale (HADS). PRO data were linked to demographic and clinical data at baseline, and to data on all redeemed prescriptions for angiotensin-converting enzyme inhibitors/angiotensin II receptor blockers/angiotensin receptor neprilysin inhibitors (ACEI/ARB/ARNI), β-blockers, and mineralocorticoid receptor antagonists during follow-up.
Medication non-adherence was defined as a proportion of days covered of <80%. In adjusted regression analyses, lower health-related quality of life (EQ-5D and HeartQoL) and symptoms of depression (HADS-D) at discharge were associated with non-adherence. After 3 years of follow-up, lower health-related quality of life (EQ-5D) was associated with non-adherence for ACEI/ARB/ARNI [adjusted OR 2.78, 95% confidence interval (CI): 1.19-6.49] and β-blockers (adjusted OR 2.35, 95% CI: 1.04-5.29), whereas HADS-D was associated with non-adherence for ACEI/ARB/ARNI (adjusted OR 1.07, 95% CI: 1.03-1.11) and β-blockers (adjusted OR 1.06, 95% CI: 1.02-1.10). Conclusion: Lower health-related quality of life and symptoms of depression were associated with non-adherence across HF medications at 1 and 3 years of follow-up. Person-centred care using PROMs may carry a potential for identifying patients at increased risk of future medication non-adherence. abstract_id: PUBMED:30010027 Effects of the HEART Camp Trial on Adherence to Exercise in Patients With Heart Failure. Background: Few exercise training studies in patients with heart failure (HF) report adherence to the guideline-recommended 150 minutes of moderate-intensity exercise per week, and no studies have focused on a primary outcome of adherence. Methods And Results: This randomized controlled trial evaluated the effect of a multicomponent intervention, Heart Failure Exercise and Resistance Training (HEART) Camp, on adherence to exercise (after 6, 12, and 18 months) compared with an enhanced usual care (EUC) group. Patients (n = 204) were 55.4% male, overall average age was 60.4 years, and 47.5% were nonwhite. The HEART Camp group had significantly greater adherence at 12 (42%) and 18 (35%) months compared with the EUC group (28% and 19%, respectively). No significant difference (P > .05) was found at 6 months. The treatment effect did not differ based on patient age, race, gender, marital status, type of HF (preserved or reduced ejection fraction) or New York Heart Association functional class. Left ventricular ejection fraction (LVEF) significantly moderated the treatment effect, with greater adherence at higher LVEF. Conclusions: The multicomponent HEART Camp intervention showed efficacy with significant effects at 12 months and 18 months. Adherence levels remained modest, indicating a need for additional research to address methods and strategies to promote adherence to exercise in patients with HF. Answer: Yes, exercise adherence is associated with clinical outcomes in patients with advanced heart failure. A secondary analysis conducted on the intervention arm of an exercise training study in patients with heart failure found that those who adhered to exercise (categorized as adherers) had more favorable clinical outcomes compared to nonadherers. Adherers experienced greater improvements in functional performance and quality of life (QOL), and there was a significant difference in the composite endpoint of all-cause hospitalization, emergency room admissions, and death/urgent transplantation between the two groups (PUBMED:20424399). Additionally, psychosocial factors such as social support and barriers to participation were found to influence exercise adherence, with poor social support and high barriers to exercise associated with lower exercise time.
However, perceived social support did not impact the effect of exercise training on outcomes, but for cardiovascular death or heart failure hospitalization, exercise training had a greater impact on patients with lower barriers to exercise (PUBMED:26578668). Furthermore, a multicomponent behavioral intervention, Heart failure Exercise And Resistance Training (HEART) Camp, was shown to improve long-term adherence to exercise, physical function, and patient-reported outcomes in adults with heart failure with preserved ejection fraction (HFpEF) and anxiety in heart failure with reduced ejection fraction (HFrEF) (PUBMED:34534664). Another study indicated that the inclusion of patient-selected adherence strategies supports continued exercise and helps to sustain physiological improvements (PUBMED:29557821). Overall, these findings suggest that adherence to exercise is crucial for improving clinical outcomes in patients with advanced heart failure, and interventions that promote exercise adherence can lead to better health outcomes in this population.
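The composite-endpoint result cited above (hazard ratio 0.31 favoring adherers) comes from a time-to-event analysis. A minimal sketch of how such a hazard ratio could be estimated with the lifelines library is shown below; the data are simulated for illustration and are not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 71  # size of the analyzed cohort; the data themselves are simulated

adherer = rng.integers(0, 2, n)  # 1 = adherer, 0 = nonadherer
# Simulated months to the composite endpoint (hospitalization, ER admission,
# or death/urgent transplantation), with adherers given longer event-free
# times purely so the illustration has an effect to estimate.
months = rng.exponential(12 + 10 * adherer)
event = rng.integers(0, 2, n)  # 1 = endpoint reached, 0 = censored

df = pd.DataFrame({"adherer": adherer, "months": months, "event": event})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()  # exp(coef) for "adherer" is the estimated hazard ratio
```

An exp(coef) well below 1 for the adherence indicator corresponds to the kind of protective association the study reports.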
Instruction: Does the Addenbrooke's Cognitive Examination-revised add to the Mini-Mental State Examination in established Alzheimer disease? Abstracts: abstract_id: PUBMED:34843170 Validation of a modified Chinese version of Mini-Addenbrooke's Cognitive Examination for detecting mild cognitive impairment. Background: For detecting mild cognitive impairment (MCI), brief cognitive screening tools are increasingly required because they save time and need no special equipment or trained raters. We aimed to develop a modified Chinese version of the Mini-Addenbrooke's Cognitive Examination (C-MACE) and further evaluate its validity in detecting MCI. Methods: A total of 716 individuals aged from 50 to 90 years old were recruited, including 431 cognitively normal controls (NC) and 285 individuals with MCI. The effect size of Cramer's V was used to explore which items in the Chinese version of Addenbrooke's Cognitive Examination-III (ACE-III-CV) were most strongly associated with MCI and to form the C-MACE. Receiver operating characteristic (ROC) analyses were carried out to explore the ability of C-MACE, ACE-III-CV, Chinese version of Montreal Cognitive Assessment-Basic (MoCA-BC), and Mini-Mental State Examination (MMSE) in discriminating MCI from NC. Results: The five items with the greatest effect sizes of Cramer's V were selected from ACE-III-CV to form the C-MACE: Memory Immediate Recall, Memory Delayed Recall, Memory Recognition, Verbal Fluency Animal, and Language Naming. With a total score of 38, the C-MACE had a satisfactory classification accuracy in detecting MCI (area under the ROC curve, AUC = 0.892), superior to MMSE (AUC = 0.782) and comparable to ACE-III-CV (AUC = 0.901) and MoCA-BC (AUC = 0.916). In the subgroup of Age > 70 years, Education ≤ 12 years, the C-MACE achieved its highest classification accuracy (AUC = 0.958) for detecting MCI. Conclusion: In the Chinese-speaking population, the C-MACE derived from ACE-III-CV may identify MCI with a good classification accuracy, especially in older people with low education. abstract_id: PUBMED:29213390 Brazilian adaptation of the Addenbrooke's Cognitive Examination-Revised (ACE-R). The Addenbrooke's Cognitive Examination-Revised (ACE-R) is a highly sensitive and specific tool for the detection of mild dementia. It is particularly useful in differentiating Alzheimer's disease from frontotemporal dementia. While the first version of the test battery has been adapted in many countries, its revised version has not, probably because it was published very recently. Objective: To translate and adapt the ACE-R for use in the Brazilian population. Methods: Two independent translations were made from English into Portuguese, followed by two independent back-translations. Few adaptations in accordance with the Brazilian culture and language were made, and a first version of the instrument was produced. This first version of the ACE-R was administered to 21 cognitively healthy subjects aged 60 years or more, with different educational levels. Results: The mean age of the studied sample of healthy elderly was 75.4 years (ranging from 60 to 89 years). Small additional modifications were necessary after the evaluation of the first ten subjects in order to improve comprehension of the test. The final Portuguese version of the ACE-R was produced and was found to be well understood by the remaining 11 subjects, taking an average of 15 minutes to be administered.
Conclusions: The Brazilian version of the ACE-R proved to be a promising cognitive instrument for testing both in research and clinical settings. In this regard, additional studies are currently being carried out in our unit in order to investigate the diagnostic properties of the ACE-R in our milieu. abstract_id: PUBMED:33533261 Validation of the Chinese version of Addenbrooke's cognitive examination III for detecting mild cognitive impairment. Objectives: To evaluate the reliability and validity of the Chinese version of Addenbrooke's Cognitive Examination III (ACE-III-CV) in the identification of mild cognitive impairment (MCI), and further investigate the optimal cutoff scores according to age and education level. Method: A total of 716 individuals aged from 50 to 90 years old were recruited through internet-based and print advertisements, including 431 cognitively normal controls (NC) and 285 individuals with MCI according to an actuarial neuropsychological method put forward by Jak and Bondi. Besides the cognitive screening tests of ACE-III-CV, Mini-Mental State Examination (MMSE) and Chinese version of Montreal Cognitive Assessment-Basic (MoCA-BC), all the participants underwent a battery of standardized neuropsychological tests. Validations of the ACE-III-CV, MMSE, and MoCA-BC for detecting MCI from NC were determined by receiver operating characteristic (ROC) curves. Results: ACE-III-CV had a good reliability (Cronbach's coefficient α = 0.807; intraclass correlation coefficients for interrater and test-retest reliability were 0.95 and 0.93). According to the area under the ROC curve (AUC), ACE-III-CV and MoCA-BC showed better ability than MMSE in detecting MCI. No significant difference was found between ACE-III-CV and MoCA-BC. The optimal cutoff scores of ACE-III-CV for screening MCI were 72 for individuals with 1-9 years of education, 78 for individuals with 10-15 years of education, and 80 for individuals with more than 16 years of education. Conclusion: The Chinese version of ACE-III-CV is a reliable and valid screening tool for detecting MCI. The optimal cutoff scores are closely related to education level. abstract_id: PUBMED:33592617 Adaptation and Validation of the Mini-Addenbrooke's Cognitive Examination in Dementia in Arabic Speakers in Egypt. Background: The mini-Addenbrooke's Cognitive Examination (m-ACE) is a brief cognitive battery that assesses 5 subdomains of cognition (attention, memory, verbal fluency, visuospatial abilities, and memory recall). It is scored out of 30 and can be administered in under 5 min, providing a quick screening tool for assessment of cognition. Objectives: We aimed to adapt the m-ACE in Arabic speakers in Egypt and to validate it in dementia patients to provide cutoff scores. Methods: We included 37 patients with dementia (Alzheimer's disease [n = 25], vascular dementia [n = 8], and dementia with Lewy body [n = 4]) and 43 controls. Results: There was a statistically significant difference (p < 0.001) on the total m-ACE score between dementia patients (mean 10.54 and standard deviation [SD] 5.83) and controls (mean 24.02 and SD 2.75). There was also a statistically significant difference between dementia patients and controls on all sub-score domains of the m-ACE (p < 0.05). Performance on the m-ACE significantly correlated with both the Mini-Mental State Examination (MMSE) and the Addenbrooke's Cognitive Examination-III (ACE-III).
Using a receiver operating characteristic curve, the optimal cutoff score for dementia on the m-ACE total score was found to be 18 (92% sensitivity, 95% specificity, and 94% accuracy). Conclusions: We adapted the m-ACE in Arabic speakers in Egypt and provided objective validation of it as a screening tool for dementia, with high sensitivity, specificity, and accuracy. abstract_id: PUBMED:34095357 Identification of predictors for mini-mental state examination and revised Hasegawa's Dementia Scale scores using MR-based brain morphometry. Purpose: The early detection of cognitive function decline is crucial to help manage or slow the progression of symptoms. The Mini-Mental State Examination (MMSE) and revised Hasegawa's Dementia Scale (HDS-R) are widely used in screening for cognitive impairment. The purpose of this study was to explore common predictors of the two different cognitive testing systems using MR-based brain morphometry. Materials And Methods: This retrospective study included 200 subjects with clinical suspicion of cognitive impairment who underwent 3D T1-weighted MRI at our institution between February 2019 and August 2020. Variables related to the volume of deep gray matter and 70 cortical thicknesses were obtained from the MR images using the voxel-based specific regional analysis system for Alzheimer's disease (VSRAD) and FreeSurfer software. The correlation between each variable, including age, and MMSE/HDS-R scores was evaluated using uni- and multi-variate logistic regression analyses. Results: In univariate analysis, parameters including hippocampal volume and bilateral entorhinal cortex (ERC) thickness showed moderate correlation coefficients with both MMSE and HDS-R scores. Multivariate analysis demonstrated that the right ERC thickness was the common parameter that significantly correlated with both MMSE and HDS-R scores (p < 0.05). Conclusion: Right ERC thickness appears to offer a useful predictive biomarker for both MMSE and HDS-R scores. abstract_id: PUBMED:25999982 Does the Order of Item Difficulty of the Addenbrooke's Cognitive Examination Add Anything to Subdomain Scores in the Clinical Assessment of Dementia? Background: The Addenbrooke's Cognitive Examination (ACE) is used to measure cognition across a range of domains in dementia. Identifying the order in which cognitive decline occurs across items, and whether this varies between dementia aetiologies, could add more information to subdomain scores. Method: ACE-Revised data from 350 patients were split into three groups: Alzheimer's type (n = 131), predominantly frontal (n = 119) and other frontotemporal lobe degenerative disorders (n = 100). Results of factor analysis and Mokken scaling analysis were compared. Results: Principal component analysis revealed one factor for each group. Confirmatory factor analysis found that the one-factor model fit two samples poorly. Mokken analyses revealed different item ordering in terms of difficulty for each group. Conclusion: The different patterns for each diagnostic group could aid in the separation of these different types of dementia. abstract_id: PUBMED:25430606 The Portuguese version of Addenbrooke's Cognitive Examination-Revised (ACE-R) in the diagnosis of subcortical vascular dementia and Alzheimer's disease. The aim of this study was to investigate the accuracy of the Portuguese version of Addenbrooke's Cognitive Examination-Revised (ACE-R) in detecting and differentiating early stage subcortical vascular dementia (SVD) from early stage Alzheimer's disease (AD).
Ninety-two subjects (18 SVD patients, 36 AD patients, and 38 healthy controls) were assessed using the ACE-R. Between-group differences were evaluated using Quade's rank analysis of covariance. The diagnostic accuracy and discriminatory ability of the ACE-R were examined via receiver operating characteristic (ROC) analysis. The ACE-R was able to successfully discriminate between patients and healthy subjects. The mean ACE-R total scores differed between SVD and AD patients; there were also significant differences in attention and orientation and in memory measures between the groups. An optimal cut-off of 72/73 was found for the detection of AD (sensitivity: 97%; specificity: 92%) and SVD (sensitivity: 100%; specificity: 92%). abstract_id: PUBMED:28635782 Validation of the Russian version of the modified Addenbrooke's cognitive examination for Alzheimer's disease diagnosis. Aim: To evaluate the validity of the Russian version of the modified Addenbrooke's cognitive examination (ACE-m) for Alzheimer's disease (AD) diagnosis. Material And Methods: The ACE-m and MMSE, as a comparison scale, were administered to 78 patients including 49 patients with AD and 29 healthy people. Results And Conclusion: The ACE-m demonstrated high validity in the diagnosis of dementia in AD. Cronbach's alpha was 0.89. The ACE-m is a brief, simple-to-use test that measures a wide spectrum of cognitive functions, gives a general impression of cognitive deficit, and diagnoses dementia in AD with greater validity than the MMSE. This scale is useful not only in the diagnosis of AD but also in differentiating it from other causes of dementia. abstract_id: PUBMED:34530438 Brazilian Version of Addenbrooke's Cognitive Examination-Revised in the Differential Diagnosis of Alzheimer's Disease and Behavioral Variant Frontotemporal Dementia. Introduction: Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) are frequent causes of dementia and, therefore, instruments for differential diagnosis between these two conditions are of great relevance. Objective: To investigate the diagnostic accuracy of Addenbrooke's Cognitive Examination-Revised (ACE-R) for differentiating AD from bvFTD in a Brazilian sample. Methods: The ACE-R was administered to 102 patients who had been diagnosed with mild dementia due to probable AD, 37 with mild bvFTD and 161 cognitively healthy controls, matched according to age and education. Additionally, all subjects were assessed using the Mattis Dementia Rating Scale and the Neuropsychiatric Inventory. The performance of patients and controls was compared by using univariate analysis, and ROC curves were calculated to investigate the accuracy of the ACE-R for differentiating AD from bvFTD and for differentiating AD and bvFTD from controls. The verbal fluency plus language to orientation plus name and address delayed recall memory (VLOM) ratio was also calculated. Results: The optimum cutoff scores for the ACE-R were &lt;80 for AD, &lt;79 for bvFTD, and &lt;80 for dementia (AD + bvFTD), with areas under the receiver operating characteristic (ROC) curves (AUC) &gt;0.85. For the differential diagnosis between AD and bvFTD, a VLOM ratio of 3.05 showed an AUC of 0.816 (Cohen's d = 1.151; p &lt; .001), with 86.5% sensitivity, 71.4% specificity, 72.7% positive predictive value, and 85.7% negative predictive value.
Conclusions: The Brazilian ACE-R achieved good diagnostic accuracy for differentiating AD from bvFTD patients and for differentiating AD and bvFTD from the controls in the present sample. abstract_id: PUBMED:38403287 Attention to the domains of Revised Hasegawa Dementia Scale and Mini-Mental State Examination in patients with Alzheimer's disease dementia. Background: In Japan, Alzheimer's disease dementia (AD) is the most common cognitive disease, and the most widely used dementia screening tests are the Revised Hasegawa Dementia Scale (HDS-R) and Mini-Mental State Examination (MMSE). This study sought to elucidate the relationships of the individual domains of these tests with age and duration of school education in a large group of patients with AD. Methods: Participants were 505 new outpatients diagnosed with AD who completed the HDS-R and MMSE at the first visit. We investigated the relationships of total and individual domains of these tests with age and duration of school education using the least squares method. Next, we plotted regression lines of the individual domain scores against the total test scores. Results: Younger age and longer duration of school education were significantly associated with higher total HDS-R and MMSE scores in AD. Domain-specific results indicated that younger age was significantly associated with a higher immediate memory score on both the HDS-R and MMSE and with higher orientation (time) and repetition scores on the MMSE. Longer duration of school education was significantly associated with a higher working memory score on the HDS-R and with higher serial 7, repetition and writing scores on the MMSE. In addition, shorter duration of school education was significantly associated with a higher naming score on the MMSE. The regression lines of orientation to time, remote memory, visual memory, and verbal fluency reached floor values on the HDS-R (4/30, 8/30, 4/30, and 6/30, respectively), and those of orientation to time, serial 7, remote memory, and writing also reached floor values on the MMSE (8/30, 9/30, 11/30, and 8/30, respectively). Conclusions: We should pay attention to age, duration of school education, and the individual domains when using the HDS-R or MMSE to assess patients with AD. Answer: The Addenbrooke's Cognitive Examination-Revised (ACE-R) is a cognitive test battery that has been shown to be highly sensitive and specific for the detection of mild dementia and is particularly useful in differentiating Alzheimer's disease from frontotemporal dementia (PUBMED:29213390). The ACE-R has been adapted and validated in various languages and cultural contexts, including Brazilian (PUBMED:29213390), Portuguese (PUBMED:25430606), and Russian (PUBMED:28635782) versions, as well as a Brazilian adaptation validated for differentiating Alzheimer's disease from behavioral variant frontotemporal dementia (PUBMED:34530438). In comparison to the Mini-Mental State Examination (MMSE), the ACE-R and its adaptations, such as the modified Chinese version of the Mini-Addenbrooke's Cognitive Examination (C-MACE) and the ACE-III-CV, have demonstrated better ability in detecting mild cognitive impairment (MCI), which is often a precursor to Alzheimer's disease (PUBMED:34843170; PUBMED:33533261). The ACE-R has been found to successfully discriminate between patients with Alzheimer's disease and healthy subjects, with high sensitivity and specificity (PUBMED:25430606). Additionally, the ACE-R has been shown to be a promising cognitive instrument for testing in both research and clinical settings (PUBMED:29213390).
The ACE-R includes a broader range of cognitive domains compared to the MMSE, such as attention, memory, verbal fluency, language, and visuospatial abilities, which may provide a more comprehensive assessment of cognitive function in Alzheimer's disease (PUBMED:33592617). Complementary work using MR-based brain morphometry has likewise identified structural predictors of performance on screening tests such as the MMSE (PUBMED:34095357), part of the broader effort to refine and interpret cognitive screening. In conclusion, the Addenbrooke's Cognitive Examination-Revised (ACE-R) adds to the Mini-Mental State Examination (MMSE) in the assessment of established Alzheimer's disease by offering a more detailed evaluation of the cognitive domains that are affected by the disease. The ACE-R's sensitivity and specificity, as well as its ability to differentiate between Alzheimer's disease and other forms of dementia, make it a valuable tool in both clinical and research settings. A sketch of how the cutoff scores quoted in these studies are typically derived is given below.
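Several of the abstracts above report an "optimal cutoff score" together with its sensitivity and specificity. These numbers come from a standard receiver operating characteristic (ROC) analysis. The following is a minimal illustrative sketch of that procedure in Python, using simulated screening-test scores; the score distributions, group sizes, and variable names are assumptions for illustration only and do not reproduce any cited study.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(0)
    # Simulated total test scores (0-100); patients score lower than controls.
    patient_scores = rng.normal(70, 8, 150)   # e.g., a dementia group
    control_scores = rng.normal(88, 5, 150)   # cognitively healthy controls

    scores = np.concatenate([patient_scores, control_scores])
    labels = np.concatenate([np.ones(150), np.zeros(150)])  # 1 = patient

    # Lower scores indicate impairment, so negate them: roc_curve expects
    # higher values to indicate the positive class.
    fpr, tpr, thresholds = roc_curve(labels, -scores)
    auc = roc_auc_score(labels, -scores)

    # Youden's J = sensitivity + specificity - 1; the threshold that
    # maximizes J is one common definition of the "optimal" cutoff.
    j = tpr - fpr
    best = j.argmax()
    print(f"AUC = {auc:.3f}")
    print(f"cutoff = {-thresholds[best]:.1f} "
          f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")

On real cohorts the same logic is applied, and the resulting cutoffs are then often stratified by education or age, as with the education-specific ACE-III-CV cutoffs of 72, 78, and 80 (PUBMED:33533261).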
Instruction: Does the body mass index predict mortality after isolated aortic valve replacement? Abstracts: abstract_id: PUBMED:24383370 Does the body mass index predict mortality after isolated aortic valve replacement? Background And Aim Of The Study: Data relating to the impact of body mass index (BMI) on outcomes after isolated aortic valve replacement (AVR) are scarce and controversial. The study aim was to investigate the predictive value of BMI for early and late mortality after isolated AVR. Methods: Data obtained from patients who underwent isolated AVR between January 1998 and December 2010 at the authors' institution were analyzed retrospectively. Patients were allocated to five groups according to the preoperative BMI: underweight (BMI &lt; 20 kg/m2); normal weight (BMI 20.0-24.9 kg/m2); overweight (BMI 25.0-29.9 kg/m2); obese (BMI 30.0-34.9 kg/m2); and morbidly obese (BMI &gt; 34.9 kg/m2). Logistic and Cox regression analyses were performed to identify the independent predictors of early and late mortality, respectively. Results: After excluding 20 patients who were lost to follow-up, and 30 patients with missing preoperative BMI data, a total of 1,758 patients was included in the analysis. The mean follow-up was 5.6 +/- 3.5 years (range: 0-13.4 years), and the mean BMI 26.8 +/- 4.3 kg/m2 (range: 17-52 kg/m2). Multivariate logistic regression analyses showed no association between early mortality and the BMI groups. Multivariate Cox regression analyses showed 'underweight' to be an independent predictor for late mortality (hazard ratio 2.89; 95% confidence interval 1.63-5.13, p &lt; 0.0001). Conclusion: 'Underweight' is an independent predictor for late mortality after AVR surgery. Morbid obesity did not prove to be predictive of a worse late survival. abstract_id: PUBMED:37240534 One-Year and Five-Year Outcomes of Transcatheter Aortic Valve Replacement or Surgical Aortic Valve Replacement in a Taiwanese Elderly Population. Background: The aim of our study was to provide real-world data on outcomes for elderly Taiwanese patients who underwent transcatheter aortic valve replacement or surgical aortic valve replacement in different risk groups. Methods: From March 2011 through December 2021, 177 patients with severe aortic stenosis who were ≥70 years old and had undergone TAVI (transcatheter aortic valve implantation) or SAVR (surgical aortic valve replacement) in a single center were divided by STS score (&lt;4%, 4-8% and &gt;8%) into three different groups. Then, we compared their clinical characteristics, operative complications, and all-cause mortality. Results: In all risk groups, there were no significant differences in in-hospital mortality, or 1-year and 5-year mortality, between patients in the TAVI and SAVR groups. In all risk groups, patients in the TAVI group had shorter hospital stays and a higher rate of paravalvular leakage than the SAVR group. After univariate analysis, BMI (body mass index) &lt; 20 was a risk factor for higher 1-year and 5-year mortality. In the multivariate analysis, acute kidney injury was an independent factor for predicting worse outcomes in terms of 1-year and 5-year mortality. Conclusions: Taiwanese elderly patients in all risk groups did not have significant differences in mortality rates between the TAVI and SAVR groups. However, the TAVI group had shorter hospital stays and a higher rate of paravalvular leakage in all risk groups.
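Both abstracts above stratify patients into preoperative BMI groups and test 'underweight' as a predictor of mortality, the first using Cox regression. As a purely illustrative sketch of that workflow (simulated data; the hazard ratio is borrowed loosely from the reported 2.89, and none of the values reproduce the studies' cohorts):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    def bmi_group(bmi: float) -> str:
        """Five preoperative BMI strata (kg/m2) as defined in PUBMED:24383370."""
        if bmi < 20.0:
            return "underweight"
        if bmi < 25.0:
            return "normal"
        if bmi < 30.0:
            return "overweight"
        if bmi < 35.0:
            return "obese"
        return "morbidly_obese"

    rng = np.random.default_rng(1)
    n = 500
    bmi = rng.normal(26.8, 4.3, n).clip(17, 52)  # mean/SD/range from the abstract
    underweight = np.array([bmi_group(b) == "underweight" for b in bmi])

    # Simulated survival times: underweight carries a higher hazard
    # (illustrative effect size only).
    time = rng.exponential(20.0 / np.where(underweight, 2.9, 1.0), n)
    cap = 13.4                            # maximum follow-up in the abstract
    death = (time <= cap).astype(int)     # 0 = censored at end of follow-up

    df = pd.DataFrame({"years": np.minimum(time, cap),
                       "death": death,
                       "underweight": underweight.astype(int)})

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years", event_col="death")
    cph.print_summary()  # exp(coef) for 'underweight' estimates the hazard ratio

On real data the model would additionally adjust for the preoperative covariates these studies mention, which is what separates an independent predictor from a univariable association.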
abstract_id: PUBMED:29075334 Association of body mass index and visceral fat with aortic valve calcification and mortality after transcatheter aortic valve replacement: the obesity paradox in severe aortic stenosis. Background: Previous studies showed that metabolic syndrome is associated with aortic valve calcification (AVC) and poor outcomes in aortic stenosis (AS). However, whether these associations change in the late stage of the disease, and how body fat impacts the prognosis of such patients, have not yet been explored. Aims: To determine the association of body mass index (BMI) and visceral fat with AVC and mortality after transcatheter aortic valve replacement (TAVR). Methods: This was a prospective cohort of 170 severe AS patients referred to TAVR. We quantified the AVC mass score and fat depots including epicardial adipose tissue, intrathoracic fat, and abdominal visceral (VAF) and subcutaneous fats by computed tomography. Fat depots were indexed to body surface area. All-cause and cardiovascular-related deaths after TAVR were recorded over a median follow-up of 1.2 years. Results: Higher AVC mass was independently associated with low BMI and low VAF. All-cause mortality risk increased with decreasing BMI and increasing VAF. A stratified analysis by obesity showed that in non-obese patients, VAF was inversely associated with mortality, whereas in obese patients, high VAF was associated with higher mortality (p value for interaction &lt; 0.05). At long-term follow-up, the hazard ratio (HR) with non-obese/low VAF was 2.3 (95% confidence interval [CI] 1.1-4.9; p = 0.021) and the HR with obese/high VAF was 2.5 (95% CI 1.1-5.8; p = 0.031) compared with obese/low VAF patients. Conclusions: In AS patients submitted to TAVR, BMI and VAF were inversely associated with AVC. Pre-intervention assessment of VAF by computed tomography may provide better discrimination of mortality than BMI alone. abstract_id: PUBMED:37343774 Body Mass Index and Outcomes of Transcatheter Aortic Valve Replacement: A Retrospective Observational Study. The influence of body mass index (BMI) on Transcatheter Aortic Valve Replacement (TAVR) outcomes has been the focus of several previous studies. In this study, we examined the relationship between BMI and various clinical outcomes following TAVR procedures. A comprehensive analysis was conducted using a large cohort of patients who underwent TAVR. In this study, we identified patients who underwent TAVR in the year 2020. International Classification of Diseases, Tenth Revision, Procedure Coding System (ICD-10-PCS) codes were used to identify TAVR cases. The weighted final study sample included 77,319 TAVR hospitalizations. Patients were categorized into 5 groups based on their body mass index (BMI). Our findings revealed that there was no significant difference in in-hospital mortality among different BMI groups when compared to patients with a normal BMI (20 to 24.9). Patients with a BMI of 25 or higher demonstrated a statistically significant shorter duration of hospitalization compared to those with a normal BMI. Patients with a BMI ranging between 30 and 39.9 exhibited decreased hospitalization costs when compared to patients with a normal BMI. Moreover, our study revealed a decrease in atrial fibrillation, acute heart failure and acute kidney injury complications following TAVR in patients with above-normal BMI. Despite similar in-hospital mortality across BMI groups, having a BMI of 25 or greater is associated with improved immediate outcomes following TAVR.
These benefits in overweight and obese patients are consistent with findings described in recent literature. Further studies are warranted to explore the underlying mechanisms and potential implications of these associations, as well as to optimize patient selection and management strategies for TAVR procedures. abstract_id: PUBMED:32471345 The effects of body mass index on outcomes for patients undergoing surgical aortic valve replacement. Background: Most of the studies of obesity and postoperative outcome have looked predominantly at coronary artery bypass grafting, with fewer focused on valvular disease. The purpose of this study was to compare the outcomes of patients undergoing aortic valve replacement stratified by body mass index (BMI, kg/m2). Methods: The Alberta Provincial Project for Outcome Assessment in Coronary Heart Disease registry captured 4780 aortic valve replacements in Alberta, Canada from January 2004 to December 2018. All recipients were stratified by BMI into five groups (BMI: &lt;20, 20-24.9, 25-29.9, 30-34.9, and ≥35). Log-rank test and Cox regression were used to examine the crude and adjusted survival differences. Results: Intra-operative clamp time and pump time were similar among the five groups. Significant statistical differences between groups existed for the incidence of isolated AVR, AVR and CABG, hemorrhage, septic infection, and deep sternal infection (p &lt; 0.05). While there was no significant statistical difference in the mortality rate across the BMI groups, the underweight AVR patients (BMI &lt; 20) were associated with an increased hazard ratio (1.519; 95% confidence interval: 1.028-2.245) with regard to all-cause mortality at the longest follow-up compared with normal weight patients. Conclusion: Overweight and obese patients should be considered as readily for AVR as normal BMI patients. abstract_id: PUBMED:30898426 Impact of Body Mass Index on Short- and Long-Term Outcomes After Isolated First-Time Surgical Aortic Valve Replacement for Aortic Stenosis. Objective: To ascertain whether body mass index (BMI) has a clinical effect on short- and long-term postoperative outcomes after surgical aortic valve replacement in patients with severe aortic stenosis. Design: Single-center, retrospective study. Setting: Tertiary referral hospital. Participants: The study comprised 1,561 patients who underwent isolated first-time aortic valve replacement between 2005 and 2012. Interventions: Fourteen underweight patients were removed from the analysis. The remaining patients were divided into the following 4 groups according to their BMI: 418 as normal weight (≥18.5 to &lt;25 kg/m2), 629 as overweight (≥25 to &lt;30 kg/m2), 342 as obese (≥30 to &lt;35 kg/m2), and 158 as very obese (≥35 kg/m2). Early mortality and postoperative complications were compared, and long-term survival rates were investigated. Measurement And Main Results: Thirty-day mortality was higher in the normal weight group but did not reach statistical significance (p = 0.054), and the incidence of postoperative complications was not different for cerebrovascular accident (p = 0.70), re-sternotomy for bleeding (p = 0.17), sternal wound infection (p = 0.07), and dialysis (p = 0.07). With a mean follow-up time of 4.92 ± 2.82 years, the survival rate was better in the overweight group.
A Cox proportional hazard model found BMI inversely correlated with long-term mortality when analyzed in a univariable fashion (hazard ratio 0.95; p = 0.009), but this apparent protective effect disappeared when adjusted for preoperative covariates (hazard ratio 0.98, 95% confidence interval 0.96-1.004; p = 0.12). Conclusion: Once adjusted for preoperative characteristics, obesity does not represent an independent predictor for long-term survival rates. There was a higher incidence of 30-day mortality in the normal weight group compared with the overweight and very obese groups. The incidence of deep sternal wound infection was higher in very obese patients. abstract_id: PUBMED:36003588 Impact of body mass index on outcomes in patients undergoing transfemoral transcatheter aortic valve implantation. Objective: This study sought to investigate the effect of body mass index on outcomes in patients with severe aortic valve stenosis undergoing transcatheter aortic valve implantation. Methods: A total of 12,381 patients undergoing transfemoral transcatheter aortic valve implantation were divided into body mass index categories: underweight (&lt;18.5 kg/m2), normal weight (18.5-24.9 kg/m2), overweight (25.0-29.9 kg/m2), and obesity (&gt;30 kg/m2). Primary endpoints were differences in 30-day and 1-year all-cause mortality. Secondary endpoints included all other clinical endpoints, such as stroke. Univariate and multivariate odds ratios were calculated using logistic and Cox regression analyses. Results: Two percent (n = 205) of patients were underweight, 29% (n = 3564) were normal weight, 44% (n = 5460) were overweight, and 25% (n = 3152) were obese. Thirty-day mortality was lower in overweight (5.3%; odds ratio, 0.73; 95% confidence interval, 0.61-0.88; P = .001) and obese patients (5.2%; odds ratio, 0.74; 95% confidence interval, 0.60-0.92; P = .006), but higher in underweight patients (9.8%; odds ratio, 1.51; 95% confidence interval, 0.92-2.47; P = .010), as compared to normal weight patients (6.9%). After multivariate adjustment, 30-day mortality was not significantly different across body mass index categories. However, 1-year mortality was higher in underweight patients (hazard ratio, 1.52; 95% confidence interval, 1.10-2.09; P = .011). Stroke rates were comparable between body mass index groups. Conclusions: For overweight and obese patients with severe aortic valve stenosis undergoing transcatheter aortic valve implantation, there was no 30-day difference in mortality compared with patients with normal weight. However, underweight patients showed higher rates of 1-year mortality after transcatheter aortic valve implantation. abstract_id: PUBMED:32540167 Prognostic Impact of Underweight (Body Mass Index &lt;20 kg/m2) in Patients With Severe Aortic Valve Stenosis Undergoing Transcatheter Aortic Valve Implantation or Surgical Aortic Valve Replacement (from the German Aortic Valve Registry [GARY]). According to the Valve Academic Research Consortium, underweight is one parameter in the definition of frailty, which is associated with increased mortality after transcatheter aortic valve implantation (TAVI) and surgical aortic valve replacement (SAVR). Aims of our study were (1) to examine the impact of underweight on mortality after TAVI and SAVR and (2) to determine the effect of intervention mode (TAVI vs SAVR) on mortality in underweight patients from the German Aortic Valve Registry. Overall, 35,109 patients treated with TAVI or SAVR were studied.
Outcomes of underweight (body mass index [BMI] &lt;20 kg/m2) TAVI and SAVR patients were compared using propensity score weighting. Prevalence of underweight was 5.7% in patients who underwent TAVI and 2.9% in patients who underwent SAVR. Underweight patients had significantly increased mortality rates for both treatment strategies compared with normal weight patients (BMI 20 to 30 kg/m2). Comparing underweight TAVI and SAVR patients using propensity score weighting, no statistically significant differences regarding mortality rates were observed. Subgroup analysis of severely underweight patients (BMI &lt;18.5 kg/m2) revealed no significant increase of mortality after TAVI compared with underweight patients (BMI &lt;20 kg/m2), whereas severely underweight SAVR patients showed twofold increased mortality rates. In conclusion, underweight in patients who underwent TAVI or SAVR is rare, but it is associated with increased mortality. Severely underweight SAVR patients in particular showed excess mortality rates. abstract_id: PUBMED:35418541 Body Mass Index and Mortality from Nonrheumatic Aortic Valve Disease among Japanese Men and Women. Aim: We aimed to examine the impact of overweight and obesity on mortality from nonrheumatic aortic valve disease. Methods: In the Japan Collaborative Cohort Study, we analyzed data from 98,378 participants aged 40-79 years, with no history of coronary heart disease, stroke, or cancer at baseline (1988-1990), who completed a lifestyle questionnaire including height and body weight; they were followed for mortality until the end of 2009. The Cox proportional hazards model was used to calculate the multivariable hazard ratios (HRs) with 95% confidence intervals (CIs) of nonrheumatic aortic valve disease mortality according to body mass index (BMI) after adjusting for potential confounding factors. Results: During the median 19.2 years of follow-up, 60 deaths from nonrheumatic aortic valve disease were reported. BMI was positively associated with the risk of mortality from nonrheumatic aortic valve disease; the multivariable HRs (95% CIs) were 0.90 (0.40-2.06) for persons with BMI &lt;21 kg/m2, 1.71 (0.81-3.58) for BMI 23-24.9 kg/m2, 1.65 (0.69-3.94) for BMI 25-26.9 kg/m2, and 2.83 (1.20-6.65) for BMI ≥27 kg/m2 (p for trend = 0.006), compared with persons with BMI 21-22.9 kg/m2. Similar associations were observed between men and women (p for interaction = 0.56). Excluding those who died during the first ten years of follow-up, or conducting a competing risk analysis with other causes of death as competing risk events, did not change the association materially. Conclusions: Overweight and obesity may be independent risk factors for nonrheumatic aortic valve disease mortality in Asian populations. abstract_id: PUBMED:35722262 Body mass index and postoperative mortality in patients undergoing coronary artery bypass graft surgery plus valve replacement: a retrospective cohort study. Background: The relationship between body mass index (BMI) and postoperative mortality in patients who undergo coronary artery bypass graft (CABG) surgery plus valve replacement is uncertain. We aimed to investigate the association between body mass index (BMI) and postoperative mortality among patients who simultaneously underwent both CABG surgery and valve replacement. Methods: We retrospectively analyzed 1976 patients who underwent CABG surgery at our hospital between January 2017 and April 2021, including 202 patients who underwent valve replacement surgery during the same period.
We analyzed the relationship between BMI and postoperative mortality, which was assessed using smooth curve fitting and a multiple logistic regression model. Results: The results of smoothing curve fitting showed that BMI and postoperative mortality had a non-linear relationship, and the resulting curve exhibited a two-stage change and a breakpoint. Postoperative mortality is higher in patients who have a body mass index above 25 kg/m2 compared to patients with a body mass index between 18 and 25 kg/m2. Conclusions: Our study found a non-linear relationship between BMI and postoperative mortality in patients undergoing CABG plus valve replacement after adjusting for potential confounders. The causal relationship between BMI and postoperative mortality still requires further investigation. Answer: The body mass index (BMI) has been studied as a potential predictor of mortality after isolated aortic valve replacement (AVR), with varying results across different studies. According to the data from these studies, being underweight (BMI &lt; 20 kg/m2) is consistently identified as an independent predictor of increased late mortality after AVR surgery (PUBMED:24383370) and is associated with higher 1-year and 5-year mortality rates (PUBMED:37240534). This association is also observed in patients undergoing transcatheter aortic valve replacement (TAVR), where underweight patients have higher rates of 1-year mortality (PUBMED:36003588) and increased mortality rates compared with normal weight patients (PUBMED:32540167).
Instruction: Is the red cell distribution width strong predictor for treatment response in primary glomerulonephritides? Abstracts: abstract_id: PUBMED:24932664 Is the red cell distribution width strong predictor for treatment response in primary glomerulonephritides? Background: Novel biomarkers are needed to predict the response to treatment in patients with nephrotic syndrome (NS) due to primary glomerulonephritides (PGN). We aimed to test the predictive value of red blood cell distribution width (RDW) for estimation of response to therapy in adult patients with NS. Study design, setting &amp; participants, and intervention: We performed a prospective study including 176 patients with NS due to PGN. Patients were divided into three groups according to their response to the treatment. Group 1 was composed of patients with complete remission whereas group 2 was composed of patients with partial remission and group 3 was composed of patients who were resistant to the treatment. Results: The highest baseline mean RDW value was found in group 3 patients (17.8 ± 1.8) whereas the lowest in group 1 (13.4 ± 0.7) before treatment (p&lt;0.05). We found a significant decrease in RDW value after an effective treatment in groups 1 and group 2 (p&lt;0.05). However, there was no significant change in RDW values after treatment in group 3 (p&gt;0.05). Most of the patient with complete remission had base-line RDW level ≤ 14% (n=45, 90%) (p&lt;0.001, Kendal Tau: -0.86), and most of the patients who were resistant to the treatment had base-line RDW level p&gt;15% (n=68, 86.1%) (p&lt;0.001, Kendal Tau: -0.87). Conclusion: Our results suggest that pre-treatment RDW value is a promising novel biomarker for predicting response to the treatment in adult patients with NS due to PGN. abstract_id: PUBMED:36210887 Distribution Width of Red Blood Cells and Related Factors Among Patients with End-Stage Renal Disease in Addis Ababa, Ethiopia. Background: RDW is critical to the clinical diagnosis and progression of ESRD. There is currently little data on the relationship between RDW and ESRD in sub-Saharan Africa. Because of this, the present study evaluates RDW in patients with ESRD and associated factors in Addis Ababa, Ethiopia. Methods: The hospital-based cross-sectional study design was conducted on a total of 83 patients. RDW, MCV, SCR, BUN, GFR, FBS and serum albumin were determined. Blood pressure (mmHg), weight (kg), height (m), MUAC (cm) and BMI (kg/m2) were also measured. Data entry was via Epi-data version 3.4 and analyzed with SPSS version 26.0. A multivariate logistic regression analysis with a p-value &lt; 0.05 at a 95% confidence interval was used to identify the associated factors of RDW. Results: A total of 83 ESRD patients participated, with a response rate of 95.4%. RDW ranged from 15.5% to 23.6% with a mean of 17.40% + 1.46%. Anisocytosis was present in 98.8% of patients. Of 83 patients, 66.3% were hypertensive, 20.5% had diabetes, and the remaining 13.3% had other conditions (glomerulonephritis and peripheral vascular disease). The mean GFR value was 5.20 mL/min/1.73 + 1.58. RDW showed a significant association with GFR (AOR: 4.6, 95% CI [1.27, 20.74], P = 0.047), alcohol consumption (AOR: 13.4, P = 0.012, 95% CI [1.97, 22.62]), recurrent kidney disease (AOR=25.6, P=0.016, 95% CI [1.85, 53.71]) and use of medication (AOR=00.2, P=0.044), 95% CI [0.03, 0.95]). 
Conclusion: RDW showed a significant association with GFR, recurrent kidney disease, alcohol consumption, and medication use in hemodialysis-dependent ESRD patients. The mechanisms of RDW disruption in ESRD patients need further investigation. abstract_id: PUBMED:32700683 Red Blood Cell Distribution Width is Associated with Glomerulonephritis in Diabetic Patients with Albuminuria. BACKGROUND The aim of this study was to explore predictive factors to inform accurate diagnosis of glomerulonephritis (GNs) in patients with diabetes. MATERIAL AND METHODS Clinical characteristics and laboratory data were retrospectively analyzed from 200 patients with diabetes, including 115 patients who had undergone a renal biopsy. Eligible patients were categorized into three groups: pure type 2 diabetes mellitus (T2DM), isolated diabetic nephropathy (DN), and GN. Odds ratios (ORs) were calculated to evaluate the contributions of predictive factors for GN. A receiver operating characteristic (ROC) curve was created to obtain cut-off values for predictive factors for GNs and investigate their corresponding predictive accuracy. RESULTS Red cell distribution width (RDW) was significantly higher in the GN group than in the DN group. Multivariate regression analysis revealed that baseline RDW level (OR=1.988, 95% CI=1.237~3.194, P=0.005) was an independent predictive factor for development of GNs. CONCLUSIONS Increased RDW levels are independently associated with a greater risk of GN in patients with diabetes who have albuminuria, and RDW may be an additional valuable and noninvasive predictive tool for differentiating GNs and DN. abstract_id: PUBMED:25644969 Red cell distribution width in primary glomerulonephritides. N/A abstract_id: PUBMED:29698936 Response Predictors to Calcineurin Inhibitors in Patients with Primary Membranous Nephropathy. Background: Currently, there is an urgent need to find ways of identifying primary membranous nephropathy (PMN) patients who are likely to benefit from calcineurin inhibitors (CNI) or who are resistant to them. In this study, we employed nano-HPLC-MS/MS analysis to identify serum biomarkers that predict the clinical response to CNI therapy in PMN patients. Methods: The endpoint was complete remission (CR) after CNI treatment. PMN patients were grouped into no-remission (NR) or CR groups to screen predictive candidates using nano-HPLC-MS/MS analysis. Results: Compared with NR patients, 3 upregulated proteins and 5 downregulated proteins were found to present a twofold change in CR patients' serum. Serum amyloid A1 protein (SAA1) was further validated by ELISA; it was decreased in patients in the NR group compared with patients in the CR group, but SAA1 in patients in both groups was lower than in healthy controls and minimal change disease patients. The area under the receiver operating characteristic (ROC) curve of SAA1, used to distinguish PMN NR patients from those in remission, was 0.901, with a sensitivity of 78.3% and specificity of 86.8%, similar to that of the phospholipase A2 receptor (PLA2R) antibody. Combining SAA1 with the PLA2R antibody, the area under the ROC curve was 0.956, which was higher than that of SAA1 or the PLA2R antibody alone. Conclusions: Serum SAA1 may be a candidate PMN biomarker that can be used to discriminate CNI NR cases from remission patients. The combination of SAA1 and the PLA2R antibody increases the accuracy of diagnosis. abstract_id: PUBMED:3237281 Detection of glomerular bleeding by urinary-red-cell-size distribution.
Midstream urine specimens from 60 consecutive patients with hematuria were examined with an autoanalyzer to determine whether the source of bleeding could be predicted on the basis of the size distribution of urinary red blood cells. In 54 patients a definite diagnosis was made, which correlated with the urinary-red-cell-size distribution in 93.7% (15/16) of cases for whom hematuria was considered to be glomerular and in 100% (38/38) of cases of nonglomerular hematuria. It is concluded that this method can greatly help the clinician in distinguishing between glomerular and nonglomerular bleeding in patients with hematuria and in channeling such patients towards the most appropriate investigations. abstract_id: PUBMED:29314776 Chronicity index, especially glomerular sclerosis, is the most powerful predictor of renal response following immunosuppressive treatment in patients with lupus nephritis. Aim: Renal responses to immunosuppressive agents in patients with lupus nephritis (LN) differ depending on ethnicity, follow-up duration, disease severity and treatment. Thus, we evaluated predictors of complete remission during the first year following immunosuppressive treatment in patients with LN. Methods: We retrospectively reviewed 79 patients who underwent kidney biopsy prior to the start of induction treatment and who were subsequently treated with immunosuppressive drugs for at least 6 months and followed-up for more than a year. Complete remission (CR) was defined as inactive urinary sediment, a decrease in the urinary protein-to-creatinine ratio to &lt;0.2, and normal or stable renal function. Multivariate analyses were performed using the logistic regression model to identify independent predictors of CR in LN patients. Results: After 1 year, renal response was achieved in 39 of 79 patients (49.4%) treated with immunosuppressive drugs. Intravenous cyclophosphamide was most commonly used as a treatment, followed in descending order of frequency by mycophenolate mofetil, azathioprine and cyclosporine. CR was associated with disease duration at the onset of LN, serum erythrocyte sedimentation rate, chronicity index on renal histology, glomerular sclerosis, tubular atrophy, interstitial fibrosis, and the use of hydroxychloroquine at the onset of LN. In multivariable regression analysis, glomerular sclerosis in the chronicity index was a significant predictor of complete remission in LN patients. Conclusion: Our findings suggest that glomerular sclerosis in the chronicity index is an independent predictor of CR after the start of therapy in LN patients. abstract_id: PUBMED:2323112 Red cell volume distribution curves in the diagnosis of glomerular and non-glomerular hematuria. It has been suggested that measurement of red blood cell (RBC) volume, by red cell analyzers (RCA), may be used to distinguish glomerular from non-glomerular hematuria. The RCA measures all urinary particles, including RBC. We found that red cell volume distribution curves (RCVDC) obtained with a Coulter S+IV RCA were non-specific in predicting the cause of hematuria. Normal RBC added to urine at about 15,000/mm3 produced the expected non-glomerular RCVDC, but at decreasing RBC concentrations mixed and then glomerular RCVDCs were found. Urine from 7 normal subjects (6 without urinary RBC) showed particle volume distribution curves identical to glomerular RCVDCs. These urinary particles were examined by scanning electron microscopy (SEM) after filtration (3 microns and 0.8 micron pore size).
Hematuric urine from a patient after prostatectomy contained RBC (3.5-9.5 microns diameter) and particulate debris (1.0-7.5 microns diameter). Similar results were obtained for a patient with crescentic glomerulonephritis (particles 1-13 microns diameter). Attempts to separate glomerular urinary RBC from debris by centrifugation against a sucrose polymer were unsuccessful. The shape of the RCVDC obtained with an RCA is altered by the presence of debris in urine. The true MCV of glomerular RBC is still unknown. The usefulness of the RCA in the diagnosis of hematuria is limited to patients with large numbers of urinary RBC. abstract_id: PUBMED:2895832 Red-cell-volume distribution curves in diagnosis of glomerular and non-glomerular haematuria. The distribution curves of urinary red-blood-cell (RBC) size were obtained from automated blood-cell analysis in 146 patients with definite causes of haematuria. In 65 of 67 patients (97%) with haematuria and glomerulonephritis demonstrated by renal biopsy, urinary RBC had an irregular and asymmetrical distribution, with RBC size showing a much smaller volume than that of venous RBC. This "glomerular" distribution contrasted with the "non-glomerular" normal distribution, in which the peak for RBC was at a larger volume than that for peripheral RBC. In 46 of 47 patients with haematuria who had lower urinary tract lesions other than infection, a non-glomerular distribution was obtained; 30 of these cases also showed a glomerular distribution, and were classified as "mixed". All 32 patients with urinary tract infection had either a glomerular or mixed distribution, suggesting that they excreted distorted and dysmorphic urinary RBC. After excluding infections, this simple, rapid, reproducible, and non-invasive technique provides reliable information in distinguishing glomerular bleeding from other causes of haematuria. abstract_id: PUBMED:31203156 Neutrophil-to-lymphocyte ratio: An effective predictor of corticosteroid response in IgA nephropathy. Background: The neutrophil-to-lymphocyte ratio (NLR) has been shown to have a strong predictive ability for the prognosis of various diseases. However, the role of the NLR in IgA nephropathy is currently unknown. In this study, we attempted to evaluate how useful the NLR is for predicting corticosteroid response in IgA nephropathy. Methods: For a retrospective cohort with IgA nephropathy, patients who received corticosteroid therapy were recruited. To identify independent factors for corticosteroid response, the NLR and other parameters were analyzed using a multivariate regression model. The predictive usefulness was determined by the area under the receiver operating characteristic curve (AUROC). Survival analysis was conducted to compare the corticosteroid response between different groups according to the cut-off value of the NLR. Results: Multivariate analyses showed that NLR, serum creatinine, uric acid, and estimated glomerular filtration rate (eGFR) were independent factors for corticosteroid response. The AUROC of the NLR (0.748) was higher than those of the other factors. Patients with NLRs &lt;2.43 had a higher remission rate than those with NLRs ≥2.43 (P &lt; 0.001). Conclusion: The NLR is an effective predictor of corticosteroid response in IgA nephropathy. Answer: Yes, the red cell distribution width (RDW) appears to be a strong predictor for treatment response in primary glomerulonephritides (PGN).
A prospective study including 176 patients with nephrotic syndrome due to PGN found that patients with complete remission had significantly lower baseline RDW values compared to those who were resistant to treatment. The study concluded that the pre-treatment RDW value is a promising novel biomarker for predicting response to treatment in adult patients with NS due to PGN (PUBMED:24932664). Additionally, another study found that increased RDW levels are independently associated with a greater risk of glomerulonephritis in diabetic patients with albuminuria, suggesting that RDW may be a valuable and noninvasive predictive tool for differentiating between glomerulonephritis and diabetic nephropathy (PUBMED:32700683). Although the latter study addresses diagnostic discrimination rather than treatment response, these findings together support the notion that RDW is a strong predictor of treatment response in primary glomerulonephritides; a sketch of the threshold-based reasoning behind the first study's result follows.
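The central result of PUBMED:24932664 is that a single pre-treatment RDW threshold separates likely responders from non-responders. The sketch below shows how such test characteristics follow from a 2x2 table, and how the Kendall rank correlation quoted in the abstract can be computed; all counts and paired values are illustrative assumptions, not the study's raw data.

    from scipy.stats import kendalltau

    # Hypothetical 2x2 table for an RDW threshold predicting treatment
    # resistance (counts are illustrative, not the study's raw data).
    TP = 68   # resistant patients with RDW > 15%
    FN = 11   # resistant patients with RDW <= 15%
    FP = 9    # responders with RDW > 15%
    TN = 88   # responders with RDW <= 15%

    sensitivity = TP / (TP + FN)
    specificity = TN / (TN + FP)
    ppv = TP / (TP + FP)
    npv = TN / (TN + FN)
    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
          f"PPV {ppv:.1%}, NPV {npv:.1%}")

    # Kendall's tau on paired observations: baseline RDW vs. graded response
    # (2 = complete remission, 1 = partial, 0 = resistant). Values are made up.
    rdw = [13.2, 13.6, 14.8, 16.1, 17.5, 18.2]
    response = [2, 2, 1, 1, 0, 0]
    tau, p = kendalltau(rdw, response)
    print(f"Kendall tau = {tau:.2f} (p = {p:.3f})")

The negative tau mirrors the abstract's finding that a higher baseline RDW tracks with a poorer treatment response.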
Instruction: Is IVIg therapy warranted in progressive lower motor neuron syndromes without conduction block? Abstracts: abstract_id: PUBMED:24212395 Is IVIg therapy warranted in progressive lower motor neuron syndromes without conduction block? Objective: To evaluate the likelihood of response to IV immunoglobulin (IVIg) by studying consecutive patients presenting with progressive, asymmetric, pure lower motor neuron (LMN) limb weakness, and to determine the clinical phenotype of those who respond. Methods: Thirty-one consecutive patients with progressive, focal-onset LMN limb weakness, without evidence of clinical upper motor neuron signs; sensory, respiratory, or bulbar involvement; or evidence of motor nerve conduction block on electrodiagnostic studies, were prospectively included in this study. Each patient underwent treatment with IVIg (2 g/kg) for a minimum of 3 months. Electrodiagnostic studies, a neuromuscular symptom score, and the expanded Medical Research Council sum score were documented before and after IVIg treatment. The final diagnosis was determined after prolonged clinical follow-up. Results: Only 3 of 31 patients (10%) responded to IVIg. All responders demonstrated distal upper limb-onset weakness, EMG abnormalities confined to the clinically weak muscles, and a normal creatine kinase. This set of features was also identified in 31% of nonresponders presenting with distal upper limb weakness. Sex, age at onset, number of involved limb regions, and the duration of symptoms before treatment were not significantly different between groups. Conclusion: The findings of the present study do not support uniform use of IVIg in patients presenting with progressive asymmetric LMN limb weakness. It is suggested that IVIg treatment be limited to patients who demonstrate clinical and laboratory features suggestive of multifocal motor neuropathy. Classification Of Evidence: This study provides Class IV evidence that IVIg will not improve muscle function in 90% of patients with progressive, asymmetric, pure LMN weakness. abstract_id: PUBMED:22066029 Predicting a positive response to intravenous immunoglobulin in isolated lower motor neuron syndromes. Objective: To determine clinically related characteristics in patients with pure lower motor neuron (LMN) syndromes, not fulfilling accepted diagnostic criteria, who were likely to respond to intravenous immunoglobulin (IVIg) treatment. Methods: Demographic, clinical, laboratory and neurophysiological characteristics were prospectively collected from patients with undifferentiated isolated LMN syndromes who were then treated with IVIg. Patients were classified as either responders or non-responders to therapy with IVIg based on clinical data, and the two groups were compared. Results: From a total cohort of 42 patients (30 males, 12 females, aged 18-83 years), 31 patients responded to IVIg and 11 did not. Compared to patients who developed progressive neurological decline, responders were typically younger (45.8 compared to 56.0 years, P&lt;0.05) and had upper limb (83.9% compared to 63.6%, NS), unilateral (80.6% compared to 45.5%, P&lt;0.05), and isolated distal (54.1% compared to 9.1%, P&lt;0.05) weakness. Patients with predominantly upper limb, asymmetrical, and distal weakness were more likely to respond to IVIg therapy. Of the patients who responded to treatment, only 12.9% had detectable GM(1) antibodies, and conduction block (not fulfilling diagnostic criteria) was only identified in 22.6%.
Conclusions: More than 70% of patients with pure LMN syndromes from the present series responded to treatment with IVIg therapy, despite a low prevalence of detectable GM(1) antibodies and conduction block. Patients with isolated LMN presentations, not fulfilling accepted diagnostic criteria, may respond to IVIg therapy, irrespective of the presence of conduction block or GM(1) antibodies, and should be given an empirical trial of IVIg to determine treatment responsiveness. abstract_id: PUBMED:26515620 Progressive Muscular Atrophy. Progressive muscular atrophy (PMA) is a rare, sporadic, adult-onset motor neuron disease, clinically characterized by isolated lower motor neuron features; however, clinically evident upper motor neuron signs may emerge in some patients. Subclinical upper motor neuron involvement is identified pathologically, radiologically, and neurophysiologically in a substantial number of patients with PMA. Patients with subclinical upper motor neuron involvement do not fulfill the revised El Escorial criteria to participate in amyotrophic lateral sclerosis clinical trials. Intravenous immunoglobulin therapy is only marginally beneficial in a small subgroup of patients with lower motor neuron syndrome without conduction block. abstract_id: PUBMED:10369816 Use of human intravenous immunoglobulin in lower motor neuron syndromes. Objective: To determine whether patients with the clinical phenotype of multifocal motor neuropathy but without the electrophysiological criteria for conduction block would respond to intravenous immunoglobulin (IVIg). Methods: Ten patients with a slowly progressive, asymmetric, lower motor neuron disorder were selected and treated prospectively with IVIg at a dose of 2 g/kg over 5 days. All subjects had neurophysiological testing to look for evidence of conduction block before treatment. Muscle strength was assessed by MRC grades and hand-held myometry, measuring pinch and grip strength. A 20% increase in both pinch and grip myometry was considered a positive response. Results: In no patient was conduction block detected. Four of the 10 patients showed a positive response to IVIg, with the best response occurring in two patients who presented with weakness but without severe muscle wasting. Three of the four responders have continued to receive IVIg for a mean period of 17 months (range 15-24 months), with continued effect. The response to IVIg was not related to the presence of anti-GM1 antiganglioside antibodies, but responders had a selective pattern of muscle weakness and normal (&gt;90% predicted) vital capacity. Conclusion: The findings suggest that a course of IVIg should be considered in patients with the clinical phenotype of multifocal motor neuropathy but without neurophysiological evidence of conduction block. abstract_id: PUBMED:14648143 The spectrum of lower motor neuron syndromes. This review discusses the most important lower motor neuron syndromes. This relatively rare group of syndromes has not been well described clinically. Two subgroups can be distinguished: patients in whom the motor neurons themselves are primarily affected (lower motor neuron disease, LMND) and patients in whom the motor axons and their surrounding myelin are primarily affected (multifocal motor neuropathy, MMN), both leading to muscle atrophy and weakness. Both hereditary and sporadic forms of LMND have been described.
The discussion of recent advances in the genetic knowledge of several hereditary forms of LMND may lead to a better understanding of the pathophysiology and the development of therapeutic strategies. By contrast, the pathogenesis of sporadic LMND is largely unknown. It is, therefore, difficult to consider the various sporadic forms of LMND, discussed in this review, as separate diseases. Because the diagnostic and therapeutic options may differ, it would seem rational to consider sporadic LMND as a spectrum of syndromes which can be distinguished from each other on the basis of clinical presentation. MMN is a lower motor neuron syndrome with a presumed immune-mediated pathogenesis. Evidence of motor conduction block on nerve conduction studies and a positive response to treatment with intravenous immunoglobulins (IVIg) are considered the most relevant criteria for the diagnosis of MMN. As it is treatable, it is important to distinguish MMN from LMND. Careful electrophysiological analysis in the search for conduction block is, therefore, required in all adult patients with pure lower motor neuron syndromes. For the individual patient, distinction between the various lower motor neuron syndromes is important as it enables the physician to provide adequate information on the disease course in LMND and to facilitate early treatment in MMN. abstract_id: PUBMED:21305570 Do A-waves help predict intravenous immunoglobulin response in multifocal motor neuropathy without block? Introduction: Are there electrophysiological findings that predict response to intravenous immunoglobulin (IVIg) in patients with lower motor neuron (LMN) syndromes without multifocal conduction block (MCB)? Methods: We enrolled 9 patients with LMN syndromes without MCB to receive 18 weeks of IVIg therapy. Response was measured at weeks 2 and 18 using the Appel Amyotrophic Lateral Sclerosis (AALS) score (which includes grip and pincer strength measures), the ALS Functional Rating Scale (ALSFRS), and electrophysiological measures, including motor unit estimates (MUNEs). Results: No change occurred in AALS or ALSFRS scores posttreatment. Grip/pincer strength increased in 7 patients (P = 0.028) after initial treatment (responders); 2 showed no improvement (non-responders). No electrophysiological measure changed after treatment in either group, but MUNEs trended higher (P = 0.055). "Abnormal A-waves" (complex, repetitive biphasic, or present in multiple nerves) occurred in pretreatment studies more often in responders (P = 0.028). Discussion: "Abnormal A-waves" may signal IVIg-responsive LMN syndromes even if conduction block is absent. abstract_id: PUBMED:16490043 Prediction of response to IVIg treatment in patients with lower motor neurone disorders. The distinction between multifocal motor neuropathy, treatable by intravenous immunoglobulins (IVIg), and degenerative motor neurone disorders is often difficult. To find predictive factors for the response to IVIg treatment, 40 consecutive patients with pure lower motor neurone disorders (LMND) were prospectively examined. They all received at least two courses of IVIg (2 g/kg body weight). Prior to the first and before each of the following treatments, a standardized evaluation was performed, including clinical examination and neurophysiological and laboratory evaluation. According to changes in the neurological examination and the Neuromuscular Symptom Score, the patients were divided into responders and non-responders after the second course of treatment.
In our study, no single clinical, neurophysiological, or laboratory parameter was sensitive enough to predict response. The only single parameter that highly correlated with a positive response to treatment was an elevated GM1 antibody titre. Lack of response to IVIg treatment is likely in patients with generalization of electromyographic signs of denervation beyond the clinically involved site, proximal localization of the weakness, and an elevated level of creatine kinase. Conduction blocks do not distinguish between the two groups. We propose a scoring system combining clinical, serological and neurophysiological data in order to decide which patients with LMND may receive IVIg. abstract_id: PUBMED:12023421 Multifocal motor neuropathy: clinical and immunological features and response to IVIg in relation to the presence and degree of motor conduction block. Objective: To determine whether patients with clinically typical multifocal motor neuropathy (MMN) with or without definite or probable conduction block (CB) differ in terms of clinical presentation, immunological findings, or response to treatment with intravenous immunoglobulin (IVIg). Methods: 23 consecutive patients with the typical clinical features of MMN, consisting of a progressive multineuropathic motor impairment with minimal or no sensory loss, were studied. In 14 patients, electrophysiological studies disclosed the presence of a definite or probable CB according to the criteria proposed by the American Association of Electrodiagnostic Medicine (AAEM) in at least one motor nerve. Six patients had possible CB, defined as a degree of CB 10% less than that required by the AAEM for probable CB, while no CB was detected in three patients. Results: Patients with possible CB did not differ from those with a definite or probable CB in terms of age at disease onset (mean 38.8 v 38.2 years, respectively), distribution and severity of limb weakness, clinical impairment (mean Rankin score 2.2 in both), and frequency of antiganglioside antibodies (33% v 29%). Patients with possible CB had a longer mean disease duration (9 v 5.9 years, p &lt; 0.05) and a less frequent consistent response to IVIg (67% v 86%) than those with a definite or probable CB. Patients without a detectable CB had a similar frequency of antiganglioside antibodies (33%) but had a longer disease duration (20.3 years), greater impairment (Rankin score 2.7), and more frequent signs of axonal degeneration (41% of examined motor nerves) than patients with CB (13-15%, p &lt; 0.005). Only one patient without detectable CB (33%) consistently improved with IVIg. Conclusions: Patients with possible CB were clinically and immunologically indistinguishable from those with definite or probable CB, albeit with a slightly less frequent response to IVIg. This finding suggests that failure to fulfil AAEM criteria for CB in patients with otherwise clinically typical MMN should not preclude this diagnosis and consequently a treatment trial with IVIg. Whether the longer duration and greater severity of the disease and more frequent axonal impairment in patients without detectable CB than in those with CB explain their lower response to IVIg remains to be established. abstract_id: PUBMED:2327739 Lower motor neuron syndromes defined by patterns of weakness, nerve conduction abnormalities, and high titers of antiglycolipid antibodies. We studied 74 patients with progressive, asymmetrical lower motor neuron syndromes.
Clinical features of these patients, including age, sex, disease duration, patterns of weakness, and reflex changes, were evaluated by review of records. In each patient the clinical features were compared to the type of nerve conduction abnormalities and to the specificities of high-titer serum antiglycolipid antibodies. Antibody specificities were determined by an enzyme-linked immunosorbent assay using purified glycolipids and carbohydrates as substrates. Our results show that high titers of antibodies to glycolipids are common in sera of patients with lower motor neuron syndromes. Selective patterns of reactivity indicate that specific carbohydrate epitopes on the glycolipids are the targets of the high-titer antibodies in individual patients with lower motor neuron syndromes. Several distinct lower motor neuron syndromes can be identified based on clinical, physiological, and antiglycolipid antibody characteristics. These syndromes include multifocal motor neuropathy with evidence of multifocal conduction block on motor, but not sensory, axons and frequent (84%) high titers of anti-GM1 ganglioside antibodies; a lower motor neuron syndrome with predominantly distal weakness early in the disease course, no conduction block, and a high incidence (64%) of anti-GM1 antibodies; and a lower motor neuron syndrome with predominant early weakness in proximal muscles and serum antibodies to asialo-GM1 that do not cross-react with GM1 ganglioside. abstract_id: PUBMED:17610902 Monomelic neurogenic syndromes: a prospective study. Monomelic neurogenic syndromes are rare. Their classification and prognostic features have not been addressed in the European population. A prospective study of 17 patients with monomelic neurogenic amyotrophy, of upper or lower limb onset, with progression limited to one limb for three or more years. Clinical and neurophysiological studies were performed in the subsequent 3 or more years. Fifteen patients were of European origin and two were Asian. Those presenting with proximal monomelic weakness or with involvement of the posterior compartment of the lower leg showed no further progression after the initial period of development of the syndrome. Brisk reflexes in wasted muscles did not predict progression. Electromyographic signs of denervation in the opposite limb at presentation did not predict later progression. Transcranial magnetic stimulation (TMS) features of corticospinal dysfunction were a useful predictor of subsequent progression (p=0.01). One patient with lower limb onset developed conduction block with weakness in an upper limb nine years after presentation, and this upper limb weakness responded to IVIg therapy. This adult-onset European group of patients is different as compared with juvenile-onset Asian cases. The clinical syndromes appear heterogeneous, but neurophysiological investigations, in particular TMS, can be helpful in determining prognosis. Multifocal motor neuropathy should be considered when there is progression, even years after onset. Answer: The use of IVIg therapy in progressive lower motor neuron (LMN) syndromes without conduction block is a subject of clinical investigation, and the evidence suggests that it may not be universally warranted. A study by Katz et al. (PUBMED:24212395) found that only 10% of patients with progressive asymmetric LMN limb weakness responded to IVIg, suggesting that IVIg treatment should be limited to patients with specific clinical and laboratory features suggestive of multifocal motor neuropathy (MMN). 
In contrast, a study by Léger et al. (PUBMED:22066029) reported that more than 70% of patients with pure LMN syndromes responded to IVIg therapy, despite a low prevalence of detectable GM1 antibodies and conduction block. They suggested that patients with isolated LMN presentations may respond to IVIg therapy and should be given an empirical trial to determine treatment responsiveness. The review by van den Berg-Vos et al. (PUBMED:14648143) emphasizes the importance of distinguishing MMN from LMND, as MMN is treatable with IVIg and requires careful electrophysiological analysis to search for conduction block. Meanwhile, a study by Simmons et al. (PUBMED:10369816) found that a subset of patients with the clinical phenotype of MMN but without electrophysiological evidence of conduction block responded to IVIg, suggesting that a trial of IVIg could be considered in such patients. The study by Rotta et al. (PUBMED:16490043) proposed a scoring system combining clinical, serological, and neurophysiological data to decide which patients with LMND may benefit from IVIg, as no single parameter was sensitive enough to predict response. Additionally, a study by Delmont et al. (PUBMED:12023421) found that patients with possible CB were clinically and immunologically indistinguishable from those with definite or probable CB, suggesting that failure to fulfill criteria for CB should not preclude a treatment trial with IVIg.
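To make the idea of a composite decision rule concrete, the sketch below expresses the kind of multi-domain scoring system proposed by Rotta et al. as toy Python code. The individual items, weights, and threshold are hypothetical illustrations only; the abstracts do not specify the actual published scoring rules.

```python
# Hypothetical sketch of a composite "IVIg trial" score for LMN syndromes.
# Items, weights, and the cut-off are illustrative assumptions, not the
# published scoring system of Rotta et al. (PUBMED:16490043).

def ivig_trial_score(patient: dict) -> int:
    """Sum simple clinical/serological/neurophysiological items."""
    score = 0
    if patient.get("conduction_block"):          # neurophysiological
        score += 2
    if patient.get("elevated_gm1_titre"):        # serological
        score += 2
    if patient.get("distal_onset_weakness"):     # clinical
        score += 1
    # Features the abstracts link to NON-response lower the score.
    if patient.get("proximal_weakness"):
        score -= 1
    if patient.get("elevated_creatine_kinase"):
        score -= 1
    if patient.get("widespread_denervation"):
        score -= 1
    return score

example = {
    "conduction_block": False,
    "elevated_gm1_titre": True,
    "distal_onset_weakness": True,
    "proximal_weakness": False,
    "elevated_creatine_kinase": False,
    "widespread_denervation": False,
}

s = ivig_trial_score(example)
# Hypothetical cut-off: consider an empirical IVIg trial when score >= 2.
print(f"score={s}, consider IVIg trial: {s >= 2}")
```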
Instruction: The transitive fallacy for randomized trials: if A bests B and B bests C in separate trials, is A better than C? Abstracts: abstract_id: PUBMED:12429069 The transitive fallacy for randomized trials: if A bests B and B bests C in separate trials, is A better than C? Background: If intervention A bests B in one randomized trial, and B bests C in another randomized trial, can one conclude that A is better than C? The problem was motivated by the planning of a randomized trial, where A is spiral-CT screening, B is x-ray screening, and C is no screening. On its surface, this would appear to be a straightforward application of the transitive principle of logic. Methods: We extended the graphical approach for omitted binary variables that was originally developed to illustrate Simpson's paradox, applying it to hypothetical, but plausible scenarios involving lung cancer screening, treatment for gastric cancer, and antibiotic therapy for clinical pneumonia. Results: Graphical illustrations of the three examples show different ways the transitive fallacy for randomized trials can arise due to changes in an unobserved or unadjusted binary variable. In the most dramatic scenario, B bests C in the first trial, A bests B in the second trial, but C bests A at the time of the second trial. Conclusion: Even with large sample sizes, combining results from a previous randomized trial of B versus C with results from a new randomized trial of A versus B will not guarantee correct inference about A versus C. A three-arm trial of A, B, and C would protect against this problem and should be considered when the sequential trials are performed in the context of changing secular trends in important omitted variables such as therapy in cancer screening trials. abstract_id: PUBMED:19444416 Viral hepatitis B and C Chronic hepatitis B (HBV) and C virus (HCV) infection can lead to liver cirrhosis, hepatocellular carcinoma and death. Treatment of these worldwide prevalent infectious diseases is subject to intensive research efforts with development of new antiviral substances and optimization of treatment strategies using molecular markers. The goal of HBV and HCV treatment is control and elimination of viral replication, respectively, thereby preventing hepatitis-associated complications. While interferon alpha is used less frequently to treat hepatitis B today, it is still (in the pegylated or albumin-fused form) an essential component of hepatitis C therapy. The growing number of targeted therapies such as new nucleos(t)ide analogs, HCV protease and RNA polymerase inhibitors and other new compounds has added complexity to the treatment of viral hepatitis. This update summarizes the current standard of care as well as new developments in chronic hepatitis B and C therapy. abstract_id: PUBMED:29138012 Systematic review on the reporting quality of randomized controlled trials in patients with hepatitis B or C in China. Background: The numbers of articles reporting randomized controlled trials (RCTs) on viral hepatitis in China have been increasing, but there have been few systematic studies evaluating the reporting quality of RCTs in this field. This study was performed to assess the reporting quality of RCTs on the treatment of hepatitis B and C in China from 1991 to 2015.
Methods: Articles published between January 1991 and December 2015 were identified via the PubMed, MEDLINE, and Embase databases using the key words "randomized clinical trials", "treatment", "therapy", "hepatitis B", "HBV", "hepatitis C", "HCV", "China", and "Chinese". The reporting quality was assessed against the Consolidated Standards of Reporting Trials (CONSORT) checklist. Results: In total, 211 RCTs on the treatment of hepatitis B or C were included. The number of articles focusing on these RCTs increased rapidly over time, while the reporting quality improved steadily over time. Overall, compliance with the key components of the CONSORT checklist was low, with only 8.5%, 3.8%, and 11.4% of the articles fulfilling the reporting requirements of randomization, allocation concealment, and blinding, respectively. Conclusions: Both the number and the quality of RCT articles were found to have increased steadily over the last two decades. However, compliance with the key components of the CONSORT checklist still needs improvement. It is hoped that the results of this study will lead to improvements in the reporting quality of clinical trials on hepatitis B and C in China. abstract_id: PUBMED:15719800 Diagnosis of hepatitis B and C Diagnosis of hepatitis B and C is based on sensitive and specific serological assays. Molecular assays for the detection of HBV DNA or HCV RNA are in some instances needed to confirm the diagnosis. However, their principal role is in the therapeutic decision making and in the monitoring of antiviral therapy. abstract_id: PUBMED:1905835 Post-hepatitis B, B-D and C cirrhosis Three viruses are responsible for posthepatitic cirrhosis: hepatitis B virus, hepatitis D (also called delta) virus and hepatitis C virus, formerly known as non-A, non-B virus. Delta virus is a defective organism which can replicate only when coinfection with hepatitis B virus is present. These three viruses cause chronic active hepatitis which, after a period of 5 to 30 years, gives rise to posthepatitic cirrhosis. Chronic infections with these viruses account for more than 90% of chronic active hepatitis in France and constitute a major cause of cirrhosis. Besides complications (hepatocellular insufficiency, portal hypertension, hepatocellular carcinoma) which are common to all types of cirrhosis irrespective of their origin, the course of posthepatitic cirrhosis is characterized by possible episodes of reactivation of chronic hepatitis and by a very high risk of hepatocellular carcinoma. Two kinds of treatment are now available: antiviral therapy (basically with interferon alpha) and liver transplantation. Antiviral therapy must, of course, be given before the stage of cirrhosis has been reached. Liver transplantation in these patients raises special problems due to recurrence of viral infection in the graft. Vaccination against hepatitis B virus, which also prevents the B-delta coinfection, must be systematic in populations at risk. abstract_id: PUBMED:24409613 Dialysis and hepatitis B and C Haemodialysis and peritoneal dialysis are two dialysis techniques which can be offered to patients with kidney failure who are carriers of the hepatitis B or C virus. These techniques have specific features that require a dedicated protocol. abstract_id: PUBMED:16381232 Hepatitis B and C: natural course of disease Significant progress in the understanding of the natural history of hepatitis B and C has been made in recent years due to molecular diagnosis techniques.
The most important biologic feature of hepatitis B and C viruses (HBV, HCV) is their ability to cause chronic hepatitis. The natural course of HBV infection is variable, ranging from inactive HBsAg carrier state to progressive chronic hepatitis that can evolve into liver cirrhosis and hepatocellular carcinoma. HBeAg-negative chronic hepatitis is due to a naturally occurring HBV variant with mutations in the precore or basic core promoter regions. It accounts for the majority of cases in many European countries and is generally associated with a more severe liver disease. The morbidity and mortality in chronic hepatitis B are linked to the evolution to cirrhosis and hepatocellular carcinoma. The progression of fibrosis is strongly associated with persistent active viral replication. When the diagnosis is made, the 5-year cumulative incidence of developing cirrhosis ranges from 8% to 20%. The 5-year cumulative incidence of hepatic decompensation is 20%. Hepatocellular carcinoma is one of the most common cancers worldwide, 75% of which are related to chronic HBV infection. Coinfection with hepatitis D virus can lead to a more progressive liver disease in a shorter period of time. Hepatitis C virus infection becomes chronic in 80% of infected persons, resulting in different stages of chronic hepatitis, with 20%-30% progressing to cirrhosis within a 20-year period. The progression of fibrosis determines the ultimate prognosis. The major factors known to be associated with fibrosis progression are older age, male gender and alcohol consumption. Viral load and genotype do not play a role in the disease progression. Progression to fibrosis is more rapid in immunocompromised patients. abstract_id: PUBMED:9824948 Hepatitis B and C: current therapy Treatment of acute viral hepatitis B is symptomatic; fulminant cases may require liver transplantation. In chronic hepatitis B, interferon (IFN)-alpha induces sustained response rates of 30-40%. Nucleoside analogues such as famciclovir or lamivudine appear to be promising for treatment in non-responders or cirrhotic and immunosuppressed patients. IFN-alpha may reduce the rate of chronic courses in acute hepatitis C infections. Chronic hepatitis C patients with elevated ALT activities, positive serum HCV RNA and portal or bridging fibrosis on biopsy are recommended for treatment with IFN-alpha. Sustained responses are observed in less than 20% of treated patients. Retreatment with IFN-alpha may be indicated in non-responders or in case of relapse. Combination therapy of IFN-alpha plus ribavirin may emerge as treatment of choice for patients with a relapse in the near future. abstract_id: PUBMED:9471838 Diagnosis of Hepatitis B and C: current aspects The genetic variability of hepatitis B virus (HBV) is, for a DNA virus, extremely high. Escape mutants of the pre-core/core gene and S-gene are of clinical and diagnostic importance. HBeAg negative (pre-core) mutants are frequently associated with a more severe prognosis of chronic hepatitis B. One amino acid substitution in the neutralizing epitope of HBsAg is responsible for the failure of active and passive immunization against HBV. S-gene mutants may not be detected with monoclonal HBsAg tests. Hepatitis C virus (HCV) is even more heterogeneous than HBV. At least 6 genotypes may be differentiated. Genetic variability is the cause of reinfections, which in general have a better prognosis than primary infection. Diagnosis of infections with genotypes 4 to 6 is not absolutely reliable.
The influence of different genotypes on disease progression needs to be further investigated. A more severe prognosis and higher HCV RNA concentrations are observed in genotype 1b infections. The higher RNA concentration is partly due to an in vitro artefact, since the amplification rate with current primer pairs is optimal for genotype 1. abstract_id: PUBMED:28933169 History and presence of hepatitis B and C therapy Infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are the world's major causes of chronic liver disease. Care of patients infected with HBV and/or HCV has significantly improved over the last 20 years thanks to a better understanding of the pathophysiology of the disease and improvements in diagnostic, therapeutic and preventive options. The goal of treatment of chronic hepatitis B is to extend the length of life and improve its quality by blocking the progression of chronic hepatitis to cirrhosis, decompensated cirrhosis and hepatocellular carcinoma (HCC). Chronic HBV infection can currently be treated either with tenofovir or entecavir orally (absolute majority of cases), given long-term (for years) or even life-long, or with pegylated interferon α-2a, which is given by injection once a week for 48 weeks (limited possibility of use). The primary goal of chronic hepatitis C treatment is to cure the infection, by achieving a sustained virological response defined as undetectable virus nucleic acid (HCV RNA) in peripheral blood 12 or 24 weeks after the end of therapy. At present, IFN-free regimens have become the standard of chronic HCV therapy, with an efficacy of 95-100% and a minimum of side effects and contraindications. Key words: chronic hepatitis B - chronic hepatitis C - IFN-free therapy. Answer: The transitive fallacy for randomized trials refers to the incorrect assumption that if intervention A is better than intervention B in one randomized trial, and intervention B is better than intervention C in another randomized trial, then intervention A must be better than intervention C. However, this is not necessarily true and can be a logical fallacy. The transitive fallacy can arise due to changes in an unobserved or unadjusted binary variable that may affect the outcomes of the trials. For example, in the context of lung cancer screening, if B (x-ray screening) bests C (no screening) in the first trial, and A (spiral-CT screening) bests B in the second trial, it does not guarantee that A is better than C at the time of the second trial. This is because there could be changing secular trends in important omitted variables such as therapy in cancer screening trials. Therefore, even with large sample sizes, combining results from separate randomized trials of A versus B and B versus C will not necessarily provide correct inference about A versus C. To protect against this problem, a three-arm trial including A, B, and C should be considered, especially when sequential trials are performed in the context of changing trends in important variables (PUBMED:12429069).
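To see how the fallacy can arise numerically, the short Python sketch below reproduces the omitted-binary-variable mechanism described in PUBMED:12429069 with invented numbers: per-stratum outcome probabilities are held fixed, but the prevalence of the omitted variable X shifts between the two trial periods. All probabilities are hypothetical and chosen only to demonstrate the reversal.

```python
# Illustration of the transitive fallacy for randomized trials
# (mechanism from PUBMED:12429069; all numbers are hypothetical).
# X is an omitted binary variable (e.g., availability of an effective
# new therapy) whose prevalence changes between trial periods.

CURE = {  # P(good outcome | arm, X)
    "A": {0: 0.40, 1: 0.60},
    "B": {0: 0.30, 1: 0.55},
    "C": {0: 0.10, 1: 0.95},
}

def marginal(arm: str, p_x1: float) -> float:
    """Marginal success probability when P(X=1) = p_x1."""
    return CURE[arm][0] * (1 - p_x1) + CURE[arm][1] * p_x1

p_trial1, p_trial2 = 0.05, 0.70  # P(X=1) during each trial period

b1, c1 = marginal("B", p_trial1), marginal("C", p_trial1)
a2, b2, c2 = (marginal(arm, p_trial2) for arm in "ABC")

print(f"Trial 1: B={b1:.3f} vs C={c1:.3f} -> B bests C: {b1 > c1}")
print(f"Trial 2: A={a2:.3f} vs B={b2:.3f} -> A bests B: {a2 > b2}")
print(f"At trial-2 conditions: C={c2:.3f} vs A={a2:.3f} "
      f"-> C bests A: {c2 > a2}")
```

Because only a three-arm trial run under the same conditions compares A and C directly, it avoids the hidden shift in X that drives the reversal.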
Instruction: Quality of Life and Alleviation of Symptoms After Breast Reduction for Macromastia in Obese Patients: Is Surgery Worth It? Abstracts: abstract_id: PUBMED:26718700 Quality of Life and Alleviation of Symptoms After Breast Reduction for Macromastia in Obese Patients: Is Surgery Worth It? Background: Breast hypertrophy can cause a variety of symptoms and affect lifestyle and quality of life. Breast reduction, being the most effective treatment, is sometimes difficult to establish as standard treatment in obese patients (difficulty differentiating symptoms of macromastia from those of obesity, higher rate of complications). Aim: To evaluate the effect of reduction mammaplasty (quality of life and symptoms) in obese patients compared with non-obese patients. Methods: This is a prospective study of patients undergoing reduction mammaplasty. Patients were allocated to non-obese (BMI < 29) and obese (BMI > 30) groups. Demographic data, comorbidities, specific symptoms questionnaire, data from the surgical procedure, Spanish version of the Health-Related Quality of Life (SF-36) questionnaire, complications and sequelae were recorded and collected before the operation and at 1 month and 1 year after. Chi-square, Fisher's exact test, McNemar, Mann-Whitney U and Kruskal-Wallis tests were used for statistical analysis. Results: One hundred twenty-one consecutive patients were operated on; 54 (44.6%) obese and 67 (55.4%) non-obese. The average age of patients was 40.7 (18-78), average volume of resected tissue was 1,784 g (401-5,790), and average hospital stay was 2.94 days (1-11). There were no differences between obese and normal BMI patients with regard to length of hospital stay, complications, sequelae, or reoperations. Symptoms improved in both groups. Physical and mental components of the SF-36 improved at 1 year in both groups (p < 0.001). The mental health component improved at 1 month (p < 0.001) in both groups. Conclusions: Obese patients should be considered for reduction mammaplasty surgery in the same way as women of normal weight. Level Of Evidence III: This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266. abstract_id: PUBMED:24508223 Can the efficacy of reduction mammoplasty be predicted? The applicability and predictive value of breast-related symptoms questionnaire in measuring breast-related symptoms pre- and postoperatively. To measure the impact of reduction mammoplasty, the Breast-Related Symptoms Questionnaire (BRSQ) was translated into Finnish and tested among women seeking reduction mammoplasty. This previously validated questionnaire focuses on 13 breast hypertrophy-related symptoms and their frequency. In this prospective multicentre study, the breast-related symptoms of 98 women were measured preoperatively with the BRSQ and the health-related quality of life (HRQoL) with the 15-dimension instrument (15D), a well-established generic tool. A total of 59 participants were followed up at least 6 months postoperatively. The women were middle-aged (mean age 44 years) and most of them overweight (mean body mass index (BMI) 29).
All patients had frequent physical symptoms and disability due to their breasts and reported a low breast symptom severity score (BSS; mean 27, range 13-38). Mean amount of resected breast tissue was 1310 g per patient. Postoperatively, the breast-related symptoms were significantly relieved, and 55 of 59 operated patients reported less frequent or non-existent symptoms (mean BSS 59, range 22-65). The BSS improved especially in obese women and those with pendulous breasts. A low preoperative BSS was related to considerable benefit from surgery. HRQoL score improved significantly from 0.889 to 0.930 (P < 0.001) and significant improvement was seen especially in dimensions such as discomfort, usual activities and breathing. In conclusion, the BRSQ is an easy tool to use to quantify breast-related symptoms. It visualised effectively the impact of the reduction mammoplasty. Surgical breast reduction significantly improves breast-related symptoms and the HRQoL among women with many breast-related symptoms. The present guidelines for patient selection in breast reduction surgery should be updated to use valid measurement and scientific evidence. abstract_id: PUBMED:27038061 Quality of Life Following Symptomatic Macromastia Surgery: Short- and Long-term Evaluation. Unlabelled: Mammary hypertrophy or macromastia can cause a wide range of symptoms (physical, psychosomatic or behavioral), which affect patients' quality of life. Breast reduction can, in most of the cases, solve the problem. However, certain factors could have a negative effect on the outcome of surgery. The aims of this study were to discover the degree of patient satisfaction (short- and long-term) and to evaluate results of reduction mammoplasty, and also to ascertain which factors may have a negative role on the effectiveness of breast reduction surgery. We carried out a prospective and longitudinal study of 121 patients who underwent breast reduction surgery. Quality of life, outcome of surgery (complications and sequelae) and degree of patient satisfaction were evaluated at 1 month and at 1 year after reduction mammoplasty. Mean patient age was 40.71 (SD = 12.02). Among them, 35.5% were overweight, 44.6% were obese and 34.7% were smokers. The most common symptom was pain. The mean amount of resected breast tissue was 1785 g (SD = 876). A total of 27.3% of the patients suffered complications and 30.60% suffered sequelae. Our results show an improvement in symptoms (p < 0.001) and quality of life (p < 0.001 to p = 0.002) 1 month after and 1 year after breast reduction compared with the preoperative situation. Neither age, body mass index, smoking habit nor the amount of tissue removed had a negative effect on the results of surgery. One year after surgery, the majority of patients were satisfied with the outcome (96.6%), they would recommend it to others (96.6%), and they would undergo surgery a second time (95.8%). Conclusions: Breast reduction is highly efficient in resolving symptoms and in improving quality of life. It leads to a high level of short- and long-term satisfaction irrespective of each patient's individual characteristics. abstract_id: PUBMED:27344408 Impact of reduction mammoplasty on the quality of life of obese patients suffering from symptomatic macromastia: A descriptive cohort study.
Background: Although reduction mammoplasty (RM) is an effective and efficient treatment for symptomatic macromastia, overweight and obese patients who request this treatment are frequently rejected because of selection criteria based on the body mass index. Scientific evidence is inconclusive regarding the increased postoperative complications in obese patients undergoing RM, and there is a lack of adequately designed studies examining the impact of RM on the quality of life of this group of patients. Patients And Methods: A descriptive cohort study was performed on 37 consecutive obese patients (body mass index > 30 kg/m²) undergoing bilateral RM for symptomatic macromastia. Short Form SF-36 quality-of-life questionnaires were completed at interviews a week before surgery and 6 and 18 months after surgery. In addition, 37 women of matching ages, who were companions of patients hospitalized at our short-stay surgery unit, were used as a control group for comparison. Significant differences between repeated measurements on a single sample were assessed using the Wilcoxon signed-rank test. To evaluate these changes, we computed effect sizes using Hedges' g corrected. Results: The preoperative SF-36 physical component score was significantly lower than the control group's score (40 vs. 53, p < 0.001). There was no significant difference in the mental component score (45 vs. 49, p = 0.210). Postoperative SF-36 scores were increased with a normalizing effect, as 18 months after surgery only the body pain domain scored lower than the control group scores. Conclusions: According to our results, obese women with symptomatic macromastia undergoing RM exhibited increased quality of life, and this improvement was maintained over time. Therapy: Level III Evidence. abstract_id: PUBMED:10845288 Physical and psychosocial symptoms among 88 volunteer subjects compared with patients seeking plastic surgery procedures to the breast. In an investigation of the relationship between macromastia and physical and psychosocial symptoms, 88 female university students, 21 augmentation mammaplasty patients, and 31 breast reduction patients graded somatic and psychosocial symptoms. The intent of the study was to discover which complaints were most common among women presenting for reduction mammaplasty and to determine whether height/weight index and brassiere chest measurement and cup size might affect their symptoms. Both the student group and the augmentation mammaplasty patients differed significantly from the breast reduction patients. Eighty-one percent of the reduction patients complained of neck and back pain. Seventy-seven percent complained of shoulder pain, 58 percent complained of chafing or rash; 45 percent reported significant limitation in their activity; and 52 percent were unhappy with their appearance (p < 0.001 compared with augmentation and student groups). Physical symptoms were related to height/weight index and bra chest and cup sizes in each of the three participating groups. It was found that patients who present for symptom-related reduction mammaplasty have a disease-specific group of physical and psychosocial complaints that are more directly related to large breast size than to being overweight. abstract_id: PUBMED:9915166 A retrospective study of changes in physical symptoms and body image after reduction mammaplasty. Reduction mammaplasty is performed typically to alleviate the painful physical symptoms of macromastia.
Women who suffer from macromastia also frequently present to the plastic surgeon with heightened body image dissatisfaction and maladaptive behavioral changes in response to their breast size. Numerous investigations have demonstrated improvement in physical symptoms after breast reduction surgery. Studies have also suggested that psychological improvement occurs postoperatively; however, they have not used well-validated, standardized psychological measures. The present study is a retrospective analysis of the physical and psychological status of women who underwent reduction mammaplasty. One hundred ten patients who underwent a reduction mammaplasty between 1982 and 1996 were mailed a packet of questionnaires designed to assess current physical symptoms and body image. Sixty-one of the 110 patients (55 percent) responded. The vast majority reported substantial improvement or elimination of neck, back, shoulder, and breast pain, grooving from bra straps, poor posture, skin irritation, and social embarrassment. In addition, they reported significantly less dissatisfaction with their breasts as compared with a sample of breast reduction patients assessed preoperatively. Symptom relief and improved body image occurred independently of preoperative body weight, as we found few significant differences between obese and non-obese women concerning the resolution of physical symptoms or improvement in body image. Results provide further evidence of the efficacy of reduction mammaplasty not only for relief of physical symptoms but also for alleviation of body image dissatisfaction. abstract_id: PUBMED:33234945 Optimal Timing for Reduction Mammaplasty in Adolescents. Background: Reduction mammaplasty effectively alleviates symptoms and restores quality of life. However, operating on adolescents remains controversial, partly because of fear of potential postoperative breast growth. This cross-sectional study provides surgeons with a method to predict the optimal timing, or biological "sweet spot," for reduction mammaplasty to minimize the risk of breast regrowth in adolescents. Methods: The authors reviewed the medical records of women aged 12 to 21 years who underwent reduction mammaplasty from 2007 to 2019. Collected data included symptomology, perioperative details, and postoperative outcomes. Results: Four hundred eighty-one subjects were included in analyses and were, on average, 11.9 years old at first menses (menarche) and 17.9 years old at surgery. Six percent of subjects experienced postoperative breast growth. Breast size appears to stabilize considerably later in obese adolescents compared to healthy-weight and overweight patients, and breast growth in obese macromastia patients may not end until 9 years after menarche. Operating on obese women before this time point increased the likelihood of glandular breast regrowth by almost 120 percent (OR, 1.18; 95 percent CI, 1.11 to 1.26). Surgery performed less than 3 years after menarche, the commonly regarded end of puberty, increased the likelihood of glandular regrowth by over 700 percent in healthy-weight and overweight subjects (OR, 7.43; 95 percent CI, 1.37 to 40.41). Conclusions: Findings suggest that reduction mammaplasty age restrictions imposed by care providers and third-party payors may be arbitrary. Surgical readiness should be determined on an individual basis incorporating the patient's biological and psychological maturity, obesity status, potential for postoperative benefit, and risk tolerance for postoperative breast growth. 
Clinical Question/level Of Evidence: Risk, III. abstract_id: PUBMED:23889919 Breast reduction in adolescents: indication, timing, and a review of the literature. Background: Adolescent breast hypertrophy can have long-term negative medical and psychological impacts. In select patients, breast reduction surgery is the best treatment. Unfortunately, many in the general and medical communities hold certain misconceptions regarding the indications and timing of this procedure. Several etiologies of adolescent breast hypertrophy, including juvenile gigantomastia, adolescent macromastia, and obesity-related breast hypertrophy, complicate the issue. It is our hope that this paper will clarify these misconceptions through a combined retrospective and literature review. Methods: A retrospective review was conducted looking at adolescent females (≤18 years old) who had undergone bilateral breast reduction surgery. Their preoperative comorbidities, BMI, reduction volume, postoperative complications, and subjective satisfaction were recorded. In addition, a literature review was completed. Results: 34 patients underwent bilateral breast reduction surgery. The average BMI was 29.5 kg/m². The average volume resected during bilateral breast reductions was 1820.9 g. Postoperative complications included dehiscence (9%), infection (3%), and poor scarring (6%). There were no cases of recurrence or need for repeat operation. Self-reported patient satisfaction was 97%. All patients described significant improvements in body image and participation in social activities. The literature review yielded 25 relevant reported articles, 24 of which are case studies. Conclusion: Reduction mammaplasty is safe and effective. It is the preferred treatment method for breast hypertrophy in the adolescent female and may be the only way to alleviate the increased social, psychological, and physical strain caused by this condition. abstract_id: PUBMED:12859918 An investigation of the suitability of bra fit in women referred for reduction mammaplasty. Reduction mammaplasty is rationed in NHS plastic surgery provision, despite abundant evidence that most women who undergo this operation obtain significant improvement in their physical health and quality of life. We suspected that women seeking reduction mammaplasty often wear ill-fitting bras, which may exacerbate some of their symptoms. Therefore, we studied 103 women who attended a nurse-run pre-assessment clinic, asking them what size bra they currently wore and then measuring them to see whether their bra size was correct. We also questioned bra manufacturers, designers and shop bra fitters about bra manufacture, sizing and fitting techniques, and we reviewed these findings. Of the 102 women suitable for inclusion in the study, all wore the wrong size bra. Their mean 'claimed' back measurement was 36 inches (range: 30-42 in.) and their mean cup size was F (range: C-J). We found that all but one underestimated their back measurement (by a mean of 4 in.; range: -2-10 in.) and overestimated their cup size (by a mean of three sizes; range: one size smaller to seven sizes larger) when compared with manufacturers' fitting guidelines. Multiple regression analysis, used to assess the relationships of various factors to incorrect bra sizing, showed a strong link (Pearson correlation=0.54; P<0.001) between obesity and inaccurate back measurement. The reasons why women with breast hypertrophy wear incorrectly fitting bras are discussed.
We conclude that obesity, breast hypertrophy, fashion and bra-fitting practices combine to make those women who most need supportive bras the least likely to get accurately fitted bras, so exacerbating the symptoms for which they seek surgery. abstract_id: PUBMED:36625798 Breast Reduction Surgery in Adolescents. Macromastia (overgrowth of the female breast) is distressing and disabling, often starting in middle school and continuing into adulthood. Breast reduction is an effective treatment of symptoms of macromastia in adults, and its application in adolescents has been increasing. Special considerations within the adolescent population include interruption of age-appropriate activities, development of healthy body image/esteem, the connection to obesity, potential for postoperative recurrence/continued growth, the effect of surgery on future lactation and breast-feeding, and the adolescent's ability to participate in surgical decision-making (assent). This review highlights these issues and reviews breast reduction surgery outcomes and complications as they apply to the female adolescent. After reading this article, the reader should be able to identify adolescents who may be candidates for breast reduction surgery and have the information needed to counsel patients before their referral to a surgeon. [Pediatr Ann. 2023;52(1):e31-e35.]. Answer: Based on the provided abstracts, breast reduction surgery appears to be worth it for obese patients suffering from macromastia, as it significantly improves quality of life and alleviates symptoms associated with large breast size. A prospective study of patients undergoing reduction mammaplasty, which included both obese (BMI > 30) and non-obese (BMI < 29) patients, found that symptoms improved in both groups. Physical and mental components of the SF-36, a health-related quality of life questionnaire, improved at 1 year in both groups, suggesting that obese patients should be considered for reduction mammaplasty surgery in the same way as women of normal weight (PUBMED:26718700). Another study showed that reduction mammoplasty significantly relieved breast-related symptoms, with 55 of 59 operated patients reporting less frequent or non-existent symptoms postoperatively. The study concluded that surgical breast reduction significantly improves breast-related symptoms and health-related quality of life among women with many breast-related symptoms (PUBMED:24508223). Furthermore, a descriptive cohort study on 37 consecutive obese patients undergoing bilateral reduction mammoplasty for symptomatic macromastia revealed that these patients exhibited increased quality of life post-surgery, and this improvement was maintained over time (PUBMED:27344408). In addition, a study evaluating the short- and long-term satisfaction of patients after reduction mammoplasty found that the majority of patients were satisfied with the outcome one year after surgery, and there was an improvement in symptoms and quality of life compared with the preoperative situation (PUBMED:27038061). Overall, the evidence from these studies suggests that breast reduction surgery is beneficial for obese patients with macromastia, leading to improved quality of life, alleviation of physical symptoms, and high levels of patient satisfaction. Therefore, it can be concluded that breast reduction surgery is indeed worth it for obese patients suffering from macromastia.
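As a concrete illustration of the statistics used in PUBMED:27344408 (Wilcoxon signed-rank tests with Hedges' g corrected for effect size), the sketch below applies them to invented paired SF-36 physical-component scores. The data, and the particular Hedges' correction variant shown, are assumptions for demonstration, not the study's own numbers; it assumes numpy and scipy are available.

```python
# Minimal sketch of the pre/post analysis style described in
# PUBMED:27344408: Wilcoxon signed-rank test plus Hedges' g (corrected).
# The SF-36 scores below are invented for illustration.
import numpy as np
from scipy.stats import wilcoxon

pre = np.array([38, 42, 35, 40, 44, 37, 39, 41, 36, 43], dtype=float)
post = np.array([49, 51, 45, 52, 50, 46, 48, 55, 44, 53], dtype=float)

stat, p = wilcoxon(pre, post)  # paired, non-parametric

# Hedges' g: Cohen's d with a small-sample bias correction
# (one common variant: J = 1 - 3 / (4*df - 1), df = n1 + n2 - 2).
n1, n2 = len(pre), len(post)
sd_pooled = np.sqrt(((n1 - 1) * pre.var(ddof=1) +
                     (n2 - 1) * post.var(ddof=1)) / (n1 + n2 - 2))
d = (post.mean() - pre.mean()) / sd_pooled
g = d * (1 - 3 / (4 * (n1 + n2 - 2) - 1))

print(f"Wilcoxon: W={stat:.1f}, p={p:.4f}; Hedges' g={g:.2f}")
```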
Instruction: Prediction of radial artery diameter in candidates for transradial coronary angiography: Is occupation a factor? Abstracts: abstract_id: PUBMED:26148077 Prediction of radial artery diameter in candidates for transradial coronary angiography: Is occupation a factor? Objective: Transradial approach (TRA) for coronary angiography is a popular alternative approach to transfemoral coronary angiography due to the infrequent occurrence of access site complications such as bleeding, hematoma and pseudoaneurysm formation. However, not all patients are suitable for TRA. This study aimed to determine the predictors of radial artery diameter in patients who are candidates for TRA. Methods: The study included 222 consecutive patients who underwent TRA due to suspected stable coronary artery disease. Radial artery diameter was evaluated prior to the procedure using B-mode ultrasonography. Results: Radial artery diameter was correlated with wrist circumference (r=0.539, p<0.001), height (r=0.258, p<0.001), weight (r=0.237, p<0.001), body mass index (r=0.167, p=0.013), shoe size (r=0.559, p<0.001), and pulse pressure (r=-0.161, p=0.016). The right radial artery was larger in men than in women (2.73±0.39 mm vs. 2.15±0.35 mm, p<0.001), and smaller in patients with sedentary office work than in physically active outdoor workers (2.42±0.45 mm vs. 2.81±0.37 mm, p<0.001). Wrist circumference (b=0.044, p<0.001, 95% confidence interval (CI) 0.025-0.062), shoe size (b=0.075, p<0.001, 95% CI 0.039-0.112) and occupation (b=0.228, p<0.001, 95% CI 0.138-0.318) were the independent predictors of radial artery diameter in regression analysis. Conclusion: In addition to shoe size and wrist circumference, occupation may be an important predictor of radial artery diameter, and it should be evaluated with other clinical parameters in the prediction of radial artery diameter. abstract_id: PUBMED:23460113 Impact of transradial coronary procedures on radial artery function. We evaluated the impact of transradial coronary procedures on the vasodilatory function of the radial artery. A total of 65 patients who underwent transradial coronary procedures were enrolled. All patients were examined with B-mode high-resolution ultrasound. Radial artery baseline diameter and response to flow-mediated dilation (FMD) and nitroglycerin-mediated dilation (NMD) were measured in the right radial artery. The FMD of the right radial artery was 11.5%, 4.1%, and 0.7%, respectively, before the procedures, 1 day, and 3 months after the procedures (P < .05 at 1 day, P < .01 at 3 months). The NMD of the right radial artery was 17.6%, 5.4%, and 6.3%, respectively, before the procedures, 1 day, and 3 months after the procedures (P < .05 at 1 day, P < .05 at 3 months). Transradial coronary procedures decrease radial artery FMD and NMD, resulting in immediate and persistent blunting of vasodilatory function. abstract_id: PUBMED:29374579 Radial artery as a graft for coronary artery bypass surgery in the era of transradial catheterization. Radial artery use as a bypass conduit has been well established over the past decades, in terms of both patency and safety. On the other hand, transradial catheterization causes a series of structural and functional changes to the vessel itself.
Impairment of nitric oxide-dependent vasodilation and a notable decrease in radial artery diameter due to intima thickening and hyperplasia, especially during the first 3 months after its cannulation, constitute some of the most important alterations to the radial artery wall and its function after a transradial coronary catheterization procedure. Given the constantly increasing numbers of these transradial catheterization procedures, the authors of this article focus on the current knowledge regarding the potential use of the radial artery as a bypass conduit after its catheterization, also considering several possible mechanisms underlying its subsequent structural and functional changes. abstract_id: PUBMED:17062231 Radial artery diameter and vasodilatory properties after transradial coronary angiography. Background: The radial artery is proposed as an alternative conduit in coronary revascularization. In recent years the transradial approach has, in many centers, emerged as the preferred technique in percutaneous diagnostic and interventional coronary procedures. This induces a trauma to the radial artery that could possibly influence its suitability as a bypass graft. In this study we assessed by ultrasound the long-term effects of transradial coronary angiography on the radial artery diameter and vasodilatory properties. Methods: Thirty patients were examined with high resolution ultrasound 10 to 14 months after a transradial coronary angiography. Radial artery baseline diameter and response to flow-mediated (FMD) and nitroglycerin-mediated vasodilation (NMD) were examined in the right radial artery with the unexposed left radial artery as control. Results: Right radial artery diameter was reduced compared with the left radial artery (2.58 ± 0.38 vs 2.71 ± 0.32 mm, p < 0.01). Both FMD and NMD were preserved in the exposed artery (FMD 8.4 ± 8.0 vs 8.0 ± 6.1%, NMD 15.5 ± 6.8 vs 16.7 ± 6.6%, both ns). Due to the reduced baseline diameter, the right radial lumen diameter remained significantly smaller after vasodilatory stimuli. Large interindividual differences were demonstrated. Conclusions: The radial artery diameter is diminished one year after transradial coronary angiography while vasodilatory properties are preserved. The preserved vasodilatory capacity could favorably influence the suitability of the artery as bypass graft. However, with the long-term structural changes induced by transradial angiography, use of a catheter-exposed radial artery as a conduit in coronary artery revascularization should not be strongly recommended. With ultrasound, both structure and vasodilatory properties of a catheterization-exposed radial artery may be assessed in case the artery is considered as bypass material. abstract_id: PUBMED:37388675 Factors Affecting Radial Artery Occlusion After Right Transradial Artery Catheterization for Coronary Intervention and Procedures. Objective: To determine the factors influencing proximal radial artery occlusion (PRAO) of the right radial artery after coronary intervention. Methods: This is a single-center prospective observational study. A total of 460 patients were selected to undergo coronary angiography (CAG) or percutaneous coronary intervention (PCI) via the proximal transradial approach (PTRA) or distal transradial approach (DTRA). All patients received a 6F sheath. Radial artery ultrasound was performed 1 day before and 1-4 days after the procedure.
Patients were divided into the PRAO group (42 cases) and the non-PRAO group (418 cases). General clinical data and preoperative radial artery ultrasound indexes of the two groups were compared to analyze related factors leading to PRAO. Results: The total incidence of PRAO was 9.1%, including 3.8% for DTRA and 12.7% for PTRA. The PRAO rate of DTRA was significantly lower than that of PTRA (p < 0.05). Female patients and those with low body weight, low body mass index (BMI), or undergoing CAG were more likely to develop PRAO after the procedure (p < 0.05). The internal diameter and cross-sectional area of the distal radial artery and proximal radial artery were smaller in the PRAO group than in the non-PRAO group, and the differences were statistically significant (p < 0.05). Multifactorial model analysis showed that the puncture approach, radial artery diameter and procedure type were predictive factors of PRAO, and the receiver operating characteristic curve showed a good predictive value. Conclusion: A larger radial artery diameter and DTRA may reduce the incidence of PRAO. Preoperative radial artery ultrasound can guide the clinical selection of appropriate arterial sheath and puncture approach. abstract_id: PUBMED:25634390 Effects of subcutaneously infiltrated nitroglycerin on diameter, palpability, ease-of-puncture and pre-cannulation spasm of radial artery during transradial coronary angiography. Background: The success of transradial catheterization depends on meticulous access of the radial artery, which in turn depends on palpating a good radial pulse. Objectives: Our objectives were to analyze the effects of subcutaneously infiltrated nitroglycerin on the diameter of the radial artery, palpability of the radial pulse, ease-of-puncture and pre-cannulation spasm of the radial artery during transradial coronary angiography. Methods: Patients undergoing transradial coronary angiography were randomized to Group NL or Group SL. In Group NL, 3 ml of solution containing nitroglycerin and lignocaine was infiltrated subcutaneously at the site intended for puncture of the radial artery. Similarly, saline and lignocaine were infiltrated in Group SL. Diameter of the radial artery was objectively assessed by ultrasonography. Measurements were performed at baseline and repeated at 1 min after injecting the solutions. The ease-of-puncture was evaluated by the number of punctures and the time needed for successful access of the radial artery. Results: Both groups had 100 patients each. Baseline diameter of the radial artery was similar between the two groups. The post-injection diameter of the radial artery increased by 26.3% in Group NL and 11.4% in Group SL. Nitroglycerin significantly improved the palpability of the radial pulse, reduced the number of punctures and shortened the time needed for successful access of the radial artery. Pre-cannulation spasm of the radial artery occurred in 1% of Group NL and 8% of Group SL. Conclusions: Subcutaneously infiltrated nitroglycerin leads to significant vasodilation of the radial artery. This avoids pre-cannulation spasm of the radial artery, enhances palpability of the radial pulse and thus makes the puncture of the radial artery easier. abstract_id: PUBMED:27665851 Radial artery ultrasound predicts the success of transradial coronary angiography. Background: The transradial approach has become the preferred vascular access during conventional coronary angiography (CCA). Here, we evaluated the impact of pre-procedural radial artery diameter (RAD), the cross-sectional area (CSA), and the perimeter on vascular complications (VACs).
Methods: We conducted a single-center prospective analysis of 513 patients who underwent CCA. Radial artery ultrasonography was performed before and after CCA to measure the RAD, CSA, and perimeter. Results: The average RAD, CSA, and perimeter were 2.60 ± 0.48 mm, 6.2 ± 3.0 mm², and 8.9 ± 1.7 mm, respectively. Vascular complications were reported in 56 (11%) patients. The RAD, CSA, and perimeter were significantly smaller in patients in whom procedure-related VACs were observed than in those with no complications: 2.3 ± 0.5 vs. 2.70 ± 0.54 mm (p = 0.0001), 4.9 ± 2.1 vs. 6.4 ± 3 mm² (p = 0.001), and 7.6 ± 2.1 vs. 9.2 ± 1.6 mm (p = 0.0001), respectively. Univariate logistic regression showed that RAD, CSA and perimeter can independently predict VACs (OR 0.833, 95% CI 0.777-0.894, p < 0.0001; OR 0.647, 95% CI 0.541-0.773, p < 0.0001; OR 0.545, 95% CI 0.446-0.665, p < 0.0001, respectively). Conclusions: Ultrasonographic study of the radial artery before CCA can provide valuable information regarding vascular access. abstract_id: PUBMED:33530954 Recanalization of the occluded radial artery via distal transradial access in the anatomic snuffbox. Background: Radial artery occlusion is a common complication after coronary angiography and percutaneous coronary intervention via the transradial access. In recent years, coronary angiography and percutaneous coronary intervention via the distal transradial access have gradually emerged, but recanalization of the occluded radial artery through the distal transradial access has rarely been reported. Case Presentation: A 67-year-old female with arterial hypertension and diabetes mellitus was admitted to the hospital because of chest pain that had lasted three hours. She was diagnosed with acute myocardial infarction. After admission, the patient successfully underwent emergency coronary angiography and percutaneous coronary intervention through the right transradial access. Radial artery occlusion was found after the operation, and recanalization was successfully performed through the right distal transradial access before discharge. Immediately after the operation and one month later, vascular ultrasonography showed that the antegrade flow was normal. Conclusions: This report presents a case of radial artery occlusion after emergency coronary angiography and percutaneous coronary intervention in which recanalization was successfully performed through the right distal transradial access. This case demonstrates that recanalization of a radial artery occlusion via the distal transradial access is safe and feasible. abstract_id: PUBMED:35334079 Distal Transradial Access: a Safe and Feasible Approach for Coronary Catheterization in Cases of Total Radial Artery Occlusion. Radial artery occlusion (RAO) is still a major complication of coronary catheterization (CC) via transradial access (TRA). Recently, coronary angiography (CAG) and percutaneous coronary intervention (PCI) through distal transradial access (dTRA) have been proven to be safe and feasible, but RAO recanalization and complete CC via dTRA have been rarely reported. Twenty-nine consecutive patients with RAO were enrolled in the present study. RAO was first confirmed by ultrasonography; after puncture of the distal radial artery (RA) in the anatomical snuffbox, cannula angiography was conducted to confirm total RAO.
After RAO recanalization, subsequent CAG and PCI were successful, and no complications occurred. RAO recanalization and complete coronary catheterization via dTRA are safe and feasible. abstract_id: PUBMED:36074269 Incidence and predictors of radial artery occlusion following transradial coronary angiography: the proRadial trial. Objectives: This study investigated the contemporary incidence and predictors of radial artery occlusion as well as the effectiveness of antithrombotic treatment for radial artery occlusion following transradial coronary angiography. Background: The radial artery is the standard access for coronary angiography and even complex interventions. Postprocedural radial artery occlusion is still a common and significant complication. Methods: This prospective study enrolled 2004 patients following transradial coronary angiography. After sheath removal, hemostasis was obtained in a standardized fashion. Radial artery patency was evaluated by duplex ultrasonography in all patients. In case of occlusion, oral anticoagulation was recommended and patients were scheduled for a 30-day follow-up including Doppler ultrasonography. Results: A newly diagnosed radial occlusion was found in 4.6% of patients. The strongest independent predictors of radial occlusion were female sex and active smoking status. In the subgroup of patients with percutaneous coronary interventions, female sex followed by sheath size > 6 French were the strongest predictors of radial occlusion. 76 of 93 patients with radial occlusion received oral anticoagulation for 30 days. However, reperfusion at 30 days was found in 32% of patients on oral anticoagulation. Conclusion: The incidence of radial artery occlusion following coronary angiography in contemporary practice appears, at 4.6%, to be lower than in previous cohorts. Female sex and smoking status are the strongest independent predictors of radial occlusion, followed by procedural variables. The limited effectiveness of oral anticoagulation for treatment of radial artery occlusion suggests a primarily traumatic rather than thrombotic mechanism of this complication. Answer: Yes, occupation is a factor in predicting radial artery diameter in candidates for transradial coronary angiography. A study that included 222 consecutive patients who underwent transradial coronary angiography due to suspected stable coronary artery disease found that occupation, along with shoe size and wrist circumference, were independent predictors of radial artery diameter. Specifically, the study observed that the radial artery was smaller in patients with sedentary office work compared to those with physically active outdoor work (PUBMED:26148077). This suggests that the type of occupation, which may reflect the level of physical activity associated with the job, can influence the size of the radial artery and should be considered along with other clinical parameters when predicting radial artery diameter for transradial coronary angiography.
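The regression analysis behind PUBMED:26148077 can be illustrated with a small ordinary-least-squares sketch. Everything below is synthetic: the abstract reports the coefficients (wrist circumference b=0.044, shoe size b=0.075, occupation b=0.228) but not the intercept or raw data, so this only demonstrates the modelling approach, not the published model itself.

```python
# Sketch of a linear model for radial artery diameter (mm) with the
# predictors reported in PUBMED:26148077: wrist circumference, shoe
# size, and occupation (0 = sedentary, 1 = physically active).
# All data below are synthetic; coefficients are re-estimated, not reused.
import numpy as np

rng = np.random.default_rng(0)
n = 200
wrist = rng.normal(17.0, 1.5, n)             # wrist circumference, cm
shoe = rng.normal(40.0, 2.5, n)              # EU shoe size
active = rng.integers(0, 2, n)               # occupation dummy
# Hypothetical data-generating process loosely echoing the abstract.
diameter = (0.4 + 0.044 * wrist + 0.075 * shoe + 0.228 * active
            + rng.normal(0, 0.25, n))

X = np.column_stack([np.ones(n), wrist, shoe, active])
beta, *_ = np.linalg.lstsq(X, diameter, rcond=None)

for name, b in zip(["intercept", "wrist", "shoe", "occupation"], beta):
    print(f"{name:>10}: {b:+.3f}")
```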
Instruction: Do psychosocial work factors and social relations exert independent effects on sickness absence? Abstracts: abstract_id: PUBMED:12646546 Do psychosocial work factors and social relations exert independent effects on sickness absence? A six year prospective study of the GAZEL cohort. Study Objectives: The objective of this prospective cohort study was to determine whether psychosocial work characteristics and social relations exert independent effects on the incidence of sickness absence in a population of middle-aged French employees over six years of follow-up. Design: This study included 9631 men and 3595 women participating in the French GAZEL cohort. Social relations (social networks, personal social support, and social relations satisfaction) were measured in 1994 by self report. Psychosocial work characteristics (decision latitude, psychological demands, and social support at work) were ascertained in 1995. Sickness absence data were collected independently. The authors studied the incidence of short (≤7 days), intermediate (7-21 days), and long (>21 days) spells of absence from 1995 to 31 December 2001. Rate ratios associated with psychosocial exposures, adjusted for sociodemographic characteristics and health behaviours, were calculated by means of log-linear Poisson regression. Setting: A cohort of 20,000 employees of France's national gas and electricity company (the GAZEL study). Main Results: Among men and women, levels of decision latitude and personal social support below the median predicted 17% to 24% increases in absence rates. Low satisfaction with social relations and low social support at work led to a 10% to 26% excess in sick leave among men. No interactive effects were found between the variables under study. Conclusions: The quality of the work environment and of social relations affect sickness absence over an extended period of follow-up. This study supports the hypothesis of independent, not interactive effects. abstract_id: PUBMED:24639009 Psychosocial factors at work and sickness absence: results from the French national SUMER survey. Objective: This study aims at exploring the associations between psychosocial work factors and sickness absence. Methods: The sample from the French National Survey SUMER 2010 included 46,962 employees. Sickness absence spells and days within the last year were studied as two outcomes. Psychosocial work factors included psychological demands, decision latitude, social support, reward, working time, and workplace violence variables. Covariates were age, occupation, economic activity, and other occupational exposures. Results: For both genders, low latitude, low reward, shift work, bullying, and verbal abuse were risk factors of absence spells while long working hours were a protective factor. High demands, low support, and physical violence were risk factors for women. Low support and bullying for both genders, high demands for women, and low reward, long working hours, and physical violence for men increased absence duration. Conclusions: This study highlights the importance of psychosocial work factors as risk factors of sickness absence. abstract_id: PUBMED:37563743 Prospective associations of psychosocial work factors with sickness absence spells and duration: Results from the French national working conditions survey.
Background: Some psychosocial work factors are associated with sickness absence; however, little information is available on the associations of various psychosocial work factors and multiple exposures with sickness absence spells and duration, and on gender differences. Methods: Data were from the French working conditions survey conducted on a nationally representative sample of the working population. The study sample included 17,437 employees (7292 men, 10,145 women) followed from 2013 to 2016 and/or from 2016 to 2019. Occupational exposures (20 psychosocial work factors, 4 working time/hours factors, 4 physical work exposures) were measured at the beginning of each follow-up period. Hurdle and multinomial models were used to study the associations with the number of days and spells of sickness absence. Results: Most of the psychosocial work factors predicted the risk of at least 1 day of sickness absence. Stronger associations were found among women than men for some factors. Psychosocial work factors were more likely to predict the number of spells than the number of days of sickness absence. Some physical work exposures predicted sickness absence spells and days, whereas shift work in women predicted the risk of at least 1 day of sickness absence. Dose-response associations were found between multiple psychosocial work exposures and sickness absence spells, and between multiple physical exposures and sickness absence spells and days. Conclusion: Comprehensive prevention policies oriented toward the whole psychosocial and physical work environment should be useful to reduce sickness absence among men and women. abstract_id: PUBMED:25022281 Psychosocial work conditions associated with sickness absence among hospital employees. Background: Meaningfulness of the job, collaboration among colleagues, trustworthiness of the closest superior and bullying have previously been shown to be major covariates of intention to quit the job. Aims: To test whether these elements of the psychosocial work environment are also the most essential covariates of sickness absence. Methods: A cross-sectional questionnaire study of hospital employees which sought information on elements of the psychosocial work environment, general health status, lifestyle, age, gender and profession. Data on sickness absence were obtained from the employer's salary database. Results: A total of 1809 hospital employees took part, with a response rate of 65%. The mean age was 43 (range: 20-69) and 75% were female. In total, 363 study participants (20%) had at least 14 days of sickness absence (defined as high absence) during the preceding year. Associations between high sickness absence and 29 psychosocial work elements were analysed, adjusting for relevant confounders. Following multiple logistic regression analysis, three elements had an independent statistically significant association with high sickness absence: no exposure to bullying (odds ratio (95% confidence interval) 0.50 (0.33-0.77)), high meaningfulness of the job (0.71 (0.52-0.97)) and high trustworthiness of the closest superior (0.70 (0.54-0.92)). Conclusions: Elements of the psychosocial work environment which have previously been shown to have a significant independent association with intention to quit the job were also the most essential covariates of high sickness absence. abstract_id: PUBMED:33053900 Interaction Effects of Physical and Psychosocial Working Conditions on Risk for Sickness Absence: A Prospective Study of Nurses and Care Assistants in Sweden.
Employees in health and social care are often simultaneously exposed to both physical and psychosocial challenges that may increase their risk for sickness absence. The study examines interaction effects of physical and psychosocial work conditions on the future risk for sickness absence among nurses and care assistants in Sweden. The study was based on 14,372 participants in any of the Swedish Work Environment Surveys conducted during the years 1993-2013, with linked register information on background factors and compensated sickness absence. Adjusted hazard ratios (HRs), stratified by occupation, and measures of additive interaction effects were estimated. The combinations of high psychosocial job demands and heavy physical work and strenuous postures, respectively, significantly increased the risks for sickness absence among nurses (HR 1.43; CI 1.09-1.88 and HR 1.42; CI 1.16-1.74, respectively), as well as among care assistants (HR 1.51; CI 1.36-1.67 and HR 1.49; CI 1.36-1.63, respectively). The combinations of low job control and both heavy physical work (HR 1.44; CI 1.30-1.60) and strenuous postures (HR 1.42; CI 1.30-1.56) were also associated with excess risk for sickness absence among care assistants. We also found interaction effects among care assistants but not among nurses. The results indicate that the high sickness absence rate among care workers in Sweden can be reduced if simultaneous exposure to high psychosocial and high physical demands is avoided. Management policies for reduced time pressure, improved lifting aids, and measures to avoid awkward work postures are recommended. For care assistants, increased influence over work arrangements is likely to lower their sickness absence risk. abstract_id: PUBMED:30282689 Effect of psychosocial work factors on the risk of certified absences from work for a diagnosed mental health problem: a protocol of a systematic review and meta-analysis of prospective studies. Introduction: Mental health problems (MHPs) are frequent and disabling and are the first or second leading cause of certified sickness absences from work in industrialised countries. They are generally long-lasting and generate a considerable human and socioeconomic burden. The deleterious effect of adverse psychosocial work factors on MHPs has been documented. However, the evidence regarding the effect of these factors on absences from work for an MHP has not been synthesised since 2007. The proposed systematic review aims to synthesise the effect of adverse psychosocial work factors from three validated theoretical models (the demand-control-support, effort-reward-imbalance and organisational justice models) on the risk of certified absences from work for diagnosed MHP among workers. Method And Analysis: A systematic search strategy will be conducted in seven databases: Medline, Embase, CINAHL, Web of Science, PsycInfo, Sociological abstracts and IBSS. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses recommendations, a multistep screening process by independent reviewers will lead to study selection. The search strategy was first run on 16 January 2017 and will be updated in October 2018. Only quantitative, prospective studies evaluating the effect of at least one psychosocial work factor from the validated theoretical models on certified absence from work for a diagnosed MHP will be considered for inclusion.
Extracted data will be used for quantitative and qualitative evidence synthesis as well as to assess risk of bias and methodological quality. Meta-estimates will be provided for high-quality studies and by each psychosocial work factor, after considering homogeneity and number of studies. Ethics And Dissemination: As this study will be based only on published studies, ethics approval is not required. Given that psychosocial work factors are frequent and modifiable, the results of this systematic review may provide evidence to support prevention strategies that can help to reduce the human, social, and economic burden associated with medically certified absences from work for an MHP. Prospero Registration Number: CRD42018091632. abstract_id: PUBMED:32596020 Is Work Group Social Capital Associated With Sickness Absence? A Study of Workplace Registered Sickness Absence at the Work Group Level. Background: The concept of social capital has its focus on cooperative relations in the workplace. This study investigates the association between social capital and sickness absence among workers in 41 work groups in the Danish dairy industry and examines the possible effects of an intervention on social capital in the workplace on sickness absence. Methods: A sample of 791 dairy workers working in 41 work groups that participated in an intervention study on social capital filled in a questionnaire on four subtypes of social capital, and social capital scores from individual participants were aggregated to the level of work groups. Sickness absence was measured at the level of work groups in company registers as the two-year average percentage of working time lost to sickness absence. Group-level associations between social capital and sickness absence were analyzed using multilevel linear regression analysis. Analyses were adjusted for age, gender, group size, and random effects at the workplace level. Results: We found statistically significant associations between social capital within work groups, social capital in relation to the immediate manager, and social capital toward the workplace as a whole, on the one hand, and sickness absence, on the other. We found no support for any effects of the intervention on sickness absence. Conclusion: The work group level of social capital is associated with the work group level of sickness absence. However, the intervention to enhance group-level social capital had no effect on reducing sickness absence in the intervention group. abstract_id: PUBMED:33292316 Estimating the potential reduction in future sickness absence from optimizing group-level psychosocial work characteristics: a prospective, multicenter cohort study in German industrial settings. Background: Absence from work due to sickness impairs organizational productivity and performance. Even in organizations with perfect work conditions, some inevitable baseline sickness absence exists amongst working populations. The excess sickness absence observed above this baseline rate has become the focus of traditional health promotion efforts, addressing preventable physical illness, health behavior and mental health at the personal level. However, a health and safety approach following the TOP-rule would consider work-group psychosocial work characteristics as a potential risk factor amenable to organizational measures. To date, there is a scarcity of studies relating psychosocial work characteristics to possible reduction of excess sickness-absence rates.
Methods: We aimed to estimate the potentially avoidable excess fraction of absence attributable to work-group psychosocial characteristics. We considered work-group averaged perception of psychosocial work characteristics as a proxy for the methodologically elusive objective assessment of organizational characteristics. Participants were recruited from multiple sites of a German automotive manufacturer, with individuals nested within work groups. We predicted 12-month follow-up work-group sickness absence rates using data from a baseline comprehensive health examination assessing work characteristics, health behavior, and biomedical risk factors. We considered the quartile of work-groups yielding favorable psychosocial work characteristics as a realistic existing benchmark. Using the population attributable fraction method, we estimated the potentially amenable sickness absence from improving work-group psychosocial characteristics. Results: Data from 3992 eligible participants from 29 work groups were analyzed (39% participation rate, average age 41.4 years (SD = 10.3 years), 89.9% males and 49% manual workers). Work-group absence rates at follow up varied from 2.1 to 8.9% (mean 5.1%, 11.7 missed days). A prediction model of seven psychosocial work characteristics at the work group level explained 70% of the variance of future absence rates. The estimated reduction from improving psychosocial work characteristics to the benchmark level amounted to 32% of all sickness absence, compared to a 31% reduction from eliminating health behavioral and medical risk factors to the benchmark target. Conclusions: Psychosocial characteristics at the work-group level account for a relevant proportion of all sickness absence. Health promotion interventions should therefore address psychosocial characteristics at the work group level. abstract_id: PUBMED:26077204 Effect of Psychosocial Work Environment on Sickness Absence Among Patients Treated for Ischemic Heart Disease. Introduction: Over recent decades, mortality has declined in patients with coronary heart disease due to improvements in treatments and changes in lifestyle, resulting in more people living with chronic heart disease. This implies that a focus on rehabilitation and reintegration into the workforce becomes increasingly important. Previous studies among healthy workers suggest that the psychosocial working environment is associated with sickness absence. Whether the psychosocial working environment plays a role in return to work and sickness absence for patients with existing cardiovascular disease is less well studied. Methods: A cohort of patients under 67 years and treated with percutaneous coronary intervention (PCI) was established in 2006. Three months after the procedure the patients (n = 625) answered a questionnaire about their psychosocial working environment. Patients were followed in registers for the following year. We examined the association between the psychosocial working environment and sickness absence at 3 months and 1 year, and new sick-listings during the first year, with logistic regression. Results: A total of 528 patients had returned to work 3 months after the PCI, while 97 were still sick-listed. After 1 year, one patient had died, 465 were working and 85 were receiving health-related benefits, while 74 had left the workforce permanently. A total of 106 patients were sick-listed during the whole first year or had left the workforce permanently.
After the initial return to work, 90 experienced a new sickness absence during the first year, while the remaining 429 did not. High work pace, low commitment to the workplace, low recognition (rewards) and low job control were associated with sickness absence at 3 months, but not after 1 year. Low job control as well as job strain (the combination of high demands and low control) was associated with new sick-listings. Conclusion: The psychosocial working environment was associated with sickness absence 3 months after the PCI, but not 1 year after. abstract_id: PUBMED:26684705 Do psychological job demands, decision control and social support predict return to work three months after a return-to-work (RTW) programme? The rapid-RTW cohort study. Background: Long-term sickness absence is a considerable health and economic problem in the industrialised world. Factors that might predict return to work (RTW) are therefore of interest. Objective: To examine the impact of psychosocial work characteristics on RTW three months after the end of an RTW programme. Methods: A cohort study of 251 sick-listed employees from 40 different treatment and rehabilitation services in Norway recruited from February to December 2012. The Job Content Questionnaire was used to gather information on the psychosocial work conditions. Full or partial RTW was measured three months after the end of the RTW programme, using data from the national sickness absence register. Logistic regression analyses were performed to investigate the association between the psychosocial work characteristics and RTW. Results: Having low psychological job demands (OR = 0.4, 95% CI: 0.2-0.9), high co-worker- (OR = 3.4, 95% CI: 1.5-5.8), and supervisor support (OR = 3.4, 95% CI: 1.6-7.3), and being in a low-strain job (low job demands and high control) (OR = 4.6, 95% CI: 1.1-18.6) were predictive of being in work three months after the end of the RTW programme, after adjusting for several potential prognostic factors. Conclusion: Interventions aimed at returning people to work might benefit from putting more emphasis on psychosocial work characteristics in the future. Answer: Yes, psychosocial work factors and social relations exert independent effects on sickness absence. A six-year prospective study of the GAZEL cohort found that among men and women, levels of decision latitude and personal social support below the median predicted increases in absence rates. Low satisfaction with social relations and low social support at work led to an excess in sick leaves among men. No interactive effects were found between the variables under study, supporting the hypothesis of independent effects (PUBMED:12646546). Additionally, the French national SUMER survey highlighted that psychosocial work factors such as low decision latitude, low reward, shift work, bullying, and verbal abuse were risk factors for absence spells. High demands and low support were risk factors for women, and low support and bullying for both genders increased absence duration (PUBMED:24639009). Further supporting these findings, a study using data from the French working conditions survey showed that most psychosocial work factors predicted the risk of at least one day of sickness absence, with stronger associations found among women for some factors. Psychosocial work factors were more likely to predict the number of spells than the number of days of sickness absence (PUBMED:37563743).
Moreover, a cross-sectional questionnaire study of hospital employees found that exposure to bullying, low meaningfulness of the job, and low trustworthiness of the closest superior were each independently and significantly associated with high sickness absence (PUBMED:25022281). In summary, multiple studies indicate that psychosocial work factors and social relations have independent effects on sickness absence, affecting both the incidence and duration of sick leaves.
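For readers unfamiliar with the log-linear Poisson regression named in the GAZEL abstract above, the following sketch shows how rate ratios for absence spells can be estimated with person-time entering as an offset. It is purely illustrative: the data are simulated and the column names are hypothetical, not the GAZEL dataset or the authors' code.

```python
# Illustrative log-linear Poisson regression for sickness-absence rate
# ratios, with follow-up time as an offset (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000

df = pd.DataFrame({
    "low_latitude": rng.integers(0, 2, n),  # decision latitude below median
    "low_support": rng.integers(0, 2, n),   # personal social support below median
    "years": rng.uniform(1.0, 6.0, n),      # person-years of follow-up
})
# Simulate spell counts with roughly 20% excess rates for each exposure
rate = np.exp(-1.0 + np.log(1.2) * df["low_latitude"] + np.log(1.2) * df["low_support"])
df["spells"] = rng.poisson(rate * df["years"])

model = smf.glm(
    "spells ~ low_latitude + low_support",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["years"]),
)
print(np.exp(model.fit().params))  # rate ratios; ~1.2 means ~20% excess absence
```

The offset keeps unequal follow-up from biasing the comparison, which is why cohort studies with 6-year windows report rate ratios rather than raw counts.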
Instruction: Ankle block in forefoot reconstruction before or after inflation of tourniquet--Does timing matter? Abstracts: abstract_id: PUBMED:23337270 Ankle block in forefoot reconstruction before or after inflation of tourniquet--Does timing matter? Background: Forefoot surgery causes postoperative pain that frequently requires strong painkillers. Regional blocks are now increasingly used to control postoperative pain, especially in the first 24h, when the pain is at its worst. We conducted a prospective study to see whether the timing of the ankle block, i.e., before or after inflation of the tourniquet, made any difference to the efficacy of postoperative pain control in the first 24h. Methods: A prospective randomised study was conducted between September 2010 and August 2011 involving 60 patients. Group A (n=30) had the ankle block administered after, and Group B (n=30) before, inflation of a thigh tourniquet. Patients were given assessment forms to chart their pain on a visual analogue scale (VAS) at 4h and 24h postoperatively. Results: Both groups demonstrated good postoperative pain control. Average VAS pain scores at four and twenty-four hours after surgery were 2.5 and 4.5 in Group A and 3.9 and 6.3 in Group B, respectively. Pain control at both 4 and 24h after surgery was better in Group A, but the difference was statistically significant only at 24h. Conclusions: A regional anaesthetic ankle block should be routinely used in forefoot surgery to control postoperative pain. The ankle block should be applied after the inflation of the tourniquet, as this appears to provide better pain control. abstract_id: PUBMED:29374988 The Effects of Timing of Ankle Blocks in Forefoot, Midfoot, or Hindfoot Reconstruction With the Use of an Ankle Tourniquet. Background: Ankle blocks are used in the ambulatory surgery setting to control postoperative pain, which is often worst in the first 24 hours after surgery. We conducted a trial to determine whether the timing of ankle block administration in relation to ankle tourniquet inflation has an effect on perceived pain and narcotic consumption. Methods: A prospective randomized study was conducted between August 2015 and January 2016. Patients were assigned to three groups. In group A, an ankle block was performed before ankle tourniquet inflation; in group B, immediately after ankle tourniquet inflation; and in group C, immediately after ankle tourniquet inflation with additional local anesthetic placed around the incision at the end of the procedure. Pain was assessed by a visual analogue scale (VAS) score, which was recorded at discharge, 24 hours, 48 hours, and 2 weeks after surgery. Narcotic consumption was recorded at 24 and 48 hours after surgery. Results: The only statistically significant difference in mean VAS scores occurred at 24 hours, when patients who received an ankle block after tourniquet inflation with local incisional anesthetic at closure (group C) had a mean VAS score 2.8 points lower (3.5 vs 6.3; P = .025) than those who received only an ankle block after tourniquet inflation (group B). There was no difference in narcotic consumption between groups at 24 and 48 hours. Conclusions: The timing of ankle block in relation to tourniquet inflation did not have an effect on pain control in forefoot, midfoot, and hindfoot reconstruction. An ankle block combined with additional incisional anesthetic at closure is more effective than an ankle block alone and is the ideal combination for postoperative pain control in foot surgery.
Levels Of Evidence: Therapeutic, Level II: Prospective, comparative trial. abstract_id: PUBMED:24694562 Safety and efficacy of forefoot surgery under ankle block anaesthesia. Background: Ankle block as the sole anaesthetic for forefoot surgery is not used uniformly throughout the UK despite obvious advantages and widespread use internationally. Aim: Evaluation of the safety and effectiveness of performing elective forefoot surgery under ankle block anaesthesia alone in a pilot cohort of patients in a Scottish unit. Methods: Data were collected from 81 consecutive forefoot procedures (71 patients) using a standardised questionnaire including incremental pain assessment (0-10; 0 no pain, 10 severe pain). Results: Seven patients reported pain during surgery, with an average score of 0.17. Average pain scores at 6, 12, 24 and 48 h following surgery were 1.5, 3.09, 2.3 and 1.9, respectively. All patients were discharged home and walking on the same day. Conclusion: Forefoot surgery under ankle block alone may be safe and effective. The anaesthesia obtained permitted routine forefoot procedures and provided lasting post-operative analgesia. Combined with intra-operative sedation, use of an ankle tourniquet and same-day discharge, it had very high patient acceptance and satisfaction. abstract_id: PUBMED:1398364 The pneumatic ankle tourniquet with ankle block anesthesia for foot surgery. The use of a pneumatic ankle tourniquet applied to the supramalleolar ankle region is a useful method of obtaining a bloodless field in surgery of the foot. The pneumatic ankle tourniquet allows for more accurate and reproducible control of circumferential compression than the standard Esmarch bandage, when used in conjunction with the regional ankle block. Between March 1987 and October 1990, 84 foot surgeries were performed using the pneumatic tourniquet and ankle block technique on 76 patients by one surgeon. Tourniquet ischemia lasted from 30 to 105 min. Tourniquet pressure was set to 100 to 150 mm of mercury above systolic blood pressure without exceeding 325 mm of mercury. Two patients reported mild pain directly beneath the tourniquet after 45 and 70 min, respectively. Neither patient required deflation of the tourniquet to complete the procedure. The clinical and electrophysiologic evidence showed that no neurologic or vascular damage occurred. The use of the pneumatic tourniquet in conjunction with regional ankle block anesthesia provides a reasonable alternative to the standard thigh tourniquet for surgery of the foot. abstract_id: PUBMED:26012178 Surgical efficacy of the ankle tourniquet for forefoot surgery. For precise, safe and proficient procedures, haemostasis is critical. For forefoot surgery, the customary thigh tourniquet is commonly accepted for this role, as the additional muscle mass and minimal bony prominences in the thigh avert neuromuscular and skin injury. However, for patients with pathophysiological issues that may be exacerbated by a thigh tourniquet, application of an ankle tourniquet may decrease the risks and increase cuff tolerance, as the volume of ischaemic tissue is reduced. abstract_id: PUBMED:18594091 The efficacy of combined popliteal and ankle blocks in forefoot surgery. Background: Postoperative pain following forefoot surgery can be difficult to control with oral analgesia. The aim of this study was to compare the efficacy of a combined popliteal and ankle block with that of an ankle block alone in providing postoperative analgesia following forefoot surgery.
Methods: We performed a prospective, randomized, controlled single-blind study involving sixty-three patients, twenty-six of whom had a combined ankle and popliteal block and thirty-seven of whom had an ankle block alone. All patients underwent an elective osseous surgical procedure on the forefoot. Postoperative pain was evaluated with use of a visual analogue scale and a verbal response form. Patient satisfaction was also recorded. Results: The patients who had had a combined popliteal and ankle block had significantly less pain at six hours postoperatively (p = 0.011), twenty-four hours postoperatively (p < 0.001), and at discharge (p = 0.014). This group of patients also had higher satisfaction with pain relief. Conclusions: A popliteal block in conjunction with an ankle block provides significantly better pain relief than does an ankle block alone in patients undergoing forefoot surgery. abstract_id: PUBMED:6642328 Ankle-foot peripheral nerve block for mid and forefoot surgery. A simplified, predictable method of peripheral nerve block at the ankle and foot with the long-acting anesthetic agent bupivacaine (Marcaine) 0.5% is presented. Combined with the use of a pneumatic tourniquet, it allows the performance of mid and forefoot surgery on an inpatient or outpatient basis. It alleviates the risks of general or spinal anesthesia and avoids the use of analgesics for a period of 10 to 25 hours. The posterior tibial nerve is blocked at the level of the neurovascular tunnel on the posterior aspect of the distal tibia. The deep peroneal nerve is blocked at the level of the midtarsus in the fourth fascial compartment. The intermediate and medial dorsal cutaneous branches of the superficial peroneal nerve and the saphenous nerve are blocked subcutaneously on the dorsum of the foot. The sural nerve is blocked subcutaneously at one fingerbreadth distal to the tip of the lateral malleolus. The results have been excellent to good. No adverse reactions occurred affecting the central nervous system or the myocardium. abstract_id: PUBMED:26564732 Use of peripheral blocks and tourniquets in foot surgery: A survey of Australian orthopaedic foot and ankle surgeons. Background: The most common reasons for unplanned admission to the hospital from outpatient surgery have the potential to be minimised or eliminated by peripheral nerve blocks (PNB). Tourniquets are commonly used in elective extremity surgery, but their use is mostly guided by personal preference and does not correlate with the existing literature. Our aim was to explore the current practice of PNB and tourniquet use by foot and ankle surgeons in Australia. Methods: The Australian Orthopaedic Foot and Ankle Surgery Society (AOFAS) annual meeting was held in Adelaide in 2011. Members were asked to complete an electronic survey on their current use of peripheral nerve blocks and tourniquets. Two specific elective case scenarios were included for comment, one pertaining to forefoot pathology, the other hindfoot pathology. Results: Twenty-three AOFAS members replied to the survey, an overall response rate of 76.6%. Of these, only two surgeons did not use ankle blocks in elective surgery and none were prepared to operate without a tourniquet. Most Australian foot and ankle surgeons were reluctant to use local anaesthetic techniques without an accompanying GA.
Conclusions: While the literature suggests that GA may add to complications without any benefit for the procedure and that distally based tourniquets may add benefit, these are not the favoured techniques among Australian foot and ankle surgeons. abstract_id: PUBMED:36923376 Does the pain sensitivity questionnaire correlate with tourniquet pain in patients undergoing ankle surgery? Background: Tourniquet pain is the most prominent problem in ankle surgery, and there is no proper method to predict it. It has been reported that pain sensitivity questionnaires can evaluate the pain sensitivity of subjects; their potential to predict tourniquet pain in ankle surgery would therefore be of practical value. Methods: One hundred and twenty patients undergoing ankle surgery were included in this study. The pain sensitivity questionnaire (PSQ) and self-rating anxiety scale (SAS) were completed before the operation. The methods included an ultrasound-guided popliteal sciatic nerve block, a femoral nerve block, and a proximal thigh tourniquet. The pressure of the tourniquet was set according to the systolic blood pressure (SBP + 100 mmHg). A visual analogue scale (VAS) was used to assess the tourniquet pain. Also, the onset time of tourniquet pain ≥4 VAS units was recorded. Results: The PSQ-total and PSQ-minor scores were significantly correlated with the onset time of tourniquet pain ≥4 VAS units (r = -0.763, r = -0.731, P < 0.001). The PSQ-total score <6.5 group gave significantly lower ratings for items 3, 4, 14, and 16 in the PSQ survey compared with the PSQ-total score ≥6.5 group (P < 0.05). Patients with high pain sensitivity had a higher need for analgesic drugs (P < 0.001). PSQ-total score ≥6.5 (OR = 185.8, 95% CI = 39.8-1,437.6, P < 0.001), sex (male, OR = 0.11, 95% CI = 0.018-0.488, P < 0.05), and age (OR = 0.92, 95% CI = 0.842-0.995, P < 0.05) were risk factors for reporting tourniquet pain ≥4 VAS units within 30 min. Conclusion: The PSQ score was found to correlate with intraoperative tourniquet pain. In addition, sex and age also affect the time to onset of intraoperative tourniquet pain. abstract_id: PUBMED:21733421 Ankle tourniquet pain control in forefoot surgery: a randomized study. Background: Forefoot surgery is often performed under regional anesthesia in awake patients, using a tourniquet or Esmarch bandage to obtain a bloodless field. The purpose of this study was to examine the value of, and need for, local tourniquet pain control using a local subcutaneous analgesic mixture in patients undergoing forefoot surgery under ankle block anesthesia. Materials And Methods: We prospectively randomized 56 patients who underwent forefoot surgery under ankle block to receive either a subcutaneous local anesthetic mixture under the tourniquet or no additional anesthetic. We recorded local tourniquet pain scores (VAS 0 to 100) and skin condition during and after the procedure. Results: The tourniquet was quite tolerable in both groups, with an average VAS score of 7 to 21. No difference was observed between groups throughout most of the procedure. No correlation between VAS scores and procedure length or patient's age or gender was found. Conclusion: An ankle tourniquet was well tolerated by patients without need for local anesthetic beneath the cuff. Answer: The timing of an ankle block in relation to the inflation of a tourniquet does appear to matter in terms of postoperative pain control in forefoot reconstruction.
A study indicated that administering an ankle block after the inflation of a thigh tourniquet provided better pain control at 24 hours postoperatively, a difference that was statistically significant (PUBMED:23337270). Another study found that while the timing of the ankle block itself did not significantly affect pain control, the combination of an ankle block with additional local anesthetic placed around the incision at the end of the procedure (administered after tourniquet inflation) was more effective than an ankle block alone (PUBMED:29374988). This suggests that an ankle block combined with incisional anesthetic at closure, acting synergistically, is the ideal combination for postoperative pain control in foot surgery. Furthermore, the use of an ankle block alone for forefoot surgery has been reported to be safe and effective, providing lasting post-operative analgesia and high patient acceptance and satisfaction (PUBMED:24694562). The use of a pneumatic ankle tourniquet in conjunction with regional ankle block anesthesia has also been shown to be a reasonable alternative to the standard thigh tourniquet for surgery of the foot, without causing neurologic or vascular damage (PUBMED:1398364). In summary, the timing of an ankle block in relation to tourniquet inflation does have implications for postoperative pain control in forefoot reconstruction, with better outcomes observed when the block is administered after tourniquet inflation, especially when combined with additional local anesthetic at the incision site.
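The trials above compare mean VAS scores between timing groups without naming their statistical tests. Purely as an illustration, a nonparametric two-sample comparison such as the Mann-Whitney U test is one common choice for VAS data; the sketch below runs it on simulated scores centred on the 24-hour group means (4.5 vs 6.3) reported by the first study. The test choice and the simulated data are assumptions, not the authors' actual analysis.

```python
# Illustrative comparison of 24-hour VAS scores between two block-timing
# groups using a Mann-Whitney U test (simulated data; hypothetical choice
# of test, since the cited abstracts do not specify their methods).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# 30 patients per group, scores clipped to the 0-10 VAS range
block_after_tourniquet = np.clip(rng.normal(4.5, 1.5, 30), 0, 10)
block_before_tourniquet = np.clip(rng.normal(6.3, 1.5, 30), 0, 10)

stat, p = mannwhitneyu(block_after_tourniquet, block_before_tourniquet)
print(f"U = {stat:.1f}, p = {p:.4f}")  # small p suggests a genuine group difference
```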
Instruction: Resident job satisfaction: will 80 hours make a difference? Abstracts: abstract_id: PUBMED:15547569 Resident job satisfaction: will 80 hours make a difference? Objective: This study was undertaken to assess job satisfaction and quality of life among obstetrics and gynecology residents before the 80-hour work week. Study Design: We administered a job satisfaction survey to residents before July 1, 2003, assessing satisfaction with residency training, indicators of current quality of life, and predictions for the effect of reduced work hours. Results: Residents were satisfied with training, with important outliers, including leisure time, ability to pursue educational reading, and surgical experience. We created job satisfaction facets that were generally reliable constructs and valid predictors for overall residency satisfaction. Residents predict more free time and a healthier lifestyle under the new requirements, but do not anticipate using additional time to study or teach. Conclusion: Job satisfaction facets for residents are proposed here and may be refined through further study. Lower scores for surgical experience are of concern in light of decreasing work hours. Educators must monitor self-directed learning efforts under new work hours. abstract_id: PUBMED:34803338 The nonlinear consequences of working hours for job satisfaction: The moderating role of job autonomy. Overwork is a common phenomenon worldwide. Although previous studies have found that long working hours can cause physical and mental health problems in employees, the nature of the relationship between working hours and job satisfaction remains little understood. We have theorised that there is a curvilinear association between working hours and job satisfaction, and tested this hypothesis. A total of 771 adult Chinese employees submitted self-reported measures of working hours, job satisfaction, and job autonomy. The results show that working hours have an inverted U-shaped association with job satisfaction. Work scheduling autonomy and decision-making autonomy moderate this relationship. Here we present our data and discuss their theoretical and practical implications. Supplementary Information: The online version contains supplementary material available at 10.1007/s12144-021-02463-3. abstract_id: PUBMED:23616719 Implementing the 2009 Institute of Medicine recommendations on resident physician work hours, supervision, and safety. Long working hours and sleep deprivation have been a facet of physician training in the US since the advent of the modern residency system. However, the scientific evidence linking fatigue with deficits in human performance, accidents and errors in industries from aeronautics to medicine, nuclear power, and transportation has mounted over the last 40 years. This evidence has also spawned regulations to help ensure public safety across safety-sensitive industries, with the notable exception of medicine. In late 2007, at the behest of the US Congress, the Institute of Medicine embarked on a year-long examination of the scientific evidence linking resident physician sleep deprivation with clinical performance deficits and medical errors. 
The Institute of Medicine's report, entitled "Resident duty hours: Enhancing sleep, supervision and safety", published in January 2009, recommended new limits on resident physician work hours and workload, increased supervision, a heightened focus on resident physician safety, training in structured handovers and quality improvement, more rigorous external oversight of work hours and other aspects of residency training, and the identification of expanded funding sources necessary to implement the recommended reforms successfully and protect the public and resident physicians themselves from preventable harm. Given that resident physicians comprise almost a quarter of all physicians who work in hospitals, and that taxpayers, through Medicare and Medicaid, fund graduate medical education, the public has a deep investment in physician training. Patients expect to receive safe, high-quality care in the nation's teaching hospitals. Because it is their safety that is at issue, their voices should be central in policy decisions affecting patient safety. It is likewise important to integrate the perspectives of resident physicians, policy makers, and other constituencies in designing new policies. However, since its release, discussion of the Institute of Medicine report has been largely confined to the medical education community, led by the Accreditation Council for Graduate Medical Education (ACGME). To begin gathering these perspectives and developing a plan to implement safer work hours for resident physicians, a conference entitled "Enhancing sleep, supervision and safety: What will it take to implement the Institute of Medicine recommendations?" was held at Harvard Medical School on June 17-18, 2010. This White Paper is a product of a diverse group of 26 representative stakeholders bringing relevant new information and innovative practices to bear on a critical patient safety problem. Given that our conference included experts from across disciplines with diverse perspectives and interests, not every recommendation was endorsed by each invited conference participant. However, every recommendation made here was endorsed by the majority of the group, and many were endorsed unanimously. Conference members participated in the process, reviewed the final product, and provided input before publication. Participants provided their individual perspectives, which do not necessarily represent the formal views of any organization. In September 2010 the ACGME issued new rules to go into effect on July 1, 2011. Unfortunately, they stop considerably short of the Institute of Medicine's recommendations and those endorsed by this conference. In particular, the ACGME only applied the limitation of 16 hours to first-year resident physicians. Thus, it is clear that policymakers, hospital administrators, and residency program directors who wish to implement safer health care systems must go far beyond what the ACGME will require. We hope this White Paper will serve as a guide and provide encouragement for that effort. RESIDENT PHYSICIAN WORKLOAD AND SUPERVISION: By the end of training, a resident physician should be able to practice independently. Yet much of resident physicians' time is dominated by tasks with little educational value. The caseload can be so great that inadequate reflective time is left for learning based on clinical experiences. In addition, supervision is often vaguely defined and discontinuous.
Medical malpractice data indicate that resident physicians are frequently named in lawsuits, most often for lack of supervision. The recommendations are: The ACGME should adjust resident physicians' workload requirements to optimize educational value. Resident physicians as well as faculty should be involved in work redesign that eliminates nonessential and noneducational activity from resident physician duties. Mechanisms should be developed for identifying in real time when a resident physician's workload is excessive, and processes developed to activate additional providers. Teamwork should be actively encouraged in delivery of patient care. Historically, much of medical training has focused on individual knowledge, skills, and responsibility. As health care delivery has become more complex, it will be essential to train resident and attending physicians in effective teamwork that emphasizes collective responsibility for patient care and recognizes the signs, both individual and systemic, of a schedule and working conditions that are too demanding to be safe. Hospitals should embrace the opportunities that resident physician training redesign offers. Hospitals should recognize and act on the potential benefits of work redesign, e.g., increased efficiency, reduced costs, improved quality of care, and resident physician and attending job satisfaction. Attending physicians should supervise all hospital admissions. Resident physicians should directly discuss all admissions with attending physicians. Attending physicians should be both cognizant of and have input into the care patients are to receive upon admission to the hospital. In-house supervision should be required for all critical care services, including emergency rooms, intensive care units, and trauma services. Resident physicians should not be left unsupervised to care for critically ill patients. In settings in which the acuity is high, physicians who have completed residency should provide direct supervision for resident physicians. Supervising physicians should always be physically in the hospital for supervision of resident physicians who care for critically ill patients. The ACGME should explicitly define "good" supervision by specialty and by year of training. Explicit requirements for intensity and level of training for supervision of specific clinical scenarios should be provided. Centers for Medicare and Medicaid Services (CMS) should use graduate medical education funding to provide incentives to programs with proven, effective levels of supervision. Although this action would require federal legislation, reimbursement rules would help to ensure that hospitals pay attention to the importance of good supervision and require it from their training programs. RESIDENT PHYSICIAN WORK HOURS: Although the IOM "Sleep, supervision and safety" report provides a comprehensive review and discussion of all aspects of graduate medical education training, the report's focal point is its recommendations regarding the hours that resident physicians are currently required to work. A considerable body of scientific evidence, much of it cited by the Institute of Medicine report, describes deteriorating performance in fatigued humans, as well as specific studies on resident physician fatigue and preventable medical errors. The question before this conference was what work redesign and cultural changes are needed to reform work hours as recommended by the Institute of Medicine's evidence-based report?
Extensive scientific data demonstrate that shifts exceeding 12-16 hours without sleep are unsafe. Several principles should be followed in efforts to reduce consecutive hours below this level and achieve safer work schedules. The recommendations are: Limit resident physician work hours to 12-16 hour maximum shifts. A minimum of 10 hours off duty should be scheduled between shifts. Resident physician input into work redesign should be actively solicited. Schedules should be designed that adhere to principles of sleep and circadian science; this includes careful consideration of the effects of multiple consecutive night shifts, and provision of adequate time off after night work, as specified in the IOM report. Resident physicians should not be scheduled up to the maximum permissible limits; emergencies frequently occur that require resident physicians to stay longer than their scheduled shifts, and this should be anticipated in scheduling resident physicians' work shifts. Hospitals should anticipate the need for iterative improvement as new schedules are initiated; be prepared to learn from the initial phase-in, and change the plan as needed. As resident physician work hours are redesigned, attending physicians should also be considered; a potential consequence of resident physician work hour reduction and increased supervisory requirements may be an increase in work for attending physicians; this should be carefully monitored, and adjustments to attending physician work schedules made as needed to prevent unsafe work hours or working conditions for this group. "Home call" should be brought under the overall limits of working hours; workload and hours should be monitored in each residency program to ensure that resident physicians and fellows on home call are getting sufficient sleep. Medicare funding for graduate medical education in each hospital should be linked with adherence to the Institute of Medicine limits on resident physician work hours. MOONLIGHTING BY RESIDENT PHYSICIANS: The Institute of Medicine report recommended including external as well as internal moonlighting in working hour limits. The recommendation is: All moonlighting work hours should be included in the ACGME working hour limits and actively monitored. (ABSTRACT TRUNCATED) abstract_id: PUBMED:30009036 Working hours and depressive symptoms: the role of job stress factors. Background: South Korea is one of the countries with the longest working hours among the OECD countries. The aim of this study was to evaluate the effect of working hours on depressive symptoms and the role of job stress factors in the relationship between the two variables among employees in South Korea. Methods: This study used data from the Korea Working Conditions Survey in 2014. Study subjects included 23,197 employees aged 19 years or older who worked more than 35 h per week. Working hours were categorized into 35-39, 40, 41-52, 53-68, and more than 68 h per week. Depressive symptoms were assessed using the WHO's Well-Being Index with a cut-off score of 13. We calculated prevalence ratios of depressive symptoms according to working hours using log-binomial regression. Through the percentage change in prevalence ratios, we identified the extent to which job stress factors explain depressive symptoms.
Results: The risks of depressive symptoms were significantly higher in people who worked 35-39 h per week (PR: 1.09, CI: 1.01-1.18), 53-68 h/week (PR: 1.21, CI: 1.16-1.25), and more than 68 h/week (PR: 1.14, CI: 1.07-1.21) than in those who worked 40 h/week, after adjusting for confounding variables. Job stress explained the effects of long working hours on depressive symptoms in about 20-40% of the groups working more than 40 h/week. Among the job stress factors, social support explained 10-30%, showing the highest explanatory power across all working-hour groups. Reward explained 15-30% in the groups working more than 52 h and was the most important factor in the group exceeding 68 h. Conclusions: We showed that working hours could be an independent risk factor for depressive symptoms in employees. To improve workers' mental health, it is important to strengthen social support in the workplace, to provide adequate rewards for work, and ultimately to regulate working hours appropriately. abstract_id: PUBMED:36408434 Job Satisfaction among Resident Doctors of a Tertiary Care Hospital in South Delhi. Background: Doctors' job satisfaction is crucial to the health service to ensure commitment, effective training, service provision, and retention. Job satisfaction matters to doctors for their happiness, fulfilment, service to patients, and duty to employers. The quality of performance in the health sector depends to a large extent on whether healthcare providers are satisfied or dissatisfied with their jobs. Objectives: This study assessed the level of, and factors determining, job satisfaction among resident doctors in a federal tertiary institution. Material And Methods: A cross-sectional study was conducted among the resident doctors of a government tertiary care hospital. Data were collected using a self-reported questionnaire consisting of 49 items under seven domains of job satisfaction, where higher values indicated a higher level of satisfaction. The average scores of items were computed to construct factor scores for each individual. A Chi-square test was applied. Results: The proportion of resident doctors satisfied with their jobs was found to be 80.9%. On adjustment, the odds of being satisfied were found to be higher in the older age groups, among males, and among doctors posted in clinical departments. Conclusion: Most respondents in this study were satisfied with their jobs, although only a minority were satisfied with their monetary and infrastructure facilities. There is a need to address these issues to enhance healthcare quality, especially in the public sector. abstract_id: PUBMED:38025934 The effects of resident work hours on well-being, performance, and education: A review from a Japanese perspective. This article examines the impact of working-hour restrictions on the well-being, performance, and education of medical residents in Japan. Despite Japan's plan to introduce new regulations for resident working hours by 2024, there is still an ongoing debate regarding their appropriateness. This review provides a comprehensive overview of the current regulations of resident working hours worldwide, with a specific focus on weekly hours. The varying regulations are highlighted, including the 80-hour-per-week regulation in the United States and the 48-hour-per-week regulation in the European Union, which has influenced other regions. The article also discusses the effectiveness of working-hour restrictions on residents' mental health, with shorter working hours having potentially greater benefits.
However, the impacts on medical safety and resident education are mixed, and further reduction in working hours must be carefully considered to avoid adverse effects. The planned changes to working-hour limits for residents in Japan offer a unique opportunity to gain new evidence on the impact of such regulations, which will be of interest to policymakers and researchers worldwide. abstract_id: PUBMED:37327639 The Demands of Surgery Residency: More Than Just Duty Hours? Introduction: Efforts to improve surgical resident well-being could be accelerated with an improved understanding of resident job demands and resources. In this study, we sought to obtain a clearer picture of surgery resident job demands by assessing how residents distribute their time both inside and outside of the hospital. Furthermore, we aimed to elucidate residents' perceptions about current duty hour regulations. Methods: A cross-sectional survey was sent to 1098 surgical residents at 27 US programs. Responses regarding work hours, demographics, well-being (utilizing the physician well-being index), and perceptions of duty hours in relation to education and rest were collected. Data were evaluated using descriptive statistics and content analysis. Results: A total of 163 residents (14.8% response rate) were included in the study. Residents reported a median of 78.0 total patient care hours per week. Trainees spent 12.5 h on other professional activities. More than 40% of residents were "at risk" for depression and suicide based on physician well-being index scores. Four major themes associated with education and rest were identified: 1) duty hour definitions and reporting mechanisms do not completely reflect the amount of work residents perform, 2) quality patient care and educational opportunities do not fit neatly within the duty hour framework, 3) resident perceptions of duty hours are impacted by the educational environment, and 4) long work hours and lack of adequate rest negatively affect well-being. Conclusions: The breadth and depth of trainee job demands are not accurately captured by current duty hour reporting mechanisms, and residents do not believe that their current work hours allow for adequate rest or even completion of other clinical or academic tasks outside of the hospital. Many residents are unwell. Duty hour policies and resident well-being may be improved with a more holistic accounting of resident job demands and greater attention to the resources that residents have to offset those demands. abstract_id: PUBMED:27814318 Why the long hours? Job demands and social exchange dynamics. Background: This study investigates the determinants of long working hours from the perspectives of the demand-control model [Karasek, 1979] and social exchange theory [Blau, 1964; Gouldner, 1960]. Objective: These two theoretical perspectives are tested to understand why individuals work longer (or shorter) hours. Methods: The hypotheses are tested with a representative sample of 1,604 employed Canadians. Results: In line with Karasek's model, the results support the view that high job demands are positively associated with longer work hours. The social exchange perspective would predict a positive association between skill discretion and work hours. This hypothesis was supported for individuals with a higher education degree. Finally, the results support a positive association between active jobs and longer work hours.
Conclusions: Our research suggests that job demands and social exchange dynamics need to be considered together in the explanation of longer (or shorter) work hours. abstract_id: PUBMED:27533894 A comparison of job descriptions for nurse practitioners working in out-of-hours primary care services: implications for workforce planning, patients and nursing. Aims And Objectives: To compare and contrast job descriptions for nursing roles in out-of-hours services to obtain a general understanding of what is required for a nurse working in this job. Background: Out-of-hours services provide nursing services to patients either through telephone or face-to-face contact in care centres. Many of these services are newly created, giving job opportunities to nurses working in this area. It is vital that nurses know what their role entails, but also that patients and other professionals know how out-of-hours nurses function in terms of competence and clinical role. Design: Content analysis of out-of-hours job descriptions. Method: Content analysis of a convenience sample of 16 job descriptions of out-of-hours nurses from five out-of-hours care providers across England was undertaken. The findings were narratively synthesised, supported by tabulation. Results: Key role descriptors were examined in terms of job titles, managerial skills, clinical skills, professional qualifications and previous experience. Content analysis of each out-of-hours job description revealed a lack of consensus in the clinical competence and skills required in relation to job title, although there were many similarities in skills across all the roles. Conclusion: This study highlights key differences and some similarities between roles and job titles in out-of-hours nursing but requires a larger study to inform workforce planning. Relevance To Clinical Practice: Out-of-hours nursing is a developing area of practice which requires clarity to ensure patient safety and quality care. abstract_id: PUBMED:37253431 Staffing Shortages, Staffing Hours, and Resident Deaths in US Nursing Homes During the COVID-19 Pandemic. Objectives: Staffing shortages at nursing homes during the COVID-19 pandemic may have impacted care providers' staffing hours and affected residents' care and outcomes. This study examines the association of staffing shortages with staffing hours and resident deaths in nursing homes during the COVID-19 pandemic. Design: This study measured staffing hours per resident using payroll data and measured weekly resident deaths and staffing shortages using the Centers for Disease Control and Prevention's National Healthcare Safety Network data. Multivariate linear regressions with facility and county-week fixed effects were used to investigate the association of staffing shortages with staffing hours and resident deaths. Setting And Participants: 15,212 nursing homes. Measures: The primary outcomes included staffing hours per resident of registered nurses (RNs), licensed practical nurses (LPNs), and certified nursing assistants (CNAs) and weekly total deaths per 100 residents. Results: Between May 31, 2020, and May 15, 2022, 18.4% to 33.3% of nursing homes reported staffing shortages during any week. Staffing shortages were associated with lower staffing hours per resident, with a 0.009 decrease in RN hours per resident (95% CI 0.005-0.014), a 0.014 decrease in LPN hours per resident (95% CI 0.010-0.018), and a 0.050 decrease in CNA hours per resident (95% CI 0.043-0.057).
These are equivalent to a 1.8%, 1.7%, and 2.4% decline, respectively. There was a positive association between staffing shortages and resident deaths with 0.068 (95% CI 0.048-0.088) total deaths per 100 residents. This was equivalent to an increase of 10.5%. Conclusion And Implications: Our results showed that self-reported staffing shortages were associated with a statistically significant decrease in staffing hours and with a statistically significant increase in resident deaths. These results suggest that addressing staffing shortages in nursing homes can save lives. Answer: The question of whether an 80-hour workweek will make a difference in resident job satisfaction is complex and multifaceted. The study conducted before the implementation of the 80-hour workweek among obstetrics and gynecology residents indicated that while residents were generally satisfied with their training, there were concerns about the lack of leisure time, the ability to pursue educational reading, and surgical experience (PUBMED:15547569). The residents predicted more free time and a healthier lifestyle under the new requirements but did not anticipate using the additional time to study or teach. This suggests that while the reduction in work hours may improve some aspects of residents' quality of life, it may not necessarily translate into increased educational activities. Another study examining the relationship between working hours and job satisfaction found a curvilinear association, indicating that job satisfaction increases with working hours up to a certain point, after which it begins to decline (PUBMED:34803338). This suggests that there may be an optimal range of working hours that maximizes job satisfaction, and exceeding this range could lead to decreased satisfaction. The study also highlighted the moderating role of job autonomy in this relationship. The implementation of the Institute of Medicine's recommendations on resident physician work hours aimed to address the issues of fatigue and safety, suggesting that long working hours and sleep deprivation can negatively impact clinical performance and lead to medical errors (PUBMED:23616719). The recommendations included limiting resident physician work hours to 12-16 hour maximum shifts and ensuring a minimum of 10 hours off duty between shifts. In Japan, the introduction of working-hour restrictions is being debated, with concerns about the impact on residents' mental health, medical safety, and education (PUBMED:36408434). The effects of these restrictions on well-being and performance are mixed, indicating that further reduction in working hours must be carefully considered. Overall, while reducing work hours to 80 per week may improve certain aspects of residents' lives, such as providing more free time and potentially reducing fatigue, it is not clear that this alone will lead to increased job satisfaction or better educational outcomes. The balance between work demands, autonomy, and the educational environment plays a crucial role in determining job satisfaction among residents (PUBMED:15547569; PUBMED:34803338; PUBMED:23616719; PUBMED:36408434).
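A numerical footnote to the staffing-shortage abstract above (PUBMED:37253431): the reported absolute decreases and their percentage equivalents imply baseline staffing levels that the abstract itself does not state. A minimal sketch in Python, assuming the percentages are relative to mean baseline hours per resident-day (the back-computed baselines are an inference for illustration, not reported figures):

# Back-compute implied baseline staffing hours per resident-day from the
# reported absolute decreases (hours) and their stated fractional declines.
# These baselines are inferred for illustration only (PUBMED:37253431).
reported = {
    "RN":  (0.009, 0.018),   # (absolute decrease in hours, fractional decline)
    "LPN": (0.014, 0.017),
    "CNA": (0.050, 0.024),
}
for role, (abs_drop, frac_drop) in reported.items():
    implied_baseline = abs_drop / frac_drop
    print(f"{role}: implied baseline ~ {implied_baseline:.2f} hours per resident-day")
# Prints roughly: RN 0.50, LPN 0.82, CNA 2.08 hours per resident-day.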
Instruction: Clinical indications for hysterectomy route: patient characteristics or physician preference? Abstracts: abstract_id: PUBMED:7503184 Clinical indications for hysterectomy route: patient characteristics or physician preference? Objectives: Our purpose was to compare the indications, characteristics, surgical management, and outcomes of patients undergoing total abdominal hysterectomy, total vaginal hysterectomy, and laparoscopically assisted vaginal hysterectomy and to assess whether patients who underwent abdominal hysterectomy might have been candidates for laparoscopically assisted vaginal hysterectomy and whether patients who underwent total abdominal hysterectomy or laparoscopically assisted vaginal hysterectomy might have been candidates for total vaginal hysterectomy. Study Design: The hospital charts of 502 women who underwent elective inpatient hysterectomy at a single large general hospital between January 1992 and November 1993 were abstracted retrospectively by use of a structured data abstraction instrument. The study included patients operated on by 16 different experienced gynecologists. Data were collected regarding patient demographic characteristics, clinical history and preoperative physical examination, indications for surgery, route of hysterectomy, intraoperative findings, pathologic study results, and outcomes in the immediate postoperative hospitalization period. Results: Patient age, race, weight, parity, and previous surgical history were significantly associated with hysterectomy type. Although no nulliparous patients and no patients with a uterine size estimated preoperatively to be > 12 weeks of gestation underwent total vaginal hysterectomy, 16.6% and 30.6% of laparoscopically assisted vaginal hysterectomy patients had these characteristics, respectively. A total of 6.6% of total abdominal hysterectomy cases and 16.7% of laparoscopically assisted vaginal hysterectomy cases lacked an obvious justification for an abdominal procedure. On average, surgical time was 23 minutes longer for laparoscopically assisted vaginal hysterectomy than for total abdominal hysterectomy and 30 minutes longer for total abdominal hysterectomy than for total vaginal hysterectomy. When uterine size or configuration impaired access to uterine vessels, laparoscopically assisted vaginal hysterectomy was difficult to perform. Postoperative morbidity was similar across the three procedures, but average length of hospital stay was 2.8 days, 3.5 days, and 4.4 days for laparoscopically assisted vaginal hysterectomy, total vaginal hysterectomy, and total abdominal hysterectomy, respectively. Conclusions: Although there are some consistent and statistically significant differences in the characteristics of patients undergoing total abdominal hysterectomy versus laparoscopically assisted vaginal hysterectomy versus total vaginal hysterectomy, laparoscopically assisted vaginal hysterectomy is enabling many patients to avoid total abdominal hysterectomy. However, many patients undergoing total abdominal hysterectomy and laparoscopically assisted vaginal hysterectomy could probably undergo total vaginal hysterectomy instead. Clinical outcomes were similar regardless of type of hysterectomy performed. Practice style and personal preference of the surgeon thus may be playing a significant role in selection of hysterectomy type.
Laparoscopically assisted vaginal hysterectomy becomes technically difficult and conversion to total abdominal hysterectomy is more frequent when uterine size or configuration impairs access to uterine vessels. abstract_id: PUBMED:18874198 Benefits and indications of vaginal hysterectomy. N/A abstract_id: PUBMED:24922591 Hysterectomies at the Conakry university hospitals: social, demographic, and clinical characteristics, types, indications, surgical approaches, and prognosis The objectives of this study were to calculate the frequency of hysterectomies at the Conakry university hospitals (Donka Hospital and Ignace Deen Hospital), describe the women's social, demographic, and clinical characteristics, and identify the key indications, the surgical techniques used, and the prognosis. This was a 2-year descriptive study, retrospective for the first year (May 2011-April 2012) and prospective for the second (May 2012-April 2013), of 333 consecutive hysterectomies performed in the obstetrics and gynecology departments of these two hospitals. Hysterectomy is one of the surgical procedures most commonly performed in these departments (following cesarean deliveries), accounting for 4.4% of interventions. The profile of the women undergoing this surgery was that of a woman aged younger than 49 years (61%), married (75.7%), multiparous (33%), of childbearing age (61%), and with no history of abdominal or pelvic surgery (79.6%). Nearly all hysterectomies were total (95%, compared with 5% subtotal); the approach was abdominal in 82.25% of procedures and vaginal in 17.75%. The most common indication for surgery was uterine fibroids (39.6%), followed by genital prolapse (22.2%) and obstetric emergencies (17.8%). The average duration of surgery was 96 minutes for abdominal and 55 minutes for vaginal hysterectomies. The principal intraoperative complication was hemorrhage (12.31%), and the main postoperative complication was parietal suppuration (21.02%). The average length of hospital stay was 10.3 days for abdominal hysterectomies and 7.15 days for vaginal procedures. We recorded 14 deaths for a lethality rate of 4.2%; most of these deaths were associated with hemorrhagic shock during or after an obstetric hysterectomy (93%). Hysterectomy remains a common intervention in developing countries. Its indications are common during the pregnancy and postpartum period, with high morbidity and mortality rates. Improving obstetric coverage could reduce its indications. abstract_id: PUBMED:23635670 Variation in ovarian conservation in women undergoing hysterectomy for benign indications. Objective: Emerging data suggest that oophorectomy at the time of hysterectomy for benign indications may increase long-term morbidity and mortality. We performed a population-based analysis to estimate the rates of oophorectomy in women undergoing hysterectomy for benign indications. Methods: The Perspective database was used to estimate the rate of ovarian preservation in women aged 40-64 years who underwent hysterectomy for benign indications. Hierarchical mixed-effects regression models were developed to estimate the influence of patient, procedural, physician, and hospital characteristics on ovarian conservation. Between-hospital variation in ovarian preservation also was estimated. Results: Among 752,045 women, 348,972 (46.4%) underwent bilateral oophorectomy, whereas 403,073 (53.6%) had ovarian conservation.
Stratified by age, the rate of ovarian conservation was 74.3% for those younger than 40 years of age; 62.7% for those 40-44 years of age; 40.8% for those 45-49 years of age; 25.2% for those 50-54 years of age; 25.5% for those 55-59 years of age; and 31.0% for those 60-64 years of age. Younger age and more recent year of surgery had the strongest association with ovarian conservation. The observed patient, procedural, physician, and hospital characteristics accounted for only 46% of the total variation in the rate of ovarian conservation; 54% of the variability remained unexplained, suggesting a large amount of intrinsic between-hospital variation in the decision to perform oophorectomy. Conclusion: The rate of ovarian conservation is increasing, particularly among women younger than 50 years old. Although demographic and clinical factors influence the decision to perform oophorectomy, there appears to be substantial between-hospital variation in performance of oophorectomy that remains unexplained by measurable patient, physician, or hospital characteristics. Level Of Evidence: II. abstract_id: PUBMED:28177208 Minimally invasive hysterectomy for benign indications: an update. Hysterectomy is one of the most commonly performed surgeries worldwide. Indication for hysterectomy is most often benign, which includes conditions such as prolapse, abnormal uterine bleeding, fibroids and pelvic pain. A broad range of surgical approaches exists for hysterectomy, ranging from open to minimally invasive techniques. Under this minimally invasive umbrella, the following techniques are included: vaginal hysterectomy, laparoscopic hysterectomy, and variations of those two techniques, such as laparoscopic-assisted vaginal hysterectomy, robotic-assisted hysterectomy, laparo-endoscopic single-site laparoscopic hysterectomy, mini-laparoscopic hysterectomy, and natural orifice transluminal endoscopic surgery hysterectomy. As hysterectomy is being performed increasingly via a minimally invasive route, it is important that gynecologists are familiar with the established as well as emerging techniques for minimally invasive hysterectomy (MIH). Surgical planning is a complex process, which requires an in-depth and informed conversation between a patient and her physician. Patient preferences, surgeon skill and indication for surgery all should be taken into consideration when determining the most appropriate surgical approach. This article will review the different routes of MIH. Perioperative considerations will be discussed, as will the advantages and disadvantages of each minimally invasive approach. abstract_id: PUBMED:17486915 Laparoscopic hysterectomy: technique, indications Today, hysterectomy is, after caesarean section, the most frequent surgical intervention performed in fertile women. Introduced in 1989, laparoscopic hysterectomy remains uncommon: today, less than 5% of all hysterectomies are performed by laparoscopy. Nevertheless, after an adequate learning curve, laparoscopic hysterectomy is well suited to benign and even some malignant indications. In these conditions, the complication rate is similar to that of the other surgical routes. Currently the limitations of this technique are the very bulky uterus, contraindicated uterine morcellation, the lengthening of the operative time potentially generated by this technique, and the lack of experienced instructors. abstract_id: PUBMED:37533661 Pathologic Profile of Hysterectomy Cases in Saudi Arabia: A Tertiary Center Experience.
Background: Hysterectomies are increasingly being replaced by more conservative management modalities and are now only used for limited clinical indications. The agreement between these indications and the final pathology is understudied. Objectives: This study aimed to correlate the preoperative clinical diagnoses with the pathological findings detected in hysterectomies in Saudi women of different age groups. Materials And Methods: This retrospective study analyzed the preoperative clinical indications and the subsequent clinical and pathological data of all hysterectomies conducted at a single tertiary care hospital in the Eastern Province of Saudi Arabia between January 2010 and December 2021. Results: There was no decline in the frequency of hysterectomies across the study period. Abnormal uterine bleeding was the most common clinical indication across all age groups, followed by symptomatic fibroid and uterine prolapse in women aged <50 and ≥50 years, respectively. A total of 9.2% of the cases were indicated for therapeutic reasons, either as an emergency procedure for peripartum hemorrhage or for suspected uterine premalignant and malignant lesions. Besides these, 41.2% of patients had more than one pathology, with the most common combination being leiomyoma and adenomyosis (43.4%). The difference in age between malignant and benign cases (52.3 vs. 48.8 years, respectively) was not statistically significant (P = 0.109). Conclusions: Most of our cohort had benign lesions. Perceptible misuse of the procedure in this cohort was unlikely, as a large proportion of the patients had more than one pathology corresponding to each clinical indication. abstract_id: PUBMED:31449888 Incidence of occult malignancies identified during hysterectomies performed for benign indications. Objective: Hysterectomy is a commonly performed gynaecological procedure worldwide. Although most hysterectomies are performed for benign indications, the possibility of an unexpected malignancy in the final pathology results should be considered. The aim of our study was to investigate the evaluation and management of patients who underwent hysterectomy for benign reasons but were diagnosed with a malignancy in the final pathology results. Methods: We retrospectively examined the medical records of patients who underwent hysterectomy for benign indications between 2011 and 2017, and recorded the information obtained from these patient files. Results: In total, 1050 patients who underwent hysterectomy for benign indications were included in the study. Among these patients, 127 underwent hysterectomy for uterine prolapse, 230 for uterine myoma, 223 for treatment-resistant menometrorrhagia, 150 for treatment-resistant menometrorrhagia and uterine myoma, 61 for endometriosis, 108 for ovarian cysts, 45 for endometrial polyps, and 106 for treatment-resistant menometrorrhagia and ovarian cysts. In 13 of these patients, malignancy was unexpectedly identified via the final pathology results. Seven of these patients had sarcoma, three had cervical cancer, two had ovarian cancer, and one had metastasis of a haematological malignancy. Conclusion: Malignancy was identified in 13 of the 1050 patients included in our study. The incidence of unexpected malignancy in these patients who underwent hysterectomies performed for benign indications was determined to be 1.23%. Seven of these 13 patients had sarcoma. Our results demonstrate the absence of reliable data for preoperative detection of sarcoma.
abstract_id: PUBMED:8591927 The impact of nonclinical factors on practice variations: the case of hysterectomies. Objective: This study investigates the role of nonclinical factors (physician characteristics) in explaining variations in hysterectomy practice patterns. Data Sources And Study Setting: Patient discharge data are obtained from the Arizona state discharge database for the years 1989-1991. Physician data are obtained from the Arizona State Medical Association. The analyses are based on 36,104 cases performed by 339 physicians in 43 hospitals. Study Design: This article measures the impact of physician factors on the decision to perform a hysterectomy, controlling for a host of patient and hospital characteristics. Physician factors include background characteristics and training, medical experience, and physician's practice style. Physician effects are evaluated in terms of their overall contribution to the explanatory power of regression models, as well as in terms of specific hypotheses to be tested. Data Collection: The sources of data were linked to produce one record per patient. Principal Findings: As a set, physician factors account for a statistically significant increase in the explanatory power of the model after addition of patient and hospital effects. Parameter estimates provide further support for the hypothesized effects of physicians' background, experience, and practice characteristics. Conclusions: Overall, the results confirm that nonclinical (physician) factors play a statistically significant role in the hysterectomy decision. Substantively, however, these factors play a smaller, secondary role compared to that of clinical and patient factors in explaining practice variations in hysterectomies. The results suggest that efforts to reduce unnecessary hysterectomies should be directed at identifying the appropriate clinical indications for hysterectomy and disseminating this information to physicians and patients. This may require such intervention strategies as continuing clinical education, promulgation of explicit practice guidelines, peer review, public education, and greater understanding and inclusion of patient preference in the decision process. abstract_id: PUBMED:21623739 Use of vaginal hysterectomy in Denmark: rates, indications and patient characteristics. Objective: To describe the use of vaginal, abdominal and laparoscopic hysterectomy in Denmark from 1999 to 2008, the influence of national guidelines and the patient and procedure-related characteristics associated with the choice of vaginal hysterectomy. Design: Nationwide register-based cohort study. Setting: Danish Hysterectomy Database and Danish National Patient Registry. Population: All women with a hysterectomy for benign indications in Denmark from 1999 to 2008. Methods: The incidence rate/100,000 women was used to describe the route of surgery in hysterectomies of 50,755 women. A multiple logistic regression analysis was done to examine the association between patient- and procedure-related characteristics and choice of surgical procedure including 20,486 women. Main Outcome Measures: Trends in surgical approach from 1999 to 2008. Patient- and surgery-related characteristics associated with vaginal hysterectomy from 2004 to 2008. Results: There was an overall increase in the use of vaginal hysterectomies from 12% to 34%, a decrease in the use of abdominal hysterectomies and a consistent number of laparoscopic hysterectomies.
The number of vaginal hysterectomies varied between regions, ranging from 2% to 86%. The use of vaginal hysterectomy was not dependent on the total number of hysterectomies performed at the hospital. The characteristics associated with vaginal hysterectomy were higher age, smaller uterus size, indications for surgery (genital prolapse and severe uterine bleeding), less smoking and moderate alcohol intake. Conclusions: Vaginal hysterectomy has replaced abdominal hysterectomy increasingly but cannot be directly correlated to the implementation of national guidelines as there was large national variation. Several characteristics are significantly associated with vaginal hysterectomy. Answer: The clinical indications for the route of hysterectomy appear to be influenced by both patient characteristics and physician preference. Patient age, race, weight, parity, and previous surgical history were significantly associated with the type of hysterectomy performed, with certain characteristics such as nulliparity and larger uterine size being less common in patients undergoing total vaginal hysterectomy (PUBMED:7503184). However, the study also suggests that many patients undergoing total abdominal hysterectomy (TAH) and laparoscopically assisted vaginal hysterectomy (LAVH) could potentially undergo total vaginal hysterectomy (TVH) instead, indicating that practice style and personal preference of the surgeon may play a significant role in the selection of hysterectomy type (PUBMED:7503184). Additionally, there is substantial between-hospital variation in the decision to perform oophorectomy at the time of hysterectomy for benign indications, which remains unexplained by measurable patient, physician, or hospital characteristics, further suggesting the influence of physician preference (PUBMED:23635670). The choice of surgical approach for hysterectomy is complex and should consider patient preferences, surgeon skill, and the indication for surgery (PUBMED:28177208). Moreover, the use of vaginal hysterectomy has increased over time and varies significantly between regions, which may not be directly correlated to the implementation of national guidelines, indicating that other factors such as physician preference may be at play (PUBMED:21623739). The study on hysterectomies at the Conakry university hospitals also highlights that the surgical approach is predominantly abdominal, with the most common indication being uterine fibroids (PUBMED:24922591). In conclusion, while patient characteristics are important determinants of the route of hysterectomy, physician preference and practice style also significantly influence the choice of procedure, as evidenced by the variability in surgical approaches and the lack of a direct correlation with clinical guidelines (PUBMED:7503184, PUBMED:23635670, PUBMED:21623739).
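Several of the studies cited in this record (PUBMED:8591927, PUBMED:21623739) express their findings as results of multiple logistic regression, i.e., as associations between patient or physician characteristics and the odds of a given hysterectomy route. A minimal sketch of how such an analysis is set up, using entirely hypothetical data (the variable names, coefficients, and values are illustrative assumptions, not taken from the studies):

import numpy as np
import statsmodels.api as sm

# Hypothetical cohort: one row per patient; outcome is 1 if the vaginal
# route was chosen, 0 otherwise. Predictors mimic the kinds of covariates
# used in the cited studies (age, uterus size, prolapse indication).
rng = np.random.default_rng(0)
n = 500
age = rng.normal(48, 8, n)
small_uterus = rng.integers(0, 2, n)
prolapse = rng.integers(0, 2, n)
true_logit = -8 + 0.12 * age + 0.9 * small_uterus + 1.4 * prolapse
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([age, small_uterus, prolapse]))
fit = sm.Logit(y, X).fit(disp=False)
odds_ratios = np.exp(fit.params)   # OR = exp(beta)
or_ci = np.exp(fit.conf_int())     # 95% CI on the odds-ratio scale
print(odds_ratios, or_ci)

On this scale, an odds ratio above 1 means the characteristic increases the odds of the vaginal route, which is the form in which the Danish register study reports the characteristics associated with vaginal hysterectomy.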
Instruction: Are psychotropic drugs used in pregnancy? Abstracts: abstract_id: PUBMED:21327172 Lactating mother and psychotropic drugs. Usage of psychotropics during pregnancy and lactation has always been a topic of debate and controversy. The debate stems from the potential adverse effects on the growing fetus or infants due to the transfer of psychotropic drugs through the placenta or breast milk of mothers receiving them, and the problem of discontinuing psychotropics in lactating mothers given the chance of relapse. However, most of the psychotropics are found to be relatively safe when used cautiously during the lactation phase. This article describes available data on the use of psychotropics in lactating mothers, in particular, in relation to the safety profile of infants. abstract_id: PUBMED:16783040 Psychotropics in pregnancy: weighing the risks. With the increase in the use of newer psychotropics, there is growing concern about their teratogenicity. Unfortunately, it is not possible to carry out prospective studies in pregnant women and as a result physicians caring for such patients have to rely on case reports, case series, and retrospective studies. Available evidence shows that the safety of psychotropics in pregnancy is still unresolved and the decision to prescribe psychotropics in pregnancy should be taken in the light of severity of mental disease, and drugs should be prescribed only when the potential risk to the foetus from exposure is outweighed by the risk of untreated maternal disorder. In this review we discussed the current evidence of the teratogenic risks with psychotropic drugs commonly used to treat psychiatric disorders and also focused on decision making in such patients. abstract_id: PUBMED:17455181 Are psychotropic drugs used in pregnancy? Purpose: To assess the prevalence and characteristics of psychiatric drug use in pregnancy. Methods: A prospective observational study was performed on a total of 1332 consecutive women admitted for delivery, during a 3-month period, in the public obstetric services of Tenerife Island (covering a population of 1 000 000 inhabitants). Results: Less than 4% (3.6%) of the women recognised having a psychiatric disorder, and only 2.5% were receiving psychiatric drug treatment at the moment they knew they were pregnant; of those, 68.7% introduced substantial modifications in their treatment at that moment, 47.9% did not report any change with respect to the period before pregnancy and 35.4% recognised that their mood was worse than previously. Although patients affected by a psychiatric disorder registered a higher rate of abdominal delivery, no differences in delivery or obstetric complications were found between women with and without psychiatric illness or in relation to psychiatric drug treatment. Conclusions: Compared to the literature, the studied population shows a lower rate of psychiatric problems and pharmacological treatment. This might reflect underrecognition or undertreatment. abstract_id: PUBMED:33926354 Prenatal medication exposure and epigenetic outcomes: a systematic literature review and recommendations for prenatal pharmacoepigenetic studies. When used during pregnancy, analgesics and psychotropics pass the placenta to enter the foetal circulation and may induce epigenetic modifications. Where such modifications occur, and whether they disrupt normal foetal development, are currently unanswered questions.
This field of prenatal pharmacoepigenetics has received increasing attention, with several studies reporting associations between in utero medication exposure and offspring epigenetic outcomes. Nevertheless, no recent systematic review of the literature is available. Therefore, the objectives of this review were to (i) provide an overview of the literature on the association of prenatal exposure to psychotropics and analgesics with epigenetic outcomes, and (ii) suggest recommendations for future studies within prenatal pharmacoepigenetics. We performed systematic literature searches in five databases. The eligible studies assessed human prenatal exposure to psychotropics or analgesics, with epigenetic analyses of offspring tissue as an outcome. We identified 18 eligible studies including 4,419 neonates exposed to either antidepressants, antiepileptic drugs, paracetamol, acetylsalicylic acid, or methadone. The epigenetic outcome in all studies was DNA methylation in cord blood, placental tissue or buccal cells. Although most studies found significant differences in DNA methylation upon medication exposure, almost no differences were persistent across studies for similar medications and sequencing methods. The reviewed studies were challenging to compare due to poor transparency in reporting, and heterogeneous methodology, design, genome coverage, and statistical modelling. We propose 10 recommendations for future prenatal pharmacoepigenetic studies considering both epidemiological and epigenetic perspectives. These recommendations may improve the quality, comparability, and clinical relevance of such studies. PROSPERO registration ID: CRD42020166675. abstract_id: PUBMED:21267317 The use of psychotropic drugs in pregnancy and lactation. The decision to use drugs during pregnancy can be complicated by many factors. The concerns of congenital malformation, the effects of drugs during labour, withdrawal effects on the newborn, and the exposure of the breast-fed newborn to drugs are some of the issues every physician is faced with when treating a pregnant woman. Drugs should be used during pregnancy only in situations where anticipated benefits far outweigh the possible risks. Approximately 20%-30% of women of childbearing age reportedly use antidepressants. It is absolutely essential that physicians know the effects of psychotropics on pregnancy and lactation before they issue a prescription. In this paper I have reviewed the effects of different classes of psychotropics on women during pregnancy and lactation. abstract_id: PUBMED:22556440 Unplanned pregnancies leading to psychotropic exposure in women with mental illness - Findings from a perinatal psychiatry clinic. Objective: The aims of this study were (a) to describe the sociodemographic and clinical profile of women with unplanned pregnancies and consequent exposure to psychotropic drugs, (b) to describe the nature and timing of psychotropic exposure during pregnancy among these women, and (c) to examine the outcome of decisions related to pregnancy following consultation at a perinatal psychiatric service. Materials And Methods: Women attending the perinatal psychiatry services referred for accidental exposure to psychotropics were assessed by structured interviews for the following details: sociodemographic details, clinical details, psychotropic drug use, advice given in the clinic, and outcome related to this advice.
Results: Fifty-three women were referred for counseling related to unplanned pregnancies and consequential psychotropic exposure. Forty-two women (79%) sought consultation in the first trimester. More than a third of the women, 19 (36%), were taking more than one psychotropic medication during the first consultation. Only 11 (20%) women had received any form of prepregnancy counseling prior to becoming pregnant. Of the 37 women who came for follow-up in the clinic, 35 (94%) continued the pregnancy. Conclusions: Unplanned pregnancies in women with mental illness are common and result in exposure to multiple psychotropic medications during the first trimester. The majority of women did not report having prepregnancy counseling, which needs to be an integral part of treatment and education. abstract_id: PUBMED:19742205 More questions than answers! Clinical dilemmas in psychopharmacology in pregnancy and lactation. Women of childbearing age frequently suffer from mental illness. Maternal psychiatric disorders may have a devastating impact on the fetus and the newborn. Thus treating or preventing relapse of these disorders during pregnancy and puerperium is a clinical and ethical duty with the necessity to avoid or minimize fetal or neonatal drug exposure. Though there are many guidelines and comprehensive reviews regarding drug safety in pregnancy and lactation, the application of these recommendations into clinical practice appears to be complex. Hence, we present some clinical questions with answers considering the available literature on safety of psychotropics in pregnancy and lactation. abstract_id: PUBMED:22165524 Psychotropics, pregnancy, lactation and neurodevelopmental issues: a review Young women are at risk of mental health problems during pregnancy and postpartum. It is particularly important that the mental health of childbearing women is stable. However, all psychotropic medications diffuse across the placenta and are excreted into breast milk. The fetus is thus at risk of teratogenicity and postnatal behavioural sequelae. Which psychotropics today can minimize the risk to the foetus while limiting the impact of morbidity on the mother? The medical literature of the last ten years is reviewed. The safety issue of psychotropic use during pregnancy and lactation is still unresolved. Little information is available concerning developmental outcome of children exposed to medication, especially about newer molecules even though recent case reports are encouraging. Larger standardized studies are still needed. abstract_id: PUBMED:36356427 Women with mental illness seeking conception counselling at a specialist perinatal psychiatry service in India. Aims: To study the clinical and sociodemographic profile of women with mental illness seeking conception counselling at a perinatal psychiatry service. To compare women seeking preconception counselling (PCC) with those who sought help after unplanned pregnancy with exposure (UPE) to psychotropics. Material And Methods: Data were extracted from the clinical records of women who attended perinatal outpatient services between 2014 and 2017. The details collected included sociodemographic details, illness details, contraception, infertility treatment and detailed treatment history. Results: Of the 766 women who consulted perinatal psychiatry services (2014-17), 179 (23.37%) had consulted for conception counselling.
Of these, 104 (58.1%) sought PCC, and 75 (41.9%) had UPE to psychotropics. PCC was sought by women with higher education, severe mental illness, and a longer duration of illness. Women with medical comorbidities and receiving infertility treatment also sought PCC. Also, women who had earlier undergone PCC were more likely to seek PCC for further pregnancies. Conclusions: Women with mental illness should be counselled regarding planned pregnancies, contraception, and the impact of psychotropics on pregnancy and the foetus. Also, liaison with other specialities like obstetricians, sonologists and paediatricians is essential. abstract_id: PUBMED:36640183 Exposure to psychotropic drugs before and during pregnancy: what has changed over the last two decades? Trends in prescribing psychotropic drugs before and during pregnancy may have changed over the years, but actual information is lacking. We therefore compared and assessed the exposure and acceptance rates of the antipsychotic (+ lithium), anxiolytic, sedative/hypnotic, antidepressant, and psychostimulant classes before and during pregnancy in the past two decades. All singleton pregnancies with ≥1 prescription of a psychotropic drug from six months before pregnancy until the child's birthdate were identified in the pregnancy subset of the IADB.nl prescription database. The prescription patterns of psychotropics were distinguished as continuation rate (CR), initiation rate (IR), discontinuation rate (DR), total exposure rate (TER), and acceptance rate. Singleton pregnancies exposed to psychotropic drugs before and during pregnancy increased from 118.4 to 136.5 (per 1000 singleton pregnancies) between decades. Changing trends were observed in decade 2, including a high increase in the TER of the antipsychotic class (3.3 to 6.8) and the antidepressant class (23.0 to 40.6). A marked increase for individual drugs was seen for sertraline (TER: 0.6 to 6.6 and PAT: 35.3% to 82.5%), citalopram (TER: 2.3 to 10.0 and PAT: 51.1% to 74.6%), and quetiapine (TER: 0.4 to 3.1 and PAT: 57.1% to 66.0%). Although the total exposure rates of five classes of psychotropics in singleton pregnancies increased in decade 2, only the antidepressant class had a higher acceptance rate during pregnancy. Certain SSRI antidepressants and atypical antipsychotics were more frequently prescribed in decade 2 than in decade 1, reflecting a shift toward safer treatment choices. Answer: Yes, psychotropic drugs are used in pregnancy. A prospective observational study found that 2.5% of women were receiving psychiatric drug treatment at the moment they knew they were pregnant, with 68.7% introducing substantial modifications in their treatment at that moment (PUBMED:17455181). The use of psychotropics during pregnancy is a complex decision that involves weighing the risks of potential adverse effects on the fetus against the risks of untreated maternal mental illness (PUBMED:16783040). Despite the concerns, the safety issue of psychotropic use during pregnancy and lactation is still unresolved, and there is a need for larger standardized studies to better understand the developmental outcomes of children exposed to these medications (PUBMED:22165524). Trends in prescribing psychotropic drugs before and during pregnancy have changed over the years, with an increase in the total exposure rates of certain classes of psychotropics in singleton pregnancies (PUBMED:36640183).
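The IADB.nl study above (PUBMED:36640183) summarizes prescribing as rates per 1000 singleton pregnancies (TER, CR, IR, DR). A minimal sketch of how such rates are computed from counts; the counts and the operational definitions below are assumptions for illustration, not figures or definitions taken from the study:

# Hypothetical counts for one drug class in one decade (not from the study).
total_pregnancies = 10_000
exposed_before_only = 60        # exposed before pregnancy, stopped by conception
exposed_during_only = 45        # first exposed during pregnancy
exposed_before_and_during = 80  # exposure continued into pregnancy

def per_1000(count, denom=total_pregnancies):
    # Express a count as a rate per 1000 singleton pregnancies.
    return 1000 * count / denom

ter = per_1000(exposed_during_only + exposed_before_and_during)  # total exposure rate
ir = per_1000(exposed_during_only)                               # initiation rate
cr = per_1000(exposed_before_and_during)                         # continuation rate
dr = per_1000(exposed_before_only)                               # discontinuation rate
print(f"TER={ter:.1f} IR={ir:.1f} CR={cr:.1f} DR={dr:.1f} per 1000 pregnancies")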
Instruction: Do 5-Year-Old Children Perform Dual-Task Coordination Better Than AD Patients? Abstracts: abstract_id: PUBMED:24232169 Do 5-Year-Old Children Perform Dual-Task Coordination Better Than AD Patients? Objective: A cross-sectional study was carried out to examine the pattern of changes in the capacity to coordinate attention between two simultaneously performed tasks in a group of 570 volunteers, from 5 to 17 years old. Method: The results revealed that the ability to coordinate attention increases with age, reaching adult values by age 15 years. Also, these results were compared with the performance in the same dual task of healthy elderly and Alzheimer disease (AD) patients found in a previous study. Results: The analysis indicated that AD patients showed a lower dual-tasking capacity than 5-year-old children, whereas the elderly presented a significantly higher ability than 5-year-old children and no significant differences with respect to young adults. Conclusion: These findings may suggest the presence of a working memory system's mechanism that enables the division of attention, which is strengthened by the maturation of prefrontal cortex, and impaired in AD. abstract_id: PUBMED:36773489 Motor and cognitive dual-task performance under low and high task complexity in children with and without developmental coordination disorder. Background: In everyday life, tasks are often performed simultaneously, which may be more difficult for children with developmental coordination disorder (DCD) than for their peers. Aims: To examine (1) the effects of task complexity and type of concurrent task on dual-task performance in children with and without DCD; and (2) whether the amount of effort that children put into the task performance differs between the groups. Methods: Participants were 64 children with and without DCD (aged 7-14 years). The dual-task paradigm consisted of a manual dexterity task of relatively low complexity (box and block test) or relatively high complexity (pegboard task), and a concurrent motor task (cycling task) or a concurrent cognitive task (word-listening task). To assess mental effort, children were asked how tired they felt before and after the experiment. Results: Dual-task interference was highest when the manual dexterity task of relatively high complexity was combined with the concurrent motor task. There were no group differences in dual-task interference, but children with DCD reported a larger increase in the level of tiredness after the experiment, indicative of greater mental effort. Conclusions: Depending on task demands, children with DCD are able to perform dual tasks at the same level as their peers, but performance may take children with DCD more mental effort. abstract_id: PUBMED:38087064 Cognitive control and meta-control in dual-task coordination. When two tasks are presented simultaneously or in close succession, such as in the overlapping task paradigm of the psychological refractory period, dual-task performance on those tasks is usually impaired compared with separate single-task performance. Numerous theories explain these emerging dual-task costs in terms of the existence of capacity limitations in the constituent component tasks. The current paper proposes active dual-task coordination processes that work on the scheduling of these capacity-limited processes. Further, there are recent findings that point to a meta-cognitive control level in addition to these active coordination processes.
This additional level's responsibility is to adjust the dual-task coordination of capacity-limited stages (i.e., coordination adjustment). I review evidence focusing on the existence of dual-task coordination processes and processes of coordination adjustment. The remainder of the paper elaborates on preliminary findings and points to the separability of these sets of processes, which is a key assumption of the framework of dual-task coordination adjustment. abstract_id: PUBMED:35523027 The effects of a visuo-motor and cognitive dual task on walking adaptability in children with and without Developmental Coordination Disorder. Background: Children with Developmental Coordination Disorder (DCD-C) have motor coordination deficits which lead to difficulties in sports and play activities that require adaptations of the walking pattern. Sports and play often involve performing dual tasks, which affects performance in DCD-C more than in typically developing children (TD-C). So far, testing the impact of dual tasking on walking adaptability in DCD-C has received little scientific attention. Research Question: We tested the hypothesis that 6- to 12-year-old DCD-C will show lower levels of walking adaptability than TD-C, and that due to problems with automatization this difference will increase when they are forced to divide their attention between tasks when a concurrent visuo-motor or cognitive task is added. Methods: Twenty-six DCD-C and sixty-nine TD-C were included in this cross-sectional study. They performed a challenging walking adaptability (WA) task on a treadmill as a single, a visuo-motor dual and a cognitive dual task at a pace of 3.5 km/h. Repeated measures ANCOVAs were performed with condition (single/dual task) as within-subjects factor, group (TD/DCD) as between-subjects factor, and age as covariate. Results: DCD-C performed worse on the WA task than TD-C. The group differences increased when a concurrent visuo-motor task was added, but not when adding a concurrent cognitive task. A significant effect of age was found, with younger children performing worse on all tasks. Significance: The results highlight the problems DCD-C have with walking adaptability and dual tasks, capacities that are essential for full participation in sports and play activities. Future research should investigate whether DCD-C may benefit from task-specific walking adaptability training. abstract_id: PUBMED:26528232 Better dual-task processing in simultaneous interpreters. Simultaneous interpreting (SI) is a highly complex activity and requires the performance and coordination of multiple, simultaneous tasks: analysis and understanding of the discourse in a first language, reformulating linguistic material, storing of intermediate processing steps, and language production in a second language, among others. It is, however, an open issue whether persons with experience in SI possess superior skills in coordination of multiple tasks and whether they are able to transfer these skills to lab-based dual-task situations. Within the present study, we set out to explore whether interpreting experience is associated with related higher-order executive functioning in the context of dual-task situations of the Psychological Refractory Period (PRP) type. In this PRP situation, we found faster reaction times in participants with experience in simultaneous interpretation in contrast to control participants without such experience.
Thus, simultaneous interpreters possess superior skills in coordination of multiple tasks in lab-based dual-task situations. abstract_id: PUBMED:27992773 Larger between-task crosstalk in children than in adults: Behavioral results from the backward crosstalk paradigm and a diffusion model analysis. In dual-task settings, one can find influences from Task 2 processing even on Task 1 performance, an effect referred to as the backward crosstalk effect (BCE). The size of the BCE has been taken as an index of how well Task 1 processing can be shielded against concurrently ongoing Task 2 processes. In the current study, we compared the size of the BCE between adults and a group of 5- and 6-year-old children. First, the BCE turned out to be larger in children than in adults. Second, both groups exhibited a comparable adjustment of behavior in response to just experienced conflict; in both groups, the BCE was smaller following conflict trials than following no-conflict trials. Third, a diffusion model analysis suggests that the source for the BCE is different in adults than in children. In particular, parallel Task 2 response activation does not appear to be the source of the BCE in children. Rather, non-decisional processes appear to be responsible for the BCE in this age group. Thus, this study shows that (a) 5- and 6-year-old children can perform dual tasks, but (b) they show slightly larger signs of between-task crosstalk, and (c) the exact reasons for this appear to be different from those responsible in adults. abstract_id: PUBMED:28659844 Transferability of Dual-Task Coordination Skills after Practice with Changing Component Tasks. Recent research has demonstrated that dual-task performance with two simultaneously presented tasks can be substantially improved as a result of practice. Among other mechanisms, theories of dual-task practice relate this improvement to the acquisition of task coordination skills. These skills are assumed (1) to result from dual-task practice, but not from single-task practice, and (2) to be independent of the specific stimulus and response mappings during the practice situation and, therefore, transferable to new dual-task situations. The present study is the first that provides an elaborated test of these assumptions in a context with well-controllable practice and transfer situations. To this end, we compared the effects of dual-task and single-task practice with a visual and an auditory sensory-motor component task on the dual-task performance in a subsequent transfer session. Importantly, stimulus and stimulus-response mapping conditions in the two component tasks changed repeatedly during practice sessions, which prevents automatized stimulus-response associations from being carried over from practice to transfer. Dual-task performance was found to be improved after practice with the dual tasks in contrast to the single-task practice. These findings are consistent with the assumption that coordination skills had been acquired, which can be transferred to other dual-task situations independently of the specific stimulus and response mapping conditions of the practiced component tasks. abstract_id: PUBMED:23384484 Influence of dual task constraints during walking for children. The effects of dual-task constraints on bimanual coordination and walking were examined in three age groups: young (4-6 years old), middle (7-9 years old), and older (10-13 years old).
Children were asked to first walk along a path (baseline condition) and then to walk along the same path while carrying a box steady and level (dual-task condition). The young group showed less bimanual coordination with less level and more variable normalized vertical box positioning (mean hand differences, young: 3.68%, middle: 2.42%, older: 1.61%), less correlated hand movements (mean correlation, young: r(8)=0.58, middle: r(8)=0.77, older: r(8)=0.79), and more elbow and shoulder joint excursion on the dominant side (all Ps < 0.05). In addition, the young group had shorter stride lengths and less normalized anterior/posterior ground reaction forces under the dual-task condition than the baseline condition (all Ps < 0.05). These findings indicate that 4- to 6-year-old children might still be developing their ability to perform activities requiring dual-task constraints that involve simultaneous use of the upper and lower extremities. abstract_id: PUBMED:26667940 Do frailty and cognitive impairment affect dual-task cost during walking in the oldest old institutionalized patients? The objective of this study was to investigate dual-task costs in several elderly populations, including robust oldest old, frail oldest old with MCI, frail oldest old without MCI, and frail elderly with dementia. Sixty-four elderly men and women were categorized into frail without MCI (age 93.4 ± 3.2 years, n = 20), frail with MCI (age 92.4 ± 4.2 years, n = 13), robust (age 88.2 ± 4.1 years, n = 10), and patients with dementia (age 88.1 ± 5.1 years, n = 21). Five-meter gait ability and timed-up-and-go (TUG) tests with single and dual-task performance were assessed in the groups. Dual-task cost in both the 5-m habitual gait velocity test and the TUG test was calculated by the time differences between single and dual-task performance. The robust group exhibited better 5-m gait and TUG test performances in the single and dual-task conditions compared with the other three groups (P < 0.001), and the frail and frail + MCI groups exhibited better performances than the dementia group (P < 0.001). No significant differences were observed between the frail and frail + MCI groups. However, all groups exhibited lower gait velocities in the verbal and arithmetic task conditions, but the dual-task costs of the groups were similar. Robust individuals exhibited superior single and dual-task walking performances compared with the other three groups, and the frail and frail + MCI individuals exhibited performances that were superior to those of the patients with dementia. However, the dual-task costs, i.e., the changes in gait performance when elderly participants switch from a single to a dual task, were similar among all four of the investigated groups. Therefore, these results demonstrated that the magnitude of the impairment in gait pattern is independent of frailty and cognitive impairment status.
Ten children with unilateral CP (age 7-11 years; MACS levels I-II) and ten age-matched typically developed children were asked to first stand still while holding a box level (standing condition), second, to walk along a path (baseline condition), and third to walk again while carrying a box steady and level (dual task condition) at a preferred speed. The results showed that children with unilateral CP decreased their walking speed, stride length, step width, and toe clearance from the floor under dual task constraints when compared to the baseline condition (all p's&lt;0.05), however, typically developing children did not change. Children with unilateral CP also had less level box carrying, larger vertical box movement, and larger elbow movements when compared to typically developing children under dual task condition (all p's&lt;0.05). Dual task constraints with a secondary motor task like the current walking with a box task seemed challenging for children with unilateral CP. Therefore, future treatments or assessments should consider using dual task constraints to manipulate the difficulty of tasks. Answer: Yes, 5-year-old children perform dual-task coordination better than Alzheimer's disease (AD) patients. A cross-sectional study examining the pattern of changes in the capacity to coordinate attention between two simultaneously performed tasks across different age groups found that the ability to coordinate attention increases with age, reaching adult values by age 15 years. When comparing these results with the performance of healthy elderly and AD patients in the same dual task, it was observed that AD patients showed a lower dual-tasking capacity than 5-year-old children. In contrast, the elderly presented a significantly higher ability than 5-year-old children and no significant differences with respect to young adults. This suggests that there is a working memory system's mechanism that enables the division of attention, which is strengthened by the maturation of the prefrontal cortex and impaired in AD (PUBMED:24232169).
Instruction: Tumor tissue identification in the pseudocapsule of pituitary adenoma: should the pseudocapsule be removed for total resection of pituitary adenoma? Abstracts: abstract_id: PUBMED:19240574 Tumor tissue identification in the pseudocapsule of pituitary adenoma: should the pseudocapsule be removed for total resection of pituitary adenoma? Objective: The microsurgical pseudocapsule can be found in the transition zone between an adenoma and the surrounding normal pituitary tissue. We investigated the precise histology of the pseudocapsule. Furthermore, we evaluated the remission rate, the changes in pituitary function, and the recurrence rate after intensive resection of the pseudocapsule. Methods: In 616 patients with pituitary adenomas (Hardy Types I-III) over a period of 14 years, we introduced intensive resection of the microsurgical pseudocapsule to achieve complete tumor removal. A combined pituitary function test and radiological study were performed on the patients before surgery, 1 year after surgery, and at subsequent 1.5-year intervals 2 to 13 years postoperatively. Results: Microsurgical pseudocapsules were identified in 343 (55.7%) of 616 patients, and the distinct microsurgical pseudocapsules were observed in 180 (52.5%) of these patients. In the remaining 163 patients, the microsurgical pseudocapsules were incompletely developed. Tumor cluster infiltration was present in the pseudocapsule in 71 (43.6%) of these patients. Aggressive resection of the microsurgical pseudocapsule was more often required in larger tumors than in smaller ones. The presence of a pseudocapsule was slightly more frequent in prolactin-secreting tumors (70.9%) than in growth hormone-secreting (55.0%) and adrenocorticotropic hormone-secreting (40.0%) tumors. In the 243 patients of the total resection group who underwent combined pituitary function tests more than 2 times after surgery, the surgical remission rate was 99.1% in clinically nonfunctional tumors, 88% in growth hormone-secreting, 70.6% in prolactin-secreting, and 100% in adrenocorticotropic hormone-secreting tumors. The surgical remission rate was 86.2% in the presence of a pseudocapsule and 94.3% in the absence of a pseudocapsule. Preoperative hypopituitarism improved in 140 patients (57.6%), persisted in 47 patients (19.3%), and was aggravated in 33 patients (13.6%). There was no statistical difference in improvement or deterioration of pituitary function according to the existence or absence of the pseudocapsule. The tumor recurrence rate was 0.8% in the total resection group and was 42.1% in the subtotal resection group. Conclusion: We have shown that tumor tissue is frequently present within the pseudocapsule, suggesting that any tumor remnant in the pseudocapsule could be a source of recurrence and an obstacle to achieving complete remission. These results indicate that intensive resection of the pseudocapsule could result in a higher remission rate without deteriorating pituitary function. abstract_id: PUBMED:23833641 Total resection of a solitary fibrous tumor of the sellar diaphragm: A case report. The present study reports the case of a patient with a vision impairment in the right eye. Head computed tomography revealed a round, hyperdense mass in the sellar and suprasellar regions. Pituitary gland magnetic resonance imaging (MRI) revealed isointensity on T1- and T2-weighted imaging. Tumor-enhanced scanning showed heterogeneous contrast enhancement. The initial diagnosis was that of meningioma or pituitary tumor. 
A total tumor resection was performed using a right pterional approach under general anesthesia. During surgery, the base of the tumor was located on the sellar diaphragm of the left anterior pituitary stalk. The pathological diagnosis was a solitary fibrous tumor (SFT). The patient had no post-operative diabetes insipidus or idiopathic pituitary hypofunction. The clinical experience, imaging information and pathological features of SFT in this case report may provide a reference for correct diagnosis and total resection of SFTs in the sella turcica. abstract_id: PUBMED:33945115 Impact of intraoperative magnetic resonance imaging on gross total resection, extent of resection, and residual tumor volume in pituitary surgery: systematic review and meta-analysis. Background: Residual tumor tissue after pituitary adenoma surgery is linked with additional morbidity and mortality. Intraoperative magnetic resonance imaging (ioMRI) could improve resection. We aim to assess the improvement in gross total resection (GTR), extent of resection (EOR), and residual tumor volume (RV) achieved using ioMRI. Methods: A systematic review was carried out on PubMed/MEDLINE to identify any studies reporting intra- and postoperative (1) GTR, (2) EOR, or (3) RV in patients who underwent resection of pituitary adenomas with ioMRI. Random effects meta-analysis of the rate of improvement after ioMRI for these three surgical outcomes was intended. Results: Among 34 included studies (2130 patients), the proportion of patients with conversion to GTR (∆GTR) after ioMRI was 0.19 (95% CI 0.15-0.23). Mean ∆EOR was +9.07% after ioMRI. Mean ∆RV was 0.784 cm3. For endoscopically treated patients, ∆GTR was 0.17 (95% CI 0.09-0.25), while microscopic ∆GTR was 0.19 (95% CI 0.15-0.23). Low-field ioMRI studies demonstrated a ∆GTR of 0.19 (95% CI 0.11-0.28), while high-field and ultra-high-field ioMRI demonstrated a ∆GTR of 0.19 (95% CI 0.15-0.24) and 0.20 (95% CI 0.13-0.28), respectively. Conclusions: Our meta-analysis demonstrates that around one fifth of patients undergoing pituitary adenoma resection convert from non-GTR to GTR after the use of ioMRI. EOR and RV can also be improved to a certain extent using ioMRI. Endoscopic versus microscopic technique or field strength does not appear to alter the impact of ioMRI. Statistical heterogeneity was high, indicating that the improvement in surgical results due to ioMRI varies considerably by center. abstract_id: PUBMED:28593109 Extent of Endoscopic Resection for Anterior Skull Base Tumors: An MRI-Based Volumetric Analysis. Objective To determine the volume of ventral skull base tumor removed following endoscopic endonasal (EEA) resection using MRI-based volumetric analysis and to evaluate the inter-rater reliability of such analysis. Design Retrospective case series. Setting Academic tertiary care hospital. Participants EEA patients November 2012 to August 2015. Main Outcome Measures Volumetric analysis of pre- and immediately postoperative MR imaging was performed independently by two investigators. The percentage of total tumor resected was evaluated according to resection goal and tumor type. Results A total of 39 patients underwent resection. Intraclass correlation coefficients between the raters were 0.9988 for preoperative and 0.9819 for postoperative images.
Tumors (and average percentage removed) included 17 nonsecreting pituitary adenomas (95.3%), 8 secreting pituitary adenomas (86.2%), 4 meningiomas (81.6%), 3 olfactory neuroblastomas (100%), 2 craniopharyngiomas (100%), 1 large B-cell lymphoma (90.5%), 1 germ cell neoplasm (48.3%), 1 benign fibrous connective tissue mass (93.4%), 1 epidermoid cyst (68.4%), and 1 chordoma (100%). For tumors treated with intent for gross total resection, 96.9 ± 4.8% was removed. Conclusion EEAs achieved tumor resection rates of ∼97% when total resection was attempted. The radiographic finding of residual tumor is of uncertain clinical significance. The volumetric analysis employed in this study demonstrated high inter-rater reliability and could facilitate further study. abstract_id: PUBMED:33309892 Volumetric Study of Nonfunctioning Pituitary Adenomas: Predictors of Gross Total Resection. Objective: Despite the efforts made to determine the achieved resection grade after pituitary adenoma surgery, there is a high level of disagreement among all the available classifications and measurement methods used. Our objective is to identify the factors that preoperatively could predict a gross total resection (GTR) of a clinically nonfunctioning pituitary adenoma through an endoscopic endonasal approach. Methods: Across 100 surgeries, we analyzed epidemiologic and clinical data, relevant radiologic data, extent of resection (EOR), and postoperative outcomes. The EOR was measured objectively through an accurate volumetric analysis. Results: The median presurgical volume was 8.58 cm3 (range, 0.5-58 cm3), the median maximum diameter was 27.3 mm (range, 7-67 mm), and the Knosp grade was 0 in 1 patient, 1 in 23%, 2 in 31%, 3 in 23% and 4 in 22% of patients. In the multivariate logistic regression analysis, we found 3 factors that significantly predicted the chances of a successful GTR: previous sellar surgery, Knosp grade, and tumor signal in the T2-weighted magnetic resonance imaging scan. Another 10 radiologic variables were analyzed and had no effect on the EOR. Conclusions: Knosp grade (P < 0.001; odds ratio [OR], 25.65; 95% confidence interval, 7.19-91.52) is the most predictive factor for performing a GTR of nonfunctioning pituitary adenoma. Previous pituitary surgery (P = 0.023; OR, 5.81) and an isointense T2-weighted signal (P = 0.034; OR, 3.75) also negatively influenced the chances of GTR. We highlight the influence of the T2-weighted signal on the chances of GTR. abstract_id: PUBMED:35711254 Microscopic Transsphenoidal Resection of Giant Pituitary Adenomas: Analysis of the Factors Limiting the Degree of Resection in 73 Cases. Purpose: To analyze the risk factors affecting the gross-total resection of giant pituitary adenomas using a transsphenoidal approach under a microscope to provide a reference basis for formulating an appropriate surgical strategy. Methods: The clinical data of patients who underwent microscopic transsphenoidal resection of giant pituitary adenomas in a single center from January 2011 to December 2020 were retrospectively analyzed. Based on magnetic resonance imaging and surgical records, the predictive factors affecting the gross-total resection of giant pituitary adenomas under microscopy were determined through univariate and multivariate analyses. Results: A total of 73 patients with giant pituitary adenomas underwent transsphenoidal microsurgery.
Gross-total resection was performed in 19 cases (26%), subtotal resection in 31 cases (42%), partial resection in 21 cases (29%), and the degree of resection was <50% in only two cases (3%). Binary logistic analysis found that it was more difficult to completely remove giant pituitary adenomas with a Knosp grade 3-4 [odds ratio (OR) = 0.214, 95% confidence interval (CI): 0.05-0.917; P = 0.038], a greater proportion of tumor suprasellar volume (odds ratio = 0.937, 95% confidence interval: 0.898-0.978; P = 0.003), and intraoperative evidence of invasion of the cavernous sinus (odds ratio = 0.187, 95% CI: 0.039-0.898; P = 0.036). Conclusion: It is difficult to completely remove a giant pituitary adenoma that invades the cavernous sinus or has a high degree of suprasellar extension using microscopic transsphenoidal surgery. The combined application of multiple surgical methods can help to improve the degree of resection during a single operation. abstract_id: PUBMED:38064693 Conditioned recurrence-free survival following gross-total resection of nonfunctioning pituitary adenoma: a single-surgeon, single-center retrospective study. Objective: The authors sought to determine the time to recurrence after achieving gross-total resection of nonfunctioning pituitary adenoma (NFPA) in adult patients. The authors also sought to determine the rate of recurrence after increasing years of recurrence-free imaging. Methods: The authors performed a retrospective chart review of all adult patients who underwent gross-total resection of NFPA between September 2004 and January 2018 by the senior surgeon. The primary outcome of the study was time to recurrence, defined by imaging and/or clinical criteria. Results: The median follow-up time of the 148 patients who met the inclusion criteria was 91 months; 12 of these patients (8.1%) had recurrence. The median time to recurrence was 80 months. The range of time for these recurrences was 36-156 months. The probabilities of remaining recurrence free at 180 months after gross-total resection of NFPA and 12, 36, 60, 84, or 120 months of recurrence-free imaging were 82%, 84%, 86%, 88%, and 93%, respectively. The year-over-year odds of a recurrence increased linearly by 1.07%. There was no difference in recurrence-free imaging when patients were stratified by Knosp grade or tumor subtype. None of the patients with recurrence underwent repeat resection. When identified, patients were managed either conservatively or with radiosurgery. Conclusions: Increased intervals of recurrence-free imaging were not associated with a decrease in risk of recurrence, which suggests that patients require life-long periodic imaging. If followed with periodic imaging, recurrence can be discovered before it becomes clinically symptomatic and successfully treated without repeat surgery. abstract_id: PUBMED:27903119 The learning curve in endoscopic endonasal resection of craniopharyngiomas. OBJECTIVE There is a paucity of literature regarding the learning curve associated with performing endoscopic endonasal cranial base surgery. The purpose of this study was to determine to what extent a learning curve might exist for endoscopic endonasal resection in cases of craniopharyngiomas. METHODS A retrospective review was performed for all endoscopic endonasal craniopharyngioma resections performed at Thomas Jefferson University from 2005 to 2015. To assess for a learning curve effect, patients were divided into an early cohort (2005-2009, n = 20) and a late cohort (2010-2015, n = 23).
Preoperative demographics, clinical presentation, imaging characteristics, extent of resection, complications, tumor control, and visual and endocrine outcomes were obtained. Categorical variables and continuous variables were compared using a 2-sided Fisher's exact test and t-test, respectively. RESULTS Only the index operation performed at the authors' institution was included. There were no statistically significant differences between early and late cohorts in terms of patient age, sex, presenting symptoms, history of surgical or radiation treatment, tumor size or consistency, hypothalamic involvement, or histological subtype. The rate of gross-total resection (GTR) increased over time from 20% to 65% (p = 0.005), and the rate of subtotal resection decreased over time from 40% to 13% (p = 0.078). Major neurological complications, including new hydrocephalus, meningitis, carotid artery injury, or stroke, occurred in 6 patients (15%) (8 complications) in the early cohort compared with only 1 (4%) in the late cohort (p = 0.037). CSF leak decreased from 40% to 4% (p = 0.007). Discharge to home increased from 64% to 95% (p = 0.024). Visual improvement was high in both cohorts (88% [early cohort] and 81% [late cohort]). Rates of postoperative panhypopituitarism and permanent diabetes insipidus increased from 50% to 91% (p = 0.005) and from 32% to 78% (p = 0.004), respectively, which correlated with a significant increase in intentional stalk sacrifice in the late cohort (from 0% to 70%, p < 0.001). CONCLUSIONS High rates of near- or total resection and visual improvement can be achieved using an endoscopic endonasal approach for craniopharyngiomas. However, the authors did find evidence for a learning curve. After 20 cases, they found a significant decrease in major neurological complications and significant increases in the rates of GTR and discharge to home. Although there was a large decrease in the rate of postoperative CSF leak over time, this was largely attributable to the inclusion of very early cases prior to the routine use of vascularized nasoseptal flaps. There was a significant increase in new panhypopituitarism and diabetes insipidus, which is attributable to increased rates of intentional stalk sacrifice. abstract_id: PUBMED:34855027 Characterization of tumor remnants in intraoperative MRI-assisted microscopic and endoscopic transsphenoidal resection of less invasive pituitary adenomas. Introduction: Intraoperative magnetic resonance imaging (iMRI) improves the intraoperative detection of adenoma remnants in transsphenoidal surgery. iMRI might be redundant in endoscopic pituitary surgery in non-invasive tumors (Knosp 0-2) due to a superior visualization of anatomical structures in the periphery of the sella turcica compared to the microscopic technique. We identified the anatomical location of tumor remnants in iMRI and evaluated risk factors for secondary resection after iMRI and hereby selected patients with pituitary adenomas who may benefit from iMRI-assisted resection. Methods: We conducted a retrospective monocenter study of patients who underwent iMRI-assisted transsphenoidal surgical resection of pituitary adenomas at our department between 2012 and 2020. A total number of 190 consecutive iMRI-assisted transsphenoidal surgeries of pituitary adenomas graded as Knosp 0-2 were selected for analysis. Exclusion criteria were missing iMRI availability or pathologies other than adenomas.
Of these 190 cases, 46.3% (N = 88) were treated with microscopic, 48.4% (N = 92) with endoscopic, and 5.3% (N = 10) with endoscopic-assisted technique. Volumetric measurement of preoperative, intraoperative, and postoperative tumor extension was performed. Demographic data, tumor characteristics, and MRI features were evaluated. Additionally, analysis of adenoma remnants identified by iMRI was performed. Results: An additional resection after iMRI was performed in 16.3% (N = 31). iMRI helped to reach gross total resection (GTR) in 83.9% (26/31) of these cases. False-positive resection was found in 1 patient (0.5%). Multivariable logistic analysis identified tumor volume (OR = 1.2, p = 0.007), recurrence (OR = 11.3, p = 0.002), and microscopic technique (OR = 2.8, p = 0.029) as independent risk factors for additional resection. Simultaneously, the endoscopic technique was significantly associated with GTR as evaluated by iMRI (OR = 2.8, p = 0.011) and postoperative MRI (OR = 5.8, p = 0.027). The detailed analysis of adenoma remnants on iMRI revealed that tumor located in a suprasellar diaphragm fold, tumor penetrating above the diaphragm, undetected invasion of the cavernous sinus and, in the case of microscopic resection, tumor located outside the line of sight were the main reasons for incomplete resections. Conclusion: Tumor volume, recurrence, and microscopic technique were identified as independent predictors for additional resection in patients with Knosp 0-2 adenomas. iMRI might increase the extent of resection (EOR) safely even after the endoscopic visualization of the sella, with very low risk for false-positive findings. Remnants of tumors hidden within the diaphragmic folds, intrathecally, or behind the infiltrated wall of the cavernous sinus not recognized on preoperative MRI were the most common findings in iMRI. abstract_id: PUBMED:27903116 Endoscopic endonasal versus open transcranial resection of craniopharyngiomas: a case-matched single-institution analysis. OBJECTIVE The authors compared clinical and radiological outcomes after resection of midline craniopharyngiomas via an endoscopic endonasal approach (EEA) versus an open transcranial approach (TCA) at a single institution in a series in which the tumors were selected to be equally amenable to gross-total resection (GTR) with either approach. METHODS A single-institution retrospective review of previously untreated adult midline craniopharyngiomas was performed. Lesions were evaluated by 4 neurosurgeons blinded to the actual approach used to identify cases that were equally amenable to GTR using either an EEA or TCA. Radiological and clinical outcome data were assessed. RESULTS Twenty-six cases amenable to either approach were identified, 21 EEA and 5 TCA. Cases involving tumors that were resected via a TCA had a trend toward larger diameter (p = 0.10) but were otherwise equivalent in preoperative clinical and radiological characteristics. GTR was achieved in a greater proportion of cases removed with an EEA than a TCA (90% vs 40%, respectively; p = 0.009). Endoscopic resection was associated with superior visual restoration (63% vs 0%; p < 0.05), a decreased incidence of recurrence (p < 0.001), lower increase in FLAIR signal postoperatively (-0.16 ± 4.6 cm3 vs 14.4 ± 14.0 cm3; p < 0.001), and fewer complications (20% vs 80% of patients; p < 0.001). Significantly more TCA patients suffered postoperative cognitive loss (80% vs 0; p < 0.0001).
CONCLUSIONS An EEA is a safe and effective approach to suprasellar craniopharyngiomas amenable to GTR. For this select group of cases, the EEA may provide higher rates of GTR and visual improvement with fewer complications compared with a TCA. Answer: The investigation into the histology of the pseudocapsule in pituitary adenomas has shown that tumor tissue is frequently present within the pseudocapsule. In a study involving 616 patients with pituitary adenomas, microsurgical pseudocapsules were identified in 55.7% of the cases, and tumor cluster infiltration was present in the pseudocapsule in 43.6% of these patients (PUBMED:19240574). The presence of tumor tissue within the pseudocapsule suggests that any tumor remnant in the pseudocapsule could be a source of recurrence and an obstacle to achieving complete remission. The study found that intensive resection of the microsurgical pseudocapsule resulted in a higher surgical remission rate without deteriorating pituitary function. The surgical remission rate was higher in the absence of a pseudocapsule (94.3%) compared to its presence (86.2%). Additionally, the tumor recurrence rate was significantly lower in the total resection group (0.8%) compared to the subtotal resection group (42.1%) (PUBMED:19240574). Based on these findings, it can be concluded that the removal of the pseudocapsule should be considered for total resection of pituitary adenoma to achieve a higher remission rate and reduce the risk of recurrence. The study indicates that intensive resection of the pseudocapsule could be beneficial in the management of pituitary adenomas.
Instruction: Is there a correlation between the SIGAPS score and publishing articles in French? Abstracts: abstract_id: PUBMED:22560683 Is there a correlation between the SIGAPS score and publishing articles in French? Introduction: The SIGAPS score determines to a great extent each university hospital's funding for research. This score is considered to reflect scientific production. Due to its financial impact, it may modify authors' attitudes, leading them to publish their articles in French. Our objective was to evaluate the association between the SIGAPS score and publications in French. Methods: Based on Rouen University Hospital's (RUH) SIGAPS data from 2007 to 2010, we used Spearman's correlation to establish an association between the SIGAPS score of each author and the fact that they published in French. Results: A positive correlation was found between the SIGAPS score and an author's number of publications in French (r(s)=0.51; 95% CI=[0.44; 0.58]). The relationship between the SIGAPS score and the author's publication rate in French was negative and even stronger (r(s)=-0.87; 95% CI=[-1.0; -0.68]; author's deciles). Discussion: The relationship between the SIGAPS score and the number of publications in French is not surprising, as the SIGAPS score is based on the number of publications. The finding on the publication rate in French was even more interesting: our results showed that the more productive an author was, the less they published their results in French. Publications in French did not appear to be the best way to improve an individual SIGAPS score. There is high heterogeneity among very prolific authors, with one third having a publication rate in French well above the average and one third having a publication rate in French well below the average. Conclusion: There was a high negative correlation between the SIGAPS score and the publication rate in French for RUH's researchers (author's deciles). abstract_id: PUBMED:30738601 Bibliometric analysis of neurosurgery publications in France Bibliometrics are methods used to quantitatively analyze the scientific literature. The application of bibliometrics in neurosurgery is emerging and has not yet been studied in the French neurosurgical community. In France, the most used statistical method is named SIGAPS; the scores are based on the position of the author and the impact factor of the journal. The SIGAPS score, which can be calculated for a research team or an individual, impacts the public financial support of university hospitals in France. We analyzed retrospectively the publication productivity and the SIGAPS score of university neurosurgery departments in France for the period 2009-2016. This was supplemented by a list of current academic research protocols granted by the Minister of Health. The SIGAPS score is higher in large university hospitals even though all teams are dedicated to publishing more in the traditional neurosurgery journals (Neurosurgery, Journal of Neurosurgery, World Neurosurgery, J Neuro-oncology, Neuro oncology, Eur Spine J). Of note, the national Neurochirurgie journal (publishing in French and English) is progressively improving its impact factor (0.8). The most common themes are functional neurosurgery, neuro-oncology and spine. There are 14 on-going national multicenter research programs, 4 of them focused on the health economics of treatment innovations. These data provide the most accurate available snapshot of the scholarly efforts of all the French university neurosurgery departments.
It is based on a national statistics method (SIGAPS). For future evaluation, it should be combined with internationally available parameters such as the h-index and m-quotient. Publication productivity has broad implications for the success of both academic departments and individual faculty members and their financial support. abstract_id: PUBMED:25209819 SIGAPS: a software package for the evaluation of medical publications The "système d'interrogation, de gestion et d'analyse des publications scientifiques" (System for Identification, Management and Analysis of Scientific Publications), or SIGAPS, is an innovative tool of French design that enables the identification and analysis of bibliographic references produced by a given researcher or unit using the Medline database (PubMed). This evaluation takes into account the author's rank of signature and the impact factor of the journal of publication within the discipline in question. Its limitations are those of the impact factor. Analyses produced by SIGAPS enable financial assessment to be made by hospitals. abstract_id: PUBMED:26796477 Measure of the Alzheimer plan's impact on university hospitals' article production through SIGAPS data Introduction: A third Alzheimer and related diseases plan (AP), including a detailed research component, was implemented from 2008 to 2013. The aim of this study was to assess the AP's impact on the production of original articles concerning Alzheimer's and related diseases (AD) before (2004-2008) and after its implementation (2009-2013). Methods: Number and impact factor (IF) based ranking of original articles coming from French University Hospitals (CHU) have been assessed using the data provided by the SIGAPS bibliometric software used so far for calculating allocation of the governmental medical research funding (MERRI). Scientific production on AD has been compared to four other control pathologies: 2 neurologic (stroke and Parkinson disease) and 2 non-neurologic (diabetes and AIDS). Publication trends of the 27 CHU and of the 7 Interregional groupings of clinical research and innovation (GIRCI) have been analyzed using specialization indices (SI). Results: The number of AD articles increased from 1277 to 1972 (+54.4%) as well as the SIGAPS score from 18,038 to 29,309 (+62.5%). Meanwhile, the number of articles rated A and B (published in the 25% best journals of the specialty) increased from 430 to 846 (+114.1%). The figures for these indicators evolved similarly for AD and stroke while quite differently for the 3 other diseases: plateauing at a high level for Parkinson disease and AIDS and modestly increasing for diabetes. SI was >1 for the 3 neurologic diseases, including AD in the North-Western and South-West GIRCI. The SI of the CHU on AD did not vary before and after AP implementation, with 4 CHU having both a production and an SI at a high level: Toulouse, Bordeaux, Montpellier and Lille. Conclusion: The number and "quality" of original articles on AD produced by the CHU increased after the AP initiation in 2008. Whether or not this was directly due to the financial support provided by the AP remains to be determined. The scientific production on AD was unequally distributed throughout the country, with the CHU already strongly involved in AD research before the AP reinforcing their commitment afterwards. This work illustrates the potential implication of SIGAPS as a strategic tool for research policy.
abstract_id: PUBMED:23280093 Objective evaluation and comparison of the scientific publication from the departments of the Assistance publique-Hôpitaux de Paris: analysis of the SIGAPS score Purpose: To analyze the results of the bibliometric system (SIGAPS score) of scientific publications in the Assistance publique-Hôpitaux de Paris (AP-HP) and to compare the scientific production among the various medical and surgical specialties of the academic hospitals of Paris. Methods: All the publications imported from Pubmed between 2006 and 2008 were included. The following data were taken into account and analysed: the hospital department of origin, the number of articles published, the number of full-time physicians, the SIGAPS score. Results: Thirty-eight thousand, seven hundred and nine publications were included. The 747 departments comprised 5719 full-time practitioners (1895 Professors [33.1%], 2772 Assistant Professors [48.4%] and 1052 fellows [18.4%]). The average number of full-time practitioners per department was 7.7±6.7 (range 1-69). The average total number of articles published in a department was 51.8±49.4 (range 1-453). The average SIGAPS score was higher in medicine than in surgery (621.2±670.1 vs. 401±382.2; P=0.01) but not the average number of articles per practitioner (8.1±8.3 vs. 6.6±6.2; P=0.0797). The mean number of publications per full-time practitioner was 7.9±7.8 (1-45), or an average of 2.7±2.6 for each full-time practitioner each year. Conclusion: Academic hospitals in Paris have a reasonable scientific output, with a mean of 2.7 articles per full-time practitioner per year. No major differences between medical and surgical disciplines were observed. abstract_id: PUBMED:26043639 SIGAPS, a tool for the analysis of scientific publications The System for the Identification, Management and Analysis of Scientific Publications (SIGAPS) is essential for the funding of teaching hospitals on the basis of scientific publications. It is based on the analysis of articles indexed in Medline and is calculated by taking into account the place of the author and the ranking of the journal according to the disciplinary field. It also offers tools for the bibliometric analysis of scientific production. abstract_id: PUBMED:31635995 Publication output of French orthopedic and trauma surgeons: Quantitative and qualitative bibliometric analysis of their scientific production in orthopedics and other medical fields. Introduction: Bibliometric analysis is being used more and more in orthopedics and traumatology. However, the quantity and quality of publications authored by French orthopedic and trauma surgeons outside their discipline have never been analyzed, nor has the change in the quality of orthopedics publications. This led us to carry out a bibliometric analysis to answer the following questions: 1) How has the quantity of scientific production by French orthopedists changed over the past 10 years? 2) How has the quality of the overall scientific production by French orthopedists changed over the past 10 years? Hypothesis: From 2008 to 2017, the production of French orthopedists has increased in quantity and quality in orthopedics and other medical fields. Material And Methods: The analysis was performed by cross-referencing the list of SOFCOT (French Society for Orthopaedic Surgery and Traumatology) members with the French SIGAPS database and the InCites platform.
Out of 3979 SOFCOT members, 972 (24%) had authored publications during this period and were included in the analysis. Several indicators were analyzed: number of publications; SIGAPS score (production quality) for the various Web of Science (WoS) categories; number and percentage of publications in the top 1% and top 10% (most highly cited articles worldwide). Results: The "Orthopedics" discipline was still the most prevalent with 68% of all publications identified. The "Surgery" discipline was stable, and the share of publications had increased in five other disciplines: Sports Sciences, Clinical Neurology, Emergency Medicine, Engineering-Biomedical and Material Science-Biomaterials. Of the 727 journals indexed in PubMed in which at least one author is a French orthopedic or trauma surgeon, 79 journals (11%) had at least 10 articles authored by French orthopedists, making up 4680/6056 published articles (77%) during the study period. The highest SIGAPS score was in the "Orthopedics" discipline followed by "Surgery", and then by "Sport Sciences" with a large number of publications in the SIGAPS B category. Publications in "Orthopedics" category A and B journals increased 14% during this period. When all disciplines are pooled, the share of publications in SIGAPS A, B and C categories increased by 10% from 2008 to 2017. The largest increase in publications for categories A and B and the top 1% and 10% was in "Sports Sciences". Discussion: Over the past 10 years, French orthopedists have continued to increase their scientific production in the orthopedics field and in related fields such as Sport Sciences, Clinical Neurology and Biomedical Engineering. The quality of the scientific production of French orthopedists in their discipline and outside of it has greatly improved, as evidenced by the shift toward SIGAPS A and B journals. Level Of Evidence: IV, retrospective study without control group. abstract_id: PUBMED:25454750 Limiting a Medline/PubMed query to the "best" articles using the JCR relative impact factor Background: Medline/PubMed is the most frequently used medical bibliographic research database. The aim of this study was to propose a new generic method to limit any Medline/PubMed query based on the relative impact factor and the A & B categories of the SIGAPS score. Material And Methods: The entire PubMed corpus was used for the feasibility study, then ten frequent diseases in terms of PubMed indexing and the citations of four Nobel prize winners. The relative impact factor (RIF) was calculated by medical specialty defined in Journal Citation Reports. The two queries, which included all the journals in category A (or A OR B), were added to any Medline/PubMed query as a central point of the feasibility study. Results: Limitation using the SIGAPS category A was larger than when using the Core Clinical Journals (CCJ): 15.65% of the PubMed corpus vs 8.64% for CCJ. The response time of this limit applied to the entire PubMed corpus was less than two seconds. For five diseases out of ten, limiting the citations with the RIF was more effective than with the CCJ. For the four Nobel prize winners, limiting the citations with the RIF was more effective than with the CCJ. Conclusion: The feasibility study to apply a new filter based on the relative impact factor on any Medline/PubMed query was positive. abstract_id: PUBMED:25442052 Scoring of medical publications with SIGAPS software: Application to orthopedics.
SIGAPS is a bibliometric software tool developed in France to identify and analyze Medline-indexed publications that are produced by a researcher or research group. This measurement takes into account the author's ranking on the paper along with the journal's prestige according to its impact factor within the research field. However, use of this impact factor is the primary limitation of SIGAPS. SIGAPS analysis results are used to assign a financial value to hospital facilities. The impact of the journal Revue de Chirurgie Orthopédique and its successor, Orthopaedics & Traumatology: Surgery & Research, was compared using the Medline-based ISI (SIGAPS) and SCOPUS-based SCImago journal rankings. abstract_id: PUBMED:22364630 Analysis of the bibliometrics score of surgical department from the academic hospitals of Paris: what is the rank of urology? Objectives: Analyze the results of the bibliometric system and analysis of scientific publications (SIGAPS) in the Assistance publique-Hôpitaux de Paris (AP-HP) and compare the scientific production among the various surgical disciplines of the academic hospitals of Paris and define the place of urology. Methods: The publications from 115 surgical departments between 2006 and 2008 were included. Only surgical departments were considered in the current study. The following data were taken into account: the hospital department of origin, the number of articles published, the number of first places and last places, the number of full-time units, the SIGAPS score. Statistical analysis focused on the quality and on the quantity of published articles per surgeon and per department. There were eight academic departments of urology identified within the AP-HP. Results: The database contained information for 115 surgical departments. The mean number of articles published by department was 42.89±27.34 (13.2 to 110.75). The mean number of publications per full-time surgeon was 6.7±2.59 (3.77 to 12.84), or a mean of 2.25±0.86 per full-time surgeon per year. The median SIGAPS score of surgery was 304 with a wide interval (122 to 903.5). Urology was the specialty with the highest median score compared to other surgical specialties. The department that published the most was center 1, while center 6 published the most in A/B-ranked journals. Conclusion: Urology was by far the leader in terms of scientific publications in the AP-HP when compared to other surgical disciplines. The discipline is organized efficiently to balance clinical work and research, indicating a certain dynamism of the teams that invest in it to fulfill the missions assigned to them within the University Hospital and, in part, the autonomy of the universities. Answer: Yes, there is a correlation between the SIGAPS score and publishing articles in French. A study based on data from Rouen University Hospital's SIGAPS from 2007 to 2010 found a positive correlation between SIGAPS score and the number of publications in French (r(s)=0.51). However, the relationship between the SIGAPS score and the author's publication rate in French was negative and even stronger (r(s)=-0.87), indicating that the more productive an author was, the less they published their results in French. This suggests that publications in French did not appear to be the best way to improve individual SIGAPS scores (PUBMED:22560683).
Instruction: Do anxiety symptoms predict major depressive disorder in midlife women? Abstracts: abstract_id: PUBMED:24467997 Do anxiety symptoms predict major depressive disorder in midlife women? The Study of Women's Health Across the Nation (SWAN) Mental Health Study (MHS). Background: In women, anxiety symptoms are common and increase during midlife, but little is known about whether these symptoms predict onsets of major depressive disorder (MDD) episodes. We examined whether anxiety symptoms are associated with subsequent episodes of MDD in midlife African-American and Caucasian women, and whether they confer a different risk for first versus recurrent MDD episodes. Method: A longitudinal analysis was conducted using 12 years of data from the Study of Women's Health Across the Nation (SWAN) Mental Health Study (MHS). The baseline sample comprised 425 Caucasian (n=278) and African American (n=147) community-dwelling women, aged 46.1±2.5 years. Anxiety symptoms measured annually using a self-report questionnaire were examined in relation to MDD episodes in the subsequent year, assessed with the SCID. Multivariable models were estimated with random effects logistic regression. Results: Higher anxiety symptoms scores were associated with a significantly higher adjusted odds of developing an episode of MDD at the subsequent annual visit [odds ratio (OR) 1.47, p=0.01], specifically for a recurrent episode (OR 1.49, p=0.03) but non-significant for a first episode (OR 1.32, p=0.27). There were no significant racial effects in the association between anxiety symptoms and subsequent MDD episodes. Conclusions: Anxiety symptoms often precede MDD and may increase the vulnerability of midlife women to depressive episodes, particularly recurrences. Women with anxiety symptoms should be monitored clinically during the ensuing year for the development of an MDD episode. abstract_id: PUBMED:24952069 Does family history of depression predict major depression in midlife women? Study of Women's Health Across the Nation Mental Health Study (SWAN MHS). This study aims to determine whether family history of depression predicts major depression in midlife women independent of psychosocial and health profiles at midlife. Participants were 303 African American and Caucasian women (42-52 years at baseline) recruited into the Study of Women's Health Across the Nation (SWAN) and the Women's Mental Health Study (MHS) in Pittsburgh. Major depression was assessed annually with the Structured Clinical Interview for DSM-IV. Family mental health history was collected at the ninth or tenth follow-up. Multivariable logistic regression was used to determine whether family history of depression predicted major depression in midlife, adjusting for covariates. The odds of experiencing major depression during the study were three times greater for those with a family history than for those without a family history (OR = 3.22, 95% CI = 1.95-5.31). Family history predicted depression (OR = 2.67, 95% CI = 1.50-4.78) after adjusting for lifetime history of depression, age, trait anxiety, chronic medical conditions, and stressful life events. In analyses stratified by lifetime history of depression, family history significantly predicted depression only among women with a lifetime history of depression. Family history of depression predicts major depression in midlife women generally, but particularly in those with a lifetime history of depression prior to midlife. 
abstract_id: PUBMED:33573787 The Midlife Transition, Depression, and Its Clinical Management. The World Health Organization estimates that more than 260 million people are affected by depression worldwide, a condition that imposes a significant burden on individuals, their families, and society. Women seem to be disproportionately more affected by depression than men, and it is now clear that some women may experience windows of vulnerability for depression at certain reproductive stages across their life span, including the midlife transition. For some, age, the presence of cardiovascular or metabolic problems, and the emergence of significant, bothersome vasomotor symptoms and sleep problems may result in a compounded, deleterious impact on well-being and overall functioning. abstract_id: PUBMED:29290217 Variation in symptoms of depression and anxiety in midlife women by menopausal status. Objectives: To examine the association between menopausal status and the risk of symptoms of depression and anxiety in a community-based sample of Australian midlife women. Study Design: Female participants (mean age 50.6±1.5 years) who were premenopausal (n=237), perimenopausal (n=249) or naturally postmenopausal (n=225) were drawn from the Personality and Total Health (PATH) Through Life Project, a longitudinal study. Main Outcome Measures: Symptoms of depression and anxiety were measured using the Goldberg Depression Scale and Goldberg Anxiety Scale. Generalised linear regression models with a negative binomial log link were used. Results: Relative to premenopause and after adjusting for all relevant covariates, being perimenopausal was associated with increased risk of greater symptoms of depression (incidence rate ratio [IRR]=1.29, p=0.001), while being postmenopausal was associated with increased risk of greater symptoms of anxiety (IRR=1.15, p=0.041). Being perimenopausal or postmenopausal was associated with an increased risk of greater symptoms of depression (IRR=1.35, p=0.008; IRR=1.31, p=0.029) and anxiety (IRR=1.22, p=0.030; IRR=1.32, p=0.006) in women without a history of probable major depressive disorder or generalised anxiety disorder. Risk of symptoms did not differ with menopausal status in women with this history. Conclusions: Menopausal status is associated with the risk of symptoms of depression and anxiety. There is a greater likelihood of increased symptoms of depression during perimenopause and symptoms of anxiety during postmenopause. In women without a history of depression or anxiety, the perimenopause and postmenopausal stages are associated with increased risk of greater symptoms of anxiety and depression relative to premenopause. abstract_id: PUBMED:25417760 Risk factors for major depression during midlife among a community sample of women with and without prior major depression: are they the same or different? Background: Women's vulnerability for a first lifetime-onset of major depressive disorder (MDD) during midlife is substantial. It is unclear whether risk factors differ for first lifetime-onset and recurrent MDD. Identifying these risk factors can provide more focused depression screening and earlier intervention. This study aims to evaluate whether lifetime psychiatric and health histories, personality traits, menopausal status and factors that vary over time, e.g. symptoms, are independent risk factors for first-onset or recurrent MDD across 13 annual follow-ups.
Method: Four hundred and forty-three women, aged 42-52 years, enrolled in the Study of Women's Health Across the Nation in Pittsburgh and participated in the Mental Health Study. Psychiatric interviews obtained information on lifetime psychiatric disorders at baseline and on occurrences of MDD episodes annually. Psychosocial and health-related data were collected annually. Cox multivariable analyses were conducted separately for women with and without a MDD history at baseline. Results: Women without lifetime MDD at baseline had a lower risk of developing MDD during midlife than those with a prior MDD history (28% v. 59%) and their risk profiles differed. Health conditions prior to baseline and, during follow-ups, perception of functioning (ps < 0.05) and vasomotor symptoms (VMS) (p = 0.08) were risk factors for first lifetime-onset MDD. Being peri- and post-menopausal, psychological symptoms and a prior anxiety disorder were predominant risk factors for MDD recurrence. Conclusions: The menopausal transition warrants attention as a period of vulnerability to MDD recurrence, while health factors and VMS should be considered important risk factors for first lifetime-onset of MDD during midlife. abstract_id: PUBMED:28489293 The role of family history of depression and the menopausal transition in the development of major depression in midlife women: Study of women's health across the nation mental health study (SWAN MHS). Background: This study evaluated whether family history of depression predicts major depression in midlife women above and beyond static risk factors (such as personal history of depression prior to midlife) and risks that may change dynamically across midlife (such as menopausal, psychosocial, and health profiles). Methods: Participants were 303 African American and Caucasian women (42-52 years at baseline) recruited into the Study of Women's Health across the Nation (SWAN) Mental Health Study (MHS) in Pittsburgh. Major depression was assessed annually with Structured Clinical Interviews for DSM-IV. Family mental health history was collected at the ninth or tenth annual follow-up. Random effects logistic regression was used to assess the relationship between family history of depression and midlife depression, controlling for baseline sociodemographic characteristics and time-varying risk factors. Results: Family history of depression was associated with midlife depression after adjusting for participant's history of major depression prior to midlife, trait anxiety and baseline age, and time-varying menopausal status, body mass index, very upsetting life events, and chronic difficulties (OR = 2.24, 95% CI = 1.17-4.29, P = .02). Higher odds of major depression were found when women were late perimenopausal or postmenopausal relative to when they were premenopausal or early perimenopausal (OR = 3.01, 95% CI = 1.76-5.15, P < .0001). However, menopausal status was only associated with major depression among women without a family history. Conclusions: Family history of depression predicts major depression in midlife women independent of the menopausal transition and other time-varying covariates. Notably, the menopausal transition was associated with increased risk only among women without a family history of depression. abstract_id: PUBMED:18377672 Predictors of first lifetime episodes of major depression in midlife women. Background: Little is known about factors that predict first lifetime episodes of major depression in middle-aged women.
It is not known whether health-related factors and life stress pose more or less of a risk to the onset of clinical depression than does the menopausal transition. Method: The Structured Clinical Interview for DSM-IV Axis I Disorders (SCID) was used to assess diagnoses of lifetime, annual and current major depression in a community-based sample of premenopausal or early perimenopausal African American and White women. Menstrual cycle characteristics, psychosocial and health-related factors, and blood samples for assay of reproductive hormones were obtained annually. Two hundred and sixty-six women without a history of major depression at baseline constituted the cohort for the current analyses. Results: Over 7 years of follow-up, 42 (15.8%) women met criteria for a diagnosis of major depression. Frequent vasomotor symptoms (VMS; hot flashes and/or night sweats) (HR 2.14, p=0.03) were a significant predictor of major depression in univariate analyses. After simultaneous adjustment for multiple predictors in Cox proportional hazards analyses, frequent VMS were no longer significant; lifetime history of an anxiety disorder (HR 2.20, p=0.02) and role limitations due to physical health (HR 1.88, p=0.07) at baseline and a very stressful life event (HR 2.25, p=0.04) prior to depression onset predicted a first episode of major depression. Conclusions: Both earlier (e.g. history of anxiety disorders) and more proximal factors (e.g. life stress) may be more important than VMS in contributing to a first episode of major depression during midlife. abstract_id: PUBMED:15483368 Lifetime depression history and sexual function in women at midlife. We examined the association between lifetime depression history and sexual function in a community-based sample of midlife women. Specifically, 914 women aged 42-52 who were participants in the Study of Women's Health Across the Nation completed a self-report assessment of their sexual behaviors, sexual desire, sexual arousal, and sexual satisfaction over the past 6 months. On the basis of the Structured Clinical Interview for the DSM-IV, participants were categorized into 1 of 3 lifetime major depressive disorder (MDD) history groups: no MDD history, single episode MDD, and recurrent MDD. In line with previous reports, women with a history of recurrent MDD reported experiencing less frequent sexual arousal, less physical pleasure, and less emotional satisfaction within their current sexual relationships. Although the groups did not differ in their reported frequency of sexual desire or partnered sexual behaviors, lifetime depression history was associated with increased rates of self-stimulation (masturbation). Associations between lifetime depression history and lower levels of physical pleasure within partnered sexual relationships and higher rates of masturbation remained significant following control for current depressive symptoms, study site, marital status, psychotropic medication use, and lifetime history of anxiety or substance abuse/dependence disorder. Future research is needed to characterize the temporal and etiologic relationships among lifetime depressive disorder, current mood state, and sexual function in women across the lifespan. abstract_id: PUBMED:25012412 Profile of depressive symptoms in women in the perinatal and outside the perinatal period: similar or not?
Purposes: To analyze which Beck Depression Inventory-II (BDI-II) and Postpartum Depression Screening Scale (PDSS) total and dimensional scores, as well as symptomatic answers proportions, significantly differ between women in the perinatal period (pregnant/postpartum) without major depression, with major depression, and women outside the perinatal period. Methods: 572 pregnant women in the third trimester completed the Beck Depression Inventory-II and the Postpartum Depression Screening Scale and were assessed with the Diagnostic Interview for Genetic Studies. 417 of these were also assessed (with the same instruments) at three months postpartum. Ninety women who were not pregnant or had not had a child in the last year (mean age=29.42±7.159 years) also filled in the questionnaires. Results: Non-depressed pregnant women showed lower scores than depressed pregnant women and higher scores than women outside the perinatal period in the BDI-II total score and in its Somatic-Anxiety dimension. Non-depressed postpartum women showed significantly higher scores than women outside the perinatal period only at Sleep/Eating Disturbances. Compared to women outside the perinatal period, pregnant women without depression presented higher scores only in the somatic items. Women with vs. without depression in the postpartum period did not significantly differ, and both presented higher scores than women outside the perinatal period in the proportions of loss of energy and sleep changes. Limitations: Women outside the perinatal period were not diagnosed for the presence of a depressive disorder, but their BDI-II mean score was similar to the figures reported worldwide for women of childbearing age. Conclusion: In the perinatal period, most particularly during pregnancy, women experience significant somatic changes even if not clinically depressed. Cognitive-affective symptoms are more useful when assessing the presence of perinatal depression. abstract_id: PUBMED:25881281 Multiple risk factors predict recurrence of major depressive disorder in women. Background: It is difficult to predict recurrence of depressive episodes in patients with major depression (MD): evidence for many risk factors is inconsistent and general prediction algorithms are lacking. The aim of this study was to develop a prediction model for recurrence of depressive episodes in women using improved methodology. Methods: We used prospective data from a general population sample of female twins with a last-year MD episode (n=194). A rich set of baseline predictors was analyzed with Cox proportional hazards regression subject to elastic net regularization to find a model predicting recurrence of depressive episodes. Prediction accuracy of the model was assessed in an independent test sample (n=133), which was limited by the unavailability of a number of key predictors. Results: A wide variety of risk factors predicted recurrence of depressive episodes in women: depressive and anxiety symptoms during the index episode, the level of symptoms at the moment of interview, psychiatric and family history, early and recent adverse life events, being unmarried, and problems with friends and finances. Kaplan-Meier estimated survival curves showed that the model differentiated between patients at higher and lower risk for recurrence; estimated areas under the curve were in the range of 0.61-0.79.
Limitations: Despite our rich set of predictors, certain potentially relevant variables were not available, such as biological measures, chronic somatic diseases, and treatment status. Conclusions: Recurrence of episodes of MD in women is highly multifactorial. Future studies should take this into account for the development of clinically useful prediction algorithms. Answer: Yes, anxiety symptoms do predict major depressive disorder (MDD) in midlife women. A longitudinal analysis using 12 years of data from the Study of Women's Health Across the Nation (SWAN) Mental Health Study (MHS) found that higher anxiety symptoms scores were significantly associated with a higher adjusted odds of developing an episode of MDD at the subsequent annual visit. This association was specifically significant for a recurrent episode of MDD but was non-significant for a first episode. The study concluded that anxiety symptoms often precede MDD and may increase the vulnerability of midlife women to depressive episodes, particularly recurrences. Therefore, it is recommended that women with anxiety symptoms should be monitored clinically for the development of an MDD episode in the ensuing year (PUBMED:24467997).
Instruction: Primary cholesteatoma of the external auditory canal: is the epithelial migration defective? Abstracts: abstract_id: PUBMED:37206805 Coexistence of Primary External Auditory Canal Cholesteatoma and Tympanomastoid Paraganglioma: A Diagnostic Dilemma. Synchronous presentation of cholesteatoma and tympanomastoid paraganglioma is extremely rare. Due to the overlapping clinical features, the clinical diagnosis of coexistence is difficult. Only two cases of tympanomastoid paraganglioma coexisting with middle ear cholesteatoma have been reported in the literature, but simultaneous presentation of primary external auditory canal cholesteatoma and tympanomastoid paraganglioma has not been reported to date. Coexistence of external auditory canal cholesteatoma and paraganglioma was an incidental diagnosis in the current case. The advancement of imaging techniques could help the diagnosis of this very rare clinical coexistence in preoperative assessment. abstract_id: PUBMED:18277309 Primary cholesteatoma of the external auditory canal: is the epithelial migration defective? Objective: Primary cholesteatoma of the external auditory canal (EACC) is a rare disease, characterized by osteonecrosis with formation of sequesters and ingrowth of keratinizing squamous epithelium in the bony EAC. The aetiology and pathogenesis are unknown, but an earlier study has demonstrated abnormal epithelial migration in such ears. The present study explored whether this interesting result can be reproduced. Study Design: The epithelial migration in 10 ears with EACC was studied using the ink-dot method. Two ears with minor lesions were studied before treatment, and 8 ears were studied after operation or conservative treatment. The results were compared with the migration in 15 normal ears examined in a previous study using the same method. Setting: Tertiary referral center. Main Outcome Measures: 1) Presence or absence of epithelial migration; 2) change of the normal centrifugal pattern; and 3) estimated migration speed compared with normal ears. Results: A qualitatively normal centrifugal migration was present on intact (unaffected or healed) skin in all 10 ears with EACC and was only missing directly on crust-covered lesions. The estimated migration speed was similar to the speed in normal ears, that is, approximately 150 µm/d. Conclusion: Ears with EACC seem to have qualitatively and quantitatively normal epithelial migration except directly on crust-covered lesions. It is unlikely that an abnormal epithelial migration is involved in the pathogenesis of this disease. These observations have implications for EAC disorders with similar clinical features that involve bony invasion, including osteoradionecrosis. abstract_id: PUBMED:35483689 Clinical analysis and surgical treatment of congenital external auditory canal stenosis complicated with external auditory canal cholesteatoma Objective: To investigate the clinical features, diagnosis, and treatment experience of congenital stenosis of an external auditory canal with external auditory canal cholesteatoma.
Methods: The clinical data of 152 patients (153 ears) with congenital external auditory canal stenosis complicated with external auditory canal cholesteatoma treated in the Department of Otorhinolaryngology Head and Neck Surgery of Beijing Tongren Hospital affiliated to Capital Medical University from January 2009 to December 2019 were analyzed retrospectively, including the age of onset, clinical manifestations, signs, audiology, high-resolution computed tomography (HRCT) of the temporal bone, mode of operation, and so on. According to the preoperative imaging findings, intraoperative findings, and pathological results, the 152 patients with congenital external auditory canal stenosis with external auditory canal cholesteatoma were treated with canaloplasty and tympanoplasty while clearing the cholesteatoma. Results: All patients were followed up for 2-2.5 years; there was no recurrence of cholesteatoma, and the reconstructed external auditory canal was spacious. The hearing levels of the 108 ears that underwent hearing reconstruction were significantly improved, and the average hearing threshold was reduced by 20-35 dB. Conclusion: Stenosis of the external auditory meatus is easily complicated by cholesteatoma of the external auditory canal, and the occurrence of cholesteatoma of the external auditory canal is directly related to the diameter of the external auditory canal meatus. However, the time of occurrence of the cholesteatoma is not directly related to the diameter of the external auditory canal. Severe congenital stenosis of the external auditory canal with auricle deformity is easily missed and misdiagnosed due to retroauricular redness, swelling, and ulceration. For this kind of patient, the cholesteatoma should be treated first, and then plastic surgery such as auricle reconstruction should be performed. Retroauricular incisions should be avoided to create conditions for future auricle reconstruction. abstract_id: PUBMED:28154624 External auditory canal cholesteatoma External auditory canal cholesteatoma is defined as an accumulation of keratin leading to osteolytic bony erosion. It is a rare or underdiagnosed otologic entity. Our study aims to illustrate one case of external auditory canal cholesteatoma. A hypertensive, diabetic patient, 65 years of age, under therapy, had suffered from severe right earache, insomnia, hearing loss, scant purulent otorrhea and grade V right facial paralysis for 3 months. Initially the diagnosis was malignant external otitis. The patient underwent antibiotic therapy without improvement. Otologic examination showed a burgeoning ulcerative lesion of the posterior wall of the right external auditory meatus. Systematic lesion biopsy was performed and confirmed the diagnosis of cholesteatoma. The patient underwent open technique tympanoplasty. External auditory canal cholesteatoma is rare and its clinical symptoms are not specific; coronal CT scan makes a major contribution to a positive diagnosis by showing a bony crater of the external auditory meatus. Treatment depends on the extent of the lesions. It can be based on simple local care or on open technique tympanoplasty. External auditory canal cholesteatoma may have multiple clinical aspects and lead to confusion with other pathologies of the external auditory meatus. abstract_id: PUBMED:30578992 Clinical analysis of pediatric primary external auditory canal cholesteatoma. Objective: Pediatric primary external auditory canal cholesteatoma (EACC) is a rare disease.
The present study aimed to explore the clinical features and prognosis of this disease. Methods: Clinical data of 41 ears with pediatric primary EACC were collected, and the clinical characteristics and prognosis were analyzed in this retrospective study. Results: A total of 35 patients (median age of 13 years) were recruited in this study. Of these, 13 were affected on the left side and 16 on the right, while 6 had bilateral involvement. A total of 35 ears had hearing loss, 26 had otalgia, 15 had otorrhea, 12 had tinnitus, and 4 had pruritus. The median course of the disease was 1 month. The bone destruction of the external auditory canal (EAC) was located on the inferior wall in 20 ears, the posterior wall in 22 ears, the superior wall in 15 ears, and the anterior wall in 15 ears. According to Shin's stages, 24 ears were in stage I, 2 in stage II, and 15 in stage III. A total of 40 ears underwent transcanal cholesteatoma removal (TCR) and 1 underwent both TCR and tympanoplasty. Finally, 33 ears were followed up postoperatively, and only 1 ear presented recurrence. Conclusion: The posterior wall of the EAC is the most commonly affected wall, and the involvement of multiple walls is common. Due to the remodeling of the EAC bone in pediatric patients, high-stage lesions can be treated by minimally invasive surgical debridement combined with close postoperative follow-up. abstract_id: PUBMED:29797972 Clinical characteristics and surgical treatment of types Ⅱ to Ⅳ external auditory canal cholesteatoma Objective: The aim of this study was to explore the clinical characteristics, diagnostic methods, surgical procedures, and results of types Ⅱ to Ⅳ external auditory canal cholesteatoma. Method: Of the 63 cases, 39 patients had type Ⅱ external auditory canal cholesteatoma, 22 had type Ⅲ, and 2 had type Ⅳ. All patients had surgery to remove the lesion under the microscope and underwent canaloplasty, tympanoplasty, or modified mastoidectomy as well. Results: Meatal stenosis occurred postoperatively in 2 cases, and cholesteatoma recurred in 2 cases; both were cured after a second operation. Four patients' hearing levels did not improve postoperatively because of severe damage to their ossicular chains. All other patients were cured with one operation, with a cavity showing good skin coverage and no recurrence. Conclusion: Temporal bone CT has important significance in the diagnosis and classification of external auditory canal cholesteatoma. Prevention and early diagnosis are particularly important. Surgical treatment should completely remove the lesion and restore hearing according to the situation, and the appropriate surgical approach should be chosen according to the classification and location of the external auditory canal cholesteatoma as well as the hearing level of the patient. abstract_id: PUBMED:32158662 Spontaneous External Auditory Canal Cholesteatoma: Case Series and Review of Literature. External auditory canal cholesteatoma is a rare disease. Its main symptoms are ear discharge and dull chronic ear pain, and its significant findings are bony erosion of the external canal with keratin debris accumulation. Inadequate diagnosis and delay in the treatment of this condition can lead to serious complications. We present our experience with eight patients with spontaneous EACC treated by surgical management.
As spontaneous EACC is a rare condition, the aim of the study was to share our experience of the diagnosis and management of spontaneous EACC. abstract_id: PUBMED:35602283 External Auditory Canal Cholesteatoma: Clinical and Radiological Features. Introduction External auditory canal cholesteatoma (EACC) is often misdiagnosed. Objectives To outline the clinical presentation of EACC, and to describe its radiological findings on high-resolution computed tomography (HRCT) of the temporal bone. Methods The clinical records of all patients diagnosed with EACC from April 2017 to March 2020 in a tertiary care center were retrospectively reviewed. The clinical presentation, the findings on the HRCT of the temporal bone, and the treatment provided were analyzed. Results A total of 9 patients, 7 males and 2 females, with a mean age of 30 years, were diagnosed with primary EACC. Six patients presented with otorrhoea, three with otalgia, three with hearing loss, and one with facial palsy. Some patients had multiple symptoms. The most common findings on otomicroscopy were destruction of the posterior and inferior canal walls, with cholesteatoma and an intact tympanic membrane (six patients). Two patients had an aural polyp, and one had a narrow ear canal due to sagging of the posterior canal wall. On HRCT, all nine patients showed soft-tissue density in the external auditory canal with erosion of the canal wall. The disease extended to the mastoid in eight cases, and to the cavity of the middle ear in one. There were three cases of dehiscence of the facial canal. Dehiscence of the dural and sinus plates was observed in two cases each. Eight patients underwent mastoidectomy, and one underwent debridement with canalplasty. Conclusion Review of the clinical and radiological findings is essential to reduce the rate of misdiagnosis. abstract_id: PUBMED:37550109 External Ear Disease: Keratinaceous Lesions of the External Auditory Canal. Keratosis obturans (KO) and external auditory canal cholesteatoma (EACC) are two distinct keratinaceous lesions of the external ear. This article reviews the signs, symptoms, pathophysiology, workup, and treatment of each. Patients with either pathology can often be managed in the clinic with debridement; however, EACC is more likely to involve osteonecrosis and require more extensive operative management if disease is not confined to the canal, on account of the bony erosion characteristic of cholesteatoma. If required for extensive disease, surgical approaches to both pathologies are similar. abstract_id: PUBMED:35822375 Clinical features, diagnosis and treatment for patients presenting with granulation tissue of the external auditory canal Objective: To explore the clinical characteristics, diagnosis, and treatment of patients presenting with granulation tissue of the external auditory canal. Methods: The data of 71 postoperative patients presenting with granulation tissue of the external auditory canal in the Department of Otolaryngology, the First Affiliated Hospital of the Air Force Military Medical University from January 2015 to June 2020 were analyzed retrospectively, including the chief complaint, physical examination, auxiliary examination and preoperative imaging; biopsy was performed when necessary to confirm the diagnosis.
Among the 71 patients, 30 cases were diagnosed as chronic otitis media, 19 as external auditory canal cholesteatoma, 5 as external auditory canal carcinoma, 6 as paraganglioma, 1 as granulomatous hemangioma, 1 as first branchial cleft fistula, 4 as granuloma of the external auditory canal, 4 as hemangioma of the external auditory canal, and 1 as a foreign body of the external auditory canal. Individualized treatment plans were made according to the characteristics and extent of the lesions. Results: Postoperative follow-up was 12 to 74 months, with an average of (44±18.1) months. Seventy patients (98.6%) had no complications such as sensorineural deafness, external auditory canal stenosis, or peripheral facial paralysis after surgery; one patient with paraganglioma had postoperative nerve function of grade Ⅱ, was treated with neurotrophic agents, and recovered to grade Ⅰ after 3 months. Conclusion: Patients presenting with granulation tissue of the external auditory canal may have any of various underlying diseases. It is necessary to analyze the patient's medical history in detail, confirm the diagnosis in combination with imaging examination, and formulate an individualized treatment plan to reduce misdiagnosis and missed diagnosis. Answer: The study of epithelial migration in primary cholesteatoma of the external auditory canal (EACC) using the ink-dot method revealed that ears with EACC seem to have qualitatively and quantitatively normal epithelial migration except directly on crust-covered lesions. The estimated migration speed was similar to the speed in normal ears, approximately 150 μm/d. This suggests that it is unlikely that an abnormal epithelial migration is involved in the pathogenesis of this disease (PUBMED:18277309).
Instruction: Does acid suppressive therapy reduce the risk of laryngeal cancer recurrence? Abstracts: abstract_id: PUBMED:16222214 Does acid suppressive therapy reduce the risk of laryngeal cancer recurrence? Objective/hypothesis: Gastroesophageal reflux disease (GERD) is implicated in laryngeal cancer pathogenesis and recurrence after therapy. There are currently limited data on the effect of acid suppressive therapy in decreasing the recurrence of laryngeal cancer. Therefore, we conducted this study to identify the potential effect of GERD and acid suppressive therapy on recurrences after larynx-preserving therapies. Study Design: Case-control study. Methods: Cases and controls, derived from a single tertiary care center, were patients with newly diagnosed localized laryngeal cancer (T3 or less and absence of nodes) who had undergone larynx-preserving surgery or radiotherapy/chemotherapy and were followed between January 1, 2000 and December 31, 2003. Univariable associations were performed for demographics, smoking and alcohol patterns, stage of tumor, initial treatment, surgeon of record, presence of GERD, and the use of acid suppressive medications. Multivariable associations were performed for clinically significant variables. Results: Of 258 patients with laryngeal cancer, 61 satisfied the selection criteria. Twenty-two of 61 (36%) developed recurrence and constituted cases, whereas 39/61 (64%) did not have recurrence and constituted controls. On univariable analysis, significant factors for decreased recurrences were GERD, hazard ratio 0.24 (95% confidence interval [CI] 0.08-0.71), and acid suppressive therapy, hazard ratio 0.22 (95% CI 0.07-0.66). On multivariable analysis, laryngeal cancer recurrence was significantly less frequent in those on acid suppressive therapy, hazard ratio 0.31 (95% CI 0.13-0.75). Conclusions: Acid suppression after laryngeal cancer therapies may have a protective effect against laryngeal cancer recurrences. A prospective study is needed to better define this perceived beneficial effect. abstract_id: PUBMED:8643266 Stoma recurrence after laryngectomy: an analysis of risk factors. Data from 130 patients who underwent total laryngectomy for squamous cell carcinoma of the larynx were reviewed. Patients were treated either by primary laryngectomy and planned postoperative radiotherapy or by primary radiotherapy and subsequent salvage laryngectomy. Patients with other treatment modalities, patients with positive margins of resection, and laryngectomies for hypopharyngeal cancers were excluded from the study. The stomal recurrence rate with reference to several risk factors, such as primary tumor stage, location of tumor, lymph node metastases, timing of tracheotomy, and presence of a postoperative pharyngoperistomal fistula, was analyzed. The overall incidence of stomal recurrence was 10%. The treatment modality appeared to have an impact on subsequent stomal recurrence: stomal recurrence developed more often after salvage laryngectomy (18.4%) than after primary laryngectomy with planned postoperative radiation (4.8%). Advanced T stage, N stage, subglottic involvement, and preoperative tracheotomy are risk factors for stomal recurrence only in patients with a primary laryngectomy. Stomal recurrence developed in only four patients after primary laryngectomy with planned radiation. All four patients had more than one risk factor: primary tumor stage T4 (four times), subglottic involvement (three times), and preoperative tracheotomy (three times).
The presence of a postoperative pharyngoperistomal fistula likewise may represent a risk factor for the development of a stomal recurrence. abstract_id: PUBMED:19857193 Stomal recurrence after total laryngectomy: clinical and pathological analysis of risk factors Objectives: The aim of this study was to evaluate risk factors for stomal recurrence and to state and emphasize the importance of these factors. Patients And Methods: This study was carried out retrospectively on 119 patients (116 males, 3 females; mean age 56.8 years; range 31 to 86 years) with larynx cancer who underwent total laryngectomy. All patients were examined for T-stage and location of the primary tumor, timing of tracheotomy (preoperative or postoperative), modality of applied treatment, presence of a pharyngeal stoma fistula, and presence of cervical lymph node metastasis in the postoperative period. Results: Five patients (4.2%) out of 119 had stomal recurrence. The development period of stomal recurrence after laryngectomy was found to be between six months and three years (mean: 16.6 months). Conclusion: The treatment of this fatal complication is very difficult, and it has a poor prognostic nature. As a result, the presence of subglottic involvement and advanced tumors in our cases are considered the main risk factors. abstract_id: PUBMED:28600598 Recurrence of vocal fold leukoplakia after carbon dioxide laser therapy. This work aims to analyze the recurrence of vocal fold leukoplakia after carbon dioxide (CO2) laser resection. In this retrospective study, all patients undergoing CO2 laser resection of vocal fold leukoplakia were followed up for at least 2 years. Recurrence was diagnosed as any presence of leukoplakia in the vocal cord subsequent to previous successful complete resection. A total of 326 patients with complete resection of vocal fold leukoplakia and subsequent follow-up surveillance laryngoscopy were studied. The recurrence rate, the recurrence time, and risk factors were evaluated. Of these, 52 (16.0%) patients experienced recurrence, with a mean follow-up time of 50.5 ± 15.4 months. The mean time to recurrence was 16.2 ± 14.1 months. Univariate analysis showed that the size of the lesion (P < 0.001, Pearson χ² test; P < 0.001, log-rank test) and the pathological grade (P = 0.025, Pearson χ² test; P = 0.028, log-rank test) were significantly related to recurrence. The size of the lesion was an independent prognostic factor for recurrence on multivariate analysis (P = 0.001, logistic regression; P = 0.001, Cox proportional hazards model). Considering the possible recurrence of vocal fold leukoplakia, long-term follow-up is required after CO2 laser resection. In conclusion, the size of the lesion combined with the pathological grade are important risk factors that predict vocal fold leukoplakia recurrence. abstract_id: PUBMED:37515635 The subglottic involvement is an independent risk factor for recurrence of laryngeal amyloidosis. Objective: To analyze the risk factors for recurrence of laryngeal amyloidosis (LA). Methods: The clinical data of patients with LA admitted to the Otolaryngology Head and Neck Surgery Department of the Second Affiliated Hospital of Xi'an Jiaotong University from August 2009 to June 2022 were analyzed retrospectively; then, the risk factors for recurrence and their impacts on the recurrence time were also analyzed.
Results: Of the 44 patients with LA, the majority (38 cases, 86.4%) had involvement of only one anatomical region, and the others (6 cases, 13.6%) had two laryngeal regions involved concurrently. Overall, the glottic region was the most commonly affected area (28 cases, 63.6%), followed by the supraglottic region (16 cases, 36.4%) and the subglottic region (6 cases, 13.6%). In addition, all the lesions were categorized as isolated nodule (31.8%), submucosal localized deposition (52.3%), or submucosal diffuse deposition (15.9%) according to their morphologies under electronic laryngoscopy. Finally, six patients (13.6%) had recurrence after the operation, with a median recurrence time of 24.5 months, and subglottic involvement was confirmed to be an independent risk factor for recurrence of LA by univariate and multivariate logistic regression analyses (P < 0.05). Meanwhile, patients with subglottic involvement presenting as submucosal diffuse deposition had a considerably shorter recurrence time (t = 5.759, P = 0.005). Conclusion: Subglottic involvement is an independent risk factor for recurrence of LA. Level of Evidence: 4. abstract_id: PUBMED:27716797 Risk of Recurrence in Laryngeal Cancer. A cohort study was undertaken to analyze the risk of recurrence among 1616 patients with primary squamous cell carcinoma of the larynx from 1983 to 2010 at a single, tertiary academic center in Oslo, Norway. The cohort was followed from the date of diagnosis to September 2011. Competing risk regression analysis assessed the association between various risk factors and the risk of recurrence, where death was considered a competing event. Recurrence was observed in 368 patients (23%) during the study period. The majority (71%) of recurrences involved the location of the primary tumor. The overall risk of recurrence during the first three years after initiating treatment was 20.5%. Increased risk of recurrence was observed in patients with supraglottic cancer, younger patients, those with T2-T3 tumors, and patients treated in the earlier part of the study period. Significant factors for recurrence in glottic carcinomas were age, treatment in the earlier part of the study, and T-status, whereas age was a significant factor in supraglottic cancer. N-status appeared less significant. In conclusion, follow-up of laryngeal squamous cell carcinoma should place particular emphasis on the site of the primary tumor, younger patients, cases of supraglottic cancer, and T2-T4 primary tumors, especially during the first three years after treatment. More studies are needed to assess the impact of surgical versus non-surgical treatment, and eventually the significance of recurrence, for disease-specific and overall survival in cases of advanced laryngeal squamous cell carcinoma. abstract_id: PUBMED:2126104 Stomal recurrence after laryngectomy: interrelated risk factor study. This study was designed to identify pathogenic factors in recurrent carcinoma at the tracheal stoma. The charts of 444 patients with laryngeal cancer who were treated surgically between 1976 and 1988 revealed stomal recurrence in 15 (3.4%). Eighty percent of the patients with stomal recurrence had tumors in the subglottis. In comparison to patients with different sites involved, this was significant (p < 0.001). Other variables examined and analyzed included primary stage, previous treatment, neck pathologic status, neck treatment, age, sex, postoperative adjunctive therapy, and timing of tracheotomy.
Tumor involvement of the subglottis is the single most important variable in stomal recurrence of carcinoma. Previous conservation laryngeal surgery and preoperative/emergency tracheotomy are not related to stomal recurrence of carcinoma. Mean length of survival for patients with stomal recurrence of carcinoma in our series was 8.9 months. Two patients were successfully treated with extensive surgical resection. The dismal prognosis of stomal recurrence of carcinoma suggests that management of this condition should focus on prevention. Treatment of patients with tumors of the subglottis should include attention to the paratracheal region and superior mediastinum, in conjunction with laryngectomy immediately after initial diagnosis. abstract_id: PUBMED:7316854 Stomal recurrence. A critical analysis of risk factors. Stomal recurrence developed in 5% of the 507 patients who underwent total laryngectomy for a squamous cell carcinoma. The most common site of the primary tumor was the glottis, followed by the supraglottic and pyriform sinus regions. Initial subglottic extension of the tumor and metastatic lymphadenopathy were the most significant risk factors. The primary tumor size, prior emergency tracheostomy, and conservation surgical procedures had no effect on the incidence of stomal recurrence. The median survival in patients with stomal malignant neoplasms was only five months. In high-risk patients, extended dissection or elective postoperative radiotherapy is recommended. abstract_id: PUBMED:12954897 Risk factors for local-regional recurrence following preoperative radiation therapy and surgery for head and neck cancer (stage II-IVB). Purpose: To discover possible risk factors for local-regional recurrence (LRR) following preoperative radiation therapy and curative surgery for head and neck squamous cell carcinoma (SCC) (stage II-IVB). Materials And Methods: Clinical records from 1987 to 1999 of 161 patients with head and neck SCC (oral cavity, 80 patients; larynx, 50; hypopharynx, 19; oropharynx, 12) who underwent preoperative radiation therapy and surgery were retrospectively reviewed. One hundred thirty-two (82%) of the patients had stage III or IV cancer. The median radiation dose was 38 Gy. Results: The 5-year overall survival rate and LRR rate were 58% and 35%, respectively. At multivariate analysis, oral cavity cancer (P = .020), clinical T stage (P = .016), clinical N stage (P = .017), and status of surgical margins (P = .008) emerged as variables that were significantly associated with LRR. The analysis of only those patients with lymph node involvement showed that oral cavity cancer (P = .008), advanced N-stage cancer (P = .045), and a long interval between the start of preoperative radiation therapy and surgery (≥7 weeks) (P = .019) emerged as variables that were significantly associated with LRR. Conclusion: Oral cavity cancer, advanced T or N stage of disease, and unsatisfactory margins were risk factors for LRR. A long interval (≥7 weeks) was a risk factor for LRR in patients with lymph node involvement.
Twelve patients (19%) recurred following radiation therapy, with ultimate salvage in 11, for a 3-year survival rate of 98%. Voice preservation was achieved in 83%. Continued smoking after radiation therapy was associated with a significantly greater risk of recurrence. Stage of tumor and anterior commissure involvement were not associated with increased recurrences. Sixty-seven percent of patients who did not recur had good voice quality after treatment. Vocal fold stripping or excisional biopsy rather than limited biopsy for initial diagnosis, complications of treatment, and continued smoking after treatment were all significantly associated with an increased risk of poorer voice quality after treatment, while bilateral vocal fold tumors were associated with risks that approached significance. Voice analysis of five patients revealed that objective changes in voice can be detected after radiation therapy in those with associated risk factors but may be normal in those without these risk factors. Answer: Yes, acid suppressive therapy may reduce the risk of laryngeal cancer recurrence. A case-control study found that on univariable analysis, significant factors for decreased recurrences were gastroesophageal reflux disease (GERD) and acid suppressive therapy. On multivariable analysis, laryngeal cancer recurrence was significantly less frequent in those on acid suppressive therapy (PUBMED:16222214). This suggests that acid suppression after laryngeal cancer therapies may have a protective effect on laryngeal cancer recurrences, although the study concludes that a prospective study is needed to better define this perceived beneficial effect.
Instruction: Interarch tooth size relationship of Indian population: does Bolton's analysis apply? Abstracts: abstract_id: PUBMED:24939274 Interarch tooth size relationship of Indian population: does Bolton's analysis apply? Background: Orthodontists have several methods to detect interarch tooth-size discrepancies in patients presenting for orthodontic treatment. Methods such as Pont's analysis, Nance and Carey's analysis, the Peck and Peck ratio, and Bolton's analysis are commonly used. A thorough search of the literature reveals a paucity of reliable data on the mesiodistal dimensions of the crowns of the permanent dentition of the Indian population. Without information about the sizes of individual teeth and groups of teeth for the Indian population, it is difficult for the clinician to make an adequate diagnosis and treatment plan and to carry out a plan of therapy. Aim: To determine the sizes of individual permanent teeth and sex differences, with tooth size ratios according to Bolton's formula, and to evaluate the reliability of Bolton's norms (Anterior/Posterior) in the Indian population. Materials And Methods: One hundred and twenty cases with naturally excellent occlusion and a mean age of 22 years (range 18-27 years) were selected for the study. The casts were prepared after taking alginate impressions of the maxillary and mandibular arches and pouring them immediately with dental stone. A sliding digital Vernier calliper was used to measure the teeth. The ratios of the means of the mesiodistal dimensions of the tooth types were computed. Results: The mean overall ratio for the Indian population was found to be 92.11 with a standard deviation of 2.12. The values ranged from 86.50 to 97.13, and the median was 92.16. The mean anterior ratio for the Indian population was found to be 78.14 with a standard deviation of 2.59. Conclusion: Bolton's original data do not represent the Indian population. In our study, greater size variability was found in maxillary teeth as compared with mandibular teeth, except for the mandibular first molar. Our study indicated that population-specific standards are necessary for clinical assessments. Significant differences were shown for the overall ratio and anterior ratio for both sexes as compared to Bolton's ratios. abstract_id: PUBMED:10672217 Interarch tooth size relationships of 3 populations: "does Bolton's analysis apply?". This study evaluates whether Bolton's interarch ratios extend across populations and genders. The data were derived from systematically collected preorthodontic casts of 180 patients, including 30 males and 30 females from each of 3 populations (black, Hispanic, and white). Forty-eight mesiodistal contact points were digitized on each model, and the lengths of the anterior, posterior, and overall arch segments were calculated. The results showed significant (P < .05) ethnic group differences in all 6 arch segment lengths and in all 3 interarch ratios. Whites displayed the lowest overall ratio (92.3%), followed by Hispanics (93.1%) and blacks (93.4%). The group differences were due primarily to the relationships between the posterior segments. The arch segments of males were significantly larger than those of females; the overall and posterior ratios were also significantly larger in males than in females. Multiple regression analyses showed that individual differences in the overall ratio were most closely associated with the size of the lower second premolar, followed by the upper lateral incisors, upper second premolars, and the lower central incisors.
In combination, these 4 teeth explained approximately 50% of the variation in the overall ratio between subjects. We conclude that interarch tooth size relationships are population and gender specific. Bolton ratios apply to white females only; the ratios should not be indiscriminately applied to white males, blacks, or Hispanics. abstract_id: PUBMED:31784006 The influence of labiolingual and mesiodistal anterior tooth dimensions on interarch relationships: A modified anterior Bolton analysis. Introduction: Tooth size and morphology of anterior teeth influence interarch relationships. The Bolton analysis uses tooth width to calculate a sum of mandibular to maxillary tooth width ratios necessary for proper occlusion. Several parameters not factored into the Bolton analysis influence occlusion, such as tooth thickness. This study sought to use 3D modeling to develop and assess a tooth size analysis that encompasses labiolingual thickness as well as mesiodistal width of anterior teeth. Methods: The role of tooth thickness in interarch relationships was studied using simulations in 3D modeling software (SureSmile). To develop a new chart of interarch ratios based on tooth thickness, a series of simulations was produced with varying tooth thicknesses and widths. The new ratios were evaluated on records from 50 patients. Results: Findings from the simulations suggest that the ideal tooth thickness remains approximately 2 mm if the overall tooth width of the dentition increases and the interarch anterior ratio is maintained. The thickness-adjusted anterior mandibular to maxillary tooth ratio ranges from 0.70 to 0.79 depending on the tooth thickness. This thickness-adjusted ratio provides a superior prediction for the sum of anterior tooth width compared with the Bolton analysis. Conclusion: Tooth thickness affects interarch tooth width ratios and anterior occlusion. A thickness-adjusted ratio can be used to more accurately predict the anterior tooth dimensions necessary to achieve proper occlusion. abstract_id: PUBMED:27350861 Rehabilitation of a patient with non-syndromic partial oligodontia. Oligodontia is defined as congenital tooth agenesis with the absence of six or more permanent teeth. This clinical report describes a patient with non-syndromic partial oligodontia, with retained deciduous teeth and the absence of 16 permanent teeth. Anterior esthetic problems were caused by interarch tooth size discrepancy, interdental space, aberrant tooth dimensions, and the absence of centric contacts of the anterior teeth. Prosthetic restoration after orthodontic and implant treatment was performed with a multi-disciplinary team approach. Favorable functional and esthetic results were obtained using a definitive prosthesis. abstract_id: PUBMED:16448244 Standardizing interarch tooth-size harmony in a Syrian population. The Bolton analysis is considered to be a good indicator for evaluating the degree of intermaxillary tooth-size harmony, but the possibility of ethnic variation of these values should be examined. Thus, the aim of this study was to calculate both the anterior and overall ratios of mandibular and maxillary tooth sizes for a Syrian sample of harmonious permanent dentitions and to compare these ratios with the data from the Bolton and the Michigan studies.
In plaster models of 55 Syrian patients (11-22 years) with neutral occlusion (Angle Class I), harmonious overjet and overbite, and no reduction of mesiodistal tooth width or missing teeth, the mesiodistal widths of each tooth from the incisors to the first permanent molars were measured in both arches. In the statistical data analysis, the anterior and overall ratios were calculated according to Bolton. The results for the anterior ratio (78.99 ± 2.18) and the overall ratio (92.26 ± 2.06) showed no statistically significant differences by sex (P > .48). These values and the degree of variation were similar to the original data by Bolton. Both studies differed considerably from the values of the anterior ratio found in the Michigan University study, which also shows a higher degree of variability. Nevertheless, the overall ratios of all three studies were very similar. Therefore, the interarch tooth-size analysis and values for a harmonious dentition developed by Bolton can also be transferred to an Arabian, or at least a Syrian, population. abstract_id: PUBMED:24987639 Effects of premolar extractions on Bolton overall ratios and tooth-size discrepancies in a north Indian population. Aim: The purpose of this study was to evaluate the Bolton overall ratio in a north Indian population reporting for orthodontic treatment and to determine the effect of extractions on the Bolton ratios. Another aim of the study was to check the effects of different extraction patterns on the final Bolton ratio. Materials And Methods: One hundred and twenty pre-treatment dental casts (60 males and 60 females) of orthodontic patients were selected randomly. Mesiodistal dimensions of the mandibular and maxillary teeth were measured before treatment and subjected to Bolton analysis. Hypothetical tooth extractions in the following combinations were performed for each patient: all first premolars; all second premolars; upper first and lower second premolars; and upper second and lower first premolars. The measurement results were again subjected to Bolton analysis to see whether any tooth-size discrepancy had been created. Results And Conclusion: The tooth material ratio of the studied north Indian population shows a mild maxillary tooth material excess. Extraction of premolars in any combination causes the maxillary tooth material excess to increase further. There is no significant sex difference in the tooth material ratios with or without extractions. In patients requiring extraction, extraction of all first premolars or of the maxillary first and mandibular second premolars should be preferred. When deciding on a treatment plan involving extraction of teeth, we need to consider that the maxillary tooth mass may increase after extraction. Furthermore, normal or clinically significant tooth size discrepancies may change following extraction of teeth. abstract_id: PUBMED:29807960 Use of Interarch Width Ratio to Measure Transverse Relationship: A New Method to Measure and Assess Interarch Discrepancy. Aim: The purpose of this study was to assess the arch width, transverse discrepancy, and transverse interarch ratio in a class I malocclusion sample, with and without crossbite, in the permanent dentition stage. Materials And Methods: Records of class I malocclusion patients with minimal crowding and spacing, with and without posterior crossbite, were selected. Each group consisted of 40 pairs of dental casts (20 males and 20 females).
Arch widths were measured for the canines, first and second premolars, and first permanent molars using the buccal approach. Means and standard deviations (SDs) of dental arch widths were calculated for the maxillary and mandibular arches, and the interarch width ratio was calculated. Results: There was a highly significant difference between the noncrossbite and crossbite groups regarding the maxillary width, p < 0.001. However, no differences were found between groups for the mandibular widths, p > 0.01. There were also significant differences between the two groups for the intercanine, first and second premolar, and first molar ratios, p < 0.001. Conclusion: This new simple method showed that a transverse maxillary-mandibular ratio of 1:1.1 is ideal. A ratio less than 1:0.9 will indicate the presence of crossbite. Clinical Significance: The interarch ratio significantly aids orthodontic treatment planning in patients requiring maxillary expansion and/or in surgical cases. abstract_id: PUBMED:37226607 ALGORITHM OF RADIOLOGICAL EXAMINATION OF PATIENTS IN THE PLANNING OF ORTHODONTIC TREATMENT OF INTERARCH RELATIONSHIP OF TEETH AND TEETH POSITION ANOMALIES. Objective: To increase the efficiency of diagnosis and planning of orthodontic treatment of patients with interarch tooth relationship and tooth position anomalies by developing an optimal algorithm for radiological examination using cone-beam computed tomography (CBCT), teleroentgenography (TRG), and orthopantomography (OPG). Patients And Methods: 1,460 patients with interarch tooth relationship and tooth position anomalies were examined at the Department of Radiology of the P. L. Shupyk National Healthcare University of Ukraine. The cohort of 1,460 examined patients was divided by gender: 600 men (41.1%) and 860 women (58.9%), aged 6 to 18 years and 18 to 44 years. Patients were distributed by the number of primary pathology signs and the number of concomitant pathology signs. Results: The choice of the optimal radiological examination of patients is influenced by the number of signs of the main and accompanying pathology. The risk of a secondary examination of the patient was determined with a mathematical method for choosing the optimal diagnostic technique; the risk of a secondary radiological examination was determined at a marginal probability of 88% (at a higher value, the model predicts a repeat visit). Conclusion: The developed diagnostic model shows that at a Pr-coefficient of 0.79, OPG and TRG are recommended; with indicators of 0.88, CBCT is recommended in the age groups 6-18 and 18-44 years. abstract_id: PUBMED:8531154 Association between the interarch distance and food bolus size in the early phase of mastication. The effects of food bolus size on interarch distance were examined in three dimensions with sphere-shaped gelatin (5 to 25 mm diameters). Interarch distance in the early phases of chewing showed a significant linear relation (p < 0.001) to food bolus size. The correlation coefficients were high, particularly at the second and third opening sequences. At the first opening sequence during ingestion, the interarch distance for small boluses of less than 15 mm was nearly constant and relatively wider than the bolus size. Practical information for determining the interincisal distance in prosthodontic treatment is provided.
abstract_id: PUBMED:33893268 Interarch Traction Strategy for Palatal Cuspid Impactions. Aim And Objective: To disimpact a palatally impacted canine using a novel, compliance-dependent technique. Background: Orthodontic traction of palatally impacted teeth warrants careful mechanical strategies to avoid complications that include root damage to adjacent teeth and resorptions. Sound biomechanical control to avoid these side effects is considered paramount in planning the traction. Technique: The palatally impacted canine was pulled into the arch with the aid of a modified power arm on the exposed canine and a miniscrew on the lower arch. Conclusion: The impacted canine was successfully brought into occlusion within 11 months. Clinical Significance: This paper highlights the use of a simple strategy using interarch mechanics and temporary anchorage devices (TADs) to aid in the safe mechanical eruption of impacted palatal canines without the need to bend complex wire designs. Answer: Bolton's analysis, which is used to detect interarch tooth size discrepancies, may not be universally applicable to all populations due to variations in tooth size and morphology across different ethnic groups. The study on the Indian population found that Bolton's original data do not represent the Indian population, as there was greater size variability found in maxillary teeth compared to mandibular teeth, except for the mandibular first molar. The mean overall ratio for the Indian population was found to be 92.11 with a standard deviation of 2.12, which is different from Bolton's norms. The study indicated that population-specific standards are necessary for clinical assessments, and significant differences were shown for the overall ratio and anterior ratio for both sexes as compared to Bolton's ratio (PUBMED:24939274). Similarly, another study evaluating Bolton's interarch ratios across different populations and genders found that these ratios are population and gender-specific. The study, which included black, Hispanic, and white populations, showed significant ethnic group differences in all interarch ratios, with whites displaying the lowest overall ratio, followed by Hispanics and blacks. The study concluded that Bolton ratios apply to white females only and should not be indiscriminately applied to white males, blacks, or Hispanics (PUBMED:10672217). Therefore, based on these studies, it can be concluded that Bolton's analysis does not universally apply to the Indian population, and population-specific standards should be used for clinical assessments of interarch tooth size relationships.
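For readers who want to reproduce the two ratio calculations discussed in this answer, the sketch below shows how they can be computed. It is a minimal illustration, not code from any of the cited studies: the tooth and arch widths are hypothetical values invented for the example, the reference means (overall 92.11 ± 2.12 and anterior 78.14 ± 2.59) are the Indian-sample statistics from PUBMED:24939274, and the transverse cut-offs (ideal about 1.1, crossbite below about 0.9) follow one reading of the 1:1.1 and 1:0.9 figures in PUBMED:29807960; the direction of that ratio (maxillary width divided by mandibular width) is an assumption, since the abstract's notation is ambiguous.

def bolton_ratio(mandibular_widths_mm, maxillary_widths_mm):
    # Bolton-style ratio: summed mandibular mesiodistal widths over summed
    # maxillary widths, expressed as a percentage. The anterior ratio uses
    # the 6 anterior teeth per arch; the overall ratio uses 12 teeth per arch.
    return 100.0 * sum(mandibular_widths_mm) / sum(maxillary_widths_mm)

# Hypothetical mesiodistal widths (mm), canine to canine.
mand_anterior = [5.4, 5.9, 6.8, 6.8, 5.9, 5.4]
max_anterior = [8.6, 6.6, 7.9, 7.9, 6.6, 8.6]
anterior = bolton_ratio(mand_anterior, max_anterior)
print(f"Anterior ratio: {anterior:.2f}%")  # 78.35%; Indian-sample mean 78.14 +/- 2.59

def transverse_ratio(maxillary_width_mm, mandibular_width_mm):
    # Assumed direction: maxillary over mandibular buccal arch width, with
    # ~1.1 ideal and < 0.9 suggesting crossbite (per PUBMED:29807960).
    ratio = maxillary_width_mm / mandibular_width_mm
    return ratio, ratio < 0.9

ratio, crossbite_suspected = transverse_ratio(52.0, 55.0)  # hypothetical widths in mm
print(f"Transverse ratio: {ratio:.2f}; crossbite suspected: {crossbite_suspected}")

A helper like this makes it easy to check a patient's measurements against population-specific norms rather than Bolton's original figures, which is precisely what the answer above argues for.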
Instruction: Is TNF-α gene polymorphism related to pulmonary functions and prognosis as determined by FEV1, BMI, COPD exacerbation and hospitalization in patients with smoking-related COPD in a Turkish population? Abstracts: abstract_id: PUBMED:24818527 Is TNF-α gene polymorphism related to pulmonary functions and prognosis as determined by FEV1, BMI, COPD exacerbation and hospitalization in patients with smoking-related COPD in a Turkish population? Introduction: Some conflicting results have been published about the relationship between the TNF-α-308 gene polymorphism and chronic obstructive pulmonary disease (COPD). The aim of this study was to determine whether the TNF-α-308 gene polymorphism was associated with smoking-related COPD and whether it was associated with pulmonary function parameters (PFTs), body mass index (BMI), and prognosis. Methods: We studied the frequencies of the TNF-α-308 gene polymorphism in 90 male subjects (60 subjects with COPD and 30 healthy smokers) in a Caucasian population. Results: There was no significant difference in the frequency of the G/G and G/A gene polymorphisms in the COPD group compared with control subjects (p > 0.05). When COPD patients with the G/A genotype were compared with those with the G/G genotype, the differences in PFTs and BMI at baseline and after one year were not statistically significant (p > 0.05). Likewise, the exacerbation and hospitalization data of COPD patients did not differ significantly between these groups. Conclusion: In conclusion, there was no difference between smoking-related COPD and the control group according to the TNF-α-308 gene polymorphism in a Caucasian population. In addition, it was shown that important determinants of the prognosis of COPD, such as FEV1, BMI, COPD exacerbation and hospitalization, were not associated with the TNF-α-308 gene polymorphism. abstract_id: PUBMED:25764776 May smoking be considered as the most important factor that determines premature death after hospitalization due to COPD exacerbation? Preliminary data. Introduction: COPD, cardiovascular diseases and cancer are smoking-related diseases that have been accepted as the leading causes of premature mortality worldwide. Nevertheless, smoking is still considered to be a risk rather than a prognostic factor for mortality. The aim of the study was to determine the most important factors in predicting the risk of premature death after effective hospital treatment of COPD exacerbation. Material And Methods: 34 consecutive patients hospitalized with COPD exacerbation were followed up and their post-hospitalization survival time was analyzed. Basic clinical data (BORG, MRC, BMI, pack-years and age) were collected. The following tests performed prior to discharge were assessed: 6MWT, spirometry, body plethysmography, diffusion capacity, transthoracic echocardiography (TEE) and whole-night polysomnography. Routine laboratory and immunoenzymatic tests (hs-CRP, endothelin 1 (ET-1), NT-proBNP, IL-6, TNF-α) were analyzed. Results: The average follow-up period was 15.1 ± 8.2 months. The mortality rate was 3/34 = 8.8%. Univariable analysis revealed significant differences that indicated a greater number of deaths at higher values of pack-years (p = 0.02), BODE (p = 0.03), heart rate (HR) after 6MWT (p = 0.003), and ET-1 (p = 0.04), but at lower values of TLCO/VA (p = 0.03) and 6MWT distance (p = 0.006). Multivariable analysis revealed that only pack-years (p = 0.005) were predictive of mortality.
Conclusions: Smoking history seems to have the strongest impact on short-term mortality after recovery from COPD exacerbation. abstract_id: PUBMED:33748274 Correlation between TNF-α -308 and +489 Gene Polymorphism and Acute Exacerbation of Chronic Obstructive Pulmonary Diseases. Acute exacerbation of chronic obstructive pulmonary disease (AECOPD) is becoming a common respiratory disease, leading to increased morbidity and mortality worldwide. Tumor necrosis factor-alpha (TNF-α) is a powerful proinflammatory cytokine involved in the pathogenesis of AECOPD. Therefore, we proposed a close correlation between the TNF-α polymorphisms [-308G/A (rs1800629), +489G/A (rs1800610)] and disease progression in patients with AECOPD. Comparison of the TNF-α genotypes between 198 patients diagnosed with AECOPD and 195 healthy controls revealed significant differences in the distribution of the three genotypes (AA, GA, GG) for TNF-α -308 (P < 0.05), but no difference for TNF-α +489. We found that patients with TNF-α -308 GA/AA genotypes showed a smaller adjacent arterial diameter, thicker bronchial wall, higher bronchial artery ratio, higher bronchial wall grading, and higher frequency of acute exacerbations than those with the TNF-α -308 GG genotype. Patients with TNF-α +489 GA/AA genotypes showed the same AECOPD properties as patients with TNF-α -308 GA/AA genotypes, except for the high frequency of acute exacerbations. Further experiments showed that the TNF-α -308 and +489 gene polymorphisms could affect the expression level of TNF-α in macrophages, suggesting the involvement of the macrophage population in disease regulation in AECOPD patients with TNF-α -308G/A and +489G/A genotype heterogeneity. In conclusion, the TNF-α -308 G/A genotype was related to AECOPD susceptibility and progression, while the TNF-α +489G/A genotype was related to AECOPD progression, but not AECOPD susceptibility. abstract_id: PUBMED:32946850 Distinguishing Smoking-Related Lung Disease Phenotypes Via Imaging and Molecular Features. Background: Chronic tobacco smoke exposure results in a broad range of lung pathologies including emphysema, airway disease and parenchymal fibrosis, as well as a multitude of extra-pulmonary comorbidities. Prior work using CT imaging has identified several clinically relevant subgroups of smoking-related lung disease, but these investigations have generally lacked organ-specific molecular correlates. Research Question: Can CT imaging be used to identify clinical phenotypes of smoking-related lung disease that have specific bronchial epithelial gene expression patterns, to better understand disease pathogenesis? Study Design And Methods: Using K-means clustering, we clustered participants from the COPDGene study (n = 5,273) based on CT imaging characteristics and then evaluated their clinical phenotypes. These clusters were replicated in the Detection of Early Lung Cancer Among Military Personnel (DECAMP) cohort (n = 360), and were further characterized using bronchial epithelial gene expression. Results: Three clusters (preserved, interstitial predominant and emphysema predominant) were identified. Compared to the preserved cluster, the interstitial and emphysema clusters had worse lung function, exercise capacity and quality of life. In longitudinal follow-up, individuals from the emphysema group had greater declines in exercise capacity and lung function, more emphysema, more exacerbations, and higher mortality.
Similarly, genes involved in inflammatory pathways (tumor necrosis factor-α, interferon-β) are more highly expressed in bronchial epithelial cells from individuals in the emphysema cluster, while genes associated with T-cell-related biology are decreased in these samples. Samples from individuals in the interstitial cluster generally had intermediate levels of expression of these genes. Interpretation: Using quantitative CT imaging, we identified three groups of individuals in older ever-smokers that replicate in two cohorts. Airway gene expression differences between the three groups suggest increased levels of inflammation in the most severe clinical phenotype, possibly mediated by the tumor necrosis factor-α and interferon-β pathways. Clinical Trial Registration: COPDGene (NCT00608764), DECAMP-1 (NCT01785342), DECAMP-2 (NCT02504697). abstract_id: PUBMED:34053484 Expression of serum fibroblast growth factor 7 and related inflammatory factors in patients with acute exacerbation of chronic obstructive pulmonary disease Objective: To investigate the expression of fibroblast growth factor 7 (FGF7) and related inflammatory factors in the serum of patients with acute exacerbation of chronic obstructive pulmonary disease (COPD). Methods: A case-control study was conducted. Patients with AECOPD admitted to the First Affiliated Hospital of Xinjiang Medical University from November 2016 to January 2020 were enrolled. The patients were divided into a mild group [forced expiratory volume in one second (FEV1)/forced vital capacity (FVC) ratio (FEV1/FVC) < 0.70, FEV1 percentage of predicted value (FEV1%) ≥ 80%], a moderate group (FEV1/FVC < 0.70, 50% ≤ FEV1% < 80%), and a severe group (FEV1/FVC < 0.70, 30% ≤ FEV1% < 50%) based on their lung function test results, with 20 patients in each group; 20 patients with normal pulmonary function who underwent elective non-thoracic surgery (such as gastrointestinal or orthopedic surgery) in the same period were selected as controls. The demographic data, FEV1/FVC, FEV1%, FVC, maximum mid-expiratory flow percentage of predicted value (MMEF%), 6-minute walking test (6MWT), and St George's Respiratory Questionnaire (SGRQ) score were recorded. Serum levels of FGF7, interleukins (IL-6, IL-1β) and tumor necrosis factor-α (TNF-α) were determined by enzyme-linked immunosorbent assay (ELISA). Pearson correlation was used to analyze the correlation between TNF-α and lung function. Results: Compared with the normal pulmonary function group, the levels of FEV1/FVC, FEV1%, MMEF% and 6MWT in the mild, moderate and severe groups were significantly decreased, and the SGRQ scores were increased; the indicators continued to deteriorate with the aggravation of the disease, with statistically significant differences between the severe group and the normal pulmonary function group [FEV1/FVC: 0.39±0.09 vs. 0.81±0.04, FEV1%: (38.80±6.28)% vs. (109.58±13.80)%, MMEF%: (0.34±0.14)% vs. (2.69±0.99)%, 6MWT (m): 279.00±41.61 vs. 402.85±53.97, SGRQ scores: 34.95±6.71 vs. 2.60±2.06, all P < 0.05]. Compared with the normal pulmonary function group, the levels of FGF7 in the mild, moderate and severe groups were significantly lower (ng/L: 6.31±2.65, 6.10±1.39, 6.64±1.77 vs. 8.29±3.51, all P < 0.05), but no significant difference was found among the mild, moderate and severe groups (all P > 0.05).
Compared with the normal pulmonary function group, IL-6 and TNF-α levels were significantly increased in the mild, moderate and severe groups, and TNF-α increased with the aggravation of the disease, with a statistically significant difference between the severe group and the normal pulmonary function group (ng/L: 7.42±2.28 vs. 3.83±0.92, P < 0.05). There was no significant difference in IL-1β level between the normal pulmonary function group and the mild, moderate and severe groups. Correlation analysis showed that TNF-α was negatively correlated with FEV1/FVC and FEV1% (r values were -0.350 and -0.527, respectively, both P < 0.01). Conclusions: In AECOPD patients, serum FGF7 was decreased, while IL-6 and TNF-α were increased; however, with the aggravation of the disease, there was no significant change in the level of FGF7 in the peripheral blood, but the TNF-α level may increase, accompanied by severe impairment of small airway function. abstract_id: PUBMED:33616330 Elevated Serum Myeloid-Related Protein (MRP) 8/14 in Patients with Acute Exacerbation of Chronic Obstructive Pulmonary Disease. Background: The current study aims to evaluate the expression and clinical significance of myeloid-related protein (MRP) 8/14 in patients with acute exacerbation of chronic obstructive pulmonary disease (AECOPD). Methods: The levels of MRP8/14, TNF-α, and IL-1β in the serum of patients with AECOPD were determined using an ELISA assay. The correlation between the expression of MRP8/14 and TNF-α, IL-1β, and forced expiratory volume in one second as a percentage of predicted (FEV1% pred) in AECOPD patients was analyzed using Pearson's correlation analysis. Receiver operating characteristic (ROC) analysis was performed to evaluate the diagnostic value of serum MRP8/14 in AECOPD patients. Results: The levels of MRP8/14, TNF-α, and IL-1β in the serum of patients with AECOPD were significantly higher than those in the control group. Furthermore, the expression of MRP8/14 was positively correlated with TNF-α and IL-1β, and negatively correlated with FEV1% pred. In addition, the level of serum MRP8/14 in GOLD 3-4 patients was higher than that in GOLD 1-2 patients. Meanwhile, the level of serum MRP8/14 in AECOPD patients with mMRC 3-4 was higher than that in patients with mMRC 0-2. ROC analysis showed that serum MRP8/14 could differentiate AECOPD patients from healthy controls. Conclusions: Altogether, the elevated serum MRP8/14 level plays a key role in chronic airway inflammation and may be a useful marker in the diagnosis of AECOPD. abstract_id: PUBMED:32661658 Association Between Serum S100A8/S100A9 Heterodimer and Pulmonary Function in Patients with Acute Exacerbation of Chronic Obstructive Pulmonary Disease. Background: Many studies have indicated that S100A8 and S100A9 may be involved in the development and progression of chronic obstructive pulmonary disease (COPD). However, there has been no clinical study analyzing the role of the serum S100A8/S100A9 heterodimer in COPD patients. The aim of this study was to analyze the correlation of the serum S100A8/S100A9 heterodimer with pulmonary function in COPD patients during acute exacerbation (AE-COPD) based on a cross-sectional study. Methods: A total of 131 AE-COPD patients and matched healthy subjects were recruited. Pulmonary function, arterial blood gas values, and serum inflammatory cytokines were measured. Results: Serum S100A8/S100A9 was increased in AE-COPD patients. AE-COPD patients were ranked into different grades based on FEV1%.
Serum S100A8/S100A9 was higher in Grade 4 than in Grade 1-2 and Grade 3 patients with AE-COPD. Univariate regression analysis found that serum S100A8/S100A9 was negatively correlated with FEV1% in AE-COPD patients. Furthermore, serum S100A8/S100A9 was positively associated with MCP-1 in AE-COPD patients. Further stratified analysis revealed that serum S100A8/S100A9 was negatively associated with FEV1/FVC in Grade 3 (OR 0.629, P < 0.05) and in Grade 4 (OR 0.347, P < 0.05). In addition, there was a positive relationship between serum S100A8/S100A9 and PaCO2 in Grade 3 (OR 1.532, P < 0.05) and Grade 4 (OR 1.925, P < 0.01). Conclusion: S100A8/S100A9 was negatively associated with pulmonary function in AE-COPD patients, indicating that the serum S100A8/S100A9 heterodimer may be involved in the progression of AE-COPD and may be a relevant serum biomarker in the diagnosis of AE-COPD. abstract_id: PUBMED:34277053 Association of serum vitamin D levels with disease severity, systemic inflammation, prior lung function loss and exacerbations in a cohort of patients with chronic obstructive pulmonary disease (COPD). Background: Vitamin D deficiency has been associated with chronic disorders including chronic obstructive pulmonary disease (COPD), but the relationships with inflammation, exacerbations and disease progression remain unclear. Methods: In this monocentric cross-sectional observational study we analyzed the disease status, systemic inflammation, prior exacerbation frequency and loss in lung function in relation to serum 25-hydroxyvitamin D (25-OHD) levels in a cohort of 94 patients with COPD. Serum 25-OHD, C-reactive protein, interleukin-6 and tumor necrosis factor-α were quantified. Exacerbation frequencies and sunlight exposure were assessed. These parameters were analyzed in correlation to the current forced expiratory volume in 1 s (FEV1), the individual average 3-year FEV1 decline and the Global Initiative for Chronic Obstructive Lung Disease (GOLD) stage. Results: We observed a fair correlation between serum 25-OHD and the current FEV1 (r=0.38, P<0.001). Furthermore, mean serum 25-OHD differed significantly between patients of GOLD stages I-IV (P=0.013). There was a weak negative correlation between 25-OHD and the average annual change of the FEV1 (r=-0.26, P<0.05). Furthermore, we observed a fair negative correlation between 25-OHD and C-reactive protein (r=-0.32, P<0.01) as well as a weak negative correlation with interleukin-6 (r=-0.23, P<0.05). While the exacerbation frequency significantly differed between GOLD stages (P=0.04), there was no direct association between exacerbations and 25-OHD levels. Conclusion: Our data confirm frequent vitamin D deficiency in COPD and point out correlations between 25-OHD levels, systemic inflammation, disease severity and progression. abstract_id: PUBMED:28260877 Repression of Toll-like receptor-4 by microRNA-149-3p is associated with smoking-related COPD. Background: Smoking is the leading cause of COPD. Exploring molecular markers and understanding the pathogenic mechanisms of smoking-related COPD are helpful for early clinical diagnosis and treatment of the disease. This study aims to identify specific circulating microRNAs (miRNAs) from the blood of COPD patients with a long history of smoking. Methods: Blood samples from four different groups were collected, and miRNA microarray analysis was performed. Differential expression of miRNAs was verified by quantitative polymerase chain reaction.
In vitro, THP-1 cells were cultured and stimulated with cigarette smoke extract (CSE) or transfected with miR-149-3p inhibitor/mimics. Protein levels of Toll-like receptor 4 (TLR-4) and nuclear factor κB (NF-κB) were detected using Western blot and immunofluorescence. Interleukin (IL)-1β and tumor necrosis factor (TNF)-α levels were determined by an enzyme-linked immunosorbent assay. Results: miRNA profiling revealed that the expression of 56 miRNAs was changed between the four groups. Expression of miR-149-3p in group C (non-smoker non-COPD) was higher than in group S (smoker non-COPD), S-COPD (smoker with stable COPD) and AE-COPD (smoker with acute exacerbation COPD). CSE stimulation down-regulated the expression of miR-149-3p and up-regulated the TLR-4 and NF-κB levels in THP-1 cells. Transfecting miR-149-3p inhibitors in THP-1 cells also increased the expression of its target genes. Furthermore, overexpression of miR-149-3p inhibited the TLR-4/NF-κB signaling pathways and reduced the secretion of IL-1β and TNF-α. Conclusion: This study found that smoking can induce differential expression of circulating miRNAs, such as down-regulation of miR-149-3p. Reducing miR-149-3p may increase the inflammatory response in COPD patients through the regulation of the TLR-4/NF-κB signaling pathway.

abstract_id: PUBMED:23791383 Prevalence of anaemia associated with chronic obstructive pulmonary disease. Study of associated variables. Background: Anaemia is one of the extrapulmonary manifestations of chronic obstructive pulmonary disease (COPD). Its real prevalence, physiopathology and clinical repercussion are unknown. The objectives of our study were: to determine the prevalence of anaemia in patients with stable COPD not attributable to other causes and to establish the relationship of anaemia with clinical, prognostic and inflammatory markers with an important role in COPD. Methods: The study included stable COPD patients with no other known causes of anaemia. The following tests were carried out: respiratory function tests; serum determination of erythropoietin and inflammatory markers: high sensitivity C-reactive protein (hs-CRP), fibrinogen, interleukin 6 (IL-6), interleukin 8 (IL-8) and tumour necrosis factor α (TNF-α). Body mass index (BMI), Charlson and BODE indices, the number of exacerbations in the previous year, dyspnoea and quality of life were also calculated. Results: One hundred and thirty patients were included. Anaemia prevalence was 6.2%. Mean haemoglobin value in anaemic patients was 11.9±0.95 g/dL. Patients with anaemia had a lower BMI (P=.03), higher Charlson index (P=.002), more elevated erythropoietin levels (P=.016), a tendency to present a lower FEV1% value (P=.08) and significantly lower IL-6 values when compared to non-anaemic patients (P=.003). Conclusions: In our series, the anaemia associated with COPD was less prevalent than that published in the literature to date, and was related to certain clinical and inflammatory markers.

Answer: The study described in abstract PUBMED:24818527 investigated the relationship between TNF-α-308 gene polymorphism and smoking-related COPD in a Turkish population. The study aimed to determine if there was an association between this gene polymorphism and various factors such as pulmonary function parameters (PFTs), body mass index (BMI), and prognosis, including COPD exacerbation and hospitalization. The study included 90 male subjects, 60 with COPD and 30 healthy smokers, all of whom were Caucasian.
The results showed no significant difference in the frequency of G/G and G/A gene polymorphisms in the COPD group compared with the control subjects. Furthermore, when comparing COPD patients with the G/A gene polymorphism to those with the G/G gene polymorphism, there were no statistically significant differences in PFTs and BMI before and after one year. Additionally, the data on exacerbation and hospitalization of COPD patients did not show significant differences between these groups. In conclusion, the study found no difference between smoking-related COPD and the control group according to TNF-α-308 gene polymorphism in a Caucasian population. Moreover, it was demonstrated that important determinants of the prognosis of COPD, such as FEV1, BMI, COPD exacerbation, and hospitalization, were not associated with TNF-α-308 gene polymorphism in this Turkish population.
Instruction: Are laboratory tests necessary in making the diagnosis of limb pains typical for growing pains in children?

Abstracts:

abstract_id: PUBMED:18045281 Are laboratory tests necessary in making the diagnosis of limb pains typical for growing pains in children? Background: The present study was conducted to determine if performing laboratory tests is necessary to exclude other possible diagnoses in a child with limb pains typical for growing pains according to history and physical examination. Methods: A matched case-control study was carried out on children with limb pains who attended the Motahary clinic, Shiraz, Iran, in a 2 year period. The inclusion criteria were limb pains which were recurrent and lasted for more than 3 months. The exclusion criteria were any abnormal systemic symptoms and signs, localizing signs, joint involvement, and limp or limitation of activity. The controls were clinically normal children matched for age and sex. Laboratory tests including complete blood count, erythrocyte sedimentation rate, serum calcium, phosphorus, alkaline phosphatase and rheumatoid factor were done for all patients and controls. Results: In total, 100 patients and 150 controls were included in the study. There were no significant differences between the patients and controls with regard to all of the measured laboratory tests. Conclusion: Growing pain is a clinical diagnosis and if precise inclusion and exclusion criteria in history and physical examination are considered, there will be no need for laboratory tests in order to make a diagnosis.

abstract_id: PUBMED:28177851 Growing Pains. Context: The term growing pains describes a common, benign syndrome of recurrent discomfort that occurs in young children. First described in the 1800s, the etiology of this condition remains unclear. The peak incidence does not correspond to a time of rapid growth. Children typically report bilateral pain in the lower extremities that occurs late in the day or at night. Evidence Acquisition: The PubMed database was searched using the keywords growing pains, benign nocturnal limb pains of childhood, recurrent limb pain of childhood, and limb pain in childhood. Articles were also found by reviewing references from the initial PubMed search. Only English-language articles published from 1900 through 2016 were included in the review. Study Design: Clinical review. Level Of Evidence: Level 3. Results: When a patient's history is classic for growing pains and physical examination is normal, laboratory and radiographic evaluation are not needed to make the diagnosis. Findings typical for growing pains include bilateral lower extremity pain usually experienced in the early evening or at night. The pain is not caused by activity and will not cause a limp. Conclusion: Additional workup is warranted for children with an atypical history, systemic symptoms, or for those individuals with physical examination abnormalities such as allodynia, focal tenderness, joint swelling, or decreased joint range of motion. Management of growing pains generally consists of symptomatic care with massage and over-the-counter analgesics, as well as reassurance to children and parents about the benign, self-limited nature of this condition. This review article summarizes data on the epidemiology, etiology, and management of growing pains and provides a framework for distinguishing this entity from other causes of extremity pain.
abstract_id: PUBMED:36264529 Diagnosing growing pains in children by using machine learning: a cross-sectional multicenter study. Growing pains (GP) are the most common cause of recurrent musculoskeletal pain in children. There are no diagnostic criteria for GP. We aimed at analyzing GP-related characteristics and assisting GP diagnosis by using machine learning (ML). Children with GP and diseased controls were enrolled between February and August 2019. ML models were developed by using tenfold cross-validation to classify GP patients. A total of 398 patients with GP (F/M: 1.3; median age 102 months) and 254 patients with other diseases causing limb pain were enrolled. The pain was bilateral (86.2%), localized in the lower extremities (89.7%), nocturnal (74%), and led to awakening at night (60.8%) in most GP patients. History of arthritis, trauma, morning stiffness, limping, limitation of activities, and school abstinence were more prevalent among controls than in GP patients (p = 0.016 for trauma; p < 0.001 for others). The experiments with different ML models revealed that the Random Forest algorithm had the best performance, with 0.98 accuracy, 0.99 sensitivity, and 0.97 specificity for GP diagnosis. This is the largest cohort study of children with GP and the first study that attempts to diagnose GP by using ML techniques. Our ML model may be used to facilitate diagnosing GP.
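The modelling approach described above (a Random Forest classifier evaluated with tenfold cross-validation and summarized as accuracy, sensitivity and specificity) can be sketched as follows. The feature set and data are hypothetical placeholders rather than the study's actual variables:

```python
# Sketch of the pipeline described in PUBMED:36264529: a Random Forest
# classifier judged by tenfold cross-validation. The seven binary "clinical
# indicators" are hypothetical stand-ins for variables such as bilateral
# pain, nocturnal pain, awakening at night, trauma history, morning
# stiffness, limping, and limitation of activities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 652                              # 398 growing-pains patients + 254 controls
y = np.array([1] * 398 + [0] * 254)  # 1 = growing pains, 0 = diseased control
X = rng.integers(0, 2, size=(n, 7)).astype(float)  # synthetic binary features

model = RandomForestClassifier(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, y, cv=10)       # tenfold cross-validation

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"accuracy    {(tp + tn) / n:.2f}")
print(f"sensitivity {tp / (tp + fn):.2f}")  # recall on the GP class
print(f"specificity {tn / (tn + fp):.2f}")  # recall on the control class
```

With real clinical indicators in place of the random features, the same loop reproduces the metric triplet the abstract reports.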
abstract_id: PUBMED:1250808 Growing pains: fact or fiction? Growing pains are recurrent limb pains peculiar to children. Brief episodes of leg pains occurring intermittently at night are typical, but the clinical spectrum is variable. A diagnosis of growing pains can be made with certainty only after other possible conditions have been ruled out by appropriate studies and after the child has been observed carefully for a period of months. The two diseases most often confused with this syndrome are rheumatic fever and juvenile rheumatoid arthritis. Aspirin and supportive measures are beneficial.

abstract_id: PUBMED:21575536 Vitamin D levels in children with growing pains. Objective: To estimate the serum levels of vitamin D in children with growing pains and determine the relationship between serum vitamin D levels, parathormone and routine biochemical markers. Study Design: Cross-sectional study. Place And Duration Of Study: Department of Paediatrics, Liaquat National Hospital, Karachi, from October 2008 to September 2009. Methodology: One hundred children, aged 5-12 years, presenting in the Paediatric Outpatient Department of Liaquat National Hospital, Karachi, with limb pains fulfilling the diagnostic criteria of growing pains were included. Children with any systemic illness, organic cause of pain, rheumatologic disorders and signs of rickets were excluded from the study. Children were investigated for serum total calcium, inorganic phosphorus, alkaline phosphatase, vitamin D3 (25-hydroxycholecalciferol) and parathormone levels. On the basis of serum vitamin D3 level, patients were divided into 3 groups: group 1 with a normal level of vitamin D3 (> 75 nmol/L), group 2 with vitamin D insufficiency (level between 50-75 nmol/L), and group 3 with vitamin D deficiency (level < 50 nmol/L). Significance of group proportions was determined using the chi-square test with significance at p < 0.05. Results: The mean age of the participants was 8.05 years, with the majority (59%) being female. Only 6% had normal vitamin D levels. Over 95% of the children with vitamin D insufficiency had normal alkaline phosphatase and parathormone levels. Conclusion: Hypovitaminosis D may have a role in the pathogenesis of growing pains. All children with unexplained limb pains without identifiable organic pathology should be tested for vitamin D status, and treated, if necessary. Routine biochemical markers alone are not sufficient to detect all cases of hypovitaminosis D.

abstract_id: PUBMED:31041161 Growing pains: What do we know about etiology? A systematic review. Background: Growing pains is the most common cause of musculoskeletal pain in early childhood and was first described in 1823 by French physician Marcel Duchamp. Although it has been researched extensively, the etiology is still unknown. Several theories have been proposed throughout the years. Aim: Analyze the available scientific literature to provide an update on the latest evidence on the etiology. Methods: According to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, the scientific literature on the etiology of growing pains was systematically reviewed using the following inclusion criteria: studies of any level of evidence reporting clinical or preclinical results and dealing with the etiology of growing pains. The medical electronic databases PubMed and Web of Science were searched by two independent authors on October 20, 2018. The search string used was "(growing pains OR benign nocturnal limb pains OR musculoskeletal pains) AND (etiology OR pathogenesis) AND (pediatrics)". Results: A total of 32 articles were included. The etiology of growing pains still remains poorly understood. Many theories have been proposed, but none of them are decisive. A lower pain threshold has been found among patients suffering from growing pains in comparison to healthy controls. Furthermore, evidence suggests an association between growing pains and reduced bone strength in young patients, although this finding still remains controversial. Changes in the vascular perfusion pattern have also been studied. However, the etiology of growing pains does not seem related to a vascular component. The anatomical/mechanical theory has not been supported, but the role of vitamin D deficiency has been investigated many times. Strong recent evidence indicates a genetic susceptibility in the pathogenesis of growing pains. Furthermore, psychological factors also seem to play a strong role in the onset. Conclusion: The scientific literature about the etiology of growing pains presents heterogeneity and lack of consensus; more studies are needed to understand the genesis of benign musculoskeletal pain syndrome of childhood.

abstract_id: PUBMED:18762787 Growing pains: a noninflammatory pain syndrome of early childhood. The term 'growing pains' has been used for almost 200 years to refer to the often severe, generally bilateral lower-extremity nocturnal pains experienced by up to one-third of all children at some time during early childhood. No clear mechanism has yet been identified that explains these pains, but there is an increasing body of evidence indicating that several factors, individually or in combination, might be responsible for this phenomenon. These include mechanical factors, such as joint hypermobility and flat feet, decreased pain thresholds, reduced bone strength, and emotional factors involving the patient's family and other social stressors. Correct diagnosis of growing pains requires a thorough patient history and physical examination.
The diagnosis can be safely established without unnecessary laboratory investigations or imaging; however, identification of one or more clinical cautionary signs, such as unilateral pain, morning stiffness, joint swelling and systemic symptoms (e.g. fever, weight loss and malaise), should trigger an extended evaluation to exclude other more serious conditions that might also present with limb pain. Once the diagnosis has been established, conservative management, using symptomatic pain medications, massage and other supportive measures, should be employed until the syndrome self-resolves with time.

abstract_id: PUBMED:36979977 Children's Symptoms with a Febrile Illness and a Positive or Negative Test of SARS-CoV-2 during the Omicron Wave. Febrile infections are common in childhood. Children can be infected with SARS-CoV-2, but their course is milder than in adults. So far, a comparison between febrile infections with a positive or negative Corona test with the Omicron variant is missing. The data used are from the FeverApp registry, which collects parent-reported data on febrile infections in children and informs about fever management. A comparison of symptomatic differences between episodes with a positive or negative Corona test was performed using Χ2-tests. During the Omicron wave, reported tests doubled and positive test results increased nearly 12-fold. In episodes with positive Corona saliva tests, more cough, fatigue, disturbed smell/taste, limb pain, sore throat, signs of serious sickness, and touch sensitivity were reported. Children with a negative Corona test show more tonsillitis, teething, any pain symptoms, earaches, and rashes. Thus, there are some significant differences between febrile infections with a positive or negative Corona test, but symptoms are present on both sides. The Omicron variant seems to be more infectious than the alpha or delta variants in children, but the symptoms remain mild and do not differ much from other febrile infections.

abstract_id: PUBMED:9874005 Complex regional pain syndrome I (CRPS I): prospective study and laboratory evaluation. Objective: To relate clinical features to autonomic laboratory indices used in the diagnosis of Complex Regional Pain Syndrome type I (CRPS I) (reflex sympathetic dystrophy) to generate improved diagnostic criteria. Description: CRPS I is a chronic pain syndrome, characterized by diffuse limb pain with allodynia and prominent vasomotor and sudomotor dysfunction. Methods: We conducted a prospective study on 102 patients referred for possible CRPS I. These patients completed a structured questionnaire and underwent neurologic examination, with special attention to the evaluation of clinical features of vasomotor, sudomotor, motor, and sensory, including pain, dysfunction. All patients were tested using a standard autonomic protocol that compared side-to-side skin temperature, resting sweat output, and quantitative sudomotor axon reflex test (QSART) measurements. Composite autonomic clinical (CRPS-Sx) and laboratory (CRPS-LAB) scores were defined. The clinical (subjective and objective) and the laboratory data were analyzed using Pearson's correlation analysis and Bonferroni's probability value to assess concordance and their value in correctly diagnosing CRPS I. Results: All cases occurred after limb injury. One-third of cases did not fulfill our criteria of CRPS I. Highly significant correlations (p<.001) were found among certain clusters of symptoms and signs that shared unifying pathophysiologies. CRPS-Sx correlated with CRPS-LAB (p = .035). The indices that correlated most reliably with clinical data and with each other were RSO, QSART, and skin temperature reductions. Conclusion: Clinical and autonomic laboratory probability scores correlate in an internally consistent manner. Both CRPS-Sx and CRPS-LAB are sensitive and reliable tools to formulate a correct diagnosis of CRPS I and can be combined to provide an improved set of diagnostic criteria for CRPS I.
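The CRPS I study above names Pearson's correlation analysis screened at a Bonferroni-adjusted probability value. A minimal sketch of that procedure, using synthetic scores in place of the study's composite indices:

```python
# Sketch of pairwise Pearson correlations with a Bonferroni-adjusted
# threshold, as named in PUBMED:9874005. All scores are synthetic; the
# study's actual indices (CRPS-Sx, CRPS-LAB, RSO, QSART, skin temperature)
# are not reproduced here.
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
scores = {
    "CRPS_Sx": rng.normal(size=102),         # hypothetical clinical score
    "CRPS_LAB": rng.normal(size=102),        # hypothetical laboratory score
    "skin_temp_diff": rng.normal(size=102),  # hypothetical side-to-side difference
}

pairs = list(combinations(scores, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction for the number of tests

for a, b in pairs:
    r, p = pearsonr(scores[a], scores[b])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: r = {r:+.2f}, p = {p:.3f} ({verdict} at alpha = {alpha:.4f})")
```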
abstract_id: PUBMED:28871487 Fabry disease in children: a federal screening programme in Russia. Our objective was to examine the prevalence of Fabry disease in Russian children with chronic pain in the distal limbs. This non-interventional, multi-centre study included children 2-18 years of age with chronic recurrent unilateral or bilateral pain, burning, or acroparesthesia in the hands or feet. The presence of Fabry disease was defined by abnormal alpha-galactosidase A activity in males or alpha-galactosidase gene (GLA) mutation in females. Among 214 patients (110 males), 84.1% had bilateral limb pain and 31.8% had unilateral limb pain recorded at some time point; 61 (28.5%) patients had a positive family history possibly associated with Fabry disease. Alpha-galactosidase A activity was within the normal range in all 109 of the male patients tested. One female patient had a GLA mutation (C937G > T) and alpha-galactosidase A activity within the normal range. Conclusion: We did not find definitive evidence of Fabry disease in these children with a history of chronic recurrent unilateral or bilateral limb pain or acroparesthesia. The presence of chronic limb pain does not appear to be highly predictive of a diagnosis of Fabry disease in Russian children and adolescents, suggesting that key early signs and symptoms of Fabry disease are not specific to the disease. What is Known: • Signs and symptoms of Fabry disease are seen in children < 10 years of age; pain in the distal limbs is a common early symptom. What is New: • Fabry disease was not diagnosed in this population of Russian children with a history of chronic limb pain. • The presence of acroparesthesia or chronic limb pain does not appear to be highly predictive of a diagnosis of Fabry disease in Russian children and adolescents, suggesting that these early symptoms of Fabry disease are not specific to the disease.

Answer: Laboratory tests are generally not necessary in making the diagnosis of limb pains typical for growing pains in children when the history and physical examination are classic for growing pains. A study conducted to determine whether performing laboratory tests is necessary to exclude other possible diagnoses in children with limb pains typical for growing pains found that there were no significant differences in laboratory tests between patients with growing pains and controls, suggesting that growing pain is a clinical diagnosis and laboratory tests are not needed if precise inclusion and exclusion criteria in history and physical examination are considered (PUBMED:18045281). Similarly, another study indicated that when a patient's history is classic for growing pains and the physical examination is normal, laboratory and radiographic evaluation are not needed to make the diagnosis (PUBMED:28177851). However, additional workup is warranted for children with an atypical history, systemic symptoms, or physical examination abnormalities such as allodynia, focal tenderness, joint swelling, or decreased joint range of motion.
Furthermore, a review of growing pains as a noninflammatory pain syndrome of early childhood also supports the notion that the diagnosis can be safely established without unnecessary laboratory investigations or imaging, provided that clinical cautionary signs are absent (PUBMED:18762787). In conclusion, laboratory tests are not typically necessary for diagnosing growing pains in children when the clinical presentation is typical and there are no concerning signs or symptoms that suggest another condition.
Instruction: Does congenital cleft palate intrinsically influence craniofacial morphology?

Abstracts:

abstract_id: PUBMED:24011467 The influence of the extent of isolated palatal cleft on craniofacial morphology. Purpose: The aim of this study was to determine what influence the extent of cleft may have on craniofacial morphology in children with a cleft palate by retrospectively comparing craniofacial morphologies in age-matched prepubertal children. Materials And Methods: Using a retrospective study design, we compared the craniofacial morphology, assessed cephalometrically, of 14 Japanese girls at age 9 years with a cleft soft palate (Group_CSP) with that of 13 age-matched Japanese girls with cleft soft and hard palate not extending as far as the incisive foramen (Group_CPNIF) and 12 age-matched Japanese girls with cleft soft and hard palate extending as far as the incisive foramen (Group_CPIF), living in the same region (Hokkaido, Japan). None of them had received dentofacial orthopaedic treatment. P < .05 was considered statistically significant. Results: In Group_CPNIF, the maxilla was significantly short (P < .05), with the anterior parts of the maxilla being retruded (P < .05), compared with Group_CSP; and, compared with Group_CPNIF, in Group_CPIF the anterior part of the maxilla was positioned posteriorly (P < .05) and the maxillary body was significantly smaller (P < .01). Conclusions: This study suggests that in prepubertal children, as the congenital defect in the tissue of the palate extended, the shortness in the anteroposterior maxillary length, with an accompanying posterior position of the anterior part of the maxilla, was more marked.

abstract_id: PUBMED:19231769 Does congenital cleft palate intrinsically influence craniofacial morphology?: Craniofacial features in unoperated submucous cleft palate children in prepuberty. Purpose: The aberrations in facial structure with concave midface are a well-recognized feature characterizing cleft palate patients. Several explanations have been given as to the aberration causes, including surgical interference or the primary tissue deficiency in the congenital cleft. The aim of this study was to determine intrinsic effects of congenital cleft palate on craniofacial morphology by retrospectively comparing craniofacial features between children with unoperated submucous cleft palate and noncleft children with normal occlusion in prepuberty. Patients And Methods: Twelve Japanese children (7 girls and 5 boys) with unoperated submucous cleft palate at age 9 were examined cephalometrically. None of them had undergone any orthodontic treatments. Their craniofacial morphologies were compared with those in 60 Japanese noncleft children (30 girls and 30 boys) with normal occlusion at age 9. Results: In cleft children, the anteroposterior length of the maxilla was significantly short, and the posterior part of the maxilla was more in the anterior position compared with noncleft children. Also, the anterior parts of the maxilla tended to be slightly retruded in cleft children. Conclusion: In the current study, we recognized characteristic differences in the craniofacial morphology between the unoperated submucous cleft palate children and the noncleft children in prepuberty, and showed that the craniofacial deviation in the cleft children can be defined as the intrinsic effects of congenital cleft palate itself.
abstract_id: PUBMED:36360890 Influence of Primary Palatal Surgery on Craniofacial Morphology in Patients with Cleft Palate Only (CPO) - Systematic Review with Meta-Analysis. Background: Cleft palate only (CPO) is the second most prevalent cleft type. Both the cleft and palatal scarring may affect craniofacial growth. The aim of this systematic review was to summarize scientific evidence on the effect of palatal surgery on craniofacial morphology in CPO. Methods: A search was conducted in PubMed, PMC, WoS, Scopus, and Embase, using the keywords: "cleft palate" AND ("craniofacial morphology" OR "cephalometric analysis") NOT "lip", with inclusion and exclusion criteria ensuring confident, direct comparison between study groups. The quality assessment was performed with Arrive's scale for radiologic examinations. Results: Of 713 potential articles, 19 were subjected to qualitative analysis and 17 to meta-analysis, which confirmed reduced SNA in unoperated CPO versus non-cleft individuals. No scientific evidence was found directly assessing the effect of surgery on craniofacial morphology. The negative effect of palatal surgery was seen indirectly: in treated CPO versus non-cleft, the effect size for SNA is bigger than in untreated CPO versus non-cleft. A high heterogeneity came from a few non-European publications. Conclusions: CPO is associated with sagittal maxillary deficiency resulting both from the cleft and from primary surgery, disregarding cleft severity in operated CPO patients. Ethnic differences influence craniofacial morphology in CPO. This research received no external funding. Study protocol number in PROSPERO database: CRD42021268957.

abstract_id: PUBMED:37964651 Epigenetic regulation of craniofacial development and disease. Background: The formation of the craniofacial complex relies on proper neural crest development. The gene regulatory networks (GRNs) and signaling pathways orchestrating this process have been extensively studied. These GRNs and signaling cascades are tightly regulated, as alterations to any stage of neural crest development can lead to common congenital birth defects, including multiple syndromes affecting facial morphology as well as nonsyndromic facial defects, such as cleft lip with or without cleft palate. Epigenetic factors add a hierarchy to the regulation of transcriptional networks and influence the spatiotemporal activation or repression of specific gene regulatory cascades; however, less is known about their exact mechanisms in controlling precise gene regulation. Aims: In this review, we discuss the role of epigenetic factors during neural crest development, specifically during craniofacial development, and how compromised activities of these regulators contribute to congenital defects that affect the craniofacial complex.

abstract_id: PUBMED:30717789 Scoping review of the morphology and anthropometry of Tessier craniofacial clefts numbers 3 and 4. Background: In 2016, WHO reported a death rate of 303,000 newborns before 4 weeks of age due to congenital anomalies. Those that survive congenital anomalies may have long-term disabilities which may have significant impacts on the individual, their families, the healthcare system, and societies. Tessier craniofacial clefts numbers 3 and 4 are congenital anomalies that result in a partial or total defect of craniofacial tissues, thereby seriously influencing the patient's appearance and impairing normal functioning.
The objective of this review was to examine the literature on the morphology and anthropometry of Tessier craniofacial clefts numbers 3 and 4 so that areas yet to be fully understood can be mapped out for future research. Methods And Analysis: A scoping review of the literature on patients who have Tessier craniofacial clefts numbers 3 and 4 was conducted. Relevant studies from 1976 to the present were identified. The following databases were searched for peer-reviewed literature, viz., PubMed, MEDLINE, EBSCOhost, Google Scholar, and the Cochrane library. The study selection was guided by the eligibility criteria. A data table was designed to extract information from the literature. The result of this study was reported using the Preferred Reporting Items for Systematic reviews and Meta-analyses (PRISMA). The quality of the included studies was assessed using the Mixed Method Appraisal Tool (MMAT). Result: Thirty-three studies met the inclusion criteria. The majority of the studies included were conducted in middle-income countries (54.5%) and some in high-income countries (45.5%); none was recorded from low-income countries. The total available sample size from the studies was 120, with a dominant male population of 67 (55.8%) and 53 females (44.2%). The majority (97%) of the studies reported on the knowledge of morphology, while 12.1% of the included studies reported on anthropometry. Of the 33 included studies, 32 scored the highest quality (76-100%) in the quality assessment. Discussion: The findings from this review show evidence of the knowledge of morphology and the knowledge of anthropometry of Tessier craniofacial clefts numbers 3 and 4. However, this knowledge has not translated into universally recognized ways of repairing and documenting these clefts due to the sparse number of studies on Tessier craniofacial clefts numbers 3 and 4.

abstract_id: PUBMED:37370953 Craniofacial Cephalometric Morphology in Caucasian Adult Patients with Cleft Palate Only (CPO). Orofacial clefts are common birth defects that affect the morphology of the skull. Cleft palate only (CPO) has a different etiology than other types of clefts, and craniofacial morphology in CPO differs from that of UCLP and BCLP. The long-term effect of the cleft and its surgery is visible after growth cessation. However, few studies exist describing cephalometric craniofacial morphology in adults with CPO. The aim of the present study was to describe the cephalometric craniofacial morphology of adult patients with CPO compared to healthy patients. The study included analysis of cephalometric lateral headfilms of 28 adults with CPO and 28 healthy subjects. It was found that the angles of SNA, ANB, 1-:NB angle (°) and Wits appraisal were significantly smaller in CPO, whereas NL-NSL (°), 1+:NA angle (°) and 1+:NA (mm) had significantly higher values in CPO compared to the control group. It has been concluded that CPO in adult patients is characterized by a sagittal jaw discrepancy due to maxillary deficiency, with a tendency for compensatory inclination of the upper and lower incisors.

abstract_id: PUBMED:26273503 The Craniofacial Morphology in Adult Patients with Unoperated Isolated Cleft Palate. Objective: To address the effect of intrinsic factors on craniofacial growth by analyzing the craniofacial morphology of unoperated isolated cleft palate in Chinese adults. Materials And Methods: This study included 37 nonsyndromic isolated cleft palate patients and 39 age- and gender-matched non-cleft controls.
Twenty-six cephalometric measurements were employed to evaluate the facial morphology. Independent samples T test and Mann-Whitney U were used for comparison. Significant difference was defined at the 95% level. Results: Data from this study showed patients with unoperated isolated cleft palate have a reduced maxillary sagittal length (ANS-PMP, A-PMP, P<0.05), a smaller ANB angle (ANB, P<0.05) and a retrusive ANS point (S-N-ANS, P<0.05; Ba-N-ANS, P<0.05). Measurements describing the position of the maxilla (S-Ptm, P>0.05), depth of the bony pharynx (Ba-PMP, P>0.05), anterior and posterior maxillary height (N-ANS, P>0.05; R-PMP, P>0.05) and mandible morphology (including linear measurements and angle measurements) did not show any significant difference between case and control groups. Conclusions: Patients with isolated cleft palate were characterized by maxillary retrusion. Mandible morphology and cranial basal morphology in isolated cleft palate showed no significant difference with nonclefts. Patients with isolated cleft palate are more vulnerable to cross bite than nonclefts. Intrinsic deficiencies had a detrimental effect on maxillary sagittal length, but no detrimental effect on maxillary position, mandible size and position.

abstract_id: PUBMED:31370693 Effects of Different Timing of Alveolar Bone Graft on Craniofacial Morphology in Unilateral Cleft Lip and Palate. Objective: To evaluate the effect of the timing of secondary alveolar bone graft (SABG) on craniofacial morphology in patients with complete unilateral cleft lip and palate (UCLP). Design: Single-center retrospective assessment of consecutively treated nonsyndromic patients with complete UCLP. Participants: One hundred sixty-seven patients (108 males, 59 females) with complete UCLP in whom the cleft was repaired with a 1-stage method at approximately 8 months of age. The age of 128 patients at SABG varied from 1.4 to 11.5 years (SABG group), while 39 patients still awaited SABG at the moment of cephalometric evaluation (no-SABG group). Methods: Craniofacial morphology was assessed on lateral cephalograms taken at 10 years of age (standard deviation = 0.8; range: 7.5-12.3) using linear and angular measurements. T tests and regression models were made to analyze data. Results: Regression models demonstrated that the effect of SABG on the craniofacial morphology was limited: cephalometric variables which were statistically significantly different between SABG and no-SABG groups showed no association with the timing of SABG when (1) age of primary repair of the cleft, (2) age of cephalometric evaluation, (3) cleft side, (4) gender, and (5) operator were controlled for. Only the length of the maxilla (Condylion-point A) was affected: a 1-year delay of SABG corresponded with an increase in Co-point A distance by 0.52 mm. However, the adjusted R2 of the model was 0.11. Conclusions: Our findings cautiously indicate that SABG performed before 8 years of age can have a limited negative effect on craniofacial morphology. Nevertheless, our results should be confirmed by cleft centers practicing alternative surgical repairs of the cleft.

abstract_id: PUBMED:20818243 Intrinsic effects of congenital cleft palate on craniofacial morphology and growth characteristics in puberty.
Objectives: Of the 12 children in prepuberty (then aged 9 years) with unoperated submucous cleft palate in whom we previously found a characteristic morphology of the maxilla, we examined changes in the maxillary morphology in their puberty (until age 14 years), aiming to determine the intrinsic effects of congenital cleft palate on their craniofacial morphology and growth characteristics, which also involve the formation of the cranioface, by retrospectively comparing their findings and those of noncleft children with normal occlusion. Patients And Methods: Twelve Japanese children (7 girls and 5 boys) with unoperated submucous cleft palate at age 14 years were examined cephalometrically. None of them had undergone dentofacial orthopedic treatment of the maxilla. Their craniofacial morphologic characteristics were compared with those of 60 Japanese noncleft children aged 14 years (30 girls and 30 boys) with normal occlusion. Results: The results were nearly identical to the previous findings when they were 9 years old: the maxillary length was short, with the anterior part of the maxilla retruded and the posterior part of the maxilla in an anterior position in the cleft children at age 14 years, compared with the noncleft children. In the craniofacial growth changes between the ages of 9 and 14 years, the increase in the posterior upper facial height was markedly small, and the inclination of the palatal plane was promoted in the cleft children, compared with the noncleft children. Conclusions: We confirmed that the intrinsic effects of congenital cleft were influential on the morphology of the cranioface and its growth changes, which also involve craniofacial formation.

abstract_id: PUBMED:19328918 Composite tissue allotransplantation for the reconstruction of congenital craniofacial defects. Facial disfigurement in children with congenital craniofacial defects can lead to decreased self-esteem and poor self-perception. Traditional methods of reconstruction can fail to achieve a normal appearance in patients with severe disfigurements. Composite tissue allotransplantation (CTA) in children could offer a unique reconstructive opportunity. A discussion of the usage of CTA for congenital craniofacial defects is thus warranted. Treatment of severe craniofacial clefts, Treacher-Collins syndrome, hemifacial microsomia, and some vascular anomalies can yield unsatisfactory results, even after multiple surgeries. CTA provides the advantage of intact vascularized bone that would not need to be reshaped to fit the defect, with the correct donor match. CTA also provides reconstruction with similar tissue type in regions of the central midface such as the nose, lips, and eyelids. With advances in transplant immunology to devise mechanisms to decrease immunosuppression and induce donor antigen-specific tolerance, CTA may be a future reality in the pediatric population.

Answer: Yes, congenital cleft palate intrinsically influences craniofacial morphology. Studies have shown that children with unoperated submucous cleft palate exhibit characteristic differences in craniofacial morphology compared to noncleft children. Specifically, in cleft children, the anteroposterior length of the maxilla is significantly shorter, and the posterior part of the maxilla is more anteriorly positioned. Additionally, the anterior parts of the maxilla tend to be slightly retruded in cleft children (PUBMED:19231769).
This suggests that the craniofacial deviation in cleft children can be defined as the intrinsic effects of the congenital cleft palate itself. Further evidence of the intrinsic influence of congenital cleft palate on craniofacial morphology is provided by a study that found that as the congenital defect in the tissue of the palate extended, there was a marked shortness in the anteroposterior maxillary length, with the anterior part of the maxilla in a more posterior position (PUBMED:24011467). Moreover, research on adult patients with unoperated isolated cleft palate has shown that these individuals have a reduced maxillary sagittal length, a smaller ANB angle, and a retrusive ANS point, indicating that intrinsic deficiencies have a detrimental effect on maxilla sagittal length (PUBMED:26273503). In summary, the intrinsic effects of congenital cleft palate are influential on the morphology of the cranioface and its growth changes, which also involve craniofacial formation (PUBMED:20818243).
Instruction: Smoking cessation interventions following acute coronary syndrome: a missed opportunity?

Abstracts:

abstract_id: PUBMED:28942430 Interventions for smoking cessation in patients admitted with Acute Coronary Syndrome: a review. Tobacco smoking contributes to about six million deaths per year and is predicted to increase in the future. Several pharmacological interventions are used for smoking cessation. Trials using nicotine replacement therapy (NRT) in acute coronary syndrome (ACS) showed inconclusive results. Furthermore, three trials using bupropion in patients admitted with ACS failed to show improvement in smoking cessation compared with placebo. Interestingly, only one trial using varenicline was successful in achieving smoking abstinence in the acute setting. With regard to behavioural interventions, a meta-analysis found that telephone counselling was successful in both the acute and stable settings, with greater effect in the acute setting. The best results for smoking cessation were found in trials that used a combination of pharmacological and behavioural interventions. The objective of this report is to review the results of studies on interventions used for smoking cessation after an ACS.

abstract_id: PUBMED:22952289 Smoking cessation interventions following acute coronary syndrome: a missed opportunity? Background: It is recommended that general practitioners (GPs) offer cessation advice and pharmacological interventions to smokers with acute coronary syndrome (ACS). The study objective was to describe the extent to which this is done, and to describe outcomes by smoking status. Design: Patients aged 30+ hospitalised for troponin-positive ACS from 2002 to 2009, discharged home alive, were identified in the Myocardial Ischaemia National Audit Project registry. Patient data were linked to the General Practice Research Database, Hospital Episode Statistics, and Office of National Statistics mortality data, enabling a unique perspective of longitudinal smoking data. Patients who smoked prior to the hospitalisation had GP interventions and quitting status established in the 3 months following discharge, and were followed up for major clinical outcomes. Methods: The outcomes evaluated included death, repeat ACS, stroke, heart failure, and major adverse cardiac events (MACE). Results: Of the 4834 patients included, 965 (20%) were smokers at the time of their ACS. After the ACS event, only 225 (24%) received any GP smoking intervention within 3 months, with 82 (9%) receiving advice only, and 143 (15%) receiving a pharmacological intervention. Patients who quit (320; 33%) were at a decreased risk of mortality (relative risk (RR) 0.49; 95% confidence interval (CI) 0.35-0.69) and MACE (RR 0.61; 0.46-0.80) compared with patients who did not. Conclusions: Whilst a high proportion of patients with ACS are smokers, there is a low level of GP cessation intervention following hospital discharge. This missed opportunity of patient care is important given the decreased risk of mortality and MACE found amongst those who quit.
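The relative risk figures in the abstract above (e.g., RR 0.49, 95% CI 0.35-0.69 for mortality among quitters) are conventionally derived from a 2x2 table on the log scale. A sketch with invented event counts, since the abstract reports only the ratios and intervals:

```python
# How a relative risk such as RR 0.49 (95% CI 0.35-0.69) in PUBMED:22952289
# is typically computed from a 2x2 table. The event counts below are invented
# for illustration; the abstract does not report them.
import math

def relative_risk(events_exposed, n_exposed, events_control, n_control):
    """Relative risk with a 95% CI via the usual log-normal approximation."""
    rr = (events_exposed / n_exposed) / (events_control / n_control)
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_control - 1 / n_control)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical: 30 deaths among 320 quitters vs 123 deaths among 645 continuing smokers.
rr, lo, hi = relative_risk(30, 320, 123, 645)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```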
abstract_id: PUBMED:29526275 Smoking Cessation in Patients With Acute Coronary Syndrome. Over 30% of the nearly 1 million North Americans hospitalized annually with an acute coronary syndrome (ACS) are smokers. Despite a substantially increased risk of morbidity and mortality, 2/3 of patients who quit smoking after ACS return to smoking within 1 year. To summarize the evidence of smoking cessation in patients hospitalized after ACS, we systematically reviewed all randomized controlled trials of pharmacologic and behavioral smoking cessation therapies in patients with ACS. In addition, we reviewed the clinical considerations surrounding the use of smoking cessation therapies, including their broad mechanisms of action and possible alternative treatments, including cardiac rehabilitation programs and electronic cigarettes. A total of 7 randomized controlled trials met our inclusion criteria (4 pharmacotherapies and 3 behavioral therapies). In pharmacologic trials, only varenicline increased point prevalence abstinence at 12 months. Behavioral interventions produced significantly improved abstinence rates at 6 and 12 months. However, these studies had substantial limitations affecting their generalizability. Overall, currently available smoking cessation therapies are limited in their efficacy in patients hospitalized after ACS. Because of the relative scarcity of data and the urgency of establishing clinical guidelines, there is a critical need to continue examining the efficacy and safety of smoking cessation interventions in patients hospitalized after ACS.

abstract_id: PUBMED:33150137 Smoking and Provision of Smoking Cessation Interventions among Inpatients with Acute Coronary Syndrome in China: Findings from the Improving Care for Cardiovascular Disease in China-Acute Coronary Syndrome Project. Highlights: Over half of male acute coronary syndrome patients were smokers in China. Smoking was associated with higher risk of critical cardiac symptoms at admission. Only 35.3% of smoking patients received smoking cessation interventions in China. Background: Smoking cessation is recognized as an effective and cost-effective strategy for improving the prognosis of patients with coronary heart disease. Despite this, few studies have evaluated the smoking prevalence and provision of smoking cessation interventions among patients with acute coronary syndrome (ACS) in China. Objectives: To evaluate the smoking prevalence, clinical conditions and in-hospital outcomes associated with smoking, and the provision of smoking cessation interventions among ACS patients in China. Methods: This registry study was conducted using data from the Improving Care for Cardiovascular Disease in China project, a collaborative nationwide registry of the American Heart Association and the Chinese Society of Cardiology. Our study sample comprised 92,509 ACS inpatients admitted between November 2014 and December 2018. A web-based data collection platform was used to report required data. Results: Smoking prevalence among male and female ACS patients was 52.4% and 8.0%, respectively. Patients younger than 45 years had the highest smoking rate (men: 68.0%; women: 14.9%). Compared with non-smokers, smokers had an earlier onset age of ACS and a greater proportion of severe clinical manifestations at admission, including ST-elevation myocardial infarction (67.8% versus 54.8%; p < 0.001) and substantially elevated myocardial injury markers (86.1% versus 83.0%; p < 0.001). After multivariable adjustment, smoking was associated with higher risk of critical cardiac symptoms at admission (OR = 1.14, 95% CI: 1.08-1.20; p < 0.001) and had no direct association with in-hospital outcomes (OR = 0.93, 95% CI: 0.84-1.02; p = 0.107) of ACS patients. Of 37,336 smokers with ACS, only 35.3% received smoking cessation interventions before discharge.
There was wide variation in provision of smoking cessation interventions across hospitals (0%-100%). Conclusions: Smoking is highly prevalent among ACS patients in China. However, smoking cessation interventions are not widely adopted in clinical practice in China as part of formal treatment strategies for ACS patients, indicating an important target for quality improvement. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier: NCT02306616.

abstract_id: PUBMED:19301384 Predictors of smoking cessation and relapse after hospitalization for acute coronary syndrome. Background: A hospital admission for a serious cardiac event offers a unique opportunity for smoking cessation. Understanding the factors that predict and enhance cessation among smoking cardiac inpatients is important for hospital physicians and clinical staff. Study Objective: To determine factors that predict smoking cessation, relapse, or continued smoking among posthospitalized cardiac patients who were smoking at the time of admission. Sample: Patients hospitalized with acute coronary syndrome (ACS) were recruited from 5 hospitals in Michigan to participate in a study assessing hospital quality improvement plus at-home health behavior change counseling. Measurements: Patient interview data were collected shortly after discharge and 3 and 8 months later to describe patient demographics, clinical characteristics, tobacco use, and other behaviors. Multinomial logit regression was used to predict smoking cessation, relapse, and continued smoking. Results: Of patients smoking at hospitalization who completed both follow-up interviews, 56.8% (n = 111) were not smoking at 8 months. A significant predictor of successful cessation was higher household income (odds ratio [OR] = 4.72; P = 0.003), while having other smokers in the household decreased the odds of cessation (OR = 0.20; P = 0.001). History of depression increased the odds of relapse (OR = 6.38; P = 0.002) and being a lighter smoker decreased the odds (OR = 0.16; P = 0.026). Conclusions: Although approximately one-half of the smokers in this study reported successful cessation, interventions are still needed to assist all smokers to successfully quit smoking after an ACS hospitalization. Our data suggest targeting follow-up programs to include other family members and using specialized methods for heavy smokers.
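The multinomial logit approach named in the abstract above, which models three outcomes (quit, relapsed, continued smoking) from patient covariates, can be sketched as follows; the covariates and data are synthetic stand-ins for the study's predictors:

```python
# Illustration of a multinomial logit model for three smoking outcomes, as
# used in PUBMED:19301384. Covariates and data are synthetic; with the
# default lbfgs solver, scikit-learn fits a multinomial model for
# multiclass targets.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 195  # roughly the sample completing both follow-up interviews

X = np.column_stack([
    rng.integers(0, 2, n),  # higher household income (yes/no)
    rng.integers(0, 2, n),  # other smokers in household (yes/no)
    rng.integers(0, 2, n),  # history of depression (yes/no)
    rng.integers(0, 2, n),  # lighter smoker (yes/no)
])
y = rng.choice(["quit", "relapsed", "continued"], size=n)

model = LogisticRegression(max_iter=1000).fit(X, y)
# exp(coef) reads as an odds-ratio-style effect of each covariate per class.
for cls, coefs in zip(model.classes_, np.exp(model.coef_)):
    print(cls, np.round(coefs, 2))
```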
abstract_id: PUBMED:23167150 Smoking cessation interventions after acute coronary syndromes. Results of a cross-sectional survey in the Lazio Region of Italy. Given the limited research on Italian hospital smoking care practices, a cross-sectional survey was undertaken in April-May 2011 to describe the current status of smoking cessation interventions for ACS patients in cardiovascular institutions of the Lazio Region of Italy. Lazio is a region of central Italy with a resident population of about 5,600,000. According to the data of the Regional Health Authority, about 10,000 patients are admitted for ACS every year in this region of Italy. Acute cardiac care in the region is currently provided by 33 Cardiology Divisions. All of these units were considered as eligible for the survey. The eligible respondent for each unit was the director. A self-report questionnaire was developed based on previous studies that examined the specific features of smoking cessation care provided to hospitalised patients. Questionnaires were forwarded by the Lazio Regional Section of the Italian National Association of Hospital Cardiologists (ANMCO). Completed questionnaires were received from 22 of the 33 eligible Divisions (66%). These 22 responding units currently provide acute care to about 70% of all ACS patients of the region. Responding units were more likely to represent public non-teaching hospitals (p = 0.002), while non-responders were mostly from private non-teaching institutions (p = 0.04). Response rates were not influenced by the presence of either an interventional catheterization laboratory (Cathlab) or cardiac surgery within the hospitals. The survey suggests that most cardiology units fail to provide recommended smoking care interventions to ACS patients. In particular, brief smoking cessation advice before discharge represents the only systematically implemented approach in clinical practice (22 units; 100%). Smoking cessation counselling is provided only in 9 units (40%). Specific pharmacotherapy is prescribed in selected cases only in about one third of units (7 units; 32%), with varenicline being the preferred drug. Structural variables and organizational complexity have no influence on smoking care, as hospitals with a Cathlab and cardiac surgery do not implement more effective strategies. Overall, this survey shows that the majority of smoking ACS inpatients may receive inadequate smoking care and that hospitals have considerable opportunity for improvement.

abstract_id: PUBMED:26228612 Italian cardiologists and tobacco smoking. A survey on the prevalence and knowledge of smoking and strategies for smoking cessation in a cohort of Italian cardiologists. Background: Tobacco use is the single most preventable cause of death. Its cessation is the most cost-effective strategy for reducing long-term cardiovascular morbidity and mortality. Although both healthcare professionals and the general population are aware of the detrimental effects of smoking on health, more than 25% of Italians are current smokers. Recent surveys showed that almost 50% of smoker patients relapse to smoking after having been discharged for acute coronary syndrome. Physicians who smoke may be a barrier to effective cessation interventions. Thus, it is important to assess cardiologists' attitude toward smoking habits and the implementation of smoking cessation programs. Methods: The survey "Italian cardiologists and smoking habits" was held in 2013 during the 44th National Congress of the Italian Association of Hospital Cardiologists (ANMCO) to assess cardiologists' smoking status, their level of knowledge about smoking cessation interventions and their involvement in the management of smoking cessation. Out of more than 1200 cardiologists attending the ANMCO congress, 610 subjects (aged 51 ± 11 years) answered an anonymous 35-item questionnaire; they were asked to declare their smoking status (9.5% current smokers). Results: Among doctors who attended the survey, 58% correctly indicated smoking as an addiction and 45% regularly advised their patients to stop smoking. The majority of cardiologists (93%) reported a positive attitude toward smoking cessation strategies, and 62% of them thought that cardiologists themselves should treat smoking dependence, though specific tools (70%) and education for the management of smoking cessation are lacking (66%). Two thirds of the entire sample of ANMCO cardiologists declared their willingness to participate in specialized educational programs. Conclusions: More than half of Italian cardiologists are aware that smoking is an addiction.
Although they still feel inadequately prepared for this approach, they are favorable to improving their own knowledge and skills in smoking cessation.

abstract_id: PUBMED:34541754 Smoking cessation after acute coronary syndrome: A systematic review and meta-analysis. Background: Smoking cessation is an effective secondary prevention measure after acute coronary syndrome (ACS). We conducted a systematic review with the aim to better understand which patients have a greater propensity to quit smoking and the risk factors for continued smoking after ACS. Methods: We searched MEDLINE and EMBASE for studies that evaluated smoking cessation after ACS. A pooled rate of smoking cessation across included studies was calculated. Random effects meta-analysis for different variables and their association with smoking cessation was conducted. Results: A total of 39 studies with 11 228 patients were included in this review. The pooled rate of smoking cessation following ACS across 38 studies was 45.0%. Factors associated with a greater likelihood of smoking cessation were attendance at cardiac rehabilitation (OR 1.90, 95% CI 1.44-2.51), married/not alone (OR 1.68, 95% CI 1.32-2.13), intention/attempt to quit smoking (OR 1.27, 95% CI 1.11-1.46), diabetes mellitus (OR 1.24, 95% CI 1.03-1.51) and hospitalised duration (OR 1.09, 95% CI 1.02-1.15). Variables associated with a lower likelihood of smoking cessation were depression (OR 0.57, 95% CI 0.43-0.75), chronic obstructive pulmonary disease/lung disease (OR 0.73, 95% CI 0.57-0.93), previous admission with acute myocardial infarction/cardiac admission (OR 0.61, 95% CI 0.47-0.80), cerebrovascular disease/transient ischaemic attack (OR 0.42, 95% CI 0.30-0.58) and unemployment (OR 0.37, 95% CI 0.17-0.80). Conclusions: The majority of smokers with an ACS continue to smoke after admission. Patients attending cardiac rehabilitation show increased odds of quitting, while people who are depressed and those with chronic lung disease were less likely to quit smoking and should be targeted for intensive smoking cessation interventions.
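Pooled odds ratios like those in the meta-analysis above are conventionally obtained by inverse-variance weighting of study-level log-ORs. A sketch of the fixed-effect form of that calculation (the review itself used random-effects models), with invented study estimates:

```python
# Sketch of inverse-variance pooling of odds ratios on the log scale, the
# core of meta-analytic estimates like those in PUBMED:34541754. The three
# study estimates, given as (OR, lower, upper 95% limits), are invented.
import math

studies = [(1.6, 1.1, 2.3), (2.2, 1.4, 3.5), (1.8, 1.0, 3.2)]

total_weight, weighted_sum = 0.0, 0.0
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1 / se ** 2
    total_weight += w
    weighted_sum += w * math.log(or_)

log_pooled = weighted_sum / total_weight
se_pooled = math.sqrt(1 / total_weight)
print(f"pooled OR = {math.exp(log_pooled):.2f} "
      f"(95% CI {math.exp(log_pooled - 1.96 * se_pooled):.2f}"
      f"-{math.exp(log_pooled + 1.96 * se_pooled):.2f})")
```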
Objective Cigarette smoking is a risk factor for arteriopathy, including acute coronary syndrome, stroke and peripheral vascular disease. Thus, cessation is strongly recommended in order to reduce these risks. We recently demonstrated that smoking cessation causes temporary hyper-aggregability of human platelets. We previously showed that heat shock protein 27 (HSP27) is released from human platelets stimulated by collagen, accompanied by its phosphorylation. Accumulating evidence indicates potent roles of extracellular HSP27 as a modulator of inflammation. In the present study, using the stored samples obtained in the previous study, we investigated the effect of cigarette smoking cessation on the release of phosphorylated-HSP27 from collagen-activated human platelets (n=15 patients). Methods We enrolled patients who visited smoking cessation outpatient services between January 2012 and November 2014. Platelet-rich plasma, chronologically obtained before and after the cessation, was stimulated by collagen using a PA-200 aggregometer in the previous study. The levels of phosphorylated-HSP27 released from platelets were determined by an enzyme-linked immunosorbent assay. The phosphorylation of HSP27 in platelets was evaluated by a Western blot analysis. Results Cessation of cigarette smoking significantly upregulated the levels of collagen-stimulated release of phosphorylated-HSP27 at four and eight weeks after quitting smoking compared to before cessation. However, there was no significant difference between the levels before cessation and those at 12 weeks after cessation. The levels of phosphorylated-HSP27 stimulated by collagen in the platelets at four weeks after smoking cessation were remarkably enhanced compared to before cessation. Conclusion Cigarette smoking cessation temporarily enhances the collagen-stimulated release of phosphorylated-HSP27 from human platelets in the short term. Answer: The literature suggests that smoking cessation interventions following acute coronary syndrome (ACS) represent a significant missed opportunity for improving patient outcomes. Despite the well-established risks associated with tobacco smoking and the benefits of quitting, particularly after an ACS event, the implementation of smoking cessation interventions appears to be suboptimal. For instance, a study found that only 24% of smokers admitted with ACS received any general practitioner (GP) smoking intervention within 3 months of discharge, with a mere 15% receiving pharmacological interventions. Those who quit smoking experienced a decreased risk of mortality and major adverse cardiac events (MACE), highlighting the importance of cessation interventions (PUBMED:22952289). A systematic review of randomized controlled trials on pharmacologic and behavioral smoking cessation therapies in patients with ACS revealed that only varenicline was effective in increasing abstinence at 12 months among pharmacotherapies, while behavioral interventions showed improved abstinence rates at 6 and 12 months. However, the general efficacy of available smoking cessation therapies in this patient population is limited, indicating a need for further research and guideline development (PUBMED:29526275). In China, over half of male ACS patients were smokers, yet only 35.3% received smoking cessation interventions before discharge, demonstrating a lack of widespread adoption of these interventions in clinical practice (PUBMED:33150137). 
A survey in the Lazio Region of Italy revealed that most cardiology units failed to provide recommended smoking care interventions to ACS patients, with brief smoking cessation advice before discharge being the only consistently implemented approach (PUBMED:23167150). Another survey among Italian cardiologists found that while a majority recognized smoking as an addiction and were favorable towards smoking cessation strategies, they felt inadequate in their approach and lacked specific tools and education for managing smoking cessation (PUBMED:26228612). A systematic review and meta-analysis identified factors associated with a greater likelihood of smoking cessation, such as attendance at cardiac rehabilitation and being married/not alone, as well as risk factors for continued smoking, including depression and chronic lung disease. This suggests that targeted interventions are needed for those less likely to quit (PUBMED:34541754). Finally, a study evaluating the effect of simple smoking cessation interventions on smokers with ACS found no significant difference in abstinence rates between the intervention and control groups, suggesting the need for more robust interventions (PUBMED:20356478).
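As an illustrative aside on the meta-analytic figures cited above (a pooled cessation rate of 45.0% and odds ratios with 95% CIs): below is a minimal sketch of one standard way to pool per-study proportions, DerSimonian-Laird random-effects weighting on the logit scale. The abstract does not specify the review's exact pooling method, and the study counts in the example are invented, so this is a generic illustration rather than a reproduction of PUBMED:34541754.

```python
import numpy as np

def pooled_proportion_dl(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                  # logit-transformed study proportions
    v = 1 / events + 1 / (totals - events)   # approximate within-study variances
    w = 1 / v
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()       # Cochran's Q heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance estimate
    w_star = 1 / (v + tau2)                  # random-effects weights
    y_re = (w_star * y).sum() / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    expit = lambda x: 1 / (1 + np.exp(-x))
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))

# Hypothetical per-study quitter counts and sample sizes (not the review's data):
rate, ci = pooled_proportion_dl([45, 120, 60], [110, 250, 130])
print(f"pooled cessation rate = {rate:.1%}, 95% CI {ci[0]:.1%} to {ci[1]:.1%}")
```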
Instruction: Is Internet use associated with anxiety in patients with and at risk for cardiomyopathy? Abstracts: abstract_id: PUBMED:26093868 Is Internet use associated with anxiety in patients with and at risk for cardiomyopathy? Objective: The aim of the study was to determine the relation between online health information seeking behavior and anxiety level among a sample of patients with manifested cardiomyopathy or at risk for cardiomyopathy. Methods: This was a cross-sectional study conducted among 104 patients with a cardiomyopathy diagnosis and patients at risk for cardiomyopathy. Patients completed 3 different questionnaires: the Use of Internet Health Information questionnaire on Internet use, the 12-item Short Form (SF-12) questionnaire on quality of life, and the State-Trait Anxiety Inventory measuring general anxiety levels. Results: Forty-eight patients had a diagnosis of primary or secondary cardiomyopathy, and 56 patients had conditions predisposing to cardiomyopathy. Eighty-five percent of the study population surf the Internet to obtain nonspecific information about health in general, and 65% use it to look specifically for heart disease. For both groups of patients with cardiomyopathy and at risk for cardiomyopathy, online health information seeking behavior is associated with substantially lower state anxiety levels (P = .041). Conclusion: Web use, as a source of health information, has been shown to be associated with anxiety reduction in patients with or at risk for cardiomyopathy, suggesting that Internet technology can be a useful instrument due to its informational power and its potentially therapeutic value. abstract_id: PUBMED:30255805 Pharmacological risk factors associated with hospital readmission rates in a psychiatric cohort identified using prescriptome data mining. Background: Worldwide, over 14% of individuals hospitalized for psychiatric reasons are readmitted to hospital within 30 days after discharge. Predicting patients at risk and leveraging accelerated interventions can reduce the rates of early readmission, a negative clinical outcome (i.e., a treatment failure) that affects the quality of life of patients. To implement individualized interventions, it is necessary to predict those individuals at highest risk for 30-day readmission. In this study, our aim was to conduct a data-driven investigation to find the pharmacological factors influencing 30-day all-cause, intra- and interdepartmental readmissions after an index psychiatric admission, using the compendium of prescription data (prescriptome) from electronic medical records (EMR). Methods: The data scientists in the project received a deidentified database from the Mount Sinai Data Warehouse, which was used to perform all analyses. Data were stored in a secured MySQL database, normalized and indexed using a unique hexadecimal identifier associated with the data for psychiatric illness visits. We used Bayesian logistic regression models to evaluate the association of prescription data with 30-day readmission risk. We constructed individual models and compiled results after adjusting for covariates, including drug exposure, age, and gender. We also performed a digital comorbidity survey using EMR data combined with the estimation of shared genetic architecture using genomic annotations to disease phenotypes.
Results: Using an automated, data-driven approach, we identified prescription medications, side effects (primary side effects), and drug-drug interaction-induced side effects (secondary side effects) associated with readmission risk in a cohort of 1275 patients using prescriptome analytics. In our study, we identified 28 drugs associated with risk for readmission among psychiatric patients. Based on prescription data, Pravastatin had the highest risk of readmission (OR = 13.10; 95% CI (2.82, 60.8)). We also identified enrichment of primary side effects (n = 4006) and secondary side effects (n = 36) induced by prescription drugs in the subset of readmitted patients (n = 89) compared to the non-readmitted subgroup (n = 1186). Digital comorbidity analyses and shared genetic analyses further reveal that cardiovascular disease and psychiatric conditions are comorbid and share functional gene modules (cardiomyopathy and anxiety disorder: shared genes (n = 37; P = 1.06815E-06)). Conclusions: Large-scale prescriptome data are now available from EMRs and accessible for analytics that could improve healthcare outcomes. Such analyses could also drive hypothesis- and data-driven research. In this study, we explored the utility of prescriptome data to identify factors driving readmission in a psychiatric cohort. Converging digital health data from EMRs and systems biology investigations reveal that a subset of patients who have significant comorbidities with cardiovascular diseases are more likely to be readmitted. Further, the genetic architecture of psychiatric illness also suggests overlap with cardiovascular diseases. In summary, assessment of medications, side effects, and drug-drug interactions in a clinical setting, as well as of genomic information, using a data mining approach could help to identify factors that lower readmission rates in patients with mental illness. abstract_id: PUBMED:29625970 Clinical Course and Quality of Life in High-Risk Patients With Hypertrophic Cardiomyopathy and Implantable Cardioverter-Defibrillators. Background: High-risk patients with hypertrophic cardiomyopathy (HCM) are identified by contemporary risk stratification and effectively treated with implantable cardioverter-defibrillators (ICDs). However, long-term HCM clinical course after ICD therapy for ventricular tachyarrhythmias is incompletely understood. Methods And Results: A cohort of 486 high-risk HCM patients with ICDs was assembled from 8 international centers. Clinical course and device interventions were addressed, and survey questionnaires assessed patient anxiety level and psychological well-being related to ICD therapy. Of 486 patients, 94 (19%) experienced appropriate ICD interventions terminating ventricular tachycardia/ventricular fibrillation, 3.7% per year for primary prevention, over 6.4±4.7 years. Of 94 patients, 87 were asymptomatic or only mildly symptomatic at the time of appropriate ICD interventions; 74 of these 87 (85%) remained in classes I/II without significant change in clinical status over the subsequent 5.9±4.9 years (up to 22). Among the 94 patients, there was one sudden death (caused by device failure; 1.1%); 3 patients died from other HCM-related processes unrelated to arrhythmic risk (e.g., end-stage heart failure). Post-ICD intervention, freedom from HCM mortality was 100%, 97%, and 92% at 1, 5, and 10 years, with HCM mortality distinctly lower than in ischemic or nonischemic cardiomyopathy ICD trials.
HCM patients with ICD interventions reported heightened anxiety in expectation of future shocks, but with intact general psychological well-being and quality of life. Conclusions: In HCM, unlike ischemic heart disease, prevention of sudden death with ICD therapy is not associated with a significant increase in cardiovascular morbidity or mortality, or with transformation to heart failure deterioration. ICD therapy does not substantially impair overall psychological and physical well-being. abstract_id: PUBMED:24480783 An Internet-Based Counseling Intervention With Email Reminders that Promotes Self-Care in Adults With Chronic Heart Failure: Randomized Controlled Trial Protocol. Background: Chronic heart failure (CHF) is a public health priority. Its age-standardized prevalence has increased over the past decade. A major challenge for the management of CHF is to promote long-term adherence to self-care behaviors without overtaxing available health care resources. Counseling by multidisciplinary health care teams helps to improve adherence to self-care behaviors and to reduce the rate of death and hospitalization. In the absence of intervention, adherence to self-care is below recommended standards. Objective: This trial aims to establish and evaluate a Canadian e-platform that will provide a core, standardized protocol of behavioral counseling and education to facilitate long-term adherence to self-care among patients with CHF. Methods: Canadian e-Platform to Promote Behavioral Self-Management in Chronic Heart Failure (CHF-CePPORT) is a multi-site, double-blind, randomized controlled trial with a 2 parallel-group (e-Counseling + Usual Care vs e-Info Control + Usual Care) by 3 assessments (baseline, 4-, and 12-month) design. We will identify subjects with New York Heart Association Class II or III systolic heart failure from collaborating CHF clinics and then recruit them (n=278) by phone. Subjects will be randomized in blocks within each site (Toronto, Montreal, and Vancouver). The primary outcome will be improved quality of life, defined as an increased number of subjects with an improvement of ≥5 points on the summary score of the Kansas City Cardiomyopathy Questionnaire. We will also assess the following secondary outcomes: (1) diet habits, depression, anxiety, smoking history, stress level, and readiness for change using self-report questionnaires, (2) physical activity level, current smoking status, and vagal-heart rate modulation by physiological tests, and (3) exercise capacity, prognostic indicators of cardiovascular functioning, and medication adherence through medical chart review. The primary outcome will be analyzed using generalized estimation equations with repeated measures on an intention-to-treat basis. Secondary outcomes will be analyzed using repeated-measures linear mixed models with a random effects intercept. All significant main effects or interactions in the statistical models will be followed up with post hoc contrasts using a Bonferroni correction with a 2-sided statistical significance criterion of P<.05. Results: This 3.5-year, proof-of-principle trial will establish the e-infrastructure for a pan-Canadian e-platform for CHF that comprises a standardized, evidence-based protocol of e-Counseling. Conclusions: CHF-CePPORT is designed to improve long-term adherence to self-care behaviors and quality of life among patients with CHF. It will demonstrate a distinct Canadian initiative to build capacity for preventive eHealth services for patients with CHF.
Trial Registration: ClinicalTrials.gov NCT01864369; http://clinicaltrials.gov/ct2/show/NCT01864369 (Archived by WebCite at http://www.webcitation.org/6Iiv6so7E). abstract_id: PUBMED:34535702 Associations of clinical, psychological, and socioeconomic characteristics with nicotine dependence in smokers. Cigarette smoking is a risk factor for mortality and morbidity from various cancerous, respiratory, and myocardial diseases. Nicotine dependence is assessed based on the degree of physical dependence. We aimed to determine the clinical, socioeconomic and psychological factors associated with the smoking status and degree of nicotine dependence of smokers. From April 2009 to September 2010, we retrospectively collected data from 17,577 subjects aged ≥ 18 years who had undergone a general health examination at a health promotion center. The instruments used included the Fagerström Tolerance Questionnaire (FTQ), Beck Depression Inventory (BDI), State-Trait Anxiety Inventory (STAI), Stress Response Inventory (SRI), and Alcohol Use Disorder Identification Test (AUDIT). Of the current smokers (N = 3946), 2345 (59%), 1154 (29%), and 447 (12%) had low, moderate, and high nicotine dependence, respectively. In multiple logistic analysis, predictors of high nicotine dependence were male sex (odds ratio [OR] 3.705, 95% confidence interval [CI] 1.997-6.945), older age (≥ 65 years) (OR 1.016, 95% CI 1.004-1.029), higher body mass index (BMI) (OR 1.048, 95% CI 1.018-1.078), diabetes (OR 1.870, 95% CI 1.251-2.794), single marital status (OR 1.575, 95% CI 1.186-2.092), lower education level (OR 1.887, 95% CI 1.463-2.433), and a higher stress level (OR 1.018, 95% CI 1.997-6.945). Thus, clinical, psychological, and socioeconomic factors, including male sex, older age, higher BMI, diabetes, single marital status, lower education, and higher stress, should be taken into consideration when promoting smoking cessation. abstract_id: PUBMED:26093858 Transitioning into a digital world: Time for providers to recommend Internet health information? N/A abstract_id: PUBMED:21370417 Increasing rates of breast cancer and cardiac surveillance among high-risk survivors of childhood Hodgkin lymphoma following a mailed, one-page survivorship care plan. Background: Hodgkin lymphoma (HL) survivors face substantially elevated risks of breast cancer and cardiovascular disease. They and their physicians are often unaware of these risks and surveillance recommendations. Procedure: A prospective one-arm study was conducted among a random sample of 72 HL survivors, ages 27-55 years, participating in the Childhood Cancer Survivor Study (CCSS), who were at increased risk for breast cancer and/or cardiomyopathy and had not had a screening mammogram or echocardiogram, respectively, within the prior 2 years. A one-page survivorship care plan with recommendations for surveillance was mailed to participants. In addition, survivors' primary physicians were contacted and provided patient-specific information, and a web-based Virtual Information Center was made available for both survivors and physicians. Outcomes were assessed by telephone 6 months after the intervention. Results: The survivor participation (62/72; 86%) and 6-month retention (56/61; 92%) rates were high. Tension and anxiety, measured by the Profile of Mood States, did not increase following risk notification; 91% of survivors described their reactions to receiving the information in positive terms.
At 6 months, 41% of survivors reported having completed the recommended mammogram; 20% reported having an echocardiogram (females 30%, males 10%). Only 29% of survivors visited the website. Nine physicians enrolled, and none used the study resources. Conclusion: A mailed, personalized survivorship care plan was effective in communicating risk and increasing compliance with recommended medical surveillance. Internet- and telephone-based strategies to communicate risk were not utilized by survivors or physicians. abstract_id: PUBMED:8329905 The systemic management of cardiovascular risk patients in dentistry. Recently, the number of dental patients with cardiovascular diseases, especially hypertensive and ischemic heart disease, has been increasing. Performing dental procedures on these patients could bring about acute exacerbations of preexisting cardiac diseases. Anxiety, tension, pain, and discomfort during dental treatment and the catecholamines contained in local anesthetic solutions are thought to be causative factors. Consequently, it is important to decide whether the procedure concerned can safely be performed by dental staff in the dental clinic. Systemic management in dental treatment includes the application of psychosedation, proper selection of the vasoconstrictor in the anesthetic solution, and control of blood pressure by vasodilating agents under continuous careful monitoring. By these means, the magnitude of hemodynamic fluctuation should be kept minimal. Management of hypertension, ischemic heart disease, and cardiomyopathies will be explained. abstract_id: PUBMED:31617896 Quality of life predicting long-term outcomes in cardiac resynchronization therapy patients. Aims: While improvement in quality of life (QoL) has been widely reported in cardiac resynchronization therapy (CRT) patients, its predictive value is not well understood. We aimed to assess the predictive role of baseline QoL on long-term heart failure (HF) or death events in mild HF patients enrolled in the Multicenter Automatic Defibrillator Implantation Trial with Cardiac Resynchronization Therapy (MADIT-CRT). Methods And Results: A total of 1791 of 1820 patients had their QoL evaluated at baseline, using the EuroQol-5 dimensions (EQ-5D) and the Kansas City Cardiomyopathy Questionnaire (KCCQ). Kaplan-Meier survival analyses and multivariate Cox models were utilized. Issues within any of the domains of the baseline EQ-5D questionnaire (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression) were associated with long-term mortality (median follow-up 5.6 years) (all P < 0.05). Heart failure or death events were predicted by issues in baseline mobility [hazard ratio (HR) = 1.41, P < 0.001], usual activities (HR = 1.41, P < 0.001), and anxiety/depression (HR = 1.21, P = 0.035). The risk of HF events alone was significantly higher in patients with baseline mobility issues (HR = 1.42, P < 0.001) or usual-activity issues (HR = 1.35, P = 0.003). Every 10% increase in the visual analogue scale (0-100) was associated with an 8% lower risk of all-cause mortality (P = 0.006), and a 6% lower risk of HF/death (P = 0.002). Mobility issues also predicted echocardiographic reverse remodelling (-33.08 mL vs. -31.17 mL, P = 0.043). Using the KCCQ, patients in the lower tertile of the clinical summary or physical limitations score had a significantly higher risk of long-term HF or death (P < 0.05).
Conclusion: In mild HF patients enrolled in MADIT-CRT, multiple baseline QoL questionnaire domains were predictors of echocardiographic remodelling, long-term all-cause mortality, and HF events. abstract_id: PUBMED:26563250 Distinct trajectories of disease-specific health status in heart failure patients undergoing cardiac resynchronization therapy. Purpose: It is well known that a significant proportion of heart failure patients (10-44 %) do not show improvement in symptoms or functioning from cardiac resynchronization therapy (CRT), yet no study has examined patient-reported health status trajectories after implantation. Methods: A cohort of 139 patients with a CRT-defibrillator (70 % men; age 65.7 ± 10.1 years) completed the Kansas City Cardiomyopathy Questionnaire (KCCQ) prior to implantation (baseline) and at 2, 6, and 12-14 months post-implantation. Latent class analyses were used to identify trajectories and associates of disease-specific health status over time. Results: All health status trajectories showed an initial small to large improvement from baseline to 2-month follow-up, whereafter most trajectories displayed a stable pattern between short- and long-term follow-up. Low educational level, NYHA class III/IV, smoking, no use of beta-blockers, use of psychotropic medication, anxiety, depression, and type D personality were found to be associated with poorer health status in unadjusted analyses. Interestingly, subgroups of patients (12-20 %) who experienced poor health status at baseline improved to stable good health status levels after implantation. Conclusions: Levels of disease-specific health status vary considerably across subgroups of CRT-D patients. Classification into poorer disease-specific health status trajectories was particularly associated with patients' psychological profile and NYHA classification. The timely identification of CRT-D patients who present with poor disease-specific health status (i.e., KCCQ score < 50) and a distressed psychological profile (i.e., anxiety, depression, and/or type D personality) is paramount, as they may benefit from cardiac rehabilitation in combination with psychological intervention. Answer: Yes, though in a reassuring direction: Internet use is associated with lower anxiety in patients with and at risk for cardiomyopathy. A study found that for both patients with cardiomyopathy and those at risk for cardiomyopathy, online health information seeking behavior was associated with substantially lower state anxiety levels (P = .041) (PUBMED:26093868). This suggests that using the Internet as a source of health information can be a useful instrument due to its informational power and its potentially therapeutic value in reducing anxiety for these patients.
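To make the comparison behind PUBMED:26093868 concrete: the key analysis amounts to relating a state-anxiety score (STAI) to an indicator of online health-information seeking, with adjustment for covariates. A minimal sketch with simulated data follows; the variable names, covariate set, and effect sizes are assumptions for illustration, not the study's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 104  # same size as the cited cohort; all values below are simulated
df = pd.DataFrame({
    "seeks_online_info": rng.integers(0, 2, n),  # 1 = uses the web for health info
    "age": rng.normal(55, 12, n),
    "cardiomyopathy": rng.integers(0, 2, n),     # 1 = diagnosed, 0 = at risk
})
# Simulate lower state anxiety among online information seekers
df["stai_state"] = (45 - 4 * df["seeks_online_info"]
                    + 0.05 * df["age"] + rng.normal(0, 8, n))

# Linear model: state anxiety vs. information seeking, adjusted for covariates
fit = smf.ols("stai_state ~ seeks_online_info + age + cardiomyopathy", df).fit()
print(fit.params["seeks_online_info"], fit.pvalues["seeks_online_info"])
```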
Instruction: Can the 12-item General Health Questionnaire be used to identify medical students who might 'struggle' on the medical course? Abstracts: abstract_id: PUBMED:23548161 Can the 12-item General Health Questionnaire be used to identify medical students who might 'struggle' on the medical course? A prospective study on two cohorts. Background: Students who fail to thrive on the Nottingham undergraduate medical course frequently suffer from anxiety, depression or other mental health problems. These difficulties may be the cause, or the result of, academic struggling. Early detection of vulnerable students might direct pastoral care and remedial support to where it is needed. We investigated the use of the short-form General Health Questionnaire (GHQ-12) as a possible screening tool. Methods: Two consecutive cohorts (2006 and 2007) were invited to complete the GHQ-12. The questionnaire was administered online, during the second semester (after semester 1 exams) for the 2006 cohort and during the first semester for the 2007 cohort. All data were held securely and confidentially. At the end of the course, GHQ scores were examined in relation to course progress. Results: 251 students entered the course in 2006 and 254 in 2007; 164 (65%) and 160 (63%), respectively, completed the GHQ-12. In both cohorts, the study and non-study groups were very similar in terms of pre-admission socio-demographic characteristics and overall course marks. In the 2006 study group, the GHQ Likert score obtained part-way through the first year was negatively correlated with exam marks during Years 1 and 2, but the average exam mark in semester 1 was the sole independent predictor of marks in semester 2 and Year 2. No correlations were found for the 2007 study group, but the GHQ score was a weak positive predictor of marks in semester 2, with semester 1 average exam mark again being the strongest predictor. A post-hoc moderated-mediation analysis suggested that significant negative associations of GHQ scores with semester 1 and 2 exams applied only to those who completed the GHQ after their semester 1 exams. Students who were identified as GHQ 'cases' in the 2006 group were statistically less likely to complete the course on time (OR = 4.74, p = 0.002). There was a non-significant trend in the same direction in the 2007 group. Conclusions: Results from two cohorts provide insufficient evidence to recommend the routine use of the GHQ-12 as a screening tool. The timing of administration could have a critical influence on the results, and the theoretical and practical implications of this finding are discussed. Low marks in semester 1 examinations seem to be the best single indicator of students at risk for subsequent poor performance. abstract_id: PUBMED:11060914 Mental health in medical students. A case control study using the 60 item General Health Questionnaire. This paper describes a cross-sectional case control study to measure the prevalence of psychological morbidity in first year medical students and compare it to the prevalence in a randomly selected control group of other first year students at Edinburgh University. The study was conducted anonymously using the 60 item General Health Questionnaire. Participation rates were over 90% in both subjects and controls. A total of 17% of medical students had symptoms of psychological morbidity which may benefit from treatment, and a further 29% of medical students had symptoms of psychological distress which would be expected to remit spontaneously.
A similar rate was found in the control group of students. This suggests that if medical students or doctors, later in their careers, fare badly in terms of mental health, then this may well be related to aspects of their lives and is not an intrinsic characteristic. abstract_id: PUBMED:37366154 Item response theory analysis of the University Personality Inventory in medical students. Aim: Young adulthood has been recognized as an important period for the transition from adolescence to adulthood. The University Personality Inventory (UPI), a mental health questionnaire for young adulthood, is widely used to screen university students in East Asia. However, dichotomous response systems do not allow respondents to choose responses other than the two options for each symptom. This study employed item response theory (IRT) to examine the properties and performance of UPI items for mental health problems. Methods: Japanese medical students (n = 1185) participated in this study, and the UPI was completed at the time of university admission. The two-parameter IRT model was used to assess the measurement characteristics of the UPI items. Results: Among all participants, 35.4% (420/1185) had a total UPI score of 21 or more, and 10.6% (126/1185) indicated that they had the idea of wanting to die (item 25). For further IRT analysis, unidimensionality was confirmed by exploratory factor analysis, in which the primary factor accounted for 39.6% of the variance. The scale has sufficient discrimination power. In the test characteristic curves, the rising slopes of the lines were between θ = 0 and θ = 2. Conclusion: The UPI is useful for assessing mild or moderate mental health problems, while precision may decline among individuals experiencing either very low or extremely high levels of stress. Our findings provide a basis for identifying people who have mental health concerns. abstract_id: PUBMED:7707460 Evaluation of the effects of a general medical course for non-medical university students. In order to evaluate the effects of a general medical course on students' related attitudes and knowledge, 186 students who took the general medical course at the National Chung-Hsing University or the Private Fung-Chia University were asked to fill in a structured questionnaire both before and after their course, between February 1993 and January 1994. The average age of the 177 students who completed the questionnaire on both occasions was 21.9 years; 109 were males and 68 were females. Two thirds of the students indicated that they would like to learn more about health and medical sciences as the main reason for taking the course. Before the class, the leading two perceptions of health given by students were no biopsychological problem (54.2%) and no biomedical problem (18.6%), while no biopsychological problem (55.4%) and no biopsychosocial problem (23.7%) were the leading two perceptions afterwards. After the course, about 21% of students counted health as the greatest concern in their lives, while only 16% did before the class. At first only 18.6% of students considered lifestyle as the most important contributing factor to human health, but the frequency increased to 28.2% at the end of the class. Regarding knowledge of health and medical care, mean scores in all five domains of knowledge were significantly higher after than before the course. The general medical curriculum was welcomed and highly valued by students based on their evaluations after the class.
We concluded that university students' attitudes toward and knowledge of health and medical care can be significantly increased by a general medical curriculum. abstract_id: PUBMED:24644486 Mental health of Dubai Medical College students. Objective: Considering the association between medical school dropout and psychiatric distress, we aimed to assess the prevalence of psychiatric distress among medical students at Dubai Medical College. Methods: One hundred and three medical students were chosen randomly and were assessed by the General Health Questionnaire (GHQ). Results: The mean age of the students was 18.85 years (minimum: 17, maximum: 22), and 90.3% were between 18 and 20 years old. The mean GHQ score was 16.46. Of the participants, 47 (45.6%) were found to be in the normal range (GHQ mean < 16). A total of 33 (32.1%) of the students reported evidence of psychiatric distress. Only 23 (22.3%) were found to have severe psychiatric distress. Conclusions: Early detection of psychiatric distress is important to prevent psychiatric morbidity and its unwanted effects on medical students and young doctors. Our results reveal that although only a low percentage of Dubai Medical College students reported a significant level of psychiatric distress, this should not be underestimated, and actions should be taken to encourage students to seek help from psychiatric services for their emotional problems. The risk factors as well as the protective factors must be identified in nation-wide studies to promote the mental health of medical students. abstract_id: PUBMED:33132805 Validating the Medical Students' Stressor Questionnaire (MSSQ) from a Sri Lankan medical faculty. Objectives: The Medical Students' Stressor Questionnaire (MSSQ) is a specific tool to assess the degree of stress in medical students. However, this tool has not been validated in Sri Lanka. Therefore, this study contextually adapted the MSSQ and investigated its validity in the local context. Methods: A total of 603 medical students, in various phases of their undergraduate training, from the Faculty of Medicine at the University of Colombo, participated in the study. Students who had completed their studies in the faculty for at least six months were included. The self-administered questionnaire was contextually adapted to the local setting and was approved by subject and language experts. Responses were analysed for construct validity (including exploratory factor analysis to estimate the factor structure of the scale), sample adequacy, and internal consistency. Data were analysed using the SPSS statistical package. Results: The mean age of the study cohort was 23.3 ± SD 2.0 years, while 258 (42.8%) were males. The MSSQ scale significantly correlated with the General Health Questionnaire (p<0.001), indicating strong concurrent validity. The exploratory factor analysis showed that items loaded appropriately on five new factors, and the Kaiser-Meyer-Olkin measure (0.954) and Bartlett's test of sphericity (p<0.001) showed excellent sample adequacy. The internal consistency of the MSSQ overall (α = 0.95) and of each of the new factors (α > 0.82 for the first four factors and α = 0.54 for the fifth) was satisfactory. The test-retest reliability was high (Pearson's r = 0.918, p<0.001). Conclusions: The contextually adapted MSSQ is a valid and reliable instrument that can be used in the assessment of stress among medical students.
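The internal-consistency figures quoted in the MSSQ validation above (overall α = 0.95 and the factor-level α values) are Cronbach's alpha coefficients. For readers unfamiliar with the statistic, a minimal sketch of the computation follows; the score matrix is made up for illustration and is not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# Hypothetical responses: 5 students x 4 Likert items scored 0-4
scores = [[1, 2, 2, 1], [3, 3, 4, 3], [2, 2, 3, 2], [4, 4, 4, 4], [0, 1, 1, 0]]
print(round(cronbach_alpha(scores), 3))
```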
abstract_id: PUBMED:12235956 Peer education by medical students in a public health course. Sex education by peers is becoming popular in Japan. To improve youth's reproductive health by continuous peer education and to encourage medical students to obtain the methodology necessary for health promotion, we started peer sex education by medical students in the annual public health course. One junior high school and one senior high school in Kitakyushu City in Japan asked medical students to conduct peer sex education classes at their schools. Medical students planned and carried out these classes based on the results of a questionnaire that they gave to the junior high school and senior high school students before the peer education. After the class, the students' responses to the class were investigated by another questionnaire. In this questionnaire, over 90% of the students answered that they were interested in talking about sexual issues with medical students. As for the medical students, they said that they could better understand the methodology of health promotion and also gained confidence from this experience. We will continue to carry out peer education in the medical school's public health course, and we intend to research further how junior high school and senior high school students' sexual behavior may change in the future. abstract_id: PUBMED:35412463 Readiness to Embrace Artificial Intelligence Among Medical Doctors and Students: Questionnaire-Based Study. Background: Similar to understanding how blood pressure is measured by a sphygmomanometer, physicians will soon have to understand how an artificial intelligence-based application has come to the conclusion that a patient has hypertension, diabetes, or cancer. Although there are an increasing number of use cases where artificial intelligence is or can be applied to improve medical outcomes, the extent to which medical doctors and students are ready to work with and leverage this paradigm is unclear. Objective: This research aims to capture medical students' and doctors' level of familiarity with artificial intelligence in medicine as well as their challenges, barriers, and potential risks linked to the democratization of this new paradigm. Methods: A web-based questionnaire comprising five dimensions (demographics, concepts and definitions, training and education, implementation, and risks) was systematically designed from a literature search. It was completed by 207 participants in total, of whom 105 (50.7%) were medical doctors and 102 (49.3%) were medical students, trained across all continents, with most of them in Europe, the Middle East, Asia, and North America. Results: The results revealed no significant difference in familiarity with artificial intelligence between medical doctors and students (P=.91), except that medical students perceived artificial intelligence in medicine to lead to higher risks for patients and the field of medicine in general (P<.001). We also identified a rather low level of familiarity with artificial intelligence (medical students=2.11/5; medical doctors=2.06/5) as well as low attendance at education or training. Only 2.9% (3/105) of medical doctors attended a course on artificial intelligence within the previous year, compared with 9.8% (10/102) of medical students.
The complexity of the field of medicine was considered one of the biggest challenges (medical doctors=3.5/5; medical students=3.8/5), whereas the reduction of physicians' skills was the most important risk (medical doctors=3.3; medical students=3.6; P=.03). Conclusions: The question is not whether artificial intelligence will be used in medicine, but when it will become standard practice for optimizing health care. The low level of familiarity with artificial intelligence identified in this study calls for the implementation of specific education and training in medical schools and hospitals to ensure that medical professionals can leverage this new paradigm and improve health outcomes. abstract_id: PUBMED:34093269 Reliability, Validity, and Measurement Invariance of the General Anxiety Disorder Scale Among Chinese Medical University Students. Background: Medical students are affected by high levels of generalized anxiety disorder. However, few studies have specifically focused on the applicability of universal anxiety screening tools in this sample. This study aimed to evaluate the psychometric properties of the 7-item Generalized Anxiety Disorder Scale (GAD-7) among Chinese medical university students. Methods: A questionnaire survey was conducted among 1,021 medical postgraduates from six polyclinic hospitals. Internal consistency and convergent validity of the GAD-7 were evaluated. Factor analyses were used to test the construct validity of the scale. An item response theory (IRT) framework was used to estimate the parameters of each item. Multi-group confirmatory analyses and differential item functioning analyses were used to evaluate the measurement equivalence of the GAD-7 across age, gender, educational status, and residence. Results: Cronbach's α coefficient was 0.93 and the intraclass correlation coefficients ranged from 0.71 to 0.87. The GAD-7 summed score was significantly correlated with measures of depression symptoms, perceived stress, sleep disorders, and life satisfaction. Parallel analysis and confirmatory factor analysis supported the one-factor structure of the GAD-7. Seven items showed appropriate discrimination and difficulty parameters. The GAD-7 showed good measurement equivalence across demographic characteristics. The total test information of the scale was 22.85, but the test information within the range of mild symptoms was relatively low. Conclusions: The GAD-7 has good reliability, validity, and measurement invariance among Chinese medical postgraduate students, but its measurement precision for mild anxiety symptoms is insufficient. abstract_id: PUBMED:25964510 A questionnaire to measure medical students' attitudes towards occupational medicine. Background: Occupational medicine is not well represented in medical curricula. Educational programmes should aim at raising the interest of medical students in occupational medicine. Aims: To develop a questionnaire to measure the attitude of medical students towards occupational medicine. Methods: We drew up statements about attitudes. We used a 5-point Likert scale to elicit answers from second-year medical students. We analysed the questionnaires with exploratory factor analysis to find common scales. After item reduction, we used the questionnaire in third-year medical students from another medical faculty, before and after education. Results: Responses from 141 second-year and 208 third-year medical students were available.
Analysis resulted in 18 statements in three scales: a career as an occupational physician, occupational medicine as an interesting speciality, and the role of the occupational physician. We measured modest changes following an occupational medicine programme. Further development of the questionnaire with qualitative research will lead to better usability. Conclusions: This questionnaire can be used to measure attitudes towards occupational medicine; however, it needs to be developed further. Answer: The 12-item General Health Questionnaire (GHQ-12) has been investigated as a potential screening tool to identify medical students who might struggle academically due to mental health issues such as anxiety and depression. However, the results from two cohorts of students on the Nottingham undergraduate medical course provide insufficient evidence to recommend the routine use of the GHQ-12 for this purpose (PUBMED:23548161). The study found that the timing of GHQ-12 administration could critically influence the results, and that low marks in semester 1 examinations were the best single indicator of students at risk for subsequent poor performance. While the GHQ-12 score was a weak positive predictor of marks in semester 2 for the 2007 cohort, it was not a strong enough indicator overall to justify its use as a screening tool for academic struggles among medical students (PUBMED:23548161).
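For context on the "GHQ Likert score" and GHQ "caseness" referred to in PUBMED:23548161: each of the 12 items is answered on a four-point scale, which can be summed either as Likert codes (0-1-2-3) or recoded to the binary GHQ method (0-0-1-1), with cases flagged above a threshold. A minimal scoring sketch follows; the cut-off of 4 is a commonly used convention, and the abstract does not state the exact threshold used in the Nottingham study.

```python
def ghq12_scores(responses, case_threshold=4):
    """Score 12 GHQ items coded 0-3 (0 = better than usual ... 3 = much worse).
    Returns the Likert score (sum of 0-3 codes, range 0-36) and the binary
    GHQ score (0-0-1-1 recoding, range 0-12), with caseness flagged at the
    chosen threshold (an assumption here; cut-offs of 2-4 appear in the
    literature)."""
    assert len(responses) == 12 and all(0 <= r <= 3 for r in responses)
    likert = sum(responses)
    binary = sum(1 for r in responses if r >= 2)
    return likert, binary, binary >= case_threshold

print(ghq12_scores([0, 1, 2, 3, 1, 1, 2, 0, 1, 2, 3, 1]))  # (17, 5, True)
```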
Instruction: Does FTO have a paradoxical effect in fetal life? Abstracts: abstract_id: PUBMED:25539997 Does FTO have a paradoxical effect in fetal life? Background: Low weight at birth is associated with obesity in later life. One hypothesis to explain such an association is that genetic variants that increase the risk of obesity also reduce fetal weight. Recently, obesity in adults was found to be associated with common variants of the fat mass and obesity-associated (FTO) gene. We examined the association between FTO polymorphisms and birth weight in a singleton, full-term birth cohort of 494 newborn-mother pairs without any complications. Results: The risk alleles for obesity ("A" allele for the rs9939609 FTO variant and "G" allele for the rs9930506 FTO variant) were associated with low weight at birth. The mean differences per risk allele were -79 g (95% CI: -129 to -30; p = 0.002) for rs9939609 and -84 g (95% CI: -131 to -36; P < 0.001) for rs9930506. The level of association remained statistically significant after adjustment for the maternal risk allele and for variables usually associated with birth weight (-50 g, 95% CI: -99 to 0; p = 0.05 for rs9939609 and -48 g, 95% CI: -100 to 0; p = 0.05 for rs9930506). In the follow-up, the allelic difference in weight was attenuated over time. Conclusions: The FTO variants that confer a predisposition to obesity later in life appear to be associated with low weight at birth. This finding favors the hypothesis of a common genetic denominator that predisposes to a low weight at birth and obesity in adults. abstract_id: PUBMED:26907388 Relation of FTO gene variants to fetal growth trajectories: Findings from the Southampton Women's survey. Introduction: Placental function is an important determinant of fetal growth, and fetal growth influences obesity risk in childhood and adult life. Here we investigated how FTO and MC4R gene variants linked with obesity relate to patterns of fetal growth and to placental FTO expression. Methods: Southampton Women's Survey children (n = 1990) with measurements of fetal growth from 11 to 34 weeks gestation were genotyped for common gene variants in FTO (rs9939609, rs1421085) and MC4R (rs17782313). Linear mixed-effect models were used to analyse relations of gene variants with fetal growth. Results: Fetuses with the rs9939609 A:A FTO genotype had faster biparietal diameter and head circumference growth velocities between 11 and 34 weeks gestation (by 0.012 (95% CI 0.005 to 0.019) and 0.008 (0.002-0.015) standard deviations per week, respectively) compared to fetuses with the T:T FTO genotype; abdominal circumference growth velocity did not differ between genotypes. FTO genotype was not associated with placental FTO expression, but higher placental FTO expression was independently associated with larger fetal size and higher placental ASCT2, EAAT2 and y + LAT2 amino acid transporter expression. Findings were similar for FTO rs1421085, and the MC4R gene variant was associated with the fetal growth velocity of head circumference. Discussion: FTO gene variants are known to associate with obesity but this is the first time that the risk alleles and placental FTO expression have been linked with fetal growth trajectories. The lack of an association between FTO genotype and placental FTO expression adds to emerging evidence of complex biology underlying the association between FTO genotype and obesity.
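The per-allele estimates above come from an additive genetic model, in which birth weight is regressed on the count (0, 1, 2) of risk alleles plus covariates. A minimal sketch with simulated data is shown below; the allele frequency, covariate set, and effect size (set near the reported -80 g per allele) are illustrative assumptions, not the published analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 494  # matches the cohort size; genotypes and weights below are simulated
alleles = rng.binomial(2, 0.4, n)        # copies of the risk allele (0, 1 or 2)
gest_wk = rng.normal(39.5, 1.0, n)       # gestational age at delivery (weeks)
bw = (3400 - 80 * alleles                # assumed ~-80 g per allele, as reported
      + 150 * (gest_wk - 39.5) + rng.normal(0, 400, n))

df = pd.DataFrame({"bw": bw, "alleles": alleles, "ga": gest_wk})
fit = smf.ols("bw ~ alleles + ga", df).fit()  # additive allele-dosage model
print(fit.params["alleles"], fit.conf_int().loc["alleles"].values)
```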
abstract_id: PUBMED:35675978 Gene-environment interactions related to maternal exposure to environmental and lifestyle-related chemicals during pregnancy and the resulting adverse fetal growth: a review. Background: There are only a limited number of reviews on the association of maternal-child genetic polymorphisms and environmental and lifestyle-related chemical exposure during pregnancy with adverse fetal growth. Thus, this article aims to review: (1) the effects of associations between the above-highlighted factors on adverse fetal growth and (2) recent birth cohort studies regarding environmental health risks. Methods: Based on a search of the PubMed database through August 2021, 68 epidemiological studies on gene-environment interactions, focusing on the association between environmental and lifestyle-related chemical exposure and adverse fetal growth, were identified. Moreover, we also reviewed recent worldwide birth cohort studies regarding environmental health risks. Results: Thirty studies examined gene-smoking associations with adverse fetal growth. Sixteen maternal genes significantly modified the association between maternal smoking and adverse fetal growth. Two genes significantly related to this association were detected in infants. Moreover, the maternal genes that significantly interacted with maternal smoking during pregnancy were cytochrome P450 1A1 (CYP1A1), X-ray repair cross-complementing protein 3 (XRCC3), interleukin 6 (IL6), interleukin 1 beta (IL1B), human leukocyte antigen (HLA) DQ alpha 1 (HLA-DQA1), HLA DQ beta 1 (HLA-DQB1), and nicotinic acetylcholine receptor. Fetal genes that had significant interactions with maternal smoking during pregnancy were glutathione S-transferase theta 1 (GSTT1) and fat mass and obesity-associated protein (FTO). Thirty-eight studies examined the association between chemical exposures and adverse fetal growth. In 62 of the 68 epidemiological studies (91.2%), a significant association was found with adverse fetal growth. Across the studies, there was a wide variation in the analytical methods used, especially with respect to the genetic polymorphisms of interest, the environmental and lifestyle-related chemicals examined, and the study design used to estimate the gene-environment interactions. It was also found that a consistently increasing number of European and worldwide large-scale birth cohort studies on environmental health risks have been conducted since approximately 1996. Conclusion: There is some evidence to suggest the importance of gene-environment interactions in adverse fetal growth. The current knowledge on gene-environment interactions will help guide future studies on the combined effects of maternal-child genetic polymorphisms and exposure to environmental and lifestyle-related chemicals during pregnancy. abstract_id: PUBMED:20351740 Placental FTO expression relates to fetal growth. Objective: The fat mass and obesity-associated gene (FTO) participates in the control of postnatal weight gain. We assessed whether FTO is expressed in the human placenta and whether such expression relates to prenatal weight gain and to the rs9939609 single nucleotide polymorphism (SNP) in FTO. Design And Subjects: In a birth cohort study, placentas from women (n = 147) with an uncomplicated, singleton, term pregnancy were weighed at delivery.
Real-time PCR was used to study, in placental tissue, the expression of FTO and of housekeeping genes (TATA box binding protein and succinate dehydrogenase complex, subunit A) and to genotype the rs9939609 SNP in FTO. Weights and lengths of the newborns were measured; circulating insulin and insulin-like growth factor-I (IGF-I) were quantified in cord blood. Results: FTO was highly expressed in the placenta and was associated with increased fetal weight and length (P<0.001 to P<0.0001). Maternal parity showed an interaction (P<0.001) in the association between placental FTO expression and placental weight. Placental FTO mRNA expression was associated with increased fetal-to-placental weight ratio (P<0.005) in infants from primiparous women, and was associated with increased fetal weight and length and placental weight (P<0.001 to P<0.0001) in infants from nonprimiparous women. These associations were not explained by either cord insulin or IGF-I. Placental FTO expression was unrelated to the placental FTO rs9939609 SNP. Conclusion: FTO is expressed in the human placenta. In a maternal parity-dependent manner, placental FTO may participate either in the control of fetal weight gain or in the partitioning between placental and fetal growth. abstract_id: PUBMED:25054679 Placental expression of the obesity-associated gene FTO is reduced by fetal growth restriction but not by macrosomia in rats and humans. Genetic variants in the FTO (fat mass- and obesity-associated) gene have the highest association of all obesity-associated genes. Its placental expression was shown to relate to birth weight, suggesting that it may participate in the control of fetal weight gain. To gain more insight into the implication of FTO in fetal growth, we measured its placental expression in samples including extremes of abnormal fetal growth, such as after intrauterine growth restriction (IUGR) or macrosomia, in both rats and humans. In rats, fetal growth was modulated by maternal nutritional modifications. In humans, placental villi were collected from pathological pregnancies (i.e. with IUGR or fetal macrosomia). Placental FTO mRNA expression was reduced by IUGR but was not significantly affected by macrosomia in either rats or humans. Our data suggest that placental FTO may participate in interactions between the in utero environment and the control of fetal growth under IUGR conditions by modulating epigenetic processes. abstract_id: PUBMED:19068149 Conference on "Multidisciplinary approaches to nutritional problems". Symposium on "Diabetes and health". Nutrition and its contribution to obesity and diabetes: a life-course approach to disease prevention? Whilst previously type 2 diabetes occurred in older adults, its incidence, together with obesity, has increased rapidly in children. An improved understanding of this disease pathway from a developmental viewpoint is critical. It is likely that subtle changes in dietary patterns over an extended period of time contribute to diabetes, although this type of rationale is largely ignored in animal studies aimed at determining the mechanisms involved. Small-animal studies in which large, and often extreme, changes in the diet are imposed at different stages of the life cycle can have substantial effects on fat mass and/or pancreatic functions. These responses are not representative of the much more gradual changes seen in the human population.
An increasing number of studies indicate that it is growth rate per se, rather than the type of dietary intervention, that determines pancreatic function during development. Epigenetic mechanisms that regulate insulin secretion by the pancreas can be re-set by more extreme changes in dietary supply in early life. The extent to which these changes may contribute to more subtle modulations in glucose homeostasis that can accompany excess fat growth in childhood remains to be established. For human subjects there is much less information as to whether specific dietary components determine disease onset. Indeed, it is highly likely that genotype has a major influence, although recent data relating early diet to physical activity and the FTO gene indicate the difficulty of establishing the relative contribution of diet and changes in body mass to diabetes. abstract_id: PUBMED:29040503 The role of early life growth development, the FTO gene and exclusive breastfeeding on child BMI trajectories. Background: Recent studies have implicated the FTO gene in child and adult obesity. A longer duration of exclusive breastfeeding (EXBF) has been shown to reduce body mass index (BMI) and the risk of being overweight in the general population and among FTO gene carriers. However, it remains unclear whether the preventive effect of EXBF could be explained by its impact on early life growth development, e.g. ages at adiposity peak (AP) and adiposity rebound (AR) and BMI velocities in the first years of life, which are major determinants of overweight and obesity later in life. Methods: We studied 5590 children from the British Avon Longitudinal Study of Parents and Children (ALSPAC) cohort and modelled their longitudinal BMI profiles with mixed effects models from birth to 16 years of age, as well as their ages at AP, AR and BMI velocities in relation to the FTO gene variant and EXBF. Results: A longer duration of EXBF (i.e. at least 5 months) has a substantial impact on BMI growth trajectories among children carrying the FTO adverse variant by modulating the age at AP, age at AR and BMI velocities. EXBF acts antagonistically to the FTO rs9939609 risk allele, and by the age of 15 the predicted reduction in BMI after 5 months of EXBF is 0.56 kg/m² [95% confidence interval (CI) 0.11-1.01; P = 0.003] and 1.14 kg/m² (95% CI 0.67-1.62; P < 0.0001) in boys and girls, respectively. Conclusions: EXBF influences early life growth development and thus plays a critical role in preventing the risks of overweight and obesity even when those are exacerbated by genetic factors. abstract_id: PUBMED:20702506 Life-course analysis of a fat mass and obesity-associated (FTO) gene variant and body mass index in the Northern Finland Birth Cohort 1966 using structural equation modeling. The association between variation in the fat mass and obesity-associated (FTO) gene and adulthood body mass index (BMI; weight (kg)/height (m²)) is well-replicated. More thorough analyses utilizing phenotypic data over the life course may deepen our understanding of the development of BMI and thus help in the prevention of obesity. The authors used a structural equation modeling approach to explore the network of variables associated with BMI from the prenatal period to age 31 years (1965-1997) in 4,435 subjects from the Northern Finland Birth Cohort 1966.
The use of structural equation modeling permitted the easy inclusion of variables with missing values in the analyses without separate imputation steps, as well as differentiation between direct and indirect effects. There was an association between the FTO single nucleotide polymorphism rs9939609 and BMI at age 31 years that persisted after controlling for several relevant factors during the life course. The total effect of the FTO variant on adult BMI was mostly composed of the direct effect, but a notable part was also arising indirectly via its effects on earlier BMI development. In addition to well-established genetic determinants, many life-course factors such as physical activity, in spite of not showing mediation or interaction, had a strong independent effect on BMI. abstract_id: PUBMED:20945953 Body mass index change in females after short-time life style intervention is not dependent on the FTO polymorphisms. Variants within the FTO gene are important determinants of body mass index (BMI), but their role in determination of BMI changes after combined dietary/physical activity intervention is unclear. We have analyzed 107 unrelated overweight non-diabetic Czech females (BMI over 27.5 kg/m², age 49.2±12.3 years). FTO variants rs17817449 (first intron) and rs17818902 (third intron) were genotyped. The life style modification program (10 weeks) consisted of an age-matched reduction of energy intake and exercise program (aerobic exercise 4 times a week, 60 min each). The mean BMI before intervention was 32.8±4.2 kg/m² and the mean achieved weight loss was 4.8±3.5 kg (5.3±3.5%, max. -15.5 kg, min. +2.0 kg, p<0.01). No significant association between BMI decrease and FTO variants was found. Also waist-to-hip ratio, body composition (body fat, water, active tissue), lipid parameters (total, LDL and HDL cholesterol, triglycerides), glucose and hsCRP changes were independent of FTO variants. FTO variants rs17817449 and rs17818902 are not associated with BMI changes after combined short time dietary/physical activity intervention in overweight females. abstract_id: PUBMED:32111961 Effect direction meta-analysis of GWAS identifies extreme, prevalent and shared pleiotropy in a large mammal. In genome-wide association studies (GWAS), variants showing consistent effect directions across populations are considered as true discoveries. We model this information in an Effect Direction MEta-analysis (EDME) to quantify pleiotropy using GWAS of 34 Cholesky-decorrelated traits in 44,000+ cattle with sequence variants. The effect-direction agreement between independent bull and cow datasets was used to quantify the false discovery rate by effect direction (FDRed) and the number of affected traits for prioritised variants. Variants with multi-trait p < 1e-6 affected 1-22 traits with an average of 10 traits. EDME assigns pleiotropic variants to each trait which informs the biology behind complex traits. New pleiotropic loci are identified, including signals from the cattle FTO locus mirroring its bystander effects on human obesity. When validated in the 1000-Bull Genome database, the prioritized pleiotropic variants consistently predicted expected phenotypic differences between dairy and beef cattle. EDME provides robust approaches to control GWAS FDR and quantify pleiotropy. Answer: Yes, FTO does appear to have a paradoxical effect in fetal life. Studies have found that the same FTO gene variants associated with obesity in adults are linked to lower birth weight in newborns.
Specifically, the risk alleles for obesity, such as the "A" allele for the rs9939609 FTO variant and the "G" allele for the rs9930506 FTO variant, were associated with low weight at birth, with mean differences per risk allele of -79 g and -84 g, respectively (PUBMED:25539997). This suggests a common genetic factor that predisposes individuals to both low birth weight and obesity later in life. Moreover, the FTO gene variants have been shown to influence fetal growth trajectories. For instance, fetuses with the rs9939609 A:A FTO genotype had faster growth velocities for biparietal diameter and head circumference between 11 and 34 weeks gestation compared to those with the T:T genotype (PUBMED:26907388). This indicates that FTO gene variants are not only associated with obesity but also play a role in fetal growth patterns. Additionally, placental FTO expression has been linked to increased fetal weight and length, suggesting that FTO may participate in the control of fetal weight gain or the partitioning between placental and fetal growth (PUBMED:20351740). However, placental FTO expression was found to be reduced by intrauterine growth restriction (IUGR) but not significantly affected by macrosomia, indicating a potential role of placental FTO in the control of fetal growth under IUGR conditions (PUBMED:25054679). In summary, the FTO gene exhibits a paradoxical effect in fetal life, where the same genetic variants associated with adult obesity are related to lower birth weight and influence fetal growth trajectories, suggesting a complex role of FTO in growth and development from prenatal to adult life.
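A note on the effect sizes quoted above: estimates of this kind are per-allele (additive) effects, so the expected birth-weight difference scales with the number of risk alleles carried. A minimal sketch in LaTeX, assuming the additive model that such per-allele estimates conventionally imply; the two-allele comparison is an illustration derived from that assumption, not a value reported in the abstracts:

```latex
% Expected birth-weight difference for g = 0, 1, 2 risk alleles, assuming
% the additive per-allele effect beta = -79 g reported for rs9939609:
\[
  \mathbb{E}[\Delta W \mid g] = \beta \, g, \qquad g \in \{0, 1, 2\}
\]
% e.g. an A/A carrier vs. a T/T carrier: 2 x (-79 g) = -158 g.
```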
Instruction: Cerebral and peripheral near-infrared spectroscopy: an alternative transfusion trigger? Abstracts: abstract_id: PUBMED:12366769 Cerebral and peripheral near-infrared spectroscopy: an alternative transfusion trigger? Background And Objectives: To develop a transfusion trigger based on tissue oxygenation, near-infrared spectroscopy (NIRS) was evaluated in a model of compensated haemorrhage. Patients And Methods: Regional haemoglobin oxygen saturation from the cerebral cortex (CsO2) and the gastrocnemius muscle (PsO2) was monitored (using an INVOS 4100 near-infrared oximeter) in 30 patients during acute normovolaemic haemodilution to a target haemoglobin of 11 g/dl. Arterial oxygen saturation, end-tidal carbon dioxide tension, mean arterial pressure and haemoglobin concentration were also measured. Results: During blood collection, CsO2 and PsO2 fell by a mean (95% CI) of 8 (5.3-10.7)% (P < 0.001) and 5.5 (3.2-7.8)% (P < 0.001), respectively. Arterial pressure and oxygen saturation did not change, whilst the end-tidal carbon dioxide tension fell by 2.3 (0.8-3.8) mmHg (P = 0.004). Haemoglobin concentration correlated with CsO2 (R = 0.76, P < 0.001) and PsO2 (R = 0.63, P < 0.001), as did the volume of blood removed. Conclusions: CsO2 and PsO2 fell predictably during compensated blood loss. With further research, NIRS may be developed into a transfusion trigger. abstract_id: PUBMED:34091814 Near-infrared spectroscopy device selection affects intervention management for cerebral desaturation during cardiopulmonary bypass surgery. Objective: Currently, several near-infrared spectroscopy oximetry devices are used for detecting cerebral ischemia during cardiopulmonary bypass (CPB) surgery. We investigated whether two different models of near-infrared spectroscopy oximetry devices affect the assessment of cerebral ischemia and its management during CPB. Methods: From January 2017 to August 2017, 70 adult cardiovascular surgery cases were randomly assigned to 1 of 2 different near-infrared spectroscopy oximetry devices. The devices were INVOS 5100C (Medtronic, Minneapolis, MN, USA) (group I; n = 35) and FORE-SIGHT ELITE (CAS Medical Systems, Branford, CT, USA) (group F; n = 35). Results: There were no significant differences in patient characteristics. The rSO2 values were significantly higher for patients in group F than for patients in group I. Scalp-Cortex distance showed negative correlations with the mean rSO2 values in group I (P = 0.01). Interventions for low rSO2 during CPB for groups I and F were increased perfusion flow (13:5; P = 0.03), blood transfusion (7:1; P = 0.02), and both (6:1; P = 0.04), respectively. The Scalp-Cortex distance in group I was significantly longer in patients who required intervention than in patients who did not (17.1 ± 2.5 vs 15.1 ± 1.6 mm; P = 0.007). Conclusions: It is inappropriate to use the same intervention criteria for different near-infrared spectroscopy oximetry devices. Moreover, brain atrophy influences rSO2 values depending on device selection. It is important to note that inappropriate device selection may misguide perfusionists into performing unnecessary or excessive intervention during CPB. abstract_id: PUBMED:34485197 Fetal Hemoglobin and Tissue Oxygenation Measured With Near-Infrared Spectroscopy-A Systematic Qualitative Review. Fetal hemoglobin (HbF) is a principal oxygen carrier in the blood of preterm and term neonates.
Compared to adult hemoglobin, it has a significantly higher affinity for oxygen and its oxyhemoglobin dissociation curve (ODC) is left-shifted accordingly. Tissue oxygenation measured with near-infrared spectroscopy (NIRS) during neonatal intensive care is directly affected by hemoglobin concentration. We performed a systematic qualitative review regarding the impact of HbF on tissue oxygenation monitoring by NIRS. The PubMed/Medline, EMBASE, Cochrane library and CINAHL databases were searched from inception to May 2021 for studies relating to HbF and NIRS in preterm and term neonates in the first days and weeks after birth. Out of 1,429 eligible records, four observational studies were included. Three studies found no effect of HbF on cerebral tissue oxygenation. One peripheral NIRS study found a positive correlation between HbF and peripheral fractional oxygen extraction (FOE). Currently available limited data suggest that FHbF could affect peripheral muscle FOE, but seems not to affect cerebral oxygenation in preterm neonates. More studies are needed to draw a final conclusion on this matter, especially concerning the oxygenation changes driven by adult RBC transfusions. abstract_id: PUBMED:37767283 Effects of Different Hemoglobin Levels on Near-Infrared Spectroscopy-Derived Cerebral Oxygen Saturation in Elderly Patients Undergoing Noncardiac Surgery. Background: Near-infrared spectroscopy (NIRS) is a commonly used technique to evaluate tissue oxygenation and prevent harmful cerebral desaturation in the perioperative setting. The aims of the present study were to assess whether surgery-related anemia can be detected via NIRS of cerebral oxygen saturation and to investigate the effects of different perioperative transfusion strategies on cerebral oxygenation, potentially affecting transfusion decision-making. Study Design And Methods: Data from the ongoing multicenter LIBERAL-Trial (liberal transfusion strategy to prevent mortality and anemia-associated ischemic events in elderly noncardiac surgical patients, LIBERAL) were used. In this single-center sub-study, regional cerebral oxygenation saturation (rSO2) was evaluated by NIRS at baseline, pre-, and post-RBC transfusion. The obtained values were correlated with blood gas analysis-measured Hb concentrations. Results: rSO2 correlated with Hb decline during surgery (r = 0.35, p < 0.0001). Different RBC transfusion strategies impacted rSO2 such that higher Hb values resulted in higher rSO2. Cerebral desaturation occurred at lower Hb values more often. Discussion: Cerebral oxygenation monitoring using NIRS provides noninvasive rapid and continuous information regarding perioperative alterations in Hb concentration without the utilization of patients' blood for blood sampling. Further investigations are required to demonstrate if cerebral rSO2 may be included in future individualized transfusion decision strategies. abstract_id: PUBMED:11424814 Cerebral oxygenation and near-infrared rays spectrophotometry Near infrared spectroscopy (NIRS) is a new technique which allows non invasive bedside monitoring of cerebral oxygenation and hemodynamics by measuring relative changes in cerebral oxy- and deoxyhaemoglobin and cytochrome aa3.
We have applied this technique to evaluate the possible effects on cerebral oxygenation and hemodynamics of clinical procedures usually performed on preterm infants: (i) endotracheal suctioning, where we demonstrated that the magnitude and the duration of the negative effects of an open system are significantly reduced using a closed endotracheal suctioning system; (ii) withdrawal and infusion through the umbilical vein and artery, which cause significant changes in cerebral hemodynamics that are significantly reduced after administration of ibuprofen; (iii) treatment of patent ductus arteriosus with ibuprofen, which does not significantly reduce cerebral perfusion and oxygen availability compared to indomethacin, and whose administration also does not affect cerebral vasoreactivity to arterial carbon dioxide tension; and (iv) administration of different types and doses of natural surfactant, which causes different changes in cerebral hemodynamics, with effects that seem to be dose-related. Therefore NIRS is a useful device to investigate the cerebral oxygenation state of preterm infants, and new possibilities could derive from the introduction of a new NIRS method which allows measurement of the tissue oxygenation index. abstract_id: PUBMED:28882917 Effects of cerebral near-infrared spectroscopy on the outcome of patients undergoing cardiac surgery: a systematic review of randomised trials. Objectives: Goal-directed optimisation of cerebral oxygenation using near-infrared spectroscopy (NIRS) during cardiopulmonary bypass is widely used. We tested the hypotheses that the use of NIRS cerebral oximetry results in reductions in cerebral injury (neurocognitive function, serum biomarkers), injury to other organs including the heart and brain, transfusion rates, mortality and resource use. Design: Systematic review and meta-analysis. Setting: Tertiary cardiac surgery centres in North America, Europe and Asia. Participants: A search of Cochrane Central Register of Controlled Trials, ClinicalTrials.gov, Medline, Embase, and the Cumulative Index to Nursing and Allied Health Literature Plus from inception to November 2016 identified 10 randomised trials, enrolling a total of 1466 patients, all in adult cardiac surgery. Interventions: NIRS-based algorithms designed to optimise cerebral oxygenation versus standard care (non-NIRS-based) protocols in cardiac surgery patients during cardiopulmonary bypass. Outcome Measures: Mortality, organ injury affecting the brain, heart and kidneys, red cell transfusion and resource use. Results: Two of the 10 trials identified in the literature search were considered at low risk of bias. Random-effects meta-analysis demonstrated similar mortality (risk ratio (RR) 0.76, 95% CI 0.30 to 1.96), major morbidity including stroke (RR 1.08, 95% CI 0.40 to 2.91), red cell transfusion and resource use in NIRS-treated patients and controls, with little or no heterogeneity. Grades of Recommendation, Assessment, Development and Evaluation (GRADE) quality of the evidence was low or very low for all of the outcomes assessed. Conclusions: The results of this systematic review did not support the hypotheses that cerebral NIRS-based algorithms have clinical benefits in cardiac surgery. Trial Registration Number: PROSPERO CRD42015027696. abstract_id: PUBMED:9886352 Use of mitochondrial inhibitors to demonstrate that cytochrome oxidase near-infrared spectroscopy can measure mitochondrial dysfunction noninvasively in the brain.
The use of near-infrared spectroscopy to measure noninvasively changes in the redox state of cerebral cytochrome oxidase in vivo is controversial. We therefore tested these measurements using a multiwavelength detector in the neonatal pig brain. Exchange transfusion with perfluorocarbons revealed that the spectrum of cytochrome oxidase in the near-infrared was identical in the neonatal pig, the adult rat, and in the purified enzyme. Under normoxic conditions, the neonatal pig brain contained 15 micromol/L deoxyhemoglobin, 29 micromol/L oxyhemoglobin, and 1.2 micromol/L oxidized cytochrome oxidase. The mitochondrial inhibitor cyanide was used to determine whether redox changes in cytochrome oxidase could be detected in the presence of the larger cerebral hemoglobin concentration. Addition of cyanide induced full reduction of cytochrome oxidase in both blooded and bloodless animals. In the blooded animals, subsequent anoxia caused large changes in hemoglobin oxygenation and concentration but did not affect the cytochrome oxidase near-infrared signal. Simultaneous blood oxygenation level-dependent magnetic resonance imaging measurements showed a good correlation with near-infrared measurements of deoxyhemoglobin concentration. Possible interference in the near-infrared measurements from light scattering changes was discounted by simultaneous measurements of the optical pathlength using the cerebral water absorbance as a standard chromophore. We conclude that, under these conditions, near-infrared spectroscopy can accurately measure changes in the cerebral cytochrome oxidase redox state. abstract_id: PUBMED:28969313 Randomized trial of near-infrared spectroscopy for personalized optimization of cerebral tissue oxygenation during cardiac surgery. Background: We assessed whether a near-infrared spectroscopy (NIRS)-based algorithm for the personalized optimization of cerebral oxygenation during cardiopulmonary bypass combined with a restrictive red cell transfusion threshold would reduce perioperative injury to the brain, heart, and kidneys. Methods: In a randomized controlled trial, participants in three UK centres were randomized with concealed allocation to a NIRS (INVOS 5100; Medtronic Inc., Minneapolis, MN, USA)-based 'patient-specific' algorithm that included a restrictive red cell transfusion threshold (haematocrit 18%) or to a 'generic' non-NIRS-based algorithm (standard care). The NIRS algorithm aimed to maintain cerebral oxygenation at an absolute value of > 50% or at > 70% of baseline values. The primary outcome for the trial was cognitive function measured up to 3 months postsurgery. Results: The analysis population comprised eligible randomized patients who underwent valve or combined valve surgery and coronary artery bypass grafts using cardiopulmonary bypass between December 2009 and January 2014 (n = 98 patient-specific algorithm; n = 106 generic algorithm). There was no difference between the groups for the three core cognitive domains (attention, verbal memory, and motor coordination) or for the non-core domains psychomotor speed and visuo-spatial skills. The NIRS group had higher scores for verbal fluency; mean difference 3.73 (95% confidence interval 1.50, 5.96). Red cell transfusions, biomarkers of brain, kidney, and myocardial injury, adverse events, and health-care costs were similar between the groups. Conclusions: These results do not support the use of NIRS-based algorithms for the personalized optimization of cerebral oxygenation in adult cardiac surgery.
Clinical Trial Registration: http://www.controlled-trials.com , ISRCTN 23557269. abstract_id: PUBMED:12697404 Cerebral and peripheral oxygen saturation during red cell transfusion. Background: Changes in regional hemoglobin oxygen saturation occur in response to blood transfusion and can be measured by near infrared spectroscopy. Patients And Methods: Cerebral (CsO2) and peripheral (PsO2) oxygen saturation were monitored with an INVOS 4100 near infrared spectroscopy oximeter in 29 patients undergoing 84 intraoperative blood transfusions during aortic or spinal surgery. Hemoglobin concentration was measured before and after transfusion. Mean arterial pressure, end tidal carbon dioxide tension, and arterial oxygen saturation were also monitored. Results: Mean arterial pressure, arterial oxygen saturation and end tidal carbon dioxide tension remained stable during transfusion, while CsO2 rose by a mean (95% CI) of 4.2 (3.2-5.2)% (P = 0.001) and PsO2 rose by a mean (95% CI) of 1.6 (0.3-2.8)% (P = 0.016). The rise in CsO2 correlated well with the rise in hemoglobin (r = 0.59, P < 0.001) and with the volume transfused (r = 0.58, P < 0.001). PsO2 correlated with the volume transfused (r = 0.35, P = 0.019) but not with hemoglobin concentration (r = 0.08, P = 0.47). Conclusions: Near infrared spectroscopy detected significant rises in tissue oxygenation in response to blood transfusion, particularly in the cerebral cortex. CsO2 may be developed into a blood loss monitor if further research confirms our findings. abstract_id: PUBMED:15910352 Cerebral oxygenation monitoring using near infrared spectroscopy during controlled hypotension. Background: Controlled hypotension (CH) is used to limit intraoperative blood loss and decrease the need for homologous transfusions. Despite the efficacy of the technique, hypotension has the potential to affect cerebral perfusion and oxygen delivery. There are no data providing a direct measurement of cerebral oxygenation during this technique. Methods: The current study prospectively evaluated cerebral oxygenation during CH using near infrared spectroscopy. Nineteen patients ranging in age from 6 to 18 years were enrolled in the study. CH was provided using a combination of intravenous opioids and sevoflurane supplemented with labetalol as necessary. Results: There were a total of 268 readings obtained from the cerebral oximeter. The baseline cerebral oximeter reading was 81±8% on the right and 82±7% on the left. During CH (mean arterial pressure of 65-69, 60-64, 55-59, and <54 mmHg), the right cerebral oximeter values were 80±9, 78±8, 78±10, and 84±9%, respectively, while the left cerebral oximeter values were 79±7, 80±7, 78±8, and 78±8%, respectively. Of the 268 readings, there were 11 (4%) at which either the left or right cerebral oximeter value was 10-19 points below the baseline value and none at which the reading was 20 points or more below the baseline value. Conclusions: Our preliminary data with a measurement of cerebral oxygenation demonstrate the safety of CH within the accepted mean blood pressure recommendations of 55-65 mmHg. Answer: Cerebral and peripheral near-infrared spectroscopy (NIRS) has been investigated as a potential alternative transfusion trigger by monitoring tissue oxygenation during various clinical scenarios, including surgery and controlled hypotension. The studies suggest that NIRS may provide valuable information about tissue oxygenation that could inform transfusion decisions.
One study found that during acute normovolaemic haemodilution, both cerebral and peripheral hemoglobin oxygen saturation (CsO2 and PsO2) fell predictably with compensated blood loss, indicating that NIRS could potentially be developed into a transfusion trigger with further research (PUBMED:12366769). Another study demonstrated that different near-infrared spectroscopy oximetry devices could affect the assessment of cerebral ischemia and its management during cardiopulmonary bypass surgery, suggesting that device selection is critical and that intervention criteria should not be the same for different devices (PUBMED:34091814). Furthermore, a study on elderly patients undergoing noncardiac surgery showed that NIRS-derived cerebral oxygen saturation correlated with hemoglobin decline during surgery, and different red blood cell (RBC) transfusion strategies impacted cerebral oxygenation, suggesting that NIRS could be included in future individualized transfusion decision strategies (PUBMED:37767283). Additionally, during controlled hypotension, cerebral oxygenation measured by NIRS remained stable, providing preliminary data on the safety of controlled hypotension within accepted mean blood pressure recommendations (PUBMED:15910352). However, it is important to note that the evidence is not entirely conclusive. A systematic review of randomized trials did not support the hypothesis that cerebral NIRS-based algorithms have clinical benefits in cardiac surgery (PUBMED:28882917). Moreover, a randomized trial found no difference in cognitive function or other outcomes between patients managed with a NIRS-based algorithm and those with standard care (PUBMED:28969313). In summary, while NIRS shows promise as a tool for monitoring tissue oxygenation and potentially guiding transfusion decisions, more research is needed to establish its efficacy and to determine the appropriate intervention criteria for different devices and clinical settings.
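Across the devices discussed in this Q&A, the monitored quantity has the same underlying form: regional oxygen saturation is the oxygenated fraction of haemoglobin in the interrogated tissue, recovered from light attenuation. The two standard relations are sketched below; these are textbook NIRS definitions rather than formulas taken from the abstracts, and individual devices differ in wavelengths and calibration, which is exactly why PUBMED:34091814 warns against sharing intervention thresholds across devices:

```latex
% Regional (tissue) oxygen saturation reported by NIRS oximeters:
\[
  \mathrm{rSO_2} = \frac{[\mathrm{HbO_2}]}{[\mathrm{HbO_2}] + [\mathrm{HHb}]} \times 100\%
\]
% Attenuation under the modified Beer-Lambert law (d = source-detector
% distance, DPF = differential pathlength factor, G = scattering losses):
\[
  A(\lambda) = \varepsilon(\lambda)\, c \, d \, \mathrm{DPF} + G
\]
% G is unknown but constant, so only concentration changes are recovered,
% matching the "relative changes" wording in PUBMED:11424814:
\[
  \Delta A(\lambda) = \varepsilon(\lambda)\, \Delta c \, d \, \mathrm{DPF}
\]
```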
Instruction: Do evolving practices improve survival in operated lung cancer patients? Abstracts: abstract_id: PUBMED:19240651 Do evolving practices improve survival in operated lung cancer patients? A biobank may answer. Introduction: Biobanks may play a pivotal role in lung cancer patients' management, research, and health policy. The Nancy "Centre of Biologic Resources" analyzed the evolving profiles of operated lung cancer patients and their management over 20 years. Methods: A total of 1259 consecutive patients operated upon from 1988 till 2007 were included. Survival rates were statistically compared before and after 1997. The parameters associated with a significant improvement of survival were determined. Results: After 1997, lung cancer was diagnosed at an earlier stage. For Squamous Cell Lung Cancer (SQCLC), stages IA increased from 5.4 to 19.5% and for Adenocarcinoma (ADC), stage IA increased from 9.9 to 24.7%. More women with stage I ADC were operated upon after 1997 (p = 0.01). More patients with Large Cell Lung Cancer were diagnosed recently. Recent patients received more adjuvant or neo-adjuvant chemotherapy (p < 0.001) and less radiotherapy (stage I SQCLC: p = 0.019, stage I ADC: p < 0.001). A longer overall patients' survival was observed after 1997 (chi-square test for SQCLC and ADC independently, p ≤ 0.002). Among SQCLC long survivors, those at stage I-II, below 50 years, were more numerous. A longer survival was associated with early stage in ADC patients. Stage was the single constant factor for overall outcome. Conclusion: Overall and stage-adjusted survival of operated lung cancer patients has been improved in the last decade due mainly to earlier diagnosis. The generalized use of computed tomography scan, chemotherapy, and a collegial management improved patients' survival. abstract_id: PUBMED:30444595 Factors affecting survival of radically operated nonsmall cell lung cancer patients of 70 years and older Univariate and multivariate analyses of factors affecting overall cancer survival were performed for 613 patients aged 70 years or older radically operated on for non-small cell lung cancer. The main independent predictors of survival were the T and N categories. These variables retained their importance in the overall multivariate model for both types of survival, and the variable T in separate multivariate models for samples stratified by N category. The presence of diabetes dramatically reduced both types of survival. Among patients without regional metastases, women had a higher survival rate. The age of the operated patients older than 70 years had no significant independent effect on survival. abstract_id: PUBMED:1228490 Statistical notes on the age and survival of 178 patients operated for malignant breast tumors Age and survival data in a series of 178 patients operated for malignant breast tumour during the period 1957-67 are statistically assessed. The mean survival period was 9 yr. Young subjects had a greater chance of survival, while their mortality rate was the lowest after the fifth year. abstract_id: PUBMED:36303686 Relationship between survival and erythrocyte sedimentation rate in patients operated for lung cancer. Background: This study aims to investigate the relationship between preoperative erythrocyte sedimentation rate and survival in patients undergoing pulmonary resection due to lung cancer.
Methods: Between January 2011 and July 2017, a total of 575 patients (433 males, 142 females; mean age: 61.2±9.9 years; range, 29 to 82 years) who were operated on for primary lung cancer in our clinic were retrospectively analyzed. The patients were grouped according to erythrocyte sedimentation rate to analyze the relationship between erythrocyte sedimentation rate and survival. Results: The mean overall survival time was 61.8±1.7 months in 393 patients with an erythrocyte sedimentation rate of ≤24 mm/h and 48.9±2.9 months in 182 patients with an erythrocyte sedimentation rate of ≥25 mm/h (p<0.001). Among the patients with Stage 1-2 disease, the mean survival time was 66.2±1.9 months in patients with an erythrocyte sedimentation rate of ≤24 mm/h and 53.8±3.2 months in patients with an erythrocyte sedimentation rate of ≥25 mm/h (p=0.008). The mean survival time in patients with adenocarcinoma was 62.4±2.4 months in patients with ≤24 mm/h erythrocyte sedimentation rate and 46.1±4.6 months in patients with ≥25 mm/h erythrocyte sedimentation rate (p=0.003). Conclusion: The relationship between elevated erythrocyte sedimentation rate and poor prognosis in patients with the same stage of the disease is promising for the use of erythrocyte sedimentation rate as a prognostic marker. abstract_id: PUBMED:30056558 Morbidity, mortality, and survival in elderly patients undergoing pulmonary metastasectomy for colorectal cancer. Purpose: The number of elderly patients with colorectal cancer is increasing. Nevertheless, they are undertreated compared to younger patients. This study compares postoperative morbidity, mortality, survival, and morbidity risk factors of elderly and younger patients undergoing pulmonary metastasectomy (PM). Methods: We retrospectively analyzed our prospective database of 224 patients operated for colorectal lung metastases between 1999 and 2014. Two groups were defined to evaluate the influence of the patients' age (A: < 70 years; B: ≥ 70 years). Morbidity, mortality, and risk factors for morbidity were analyzed using χ2-test and Fisher's exact test. The Kaplan-Meier method, log-rank test, and multivariate Cox regression were used to assess survival and prognosticators. Results: Altogether, minor morbidity, major morbidity, and mortality were 17%, 5.8%, and 0%, respectively. Between groups A (n = 170) and B (n = 54), there was no difference in minor and major morbidity (p = 0.100) or mortality (0%). Heart arrhythmia was a risk factor for increased morbidity in group B (p = 0.007). The 5-, 10-, and 15-year survival rates were 43%, 30%, and 27%, respectively, in group A and 55%, 36%, and 19%, respectively, in group B (p = 0.316). Disease-free interval ≥ 36 months (p = 0.023; OR 2.88) and anatomic resections (p = 0.022; OR 3.05) were associated with prolonged survival in elderly patients. Conclusions: Morbidity, mortality, and overall survival after PM with lymphadenectomy for elderly patients were comparable to younger patients. A disease-free interval > 36 months and anatomic lung resections might be associated with prolonged survival. However, elderly patients should also be evaluated for a curative treatment. abstract_id: PUBMED:18648883 Does surgical resection of pulmonary metastases of head and neck cancer improve survival? Background: The prognosis of patients with metastasized head and neck cancer is poor.
Limited experience exists with the benefit of resection of lung metastases and systematic mediastinal and hilar lymph node dissection on survival of patients with head and neck carcinoma. Methods: Eighty patients undergoing metastasectomy for pulmonary metastases of primary head and neck cancer entered the study. Multivariate analysis was performed by Cox regression analysis. Survival differences between patients who were operated on and those who were not were analyzed by matched pair analysis. Results: From 1984 until 2006, pulmonary metastases were diagnosed in 332 patients treated for head and neck cancer; 80 of these were admitted to our department for resection. Metastases of the primary head and neck tumor were confirmed histologically in 67 patients. The median overall survival after resection of lung metastases was 19.4 months and was statistically significantly better compared with patients who were not operated on (P < .001). The multivariate analysis after metastasectomy revealed that incomplete resection of pulmonary lesions, complications associated with surgery, and adjuvant therapy of the primary tumor are independent negative prognostic factors for survival. We observed a trend toward improved survival in patients without hilar or mediastinal lymph node metastases. Conclusion: The survival rate of patients operated on was statistically significantly higher than that of patients with conservative treatment. Even patients with multiple or bilateral pulmonary lesions after curative treatment of a primary tumor should be operated on if there is no contraindication against an extended surgical procedure and a complete resection of the metastases seems achievable. abstract_id: PUBMED:30069313 Five-year survival analysis and prognostic factors in patients operated on for non-small cell lung cancer with N2 disease. Background: The aim of the study is to analyse 5-year survival and prognostic factors in patients operated on for lung cancer with postoperatively confirmed metastases to N2 group nodes. Methods: In the years 2007-2015, 1,148 patients with lung cancer were treated surgically. A postoperative histopathological study confirmed the N2 feature in 150 patients. One hundred and ten patients, for whom a 5-year survival analysis could be performed, were included in this analysis. The patients underwent the following procedures: pneumonectomy, 31 patients; lobectomy, 61; bilobectomy, 10; and wedge resection, 8. All patients were subjected to supplementary treatment after the surgery. Results: Five-year survival was achieved in 23 patients (21%). The patients' survival did not depend on the type of surgery, type of tumour, its location or presence of metastases in N1 nodes (P=0.82, P=0.51, P=0.36 and P=0.23, respectively). A statistically significant correlation was observed (P=0.01) between the 5-year survival of a patient and the occurrence of metastases only in one group of lymph nodes of the N2 feature (22 patients, 20%). Involvement of three or more mediastinal nodal groups resulted in survival shorter than 5 years. Conclusions: (I) In patients with the N2 feature, the type of performed surgery, type of tumour and the occurrence of metastases in the lymph nodes of the lung hilum do not affect 5-year survival; (II) involvement of only one nodal group allows 5-year survival to be achieved in 20% of patients; (III) involvement of three or more nodal groups with the N2 feature results in decreased 5-year survival.
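The survival figures in these oncology abstracts come from a common toolkit: Kaplan-Meier estimates compared across groups with a log-rank test, plus Cox regression for multivariate analysis (PUBMED:30056558 names exactly these methods). As a concrete illustration, here is a minimal sketch in Python with the lifelines library; the CSV file and column names are hypothetical, invented for the example:

```python
# Sketch: Kaplan-Meier curves and a log-rank test comparing patients with
# one involved N2 nodal group vs. three or more, as in PUBMED:30069313.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("n2_cohort.csv")  # hypothetical columns: time_months, died, n2_groups
one_group = df[df["n2_groups"] == 1]
multi_group = df[df["n2_groups"] >= 3]

kmf = KaplanMeierFitter()
kmf.fit(one_group["time_months"], event_observed=one_group["died"], label="1 N2 group")
print(kmf.predict(60))  # estimated survival probability at 5 years (60 months)

# Log-rank test for a difference between the two survival curves
result = logrank_test(
    one_group["time_months"], multi_group["time_months"],
    event_observed_A=one_group["died"], event_observed_B=multi_group["died"],
)
print(result.p_value)
```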
abstract_id: PUBMED:29106506 The age-adjusted Charlson comorbidity index is a better predictor of survival in operated lung cancer patients than the Charlson and Elixhauser comorbidity indices. Objectives: To compare the prognostic performance between different comorbidity assessments of survival in patients with operated lung cancer. Methods: A total of 4508 lung cancer patients treated by surgery between 2003 and 2012 were identified through Taiwan's National Health Insurance Research Database. Information on pre-existing comorbidities prior to the cancer diagnosis was obtained and adapted to the Charlson comorbidity index, age-adjusted Charlson comorbidity index (ACCI) and Elixhauser comorbidity index scores. The influence on survival was analysed using a Cox proportional hazard model. The discriminatory ability of the comorbidity indices was evaluated using Akaike information criterion and Harrell's C-statistic. Results: The mean age of the study cohort was 64.95 ± 11.15 years, and 56.28% of the patients were male. The median follow-up time was 2.59 years, and the 3-year overall survival was 73.94%. Among these patients, 2134 (47.3%) patients received adjuvant therapy. The Charlson comorbidity index and ACCI scores correlated well with survival and higher scores were associated with an increased 3-year mortality risk (hazard ratio = 1.21, 95% confidence interval = 1.03-1.42 and hazard ratio = 1.43, 95% confidence interval = 1.08-1.90, respectively) in multivariate analysis. The ACCI scores provided better discriminatory ability with a smaller Akaike information criterion and greater Harrell's C-statistic for 3-year overall survival compared to the Charlson comorbidity index or Elixhauser comorbidity index scores. Conclusions: The operated lung cancer patients with severe comorbidities were associated with worse survival. The ACCI appears to be a more appropriate prognostic indicator and should be considered for use in clinical practice. abstract_id: PUBMED:9591882 Does surgery improve the survival of patients with advanced anaplastic thyroid carcinoma? Anaplastic thyroid carcinoma is one of the most lethal neoplasms, with poor prognosis being reported by most authors. The benefits of surgery for most cases of advanced disease remain controversial. In this study we asked the following question: Does surgical intervention alter outcomes for patients with advanced anaplastic thyroid carcinoma? Forty-six patients with advanced anaplastic thyroid carcinoma were analyzed. There were 20 patients with advanced localized disease (group 1), 15 of whom received surgery. Of the other 26 patients with evidence of distant metastases (group 2), 13 received surgery. For group 1 patients, the mean survival was 12.8 months versus 8.6 months in the surgical and nonsurgical subgroups (p = 0.46). For group 2 patients, the mean survival was 3.5 months versus 2.8 months in the surgical and nonsurgical subgroups (p = 0.72). These data suggest that surgery does not improve survival for patients with advanced anaplastic thyroid carcinoma. In conclusion, the mean survival showed no significant differences between surgical and nonsurgical patients (p = 0.43). This study suggests that surgical resection does not improve the survival of patients with advanced anaplastic thyroid carcinoma. abstract_id: PUBMED:34999793 Social determinants and facility type impact adherence to best practices in operable IIIAN2 lung cancer.
Objectives: We aimed to identify patient- and facility-specific predictors of collective adherence to 4 recommended best treatment practices in operable IIIAN2 non-small-cell lung cancer (NSCLC) and test the hypothesis that collective adherence is associated with superior survival. Methods: We queried the National Cancer Database for clinical stage IIIAN2 NSCLC patients undergoing surgery during 2010-2015. The following best practices were examined: performance of an anatomic resection, performance of an R0 resection, examination of regional lymph nodes and administration of induction therapy. Multivariable regression models were fitted to identify independent predictors of guideline concordance. Results: We identified 7371 patients undergoing surgical resection for IIIAN2 lung cancer, of whom 90.8% underwent an anatomic resection, 88.2% received an R0 resection, 92.5% underwent a regional lymph node examination, 41.6% received induction therapy and 33.7% received all 4 best practices. Higher income, private insurance and treatment at an academic facility were independently associated with adherence to all 4 best practices (P < 0.01). A lower level of education and residence in a rural county were associated with a lack of adherence (P < 0.05). Adherence to all 4 practices correlated with improved survival (P < 0.01). Conclusions: National adherence to best treatment practices in operable IIIAN2 lung cancer was variable as evidenced by the majority of patients not receiving recommended induction therapy. Socioeconomic factors and facility type are important determinants of guideline concordance. Future efforts to improve outcomes should take this into account since guideline concordance, in the form of collective adherence to all 4 best practices, was associated with improved survival. Answer: Yes, evolving practices appear to improve survival in operated lung cancer patients. The Nancy "Centre of Biologic Resources" conducted a study analyzing the profiles and management of operated lung cancer patients over 20 years and found that overall and stage-adjusted survival has improved in the last decade. This improvement is attributed to earlier diagnosis, the generalized use of computed tomography scans, chemotherapy, and a collegial management approach (PUBMED:19240651). Additionally, factors such as T and N categories were found to be the main independent predictors of survival in older patients operated for non-small cell lung cancer (PUBMED:30444595). Furthermore, the relationship between preoperative erythrocyte sedimentation rate and survival in patients undergoing pulmonary resection due to lung cancer suggests that elevated erythrocyte sedimentation rate is associated with poor prognosis, indicating its potential use as a prognostic marker (PUBMED:36303686). Moreover, elderly patients undergoing pulmonary metastasectomy for colorectal cancer showed comparable morbidity, mortality, and overall survival to younger patients, with a disease-free interval of ≥ 36 months and anatomic resections being associated with prolonged survival (PUBMED:30056558). In patients with head and neck cancer, surgical resection of pulmonary metastases significantly improved survival compared to those who did not undergo surgery (PUBMED:18648883). In patients operated on for non-small cell lung cancer with N2 disease, involvement of only one nodal group allowed for a 20% chance of achieving 5-year survival (PUBMED:30069313).
The age-adjusted Charlson comorbidity index was also found to be a better predictor of survival in operated lung cancer patients than other comorbidity indices, suggesting that severe comorbidities are associated with worse survival (PUBMED:29106506). Lastly, adherence to best practices in operable IIIAN2 lung cancer, including anatomic resection, R0 resection, examination of regional lymph nodes, and administration of induction therapy, was associated with improved survival, with socioeconomic factors and facility type being important determinants of guideline concordance (PUBMED:34999793).
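The comorbidity-index comparison cited from PUBMED:29106506 (smaller Akaike information criterion, larger Harrell's C-statistic) maps onto a short, generic workflow: fit one Cox model per index and compare the two metrics. A hypothetical sketch in Python with lifelines; the data file and column names are invented, and AIC is computed here from the partial log-likelihood:

```python
# Sketch: compare comorbidity indices as survival predictors by Harrell's C
# and AIC, in the spirit of PUBMED:29106506. Column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("lung_cancer_cohort.csv")  # columns: charlson, acci, elixhauser, years, death

for index_col in ["charlson", "acci", "elixhauser"]:
    cph = CoxPHFitter()
    cph.fit(df[[index_col, "years", "death"]], duration_col="years", event_col="death")
    k = len(cph.params_)                   # number of fitted coefficients
    aic = 2 * k - 2 * cph.log_likelihood_  # lower AIC = better fit
    print(f"{index_col}: C = {cph.concordance_index_:.3f}, AIC = {aic:.1f}")
```

A higher concordance together with a lower AIC marks the more useful index, which is the pattern the study reports for the age-adjusted score.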
Instruction: Should nonalcoholic fatty liver disease be included in the definition of metabolic syndrome? Abstracts: abstract_id: PUBMED:31691175 Metabolic Syndrome in Children and Adolescents: Diagnostic Criteria, Therapeutic Options and Perspectives. Purpose Of Review: This review summarizes our current understanding of the metabolic syndrome (MetS) in children and adolescents. Special emphasis is given towards diagnostic criteria and therapeutic options. Recent Findings: Consistent diagnostic criteria to define MetS in childhood and adolescence are not available to date. There is common agreement that the main features defining MetS include (1) disturbed glucose metabolism, (2) arterial hypertension, (3) dyslipidemia, and (4) abdominal obesity. However, settings of cut-off values are still heterogeneous in the pediatric population. Additional features that may define cardiometabolic risk, such as non-alcoholic fatty liver disease (NAFLD) or hyperuricemia, are not considered to date. Prevalence of childhood obesity has more than doubled since 1980, and 6-39% of obese children and adolescents already present with MetS, depending on the definition applied. There is common agreement that a consistent definition of MetS is urgently needed for children to identify those at risk as early as possible. Such definition criteria should consider age, gender, pubertal stage, or ethnicity. Additional features such as NAFLD or hyperuricemia should also be included in MetS criteria. Lifestyle modification is still the main basis to prevent or treat childhood obesity and MetS, as other therapeutic options (pharmacotherapy, bariatric surgery) are not available or not recommended for the majority of affected youngsters. abstract_id: PUBMED:37094320 Editorial: should NAFLD be included in the definition of Metabolic Syndrome? N/A abstract_id: PUBMED:26636520 Is it possible a new definition of metabolic syndrome in childhood? Objective: To investigate whether a group of Italian children and adolescents who were diagnosed to have metabolic syndrome (MS) according to a new ethnic age and gender specific definition had, in comparison with a control group, other signs and metabolic risk factors which are commonly associated with MS. Patients And Methods: The cross-sectional study population included 300 subjects (51% boys, age range 6-14 years), who were divided into 2 groups according to the presence of MS, diagnosed on the basis of 3/5 factors derived from the age and gender specific quantile distribution of MS components in a large regional Italian population survey (Calabrian Sierras Community Study, CSCS). In all subjects the following data were collected: anthropometric measures, blood pressure, liver function, C-reactive protein (hsCRP), uric acid blood levels, lipid and glucose profile. Triglycerides/HDL-cholesterol (TG/HDL-C) ratio was calculated. Results: There were 38 subjects (13%) with MS, who had higher indices of growth and fat distribution and higher blood levels of uric acid, alanine aminotransferase and gamma-glutamyltransferase. TG/HDL ratio was higher (median 3.11 vs. 1.14, p = 0.00001) in MS subjects who had lower apolipoprotein A and higher apolipoprotein B and non-HDL-C levels. hsCRP was not different between groups.
Conclusions: Our ethnic age and gender specific definition of MS in Italian children and adolescents was able to identify in a youth group different cardiometabolic risk factors related to insulin resistance, endothelial damage and nonalcoholic fatty liver disease, which are commonly associated with MS diagnosis. abstract_id: PUBMED:36710531 The prognostic value of including non-alcoholic fatty liver disease in the definition of metabolic syndrome. Background/aims: Metabolic syndrome (MetS) affects over one third of the US adult population. Despite its close association with non-alcoholic fatty liver disease (NAFLD), the traditional definition of MetS does not account for the presence of NAFLD. The present study thus aims to evaluate the effect of including NAFLD in the diagnostic criteria of metabolic syndrome on its accuracy in capturing individuals with metabolic dysregulation and its prediction of adverse events. Methods: Data collected from NHANES between 1999 and 2018 was analysed. Clinical characteristics and outcomes between individuals with metabolic syndrome from both the American Heart Association/National Heart, Lung, and Blood Institute (MetS) and the study's proposed diagnostic criteria (MetS2) were evaluated. Outcomes in both groups were evaluated with multivariate analyses, and further subgroup analysis on individuals matched with Coarsened Exact Matching was performed. Results: Of 46,184 individuals included, 32.54% and 40.54% fulfilled MetS and MetS2 criteria respectively. Considering NAFLD in the definition of metabolic syndrome, a further 8.00% (n = 3694) were included. MetS was significantly associated with all-cause (HR: 1.184, 95% CI: 1.110-1.263, p < 0.001) and cardiovascular disease (CVD) mortality (SHR: 1.288, 95% CI: 1.233-1.347, p < 0.001), and major adverse cardiovascular events (MACE). MetS2 was similarly associated with all-cause (HR: 1.175, 95% CI: 1.088-1.269, p < 0.001), CVD mortality (SHR: 1.283, 95% CI: 1.245-1.323, p < 0.001) and MACE. Conclusion: Inclusion of NAFLD allows for identification of a greater proportion of the population with metabolic risk. This allows for early intervention and potential to lift some burden off the global healthcare system. abstract_id: PUBMED:33137486 Impact of the New Definition of Metabolic Associated Fatty Liver Disease on the Epidemiology of the Disease. Background & Aims: Recently, a group of hepatologists proposed to rename non-alcoholic fatty liver disease (NAFLD) as metabolic associated fatty liver disease (MAFLD) with modified diagnostic criteria. We aimed to study the impact of the new definition on the epidemiology of fatty liver disease. Methods: We randomly selected 1013 adults from the Hong Kong census database for clinical assessment, proton-magnetic resonance spectroscopy, and transient elastography. Five hundred sixty-five subjects without fatty liver at baseline underwent follow-up assessment. MAFLD was diagnosed as intrahepatic triglyceride content (IHTG) ≥5% and the presence of overweight/obesity, diabetes, or two other metabolic risk factors, with and without concomitant liver diseases. The diagnosis of NAFLD required the exclusion of concomitant liver diseases; metabolic factors were not considered. Results: The population prevalence of MAFLD and NAFLD was 25.9% (95% CI 23.2-28.7%) and 25.7% (95% CI 23.1-28.5%), respectively. Among 277 subjects with IHTG ≥5%, 247 (89.2%) fulfilled both the definitions of MAFLD and NAFLD.
Fourteen subjects (5.1%) had IHTG ≥5% but did not meet the metabolic criteria of MAFLD. The incidence of MAFLD was 2.8 per 100 person-years at a median interval of 47 months (range 34-60 months). Among 78 subjects with incident NAFLD, 59 (75.6%) met the criteria of MAFLD; only one of the latter, a regular drinker, had liver stiffness ≥10 kPa. Conclusions: The new definition of MAFLD does not significantly change the prevalence compared with NAFLD, but it may reduce the incidence by 25%. People with hepatic steatosis but not fulfilling the definition of MAFLD are unlikely to have significant liver disease. abstract_id: PUBMED:32795106 Metabolic Syndrome in Children and Adolescents: Is There a Universally Accepted Definition? Does it Matter? The concept of metabolic syndrome (MetS) as a cluster of cardiovascular risk factors (obesity, altered glucose metabolism, dyslipidemia, and hypertension) has been around for more than 30 years. It is considered to be the result of complex interactions between centrally located fat, insulin resistance, subclinical inflammation, and other factors in genetically predisposed individuals. MetS diagnosis in adults has been linked to increased risk for cardiovascular disease (CVD) and type 2 diabetes mellitus (T2D). However, MetS in children and adolescents remains a controversial issue despite the extensive research in the field. It is still uncertain which definition should be used for its diagnosis in this age group, what is the clinical significance of such a diagnosis, and how reliably it can predict the future risk of developing CVD and T2D. Even if a child is diagnosed with MetS, management includes addressing each of the syndrome's components individually with weight loss and lifestyle modifications as the basic approach. Co-morbid conditions, such as nonalcoholic fatty liver disease, obstructive sleep apnea, and polycystic ovary syndrome should also be considered. It seems that MetS in children and adolescents should be used clinically as a conceptual framework for the identification of risk factors clustered around obesity and insulin resistance rather than a syndrome that needs to be diagnosed by measuring absolute "all-or-none" criteria. abstract_id: PUBMED:18056890 Should nonalcoholic fatty liver disease be included in the definition of metabolic syndrome? A cross-sectional comparison with Adult Treatment Panel III criteria in nonobese nondiabetic subjects. Objective: The ability of the Adult Treatment Panel III (ATP III) criteria of metabolic syndrome to identify insulin-resistant subjects at increased cardiovascular risk is suboptimal, especially in the absence of obesity and diabetes. Nonalcoholic fatty liver disease (NAFLD) is associated with insulin resistance and is emerging as an independent cardiovascular risk factor. We compared the strength of the associations of ATP III criteria and of NAFLD to insulin resistance, oxidative stress, and endothelial dysfunction in nonobese nondiabetic subjects. Research Design And Methods: Homeostasis model assessment of insulin resistance (HOMA-IR) >2, oxidative stress (nitrotyrosine), soluble adhesion molecules (intracellular adhesion molecule-1, vascular cell adhesion molecule-1, and E-selectin), and circulating adipokines (tumor necrosis factor-alpha, leptin, adiponectin, and resistin) were cross-sectionally correlated to ATP III criteria and to NAFLD in 197 unselected nonobese nondiabetic subjects. Results: NAFLD more accurately predicted insulin resistance than ATP III criteria: sensitivity 73 vs.
38% (P = 0.0001); positive predictive value 81 vs. 62% (P = 0.035); negative predictive value 87 vs. 74% (P = 0.012); positive likelihood ratio 4.39 vs. 1.64 (P = 0.0001); and negative likelihood ratio 0.14 vs. 0.35 (P = 0.0001). Adding NAFLD to ATP III criteria significantly improved their diagnostic accuracy for insulin resistance. Furthermore, NAFLD independently predicted HOMA-IR, nitrotyrosine, and soluble adhesion molecules on logistic regression analysis; the presence of NAFLD entailed more severe oxidative stress and endothelial dysfunction, independent of adiposity or any feature of the metabolic syndrome in insulin-resistant subjects. Conclusions: NAFLD is more tightly associated with insulin resistance and with markers of oxidative stress and endothelial dysfunction than with ATP III criteria in nonobese nondiabetic subjects and may help identify individuals with increased cardiometabolic risk in this population. abstract_id: PUBMED:19913850 The possible role of liver steatosis in defining metabolic syndrome in prepubertal children. Insulin resistance is a key component of the metabolic syndrome (MS) and is strongly associated with liver steatosis. Our aim was to evaluate whether MS should be diagnosed already in obese prepubertal children and whether its prevalence is influenced by the inclusion of hepatic steatosis as a diagnostic criterion. Eighty-nine obese children (43 boys; age median [range], 8.5 [6-10] years) were enrolled. Metabolic syndrome was diagnosed according to a classic definition: presence of 3 or more of the following criteria: body mass index greater than 2 standard deviation score, triglycerides greater than the 95th percentile, high-density lipoprotein cholesterol less than the fifth percentile, blood pressure greater than the 95th percentile, and impaired glucose tolerance. Afterward, liver steatosis was included as an additional criterion to this definition. Metabolic syndrome was diagnosed in 12 children (13.5%) according to the first definition and in 18 children (20.2%) when liver steatosis was included. The prevalence of MS increased across homeostasis model assessment of insulin resistance tertiles (P for trend = .01). The prevalence of the single components of the MS was as follows: obesity, 100%; hypertriglyceridemia, 27%; low high-density lipoprotein cholesterol, 2.2%; hypertension, 34.8%; impaired glucose tolerance, 4.5%; and nonalcoholic fatty liver disease, 21.3%. In conclusion, MS is common already among prepubertal obese children, particularly when liver steatosis is included among the diagnostic criteria. Therefore, screening for the MS should be performed in this age group, and hepatic steatosis should be considered as an additional diagnostic criterion. abstract_id: PUBMED:37322380 A simpler definition of MAFLD precisely predicts incident metabolic diseases: a 7-year cohort study. Background: Metabolic dysfunction-associated fatty liver disease (MAFLD) is a novel definition proposed in 2020 with a relatively complex set of criteria. Thus, simplified criteria that are more applicable are required. This study aimed to develop a simplified set of criteria for identifying MAFLD and predicting MAFLD-related metabolic diseases. Methods: We developed a simplified set of metabolic syndrome-based criteria for MAFLD, and compared the performance of the simplified criteria with that of the original criteria in predicting MAFLD-related metabolic diseases in a 7-year follow-up.
Results: In the 7-year cohort, a total of 13,786 participants, including 3372 (24.5%) with fatty liver, were enrolled at baseline. Of the 3372 participants with fatty liver, 3199 (94.7%) met the MAFLD-original criteria, 2733 (81.0%) met the simplified criteria, and 164 (4.9%) were metabolically healthy and met neither of the criteria. During 13,612 person-years of follow-up, 431 (16.0%) fatty liver individuals newly developed T2DM, with an incidence rate of 31.7 per 1000 person-years. Participants who met the simplified criteria had a higher risk of incident T2DM than those who met the original criteria. Similar results were observed for incident hypertension and incident carotid atherosclerotic plaque. Conclusion: The MAFLD-simplified criteria are an optimized risk stratification tool for predicting metabolic diseases in fatty liver individuals. abstract_id: PUBMED:18445722 Should nonalcoholic fatty liver disease be included in the definition of metabolic syndrome? A cross-sectional comparison with adult treatment panel III criteria in nonobese nondiabetic subjects: response to Musso et al. N/A Answer: The inclusion of nonalcoholic fatty liver disease (NAFLD) in the definition of metabolic syndrome (MetS) is a topic of ongoing debate. Several studies and reviews suggest that NAFLD is closely associated with the components of MetS and may serve as an independent cardiovascular risk factor, which could justify its inclusion in the MetS definition. One study found that NAFLD more accurately predicted insulin resistance than the Adult Treatment Panel III (ATP III) criteria of MetS in nonobese nondiabetic subjects, suggesting that NAFLD may help identify individuals with increased cardiometabolic risk in this population (PUBMED:18056890). Another study indicated that including NAFLD in the MetS definition could identify a greater proportion of the population with metabolic risk, potentially allowing for early intervention and reducing the burden on the global healthcare system (PUBMED:36710531).
Instruction: Do Children and Adolescents with Anorexia Nervosa Display an Inefficient Cognitive Processing Style? Abstracts: abstract_id: PUBMED:26133552 Do Children and Adolescents with Anorexia Nervosa Display an Inefficient Cognitive Processing Style? Objective: This study aimed to examine neuropsychological processing in children and adolescents with Anorexia Nervosa (AN). The relationship of clinical and demographic variables to neuropsychological functioning within the AN group was also explored. Method: The performance of 41 children and adolescents with a diagnosis of AN was compared to 43 healthy control (HC) participants on a number of neuropsychological measures. Results: There were no differences in IQ between AN and HC groups. However, children and adolescents with AN displayed significantly more perseverative errors on the Wisconsin Card Sorting Test, and lower Style and Central Coherence scores on the Rey Osterrieth Complex Figure Test relative to HCs. Conclusion: Inefficient cognitive processing in the AN group was independent of clinical and demographic variables, suggesting it might represent an underlying trait for AN. The implications of these findings are discussed. abstract_id: PUBMED:27083433 Cognitive performance in children with acute early-onset anorexia nervosa. When anorexia nervosa (AN) occurs in children below the age of 14 years, it is referred to as early-onset AN (EO-AN). In recent years, there has been an increased focus on the role of cognitive functioning in the development and maintenance of AN. Adults with AN show inefficiencies in cognitive functions such as flexibility and central coherence. Systematic neuropsychological examinations of patients with EO-AN are missing. Thirty children with EO-AN and 30 adolescents with AN, as well as 60 healthy controls (HC) underwent an extensive neuropsychological examination. ANOVAs with post hoc tests and explorative regression analyses were conducted. Patients with EO-AN (mean age = 12.17 ± 1.57 years) showed no significant differences in flexibility, inhibition, planning, central coherence, visuospatial short- and long-term memory or recognition in comparison to HC (mean age = 11.62 ± 1.29 years). Performance of adolescents with AN (mean age = 15.93 ± 0.70 years) was not significantly different compared to HC (mean age = 16.20 ± 1.26 years). Explorative regression analyses revealed a significant interaction of age and group for flexibility (adjusted R² = 0.30, F = 17.85, p = 0.013, ηp² = 0.32). Contrary to expectations, the current study could not confirm the presence of inefficient cognitive processing in children with EO-AN compared to HC. Nonetheless, the expected age-related improvement of flexibility might be disrupted in children and adolescents with AN. Longitudinal neuropsychological examinations are necessary to provide more information about the role of cognitive functioning in the development and maintenance of AN. abstract_id: PUBMED:25504443 Cognitive profile of children and adolescents with anorexia nervosa. Objective: Few studies of cognitive functioning in children and adolescents with anorexia nervosa (AN) have been conducted. The aim of this study was to examine the neurocognitive and intelligence profile of this clinical group. Method: The study was a matched case-control (N = 188), multi-centre study including children and adolescents with AN (N = 94) and healthy control participants (N = 94).
Results: The results suggest that Full Scale Intelligence Quotient (Wechsler Intelligence Scale for Children-III/Wechsler Adult Intelligence Scale-III) in this patient group is close to the normal population mean of 100. Individuals with AN exhibited significantly worse performance in nonverbal intelligence functions (i.e. Wechsler Intelligence Scale for Children-III/Wechsler Adult Intelligence Scale-III, Perceptual Organization Index) and in verbal memory (Test of Memory and Learning-Second Edition, Memory for Stories) and motor speed (Cambridge Neuropsychological Test Automated Battery, Simple and Choice Reaction Time) compared with healthy control participants. No significant difference in set-shifting ability (Cambridge Neuropsychological Test Automated Battery, Intra-Extra Dimensional Set Shift and Trail Making Test B) was found. Conclusions: Inefficiency in nonverbal intelligence functions and in specific cognitive functions was found in this study of children and adolescents with AN. abstract_id: PUBMED:36510231 Sensory processing in children and adolescents shortly after the onset of anorexia nervosa: a pilot study. Background: Alterations in sensory processing, such as vision, taste, and interoceptive sensation, have been reported in adult anorexia nervosa (AN). Whether these symptoms are traits, states, or "scars" due to chronic starvation has not been fully established. Based on the hypothesis that alterations in sensory processing also occur in adolescent AN in the early stages of the disease, the present study was conducted using both self-administered and parent-administered sensory processing questionnaires. Methods: Children and adolescents with anorexia nervosa treated at a single tertiary eating disorder treatment center in Japan (AN group) and female junior high school students attending a public junior high school in Saitama Prefecture, Japan (healthy control group: HC group) were included in the study. The Sensory Profile (SP) and Adult/Adolescent Sensory Profile (AASP) were administered to the participants and their caregivers. In addition, we collected demographic data and administered the Children's Eating Attitude Test and Autism-Spectrum Quotient Children's version. Results: Seventeen children and adolescents were enrolled in the AN group, and 63 were enrolled in the HC group. There was no statistically significant difference between the AN and HC groups in the quadrant scores of the AASP. In the SP, the Sensory Avoiding score and the Emotional/Social response score were higher in the AN group than in the HC group. Conclusion: From the parents' point of view, the patient avoids unexpected sensory stimuli, but the patients are unaware of their own avoiding behavior in the early stages of the disease. The results suggest that sensory sensitivity in AN may be a "scar" symptom due to chronic starvation and a state symptom. Longitudinal studies from shortly after the onset with larger sample sizes are needed to gain insight into the dynamic relation between sensory processing and eating disorder pathology. abstract_id: PUBMED:24636282 Cognitive remediation therapy for children and adolescents with anorexia nervosa in France: an exploratory study. Objective: Cognitive remediation therapy (CRT) has attracted increasing interest in the treatment of anorexia nervosa in adult patients.
We attempted to apply this approach to a group of young inpatients, initially to assess its feasibility and acceptability, and then to improve its content for therapeutic application and future research. Methods: Ten 12- to 17-year-old inpatients with primary DSM-IV diagnosis of anorexia nervosa participated in a 10-week intervention program with a one-hour group session of CRT per week. All 10 patients were assessed before the intervention and those who completed the 10 sessions were assessed after. Assessment included a clinical examination by a psychiatrist, a battery of clinical inventories, and set-shifting tests. Moreover, each patient wrote a letter providing feedback on the intervention for subsequent analysis. Results: Only two patients completed all 10 sessions; the other eight, who were discharged from the hospital in the meantime, could not attend the sessions for practical reasons. After the 10 sessions, an improvement in BMI and in measured levels of some psychopathological symptoms was observed in our two patients. Most neuropsychological task performances were improved after cognitive remediation. Feedback from the 10 patients was generally positive. Conclusion And Implications For Practice: This preliminary investigation suggests that cognitive remediation therapy is acceptable and feasible in this population. Replication of these findings requires a larger sample, improvement of the trial design, more sensitive measures, and another training format to avoid loss of so many participants. abstract_id: PUBMED:38333943 The importance of improving cognitive flexibility in adolescents with anorexia nervosa: The case for cognitive remediation therapy. Objective: Inefficiencies in executive functioning (EF), more specifically cognitive flexibility and an overly detailed processing style, are frequently observed in individuals with Anorexia Nervosa (AN) and have been identified as potential targets in treatment. Cognitive Remediation Therapy (CRT) is an adjunctive treatment approach specifically designed to have a positive impact on EF. Mainly evaluated in adults, CRT has been criticized for its perceived ineffectiveness in promoting weight restoration or directly reducing eating disorder symptoms. Method: We argue that we need to refocus our conceptual framework for using CRT as an adjunctive treatment and specifically explore its potential benefit in adolescents. Results: Adolescence is a critical window for EF development during which CRT has the potential to have the most impact. While it may not specifically ameliorate eating disorder symptoms and directly improve weight gain, CRT may mitigate the impact of malnutrition on adolescent brain development, reduce attrition rates in treatment, and improve cognitive flexibility and (indirectly) other maintaining factors, thereby improving global functioning. Discussion: More research needs to be done to understand the development of EF in adolescents with AN and how best to employ CRT as an adjunctive treatment to support development and target maintaining factors. The current article broadly reviews findings on executive functioning inefficiencies in adolescents with AN and discusses the purpose and role of CRT in treating AN. Finally, we highlight key critiques of using CRT and pose questions for future research. Public Significance: Treatments targeting executive functioning in adolescents with AN are limited. We need to better understand how CRT can benefit adolescents in treatment.
Increasing treatment options, including adjunctive treatments, is necessary to reduce the long-term impact of AN. abstract_id: PUBMED:27062554 Are Weight Status and Cognition Associated? An Examination of Cognitive Development in Children and Adolescents with Anorexia Nervosa 1 Year after First Hospitalisation. Objective: The aim of this study was to characterise the association between the cognitive profile and weight restoration in children and adolescents with anorexia nervosa. Methods: The study was a longitudinal, matched case-control, multicentre study. An assessment of cognitive functions was conducted by using the Wechsler Intelligence Scale for Children-III/the Wechsler Adult Intelligence Scale-III, the Test of Memory and Learning-second edition, Trail Making Tests A and B, the Rey-Osterrieth Complex Figure Test and the Cambridge Neuropsychological Test Automated Battery. Results: One hundred twenty individuals, 60 patients with anorexia nervosa with mean age of 14.65 (SD 1.820) years and 60 healthy controls with mean age of 14.76 (SD 1.704) years, participated. No association was found between weight recovery and cognitive functions. However, a significant increase in motor speed was found in Trail Making Test A (p = 0.004), Reaction Time (RTI) five-choice movement time (p = 0.002) and RTI simple movement time (p = 0.011), resulting in a normalisation corresponding to that found in healthy controls. Furthermore, a significantly lower score in the perceptual organization index (p = 0.029) was found at follow-up. Conclusions: Weight recovery appears not to be associated with cognition. abstract_id: PUBMED:26563611 Is inefficient cognitive processing in anorexia nervosa a familial trait? A neuropsychological pilot study of mothers of offspring with a diagnosis of anorexia nervosa. Objectives: Inefficient set shifting and poor global processing are thought to be possible traits in anorexia nervosa (AN). This study aimed to investigate the neuropsychological processing style of unaffected mothers of offspring with AN (unaffected AN mothers). Methods: The performance of 21 unaffected AN mothers was compared to 20 mothers of healthy control offspring on neuropsychological measures of set shifting (Wisconsin Card Sorting Test, WCST) and central coherence (Fragmented Pictures Task, FPT, and Rey Osterrieth Complex Figures Task, ROCFT). Associations between neuropsychological performance and clinical measures were examined in the unaffected AN mothers group. Results: There were significant differences in perseverative errors on the WCST (P ≤ 0.01), with the unaffected mothers displaying a more inflexible thinking style compared to the control group. There were also significant differences on the FPT (P ≤ 0.01) and the ROCFT (P ≤ 0.01), whereby unaffected AN mothers showed lower levels of global processing. Conclusions: The results of this study support the idea of the familial nature of cognitive styles in AN. The implications of these findings are discussed. abstract_id: PUBMED:28743295 Group cognitive remediation therapy for younger adolescents with anorexia nervosa: a feasibility study in a Japanese sample.
Objective: Cognitive remediation therapy (CRT) aims to increase patients' cognitive flexibility by practicing new ways of thinking as well as facilitating bigger picture thinking, supporting patients with relevant tasks and encouraging an awareness of their own thinking styles. CRT has been applied in the treatment of adult anorexia nervosa (AN), and has been shown to be effective and acceptable. In adolescents, CRT has been piloted in both individual and group formats. However, no studies have been published on CRT for adolescents with AN in a Japanese sample. The objectives of this study were to assess the feasibility, to estimate effect sizes for the purpose of designing a larger study, and to assess the acceptability of a CRT group for younger adolescents with AN in a Japanese sample. Methods: Group CRT interventions were carried out with a total of seven adolescents with AN. Neuropsychological and psychological assessments (motivation, self-efficacy and depression) were administered before and after the group intervention. The participants completed worksheets (documents of participants' thinking about their thinking style and the relation of the skills that they learnt through each session to real-life) and questionnaires after the group. Results: There were small effect-size differences between pre- and post-intervention scores on some of the neuropsychological tests and on ability to change (motivation). There were medium effect-size differences in depressive symptoms and in importance to change (motivation). There was a large effect size for the change in weight. All participants were able to reflect on their own thinking styles, such as having difficulty with changing feelings and the tendency to focus on details in real-life. Adolescents' feedback was positive, and the rate of dropout was low. Conclusion: CRT groups could be feasible and acceptable for younger adolescents with AN in a Japanese sample. Trial registration UMIN No. 000020623. Registered 18 January 2016. abstract_id: PUBMED:33539993 The shifting perspectives study protocol: Cognitive remediation therapy as an adjunctive treatment to family based treatment for adolescents with anorexia nervosa. Background: Adolescents with anorexia nervosa have set-shifting inefficiencies that can be exacerbated by starvation and that may interfere with outcomes of treatment interventions. Cognitive Remediation Therapy (CRT), an adjunctive treatment focused on improving set-shifting, can target inefficiencies and may augment treatment effectiveness. The best way to add CRT to the standard of care (Family Based Treatment, FBT) for adolescents with anorexia remains understudied. Methods/design: This is a randomized controlled trial designed to determine if CRT is effective in increasing flexibility in adolescents with anorexia and/or their parents. Participants are adolescents 12-18 years old with anorexia and their parents. Fifty-four family groups will be randomized into one of three groups: FBT only, FBT plus Parent-focused CRT, or FBT plus Adolescent-focused CRT. Psychosocial, neurocognitive, and behavioral measures will be collected throughout the study. Discussion: This is the first study of its kind to apply CRT to parents. All forms of CRT in the context of anorexia have targeted the individual with anorexia's thinking style.
We propose that it may be impactful to target the parent of the adolescent with anorexia, as parents carry the burden of treatment and re-nourishment of their child during FBT and may have similar thinking styles. Conclusion: This study takes an experimental therapeutics approach to further our understanding of the mechanisms of treatment for adolescents with anorexia. It focuses on increasing cognitive flexibility in patients or their parents and determining the appropriate dose of CRT needed to achieve positive change. Trial Registration: ClinicalTrials.gov Identifier NCT03928028. Answer: Yes, children and adolescents with anorexia nervosa (AN) have been found to display an inefficient cognitive processing style in some studies. For instance, a study comparing children and adolescents with AN to healthy controls (HC) found that those with AN displayed significantly more perseverative errors on the Wisconsin Card Sorting Test and lower Style and Central Coherence scores on the Rey Osterrieth Complex Figure Test relative to HCs, suggesting inefficient cognitive processing independent of clinical and demographic variables (PUBMED:26133552). Another study found inefficiencies in nonverbal intelligence functions and specific cognitive functions such as verbal memory and motor speed in children and adolescents with AN compared to healthy controls (PUBMED:25504443). However, another study examining children with early-onset AN (EO-AN) did not find significant differences in flexibility, inhibition, planning, central coherence, visuospatial short- and long-term memory, or recognition when compared to healthy controls, although it suggested that the expected age-related improvement of flexibility might be disrupted in children and adolescents with AN (PUBMED:27083433). This indicates that while some studies have found evidence of inefficient cognitive processing in AN, the findings are not entirely consistent across all cognitive domains or age groups. Furthermore, cognitive remediation therapy (CRT) has been explored as a treatment to address cognitive inefficiencies in AN. CRT aims to improve cognitive flexibility and has been shown to be feasible and acceptable in younger adolescents with AN in a Japanese sample (PUBMED:28743295). Another study suggested that CRT is acceptable and feasible for children and adolescents with AN, although it called for larger samples and improved trial designs for further research (PUBMED:24636282). Additionally, the importance of improving cognitive flexibility in adolescents with AN has been highlighted, with CRT being discussed as a potential adjunctive treatment to support development and target maintaining factors (PUBMED:38333943). In summary, while there is evidence to suggest that children and adolescents with AN may display an inefficient cognitive processing style, the findings are not uniform across all studies, and further research is needed to fully understand the cognitive profile of this population and the potential role of interventions like CRT.
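Since the feasibility study above (PUBMED:28743295) reports small, medium, and large effect sizes without stating the formula used, the sketch below shows one common way a pre/post effect size (Cohen's d) is computed; the formula choice and the toy numbers are illustrative assumptions, not data or methods taken from the study. By Cohen's conventional benchmarks, d of roughly 0.2 is small, 0.5 medium, and 0.8 large.

import statistics

def cohens_d_pre_post(pre, post):
    # One common variant: mean change divided by the SD pooled across the two
    # time points (assumption; the study does not state which variant it used).
    mean_change = statistics.mean(post) - statistics.mean(pre)
    pooled_sd = ((statistics.variance(pre) + statistics.variance(post)) / 2) ** 0.5
    return mean_change / pooled_sd

# Toy numbers only (not study data): weight in kg before and after the group.
pre = [38.2, 40.1, 36.5, 39.0, 41.3, 37.8, 38.9]
post = [40.0, 41.5, 38.4, 40.2, 43.0, 39.5, 40.1]
print(f"d = {cohens_d_pre_post(pre, post):.2f}")  # ~1.02, large by the benchmarks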
Instruction: Is female sex associated with ED delays to diagnosis of appendicitis in the computed tomography era? Abstracts: abstract_id: PUBMED:19683117 Is female sex associated with ED delays to diagnosis of appendicitis in the computed tomography era? Background: Historically, females had delays to definitive diagnosis of appendicitis when compared to males. In this current millennium, appendicitis is now most commonly diagnosed by computed tomography (CT) in the emergency department (ED) rather than at surgery. Objective: The aim of the study was to assess if female gender is still associated with delays to diagnosis of appendicitis in the CT era. Methods: A retrospective cohort analysis of adult patients with appendicitis at a university teaching hospital ED was conducted. Inclusion criteria were age greater than 18 years and an International Classification of Diseases, Ninth Revision (ICD-9), diagnosis of appendicitis. Patients were excluded from analysis if they were pregnant, no CT scan was obtained in the ED, or had incomplete outcome data. Results: One hundred thirty-seven patients met inclusion criteria; 65 females, 72 males. Time from triage to CT order was 138 minutes in females and 95 minutes in males (P = .0012). Time from initial physician evaluation to CT order was 45 minutes in females and 28 minutes in males (P = .0012). Nonclassic symptoms were more common in females and pelvic evaluation did not delay the CT order. Conclusion: Female gender is still associated with delays to CT acquisition and diagnosis of appendicitis. abstract_id: PUBMED:17543650 The use of the clinical scoring system by Alvarado in the decision to perform computed tomography for acute appendicitis in the ED. Study Objective: Appendicitis is part of the differential of an acute abdomen and can be a difficult diagnosis to make. Strategies to suggest which patients presenting to the emergency department (ED) should undergo computed tomography (CT) scan to confirm appendicitis have not been addressed. We develop guidelines for CT scanning based on Alvarado clinical scores for patients with suspected and confirmed cases of appendicitis. Methods: We retrospectively reviewed 150 charts of patients aged 7 years and older who presented with abdominal pain to the ED of a 392-bed acute care facility over a 6-month period, were evaluated by ED physicians, and underwent CT to rule out appendicitis. Patient demographics, presenting signs, and symptoms were documented. Using the scoring system for appendicitis, developed by Alvarado, each chart was retrospectively scored. The Alvarado scores were correlated with positive pathology findings, as well as Alvarado scores with a negative CT scan. Equivocal scores, having neither high sensitivity nor specificity for appendicitis, were calculated. Results: Computed tomography scans with Alvarado scores of 3 or lower were performed in 37% (55/150) of patients to rule out appendicitis. The sensitivity of Alvarado scores 3 or lower for not having appendicitis was 96.2% (53/55), and the specificity 67% (2/3). Patients with Alvarado scores 7 or higher had an incidence of acute appendicitis of 77.7% (28/36). The sensitivity of Alvarado scores 7 or higher for appendicitis was 77% (28/36), and the specificity 100% (8/8). The sensitivity of equivocal Alvarado scores, defined as scores of 4 to 6, for acute appendicitis was 35.6% (21/59), and the specificity 94% (36/38).
The sensitivity and specificity of CT scans in patients with equivocal Alvarado scores remained high, at 90.4% and 95%, respectively. Conclusions: In the equivocal clinical presentation of appendicitis as defined by Alvarado scores of 4 to 6, adjunctive CT is recommended to confirm the diagnosis in the ED setting. If clinical presentation suggests acute appendicitis by an Alvarado score of 7 or higher, surgical consultation is recommended. Computed tomography is not indicated in patients with Alvarado scores of 3 or lower to diagnose acute appendicitis. abstract_id: PUBMED:26810023 Pylephlebitis Complicating Acute Appendicitis: Prompt Diagnosis with Contrast-Enhanced Computed Tomography. Background: Pylephlebitis, a rare complication of abdominal infections, is a septic thrombophlebitis of the portal venous system with high rates of morbidity and mortality. Case Report: We present a case of pylephlebitis complicating acute appendicitis and report the utility of a computed tomography scan in the diagnosis. Why Should an Emergency Physician Be Aware of This?: The prompt diagnosis and appropriate treatment of pylephlebitis are crucial to reduce morbidity and mortality, but clinical presentation is often nonspecific. A computed tomography scan is instrumental in the early diagnosis of pylephlebitis because it readily reveals the thrombus in the mesenteric or portal vein in the setting of acute appendicitis. Early and aggressive treatment with broad-spectrum antibiotics is necessary, and anticoagulation therapy can also be used to prevent bowel ischemia. abstract_id: PUBMED:26041371 Diverticulitis of the appendix, a distinctive entity: preoperative diagnosis by computed tomography. Diverticular disease of the appendix is rare and is usually diagnosed during surgery. We report a case of a 50-year-old man who presented to the emergency department with right lower quadrant pain of 1-day duration. A preoperative diagnosis of appendiceal diverticulitis was made by computed tomography of the abdomen. The patient underwent emergency laparoscopic appendectomy, which confirmed the diagnosis. abstract_id: PUBMED:28418367 The use of computed tomography in the diagnosis of acute appendicitis. Aim: To present the use of computed tomography in the diagnosis of acute appendicitis. Material And Methods: The study was conducted in 139 patients who were hospitalized with suspected acute appendicitis. However, the diagnosis was not clearly confirmed by clinical and ultrasonic data. Results: Computed tomography confirmed or rejected the presumptive diagnosis in 90.8% of patients. It was concluded that CT has high diagnostic accuracy to confirm acute appendicitis. abstract_id: PUBMED:11303204 The use of helical computed tomography in pregnancy for the diagnosis of acute appendicitis. Objective: Accurate diagnosis of acute appendicitis in pregnancy by clinical evaluation is difficult. A safe, reliable test was sought to decrease a delay in diagnosis and to avoid unnecessary invasive procedures. A helical or spiral computed tomographic technique has proven to be a very accurate test in the nonobstetric population for the identification of acute appendicitis. We report its use in pregnant patients with suspected acute appendicitis. Study Design: All pregnant patients who were undergoing helical computed tomography at our institution from April 1997 to February 1998 for the suspected clinical diagnosis of acute appendicitis were retrospectively reviewed.
Helical computed tomography was performed by standard departmental protocol. A positive study was reported if an enlarged appendix, which did not fill with contrast material, was present with periappendiceal inflammatory changes. Outcomes were determined by the results of surgery and pathologic examination or clinical follow-up. Results: Seven patients were identified in the study period. Two patients had positive findings on helical computed tomography, and acute appendicitis was confirmed at laparotomy and by pathologic inspection. There were no further prenatal complications and both patients delivered at term. Five patients had a normal-appearing appendix on helical computed tomography, and all of these patients had resolution of their pain and symptoms. Conclusion: Helical computed tomography appears to be a useful, noninvasive test to accurately diagnose acute appendicitis in pregnancy. abstract_id: PUBMED:22633722 Intravenous contrast alone vs intravenous and oral contrast computed tomography for the diagnosis of appendicitis in adult ED patients. Objective: When the diagnosis of appendicitis is uncertain, computerized tomography (CT) scans are frequently ordered. Oral contrast is often used but is time consuming and of questionable benefit. This study compared CT with intravenous contrast alone (IV) to CT with IV and oral contrast (IVO) in adult patients with suspected appendicitis. Methods: This is a prospective, randomized study conducted in a community teaching emergency department (ED). Patients with suspected appendicitis were randomized to IV or IVO CT. Scans were read independently by 2 designated study radiologists blinded to the clinical outcome. Surgical pathology was used to confirm appendicitis in patients who went to the operating room (OR). Discharged patients were followed up via telephone. The primary outcome measure was the diagnosis of appendicitis. Secondary measures included time from triage to ED disposition and triage to OR. Results: Both IV (n = 114) and IVO (n = 113) scans had 100% sensitivity (95% confidence interval [CI], 89.3-100 and 87.4-100, respectively) and negative predictive value (95% CI, 93.7-100 and 93.9-100, respectively) for appendicitis. Specificity of IV and IVO scans was 98.6 and 94.9 (95% CI, 91.6-99.9 and 86.9-98.4), respectively, with positive predictive values of 97.6 and 89.5 (95% CI, 85.9-99.9 and 74.2-96.6). Median times to ED disposition and OR were 1 hour and 31 minutes (P < .0001) and 1 hour and 10 minutes (P = .089) faster for the IV group, respectively. Patients with negative IV scans were discharged nearly 2 hours faster (P = .001). Conclusions: Computerized tomography scans with intravenous contrast alone have comparable diagnostic performance to IVO scans for appendicitis in adults. Patients receiving IV scans are discharged from the ED faster than those receiving IVO scans.
The on-call radiologist preoperatively analyzed these computed tomography images. In this study, the radiologists were divided into experienced (consultants) and inexperienced (residents) ones, and the comparison of interpretations was made between these two radiologist groups. Results: Out of the 1065 patients, 714 had acute appendicitis and 351 had other or no diagnosis on computed tomography. There were 700 true-positive, 327 true-negative, 14 false-positive, and 24 false-negative cases. The sensitivity and the specificity of computed tomography were 96.7% (95% confidence interval, 95.1-97.8) and 95.9% (95% confidence interval, 93.2-97.5), respectively. The rate of false computed tomography diagnosis was 4.2% for experienced consultant radiologists and 2.2% for inexperienced resident radiologists (p = 0.071). Thus, the experience of the radiologist had no effect on the accuracy of computed tomography diagnosis. Conclusion: The accuracy of computed tomography in diagnosing acute appendicitis was high. The experience of the radiologist did not improve the diagnostic accuracy. The results emphasize the role of computed tomography as an accurate modality in daily routine diagnostics for acute appendicitis in all clinical emergency settings. abstract_id: PUBMED:21084934 Sonography and computed tomography of the mimics of appendicitis. There are numerous entities that can mimic acute appendicitis. Ultrasound and computed tomography are the most common first-line, cross-sectional imaging modalities in the acute care setting. Ideally, imaging will either confirm appendicitis or exclude it by identifying a normal appendix. In the latter scenario, an alternate diagnosis can frequently be established that range from genitourinary, gastrointestinal, to even abdominal wall processes. Imaging is especially helpful in cases of patients presenting with atypical signs/symptoms for acute appendicitis and those presenting with a classic presentation where an alternative diagnosis is determined. The correct diagnosis will allow the most appropriate clinical management and therapy; specifically, avoiding nonindicated surgery is essential. Common and uncommon mimics of acute appendicitis are discussed with specific attention to their sonographic and computed tomographic appearances. abstract_id: PUBMED:26511881 Effect of time delays for appendectomy as observed on computed tomography in patients with noncomplicated appendicitis. Objectives: Delaying appendectomy is a controversial issue. This study aimed at identifying the effect of time delays in surgery, especially for patients with noncomplicated appendicitis on computed tomography (CT). Methods: Postappendectomy patients were analyzed from October 2013 to August 2014. Among the patients, data of those with findings of noncomplicated appendicitis on CT were gathered and the following time parameters were reviewed: CT examination time and appendectomy time. Other basic information and postoperative complications were checked. Patients were divided into a noncomplicated appendicitis group and a complicated appendicitis group. Perforated appendicitis and periappendiceal abscesses were regarded as complicated appendicitis. All other appendicitis, from simple and focal to suppurative and gangrenous, was regarded as noncomplicated appendicitis. Results: In total, 397 patients were enrolled. The mean age was 33.2 years and the number of male/female patients was 216:181, respectively.
The mean times from CT to operation were 5.9 vs 6.3 hours for the noncomplicated and complicated appendicitis groups, respectively (P=.758). The time from CT to operation showed no statistical relation to occurrences of complicated appendicitis or to postoperative outcomes such as ileus, wound complications, and length of hospital stay. Conclusions: The time from CT to operation has no effect on the outcomes of appendicitis. Further study in a large-scale, multicenter setting might yield more reliable results. Answer: Yes, female sex is associated with emergency department (ED) delays to diagnosis of appendicitis in the computed tomography (CT) era. A study conducted at a university teaching hospital ED found that the time from triage to CT order was significantly longer for females (138 minutes) compared to males (95 minutes), and the time from initial physician evaluation to CT order was also longer for females (45 minutes) compared to males (28 minutes). These findings suggest that despite the widespread use of CT for diagnosing appendicitis, female patients still experience delays in the acquisition of CT and diagnosis of appendicitis (PUBMED:19683117).
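All of the test statistics quoted in the appendicitis abstracts above derive from a 2x2 table of test results against final diagnosis. A short Python sketch using the counts reported in PUBMED:28929862 (700 true positives, 327 true negatives, 14 false positives, 24 false negatives); note that the predictive values are derived here for illustration and are not figures reported in that abstract:

# Counts from the PUBMED:28929862 abstract (CT for suspected appendicitis).
tp, tn, fp, fn = 700, 327, 14, 24

sensitivity = tp / (tp + fn)  # 700/724 = 0.967, the reported 96.7%
specificity = tn / (tn + fp)  # 327/341 = 0.959, the reported 95.9%
ppv = tp / (tp + fp)          # 700/714 = 0.980 (derived, not reported)
npv = tn / (tn + fn)          # 327/351 = 0.932 (derived, not reported)
print(f"sens={sensitivity:.1%} spec={specificity:.1%} ppv={ppv:.1%} npv={npv:.1%}")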
Instruction: Irrigation of port sites: prevention of port site metastases? Abstracts: abstract_id: PUBMED:23741110 Analysis of laparoscopic port site complications: A descriptive study. Context: The rate of port site complications following conventional laparoscopic surgery is about 21 per 100,000 cases. It has shown a proportional rise with increase in the size of the port site incision and trocar. Although rare, complications that occur at the port site include infection, bleeding, and port site hernia. Aims: To determine the morbidity associated with ports at the site of their insertion in laparoscopic surgery and to identify risk factors for complications. Settings And Design: Prospective descriptive study. Materials And Methods: In the present descriptive study, a total of 570 patients who underwent laparoscopic surgeries for various ailments between August 2009 and July 2011 at our institute were observed for port site complications prospectively and the complications were reviewed. Statistical Analysis Used: Descriptive statistical analysis was carried out in the present study. The statistical software, namely, SPSS 15.0 was used for the analysis of the data. Results: Of the 570 patients undergoing laparoscopic surgery, 17 (3%) had developed complications specifically related to the port site during a minimum follow-up of three months; port site infection (PSI) was the most frequent (n = 10, 1.8%), followed by port site bleeding (n = 4, 0.7%), omentum-related complications (n = 2; 0.35%), and port site metastasis (n = 1, 0.175%). Conclusions: Laparoscopic surgeries are associated with minimal port site complications. Complications are related to the increased number of ports. Umbilical port involvement is the commonest. Most complications are manageable with minimal morbidity, and can be further minimized with meticulous surgical technique during entry and exit. abstract_id: PUBMED:22919132 Is port site metastasis a result of systemic involvement? Aims: Port site metastasis (PSM) is an unwelcome consequence of laparoscopy for oncological procedures with uncertain etiology. We present the clinical evidence to prove that PSM is likely to be due to the hematogenous spread of tumor cells. Materials And Methods: Six cases of port site metastasis, four following laparoscopic radical nephrectomy for localized renal cell carcinoma (RCC), one after laparoscopic radical prostatectomy, done in two patients, and one after laparoscopic partial cystectomy for tumor at the bladder dome, done in two, were studied. One case of metastatic RCC with bilateral gluteal masses and two cases of open radical nephrectomy, with recurrence at the drain and incision site, were also studied. Results: During the median follow-up of 59 months (range 24-120), 4/136 patients with RCC (1.47%) developed port site metastasis between 7 and 36 months after surgery. All six cases of PSM had advanced disease and recurrences at other sites, that is, peritoneum, omentum, bones, and lungs. None of the patients had isolated PSM. One patient with metastatic RCC, who had no intervention other than repeated intramuscular injections of analgesics, developed bilateral gluteal masses, confirmed to be RCC on fine needle aspiration cytology. Two patients had metastasis at the incision site (one at the drain site) with distant metastases, including cutaneous metastases. Conclusions: Port site metastasis did not develop in isolation.
Circulating tumor cells are likely present at the time of penetrating surgical trauma, that is, at port or injection sites, and manifest in some patients depending upon their immune response. abstract_id: PUBMED:15245662 Irrigation of port sites: prevention of port site metastases? Introduction: Port site metastases can occur when free viable tumor cells implant at trocar wounds. Irrigation of port sites with cytotoxic agents has been suggested to prevent port site metastases. The objective of this study is to assess whether tumor growth at port sites can be reduced by irrigation of these port sites. Methods: WAG rats were insufflated with CO2 for 20 minutes and 5 × 10^5 CC531 tumor cells were injected intraperitoneally. Port sites were irrigated after completion of the pneumoperitoneum with povidone-iodine, a mixture of taurolidine and heparin, or sodium chloride. Controls did not undergo any irrigation of port sites. In experiment 1, all 16 rats had all 4 irrigation modalities. In experiment 2, four groups of 20 rats had one type of irrigation on two trocar wounds. Tumor growth was evaluated 4 weeks after the procedure. Results: No difference in tumor growth at trocar wounds was found between any type of irrigation and controls in both experiments. Conclusion: In this experimental model, no beneficial or adverse effects of irrigation of port sites could be shown. abstract_id: PUBMED:34707953 Diagnosis of Secondary Sclerosing Cholangitis by Port Site Metastasis. Port site metastasis is an uncommon but challenging pathological entity whereby metastatic cancer is discovered at the operative port site after surgery. Secondary sclerosing cholangitis is a multifocal stricture disease of the biliary system as the result of extra-biliary pathology; rarely, it is due to an infiltrative disorder such as neoplasia. This is the first reported case of secondary sclerosing cholangitis that was diagnosed with metastatic cancer following the discovery of port site metastasis after laparoscopic cholecystectomy. abstract_id: PUBMED:23133108 Port site metastasis after laparoscopic cholecystectomy. We report a case of a 62-year-old female who developed port site metastasis one year after undergoing laparoscopic cholecystectomy for calculous cholecystitis. This is only the second such report in the Indian literature. abstract_id: PUBMED:24371643 Port-site recurrence in a patient undergoing robot-assisted gynecologic cancer surgery for endometrial cancer - A case report. ► First case of port-site metastasis after robotic staging surgery for uterine cancer. ► Changes to robotic surgical technique to reduce risk of port-site recurrence. ► Further areas of investigation worth examining in this aspect of robotic surgery. abstract_id: PUBMED:33116825 The Pathogenesis and Prevention of Port-Site Metastasis in Gynecologic Oncology. Port-site metastasis (PSM) is a specific and challenging complication of laparoscopic gynecologic oncologic procedures. Research has demonstrated that PSM is associated with significant morbidity and poor outcome. The exact pathogenesis of PSM in gynecologic patients is not clear. Several preventive measures of PSM have been addressed in the relevant literature, and novel approaches to prevent this uncommon complication keep emerging. In this review, we summarize the potential mechanisms of PSM and discuss the controversies and merits of proposed preventive measures of PSM in gynecologic oncology.
We undertook a literature search using the Medline database to identify studies on the pathogenesis and prevention of laparoscopic PSM. Hypotheses of PSM pathogeneses center on the immune response, pneumoperitoneum, wound contamination, and surgical method. Cogent evidence of effective prevention of PSM after laparoscopic surgery is lacking. Traditional preventive actions such as irrigation and tumor manipulation should be taken individually. Insufflation of hyperthermic CO2 and humidified CO2 leads to a better outcome in patients with a malignant tumor who undergo a laparoscopic procedure compared with normal CO2 pneumoperitoneum. Port-site resection shows no advantage in survival and results in more wound events. PSM prevention plays a crucial part in the overall care of patients with gynecologic malignancies who undergo laparoscopic procedures. abstract_id: PUBMED:24371608 Port-site metastasis following robotic-assisted radical hysterectomy for squamous cell cervical cancer. ► Port-site metastases can occur following treatment for cervical cancer. ► Port-site metastases can occur following robotic assisted laparoscopic surgery. ► The pathogenesis of port-site metastases is poorly understood. abstract_id: PUBMED:33879998 Port-Site Metastasis in Gynecological Malignancies. Background: Minimally invasive oncologic surgery has become the standard of care in many gynecologic cancers. While laparoscopic surgery provides many benefits to patients, such as faster recovery, there are unique challenges associated with minimally invasive techniques. Port-site metastasis is a rare complication after laparoscopic oncologic surgery in the management of gynecologic malignancies. Methods: We present the case of a 44-year-old female with isolated port-site recurrence following laparoscopic radical hysterectomy with node-negative, clinical stage IB1 cervical adenocarcinoma. In addition, we provide an updated review of the literature on management and oncologic outcomes of port-site metastasis. Conclusion: Port-site metastasis prevention necessitates a better understanding of underlying risk factors and pathophysiology in order to optimize outcomes. Future studies are needed on risk-reducing strategies and standardization of management for port-site metastasis. abstract_id: PUBMED:26793764 Delayed and clinically isolated port site carcinosarcoma recurrence as an early indicator of disseminated disease. A 71-year-old woman with suspected endometrial cancer underwent robotic-assisted hysterectomy, bilateral salpingo-oophorectomy, pelvic and para-aortic lymph node dissection, and infracolic omentectomy revealing a stage II uterine carcinosarcoma with components of serous adenocarcinoma and undifferentiated spindle cell sarcoma. There was no evidence of distant metastasis at the time of surgery. However, pelvic washings were positive for malignant cells. She received adjuvant chemotherapy and vaginal cuff brachytherapy. Forty months later she developed a subcutaneous mass at the location of a previous port site, which was confirmed to be recurrence of the uterine primary. She subsequently developed additional distant metastases to the abdominal wall, lungs, and bone. Port site metastasis (PSM) was the earliest indicator of disseminated metastatic disease in this patient. We review challenges in the management of patients with PSM and propose that PSM be considered as a sign of systemic disease even when presenting as an apparently isolated recurrence.
Answer: The question of whether irrigation of port sites can prevent port site metastases (PSM) has been explored in the literature. In an experimental study involving WAG rats, port sites were irrigated with different agents, including povidone-iodine, a mixture of taurolidine and heparin, or sodium chloride, after the introduction of tumor cells and CO2 insufflation. The control group did not undergo any irrigation. The results showed no difference in tumor growth at trocar wounds between any type of irrigation and controls, suggesting that in this experimental model, irrigation of port sites did not have beneficial or adverse effects on the prevention of PSM (PUBMED:15245662). This finding is significant as PSM is a specific and challenging complication of laparoscopic gynecologic oncologic procedures, associated with significant morbidity and poor outcomes. The exact pathogenesis of PSM in gynecologic patients is not clear, and while several preventive measures have been proposed, there is a lack of cogent evidence for effective prevention after laparoscopic surgery. Traditional preventive actions such as irrigation and tumor manipulation should be considered individually, and novel approaches continue to emerge (PUBMED:33116825). In conclusion, based on the available experimental evidence, irrigation of port sites does not appear to prevent PSM. However, the pathogenesis of PSM is complex and not fully understood, and further research is needed to explore other potential preventive measures and to better understand the underlying risk factors and pathophysiology of PSM in order to optimize outcomes (PUBMED:33879998).
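For readers who want to see the shape of the group comparison in the irrigation experiment: tumor growth at a trocar wound is a binary outcome, so two arms of 20 rats each (the group size in experiment 2 of PUBMED:15245662) can be compared with Fisher's exact test. The sketch below uses hypothetical counts, since the abstract reports only that no difference was found, and the choice of test is our assumption rather than the authors' stated method.

from scipy.stats import fisher_exact

# Hypothetical counts (NOT from the study): rats with tumor growth at trocar
# wounds, out of 20 rats per arm, irrigated versus non-irrigated controls.
irrigated_with_tumor, irrigated_n = 13, 20
control_with_tumor, control_n = 15, 20

table = [
    [irrigated_with_tumor, irrigated_n - irrigated_with_tumor],
    [control_with_tumor, control_n - control_with_tumor],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # no difference expected here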
Instruction: Prematurity, Opioid Exposure and Neonatal Pain: Do They Affect the Developing Brain? Abstracts: abstract_id: PUBMED:25871803 Prematurity, Opioid Exposure and Neonatal Pain: Do They Affect the Developing Brain? Background: Traditionally, 10 years ago, children born preterm often routinely received morphine, especially during mechanical ventilation. Studies in neonatal rats, whose stage of brain development roughly corresponds to that of children born preterm, found negative long-term effects after pain and opioid exposure. Objectives: We studied possible effects of prematurity, procedural pain and opioids in humans 10 years later. We hypothesized that these factors would negatively influence neurobiological, neuropsychological and sensory development later in life. Methods: We included 19 children born preterm who as neonates participated in an RCT on the short-term effects of morphine administration and who previously participated in our follow-up studies at ages 5 and 8/9 years. We assessed associations between brain morphology (n = 11), neuropsychological functioning (n = 19) and thermal sensitivity (n = 17) and prematurity, opioid exposure and neonatal pain. Results: Significant correlations (coefficients 0.60-0.85) of gestational age, number of painful procedures and morphine exposure with brain volumes were observed. Significant correlations between these factors and thermal sensitivity were not established. Neuropsychological outcome was moderately but significantly correlated with morphine exposure in only two subtests, and children in general performed 'average' by Dutch norms. Conclusions: Although prematurity, opioid exposure and neonatal pain were significantly associated with brain volume, no major associations with neuropsychological functioning or thermal sensitivity were detected. Our findings suggest that morphine administration during neonatal life does not affect neurocognitive performance or thermal sensitivity during childhood in children born preterm without brain damage during early life. Future studies with larger sample sizes are needed to confirm these findings. abstract_id: PUBMED:25319840 Risk predictors of opioid-induced critical respiratory events in children: naloxone use as a quality measure of opioid safety. Background: Opioid-induced respiratory depression (OIRD) is a life-threatening complication of opioid therapy in children. Naloxone administration triggered by OIRD has been used to monitor safety of opioid therapy in adults. We used this trigger as a quality measure of opioid safety in hospitalized children to identify risk predictors of OIRD. Methods: We retrospectively reviewed medical records of 38 patients identified from the hospital risk management database as requiring naloxone for critical respiratory events between January 2010 and June 2012 for demographics, comorbidities, surgery, naloxone event details, and outcomes. These data were compared with baseline prevalence in contemporary patients followed by the pain service, who did not receive naloxone, to calculate unadjusted odds ratios. Thematic classification of preventable events was undertaken based on analysis of each event. Results: The incidence of naloxone use among hospital inpatients, who received opioids at least once, was 0.06% compared with 0.23% for patients on the pain service. A majority of naloxone events occurred in postoperative patients (n = 27/38, 71.1%) within the first 24 hours of surgery (n = 20/27, 75.1%) and in the critical care unit (50%).
Patients undergoing airway surgeries had higher risk for OIRD (P = 0.01). Patient risk factors for naloxone use included age <1 year (P < 0.001), obstructive sleep apnea (P < 0.001), obesity (P = 0.019), being underweight (P < 0.0001), prematurity (P < 0.001), and developmental delay (P < 0.001). The majority of events (87%) were found to be preventable; these were classified into six main themes based on the type of event. Conclusion: OIRD is an important, albeit mostly preventable, complication of opioid therapy in children. Naloxone use can be used as a measure to track opioid safety in children, identify contributing factors, and formulate preventive strategies to reduce the risk for OIRD. abstract_id: PUBMED:18547548 Effect of caffeine and morphine on the developing premature brain. Apnea of prematurity is common, occurring in 85% of infants born at less than 34 weeks' gestation. Oral caffeine is the most frequent form of therapy, often in conjunction with the use of intubation and intermittent ventilation. Morphine is used to reduce the pain believed to be associated with the latter. Little information is available on the effects of caffeine, morphine, or their combination on the developing brain. We determined the effect of caffeine and morphine, alone and in combination, on cell death in the developing brain of the rat. Cell death, measured by Fluoro-jade B and activated caspase-3, was significantly increased at 12 and 24 hours post-caffeine injection (P < 0.05) in the cortex, caudate, nucleus accumbens, hypothalamus, hippocampus and superior colliculus. No alterations were seen following morphine injection alone. However, in the thalamus, the combination of caffeine and morphine did increase cell death to a significantly greater extent than caffeine alone. Further research is required to determine the long-term pathologic and functional effects of caffeine and the combination of caffeine and morphine on the developing immature brain. abstract_id: PUBMED:24228168 Neonatal pain in very preterm infants: long-term effects on brain, neurodevelopment and pain reactivity. Effects of early life psychosocial adversity have received a great deal of attention, such as maternal separation in experimental animal models and abuse/neglect in young humans. More recently, long-term effects of the physical stress of repetitive procedural pain have begun to be addressed in infants hospitalized in neonatal intensive care. Preterm infants are more sensitive to pain and stress, which cannot be distinguished in neonates. The focus of this review is clinical studies of long-term effects of repeated procedural pain-related stress in the neonatal intensive care unit (NICU) in relation to brain development, neurodevelopment, programming of stress systems, and later pain sensitivity in infants born very preterm (24-32 weeks' gestational age). Neonatal pain exposure has been quantified as the number of invasive and/or skin-breaking procedures during hospitalization in the NICU. Emerging studies provide convincing clinical evidence for an adverse impact of neonatal pain/stress in infants at a time of physiological immaturity, rapidly developing brain microstructure and networks, as well as programming of the hypothalamic-pituitary-adrenal axis. Currently it appears that early pain/stress may influence the developing brain and thereby neurodevelopment and stress-sensitive behaviors, particularly in the most immature neonates.
However, there is no evidence for greater prevalence of pain syndromes compared to children and adults born healthy at full term. In addressing associations between pain/stress and outcomes, careful consideration of confounding clinical factors related to prematurity is essential. The need for pain management for humanitarian care is widely advocated. Non-pharmacological interventions to help parents reduce their infant's stress may be brain-protective. abstract_id: PUBMED:38488791 Pain Exposure and Brain Connectivity in Preterm Infants. Importance: Early-life exposure to painful procedures has been associated with altered brain maturation and neurodevelopmental outcomes in preterm infants, although sex-specific differences are largely unknown. Objective: To examine sex-specific associations among early-life pain exposure, alterations in neonatal structural connectivity, and 18-month neurodevelopment in preterm infants. Design, Setting, And Participants: This prospective cohort study recruited 193 very preterm infants from April 1, 2015, to April 1, 2019, across 2 tertiary neonatal intensive care units in Toronto, Canada. Structural connectivity data were available for 150 infants; neurodevelopmental outcomes were available for 123 infants. Data were analyzed from January 1, 2022, to December 31, 2023. Exposure: Pain was quantified in the initial weeks after birth as the total number of invasive procedures. Main Outcome And Measure: Infants underwent early-life and/or term-equivalent-age magnetic resonance imaging with diffusion tensor imaging to quantify structural connectivity using graph theory measures and regional connection strength. Eighteen-month neurodevelopmental outcomes were assessed with the Bayley Scales of Infant and Toddler Development, Third Edition. Stratifying by sex, generalized estimating equations were used to assess whether pain exposure modified the maturation of structural connectivity using an interaction term (early-life pain exposure × postmenstrual age [PMA] at scan). Generalized estimating equations were used to assess associations between structural connectivity and neurodevelopmental outcomes, adjusting for extreme prematurity and maternal education. Results: A total of 150 infants (80 [53%] male; median [IQR] gestational age at birth, 27.1 [25.4-29.0] weeks) with structural connectivity data were analyzed. Sex-specific associations were found between early-life pain and neonatal brain connectivity in female infants only, with greater early-life pain exposure associated with slower maturation in global efficiency (pain × PMA at scan interaction P = .002) and local efficiency (pain × PMA at scan interaction P = .005). In the full cohort, greater pain exposure was associated with lower global efficiency (coefficient, -0.46; 95% CI, -0.78 to -0.15; P = .004) and local efficiency (coefficient, -0.57; 95% CI, -1.04 to -0.10; P = .02) and regional connection strength. Local efficiency (coefficient, 0.003; 95% CI, 0.001-0.004; P = .005) and regional connection strength in the striatum were associated with cognitive outcomes. Conclusions And Relevance: In this cohort study of very preterm infants, greater exposure to early-life pain was associated with altered maturation of neonatal structural connectivity, particularly in female infants. Alterations in structural connectivity were associated with neurodevelopmental outcomes, with potential regional specificities. abstract_id: PUBMED:26670662 Comment on the Paper by van den Bosch et al.
Entitled 'Prematurity, Opioid Exposure and Neonatal Pain: Do They Affect the Developing Brain': The Impact of Subtle Messaging. N/A abstract_id: PUBMED:35903331 Pharmacokinetics of Cannabis and Its Derivatives in Animals and Humans During Pregnancy and Breastfeeding. Cannabis is one of the most widely used illicit drugs during pregnancy and lactation. With the recent legalization of cannabis in many countries, health professionals are increasingly exposed to pregnant and breastfeeding women who are consuming cannabis on a regular basis as a solution for depression, anxiety, nausea, and pain. Cannabis consumption during pregnancy can induce negative birth outcomes such as reduced birth weight and increased risk of prematurity and admission to the neonatal intensive care unit. Yet, limited information is available regarding the pharmacokinetics of cannabis in the fetus and newborn exposed during pregnancy and lactation. Indeed, the official recommendations regarding the use of cannabis during these two critical development periods lack robust pharmacokinetics data and make it difficult for health professionals to guide their patients. Many clinical studies are currently evaluating the effects of cannabis on brain development and rely mostly on questionnaires to define their study groups. These studies should be paired with pharmacokinetic studies to assess correlations between infant brain development and exposure to cannabis during pregnancy and breastfeeding. Our project aims to review the available data on the pharmacokinetics of cannabinoids in adults, neonates, and animals. While the available literature is abundant for adult humans and animals, there is still a lack of published data on the exposure of pregnant and lactating women and neonates. However, some of the published information raises concerns about the exposure and the potential effects of cannabis on fetuses and neonates. The safety of cannabis use for non-medical purposes during pregnancy and breastfeeding needs to be further characterized with proper pharmacokinetic studies in humans, feasible in regions where cannabis has been legalized. Given the available data, significant transfer occurs to the fetus and the breastfed newborn, with a theoretical risk of accumulation of products known to be biologically active. abstract_id: PUBMED:30975474 Dysmaturation of Premature Brain: Importance, Cellular Mechanisms, and Potential Interventions. Prematurity, especially preterm birth (less than 32 weeks' gestation), is common and associated with high rates of both survival and neurodevelopmental disability, especially apparent in cognitive spheres. The neuropathological substrate of this disability is now recognized to be related to a variety of dysmaturational disturbances of the brain. These disturbances follow initial brain injury, particularly cerebral white matter injury, and involve many of the extraordinary array of developmental events active in cerebral white and gray matter structures during the premature period. This review delineates these developmental events and the dysmaturational disturbances that occur in premature infants. The cellular mechanisms involved in the genesis of the dysmaturation are emphasized, with particular focus on the preoligodendrocyte. A central role for the diffusely distributed activated microglia and reactive astrocytes in the dysmaturation is now apparent.
As these dysmaturational cellular mechanisms appear to occur over a relatively long time window, interventions to prevent or ameliorate the dysmaturation, that is, neurorestorative interventions, seem possible. Such interventions include pharmacologic agents, especially erythropoietin, and particular attention has also been paid to such nutritional factors as quality and source of milk, breastfeeding, polyunsaturated fatty acids, iron, and zinc. Recent studies also suggest a potent role for interventions directed at various experiential factors in the neonatal period and infancy, i.e., provision of optimal auditory and visual exposures, minimization of pain and stress, and a variety of other means of environmental behavioral enrichment, in enhancing brain development. abstract_id: PUBMED:36870076 Systemic opioids versus other analgesics and sedatives for postoperative pain in neonates. Background: Neonates may undergo surgery because of malformations such as diaphragmatic hernia, gastroschisis, congenital heart disease, and hypertrophic pyloric stenosis, or complications of prematurity, such as necrotizing enterocolitis, spontaneous intestinal perforation, and retinopathy of prematurity, that require surgical treatment. Options for treatment of postoperative pain include opioids, non-pharmacological interventions, and other drugs. Morphine, fentanyl, and remifentanil are the opioids most often used in neonates. However, a negative impact of opioids on the structure and function of the developing brain has been reported. The assessment of the effects of opioids is of utmost importance, especially for neonates in substantial pain during the postoperative period. Objectives: To evaluate the benefits and harms of systemic opioid analgesics in neonates who underwent surgery on all-cause mortality, pain, and significant neurodevelopmental disability compared to no intervention, placebo, non-pharmacological interventions, different types of opioids, or other drugs. Search Methods: We searched Cochrane CENTRAL, MEDLINE via PubMed and CINAHL in May 2021. We searched the WHO ICTRP, clinicaltrials.gov, and ICTRP trial registries. We searched conference proceedings and the reference lists of retrieved articles for RCTs and quasi-RCTs. Selection Criteria: We included randomized controlled trials (RCTs) conducted in preterm and term infants of a postmenstrual age up to 46 weeks and 0 days with postoperative pain where systemic opioids were compared to 1) placebo or no intervention; 2) non-pharmacological interventions; 3) different types of opioids; or 4) other drugs. Data Collection And Analysis: We used standard Cochrane methods. Our primary outcomes were pain assessed with validated methods, all-cause mortality during initial hospitalization, major neurodevelopmental disability, and cognitive and educational outcomes in children more than five years old. We used the fixed-effect model with risk ratio (RR) and risk difference (RD) for dichotomous data and mean difference (MD) for continuous data. We used GRADE to assess the certainty of evidence for each outcome. Main Results: We included four RCTs enrolling 331 infants in four countries across different continents. Most studies considered patients undergoing large or medium surgical procedures (including major thoracic or abdominal surgery), who potentially required pain control through opioid administration after surgery.
The randomized trials did not consider patients undergoing minor surgery (including inguinal hernia repair) or individuals exposed to opioids before the beginning of the trial. Two RCTs compared opioids with placebo; one fentanyl with tramadol; and one morphine with paracetamol. No meta-analyses could be performed because the included RCTs reported no more than three outcomes within the prespecified comparisons. Certainty of the evidence was very low for all outcomes due to imprecision of the estimates (downgrade by two levels) and study limitations (downgrade by one level). Comparison 1: opioids versus no treatment or placebo. Two trials were included in this comparison, comparing either tramadol or tapentadol with placebo. No data were reported on the following critical outcomes: pain; major neurodevelopmental disability; or cognitive and educational outcomes in children more than five years old. The evidence is very uncertain about the effect of tramadol compared with placebo on all-cause mortality during initial hospitalization (RR 0.32, 95% Confidence Interval (CI) 0.01 to 7.70; RD -0.03, 95% CI -0.10 to 0.05, 71 participants, 1 study; I² = not applicable). No data were reported on retinopathy of prematurity or intraventricular hemorrhage. Comparison 2: opioids versus non-pharmacological interventions. No trials were included in this comparison. Comparison 3: head-to-head comparisons of different opioids. One trial comparing fentanyl with tramadol was included in this comparison. No data were reported on the following critical outcomes: pain; major neurodevelopmental disability; or cognitive and educational outcomes in children more than five years old. The evidence is very uncertain about the effect of fentanyl compared with tramadol on all-cause mortality during initial hospitalization (RR 0.99, 95% CI 0.59 to 1.64; RD 0.00, 95% CI -0.13 to 0.13, 171 participants, 1 study; I² = not applicable). No data were reported on retinopathy of prematurity or intraventricular hemorrhage. Comparison 4: opioids versus other analgesics and sedatives. One trial comparing morphine with paracetamol was included in this comparison. The evidence is very uncertain about the effect of morphine compared with paracetamol on COMFORT pain scores (MD 0.10, 95% CI -0.85 to 1.05; 71 participants, 1 study; I² = not applicable). No data were reported on the other critical outcomes, i.e., major neurodevelopmental disability, cognitive and educational outcomes in children more than five years old, all-cause mortality during initial hospitalization, retinopathy of prematurity, or intraventricular hemorrhage. Authors' Conclusions: Limited evidence is available on opioid administration for postoperative pain in newborn infants compared to either placebo, other opioids, or paracetamol. We are uncertain whether tramadol reduces mortality compared to placebo; none of the studies reported pain scores, major neurodevelopmental disability, cognitive and educational outcomes in children older than five years old, retinopathy of prematurity, or intraventricular hemorrhage. We are uncertain whether fentanyl reduces mortality compared to tramadol; none of the studies reported pain scores, major neurodevelopmental disability, cognitive and educational outcomes in children older than five years old, retinopathy of prematurity, or intraventricular hemorrhage.
We are uncertain whether morphine reduces pain compared to paracetamol; none of the studies reported major neurodevelopmental disability, cognitive and educational outcomes in children more than five years old, all-cause mortality during initial hospitalization, retinopathy of prematurity, or intraventricular hemorrhage. We identified no studies comparing opioids versus non-pharmacological interventions. abstract_id: PUBMED:24251423 Neonatal anesthesia with limited resources. Neonates are the most vulnerable age group in terms of anesthetic risk and perioperative mortality, especially in the developing world. Prematurity, malnutrition, delays in presentation, and sepsis contribute to this risk. Lack of healthcare workers, poorly maintained equipment, limited drug supplies, absence of postoperative intensive care, and unreliable water or electricity supplies are further contributory factors. Trained anesthesiologists with the skills required for pediatric and neonatal anesthesia, as well as basic monitoring equipment such as pulse oximetry, will go a long way to improve the unacceptably high anesthetic mortality. Answer: Prematurity, opioid exposure, and neonatal pain have been subjects of research concerning their potential effects on the developing brain. Studies have indicated that these factors can indeed influence brain development and neurodevelopmental outcomes in various ways. Research on children born preterm who received morphine as neonates during mechanical ventilation found significant correlations between gestational age, the number of painful procedures, morphine exposure, and brain volumes. However, no major associations with neuropsychological functioning or thermal sensitivity were detected in these children at ages 5 and 8/9 years. This suggests that while morphine administration during neonatal life is associated with brain volume, it may not affect neurocognitive performance or thermal sensitivity during childhood in children born preterm without early life brain damage (PUBMED:25871803). Another study focused on opioid-induced respiratory depression (OIRD) in children, which is a life-threatening complication of opioid therapy. The use of naloxone, an opioid antagonist, was proposed as a quality measure of opioid safety in hospitalized children. The study identified risk factors for naloxone use, including age under one year, obstructive sleep apnea, obesity, being underweight, prematurity, and developmental delay. The majority of OIRD events were found to be preventable (PUBMED:25319840). In terms of the effects of caffeine and morphine on the developing premature brain, one study found that caffeine increased cell death in various brain regions of neonatal rats, while morphine alone did not. However, the combination of caffeine and morphine increased cell death in the thalamus more than caffeine alone, indicating a need for further research on the long-term effects of these substances on the immature brain (PUBMED:18547548). Clinical studies have also provided evidence for the adverse impact of neonatal pain/stress on brain development, neurodevelopment, and programming of stress systems, particularly in very preterm infants (24-32 weeks' gestational age). Early pain/stress may influence the developing brain and thereby neurodevelopment and stress-sensitive behaviors, especially in the most immature neonates. However, there is no evidence for a greater prevalence of pain syndromes compared to children and adults born healthy at full term (PUBMED:24228168).
A prospective cohort study found sex-specific associations between early-life pain exposure and alterations in neonatal structural connectivity in preterm infants, with greater pain exposure associated with slower maturation in global efficiency and local efficiency in female infants only. Furthermore, alterations in structural connectivity were associated with neurodevelopmental outcomes, suggesting that early-life pain exposure can have a significant impact on brain development, particularly in female preterm infants (PUBMED:38488791). While the impact of opioids on the developing brain is a concern, the evidence on the effects of systemic opioid analgesics in neonates who underwent surgery is limited. A Cochrane review included four RCTs that compared opioids with placebo, different types of opioids, or other drugs for postoperative pain in neonates. The review concluded that the evidence is very uncertain about the effects of opioids on outcomes such as mortality, pain, and major neurodevelopmental disability due to the imprecision of estimates and study limitations (PUBMED:36870076). In summary, prematurity, opioid exposure, and neonatal pain have been associated with changes in brain volume and structural connectivity, which may influence neurodevelopmental outcomes. However, the evidence is not conclusive, and further research is needed to fully understand the long-term impacts of these factors on the developing brain. Additionally, the use of opioids for pain management in neonates requires careful consideration due to potential risks and the need for more robust data on their safety and efficacy.
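For readers unfamiliar with the effect measures quoted from the Cochrane review, the risk ratio (RR) and risk difference (RD) for a dichotomous outcome are simple arithmetic on the two groups' event rates. The short Python sketch below illustrates the computation; the trial counts in it are hypothetical round numbers, not data from any of the included studies.

    # Illustrative sketch only: risk ratio (RR) and risk difference (RD),
    # the effect measures for dichotomous outcomes named in the Cochrane
    # review. The counts below are hypothetical, not trial data.

    def rr_and_rd(events_tx: int, n_tx: int, events_ctrl: int, n_ctrl: int):
        risk_tx = events_tx / n_tx          # event rate in the treated group
        risk_ctrl = events_ctrl / n_ctrl    # event rate in the control group
        return risk_tx / risk_ctrl, risk_tx - risk_ctrl

    rr, rd = rr_and_rd(events_tx=4, n_tx=50, events_ctrl=10, n_ctrl=50)
    print(f"RR = {rr:.2f}, RD = {rd:.2f}")  # RR = 0.40, RD = -0.12

An RR below 1 and a negative RD both indicate fewer events in the treated group; the wide confidence intervals quoted in the review reflect how few events the included trials observed.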
Instruction: Frequency of critical stenosis in primary arteriovenous fistulae before hemodialysis access: should duplex ultrasound surveillance be the standard of care? Abstracts: abstract_id: PUBMED:33163714 A Multicenter Randomized Clinical Trial of Hemodialysis Access Blood Flow Surveillance Compared to Standard of Care: The Hemodialysis Access Surveillance Evaluation (HASE) Study. Introduction: Arteriovenous (AV) access thrombosis remains one of the most troubling AV access-related complications affecting hemodialysis patients. It necessitates an urgent and occasionally complicated thrombectomy procedure and increases the risk of AV access loss. AV access stenosis is found in the majority of thrombosed AV accesses. The routine use of AV access surveillance for the early detection and management of stenosis to reduce the thrombosis rate remains controversial. Methods: We conducted a multicenter, prospective, randomized clinical trial comparing the standard of care coupled with monthly ultrasound dilution technique (UDT) flow measurement surveillance with the standard of care alone. Results: We prospectively randomized 436 patients with end-stage renal disease on hemodialysis with arteriovenous fistula (AVF) or arteriovenous graft (AVG) using cluster (shift) randomization to surveillance and control groups. There were no significant differences in the baseline demographic data between the 2 groups, except for ethnicity (P = 0.017). Patients were followed on average for 15.2 months. There were significantly fewer per-patient thrombotic events (Poisson rate) in the surveillance group (0.12/patient) compared with the control group (0.23/patient) (P = 0.012). There was no statistically significant difference in the total number of procedures between the 2 groups, irrespective of whether thrombectomy procedures were included or excluded, and no statistically significant differences in the rate of or time to the first thrombotic event or the number of catheters placed due to thrombosis. Conclusion: The use of monthly UDT flow measurement AV access surveillance in this multicenter randomized controlled trial reduced per-patient thrombotic events without significantly increasing the total number of angiographic procedures. Even though there is a trend, surveillance did not reduce the first thrombotic event rate. abstract_id: PUBMED:36864960 The role of hemodialysis access duplex ultrasound for evaluation of patency and access surveillance. The rise in the prevalence of end-stage renal disease (ESRD) and its impact on health care have resulted in an increasing focus on the delivery of vascular access. Hemodialysis vascular access is the most common renal replacement therapy method. The vascular access types include arteriovenous fistula, arteriovenous graft, and tunneled central venous catheters. Vascular access function remains an important outcome measure with significant impact on morbidity and health care cost. The survival and quality of life of patients on hemodialysis are dependent on the adequacy of dialysis through proper vascular access. Early detection of failure of the vascular access to mature, stenosis, thrombosis, and aneurysm or pseudoaneurysm formation remains crucial. Ultrasound can help identify complications, even though ultrasound evaluation of the arteriovenous access is less well defined. Some published vascular access guidelines support ultrasound for detecting stenosis.
Ultrasound technology has improved throughout the years, with both multiparametric top-line systems and hand-held systems now available. Ultrasound evaluation is inexpensive, rapid, noninvasive, and repeatable, making it a powerful tool for early diagnosis. The ultrasound image quality still depends on the skill of the operator. Careful attention to technical details is needed, and avoidance of several diagnostic pitfalls is necessary. This review is focused on the role of ultrasound for hemodialysis access surveillance, evaluation of maturation, detection of access complications, and aid with cannulation. abstract_id: PUBMED:15944600 Frequency of critical stenosis in primary arteriovenous fistulae before hemodialysis access: should duplex ultrasound surveillance be the standard of care? Objective: Increasing use of primary arteriovenous fistulae (pAVFs) is a desired goal in hemodialysis patients (National Kidney Foundation/Dialysis Outcomes Quality Initiative guidelines). However, in many instances, pAVFs fail to adequately mature due to ill-defined mechanisms. We therefore investigated pAVFs with color duplex ultrasound (CDU) surveillance 4 to 12 weeks postoperatively to identify hemodynamically significant abnormalities that may contribute to pAVF failure. Methods: From March 2001 to October 2003, 54 upper extremity pAVFs were subjected to CDU assessment before access. A peak systolic velocity ratio (SVR) of ≥2:1 was used to detect ≥50% stenosis involving arterial inflow and venous outflow, whereas an SVR of ≥3:1 was used to detect ≥50% anastomotic stenosis. CDU findings were compared with preoperative vein mapping and postoperative fistulography when available. Results: Of 54 pAVFs, there were 23 brachiocephalic, 14 radiocephalic, and 17 basilic vein transpositions. By CDU surveillance, 11 (20%) were occluded and 14 (26%) were negative. Twenty-nine (54%) pAVFs had 38 hemodynamically significant CDU abnormalities. These included 16 (42%) venous outflow, 13 (34%) anastomotic, and 2 (5%) inflow stenoses. In seven (18%), branch steal with reduced flow was found. In 35 of 54 (65%) pAVFs, preoperative vein mapping was available and demonstrated adequate vein size (≥3 mm) and outflow in 86% of cases. Twenty-one fistulograms (38%) were available for verifying the CDU abnormalities. In each fistulogram, the arterial inflow, anastomosis, and venous outflow were compared with the CDU findings (63 segments). The sensitivity, specificity, and accuracy of CDU in detecting pAVF stenoses ≥50% were 93%, 94%, and 97%, respectively. Conclusions: Before initiation of hemodialysis, an unexpectedly high prevalence of critical stenoses was found in patent pAVFs using CDU surveillance. These de novo stenoses appear to develop rapidly after arterialization of the upper extremity superficial veins and can be reliably detected by CDU surveillance. Turbulent flow conditions in pAVFs may play a role in inducing progressive vein wall and valve leaflet intimal thickening, although stenoses may be due to venous abnormalities that predate AVF placement. Routine CDU surveillance of pAVFs should be considered to identify and correct flow-limiting stenoses that may compromise pAVF long-term patency and use. abstract_id: PUBMED:24143120 Hemodialysis catheter implantation in the axillary vein by ultrasound guidance versus palpation or anatomical reference.
Background: We compared the results of four different methods of hemodialysis catheter insertion in the medial segment of the axillary vein: ultrasound guidance, palpation, anatomical reference, and prior transient catheter. Methods: We included all patients who required acute or chronic hemodialysis and for whom it was determined impossible or not recommended either to place a catheter in the internal jugular vein (for instance, patients with a tracheostomy) or to create an arteriovenous fistula or graft, making an alternative vascular access essential. When the procedure of axillary vein catheter insertion was performed in the Renal Care Facility (RCF), ultrasound guidance was used, but in the intensive care unit (ICU), this resource was unavailable, so the palpation or anatomical reference technique was used. Results: Two nephrologists with experience in the technique performed 83 procedures during a period lasting 15 years and 8 months (from January 1997 to August 2012): 41 by ultrasound guidance; 19 by anatomical references; 15 by palpation of the contiguous axillary artery; and 8 through a temporary axillary catheter previously placed. The ultrasound-guided patients had fewer punctures than the other groups, but the difference was not statistically significant. Arterial punctures were infrequent with all techniques. Analyzing all the procedure-related complications, such as hematoma, pneumothorax, and brachial-plexus injury, as well as the reasons for catheter removal, no differences were observed among the groups. The functioning time was longer in the ultrasound-guided and previous catheter groups. In 15 years and 8 months of surveillance, no clinical or imaging evidence of axillary vein stenosis was found. Conclusion: Ultrasound guidance makes the procedure of inserting catheters in the axillary veins easier, but knowledge of the anatomy of the midaxillary region and the ability to feel the axillary artery pulse (for the palpation method) also allow relatively easy and successful implantation of catheters in the axillary veins. abstract_id: PUBMED:38166508 FLOW: Flow dysfunction of hemodialysis vascular access: A randomized controlled trial on the effectiveness of surveillance of arteriovenous fistulas and grafts. Introduction: It is assumed that identification and correction of asymptomatic stenoses in the vascular access circuit will prevent thrombosis that would require urgent intervention to continue hemodialysis treatment. However, the evidence base for this assumption is limited. Recent international clinical practice guidelines reach different conclusions on the use of surveillance for vascular access flow dysfunction and recommend further research to inform clinical practice. Methods: The FLOW trial is a double-blind, multicenter, randomized controlled trial with a 1:1 individual participant treatment allocation ratio over two study arms. In the intervention group, only symptomatic vascular access stenoses detected by clinical monitoring are treated, whereas in the comparison group asymptomatic stenoses detected by surveillance using monthly dilution flow measurements are treated as well. Hemodialysis patients with a functional arteriovenous vascular access are enrolled. The primary outcome is the access-related intervention rate, which will be analyzed using a general linear model with a Poisson distribution. Secondary outcomes include patient satisfaction, access-related serious adverse events, and quality of the surveillance process.
A cost-effectiveness analysis and a budget impact analysis will also be conducted. The study requires 828 patient-years of follow-up in 417 participants to detect a difference of 0.25 access-related interventions per year between study groups. Discussion: As one of the largest randomized controlled trials assessing the clinical impact of vascular access surveillance using a strong double-blinded study design, we believe the FLOW trial will provide much-needed evidence to improve vascular access care for hemodialysis patients. abstract_id: PUBMED:30393327 Development of an AVF Stenosis Assessment Tool for Hemodialysis Patients Using Robotic Ultrasound System. With the aging population and lifestyle changes, the number of hemodialysis (HD) patients increases year by year. The arteriovenous fistula (AVF) is the gold standard vascular access used to access the blood for HD treatment. Since the status of stenosis affects HD efficiency, current clinical practices usually use a Doppler ultrasound imaging system to assess the parameters of the stenosis, such as the degree of stenosis (DOS). Unfortunately, this is a very time-consuming task. Furthermore, it is difficult to stably maintain the ultrasound probe for a prolonged period to give doctors clearer or reproducible images. In this study, a robotic ultrasound system (RUS) with ultrasound sequential imaging analysis was designed to evaluate the DOS of the AVF. The sequential imaging analysis was capable of image smoothing and vessel boundary detection. It enabled clinicians to mark the thickness of the plaque for further processing. Finally, the system was used to reconstruct 3D models of fistulas and calculate the DOS for clinical assessment. We also designed a pressure sensing module attached to the ultrasound probe to prevent the probe from coming loose, vibrating, and exerting abnormal pressure on the skin. In the phantom test, the error of the DOS calculated by the RUS was less than 3%. The results of clinical trials obtained from ten patients show that the error between the RUS and clinicians' measurements was about 10%, with a highly linear correlation (R square > 0.95). In addition, the reproducibility error was about 3%, and the system could effectively save about 46% of the time during clinical examinations. abstract_id: PUBMED:21326504 Surveillance of hemodialysis vascular access. A mature, functional arteriovenous (AV) access is the lifeline for a hemodialysis (HD) patient as it provides sufficient blood flow for adequate dialysis. As the chronic kidney disease (CKD) and end-stage renal disease (ESRD) population is expanding, and because of the well-recognized hazardous complications of dialysis catheters, the projected placement and use of AV accesses for HD is on the rise. Although a superior access compared with catheters, AV accesses are not without complications. The primary complication that causes AV accesses to fail is stenosis with subsequent thrombosis. Surveying for stenosis can be performed in a variety of ways. Clinical monitoring, measuring flow, determining pressure, and measuring recirculation are all methods that show promise. In addition, stenosis can be directly visualized through noninvasive techniques such as color duplex imaging or through minimally invasive venography. Each method of screening has its advantages and disadvantages, and several studies exist that attempt to answer the question of which test is the most useful.
Ultimately, to maintain the functionality of the access for the HD patient, a team approach becomes imperative. The collaboration and cooperation of the patient, nephrologist, dialysis nurse and technician, vascular access coordinator, interventionalist, and vascular surgeon are necessary to preserve this lifeline. abstract_id: PUBMED:30719539 The Efficacy of Paclitaxel Drug-Eluting Balloon Angioplasty Versus Standard Balloon Angioplasty in Stenosis of Native Hemodialysis Arteriovenous Fistulas: An Analysis of Clinical Success, Primary Patency and Risk Factors for Recurrent Dysfunction. Purpose: To investigate the efficacy of paclitaxel drug-eluting balloons (PEB) versus standard balloon angioplasty (BA) in stenosis of native hemodialysis arteriovenous fistulas (AVFs). Materials And Methods: A total of 96 patients with ESRD (mean ± SD age 57.0 ± 9.1 years, 63.5% males) who underwent endovascular treatment with PEB angioplasty (n = 32) or BA (n = 64) for a dysfunctional native AVF were included. Clinical success, complications, primary patency, and postoperative recurrence parameters were recorded in each group. Results: The primary patency rate at 6 months was significantly higher in the PEB than in the BA group (96.9 vs. 20.3%, p < 0.001), while the two groups had similar primary patency rates at 9 months (66.8 vs. 50.0%) and 12 months (6.3% for each). No significant difference was noted between the PEB and BA groups in terms of the rate (21.9% vs. 31.3%), time (median 220 vs. 152.5 days), and reasons (reocclusion in 18.8 vs. 28.1%) for dysfunction recurrence, or the number of recurrent treatments. AVF dysfunction recurrence was more likely in younger AVFs (median 4 vs. 23 months, p < 0.001 in PEB, and 8.5 vs. 20.5 months, p = 0.001 in SBA) and in AVFs ≤ 6 months old in both the SBA and PEB groups (71.4 vs. 12.0%, p = 0.005 in PEB; 40.0 vs. 2.3%, p < 0.001). Conclusion: Our findings emphasize the favorable safety and efficacy of PEB and BA in the management of dysfunctional hemodialysis AVFs, with similar rates of post-PTA recurrence of AVF dysfunction. Nonetheless, there was a nonsignificant tendency toward a lower rate of, and a delay in, recurrent dysfunction in patients treated with PEB, and a significant association between younger AVF age and an increased risk of post-PTA recurrence of AVF dysfunction. Level Of Evidence: 3, Retrospective cohort study. abstract_id: PUBMED:37463647 Duplex ultrasound-guided angioplasty of hemodialysis vascular access. Objective: Duplex ultrasound-guided angioplasty (DA) for hemodialysis vascular accesses remains questionable regarding its feasibility and safety. Minor complications (requiring no more treatment than nominal therapy) might be over-reported. Our hypothesis was that observed rates for the main outcomes of this procedure would not differ significantly from the recommended thresholds defined by the reporting standards for arteriovenous fistula (AVF) angioplasty. Methods: In a single-center retrospective study, 298 DA procedures performed on 141 patients from 2015 to 2019 were analyzed. Occluded AVFs and procedures with concomitant radiographic guidance were excluded. Duplex ultrasound parameters were collected up to 1 month before, at the end of angioplasty, and on day 30 after the procedure. Complications were registered, and patency rates were studied at 24 months of follow-up. Results: Anatomical success was achieved in 142 procedures (47.7%), clinical success in 284 (95.3%), and hemodynamic success in 283 (95.0%).
Major complications (requiring at least a specific therapy) were reported in 8 procedures (2.7%) and minor complications (requiring no adjunctive therapy) in 157 (52.7%). At 24 months, overall postintervention primary patency was 34.0%, primary-assisted patency 87.4%, and secondary patency 92.5%. There were no significant differences in patency rates between groups with or without minor complications (P value for primary patency, 0.08; primary-assisted patency, 0.08; secondary patency, 0.23) or 30% residual stenosis (P value for primary patency, 0.82; primary-assisted patency, 0.46; secondary patency, 0.63). Duplex parameters further improved at postoperative day 30 after angioplasty. Conclusions: DA of AVF is feasible, safe (the over-reported minor complications had no impact on postintervention patency rates), and efficient. A minor complication can be seen as an event without bad or good consequences. The anatomical definition of success does not fit DA for hemodialysis vascular access. Further studies are required to define the duplex parameter threshold for efficacy. abstract_id: PUBMED:33896727 Arteriovenous Fistula Surveillance Using Tomographic 3D Ultrasound. Objective: A well-functioning arteriovenous fistula (AVF) is essential for haemodialysis. Despite regular duplex ultrasound (DUS), a significant number of AVFs fail. Tomographic 3D ultrasound (tUS) creates a 3D image of the AVF that can be interpreted by the clinician. DUS, tUS, and fistulograms were compared for the identification and measurement of flow-limiting stenosis. Methods: Patients with AVF dysfunction on routine Transonic surveillance, defined as (1) > 15% reduction in flow on two consecutive occasions, (2) > 30% reduction in flow on one occasion, (3) flow of < 600 mL/min, or (4) presence of recirculation, underwent DUS. AVF tUS imaging was performed prior to fistulography. All fistulograms were reported by the same consultant radiologist and tUS images by the same vascular scientist blinded to the fistulogram results. Maximum diameter reduction in all stenoses was measured using all three imaging techniques. Results: In 97 patients with 101 stenoses, the mean (± standard deviation [SD]) severity of stenosis was 63.0 ± 13.9%, 65.0 ± 11.6%, and 64.8 ± 11.7% for the fistulograms, DUS, and tUS, respectively. The mean (± SD) time between ultrasound and fistulography imaging was 15.0 ± 14.5 days. Assuming the fistulogram as the "gold standard", Bland-Altman agreement for DUS was -1.9 ± 15.5% (limit of agreement [LOA] -32.2 to 28.4) compared with -1.7 ± 15.4% (LOA -31.9 to 28.4) for tUS. Median (± interquartile range) time to complete the investigation was 09:00 ± 03:19 minutes for DUS and 03:13 ± 01:56 minutes for tUS (p < .001). Conclusion: DUS and tUS were equally accurate at detecting AVF complications, but tUS investigation required less skill and was significantly quicker than DUS. Answer: The question of whether duplex ultrasound surveillance should be the standard of care for detecting critical stenosis in primary arteriovenous fistulae before hemodialysis access is addressed in the abstract from PUBMED:15944600. The study found an unexpectedly high prevalence of critical stenoses in patent primary arteriovenous fistulae (pAVFs) using color duplex ultrasound (CDU) surveillance before the initiation of hemodialysis. Specifically, 54% of pAVFs had hemodynamically significant CDU abnormalities, including venous outflow, anastomotic, and inflow stenoses, as well as branch steal with reduced flow.
The sensitivity, specificity, and accuracy of CDU in detecting pAVF stenoses ≥50% were high, at 93%, 94%, and 97%, respectively. These findings suggest that routine CDU surveillance of pAVFs should be considered to identify and correct flow-limiting stenoses that may compromise pAVF long-term patency and use. The abstract supports the use of duplex ultrasound surveillance as a reliable method for detecting critical stenosis in pAVFs, which could potentially improve the maturation and long-term patency of these fistulae. Given the high prevalence of critical stenoses detected and the reliability of CDU surveillance, the study implies that duplex ultrasound surveillance could be beneficial as a standard of care in this context.
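As a side note on how the diagnostic performance figures quoted above are derived, sensitivity, specificity, and accuracy all come from a 2x2 comparison of the screening test (CDU) against the reference standard (fistulography). The Python sketch below shows the arithmetic; the segment counts are hypothetical, chosen only to yield percentages in the reported range, since the abstract gives the percentages but not the underlying table.

    # Illustrative sketch only: diagnostic test metrics from a 2x2 table.
    # The counts are hypothetical, not the cited study's data.

    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int):
        sensitivity = tp / (tp + fn)                 # stenosed segments correctly flagged
        specificity = tn / (tn + fp)                 # normal segments correctly cleared
        accuracy = (tp + tn) / (tp + fp + fn + tn)   # all segments correctly classified
        return sensitivity, specificity, accuracy

    # Hypothetical counts for 63 compared segments:
    sens, spec, acc = diagnostic_metrics(tp=13, fp=3, fn=1, tn=46)
    print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, accuracy {acc:.0%}")

Only the resulting percentages are reported in the abstracts; the underlying counts would be needed to reproduce them exactly.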
Instruction: Is herpes zoster a marker for occult or subsequent malignancy? Abstracts: abstract_id: PUBMED:15720930 Is herpes zoster a marker for occult or subsequent malignancy? Background: It has been suggested that herpes zoster may be a marker for occult malignancy. Aim: To examine the emergence of a subsequent cancer diagnosis in patients with and without herpes zoster. Design Of Study: Retrospective cohort study. Setting: Results were based on the database of Intego, an ongoing Belgian general practice-based morbidity registry, covering 37 general practitioners and including about 311 000 patient-years between 1994 and 2000. Method: Survival analysis comparing the emergence of malignancy in patients with and without herpes zoster. Results: The number of patients below the age of 65 years with herpes zoster, cancer, or both was too low to draw any sensible conclusions. Above the age of 65 years, we identified a significant increase in cancer emergence in the whole group and in females (hazard ratio = 2.65, 95% confidence interval = 1.43 to 4.90), but not in males. No difference could be identified in the first year after the herpes zoster infection. Conclusion: Our results do not justify extensive testing for cancer in herpes zoster patients. The association we identified, however, leaves open a number of questions with respect to the physiopathology behind it. abstract_id: PUBMED:6480952 Herpes zoster and occult malignancy. Herpes zoster and malignancy have both been associated with conditions of immune suppression. It has been well documented that herpes zoster occurs more frequently in patients with a previously diagnosed malignancy, especially lymphoma. To determine whether there is an increased frequency of malignancy subsequent to the diagnosis of herpes zoster, we studied fifty outpatients with herpes zoster and compared them with fifty control patients with psoriasis. In a follow-up period ranging from 24 to 68 months, one individual in the herpes zoster group developed a subsequent malignancy, in comparison to none in the control population and to 1.5 cancer cases expected in the general population. These results support the view that herpes zoster is not a marker for an occult malignancy. abstract_id: PUBMED:27845154 Herpes zoster as a marker of occult cancer: A systematic review and meta-analysis. Objectives: Researchers have advocated for an increased awareness of occult cancer among herpes zoster patients, but there are no systematic reviews to support these claims. We therefore conducted a systematic review and meta-analysis of evidence on zoster and risk of occult cancer. Methods: Through February 18, 2016, we searched PubMed, EMBASE, and references of relevant papers for studies on zoster and risk of any cancer. One author screened retrieved papers by title and abstract; included papers were reviewed by two authors for eligibility, data extraction, and potential biases. Despite statistical heterogeneity, associations were consistently in the same direction, and we therefore computed pooled relative risks using random-effects models. Results: We identified 46 eligible studies, 10 of which considered all cancer types combined. The pooled relative risk for any cancer was 1.42 (95% confidence interval: 1.18, 1.71) overall and 1.83 (95% confidence interval: 1.17, 2.87) at one year after zoster. Considering cancer subtypes, the highest estimates were generally reported for occult hematological cancer.
The absolute risk of any cancer at one year after presentation with zoster was 0.7-1.8%. Conclusion: This study supports an association between zoster and occult cancer, but the low absolute risk of cancer limits the clinical implications. abstract_id: PUBMED:23545577 Herpes zoster and subsequent risk of cancer: a population-based study. Background: In this cohort study, we investigated whether a diagnosis of herpes zoster (HZ) was associated with a higher risk of subsequent cancer as compared with the Taiwanese general population. Methods: Data were obtained from the Taiwan National Health Insurance Research Database. In total, 38 743 patients who were aged 50 years or older and had received ambulatory care for HZ between 1997 and 2006 were identified as the study cohort; 116 229 age- and sex-matched patients without HZ were included as the comparison cohort. We used Cox proportional hazards regression models to estimate the hazard ratios (HRs) for subsequent cancer, after controlling for potential confounders. Results: The HR for subsequent cancer varied according to time since HZ diagnosis. The HR was 1.58 (95% CI, 1.38-1.80) within the first year, 1.30 (95% CI, 1.15-1.46) between 1 and 2 years, 1.10 (95% CI, 0.98-1.24) between 2 and 3 years, 1.02 (95% CI, 0.91-1.15) between 3 and 4 years, and 1.08 (95% CI, 0.96-1.21) between 4 and 5 years. The risk of subsequent cancer, particularly lung cancer, was significantly higher during the first 2 years after initial diagnosis of HZ. Conclusions: Our findings suggest that an HZ diagnosis is a marker of occult malignancy, particularly lung cancer. The HRs for cancer decreased gradually over time and were no longer significant after 2 years of follow-up, which indicates that the association between HZ and cancer is likely due to detection bias. abstract_id: PUBMED:23906263 Association between recent herpes zoster but not herpes simplex infection and subsequent risk of malignancy in women: a retrospective cohort study. The association between herpes zoster and subsequent cancer risk is still unclear. Consequently, doubts remain regarding the need for investigation of herpes patients for co-existing or subsequent malignancy. This is a retrospective cohort study comparing cancer risk in patients after herpes zoster and age-/sex-matched non-herpes zoster patients, in a primary care-based continuous morbidity database. We tested for interaction by gender, age, diabetes, HRT use, or antiviral therapy. Analyses were repeated for patients with and without herpes simplex. The hazard ratio (HR) comparing cancer risk in herpes zoster vs. control patients was significant in all women, in women aged > 65 years, and in subgroups of breast and colorectal cancer (HRs 1.60, 1.82, 2.14, 2.19, respectively). For men, a significant association was found for haematological cancers (HR 2.92). No associations were found with herpes simplex. No interaction was identified with antiviral therapy, diabetes, or HRT treatment. We concluded that there was a moderate significant association between herpes zoster and subsequent cancer risk in women aged > 65 years, without any influence of antiviral therapy. No association was found with herpes simplex. There is insufficient reason for extensively testing older patients with herpes zoster or herpes simplex for the presence of occult cancer. abstract_id: PUBMED:7481976 Herpes zoster and internal malignancy. Herpes zoster (HZ) often occurs concomitantly with various internal malignancies, most commonly hematologic in origin.
Authors in the past proposed that HZ was a marker for internal malignancy, since it is often found in association with a malignancy. A critical review of the literature revealed that when HZ and malignancy occur in the same individual, HZ rarely precedes the malignancy but usually follows it. Many studies have evaluated HZ in cancer patients, but only three studies have evaluated the occurrence of malignancy after the diagnosis of HZ, and all found no increased incidence of internal malignancy in patients with HZ. Since HZ is a poor marker for internal malignancy, extensive workups to find occult malignancy are not indicated. In rare cases, however, HZ precedes a malignancy. We therefore recommend a baseline history and physical examination, with further directed workup only if there are abnormal findings. abstract_id: PUBMED:6979711 Risk of cancer after herpes zoster: a population-based study. Herpes zoster has been associated with immune suppression, as has an increased risk of cancer. To determine whether patients with herpes zoster are at increased risk for subsequent cancer, we followed 590 residents of Rochester, Minnesota, for 9389 person-years after the diagnosis of herpes zoster. Subsequent relative cancer risks, overall and by specific site, were determined for the entire cohort and selected subgroups. In addition, relative risks of cancer during various intervals after the diagnosis of herpes zoster were determined. The overall relative risk was 1.1 (95 per cent confidence interval, 0.9 to 1.3). Relative risks of specific cancer sites were not different from unity except for those for colon and bladder tumors in women, both of which were slightly elevated. Patients with disseminated, recurrent, or gangrenous zoster, with postherpetic neuralgia, and with ophthalmic zoster were not at elevated risk for subsequent cancer. These findings do not support the investigation of patients for occult cancer at the time of diagnosis of herpes zoster or enhanced surveillance for cancer after such a diagnosis. abstract_id: PUBMED:24348886 Herpes zoster as a marker of underlying malignancy. Background: Both herpes zoster and malignancy are associated with immunosuppression. However, the association between herpes zoster and the subsequent diagnosis of malignancy is unclear. We undertook this study to assess whether a diagnosis of herpes zoster is a risk factor for subsequent malignancy. Methods: For this matched retrospective cohort study, a physician billing database was used to identify individuals 18 years of age or older with a diagnosis of herpes zoster and no prior diagnosis of cancer or HIV infection. Individuals with a herpes zoster diagnosis were matched one-to-one to individuals without a herpes zoster diagnosis, and both groups were examined for up to 5 years for diagnosis of cancer. Results: A total of 542,575 individuals with a diagnosis of herpes zoster were identified. Compared with matched controls, these patients were more likely (p < 0.001) to have a history of myocardial infarction, asthma, congestive heart failure, chronic obstructive pulmonary disease, diabetes mellitus, and hypertension. The incidence of cancer was significantly greater among individuals with herpes zoster than among those without herpes zoster, for both men and women and across all time intervals studied (up to 5 years).
The greatest adjusted hazard ratio was seen within the first 180 days after a herpes zoster diagnosis (1.19, 95% confidence interval 1.12-1.25); the hazard ratio decreased as the time from herpes zoster diagnosis increased. Lymphoma was the type of cancer with the greatest relative increase in incidence following diagnosis of herpes zoster. Interpretation: There is a risk of malignancy following an episode of herpes zoster in both men and women and in all age groups 18 years and over. The risk is greatest during the first 180 days following the diagnosis of herpes zoster. abstract_id: PUBMED:23114019 Herpes zoster is associated with an increased risk of subsequent lymphoid malignancies - a nationwide population-based matched-control study in Taiwan. Background: Infectious agents have been shown to contribute to the development of lymphoid malignancies. The different distribution of lymphoid malignancies in Asian and Western populations suggests possibly different etiologies in Asian populations. Herpes zoster infection, commonly seen in immunocompromised persons, has been reported to be associated with lymphoid malignancies in retrospective case-control studies from Western populations, but the results are controversial and large-scale prospective studies from Asian populations are lacking. Methods: A nationwide population-based matched-control prospective study of Taiwanese patients was performed using the National Health Insurance Research Database from 1996 to 2007. Herpes zoster and malignancies were defined by compatible ICD-9-CM (International Classification of Diseases, 9th Revision, Clinical Modification) codes. Patients who had been diagnosed with any malignancy before herpes zoster, who had known viral infections including human immunodeficiency virus, or whose interval from herpes zoster to diagnosis of malignancy was less than 6 months were excluded. Results: Of 42,498 patients with herpes zoster prior to the diagnosis of any malignancies, the cumulative incidence of lymphoid malignancies was 0.11% (n = 48), compared with 0.06% (n = 106) in 169,983 age- and gender-matched controls (univariate hazard ratio (HR): 1.82, 95% CI: 1.29-2.55). The most common lymphoid malignancy was non-Hodgkin's lymphoma (60.4%, n = 29), followed by multiple myeloma (27.1%, n = 13). Risk for developing lymphoid malignancies was significantly higher in herpes zoster patients (log-rank P = 0.005). After adjusting for the presence of any comorbidities in the Charlson comorbidity index, a time-dependent covariate for the herpes group, and income category using Cox proportional hazards regressions, herpes zoster patients had an increased risk of developing lymphoid malignancies (adjusted HR: 1.68, 95% CI: 1.35-2.42, P = 0.0026), but did not have an increased risk of developing non-lymphoid malignancies (adjusted HR: 1.00, 95% CI: 0.91-1.05, P = 0.872). Conclusion: Preceding herpes zoster infection is an independent risk marker for subsequent lymphoid malignancies in Taiwanese subjects. Further studies are warranted for pathogenesis exploration and preventive strategies in Asian populations. abstract_id: PUBMED:18154871 Oral manifestations of internal malignancy and paraneoplastic syndromes. Malignant tumors of visceral organs are a fundamental feature of familial cancer and paraneoplastic syndromes. In many instances, the presence of an internal and often occult malignancy may be forewarned by various external manifestations. Several of these findings are preferentially localized to the head and neck region, including the oral cavity proper.
This places the dental practitioner in a unique position to detect these "markers" of occult neoplastic involvement. Because these markers may present before an established syndrome or cancer diagnosis, even representing the first expression of disease in some cases, early recognition by a dentist may lead to timely diagnosis and management of these cancer-associated syndromes. Answer: The relationship between herpes zoster (HZ) and the risk of occult or subsequent malignancy has been explored in various studies, with mixed results. Some studies suggest that there is an association between HZ and an increased risk of cancer, particularly in certain populations and within specific time frames after a herpes zoster diagnosis. For instance, a retrospective cohort study based on the Intego database indicated a significant increase in cancer emergence in patients above the age of 65 years with herpes zoster, especially in females, but not in males (PUBMED:15720930). Another population-based study from Taiwan found that the risk of subsequent cancer, particularly lung cancer, was significantly higher during the first 2 years after the initial diagnosis of HZ (PUBMED:23545577). Similarly, a systematic review and meta-analysis found a pooled relative risk for any cancer of 1.42 overall and 1.83 at one year after zoster, with the highest estimates generally reported for occult hematological cancer (PUBMED:27845154). A nationwide population-based matched-control study in Taiwan also reported that preceding herpes zoster infection is an independent risk marker for subsequent lymphoid malignancies (PUBMED:23114019). However, other studies have not found a significant association between herpes zoster and subsequent malignancy. A study comparing outpatients with herpes zoster to control patients with psoriasis found no increased frequency of malignancy subsequent to the diagnosis of herpes zoster (PUBMED:6480952). Another study that followed residents of Rochester, Minnesota, after the diagnosis of herpes zoster found that the overall relative risk of subsequent cancer was not elevated (PUBMED:6979711). A critical review of the literature also suggested that herpes zoster is a poor marker for internal malignancy and that extensive workups to find occult malignancy are not indicated (PUBMED:7481976). In conclusion, while some studies suggest an association between herpes zoster and an increased risk of certain types of cancer, particularly in older adults and within a certain time frame after the HZ diagnosis, other studies do not support this view. The evidence is mixed, and the absolute risk of cancer after herpes zoster is generally low, which limits the clinical implications for extensive cancer testing in herpes zoster patients (PUBMED:27845154).
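The review's distinction between relative and absolute risk is worth making concrete, since it drives the clinical conclusion. The Python sketch below multiplies the pooled one-year relative risk of 1.83 (from PUBMED:27845154) by two assumed baseline one-year cancer incidences; the baselines are illustrative round numbers, not figures from the review.

    # Illustrative sketch only: converting a relative risk to an absolute risk.
    # The RR of 1.83 is the pooled one-year estimate quoted above; the baseline
    # one-year cancer incidences (0.4% and 1.0%) are assumptions.

    POOLED_RR_ONE_YEAR = 1.83

    for baseline in (0.004, 0.010):  # assumed baseline one-year cancer risk
        risk_after_zoster = baseline * POOLED_RR_ONE_YEAR
        print(f"baseline {baseline:.1%} -> risk after zoster {risk_after_zoster:.2%}")

With these assumed baselines the post-zoster risk works out to roughly 0.7-1.8%, consistent with the absolute risks the review reports and with its conclusion that the low absolute risk limits the clinical implications.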
Instruction: Being an elderly woman: is it a risk factor for morbidity after coronary artery bypass surgery? Abstracts: abstract_id: PUBMED:22396586 Addition of frailty and disability to cardiac surgery risk scores identifies elderly patients at high risk of mortality or major morbidity. Background- Cardiac surgery risk scores perform poorly in elderly patients, in part because they do not take into account frailty and disability, which are critical determinants of health status with advanced age. There is an unmet need to combine established cardiac surgery risk scores with measures of frailty and disability to provide a more complete model for risk prediction in elderly patients undergoing cardiac surgery. Methods and Results- This was a prospective, multicenter cohort study of elderly patients (≥70 years) undergoing coronary artery bypass and/or valve surgery in the United States and Canada. Four different frailty scales, 3 disability scales, and 5 cardiac surgery risk scores were measured in all patients. The primary outcome was the STS composite end point of in-hospital postoperative mortality or major morbidity. A total of 152 patients were enrolled, with a mean age of 75.9±4.4 years and 34% women. Depending on the scale used, 20-46% of patients were found to be frail, and 5-76% were found to have at least 1 disability. The most predictive scale in each domain was: 5-meter gait speed ≥6 seconds as a measure of frailty (odds ratio [OR], 2.63; 95% confidence interval [CI], 1.17-5.90), ≥3 impairments in the Nagi scale as a measure of disability (OR, 2.98; 95% CI, 1.35-6.56), and either the Parsonnet score (OR, 1.08; 95% CI, 1.04-1.13) or Society of Thoracic Surgeons Predicted Risk of Mortality or Major Morbidity (STS-PROMM) (OR, 1.05; 95% CI, 1.01-1.09) as a cardiac surgery risk score. Compared with the Parsonnet score or STS-PROMM alone (area under the curve, 0.68-0.72), the addition of frailty and disability provided incremental value and improved model discrimination (area under the curve, 0.73-0.76). Conclusions- Clinicians should use an integrative approach combining frailty, disability, and risk scores to better characterize elderly patients referred for cardiac surgery and identify those who are at increased risk. abstract_id: PUBMED:21050978 Gait speed as an incremental predictor of mortality and major morbidity in elderly patients undergoing cardiac surgery. Objectives: The purpose of this study was to test the value of gait speed, a clinical marker for frailty, to improve the prediction of mortality and major morbidity in elderly patients undergoing cardiac surgery. Background: It is increasingly difficult to predict the elderly patient's risk posed by cardiac surgery because existing risk assessment tools are incomplete. Methods: A multicenter prospective cohort of elderly patients undergoing cardiac surgery was assembled at 4 tertiary care hospitals between 2008 and 2009. Patients were eligible if they were 70 years of age or older and were scheduled for coronary artery bypass and/or valve replacement or repair. The primary predictor was slow gait speed, defined as taking ≥6 s to walk 5 m. The primary end point was a composite of in-hospital post-operative mortality or major morbidity. Results: The cohort consisted of 131 patients with a mean age of 75.8 ± 4.4 years; 34% were female patients. Sixty patients (46%) were classified as slow walkers before cardiac surgery. Slow walkers were more likely to be female (43% vs. 25%, p = 0.03) and diabetic (50% vs.
28%, p = 0.01). Thirty patients (23%) experienced the primary composite end point of mortality or major morbidity after cardiac surgery. Slow gait speed was an independent predictor of the composite end point after adjusting for the Society of Thoracic Surgeons risk score (odds ratio: 3.05; 95% confidence interval: 1.23 to 7.54). Conclusions: Gait speed is a simple and effective test that may identify a subset of vulnerable elderly patients at incrementally higher risk of mortality and major morbidity after cardiac surgery. abstract_id: PUBMED:17449262 Being an elderly woman: is it a risk factor for morbidity after coronary artery bypass surgery? Objective: Despite the refinements in surgical techniques and postoperative care, elderly women still have a higher prevalence of postoperative morbidity. Methods: The outcomes of 112 elderly women (>80 years) who underwent an elective CABG procedure were compared with those of males operated during the same time interval (n = 164). Results: Median ages of female and male patients were 82 and 83 years, respectively. Mean number of grafts did not differ significantly (3.7±0.8 vs 3.9±0.3, p=0.4) between groups. Overall early operative mortality rate was 8.6% (24 of 276 patients); 8.9% (10 of 112 patients) for female and 8.5% (14 of 164 patients) for male patients (p=0.1). Postoperative complications including prolonged ventilation time (13.4% in females vs 8.5% in males, p<0.01), atrial fibrillation (40% in females vs 33% in males, p=0.01), sternal reclosure (8% in females vs 4.2% in males, p=0.01), pneumonia (5.3% in females vs 3% in males, p=0.03), leg wound infection (11.7% in females vs 2.4% in males, p<0.001), and renal dysfunction (10.7% in females vs 7.3% in males, p=0.02) were found to be significantly higher in elderly women. Mean intensive care unit stays (3.2±1.1 days in females vs 1.6±0.4 in males, p=0.03) and hospital stays (13.6±2.1 days in females vs 9.1±1.2 in males, p=0.02) were also longer in female patients. Five-year survivals including all deaths for female and male patients were 57% and 62%, respectively. Conclusions: In elderly women, revascularization procedures can be done with acceptable mortality rates; but these patients are still associated with a higher prevalence of postoperative morbidity when compared with their male counterparts. Therefore, these patients have to be very carefully evaluated preoperatively and their postoperative care should be more comprehensive to reduce the incidence of postoperative complications. abstract_id: PUBMED:23827207 Anaortic off-pump coronary artery bypass grafting in the elderly and very elderly. Background: Elderly patients undergoing surgical revascularisation may disproportionately benefit from avoiding manipulation of the ascending aorta and cardiopulmonary bypass. In this multi-centre observational study, we sought to determine the 30-day outcomes of anaortic off-pump coronary artery bypass (OPCAB) in an elderly and very-elderly population. Methods: Data were prospectively collected for all patients aged >70 years old (y) undergoing anaortic OPCAB at three hospitals between January 2002 and October 2011. Analysis was carried out on two age sub-groups (70-79 y vs. ≥80 y). Multiple logistic regression was used to identify predictors of post-operative 30-day mortality. Results: In total, 1135 patients >70 years underwent anaortic OPCAB - 817 (72%) patients were aged 70-79 y while 318 (28%) were aged ≥80 y.
The rate of mortality and morbidity for both groups was low: 30-day mortality (70-79 y: 1.8%; ≥80 y: 2.8%) or permanent stroke (70-79 y: 0.2%; ≥80 y: 0.9%). Chronic lung disease, a history of previous myocardial infarction and left main disease were independent predictors of 30-day mortality. Conclusions: OPCAB is associated with low rates of 30-day mortality and peri-operative stroke in this elderly and very elderly patient cohort. Anaortic OPCAB can provide excellent short-term post-operative outcomes and may give the elderly and very elderly population the opportunity to benefit from surgical coronary revascularisation. abstract_id: PUBMED:8637936 Cardiovascular risk-factor reduction in elderly patients with cardiac disease. As a result of increased numbers of elderly persons in the United States, many of whom have coronary heart disease, the mean age of patients in coronary care programs following a myocardial infarction is now greater than age 65 years. One third of cardiac operations on adults are performed on elderly patients. Despite the fact that in the United States utilization of interventions such as coronary artery bypass grafting and angioplasty is on the rise in elderly patients, there appears to be some reluctance to manage risk factors in this group. The four major modifiable risk factors for the development and progression of coronary artery disease are hypertension, hypercholesterolemia, habitual smoking, and physical inactivity. The effectiveness of risk-factor modification for the secondary prevention of events of cardiac morbidity and mortality has been described for younger persons; however, less is known for elderly persons. The purpose of this review is to focus on these four major risk factors in elderly patients with cardiac disease by describing the effectiveness of modification of these factors, methods for doing so, and considerations for the future of such interventions. abstract_id: PUBMED:28417859 Cardiovascular Surgery in the Elderly. The elderly population is the fastest growing demographic in Western countries. As the population ages, the incidence of age-related comorbidities such as diabetes mellitus, chronic obstructive pulmonary disease, peripheral vascular disease, renal disease, cerebrovascular disease, and cardiovascular disease increases. With cardiovascular disease occurring in approximately one-quarter of the population over the age of 75 years and more than half of all cardiac procedures performed on this age group, the number of potential elderly surgical candidates is increasing. However, data suggest that old age is associated with increased morbidity and mortality following cardiac surgery. Over the past 2 decades, improvements in myocardial protection, extracorporeal circulation, anesthesia, and surgical techniques have significantly reduced the morbidity and mortality associated with cardiac surgery. Although most prospective studies exclude elderly patients, data from large retrospective studies and subgroup analyses suggest that cardiac surgery is a viable option for many elderly patients with cardiovascular disease, with good outcomes observed in reasonable-risk candidates; moreover, there are a growing number of available less-invasive options for them when surgical risk becomes prohibitive. In this article, we discuss the current state of cardiovascular surgery in the elderly as well as emerging technologies on the horizon.
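The frailty abstracts above (PUBMED:22396586, PUBMED:21050978) quantify incremental value by adding a frailty marker to an established surgical risk score and comparing model discrimination (area under the ROC curve). A minimal sketch of that comparison on synthetic data; the variable names and effect sizes are invented for illustration and do not reproduce the studies' models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
risk_score = rng.normal(size=n)          # stand-in for an STS/Parsonnet-type score
gait_slow = rng.integers(0, 2, size=n)   # stand-in frailty marker (5-m walk >= 6 s)
logit = 0.8 * risk_score + 1.0 * gait_slow - 1.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # simulated mortality/morbidity outcome

X_base = risk_score.reshape(-1, 1)
X_full = np.column_stack([risk_score, gait_slow])
base = LogisticRegression().fit(X_base, y)
full = LogisticRegression().fit(X_full, y)

auc_base = roc_auc_score(y, base.predict_proba(X_base)[:, 1])
auc_full = roc_auc_score(y, full.predict_proba(X_full)[:, 1])
print(f"AUC, score alone: {auc_base:.2f}; score + frailty: {auc_full:.2f}")
```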
abstract_id: PUBMED:34988175 Risk factors for delirium after coronary artery bypass grafting in elderly patients. Background: Postoperative delirium (POD) is a common complication of major surgery and is associated with increased morbidity, mortality, and long-term cognitive dysfunction. This study sought to evaluate the incidence and risk factors of delirium in elderly (aged ≥65 years) patients who underwent coronary artery bypass grafting (CABG). Methods: We performed a retrospective cohort analysis. The clinical data of 1,426 elderly patients who underwent CABG at our hospital from October 2018 to October 2020 were collected and analyzed. Delirium was defined as any positive Confusion Assessment Method for Intensive Care Unit examination following surgery during the intensive care unit (ICU) stay. Risk factors for POD were identified via univariate and multivariate logistic regression analyses. The intraoperative and postoperative factors were evaluated using a propensity score-matched regression analysis based on preoperative factors. Results: A total of 39.3% (560 of 1,426) of elderly patients who underwent CABG were diagnosed with delirium. Based on the multivariate analysis, age [odds ratio (OR) 1.013], long-term alcohol consumption (OR 2.026), diabetes (OR 1.51), stroke (OR 1.41), and extracardiac arteriopathy (OR 1.61) were found to be independent predictors of post-cardiac surgery delirium. Conversely, a high-density lipoprotein cholesterol (HDL-C) level ≥1.0 mmol/L (OR 0.71) was found to be a protective factor. Among the intraoperative and postoperative factors evaluated, only a prolonged ICU stay (≥48 h; OR 1.62) was identified as a risk factor for developing delirium in the propensity-score matched analysis, after adjusting for potential confounding variables and selection bias. In contrast to the earlier analysis, the multivariate logistic regression further revealed that peri-procedural myocardial infarction and postoperative arterial oxygen partial pressures were not significantly associated with POD. Conclusions: The incidence of POD in elderly patients undergoing CABG was high. Factors such as older age, long-term alcohol consumption, diabetes, stroke, and extracardiac arteriopathy were independent risk factors for POD in these patients. In relation to the intraoperative and postoperative factors, after adjusting for preoperative confounding factors, only a prolonged ICU stay was identified as a risk factor for POD. Additionally, high levels of HDL-C may be beneficial in reducing the incidence of delirium. abstract_id: PUBMED:37833902 A Network Comprised of miR-15b and miR-29a Is Involved in Vascular Endothelial Growth Factor Pathway Regulation in Thymus Adipose Tissue from Elderly Ischemic Cardiomyopathy Subjects. As the human thymus ages, it undergoes a transformation into adipose tissue known as TAT. Interestingly, in previous research, we observed elevated levels of vascular endothelial growth factor A (VEGFA) in TAT from patients with ischemic cardiomyopathy (IC), particularly in those over 70 years old. Moreover, in contrast to subcutaneous adipose tissue (SAT), TAT in elderly individuals exhibits enhanced angiogenic properties and the ability to stimulate tube formation. This makes TAT a promising candidate for angiogenic therapies and the regeneration of ischemic tissues following coronary surgery. MicroRNAs (miRNAs) have emerged as attractive therapeutic targets, especially those that regulate angiogenic processes.
The purpose of this study was to determine the miRNA network associated with both VEGFA pathway regulation and age-linked angiogenic enrichment in TAT. RT-PCR was used to analyze angiogenic miRNAs and the expression levels of their predicted target genes in both TAT and SAT from elderly and middle-aged patients treated with coronary artery bypass graft surgery. miRTargetLink Human was used to search for miRNAs and their target genes. PANTHER was used to annotate the biological processes of the predicted targets. The expression of miR-15b-5p and miR-29a-3p was significantly upregulated in the TAT of elderly compared with middle-aged patients. Interestingly, VEGFA and other angiogenic targets were significantly upregulated in the TAT of elderly patients. Specifically: JAG1, PDGFC, VEGFA, FGF2, KDR, NOTCH2, FOS, PDGFRA, PDGFRB, and RHOB were upregulated, while PIK3CG and WNT7A were downregulated. Our results provide strong evidence of a miRNA/mRNA interaction network linked with age-associated TAT angiogenic enrichment in patients with IC. abstract_id: PUBMED:8523877 Myocardial performance in elderly patients after cardiopulmonary bypass is suppressed by tumor necrosis factor. The aim of this study was to determine whether elderly patients (aged ≥ 65 years, n = 20) in comparison with younger patients (aged ≤ 55 years, n = 23) demonstrate a different biochemical and hemodynamic response to coronary artery bypass operations. In the elderly group, we calculated a smaller body surface area (p < 0.01) than that in the younger group, and more female patients were included in this group (p < 0.05). During cardiopulmonary bypass, the elderly had higher endotoxin plasma concentrations (p < 0.01) than the younger patients, and significantly more circulating tumor necrosis factor-alpha was found after operation (p < 0.04). In the intensive care unit, the elderly patients had a significantly higher pulmonary capillary wedge pressure (p < 0.001), a higher mean pulmonary artery pressure (p < 0.01), and a lower calculated left ventricular stroke work index (p < 0.05). Multivariate analysis for the postoperative outcome showed that the intergroup differences in tumor necrosis factor-alpha, mean pulmonary artery pressure, and pulmonary capillary wedge pressure could be explained mainly by the difference in age between the groups and that the calculated left ventricular stroke work index difference could be explained by the difference in circulating tumor necrosis factor-alpha levels. Thus in elderly patients higher circulating endotoxin and tumor necrosis factor-alpha concentrations were detected than in younger patients, which clinically resulted in a suppressed myocardial performance. abstract_id: PUBMED:16112962 Elevated placental growth factor (PlGF) predicts cardiovascular morbidity and mortality in type 1 diabetic patients with diabetic nephropathy. Background And Aim: Placental growth factor (PlGF) is up-regulated in early and advanced atherosclerotic lesions, acts as a primary inflammatory instigator of atherosclerotic plaque instability, and may be an independent biomarker of adverse outcome in patients with acute coronary syndromes. In diabetic nephropathy the relative cardiovascular mortality and morbidity is increased and therefore, this study investigated the prognostic value of PlGF in a large cohort of type 1 diabetic patients with and without diabetic nephropathy.
Research Design And Methods: In a prospective, observational follow-up study 190 type 1 diabetic patients with overt diabetic nephropathy (116 men, age (mean ± SD) 41 ± 10 years, duration of diabetes 28 ± 8 years, glomerular filtration rate (GFR) 76 ± 33 mL/min/1.73 m2) and a matched control group of 174 patients with normoalbuminuria (104 men, age 43 ± 10 years, duration of diabetes 27 ± 9 years) were followed for 10 years (range: 0-10.3). The primary endpoint was a composite endpoint of cardiovascular death, hospitalization for myocardial infarction or stroke, coronary artery bypass grafting or percutaneous coronary intervention, ischaemic amputation or peripheral bypass-surgery. Plasma PlGF was determined by an enzyme-linked immunosorbent assay at baseline. Results: During 10 years of follow-up 74 patients (39%) with diabetic nephropathy reached the primary endpoint versus only 18 (10%) of normoalbuminuric patients, log rank test; p<0.001. During follow-up 16 (25%) patients in the lowest, 24 (39%) in the middle and 34 (52%) patients in the upper tertile reached the primary cardiovascular endpoint, p=0.007. Hazard ratios in the second and third tertile as compared with the first tertile were 1.76 (0.92-3.38) and 2.64 (1.41-4.91) (p=0.009). Cox regression analyses including PlGF concentration as a continuous variable revealed an unadjusted hazard ratio of the primary endpoint for each 1 ng/L increase in PlGF of 1.10 (1.03-1.16), p=0.002; covariate adjusted hazard ratio 1.07 (1.00-1.14), p=0.03. Conclusions: Increased PlGF is a new independent predictor of cardiovascular morbidity and mortality in type 1 diabetic patients with diabetic nephropathy. Answer: Yes, being an elderly woman is associated with a higher prevalence of postoperative morbidity after coronary artery bypass surgery (CABG). A study comparing outcomes of elderly women (over 80 years) with those of males operated during the same time interval found that elderly women experienced significantly higher rates of postoperative complications such as prolonged ventilation time, atrial fibrillation, sternal reclosure, pneumonia, leg wound infection, and renal dysfunction. Additionally, elderly female patients had longer stays in the intensive care unit and hospital compared to their male counterparts (PUBMED:17449262). This suggests that elderly women should be carefully evaluated preoperatively, and their postoperative care should be more comprehensive to reduce the incidence of complications.
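A brief note on the Cox regression figures reported in PUBMED:16112962 above: a hazard ratio of 1.10 per 1 ng/L implies a log-linear model, so the hazard ratio for a larger increment follows by exponentiating the scaled coefficient. A minimal sketch (the 10 ng/L increment is chosen only for illustration):

```python
import math

hr_per_1 = 1.10                  # unadjusted HR per 1 ng/L PlGF (PUBMED:16112962)
beta = math.log(hr_per_1)        # implied Cox coefficient, since HR = exp(beta)
hr_per_10 = math.exp(10 * beta)  # equivalently 1.10**10, roughly 2.59
print(f"beta = {beta:.4f}, HR per 10 ng/L increase = {hr_per_10:.2f}")
```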
Instruction: Thromboxane production in human lung during cardiopulmonary bypass: beneficial effect of aspirin? Abstracts: abstract_id: PUBMED:9456103 Thromboxane production in human lung during cardiopulmonary bypass: beneficial effect of aspirin? Background: Increased systemic levels of thromboxane (Tx) during cardiopulmonary bypass (CPB) in humans have been reported. It is not known whether this reflects a general systemic response to the surgical procedure or an increased pulmonary production of Tx in response to ischemia and reperfusion. Methods: Thromboxane B2 levels were measured in the right atrium and left atrium of 14 patients undergoing coronary artery bypass grafting for angina. Eight patients (group 1) were without aspirin for at least 15 days before operation, and 6 patients (group 2) were treated with aspirin (100 mg/day) for at least 1 month before operation. Levels of TxB2 were determined by enzyme immunoassay after lipid extraction and separation. Results: Thromboxane B2 levels were elevated throughout CPB. In group 1, left atrial TxB2 levels were significantly higher (p < 0.05) than right atrial levels at all study points during CPB. After pulmonary reperfusion, TxB2 levels in both atria increased significantly (p < 0.02) compared with the levels before cross-clamping of the aorta, and there was an increasing gradient between the two atria (p < 0.05). Mean plasma TxB2 levels during CPB in group 2 were significantly reduced (p < 0.0001) in the right atrium (by 73%) and in the left atrium (by 69%) compared with levels in group 1. Conclusions: The rise in TxB2 levels in the left atrium after CPB in humans reflects production of Tx mainly in the lungs, most probably by ischemic pulmonary tissue and intravascular hematologic components. Aspirin markedly reduces Tx production during CPB, and it might play a major role in preventing pulmonary injury after operations with CPB in humans. abstract_id: PUBMED:8252718 Rapid, direct enzyme immunoassay of 11-keto-thromboxane B2 in urine, validated by immunoaffinity/gas chromatography-mass spectrometry. We have developed a direct enzyme immunoassay (EIA) for quantifying immunoreactive 11-keto-thromboxane B2 (iKTXB) in unprocessed human urine. Cross-reactivity with other thromboxane metabolites and prostanoids was negligible. Analytical recovery of 11-keto-TXB2 in urine specimens was 97.4% to 99.8%. Total imprecision for two clinical specimens was 8.5% and 12.2%. Intake of acetylsalicylic acid decreased the measured concentration of iKTXB. Cardiopulmonary bypass, a procedure known to activate platelets, increased the mean excretion rate of iKTXB 10-fold. Simultaneous gas chromatography-mass spectrometry analysis of 11-keto-TXB2 and 11-keto-2,3-dinor TXB2 in urine specimens (n = 17) from healthy subjects indicated that urinary iKTXB concentrations measured by EIA represented a sum of the two 11-keto metabolites. We conclude that the direct EIA is sufficiently sensitive, rapid, simple, and specific to allow screening for alterations in thromboxane biosynthesis in patients. abstract_id: PUBMED:32082794 Frequency of development of aspirin resistance in the early postoperative period and inadequate inhibition of thromboxane A2 production after coronary artery bypass surgery.
Background: This study aims to investigate the frequency of the development of aspirin resistance, whether or not this resistance was reversible, and to evaluate the efficiency of the mechanism of incomplete inhibition of thromboxane A2 in development of aspirin resistance in the early postoperative period in patients who had undergone coronary artery bypass grafting. Methods: Eighty patients (55 males, 25 females; mean age 63.1±9.2 years; range 51 to 75 years) who underwent coronary artery bypass grafting between February 2009 and March 2010 at our clinic were prospectively evaluated. Venous blood samples were collected from all patients and evaluated by a platelet function analyzer in the preoperative period and on postoperative days 7 and 15. Aspirin resistance was defined as a collagen-epinephrine closure time of less than 186 seconds. The urine levels of 11-dehydro thromboxane B2 were also measured on postoperative day one. Results: Aspirin resistance was found in 23 patients (28.75%) in the preoperative period, in 31 patients (38.75%) on the postoperative seventh day and in 25 patients (31.25%) on the postoperative 15th day. The urine levels of 11-dehydro thromboxane B2 in patients with aspirin resistance on the postoperative seventh day were significantly higher than those in patients without aspirin resistance (p<0.001). The mean aortic cross-clamping time (p=0.003) and cardiopulmonary bypass time (p=0.029) in the patients with aspirin resistance on the postoperative seventh day were significantly higher than those in patients without aspirin resistance. Conclusion: The results of this study suggest that aspirin resistance develops within the first seven days after coronary artery bypass grafting and is highly reversible, and that the mechanism of inadequate inhibition of thromboxane A2 by aspirin has a role in the development of aspirin resistance in the early postoperative period. abstract_id: PUBMED:8747897 The haemodynamic effects of the thromboxane A2 receptor antagonist GR32191B during cardiopulmonary bypass in the dog. This study examined whether treatment with the specific thromboxane (TX) A2 receptor antagonist GR32191B would result in an improvement in peripheral haemodynamics during and after cardiopulmonary bypass (CPB) in anaesthetized dogs compared with animals given either saline (control) or aspirin. Following thoracotomy, heparinization and aortic cannulation, and 35 minutes before CPB, dogs received intravenously either GR32191B (15 micrograms/kg/min), saline (50 ml bolus) or aspirin (225 mg bolus) (n = 6 per group). Cardiac output (dye dilution), femoral artery blood flow (electromagnetic flowmeter), gastrocnemius muscle tissue perfusion (133Xe clearance), retinal blood flow (fluorescein angiography), and thromboxane biosynthesis (urinary excretion rates of TXB2 and the metabolite 2,3-dinor-TXB2) were measured before, during and after a standard 90 minute period of CPB at 2.4 l/min/m2 and 28 degrees C. The aspirin-treated group manifested an eightfold reduction in TXB2 excretion compared with controls, indicating a decrease in TXA2 biosynthesis. There were few haemodynamic differences between the groups, though the aspirin-treated group had better maintained muscle tissue perfusion post-CPB and significantly fewer retinal microcirculatory occlusions than GR32191B-treated animals. We conclude that specific TXA2 receptor antagonism provides no significant improvement in peripheral haemodynamics; rather aspirin provides a modest haemodynamic benefit.
abstract_id: PUBMED:1614195 Effect of skin temperature on platelet function in patients undergoing extracorporeal bypass. Thirty-seven patients undergoing cardiopulmonary bypass operations were studied to assess the effect of skin temperature on platelet function. Differences in skin temperature between the two arms were created during bypass, at the completion of bypass, and at 2 and 24 hours after the completion of bypass. In each of 37 patients the temperature of one arm was increased with a water-filled blanket set at 40 degrees C. In 11 of these patients the other arm was allowed to equilibrate with the environment, and in the other 26 patients the arm was cooled with ice. Except for the differences in local skin temperature between the two arms, all factors known to affect the patient's bleeding time were similar. Measurements were made of bleeding times and the levels of thromboxane B2 and 6-keto-prostaglandin F1 alpha in shed blood obtained at the template bleeding time site. In the 33 patients not treated with aspirin, local hypothermia produced an increased bleeding time and a significant reduction in the thromboxane B2 level at the bleeding time site, but no reduction in 6-keto-prostaglandin F1 alpha level. Local rewarming produced a significant increase in the shed blood thromboxane B2 level. In the four patients treated with aspirin, local hypothermia produced no differences in bleeding times or shed blood levels of thromboxane B2 or 6-keto-prostaglandin F1 alpha. These data show the benefits of rewarming patients with hypothermia who have nonsurgical blood loss to restore to normal both core and peripheral temperatures before resorting to the transfusion of homologous blood products. abstract_id: PUBMED:10027713 The effects of aspirin and hypothermia on platelet function in vivo. Patients undergoing hypothermic cardiopulmonary bypass are often receiving aspirin therapy. Hypothermia, aspirin and cardiopulmonary bypass can each induce a platelet function defect, but it is not known if the effects of aspirin and hypothermia are additive in this regard. To address this question in humans in vivo, the forearm skin temperature of healthy volunteers was equilibrated and maintained at either normothermia (32 degrees C) or hypothermia (28 degrees C or 22 degrees C) before and 16 h after the ingestion of 650 mg aspirin. A standardized template bleeding time was performed on the forearm and the shed blood emerging from the wound was assayed for platelet surface P-selectin expression by whole blood flow cytometry (reflecting alpha granule secretion) and thromboxane B2 (the stable metabolite of thromboxane A2) by radioimmunoassay. Hypothermia resulted in marked prolongation of the bleeding time. Aspirin resulted in prolongation of the bleeding time under normothermic conditions, but only minimally augmented the hypothermia-induced prolongation of the bleeding time. Platelet surface P-selectin up-regulation in shed blood was abolished by hypothermia. Aspirin had no effect on maximal platelet surface P-selectin expression under normothermic or hypothermic conditions. Both hypothermia and aspirin resulted in markedly reduced shed blood thromboxane B2. Although aspirin slightly augmented the hypothermia-induced reduction in shed blood thromboxane B2, the concentration of thromboxane generated in shed blood under hypothermic conditions in the absence of aspirin had no effect on platelet surface P-selectin or platelet aggregation in whole blood. 
In conclusion, as determined by three independent parameters of the shed blood emerging from a standardized bleeding time wound (bleeding time, platelet surface P-selectin, and thromboxane B2), aspirin did not significantly augment hypothermia-induced platelet dysfunction in vivo. abstract_id: PUBMED:7524460 Aprotinin preserves hemostasis in aspirin-treated patients undergoing cardiopulmonary bypass. Various clinical trials have shown that hemostasis is improved by the administration of aprotinin during cardiopulmonary bypass. However, this effect has not been proved for those patients treated preoperatively with aspirin. Therefore, a double-blind, placebo-controlled study was conducted to test the efficacy of low-dose aprotinin (2 × 10^6 KIU in the pump prime solution) in preserving hemostasis in 40 aspirin-treated (325 mg) patients undergoing coronary artery bypass grafting. Aprotinin brought about a decrease in the postoperative blood loss (p < 0.05). The in vitro bleeding test (Thrombostat) demonstrated that aprotinin preserved the platelet hemostatic function in aspirin-treated patients during cardiopulmonary bypass (p < 0.05). The inhibitory effects of aspirin on collagen-induced platelet aggregation and thromboxane production were not influenced by aprotinin treatment. The findings from the present study indicate that aprotinin preserves hemostasis in aspirin-treated patients during cardiopulmonary bypass, but aspirin's effect on platelets is maintained. Therefore, aprotinin seems to be a useful adjunct treatment in aspirin-treated patients undergoing coronary artery bypass grafting. abstract_id: PUBMED:7564450 Hemostatic function of aspirin-treated platelets vulnerable to cardiopulmonary bypass. Altered shear-induced pathway. The impaired hemostasis of aspirin-treated patients is an annoying problem during and after cardiopulmonary bypass. The hemostatic function of platelets comprises two mechanisms: the shear-induced and the cyclooxygenase pathways. Because the latter is inhibited in aspirin-treated patients, the hemostatic function depends mainly on the former pathway. To investigate the effect of cardiopulmonary bypass on the shear-induced pathway, a double-blind study of preoperative aspirin treatment (325 mg) and placebo was conducted in 40 patients undergoing coronary artery bypass grafting. Postoperative blood loss was higher in the aspirin-treated patients than in the placebo-treated patients (p < 0.05). The shear-induced hemostasis was monitored by the in vitro bleeding test (Thrombostat), which mimics bleeding through an injured arteriole. The shear-induced pathway of aspirin-treated platelets was not affected before cardiopulmonary bypass, but it was impaired more during the operation (p < 0.01) and remained worse afterward (p < 0.05), compared with that of placebo-treated platelets. The inhibitory effects of aspirin on thromboxane production and on collagen-induced platelet aggregation remained throughout the operation. In aspirin-treated platelets, the aggregation capacity induced by adenosine diphosphate was inhibited before the operation (p < 0.05) and showed substantial recovery during the operation (p < 0.05). These results suggest that the shear-induced pathway of aspirin-treated platelets is more vulnerable to cardiopulmonary bypass than the pathway in normal platelets and causes severe impairment of hemostasis afterward. abstract_id: PUBMED:14681089 Beneficial effect of aspirin on renal function post-cardiopulmonary bypass.
Urine thromboxane, plasma creatinine, and creatinine clearance were determined perioperatively in 20 patients undergoing coronary bypass surgery. Ten patients took aspirin until the day of surgery, and 10 discontinued aspirin at least one week before surgery. A significant increase in urine thromboxane following establishment of cardiopulmonary bypass was observed only in the control group. Plasma creatinine increased in the control group on the 1st postoperative day (from 81.9 ± 13.2 to 97.6 ± 13.2 µmol/L, p = 0.02) and decreased next day to the preoperative level (82.7 ± 9 µmol/L, p = 0.03). In the aspirin group, creatinine remained unchanged on the 1st postoperative day (89.4 ± 14.2 vs. 87.2 ± 7.7 µmol/L, p = 0.6), and increased significantly on the 2nd day (101.4 ± 8.5 µmol/L, p = 0.01). The aspirin group had higher creatinine levels (p < 0.0001) and lower creatinine clearance (60.2 ± 16.5 vs. 82 ± 25.7 mL/min, p < 0.0001) than the control group on the 2nd postoperative day. A significant positive correlation was seen between urine thromboxane and creatinine on day 2 in both groups (r = 0.6). Aspirin administered before coronary surgery may have a beneficial effect on renal function, probably mediated by its antiplatelet activity and thromboxane inhibition. abstract_id: PUBMED:17957569 Aspirin-induced platelet inhibition in patients undergoing cardiac surgery. Platelet function and response to pharmacological inhibition are altered by cardiac surgery. For example, aggregation is increased early after aortic valve replacement (AVR) and platelet response to aspirin is often insufficient after coronary artery bypass grafting (CABG). We hypothesized that the effect of aspirin administration after cardiac surgery might be impaired due to platelet activation. Therefore, the antiplatelet effect of aspirin was compared in patients (n = 20 per group) after CABG and AVR surgery (bileaflet prosthesis). Arachidonic acid-induced aggregation (turbidimetry) and thromboxane formation (radioimmunoassay) were determined before and 1, 5, and 10 days after surgery. In CABG-patients, antiplatelet treatment had been discontinued 10 days before surgery. Oral aspirin was started on day 1 after CABG. AVR-patients did not receive oral aspirin. Before surgery, platelet aggregation and thromboxane formation were significantly higher in patients with aortic stenosis. After CABG, thromboxane formation was not significantly changed from control values before surgery (66 ± 13% on day 10) despite oral aspirin treatment, whereas thromboxane formation in patients undergoing AVR significantly increased compared to values before surgery (216 ± 29% on day 10). In both groups of patients, 100 micromol/l aspirin in vitro largely inhibited platelet function before surgery, with markedly attenuated effects after surgery. In conclusion, thromboxane formation increased after AVR but not after CABG. The antiplatelet effect of aspirin, therefore, may be impaired after CABG by increased platelet activity. An additional in vitro "resistance" of platelets was seen after both CABG and AVR. Answer: Thromboxane production in the human lung during cardiopulmonary bypass (CPB) is a significant concern as it can reflect a systemic response to the surgical procedure or an increased pulmonary production of thromboxane in response to ischemia and reperfusion.
A study measured thromboxane B2 levels in the right and left atria of patients undergoing coronary artery bypass grafting and found that levels were elevated throughout CPB. Notably, left atrial thromboxane B2 levels were significantly higher than right atrial levels during CPB, suggesting that the lungs, particularly ischemic pulmonary tissue and intravascular hematologic components, are a primary source of thromboxane production during CPB. Aspirin treatment (100 mg/day) for at least one month before operation significantly reduced thromboxane B2 levels during CPB by 73% in the right atrium and by 69% in the left atrium compared with levels in patients not treated with aspirin. This indicates that aspirin can markedly reduce thromboxane production during CPB and may play a major role in preventing pulmonary injury after operations involving CPB (PUBMED:9456103). Furthermore, aspirin has been shown to have a beneficial effect on renal function post-CPB, potentially mediated by its antiplatelet activity and thromboxane inhibition (PUBMED:14681089). However, the antiplatelet effect of aspirin may be impaired after cardiac surgery due to platelet activation, as seen in patients undergoing coronary artery bypass grafting (CABG) where thromboxane formation was not significantly changed from control values before surgery despite oral aspirin treatment (PUBMED:17957569). In conclusion, aspirin appears to have a beneficial effect on reducing thromboxane production in the human lung during CPB, which may help in preventing pulmonary injury associated with the procedure. However, the effectiveness of aspirin in inhibiting platelet function post-surgery may be variable and could be influenced by the surgical procedure and the level of platelet activation.
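Two simple quantities underlie the answer above: the left-to-right atrial TxB2 gradient, read as net pulmonary thromboxane release, and the aspirin effect expressed as a percent reduction. A minimal sketch; the concentrations are placeholders, since the abstract reports only the percentages:

```python
def transpulmonary_gradient(la_txb2, ra_txb2):
    """Left-atrial minus right-atrial TxB2; a positive value suggests net
    pulmonary thromboxane release, the pattern reported in PUBMED:9456103."""
    return la_txb2 - ra_txb2

def percent_reduction(control, treated):
    return 100.0 * (control - treated) / control

# Placeholder concentrations (pg/mL), for illustration only.
print(transpulmonary_gradient(la_txb2=420.0, ra_txb2=310.0))  # 110.0
print(percent_reduction(control=310.0, treated=83.7))         # ~73%, the reported right-atrial reduction
```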
Instruction: Is there increased sympathetic activity in patients with hypertrophic cardiomyopathy? Abstracts: abstract_id: PUBMED:26457274 Cardiac sympathetic activity in hypertrophic cardiomyopathy and Tako-tsubo cardiomyopathy. 123I-meta-iodobenzylguanidine (123I-mIBG) scintigraphy has been established as an important technique to evaluate cardiac sympathetic function and it has been shown to be of clinical value, especially for the assessment of prognosis, in many cardiac diseases. The majority of 123I-mIBG scintigraphy studies have focused on patients with cardiac dysfunction due to hypertension, ischemic heart disease, or valvular disease. However less is known about the role of 123I-mIBG scintigraphy in primary cardiomyopathies. This overview shows the clinical value of 123I-mIBG scintigraphy in two types of primary cardiomyopathy: The genetic hypertrophic cardiomyopathy (HCM) and the acquired Tako-tsubo cardiomyopathy (TCM). Cardiac sympathetic activity is increased in HCM and correlates to the septal wall thickness and consequently to the LVOT obstruction. Moreover, increased cardiac sympathetic activity correlates with impaired diastolic and systolic LV function. In addition, 123I-mIBG scintigraphy may be useful for determining the risk of developing congestive heart failure and ventricular tachycardia in these patients. In TCM 123I-mIBG scintigraphy can be used to assess cardiac sympathetic hyperactivity. In addition, 123I-mIBG scintigraphy may identify those patients who are prone to TCM recurrence and may help to identify responders to individual (pharmacological) therapy. abstract_id: PUBMED:11479478 Increased QT dispersion does not reflect the increased regional variation of cardiac sympathetic nervous activity in hypertrophic cardiomyopathy. Background: QT dispersion (QTD) reflects regional variation of ventricular repolarization. However, the relationship between QTD and the regional variation of cardiac sympathetic nerve activity in hypertrophic cardiomyopathy (HCM) is not yet elucidated. Methods: Cardiac sympathetic nerve activity was evaluated in 25 patients with HCM by iodine 123 metaiodobenzylguanidine (MIBG) myocardial scintigraphy. With planar MIBG imaging, heart and mediastinum ratios (H/M) at early (20 minutes) and delayed (3 hours) acquisition and the washout rate (WR) were calculated. Polar maps of left ventricular myocardium were divided into 20 segments. The SD of early uptake (EU-SD), delayed uptake (DU-SD), and WR (WR-SD) in 20 segments as indices of regional variation were calculated. QT intervals were corrected by use of the Bazett formula. Results: Maximum QTc correlated positively with H/M early, WR, and left ventricular wall thickness (LVWT). Minimum QTc correlated positively with WR and LVWT. Corrected QTD (QTDc) correlated negatively with EU-SD, DU-SD, and WR-SD and positively with the interventricular septal thickness/posterior wall thickness ratio. Stepwise regression analysis revealed that the most powerful determinants for maximum QTc, minimum QTc, and QTDc were WR, LVWT, and EU-SD, respectively. Conclusions: QTD correlated negatively rather than positively with the regional variability index of cardiac sympathetic nerve activity. These results suggest that increased QTD in patients with HCM may not reflect increased heterogeneity of cardiac sympathetic nerve activity. abstract_id: PUBMED:29686011 Direct Recording of Cardiac and Renal Sympathetic Nerve Activity Shows Differential Control in Renovascular Hypertension. 
There is increasing evidence that hypertension is initiated and maintained by elevated sympathetic tone. Increased sympathetic drive to the heart is linked to cardiac hypertrophy in hypertension and worsens prognosis. However, cardiac sympathetic nerve activity (SNA) has not previously been directly recorded in hypertension. We hypothesized that directly recorded cardiac SNA levels would be elevated during hypertension and that baroreflex control of cardiac SNA would be impaired during hypertension. Adult ewes either underwent unilateral renal artery clipping (n=12) or sham surgery (n=15). Two weeks later, electrodes were placed in the contralateral renal and cardiac nerves to record SNA. Baseline levels of SNA and baroreflex control of heart rate and sympathetic drive were examined. Unilateral renal artery clipping induced hypertension (mean arterial pressure 109±2 versus 91±3 mm Hg in shams; P<0.001). The heart rate baroreflex curve was shifted rightward but remained intact. In the hypertensive group, cardiac sympathetic burst incidence (bursts/100 beats) was increased (39±14 versus 25±9 in normotensives; P<0.05), whereas renal sympathetic burst incidence was decreased (69±20 versus 93±8 in normotensives; P<0.01). The renal sympathetic baroreflex curve was shifted rightward and showed increased gain, but there was no change in the cardiac sympathetic baroreflex gain. Renovascular hypertension is associated with differential control of cardiac and renal SNA; baseline cardiac SNA is increased, whereas renal SNA is decreased. abstract_id: PUBMED:9476946 Increased lung uptake of iodine-123-MIBG in diabetics with sympathetic nervous dysfunction. Unlabelled: Scintigraphy with 123I-MIBG and 201Tl was compared in patients with various diseases including diabetes mellitus, with and without sympathetic nervous dysfunction. This study was done to assess lung uptake of these tracers semiquantitatively. Methods: Thirty-eight patients with diabetes mellitus, seven patients with dilated cardiomyopathy (DCM), 12 patients with hypertrophic cardiomyopathy (HCM) and eight healthy subjects were studied. Sympathetic nervous dysfunction was observed in 13 of the 38 diabetic patients. Simultaneous imaging with 123I-MIBG and 201Tl was performed. The ratio of lung to total injected dose count and washout rate in the lung were calculated from dynamic images acquired in the initial 2 min and static images acquired at 15 min and at 4 hr after injection of the tracers. Results: Lung uptake of 123I-MIBG at 4 hr was significantly increased in the diabetic group as compared with those in the other groups. In diabetic patients with sympathetic nervous dysfunction, the lung uptake ratio of 123I-MIBG at 4 hr was significantly higher than that in the diabetic patients without sympathetic nervous dysfunction, due to decreased clearance of 123I-MIBG from the lung. On the other hand, increased lung uptake of 201Tl was observed in DCM patients at both 15 min and 4 hr. There was no significant difference between lung uptake of 201Tl in diabetic patients and that in healthy subjects. Conclusion: Lung uptake of 123I-MIBG was increased and lung washout of 123I-MIBG was decreased in diabetic patients with sympathetic nervous dysfunction, while lung uptake of 201Tl was not altered. Iodine-123-MIBG scintigraphy of the lung may provide information on sympathetic nervous activity in diabetic patients.
It is a promising method for studying the kinetics of norepinephrine in the lung because MIBG is taken up in the lung by the same mechanism as norepinephrine. abstract_id: PUBMED:11801697 Heterogeneity of cardiac sympathetic nerve activity and systolic dysfunction in patients with hypertrophic cardiomyopathy. Unlabelled: Cardiac sympathetic nerve activity is changed in patients with hypertrophic cardiomyopathy (HCM). However, the relationship between heterogeneity of this activity and systolic left ventricular dysfunction in patients with HCM is not well established. This study was performed to evaluate the sympathetic nerve activity in various cardiac regions and to investigate the relationship between cardiac dysfunction and heterogeneity of the cardiac sympathetic nerve activity in patients with HCM. Methods: Cardiac sympathetic nerve activity was evaluated in 25 patients with HCM and 10 control subjects using planar imaging and SPECT by 123I-metaiodobenzylguanidine (MIBG) myocardial scintigraphy. With planar 123I-MIBG imaging, the heart-to-mediastinum activity ratios (H/M), at early (20 min) and delayed (3 h) acquisition, and the washout rate were calculated. Polar maps of the left ventricular myocardium were divided into 20 segments, and the dispersion (maximal to minimal values) and SD of uptake and the washout rate in 20 segments were calculated. Results: The early H/M did not differ between the 2 groups. The delayed H/M was significantly lower and the washout rate of the whole heart was significantly higher in the HCM group than those in the control group. In patients with HCM, the delayed H/M, early uptake dispersion, and SD of early uptake showed good correlation with the left ventricular end-diastolic and end-systolic dimensions and the percentage of fractional shortening. A stepwise regression analysis revealed that the SD of early uptake was a powerful determinant for the percentage of fractional shortening in patients with HCM. Conclusion: These results suggest that the heterogeneity of regional cardiac sympathetic nerve activity may be correlated with cardiac dysfunction in patients with HCM. abstract_id: PUBMED:7608453 Is there increased sympathetic activity in patients with hypertrophic cardiomyopathy? Objectives: This study aimed to assess autonomic nervous system activity in patients with hypertrophic cardiomyopathy. Background: Patients with hypertrophic cardiomyopathy are traditionally thought to have increased sympathetic activity. However, convincing evidence is lacking. Methods: Heart rate variability was assessed from 24-h ambulatory electrocardiographic (Holter) recordings in 31 patients with hypertrophic cardiomyopathy and 31 age- and gender-matched normal control subjects in a drug-free state. Spectral heart rate variability was calculated as total (0.01 to 1.00 Hz), low (0.04 to 0.15 Hz) and high (0.15 to 0.40 Hz) frequency components using fast Fourier transformation analysis. Results: There was a nonsignificant decrease in the total frequency component of heart rate variability in patients with hypertrophic cardiomyopathy compared with that of normal subjects (mean ± SD 7.24 ± 0.88 versus 7.59 ± 0.57 ln[ms2], p = 0.072). Although there was no significant difference in the high frequency component (5.31 ± 1.14 versus 5.40 ± 0.91 ln[ms2], p = 0.730), the low frequency component was significantly lower in patients than in normal subjects (6.25 ± 1.00 versus 6.72 ± 0.61 ln[ms2], p = 0.026).
After normalization (i.e., division by the total frequency component values), the low frequency component was significantly decreased (38 ± 8% versus 43 ± 8%, p = 0.018) and the high frequency component significantly increased (16 ± 6% versus 12 ± 6%, p = 0.030) in patients with hypertrophic cardiomyopathy. The low/high frequency component ratio was significantly lower in these patients (0.94 ± 0.64 versus 1.33 ± 0.55, p = 0.013). In patients with hypertrophic cardiomyopathy, heart rate variability was significantly related to left ventricular end-systolic dimension and left atrial dimension but not to maximal left ventricular wall thickness. No significant difference in heart rate variability was found between 14 victims of sudden cardiac death and 10 age- and gender-matched low risk patients. Conclusions: Our observations suggest that during normal daily activities, patients with hypertrophic cardiomyopathy experience a significant autonomic alteration with decreased sympathetic tone. abstract_id: PUBMED:11087300 The importance of the sympathetic nervous system and systolic hypertension in patients with hypertension: benefits in treating patients with increased cardiovascular risk. The sympathetic nervous system is intimately involved in both the origin and the perpetuation of the hypertensive state, although there is some debate as to its precise contribution to the long-term hypertensive condition. Catecholamines are leading candidates to be both the pressor mechanism that initiates the rise in blood pressure and the trophic mechanism that maintains hypertension by vascular hypertrophy. Although the relationship between the sympathetic nervous system and systolic blood pressure would seem intuitive, the multiple interactive pathways between the two systems still need to be more completely clarified, particularly as they relate to interplay with the renin-angiotensin axis. Along this line of reasoning, a number of studies have examined the interplay between ACE inhibitors or angiotensin-receptor antagonists and alterations in sympathetic nervous system activity. Recent data support a role for one of the angiotensin-receptor antagonists, eprosartan, in having a potent inhibitory effect on pre-synaptic type-1 angiotensin receptors (AT1 receptors) involved in the enhancement of sympathetic nervous system activity. This may provide an explanation for the differential effect on systolic blood pressure exhibited by this compound. abstract_id: PUBMED:9651730 Effect of sympathetic overactivity on cardiovascular prognosis in hypertension. Increased sympathetic tone is found in about 30% of patients with hypertension. This abnormality is closely associated with the metabolic syndrome of dyslipidaemia and hyperinsulinaemia. In this short review we discuss the mechanisms by which sympathetic over-activity could cause the metabolic syndrome. Sympathetic stimulation enhances cardiac and vascular hypertrophy. Left ventricular hypertrophy is a strong predictor of poor cardiovascular outcomes. Hypertrophy of resistance vessels accelerates hypertension, whereas hypertrophy of smaller coronary vessels limits coronary reserve and increases tendency for coronary spasms. Epidemiologically, high haematocrit is associated with hypertension and is recognized as an independent coronary risk factor. Sympathetic stimulation increases haematocrit through an increase of post-capillary vascular resistance.
Sympathetic over-activity is also associated with platelet activation which may further add to the risk of coronary thrombosis in neurogenic hypertension. Tachycardia, which is due to increased sympathetic and decreased parasympathetic tone, is a hallmark of neurogenic hypertension. Fast heart rate is a strong predictor of coronary events and sudden death. The mechanisms by which tachycardia increases the cardiovascular risk are outlined. abstract_id: PUBMED:22233452 Decreased adrenergic tone in acromegaly: evidence from direct recording of muscle sympathetic nerve activity. Objective: Sympathovagal imbalance has been shown in acromegaly by indirect measurements of adrenergic tone. Data regarding direct measurement of sympathetic activity are lacking as yet. Aim of this study was to assess the adrenergic tone through direct recording of muscle sympathetic nerve activity (MSNA) in acromegalic patients. Design: Fifteen patients (age 26-66 years, eight women) with newly diagnosed active acromegaly without hyperprolactinaemia, pituitary hormone deficiencies, obstructive sleep apnoea and cardiac hypertrophy, and 15 healthy subjects matched for age, sex and body mass index were recruited. After evaluating anthropometric and echocardiographic parameters, anterior pituitary function, glucose and lipid metabolism, and measuring plasma leptin, direct recording of sympathetic outflow via the microneurographic technique was performed. Results: For similar anthropometric and metabolic parameters in patients and controls, HOMA index was significantly increased in the former (4.2 ± 2.39 vs 1.6 ± 0.19, P < 0.001). Surprisingly, this finding of insulin resistance was accompanied by a marked sympathetic inhibition (MSNA 18.3 ± 8.10 vs 37.3 ± 6.48 bursts/min, P < 0.0001, respectively in patients and controls). A reduction in plasma leptin (1.6 ± 1.04 vs 6.5 ± 2.01 μg/l, P < 0.0001) was also recorded in the patients. MSNA was positively correlated with leptin (P < 0.0001). Conclusions: Newly diagnosed acromegalic patients without cardiac hypertrophy display a decreased sympathetic outflow in spite of insulin resistance. This finding might be related to hypoleptinaemia. abstract_id: PUBMED:26493305 Effects of catheter-based renal denervation on cardiac sympathetic activity and innervation in patients with resistant hypertension. Objectives: To investigate whether renal denervation (RDN) has a direct effect on cardiac sympathetic activity and innervation density. Background: RDN demonstrated its efficacy not only in reducing blood pressure (BP) in certain patients, but also in decreasing cardiac hypertrophy and arrhythmias. These pleiotropic effects occur partly independent from the observed BP reduction. Methods: Eleven patients with resistant hypertension (mean office systolic BP 180 ± 18 mmHg, mean antihypertensive medications 6.0 ± 1.5) underwent I-123-mIBG scintigraphy to exclude pheochromocytoma. We measured cardiac sympathetic innervation and activity before and 9 months after RDN. Cardiac sympathetic innervation was assessed by heart to mediastinum ratio (H/M) and sympathetic activity by wash out ratio (WOR). Effects on office BP and 24-h ambulatory BP monitoring were documented. Results: Office systolic BP and mean ambulatory systolic BP were significantly reduced from 180 to 141 mmHg (p = 0.006) and from 149 to 129 mmHg (p = 0.014), respectively. Cardiac innervation remained unchanged before and after RDN (H/M 2.5 ± 0.5 versus 2.6 ± 0.4, p = 0.285).
Cardiac sympathetic activity was significantly reduced by 67 % (WOR decreased from 24.1 ± 12.7 to 7.9 ± 25.3 %, p = 0.047). Both, responders and non-responders experienced a reduction of cardiac sympathetic activity. Conclusion: RDN significantly reduced cardiac sympathetic activity thereby demonstrating a direct effect on the heart. These changes occurred independently from BP effects and provide a pathophysiological basis for studies, investigating the potential effect of RDN on arrhythmias and heart failure. Answer: Yes, there is increased sympathetic activity in patients with hypertrophic cardiomyopathy (HCM). Studies using 123I-meta-iodobenzylguanidine (123I-mIBG) scintigraphy, which evaluates cardiac sympathetic function, have shown that cardiac sympathetic activity is increased in HCM and correlates with septal wall thickness and left ventricular outflow tract (LVOT) obstruction. This increased sympathetic activity is also associated with impaired diastolic and systolic left ventricular (LV) function (PUBMED:26457274). Additionally, heterogeneity of cardiac sympathetic nerve activity may be correlated with cardiac dysfunction in patients with HCM (PUBMED:11801697). However, it is important to note that increased QT dispersion (QTD), which reflects regional variation of ventricular repolarization, does not necessarily reflect the increased regional variation of cardiac sympathetic nervous activity in HCM (PUBMED:11479478). Despite traditional beliefs that patients with HCM have increased sympathetic activity, one study suggests that during normal daily activities, patients with HCM experience a significant autonomic alteration with decreased sympathetic tone (PUBMED:7608453). This highlights the complexity of autonomic nervous system alterations in HCM and the need for further research to fully understand the relationship between sympathetic activity and HCM.
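Several abstracts above (PUBMED:26457274, PUBMED:11801697, PUBMED:26493305) rest on two planar 123I-mIBG indices: the heart-to-mediastinum ratio (H/M) and the washout rate between early and delayed images. A minimal sketch of the usual definitions; exact formulas vary between studies, and many add mediastinal background subtraction and 123I decay correction, which are omitted here:

```python
def heart_mediastinum_ratio(heart_counts, mediastinum_counts):
    """H/M ratio from mean counts per pixel in cardiac and mediastinal ROIs."""
    return heart_counts / mediastinum_counts

def washout_rate(heart_early, heart_delayed):
    """Percent washout between early (~20 min) and delayed (~3-4 h) images;
    the simplest uncorrected form."""
    return 100.0 * (heart_early - heart_delayed) / heart_early

# Illustrative ROI counts only.
print(heart_mediastinum_ratio(180.0, 72.0))  # H/M = 2.5
print(washout_rate(180.0, 130.0))            # WR ~ 27.8%
```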
Instruction: Is surgical education associated with additional costs? Abstracts: abstract_id: PUBMED:23619653 Is surgical education associated with additional costs? A controlled economic study on the German DRG System for primary TKA Aim: Total knee arthroplasty (TKA) is one of the most common procedures in orthopaedic surgery, yet the cost of surgical training has not yet been quantified. In a pilot study, we investigated the economic impact of surgical training under DRG system influences, analysing the cost-proceeds structure of surgical training for orthopaedic residents. Methods: Consecutive TKAs were performed by the most experienced surgeon (Group A), who had implanted ≥ 1000 TKAs, another attending (Group B) with ≥ 200 TKAs, and a resident (Group C) who had assisted in 25 TKAs (n = 30 patients per Group A-C). All patients were embedded in a standardised clinical pathway. By analysing cost parameters such as the number of blood transfusions, operating time, and length of hospital stay, we investigated the health care-related costs matched against the DRG-based financial reimbursement. Data were analysed by analysis of variance followed by a post-hoc Scheffé procedure. Results: On the one hand, the resident generated additional costs of 1111.7 ± 97 € in comparison to the Group A surgeon and 1729.8 ± 152 € compared to the attending in Group B (p > 0.05); these were generated by longer hospital stay, longer operation time, and higher resource use. On the other hand, there were significantly higher proceeds for Group C in comparison to the attending Group B and also to Group A: 474.78 ± 82 € vs. Group A and 150.54 ± 52 € vs. Group B (p < 0.05). This was generated both by a higher patient clinical complexity level (PCCL) and by increased complication rates, resulting in a consecutively augmented profit through grouping these patients into a more lucrative DRG. Overall, the deficit per patient treated by the resident is 637 ± 77 € vs. Group A and 1579.3 ± 137 € vs. Group B (p > 0.05). Conclusion: The German DRG matrix results in higher profits accounted to the learning surgeon through an increased PCCL-relevant status and grouping of the case into a more profitable DRG. Hereby, the additional costs are only partly reimbursed. Surgical education of residents is associated with additional costs for the hospital. These costs have to be reimbursed to allow good surgical training at hospitals with good teaching conditions. abstract_id: PUBMED:35055586 Estimated Costs Associated with Surgical Site Infections in Patients Undergoing Cholecystectomy. Among healthcare-associated infections, surgical site infections (SSIs) are the most frequent in Spain. The aim of this work was to estimate the costs of SSIs in patients who underwent a cholecystectomy at the Hospital General Universitario de Alicante (Spain) between 2012-2017. This was a prospective observational cohort study. The Active Epidemiological Surveillance Program at our hospital recorded all the cholecystectomies performed. Risk factors associated with the development of SSIs were determined by multivariate analysis and two homogeneous comparison groups were obtained by using the propensity score. The number of extra days of hospital stay was recorded for patients with an SSI and, with the cost per hospitalised day data, the additional cost attributed to SSIs was calculated. A total of 2200 cholecystectomies were considered; 110 patients (5.0%) developed an SSI.
The average length of hospital stay was 5.6 days longer among patients with an SSI. The cost per SSI was EUR 1890.60 per patient, with the total cost for this period being EUR 207,961.60. SSIs after cholecystectomy lead to a prolongation of hospital stay and an increase in economic costs. It is essential to implement infection surveillance and control programs to reduce SSIs, improve patient safety, and reduce economic burden. abstract_id: PUBMED:32721417 Additional medical costs associated with ventilator-associated pneumonia in an intensive care unit in Japan. Background: Additional health care costs associated with ventilator-associated pneumonia (VAP) vary widely by country and have not been explored in Japan. Thus, we aimed to examine the economic and clinical effects of VAP in Japan. Methods: This was a retrospective matched case-control study of 22 patients with VAP who were treated in the intensive care unit of Yokohama Rosai Hospital between January 2012 and December 2018. Twenty-two matched controls were selected based on 5 variables (i.e., sex, age, diagnosis and surgical procedure, underlying disease with or without advanced malignant tumor, and best motor response). The additional health care costs incurred owing to VAP were calculated from the difference between the mean costs of VAP and control cases. Results: VAP incurred an additional cost of approximately United States Dollars (USD) 34,884 per case. The length of hospitalization itself was the major factor contributing to additional medical costs, generating a difference of 9,824 USD. Discussion: VAP not only worsens patient outcomes but also generates significant additional medical costs. Patients who had developed VAP required more medical resources, such as the performance of a tracheostomy. Conclusions: VAP incurs a higher mean total hospital medical cost. Thus, appropriate infection control strategies should be implemented. abstract_id: PUBMED:24814582 Patient education after stoma creation may reduce health-care costs. Introduction: Researchers are urged to include health-economic assessments when exploring the benefits and drawbacks of a new treatment. The aim of the study was to assess the costs associated with the establishment of a new patient education programme for patients with a stoma. Material And Methods: Following a previous case-control study that explored the effect of patient education for stoma patients, we set out to examine the costs related to such a patient education programme. The primary outcome was disease-specific health-related quality of life measured with the Ostomy Adjustment Scale six months after surgery. The secondary outcome was generic health-related quality of life measured with Short Form (SF)-36. In this secondary analysis, we calculated direct health-care costs for the first six months post-operatively from the perspective of the health-care system, including costs related to the hospital as well as primary health care. Results: Establishing the patient education programme produced no significant increase in overall average costs. However, we found a significant reduction in costs related to unplanned readmissions (p = 0.01) as well as a reduction in visits to the general practitioner (p = 0.05). Conclusion: Establishing a patient education programme - which increased quality of life - will probably not increase the overall costs associated with the patient course.
Funding: The study received financial support from Søster Inge Marie Dahlgaards Fond, Diakonissestiftelsen, Denmark, and from Aase and Ejnar Danielsens Foundation, Denmark. Trial Registration: NCT01154725. abstract_id: PUBMED:24139667 A surgical team with focus on staff education in a community hospital improves outcomes, costs and patient satisfaction. Background: Outcomes, decreased costs, and patient satisfaction are the driving forces of a successful surgical practice. Methods: A surgical team was assembled on October 1, 2010, and educational sessions were implemented. The outcomes and costs for patients who underwent laparoscopic fundoplication and Heller myotomy before and after October 1, 2010, were compared. A Press Ganey patient satisfaction survey was mailed to all patients. Results: There were 268 procedures (103 before and 165 after October 1, 2010): 64 laparoscopic fundoplications and Heller myotomies (23 before and 41 after). There were significant reductions in median operating time (185 minutes [interquartile range {IQR}, 155 to 257 minutes] vs 126 minutes [IQR, 113 to 147 minutes]; P = .001), length of stay (2.0 days [IQR, 2.0 to 4.0 days] vs 1.0 day [IQR, 1.0 to 2.5 days]; P = .05), operating room costs ($2,407 [IQR, $2,171 to $2,893] vs $2,147 [IQR, $1,942 to $2,345]; P = .004), and hospital room costs ($937 [IQR, $799 to $2,159] vs $556 [IQR, $484 to $937]; P = .044). The survey showed significant improvements in patients' experiences in communication with nurses (P = .025), pain management (P = .000), communication about medications (P = .037), and discharge instructions (P = .024). Conclusions: Assembling a surgical team with focus on staff education has a significant impact on outcomes, costs, and patient satisfaction. abstract_id: PUBMED:38275312 Estimation of Additional Costs in Patients with Ventilator-Associated Pneumonia. Healthcare-associated infections (HAIs) present a global public health challenge, contributing to high morbidity and mortality and substantial economic burdens. Ventilator-associated pneumonia (VAP) ranks as the second most prevalent HAI in intensive care units (ICUs), emphasizing the need for economic analyses in this context. This retrospective cohort study, conducted at the General Hospital of Alicante from 2012 to 2019, aimed to assess additional costs related to VAP by comparing the extended length of stay for infected and non-infected ICU patients undergoing mechanical ventilation (MV) for more than 48 h. Employing propensity score matching, 434 VAP patients were compared to an equal number without VAP. The findings indicate a significantly longer mechanical ventilation period for VAP patients (17.40 vs. 8.93 days, p < 0.001), resulting in an extra 13.56 days of stay and an additional cost of EUR 20,965.28 per VAP episode. The study estimated a total cost of EUR 12,348,965.28 for VAP during the study period, underscoring the economic impact of VAP. These findings underscore the urgent need for rigorous infection surveillance, prevention, and control measures to enhance healthcare quality and reduce overall expenditures. abstract_id: PUBMED:31466894 The Value of Surgical Graduate Medical Education (GME) Programs Within An Integrated Health Care System. Objective: Surgical graduate medical education (GME) programs add both significant cost and complexity to the mission of teaching hospitals. While expenses tied directly to surgical training programs are well tracked, overall cost-benefit accounting has not been performed.
In this study, we attempt to better define the costs and benefits of maintaining surgical GME programs within a large integrated health system. Design: We examined the costs, in 2018 US dollars, associated with the surgical training programs within a single health system. Total health system expenses were calculated using actual and estimated direct GME expenses (salary, benefits, supplies, overhead, and teaching expenses) as well as indirect medical education (IME) expenses. IME expenses for each training program were estimated by using both Medicare percentages and the Medicare Payment Advisory Commission study. The projected cost to replace surgical trainees with advanced practitioners or hospitalists was obtained through interviews with program directors and administrators and was validated by our system's business office. Setting: A physician-led, integrated, rural health system consisting of 8 hospitals, a medical school, and a health insurance company. Participants: GME surgical training programs within a single health system's department of surgery. Results: Our health system's department of surgery supports 8 surgical GME programs (2 general surgery residencies along with residencies in otolaryngology, ophthalmology, oral-maxillofacial surgery, urology, pediatric dentistry, and vascular surgery), encompassing 89 trainees. Trainees work an average of 64.4 hours per week. Total health system cost per resident ranged from $249,657 to $516,783 based on specialty as well as method of calculating IME expenses. After averaging program costs and excluding IME and overhead expenses, we estimated the average annual cost per trainee to be $84,171. We projected that replacing our surgical trainees would require hiring 145 additional advanced practitioners at a cost of $166,500 each per year, or 97 hospitalists at a cost of $346,500 each per year. Excluding overhead, teaching and IME expenses, these replacements would cost the health system an estimated additional $16,651,281 or $26,119,281 per year, respectively. Conclusions: Surgical education is an integral part of our health system, and ending surgical GME programs would require a large expansion of human resources and significant additional fiscal capital. abstract_id: PUBMED:23997674 The impact of health care economics on surgical education. Just like the world economy in 2012, health care is in a state of flux. The current economic environment will impact not only current colorectal surgery residents, but also future generations of surgical trainees. To understand the economic impact of the current health care environment on colorectal surgery residencies, we need to know the basics of graduate medical education (GME) funding for all residents. Since the 1960s, with the initiation of Medicare, the federal government through the Centers for Medicare and Medicaid Services (CMS) has been the largest source of GME funding. There are two types of costs associated with GME. Direct GME (DME) funding covers costs directly attributed to the training of residents. These costs include residents' stipends, salaries, and benefits; cost of supervising faculty; direct program administration costs; overhead; and malpractice coverage. Indirect GME (IME) costs are payments to hospitals as an additional or add-on payment for the increased cost of care that is generally found in teaching hospitals. In 2010, President Barack Obama signed into law H.R. 3200, the Patient Protection and Affordable Care Act (PPACA).
In 2012, the Supreme Court held that the majority of the PPACA is constitutional. Although the true impact of this bill is unknown, it will change the formula for Medicare GME reimbursement as well as shift unused residency positions to primary care. abstract_id: PUBMED:24848779 Costs Associated With Surgical Site Infections in Veterans Affairs Hospitals. Importance: Surgical site infections (SSIs) are potentially preventable complications that are associated with excess morbidity and mortality. Objective: To determine the excess costs associated with total, deep, and superficial SSIs among all operations and for high-volume surgical specialties. Design, Setting, And Participants: Surgical patients from 129 Veterans Affairs (VA) hospitals were included. The Veterans Health Administration Decision Support System and VA Surgical Quality Improvement Program databases were used to assess costs associated with SSIs among VA patients who underwent surgery in fiscal year 2010. Main Outcomes And Measures: Linear mixed-effects models were used to evaluate incremental costs associated with SSIs, controlling for patient risk factors, surgical risk factors, and hospital-level variation in costs. Costs of the index hospitalization and subsequent 30-day readmissions were included. Additional analysis determined potential cost savings of quality improvement programs to reduce SSI rates at hospitals with the highest risk-adjusted SSI rates. Results: Among 54,233 VA patients who underwent surgery, 1756 (3.2%) experienced an SSI. Overall, 0.8% of the cohort had a deep SSI, and 2.4% had a superficial SSI. The mean unadjusted costs were $31,580 and $52,620 for patients without and with an SSI, respectively. In the risk-adjusted analyses, the relative costs were 1.43 times greater for patients with an SSI than for patients without an SSI (95% CI, 1.34-1.52; difference, $11,876). Deep SSIs were associated with 1.93 times greater costs (95% CI, 1.71-2.18; difference, $25,721), and superficial SSIs were associated with 1.25 times greater costs (95% CI, 1.17-1.35; difference, $7003). Among the highest-volume specialties, the greatest mean cost attributable to SSIs was $23,755 among patients undergoing neurosurgery, followed by patients undergoing orthopedic surgery, general surgery, peripheral vascular surgery, and urologic surgery. If hospitals in the highest 10th percentile (i.e., the worst hospitals) reduced their SSI rates to the rates of the hospitals in the 50th percentile, the Veterans Health Administration would save approximately $6.7 million per year. Conclusions And Relevance: Surgical site infections are associated with significant excess costs. Among analyzed surgery types, deep SSIs and SSIs among neurosurgery patients are associated with the highest risk-adjusted costs. Large potential savings per year may be achieved by decreasing SSI rates.
Design: Retrospective case-control study. Setting: Four academic medical centers. Patients: Children aged 0-22 years seen between 2010 and 2015 and at risk for HAI as identified by electronic queries. Methods: Chart review adjudicated HAIs. Charges were obtained for patients with HAIs and matched controls 30 days before HAI, on the day of, and 30 days after HAI. Charges were converted to costs and adjusted to 2015 USD. Mixed-effects linear regression was used to estimate the difference-in-differences of HAI case versus control costs in 2 models: one treating unrecorded charge values as missing, and a sensitivity analysis treating unrecorded charges as $0. Results: Our search identified 177 patients with ambulatory CLABSIs, 53 with ambulatory CAUTIs, and 26 with SSIs following ambulatory surgery who were matched with 382, 110, and 75 controls, respectively. Additional cost associated with an ambulatory CLABSI was $5,684 (95% confidence interval [CI], $1,005-$10,362) and $6,502 (95% CI, $2,261-$10,744) in the 2 models; cost associated with a CAUTI was $6,660 (95% CI, $1,055, $12,145) and $2,661 (95% CI, -$431 to $5,753); cost associated with an SSI following ambulatory surgery at 1 institution only was $6,370 (95% CI, $4,022-$8,719). Conclusions: Ambulatory HAIs in pediatric patients are associated with significant additional costs. Further work is needed to reduce ambulatory HAIs. Answer: Yes, surgical education is associated with additional costs. A study analyzing the economic impact of surgical training under the German DRG system for primary total knee arthroplasty (TKA) found that residents generated additional costs due to longer hospital stays, longer operation times, and a higher need for resources. Although there were higher proceeds due to a higher patient clinical level of complexity (PCCL) and increased complication rates, which resulted in patients being grouped into a more lucrative DRG, the additional costs were only partly redeemed. The deficit per patient treated by a resident was significant when compared to more experienced surgeons (PUBMED:23619653).
For instance, patient education after stoma creation did not significantly increase overall costs and led to a reduction in unplanned readmissions and visits to the general practitioner (PUBMED:24814582). Similarly, a surgical team with a focus on staff education in a community hospital improved outcomes, reduced costs, and enhanced patient satisfaction (PUBMED:24139667). In summary, while surgical education does incur additional costs, there are also instances where educational interventions can lead to cost savings and improved healthcare outcomes.
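The cost figures quoted in PUBMED:23619653 and PUBMED:31466894 can be reconciled with simple arithmetic. The Python sketch below assumes the reported deficits are extra costs net of extra proceeds, and that the health-system replacement figures are net of current trainee costs; both readings are implied, but not spelled out, by the abstracts.

    # Resident deficit per TKA (PUBMED:23619653): extra cost minus extra proceeds
    print(1111.7 - 474.78)   # ~636.9 EUR vs. Group A (reported: 637)
    print(1729.8 - 150.54)   # ~1579.3 EUR vs. Group B (reported: 1579.3)

    # Net cost of replacing 89 surgical trainees (PUBMED:31466894)
    current = 89 * 84_171            # USD/year currently spent on trainees
    print(145 * 166_500 - current)   # 16,651,281 (matches the abstract)
    print(97 * 346_500 - current)    # 26,119,281 (matches the abstract)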
Instruction: Right ventricle-to-pulmonary artery conduit longevity: is it related to allograft size? Abstracts: abstract_id: PUBMED:34316981 Successful TPV Implantation in a Pregnant Patient With Right Ventricle to Pulmonary Artery Conduit Obstruction. A patient with repaired double outlet right ventricle presented during early gestation with heart failure symptoms due to severe right ventricle-pulmonary artery conduit stenosis and insufficiency. In the first trimester, she underwent transcatheter therapy with Melody pulmonary valve implantation with excellent hemodynamic results and completed pregnancy without significant maternal complications. (Level of Difficulty: Advanced.) abstract_id: PUBMED:23804005 Left Main Coronary Artery Compression by Right Ventricle-to-Pulmonary Artery Conduit Relieved by Anterior Translocation of the Right Pulmonary Artery. Pulmonary artery translocation has been described as an alternative surgical strategy to treat anomalous aortic origin of the coronary artery from the wrong sinus of Valsalva. We describe another application of this strategy in a patient who underwent complete repair of pulmonary atresia with ventricular septal defect and presented years later with ischemic symptoms due to compression of the left main coronary artery by the right ventricular-to-pulmonary artery conduit. To our knowledge, this is the first application of the pulmonary artery translocation technique to treat such a problem. abstract_id: PUBMED:17720398 Right ventricle-to-pulmonary artery conduit longevity: is it related to allograft size? Background: Cryopreserved valved allografts are routinely oversized to account for somatic growth in children requiring right ventricle-to-pulmonary artery (RV-PA) continuity. The objective of this study is to determine the effect of oversizing on conduit longevity. Methods: We reviewed the records of all patients undergoing RV-PA cryopreserved valved allograft placement from 1988 to 2006 for diagnosis, age, allograft type, time to valved conduit explant, and indication for surgery. Conduit size at the time of insertion was compared with pulmonary valve size normalized for body surface area (z score). Multivariate Cox regression models with cluster analysis were constructed to assess the risk of allograft oversizing for conduit failure. Kaplan-Meier analysis was used to obtain median freedom from explantation time. Results: A total of 140 cryopreserved valved allografts (z score, 1.8 ± 1.3; range, -1.5 to 4.9) were implanted in 99 patients (median age, 5.6 years). Reoperation was required in 66 (67%) of 99 patients during the study period. Modeling z scores as a dichotomous variable revealed that the risk of allograft explantation increases 113% when allografts with z scores of 2.7 or higher are used compared with those with z scores of less than 2.7 (p < 0.01). Median adjusted freedom from explantation for this same grouping was 4.9 years versus 9.4 years. The presence of branch pulmonary artery stenosis shortens the conduit life (p < 0.001), whereas insertion of a pulmonary allograft may extend the conduit life (p = 0.13). Conclusions: Cryopreserved valved allograft oversizing (z score ≥ 2.7) in the pulmonary position results in decreased longevity in children. Presence of branch pulmonary artery stenosis is also associated with earlier conduit explantation. abstract_id: PUBMED:34966931 Tricuspid annuloplasty ring for external support of right ventricle-to-pulmonary artery conduit.
We describe the use of a rigid tricuspid annuloplasty ring as an external support to avoid compression of a right ventricle-to-pulmonary artery conduit. abstract_id: PUBMED:38174893 Outcomes after Biventricular Repair Using a Conduit between the Right Ventricle and Pulmonary Artery in Infancy. Background: This study investigated the outcomes of biventricular repair using right ventricle to pulmonary artery (RV-PA) conduit placement in patients aged <1 year. Methods: Patients aged <1 year who underwent biventricular repair using an RV-PA conduit between 2011 and 2020 were included in this study. The outcomes of interest were death from any cause, conduit reintervention, and conduit dysfunction (peak velocity of ≥3.5 m/sec or moderate or severe regurgitation). Results: In total, 141 patients were enrolled. The median age at initial conduit implantation was 6 months. The median conduit diameter z-score was 1.3. The overall 5-year survival rate was 89.6%. In the multivariable analysis, younger age (p=0.006) and longer cardiopulmonary bypass time (p=0.001) were risk factors for overall mortality. During follow-up, 61 patients required conduit reintervention, and conduit dysfunction occurred in 68 patients. The 5-year freedom from conduit reintervention and dysfunction rates were 52.9% and 45.9%, respectively. In the multivariable analysis, a smaller conduit z-score (p<0.001) was a shared risk factor for both conduit reintervention and dysfunction. Analysis of variance demonstrated a nonlinear relationship between the conduit z-score and conduit reintervention or dysfunction. The hazard ratio was lowest in patients with a conduit z-score of 1.3 for reintervention and a conduit z-score of 1.4 for dysfunction. Conclusion: RV-PA conduit placement can be safely performed in infants. A significant number of patients required conduit reintervention and had conduit dysfunction. A slightly oversized conduit with a z-score of 1.3 may reduce the risk of conduit reintervention or dysfunction. abstract_id: PUBMED:25725927 Outcome of right ventricle to pulmonary artery conduit for biventricular repair. Background: The objective of this study was to assess the outcomes of the right ventricle to pulmonary artery conduit for biventricular repair. Methods: This is a retrospective review of all right ventricle to pulmonary artery conduit operations for biventricular repair of congenital heart disease between 1982 and 2013 at a single institution. Results were compared among conduit sizes and materials. Results: A total of 476 physiologic right ventricle to pulmonary artery conduit operations were identified, with 195 pulmonary homografts, 105 handmade valved expanded polytetrafluoroethylene conduits, 103 Medtronic Hancock (Minneapolis, MN) bioprosthetic valved conduits, 38 non-valved expanded polytetrafluoroethylene tubes, and 35 others. The actuarial survival was 92.4% and the freedom from conduit reoperation was 33.0% at 20 years. The freedom from conduit reoperation was significantly different among conduit materials (76.8%, 92.1%, 81.9%, 80.6%, and 63.8% for pulmonary homograft, valved expanded polytetrafluoroethylene conduit, Hancock conduit, non-valved expanded polytetrafluoroethylene tube, and others at 5 years, p = 0.0001).
The Cox proportional hazards model showed that age (p < 0.001 and p = 0.04), preoperative diagnosis (p < 0.001 and p < 0.001), conduit size (p < 0.001 and p < 0.001), and conduit material (the valved expanded polytetrafluoroethylene conduit versus combined other materials; p = 0.01 and p = 0.02, respectively) were significant factors for the freedom from conduit reoperation both when treating conduit size as a categorical predictor and as a quantitative predictor. Conclusions: The handmade valved expanded polytetrafluoroethylene conduit showed excellent early outcome as a right ventricle to pulmonary artery conduit for biventricular repair. A longer follow-up and a randomized study will be necessary to explore the advantages of the valved expanded polytetrafluoroethylene conduit. abstract_id: PUBMED:21976874 Short-term outcome comparison of Norwood procedures with right ventricle to pulmonary artery conduit versus modified Blalock-Taussig shunt: A meta-analysis. A small number of nonrandomized and retrospective studies have compared outcomes of classical Norwood procedures for hypoplastic left-heart syndrome and single ventricle lesions involving a Blalock-Taussig (BT) shunt to the modified procedure using a right ventricle to pulmonary artery conduit. Some of these studies reported data for the same outcomes, and a meta-analysis was done to analyze pooled outcomes comparing in-hospital mortality, interstage mortality, cardiopulmonary bypass time, systolic and diastolic blood pressures 24 h postoperatively, length of intensive care and hospital stay, and need for postoperative extracorporeal membrane oxygenation. The right ventricle to pulmonary artery conduit was associated with a nonsignificant reduction in in-hospital mortality (odds ratio, 0.674; 95% confidence interval, 0.367 to 1.238) and in the length of hospital stay. There were significant reductions in cardiopulmonary bypass time, length of intensive care unit stay, need for postoperative extracorporeal membrane oxygenation, postoperative ventilation times, and interstage mortality (odds ratio, 0.191; confidence interval, 0.0620 to 0.587). There was a significant increase in diastolic blood pressure and a nonsignificant increase in systolic blood pressure 24 h postoperatively. This pooled analysis demonstrates potential advantages associated with the right ventricle to pulmonary artery conduit when compared to the modified BT shunt in palliation and highlights the need for large randomized controlled trials that compare a number of outcomes in both procedures. abstract_id: PUBMED:32187360 Interstage management of pulmonary blood flow after the Norwood procedure with right ventricle-to-pulmonary artery conduit. Objectives: Our goal was to assess the efficacy of managing pulmonary blood flow from the Norwood procedure with a right ventricle-to-pulmonary artery (RV-PA) conduit until stage 2 palliation (S2P). Methods: Among 48 consecutive patients undergoing the Norwood procedure between 2008 and 2018, 40 (83.3%) patients who survived to discharge were included in this study. The primary diagnosis was hypoplastic left heart syndrome in 28 (70%) patients and hypoplastic left heart syndrome variant in 12 (30%) patients. All patients received bilateral pulmonary artery banding. The median age and weight at the time of the Norwood procedure were 41 (25th-75th percentiles: 27-89) days and 3.2 (2.7-3.9) kg, respectively.
In keeping with institutional strategy, S2P was undertaken when body weight exceeded 5.0 kg and normal gross motor development was confirmed. Results: The RV-PA conduit was clipped in 28 (70%) patients during the perioperative period of the Norwood procedure; partial unclipping was later performed in 8 (20%) patients and full unclipping in 20 (50%) patients. Before S2P, the median pulmonary-to-systemic blood flow ratio was 1.0 (0.7-1.3). The median age and weight at the time of S2P were 10.7 (9.0-12.9) months and 6.3 (5.5-7.1) kg, respectively. The survival rate 5 years after Norwood discharge was 85.3%. The pre-S2P pulmonary-to-systemic blood flow ratio was linearly correlated with greater interstage changes in systemic atrioventricular valve regurgitation (R² = 0.223, P = 0.004). Conclusions: Interstage management of pulmonary blood flow by RV-PA conduit clipping and gradual unclipping provided good interstage outcomes. The median pulmonary-to-systemic blood flow ratio could be controlled to 1.0 at the pre-S2P catheter examination. abstract_id: PUBMED:37634733 Association between postoperative exposure to fine particulate matter and patency of the right ventricle-pulmonary artery conduit. Background: Ambient air pollution is a leading risk factor for cardiovascular diseases. No study has investigated the association between exposure to fine particulate matter <2.5 μm in aerodynamic diameter (PM2.5) and the prognosis of patients undergoing surgery for the right ventricle-pulmonary artery (RV-PA) conduit. Methods: From 2013 to 2020, patients with six complicated congenital heart defects who had undergone surgery for the RV-PA conduit were selected. Four conduit materials were used: bovine jugular vein graft (BJV), pulmonary homograft (PHG), aortic homograft (AHG), and handmade tri-leaflet expanded polytetrafluoroethylene (ePTFE) conduit. Telephone interviews were used to confirm the postoperative address of patients. The monthly average PM2.5 concentration was obtained from the ChinaHighPM2.5 dataset using the place of residence of patients. By comparing findings of echocardiography undertaken before patients' return to their residence and during re-examination, we defined an increase in the trans-conduit peak velocity of ≥1.5 m/s as the study endpoint. Results: We enrolled 232 patients. Compared with BJV conduits, homografts (AHGs and PHGs) (0.052 (95% CI = 0.005-0.558), P = 0.015) and ePTFE conduits (0.009 (95% CI = 0.002-0.054), P < 0.001) were protective factors. The cumulative monthly PM2.5 concentration (10 μg/m³) was a risk factor (1.014 (95% CI = 1.001-1.026), P = 0.028). Winter experience was a risk factor (1.971 (95% CI: 1.021-3.804), P = 0.043). In the subgroup analysis, Spearman correlation analysis indicated that BJV conduits (r = 0.680, P < 0.001), PHGs (r = 0.559, P = 0.020), and AHGs (r = 0.745, P = 0.021) had medium-to-high positive correlations between the cumulative PM2.5 concentration and the change in conduit velocity. For ePTFE, the correlation was weak and not significant (r = 0.222, P = 0.073). Conclusions: Postoperative exposure to PM2.5 affects the patency of biologic prosthetic conduits (especially xenografts). The velocity increase in ePTFE conduits is not associated with PM2.5 exposure, making ePTFE a suitable material for patients living in areas with high pollutant concentrations.
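For readers who want to reproduce the subgroup analysis described in PUBMED:37634733, a minimal Python sketch of Spearman's rank correlation between cumulative PM2.5 exposure and the change in trans-conduit peak velocity is shown below; the CSV file and column names are hypothetical stand-ins, not the study's data.

    import pandas as pd
    from scipy.stats import spearmanr

    # Hypothetical per-patient follow-up table: conduit type, cumulative
    # PM2.5 exposure, and change in trans-conduit peak velocity (m/s).
    df = pd.read_csv("conduit_followup.csv")
    for conduit, grp in df.groupby("conduit_type"):  # e.g. BJV, PHG, AHG, ePTFE
        r, p = spearmanr(grp["cum_pm25"], grp["delta_peak_velocity"])
        print(f"{conduit}: r = {r:.3f}, p = {p:.3f}")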
abstract_id: PUBMED:38434570 Right ventricle to pulmonary artery conduit: a comparison of long-term graft-related events between bovine jugular vein conduit, aortic homograft, and porcine-valved conduits. Background: The optimal conduit for right ventricle to pulmonary artery (RV-PA) reconstruction does not exist. Reinterventions are common due to conduit stenosis and endocarditis. Tailoring conduit choice according to patients' characteristics could improve the outcomes. The study aimed to compare graft-related events (infective endocarditis, transcatheter pulmonary valve replacement (PVR), transcatheter conduit dilatation, surgical conduit replacement, and transcatheter pulmonary branch intervention) for RV-PA reconstruction using bovine jugular vein, aortic homograft, and porcine-valved conduits. Methods: In a retrospective cohort study, 155 patients with 193 procedures performed at King Faisal Specialist Hospital and Research Center-Jeddah (KFSHRC JED) for implanting RV-PA conduits from 1999 to 2021 were included. The patients were grouped according to the type of conduit into 3 groups: Group 1 (n=153) received bovine jugular vein (BJV) grafts, Group 2 (n=29) received aortic homografts, and Group 3 (n=11) received porcine-valved conduits. Factors associated with graft-related events were evaluated using stepwise Cox regression analysis. Results: Patients with BJVs were significantly younger than those with porcine-valved conduits (Group 3) (P=0.009). The weight of BJV patients was significantly lower than that of homograft patients (P=0.002) and porcine-valved conduit patients (Group 3) (P<0.001). The conduit size was, as expected, significantly smaller in BJV patients than in patients who received porcine-valved conduits (P<0.001) and patients who received aortic homografts (P<0.001). There was no difference between Groups 2 and 3 (P=0.084). Operative mortality occurred in 13 patients: 12 (7.84%) with a BJV conduit and 1 (9.09%) with a porcine-valved conduit (P=0.351). Male gender [odds ratio (OR): 10.04; 95% CI: 1.28-78.86; P=0.028] and smaller conduit size (OR: 0.78; 95% CI: 0.61-0.99; P=0.048) were associated with increased operative mortality. Freedom from graft-related events at 5 and 10 years was 67% and 52% in BJV patients, 74% and 36% in patients who received aortic homografts, and 53% in patients who received porcine-valved conduits. Factors associated with increased graft-related events were male gender (HR: 1.58; 95% CI: 1.004-2.50, P=0.048) and younger age (HR: 0.995; 95% CI: 0.991-0.999, P=0.041). Conclusions: RV-PA reconstruction was associated with low mortality, unrelated to the conduit type. Reinterventions for graft-related events were common. The durability and graft-related events might be comparable among BJV grafts, aortic homografts, and porcine-valved conduits. Factors associated with increased graft-related events in this study were male gender and younger age. Answer: Yes, right ventricle to pulmonary artery (RV-PA) conduit longevity is related to allograft size. A study that reviewed records of patients undergoing RV-PA cryopreserved valved allograft placement from 1988 to 2006 found that oversizing the allografts (z score ≥ 2.7) in the pulmonary position resulted in decreased longevity in children. The presence of branch pulmonary artery stenosis was also associated with earlier conduit explantation.
The study concluded that cryopreserved valved allograft oversizing in the pulmonary position leads to decreased longevity, and the presence of branch pulmonary artery stenosis shortens the conduit life (PUBMED:17720398).
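The headline analysis behind this answer (PUBMED:17720398) is a Kaplan-Meier comparison of freedom from explantation between conduits with z scores below and at or above 2.7. A minimal sketch with the lifelines package, on hypothetical per-conduit data (the file and column names are assumptions, not the study's dataset):

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Hypothetical columns: years of follow-up, explantation event flag, z score
    df = pd.read_csv("allografts.csv")
    oversized = df["z_score"] >= 2.7

    kmf = KaplanMeierFitter()
    for label, mask in [("z < 2.7", ~oversized), ("z >= 2.7", oversized)]:
        kmf.fit(df.loc[mask, "years"], df.loc[mask, "explanted"], label=label)
        print(label, "median freedom from explantation:", kmf.median_survival_time_)

    res = logrank_test(df.loc[~oversized, "years"], df.loc[oversized, "years"],
                       df.loc[~oversized, "explanted"], df.loc[oversized, "explanted"])
    print("log-rank p =", res.p_value)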
Instruction: Patient loyalty in a mature IDS market: is population health management worth it? Abstracts: abstract_id: PUBMED:19901005 Multidisciplinary management of Hunter syndrome. Hunter syndrome is a rare, X-linked disorder caused by a deficiency of the lysosomal enzyme iduronate-2-sulfatase. In the absence of sufficient enzyme activity, glycosaminoglycans accumulate in the lysosomes of many tissues and organs and contribute to the multisystem, progressive pathologies seen in Hunter syndrome. The nervous, cardiovascular, respiratory, and musculoskeletal systems can be involved in individuals with Hunter syndrome. Although the management of some clinical problems associated with the disease may seem routine, it is typically complex and requires the physician to be aware of the special issues surrounding the patient with Hunter syndrome, and a multidisciplinary approach should be taken. Subspecialties such as otorhinolaryngology, neurosurgery, orthopedics, cardiology, anesthesiology, pulmonology, and neurodevelopment will all have a role in management, as will specialty areas such as physiotherapy, audiology, and others. The important management topics are discussed in this review, and the use of enzyme-replacement therapy with recombinant human iduronate-2-sulfatase as a specific treatment for Hunter syndrome is presented. abstract_id: PUBMED:26633932 Challenges in the Management of Mucopolysaccharidosis Type II (Hunter's Syndrome) in a Developing Country: a Case Report. Background: Mucopolysaccharidosis type II (Hunter's syndrome) is an X-linked lysosomal storage disorder due to deficiency of the lysosomal enzyme iduronate-2-sulfatase, with patients rarely living until adulthood. Failure to identify patients early could contribute to increased morbidity, as identified in this case report. Case Details: An eight-year-old patient with Hunter's syndrome, identified five years after disease onset with severe cardiovascular complications, exemplifies the challenges faced in resource-limited countries in diagnosing and treating rare conditions. Elevated urinary glycosaminoglycan levels or a strong clinical suspicion of Hunter's syndrome, as identified in the index case, is a prerequisite for enzyme activity testing. The urinary mucopolysaccharide (MPS) level was 69.6 mg/mmol (normal range 0.0-11.6 mg/mmol), and the confirming MPS electrophoresis analysis showed elevated heparan sulphate in the urine sample. Enzyme activity testing, with absent or very low iduronate-2-sulfatase activity, is diagnostic. However, the scarce availability and high cost of these tests are another constraint in making a diagnosis. Conclusion: Identification and management of mucopolysaccharidosis type II pose a problem in resource-constrained countries due to late presentation, lack of facilities for diagnosis and treatment, and the cost and expertise required for management. abstract_id: PUBMED:22059643 Mucopolysaccharidosis type II: European recommendations for the diagnosis and multidisciplinary management of a rare disease. Unlabelled: Mucopolysaccharidosis type II (MPS II) is a rare, life-limiting, X-linked recessive disease characterised by deficiency of the lysosomal enzyme iduronate-2-sulfatase. Consequent accumulation of glycosaminoglycans leads to pathological changes in multiple body systems. Age at onset, signs and symptoms, and disease progression are heterogeneous, and patients may present with many different manifestations to a wide range of specialists.
Expertise in diagnosing and managing MPS II varies widely between countries, and substantial delays between disease onset and diagnosis can occur. In recent years, disease-specific treatments such as enzyme replacement therapy and stem cell transplantation have helped to address the underlying enzyme deficiency in patients with MPS II. However, the multisystem nature of this disorder and the irreversibility of some manifestations mean that most patients require substantial medical support from many different specialists, even if they are receiving treatment. This article presents an overview of how to recognise, diagnose, and care for patients with MPS II. Particular focus is given to the multidisciplinary nature of patient management, which requires input from paediatricians, specialist nurses, otorhinolaryngologists, orthopaedic surgeons, ophthalmologists, cardiologists, pneumologists, anaesthesiologists, neurologists, physiotherapists, occupational therapists, speech therapists, psychologists, social workers, homecare companies and patient societies. Take-home Message: Expertise in recognising and treating patients with MPS II varies widely between countries. This article presents pan-European recommendations for the diagnosis and management of this life-limiting disease. abstract_id: PUBMED:28464912 Ten years of the Hunter Outcome Survey (HOS): insights, achievements, and lessons learned from a global patient registry. Mucopolysaccharidosis type II (MPS II; Hunter syndrome; OMIM 309900) is a rare lysosomal storage disease with progressive multisystem manifestations caused by deficient activity of the enzyme iduronate-2-sulfatase. Disease-specific treatment is available in the form of enzyme replacement therapy with intravenous idursulfase (Elaprase®, Shire). Since 2005, the Hunter Outcome Survey (HOS) has collected real-world, long-term data on the safety and effectiveness of this therapy, as well as the natural history of MPS II. Individuals with a confirmed diagnosis of MPS II who are untreated or who are receiving/have received treatment with idursulfase or bone marrow transplant can be enrolled in HOS. A broad range of disease- and treatment-related information is captured in the registry and, over the past decade, data from more than 1000 patients from 124 clinics in 29 countries have been collected. Evidence generated from HOS has helped to improve our understanding of disease progression in both treated and untreated patients and has extended findings from the formal clinical trials of idursulfase. As a long-term, global, observational registry, HOS has encountered various challenges relating to data collection, entry, and analysis. These have resulted in changes to the HOS database platform, and novel approaches to maximize the value of the information collected will also be needed in the future. The continued evolution of the registry should help to ensure that HOS provides further insights into the burden of the disease and patient care and management in the coming years. abstract_id: PUBMED:33335838 A cDNA analysis disclosed the discordance of genotype-phenotype correlation in a patient with attenuated MPS II and a 76-base deletion in the gene for iduronate-2-sulfatase. We previously showed that the genotype-phenotype correlation in MPS II is well-conserved in Japan (Kosuga et al., 2016). Almost all of our patients with attenuated MPS II have missense variants, which are expected to result in residual activity of iduronate-2-sulfatase.
In contrast, our patients with severe MPS II have so-called null-type disease-associated variants, such as nonsense variants, frame-shifts, gene insertions, gene deletions, and rearrangement with the pseudogene (IDS2), none of which are expected to result in residual activity. However, we recently encountered a patient with attenuated MPS II who had a presumed null-type disease-associated variant, a 76-base deletion located in exon 1 that extended into intron 1. To investigate this discordance, we extracted RNA from the leukocytes of the patient and performed reverse transcription polymerase chain reaction. One of the bands of the cDNA analysis was found to include a nucleotide sequence whose transcript was expected to generate an almost full-length mature IDS peptide lacking only part of its signal peptide as well as only one amino acid at the end of the N-terminus. This suggests that an alternative splicing donor site is generated in exon 1 upstream of the deleted region. Based on these observations, we concluded that the phenotype-genotype discordance in this patient with MPS II was due to the decreased amount of IDS protein, caused by the low level of the alternatively spliced mRNA, which lacks part of the region coding for the signal peptide but includes the region coding for almost the full mature IDS protein. The first 25 amino acids at the N-terminus of the IDS protein are a signal peptide. The alternative splice transcript has only 13 (1M-13L) of those 25 amino acids; 14G-25G are missing, suggesting that the exclusively hydrophobic 1M-13L segment of the IDS signal peptide might have a crucial role in its function.
abstract_id: PUBMED:22153556 The first cardiac transplant experience in a patient with mucopolysaccharidosis. Hunter syndrome (MPSII) is a rare X-linked lysosomal storage disorder that can affect multiple systems but primarily affects the heart. We report the case of a previously asymptomatic 23-year-old patient who had an attenuated form of MPSII and presented with refractory heart failure that required a heart transplant. The diagnosis was confirmed by detection of an increase in urinary excretion of glycosaminoglycans, a deficiency in enzymatic activity, and molecular analysis. A myocardial biopsy revealed hypertrophic cardiomyocytes, mild fibrosis, and lysosomal storage in interstitial cells. Molecular analysis identified a novel mutation in the iduronate-2-sulfatase gene. Although the clinical outcome was not favorable, we believe that this approach may be valid in end-stage heart failure. abstract_id: PUBMED:38197333 Impact of COVID-19 pandemic on healthcare delivery for lysosomal storage disorders at a tertiary care public hospital in Mumbai. Introduction: Management of lysosomal storage disorders (LSDs) requires periodic visits for medical surveillance and hospitalizations. Management of LSDs may have been adversely impacted during the COVID-19 pandemic. Objective: To identify the factors impacting health care for patients with LSDs during the COVID-19 pandemic. Methods: An observational study was conducted in Mumbai comparing infusion practices and reasons for missed infusions for 15 months before March 2020 versus two phases during the pandemic (April 2020-March 2021 and April 2021-March 2022) in patients receiving intravenous enzyme replacement therapy (ERT) and on oral substrate reduction therapy (SRT). Results: Fifteen patients with LSDs were enrolled. Before the pandemic, 6/13 (46%) were receiving ERT at the study site, 4/13 (31%) at a local hospital, and 3/13 (23%) at home; two were on SRT. The median distance traveled for receiving ERT was 37 km, and 4.4 infusions/patient were missed. From April 2020 to March 2021, two more patients opted for home ERT infusions. The median distance traveled for receiving ERT was 37 km, and 11.6 infusions/patient were missed. From April 2021 to March 2022, one more patient opted for home ERT infusions. The median distance traveled for receiving ERT was 7 km, and 5.6 infusions/patient were missed. The pandemic also affected SRT compliance adversely. For all patients, the cause of disrupted treatment was travel curbs (69%) and fear of getting COVID-19 infection (38%). Conclusions: Treatment of LSDs was disrupted during the pandemic, with an increase in missed ERT infusions and SRT doses. abstract_id: PUBMED:18331837 Molecular investigations of a novel iduronate-2-sulfatase mutant in a Chinese patient. Background: Molecular investigations of iduronate-2-sulfatase (IDS) mutants for the X-linked lysosomal storage disease mucopolysaccharidosis type II (MPS II, Hunter disease), commonly depends on transient expression studies to verify a single nucleotide change to be pathogenic. In 2 severely affected patients, IDS missense mutations, c.1016T&gt;C (novel) and c.1016T&gt;G (known) were identified predicting the substitution of an ambivalent cyclic proline and a hydrophilic arginine respectively for the hydrophobic leucine at residue 339. We hypothesized that residue Leu339 may be functionally critical. Methods: We performed a study for the 2 mutations by in-situ mutagenesis, in vitro expression, and functional analysis. 
Results: Transient expression revealed that both missense variants had stable mRNA, but their residual enzyme activities remained <2.5% of the normal level. The effect of the missense mutations on protein expression was detected by Western blot analysis. Both missense mutants synthesized the precursor form but had a reduced mature form of IDS. Conclusion: The novel mutation p.L339P is a disease-causing mutation affecting maturation of the protein. abstract_id: PUBMED:20040319 Management of infusion-related reactions to enzyme replacement therapy in a cohort of patients with mucopolysaccharidosis disorders. Objective: Enzyme replacement therapy (ERT) is currently available for the treatment of mucopolysaccharidosis (MPS) Type I, MPS II and MPS VI. Hypersensitivity reactions have been reported in some patients receiving ERT, but these can usually be easily managed. Methods: In this retrospective study, we evaluated the manifestations and management of hypersensitivity reactions in patients at a single center who were receiving ERT for either MPS I, MPS II or MPS VI between 2002 and 2008. Results: Hypersensitivity reactions were observed in 28 (36%) out of 77 patients, and were most common in children with severe disease. When an infusion-related reaction occurred, ERT was immediately suspended until the patient's symptoms had resolved. Antihistamines and antipyretics were administered to treat the acute symptoms of hypersensitivity reactions. In some patients, low-dose corticosteroids were administered to attenuate late-phase or biphasic reactions. There were no instances in which resuscitation was necessary. When ERT was restarted, patients were given premedication in the form of antihistamines and antipyretics, and ERT was administered at a slower rate. In most cases, this approach overcame the hypersensitivity. After gradually increasing the infusion rate, patients were generally able to resume a normal infusion schedule without premedication after a period ranging from 8 weeks to 3.5 years. Conclusion: Close monitoring of patients receiving ERT is essential. Use of an adapted ERT infusion regimen with premedication resulted in improvement of signs and symptoms of hypersensitivity in most of the patients who experienced infusion-related reactions. Answer: The provided abstracts do not directly address the question of patient loyalty in a mature iduronate-2-sulfatase (IDS) market or the value of population health management in this context. However, they do provide insights into the management of Hunter syndrome (Mucopolysaccharidosis type II, MPS II), which is treated with enzyme replacement therapy (ERT) using recombinant human IDS (PUBMED:19901005, PUBMED:26633932, PUBMED:22059643, PUBMED:28464912, PUBMED:33335838, PUBMED:22929184, PUBMED:38197333, PUBMED:18331837, PUBMED:20040319). Hunter syndrome is a rare, progressive disease that requires a multidisciplinary approach to management, involving various specialists and therapies (PUBMED:19901005, PUBMED:22059643). The complexity of the disease and the need for ongoing treatment suggest that patient loyalty could be influenced by the quality and coordination of care provided. Population health management, which focuses on improving health outcomes for a group of individuals through coordinated care, could potentially be beneficial in managing a chronic condition like Hunter syndrome by ensuring that patients receive timely and appropriate treatments, thus potentially improving patient loyalty.
The Hunter Outcome Survey (HOS) is an example of a global patient registry that collects data on the safety and effectiveness of ERT, as well as the natural history of MPS II (PUBMED:28464912). Such registries can be seen as a component of population health management, as they help in understanding disease progression and informing patient care. Challenges in managing MPS II, particularly in resource-limited settings, highlight the importance of early diagnosis, availability of treatment, and the need for expertise in managing the disease (PUBMED:26633932). These challenges could impact patient loyalty if not adequately addressed. The COVID-19 pandemic has also shown the importance of flexible healthcare delivery models, such as home infusions, to maintain continuity of care for patients with chronic conditions like MPS II (PUBMED:38197333). Adapting care delivery to meet patient needs could be an important aspect of population health management and may influence patient loyalty. In summary, while the abstracts do not directly answer the question, they suggest that population health management strategies that improve the coordination and quality of care for patients with Hunter syndrome could potentially enhance patient loyalty in a mature IDS market.
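The AHUM abstract above (PUBMED:22929184) notes that its utilities enable quality-adjusted life expectancy calculations, which is also the currency in which population health management programmes are usually valued. A minimal, purely illustrative QALY computation follows; all numbers are invented, not AHUM results. QALYs are the time integral of utility, here approximated as a sum over intervals.

    # QALYs = sum over intervals of (years in state x utility of state).
    # Interval durations and utilities below are hypothetical.
    intervals = [
        (1.0, 0.78),  # first year on therapy
        (1.0, 0.69),  # second year, modest decline
    ]
    qalys = sum(years * utility for years, utility in intervals)
    print(f"{qalys:.2f} QALYs over {sum(y for y, _ in intervals):.0f} years")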
Instruction: Early onset of lipid peroxidation after human traumatic brain injury: a fatal limitation for the free radical scavenger pharmacological therapy? Abstracts: abstract_id: PUBMED:11523701 Early onset of lipid peroxidation after human traumatic brain injury: a fatal limitation for the free radical scavenger pharmacological therapy? Background: On the basis of the contradiction between data on experimental head trauma showing oxidative stress-mediated cerebral tissue damage and the failure of the majority of clinical trials using free radical scavenger drugs, we monitored the time-course changes of malondialdehyde (MDA, an index of cell lipid peroxidation), ascorbate, and dephosphorylated ATP catabolites in cerebrospinal fluid (CSF) of traumatic brain-injured patients. Methods: CSF samples were obtained from 20 consecutive patients suffering from severe brain injury. All patients were comatose, with a Glasgow Coma Scale on admission of 6 ± 1. The first CSF sample for each patient was collected within a mean of 2.95 hours of trauma (SD=1.98), after the insertion of a ventriculostomy catheter for the continuous monitoring of intracranial pressure. During the next 48 hours, CSF was withdrawn from each patient once every 6 hours. All samples were analyzed by an ion-pairing high-performance liquid chromatographic method for the simultaneous determination of MDA, ascorbic acid, hypoxanthine, xanthine, uric acid, inosine, and adenosine. Results: In comparison with values recorded in 10 herniated-lumbar-disk, noncerebral control patients, data showed that all CSF samples of brain-injured patients had high values (0.226 micromol/L; SD=0.196) of MDA (undetectable in samples of control patients) and decreased ascorbate levels (96.25 micromol/L; SD=31.74) already at the first withdrawal, at the time of hospital admission. MDA was almost constant in the next two withdrawals and tended to decrease thereafter, although 48 hours after hospital admission, a mean level of 0.072 micromol/L CSF (SD=0.026) was still recorded. The ascorbate level was normalized 42 hours after hospital admission. Changes in the CSF values of ATP degradation products (oxypurines and nucleosides) suggested a dramatic alteration of neuronal energy metabolism after traumatic brain injury. Conclusions: On the whole, these data demonstrate the early onset of oxygen radical-mediated oxidative stress, proposing a valid explanation for the failure of clinical trials based on the administration of oxygen free radical scavenger drugs and suggesting a possible rationale for testing the efficacy of lipid peroxidation "chain breakers" in future clinical trials. abstract_id: PUBMED:11186247 Anti-hypercholesterolaemia, antioxidant activity and free radical scavenger effects of traditional Chinese medicine prescriptions used for stroke. The generation of oxygen radicals and lipid peroxidation may be factors in the cerebral damage secondary to ischaemia in cerebrovascular disease, such as stroke. Five traditional Chinese medicinal prescriptions were investigated for their antioxidant activity: Shiee Fuu Jwu Iu Tang (TCMP1), Oh Yaw Shuen Chin Saan (TCMP2), Buu Yang Hwan Wuu Tang (TCMP3), Sheau Shiuh Ming Tang (TCMP4), and Chir Hwu Jia Long Guu Muu Lih Tang (TCMP5).
Anti-lipid peroxidation, anti-superoxide formation and free radical scavenger activity were determined by the FeCl2-ascorbic acid-induced lipid peroxidation effects on lipids in vitro, xanthine oxidase inhibition, cytochrome C system and an electron spin resonance spectrometer, respectively. The results showed that TCMP5 had greater anti-lipid peroxidation and anti-superoxide formation activity than the other prescriptions. TCMP4 had the greatest free radical scavenging effect, TCMP5 showed the greatest superoxide radical scavenger activity and TCMP3 showed the greatest hydroxyl radical scavenger activity. Tests were also performed to evaluate the effects of the five prescriptions on blood lipid in vivo. The tests showed that the prescriptions decreased the level of total cholesterol and LDL-cholesterol in serum in high cholesterol-fed rats. From these results, it seems probable that these prescriptions may be effective in the prevention and therapy of stroke and ischaemia. abstract_id: PUBMED:9312934 Chemiluminescent study of free radical lipid peroxidation in the rabbit brain after cranio-cerebral trauma. The method of chemiluminescence was applied for the determination of free radical content in rabbit brain after the experimental craniocerebral trauma. The level of malondialdehyde was measured spectrophotometrically. The quantity of free radical products in hemispheres, brain stem and cerebellum of rabbits with traumatic injury was shown to be significantly increased and correlated with MDA level. Data obtained indicate that the chemiluminescent technique can be useful for the investigation of lipid peroxidation in brain pathology. abstract_id: PUBMED:36381468 Effect of MCI-186 on Lipid Peroxidation in Experimental Traumatic Brain Damage in Rats. Objective: Brain damage occurs in many clinical conditions, including trauma, ischemia, and hypertension. Reactive oxygen products and lipid peroxidation are responsible for the brain damage that occurs in these clinical conditions. We investigated whether MCI-186 (3-methyl-1-phenyl-2-pyrazoline-5-one), a free radical-binding agent acting on lipid peroxidation, affects malondialdehyde (MDA), glutathione (GSH), and glutathione peroxidase (GPx) levels in traumatic brain damage. Methods: The traumatic brain damage model, modified by Feeney, was performed on 28 male Wistar rats separated into 4 groups. The MDA, GSH, and GPx levels in the brain tissues of each group were studied. Results: MDA levels in the traumatized group were significantly higher than those in the sham and MCI-186 groups (p<0.05), while GSH levels were significantly higher in the sham group than in the trauma and solvent groups (p<0.05). No significant difference was observed between the sham and MCI-186 groups (p>0.05). Although the average GPx level was higher in the sham and MCI-186 groups, no significant difference was found between groups. Conclusion: Reactive oxidation products significantly decreased in the MCI-186 group. Thus, MCI-186 can be used as a free radical-binding agent in traumatic brain damage. abstract_id: PUBMED:1588600 Involvement of lipid peroxidation in CNS injury. The generation of oxygen radicals and the process of lipid peroxidation have become a focus of attention for investigators in the fields of central nervous system (CNS) injury and stroke (e.g., ischemia).
While absolute proof for their involvement in the pathophysiology of traumatic and ischemic damage to the CNS remains to be established, numerous recent studies have provided considerable support for the occurrence of free radical and lipid peroxidation reactions in the injured or ischemic CNS. Furthermore, the use of antioxidants and free radical scavengers in the treatment of experimental and clinical CNS trauma and ischemia has provided convincing support for the involvement of oxygen radicals and lipid peroxidation in these conditions. In this report we will review some of the history behind the hypothesis for an involvement of oxygen radical-mediated lipid peroxidation in the pathophysiology of CNS injury and look at some of the more recent work conducted in this area. abstract_id: PUBMED:8380240 Free radicals in CNS injury. This chapter has reviewed the current state of knowledge regarding the occurrence and possible role of oxygen radical generation and lipid peroxidation in experimental models of acute CNS injury. Although much work remains, four criteria that are logically required to establish the pathophysiological importance of oxygen radical reactions have been met, at least in part. First of all, oxygen radical generation and lipid peroxidation appear to be early biochemical events subsequent to CNS trauma. Second, a growing body of direct or circumstantial evidence suggests that oxygen radical formation and lipid peroxidation are linked to pathophysiological processes such as hypoperfusion, edema, axonal conduction failure, failure of energy metabolism, and anterograde (wallerian) degeneration. Third, there is a striking similarity between the pathology of blunt mechanical injury to CNS tissue and that produced by chemical induction of peroxidative injury. Fourth, and most convincing, is the repeated observation that compounds that inhibit lipid peroxidation or scavenge oxygen radicals can block posttraumatic pathophysiology and promote functional recovery and survival in experimental studies. Nevertheless, the significance of oxygen radicals and lipid peroxidation ultimately depends on whether it can be demonstrated that early application of effective anti-free radical or antiperoxidative agents can promote survival and neurological recovery after CNS injury and stroke in humans. The results of the NASCIS II clinical trial, which have shown that an antioxidant dosing regimen with methylprednisolone begun within 8 hr after spinal cord injury can significantly enhance chronic neurological recovery, strongly support the significance of lipid peroxidation as a posttraumatic degenerative mechanism. However, ongoing Phase III trials with the more selective and effective antioxidant U74006F (tirilazad mesylate) will give a more clear-cut answer as to the therapeutic importance of inhibition of posttraumatic free radical reactions in the injured CNS.
The identification of oxygen radicals and lipid peroxidation as important pathophysiological mediators of trauma or stroke-induced neural degeneration, rather than simply epiphenomena, depends upon the successful demonstration of their association with actual secondary physiological and structural degenerative events. Moreover, their significance in the pathophysiology of CNS trauma or stroke must be supported by experimental observations that pharmacological antagonism of either oxygen radical generation and/or lipid peroxidation results in a therapeutic effect (i.e., interruption of secondary nervous tissue degeneration). Indeed, recent investigations have provided compelling evidence for the view that oxygen radical-mediated processes play a key pathophysiological role during the acute phase of CNS trauma or stroke. Furthermore, their pharmacological manipulation may serve as an avenue for therapeutic attempts aimed at limiting neural degeneration and improving neurological recovery. abstract_id: PUBMED:30086292 Newer pharmacological approaches for antioxidant neuroprotection in traumatic brain injury. Reactive oxygen species-induced oxidative damage remains an extensively validated secondary injury mechanism in traumatic brain injury (TBI) as demonstrated by the efficacy of various pharmacological antioxidant agents in decreasing post-traumatic free radical-induced lipid peroxidation (LP) and protein oxidative damage in preclinical TBI models. Based upon strong preclinical efficacy results, two antioxidant agents, the superoxide radical scavenger polyethylene glycol-conjugated superoxide dismutase (PEG-SOD) and the 21-aminosteroid LP inhibitor tirilazad, were evaluated in large phase III trials in moderately- and severely-injured TBI patients. Both failed to improve 6-month survival and neurological recovery. However, in the case of tirilazad, a post hoc analysis revealed that the drug significantly improved survival of male TBI patients who exhibited traumatic subarachnoid hemorrhage (tSAH) that occurs in half of severe TBIs. In addition to reviewing the clinical trial results with PEG-SOD and tirilazad, newer antioxidant approaches which appear to improve neuroprotective efficacy and provide a longer therapeutic window in rodent TBI models will be presented. The first approach involves pharmacological enhancement of the multi-mechanistic Nrf2-antioxidant response element (ARE) pathway. The second involves scavenging of the neurotoxic LP-derived carbonyl compounds 4-hydroxynonenal (4-HNE) and acrolein which are highly damaging to neural protein and stimulate additional free radical generation. A third approach combines mechanistically complementary antioxidants to interrupt post-TBI oxidative neurodegeneration at multiple points in the secondary injury cascade. These newer strategies appear to decrease variability in the neuroprotective effect which should improve the feasibility of achieving successful translation of antioxidant therapy to TBI patients. abstract_id: PUBMED:12850585 Stable nitroxide Tempol ameliorates brain injury by inhibiting lipid peroxidation in a rat model of transient focal cerebral ischemia. Oxygen free radicals have been implicated in the pathogenesis of cerebral ischemia and reperfusion injury. 4-Hydroxy-2,2,6,6-tetramethylpiperidene-1-oxyl (Tempol) has been reported as a stable nitroxide and a membrane-permeable free radical scavenger.
This study was performed to investigate the mechanism of Tempol in attenuating ischemia-reperfusion injury in a rat model of transient focal cerebral ischemia. We measured the cerebral 2,3-dihydroxybenzoic acid (DHBA) level as the amount of hydroxyl radical production using a microdialysis technique with salicylic acid trapping during ischemia and reperfusion. The concentration of cerebral thiobarbituric acid reactive substances (TBARS), representing the extent of lipid peroxidation by free radicals, and the area of cerebral infarction were also measured. The level of cerebral 2,3-DHBA was increased during ischemia and reperfusion, especially during the early reperfusion stage at the periphery of the infarct area (nearly 500-fold). Intravenous administration of Tempol at the time of reperfusion reduced 2,3-DHBA production (vehicle group: 472.2 ± 196.2; Tempol group: 238.3 ± 77.2) and the cerebral TBARS level (vehicle group: 541.7 ± 84.7; Tempol group: 339.0 ± 147.2 nmol/g), and decreased the size of the cerebral infarction (vehicle group: 202.2 ± 98.4; Tempol group: 98.5 ± 13.7 mm³). In contrast, Tempol administered 15 min prior to reperfusion reduced neither the TBARS level nor the size of the infarction. These results indicate that Tempol administration at the time of reperfusion reduced lipid peroxidation by scavenging free radicals, resulting in a reduction of the infarct size. abstract_id: PUBMED:10702729 Melatonin as a free radical scavenger in experimental head trauma. Head trauma causes two kinds of injury in the neural tissue. One is the primary injury which occurs at the time of impact. The other one is a secondary injury and is a progressive process. Free radicals are produced during oxidative reactions formed after trauma. They have been thought to be responsible for the mechanism of the secondary injury. Some studies have been conducted to demonstrate the role of free oxygen radicals in neuronal injury. The alterations in the free radical level during the early posttraumatic period and the effect of a free radical scavenger on these alterations have not been studied as a whole. We aimed to demonstrate the free oxygen radical level changes in the early posttraumatic period and the effect of melatonin, which is a potent free radical scavenger, on the early posttraumatic free radical level. A two-staged experimental head trauma study was designed. In stage one, posttraumatic free radical level changes were determined. In the second stage, the effect of melatonin on the free radical level changes in the posttraumatic period was studied. Two main groups of rats each divided into four subgroups were studied. Rats in one of the main groups underwent severe head trauma, and malondialdehyde (MDA) levels were measured in the contused cerebral tissue at different time points. Rats in the other main group also underwent the same type of trauma, and melatonin was injected intraperitoneally at different time points after trauma. The MDA level alteration in the tissue was determined after the injection of melatonin. The MDA level increased rapidly in the early posttraumatic period. But in time, it decreased in the groups with only trauma. In the melatonin-treated group, the MDA level decreased after the injection of melatonin, when injected in the early posttraumatic period, compared to the control and trauma groups. However, melatonin increased MDA to a higher level than in the groups with only trauma and the control group when injected later than 2 h after trauma.
The MDA level increases in the very early posttraumatic period of cerebral trauma and decreases in time. Melatonin, which is the most potent endogenous free radical scavenger, when injected intraperitoneally into cerebrally traumatized rats in the very early posttraumatic period, causes a significant decrease in the MDA level. However, when injected more than 2 h after trauma, melatonin increases the MDA level in experimental cerebral trauma in rats. Answer: The early onset of lipid peroxidation following human traumatic brain injury (TBI) has been identified as a potential limitation for the effectiveness of pharmacological therapy using free radical scavengers. Studies have shown that oxidative stress, indicated by the presence of malondialdehyde (MDA), a marker of lipid peroxidation, occurs very early after TBI. For instance, patients with severe brain injury exhibited high levels of MDA and decreased ascorbate levels in cerebrospinal fluid (CSF) samples collected within a mean time of 2.95 hours from trauma, suggesting an immediate onset of oxidative stress (PUBMED:11523701). This early onset of lipid peroxidation could explain the failure of clinical trials involving free radical scavenger drugs, as the therapeutic window for these drugs may be missed by the time treatment is initiated. Moreover, the involvement of lipid peroxidation in CNS injury has been well-documented, with oxygen radicals and lipid peroxidation being early biochemical events subsequent to CNS trauma (PUBMED:8380240, PUBMED:2663663). The pathophysiological processes linked to oxygen radical formation and lipid peroxidation include hypoperfusion, edema, axonal conduction failure, and failure of energy metabolism, among others (PUBMED:8380240). The similarity between the pathology of blunt mechanical injury to CNS tissue and that produced by chemical induction of peroxidative injury, along with the protective effects of antioxidants and free radical scavengers in experimental studies, further supports the involvement of these processes in CNS injury (PUBMED:1588600). Despite this, the clinical application of free radical scavengers has been challenging. For example, the superoxide radical scavenger polyethylene glycol-conjugated superoxide dismutase (PEG-SOD) and the lipid peroxidation inhibitor tirilazad failed to improve survival and neurological recovery in phase III trials for TBI, although a post hoc analysis of tirilazad showed some benefit in male patients with traumatic subarachnoid hemorrhage (PUBMED:30086292). In conclusion, the early onset of lipid peroxidation after TBI presents a significant challenge for the effectiveness of free radical scavenger pharmacological therapy, as the timing of intervention is critical. This necessitates further research into the timing and mechanisms of oxidative stress post-TBI and the development of therapeutic strategies that can effectively target and mitigate these early pathological events.
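The group contrasts reported throughout these abstracts (e.g., higher mean MDA in traumatized animals or patients than in controls, with a p-value) come down to a two-sample comparison of group means. A minimal sketch of such a comparison using Welch's t-test; all numbers below are hypothetical illustrations, not data from any of the cited studies:

```python
# Comparing mean MDA levels between a trauma group and a control group
# with Welch's t-test (unequal variances), as in the p < 0.05 statements above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical CSF MDA concentrations (micromol/L); illustrative only.
mda_trauma = rng.normal(loc=0.226, scale=0.196, size=20).clip(min=0)
mda_control = rng.normal(loc=0.02, scale=0.01, size=10).clip(min=0)

# Two-sided Welch's t-test.
t_stat, p_value = stats.ttest_ind(mda_trauma, mda_control, equal_var=False)
print(f"mean trauma = {mda_trauma.mean():.3f}, mean control = {mda_control.mean():.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Welch's variant is used here rather than Student's t-test because the group variances in these studies are clearly unequal; the original papers do not always state which test was applied.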
Instruction: "Hanging donors": are we still skeptical about the lungs? Abstracts: abstract_id: PUBMED:25769558 "Hanging donors": are we still skeptical about the lungs? Background: Suicidal hanging may cause compression of the neck blood vessels and the airway leading, to hypoxic brain damage caused by cerebral ischemia and respiratory distress. Hanging engenders global tissue hypoxia, particularly affecting the lungs as the result of pulmonary edema and barotrauma. There is scant evidence about outcomes of transplantation with the use of lungs from "hanging donors." This study evaluates the outcomes of lung transplantations (LTx) that use organs from this group. Methods: We retrospectively analyzed lung "organ offers" and LTx at our center between January 2007 and November 2013. The outcomes of LTx with lungs from donors with hanging as the cause of death ("hanging group") were compared with those with donors having other causes of death (control group), with the use of an unadjusted model as well as propensity score matching. Results: LTx (n = 302) were performed during this period and were grouped on the basis of the cause of death in donors: the hanging group (n = 8) and the control group (n = 294). No statistically significant difference was found in the donor characteristics except for the incidence of cardiac arrest, which was significantly higher in hanging donors. Preoperative characteristics in recipients in both groups were comparable. Intra-operative and post-LTx variables including PaO2/FiO2 ratios, duration of mechanical ventilation, and intensive care unit and hospital stays were comparable. One-year and 3-year survival rates were also comparable in both groups. Two recipients in the hanging group required extracorporeal life support after LTx and could not survive. Conclusions: Suicidal hanging is a relatively rare cause of death for potential organ donors. Because it does not appear to negatively affect outcomes after LTx, it should not be considered per se a contraindication for lung donation. A word of caution is necessary until we gather larger experience with lungs from hanging donors. abstract_id: PUBMED:31758578 Hanging donor lungs give good short-, mid- and long-term results in lung transplantation. Background: Hanging donors are considered as marginal donors and frequently unsuitable for lung transplantation. However, there is no evidence of higher lung transplantation (LTx) morbidity-mortality with lungs providing by hanging donor. Methods: Between January 2010 and July 2015, we performed a retrospective study at Foch hospital. We aimed to assess whether hanging donor grafts are suitable for lung transplantation. Results: A total of 299 LTx were performed. Subjects were allocated to a hanging group (HG) (n = 20) and a control group (CG) (n = 279). Donor and recipient characteristics did not differ. Primary graft dysfunction (PGD) at 72 hours was comparable in both groups (P = .75). The median duration of postoperative mechanical ventilation (1 [range, 0-84] vs 1 [range, 0-410] day, P = .35), the hospital length of stay (31 days [20-84] vs 32 days [12-435], P = .36) did not differ between the two groups. No statistically significant difference was found in 1-year and 5-year survival between the HG (83% and 78%) and the CG (86% and 75%), P = .85. Conclusion: We believe that hanging donors should be considered as conventional donors with particular caution in the final evaluation of the graft and in perioperative management. 
abstract_id: PUBMED:31511853 In lung transplantation, are pulmonary grafts from donors deceased from hanging as suitable as grafts from donors deceased from other causes? A best evidence topic was constructed according to a structured protocol. The question addressed was whether pulmonary grafts from donors deceased from hanging offer the same benefit as grafts from donors deceased from other causes in lung transplantation. Of the 17 papers found, 4 provided the best evidence to answer the question. The authors, date, journal, country of publication, study type, group studied, relevant outcomes and results of these papers are tabulated. One study reported a large cohort of donors and analysed the outcomes by cause of death, reporting no differences in survival. The remaining 3 papers analysed observational studies on the outcomes of lung transplantation using pulmonary grafts from donors deceased from hanging, compared with donors deceased from other causes. No differences in the rates of post-transplantation pulmonary graft dysfunction and long-term overall survival were reported. Although the cohort of donors deceased from hanging is small, we conclude that these donors are an important contribution to the donor pool. Ex vivo lung perfusion may have a role in assessing graft viability in this scenario. abstract_id: PUBMED:29620623 Suicidal hanging donors for lung transplantation: Is this chapter still closed? Midterm experience from a single center in the United Kingdom. In the context of limited donor pool in cardiothoracic transplantation, utilization of organs from high risk donors, such as suicidal hanging donors, while ensuring safety, is under consideration. We sought to evaluate the outcomes of lung transplantations (LTx) that use organs from this group. Between January 2011 and December 2015, 265 LTx were performed at our center. Twenty-two recipients received lungs from donors after suicidal hanging (group 1). The remaining 243 transplantations were used as a control (group 2). Analysis of recipient and donor characteristics as well as outcomes was performed. No statistically significant difference was found in the donor characteristics between analyzed groups, except for higher incidence of cardiac arrest, younger age and smoking history of hanging donors (P < .001, P = .022 and P = .0042, respectively). Recipient preoperative and perioperative characteristics were comparable. Postoperatively in group 1 there was a higher incidence of extracorporeal life support (27.3 vs 9.1%, P = .019). There were no significant differences in chronic lung allograft dysfunction-free survival between groups 1 and 2: 92.3 vs 94% at 1 year and 65.9 vs 75.5% at 3 years (P = .99). The estimated cumulative survival rate was also similar between groups: 68.2 vs 83.2% at 1 year and 68.2% versus 72% at 3 years (P = .3758). Hanging as a donor cause of death is not associated with poor mid-term survival or chronic lung allograft dysfunction following transplantation. These results encourage assessment of lungs from hanging donors, and their consideration for transplantation. abstract_id: PUBMED:35539362 Moral underdetermination and a new skeptical challenge. In this paper, I introduce a new challenge to moral realism: the skeptical argument from moral underdetermination. The challenge arises as a consequence of two recent projects in normative ethics. Both Parfit (On what matters, vol 1.
Oxford University Press, Oxford, 2011a) and a group called consequentializers have independently claimed that the main traditions of normative theories can agree on the set of correct particular deontic verdicts. Nonetheless, as Dietrich and List (Philos Rev 126(4):421-479, 2017) and I (Baumann in J Ethics Soc Philos 13(3):191-221, 2018; Australas J Philos 97(3):511-527, 2019; Ethical Theory Moral Pract 24(4):999-1018, 2021a) have argued, the traditions still disagree about why these are the correct verdicts. This means that we can understand the situation in terms of an idea from the philosophy of science, the underdetermination of theory by the evidence. Yet underdetermination figures in one of the most important skeptical challenges to scientific realism. I show how an analogous skeptical argument can be construed for the moral realm. I propose a standard form for that argument. I then defend it against three possible objections, arguing that it is at least as plausible as, if not more plausible than, its counterpart in the philosophy of science. abstract_id: PUBMED:37125386 Lungs From Donors ≥70 Years of Age for Transplantation-Do Long-Term Outcomes Justify Their Use? Donor shortages have led transplant centers to extend their criteria for lung donors. Accepting lung donors ≥70 years of age has previously shown good short-term outcomes; however, no mid- and long-term outcome data on these extended criteria donors have been published to date. In this study, all patients who underwent lung transplantation between 06/2010 and 12/2019 were included in the analysis, and the outcomes were compared between patients receiving organs from donors <70 years of age and patients transplanted with lungs from donors ≥70 years of age. Among the 1,168 lung-transplanted patients, 62 patients received lungs from donors ≥70 years of age. The recipient age of those receiving older organs was significantly higher, and they were more likely to suffer from obstructive lung disease. Older donors were exposed to significantly shorter periods of mechanical ventilation prior to donation, had higher Horowitz indices, and were less likely to have smoked. The postoperative time on mechanical ventilation, time on ICU, and total hospital stay were comparable. The overall survival as well as CLAD-free survival showed no differences between both groups in the follow-up period. Utilization of lungs from donors ≥70 years of age leads to excellent mid- and long-term results that are similar to organs from younger donors when the organs from older donors are carefully preselected. abstract_id: PUBMED:25038007 Ex vivo lung perfusion allows successful transplantation of donor lungs from hanging victims. Background: Donor lungs acquired from victims of asphyxiation by hanging are not routinely used for lung transplantation because of the associated lung injury. Ex vivo lung perfusion (EVLP) is a technique to evaluate marginal donor lungs before transplantation. We report here our experience with the use of EVLP in donor lungs procured from victims of asphyxia by hanging. Methods: Lungs from 5 donors who became brain dead secondary to hanging were evaluated by EVLP. Donor organs were perfused according to trial protocol. Donor lungs were accepted for transplantation if they maintained a PaO2 greater than or equal to 350 mm Hg, had a clear roentgenogram, and had no significant worsening of physiologic metrics. Results: Perfused organs included single and double lung blocs, and all were perfused without technical incident.
Three of the 5 donor organs evaluated met criteria for transplantation after 3 hours of EVLP and were transplanted. Donor organs rejected for transplantation showed either signs of worsening PaO2 or deterioration of physiologic metrics. There were no intraoperative complications in the patients who underwent transplantation, and all were alive at 30 days. Conclusions: We report here the successful use of EVLP to assess donor lungs acquired from victims of asphyxiation by hanging. The use of EVLP in this particular group of donors has the potential to expand the available donor pool. We demonstrate that EVLP is a viable option for evaluating the function of lung allografts before transplantation and would recommend that all donor lungs obtained from hanging victims undergo EVLP to assess their suitability for transplantation. abstract_id: PUBMED:25954634 Course of Near-hanging Victims Succumbed to Death: A Seven Year Study. Introduction: Near hanging refers to victims who survive a hanging injury following attempted hanging, long enough to reach hospital. Delayed deaths in near hanging patients are mostly due to complications of hanging. The purpose of this study was to evaluate the demographics, mortality patterns and cause of delayed deaths in near hanging victims. Materials And Methods: In this study autopsy files over a seven year period from 2007 to 2013 were reviewed, and data of near hanging deaths (attempted hanging cases who succumbed to death and were subjected to medicolegal autopsy) were extracted. Records of 14,000 autopsies were reviewed, and 10 decedents who died delayed deaths after a near-hanging episode were identified. In each case, the patients' details, including gender, age, type of suspension, type of ligature material used for hanging and subsequent hanging mark produced were reviewed using autopsy reports and photographs taken during autopsy. Results: Demographic and pathological aspects of each case are discussed to throw light on autopsy findings in victims who died following near hanging. Complete suspension was present in 3 cases, while partial suspension was present in 7 cases. Survival times in delayed deaths after a near-hanging episode ranged from 9 hours to 72 days. Hypoxic encephalopathy was the most common cause of death, followed by pneumonia. Conclusion: Most of the near hanging patients did succumb to hypoxic encephalopathy; however, consolidation of lungs (pneumonia) was the next most common cause of death, reflecting the need for aggressive oxygen therapy; selective resuscitation should be performed in all such cases.
This critique highlights the over-reliance upon categorization and a lack of acknowledgment of appraisal processes, specifically goal-directed appraisal, in examining how individual experiences of music emerge in different contexts. Based on this critique of current music-emotion models, we present our skeptically informed CODA (Constructivistly-Organised Dimensional-Appraisal) model. This model addresses skeptical limitations of existing theories, reinstates the role of goal-directed appraisal as central to what makes music relevant and meaningful to an individual in different contexts and brings together different theoretical frameworks into a single model. From the development of the CODA model, several hypotheses are proposed and applied to musical contexts. These hypotheses address theoretical issues such as acknowledging individual and contextual differences in emotional intensity and valence, as well as differentiating between induced and perceived emotions, and utilitarian and aesthetic emotions. We conclude with a section of recommendations for future research. Altogether, this theoretical critique and proposed model point toward a positive future direction for music-emotion science, one where researchers can take forward testable predictions about what makes music relevant and meaningful to an individual. abstract_id: PUBMED:24933455 Liver transplantation with grafts from donors who die from suicide by hanging: a matched cohort study. Introduction: Hanging is a common method of suicide globally, and the incidence is increasing at an alarming rate. The purpose of this study was to analyze our experience with the use of liver grafts from donors whose cause of death is suicidal hanging and to evaluate the outcome of the graft recipients. Patients And Methods: Twenty-four consecutive patients were transplanted with grafts obtained from suicidal hanging donors. These patients (group A) were case-matched to 24 patients who underwent liver transplantation with a liver graft from a brain-dead donor whose cause of death was not hanging or strangulation (group B). The analysis was focused on postoperative complications, graft, and recipient outcomes. Results: Donors in group A showed a significantly higher mean [±2 standard deviation] serum aspartate aminotransferase and alanine aminotransferase compared to group B (aspartate aminotransferase, 168[166] vs. 57[67] IU/L; P=0.006; alanine aminotransferase, 151[197] vs. 58[103] IU/L; P=0.049); however, there was no difference in graft failure rates between the two groups [8.3% (n=2) in each group]. Overall 30-day postoperative morbidity, 1-year and 5-year survivals were comparable in both groups (P=0.96, P=0.35, P=0.69, respectively). Conclusion: The use of grafts from suicidal hanging donors (without hemodynamic instability and with a downward trend in the donor transaminases) is not associated with higher post-liver transplantation complication rates and does not negatively influence the recipient's outcome. Answer: The skepticism around the use of lungs from "hanging donors" for lung transplantation (LTx) has been addressed in several studies, and the evidence suggests that outcomes from these donors are comparable to those from donors with other causes of death. A study analyzing lung transplantations at a center between January 2007 and November 2013 found no statistically significant difference in outcomes, including one-year and three-year survival rates, between recipients of lungs from hanging donors and those from a control group.
This led to the conclusion that suicidal hanging should not be considered a contraindication for lung donation, although a note of caution was advised until more experience is gathered (PUBMED:25769558). Another retrospective study at Foch hospital between January 2010 and July 2015 compared outcomes between a hanging group (HG) and a control group (CG) and found no significant difference in primary graft dysfunction, duration of postoperative mechanical ventilation, hospital length of stay, and one-year and five-year survival rates. This suggests that hanging donors should be considered as conventional donors with particular caution in the final evaluation of the graft and perioperative management (PUBMED:31758578). Further evidence from a single center in the United Kingdom, which performed 265 LTx between January 2011 and December 2015, showed that hanging as a donor cause of death is not associated with poor mid-term survival or chronic lung allograft dysfunction following transplantation. This encourages the assessment of lungs from hanging donors for transplantation (PUBMED:29620623). Additionally, the use of ex vivo lung perfusion (EVLP) has been reported as a successful method to assess donor lungs from victims of asphyxiation by hanging, potentially expanding the available donor pool (PUBMED:25038007). In summary, the skepticism about the use of lungs from "hanging donors" for transplantation has been challenged by several studies that report comparable outcomes to those from non-hanging donors. These findings suggest that with careful selection and assessment, including the use of EVLP, lungs from hanging donors can be a viable option for transplantation.
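One of the cohorts above (PUBMED:25769558) adjusted its hanging-versus-control comparison with propensity score matching. A minimal sketch of that idea, with an entirely hypothetical donor table (the covariates, sample sizes, and values below are invented for illustration and are not those of the study):

```python
# Propensity score matching sketch: match each "hanging donor" case to the
# control donor with the closest estimated probability of being in the
# hanging group given observed covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300

# Hypothetical donor covariates: age, PaO2/FiO2 ratio, prior cardiac arrest.
X = np.column_stack([
    rng.normal(45, 12, n),    # donor age (years)
    rng.normal(420, 60, n),   # PaO2/FiO2 ratio
    rng.integers(0, 2, n),    # cardiac arrest before donation (0/1)
])
treated = rng.integers(0, 2, n).astype(bool)  # hanging vs other cause (toy labels)

# 1. Fit a propensity model: P(hanging donor | covariates).
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2. Greedy 1:1 nearest-neighbor matching on the propensity score.
available = set(np.flatnonzero(~treated))
pairs = []
for i in np.flatnonzero(treated):
    if not available:
        break
    j = min(available, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    available.remove(j)

print(f"matched {len(pairs)} treated/control pairs")
```

Outcomes (survival, PGD rates) would then be compared within the matched pairs; the greedy matcher here is the simplest variant, and published analyses often add caliper constraints or matching with replacement.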
Instruction: HAART and liver: is it safe? Abstracts: abstract_id: PUBMED:26587072 Risk Factors of Cardiovascular Diseases in HIV/AIDS Patients on HAART. Background: The introduction and widespread use of combination antiretroviral therapy, referred to as highly active antiretroviral therapy (HAART), in the mid-1990s has led HIV-infected individuals to experience a dramatic decline in immunodeficiency-related events and death. There is growing concern about metabolic complications associated with HIV and HAART, which may increase cardiovascular risk and disease. The aim of this study was to investigate the cardiovascular risk profile of HIV/AIDS patients receiving HAART and those not receiving HAART at HIV/AIDS treatment centres in the South West Region of Cameroon. Methods: Consenting participants, who had been receiving HAART, were compared with HAART-naive participants. A questionnaire was administered; anthropometric and blood pressure measurements were recorded under standard conditions. Blood samples were obtained for the determination of plasma glucose and lipid levels. Results: Two hundred and fifteen participants were recruited, 160 (74.4%) were on HAART and 55 (25.6%) were HAART naive. Among the individual lipid abnormalities, increased total cholesterol was the most prevalent (40.0%). Participants on HAART were about 8 times more likely to develop hypercholesterolemia than the HAART-inexperienced group (OR 8.17; 95% CI: 3.31-20.14; p<0.001). Hypertension had a prevalence of 25.6% (95% CI: 15.3%-35.9%) and was significantly higher, by about 2-fold, in the HAART-treated than the HAART-untreated group (p=0.033). The prevalence of low HDL-c was significantly higher in males (24.1%) compared to females (11.2%) (p=0.0196). More females (27.3%) than males (7.4%) were obese (p=0.0043). HAART use and treatment duration of more than five years were significantly associated with higher prevalence of CVD risk factors. Conclusion: HAART treatment was associated with significantly higher prevalence of hypercholesterolemia, increased LDL-c and hypertension, hence the risk of cardiovascular diseases. abstract_id: PUBMED:26910292 Ocular Manifestations in Patients with Human Immunodeficiency Virus Infection in the Pre-HAART Versus the HAART Era in the North Indian Population. Purpose: To compare changes in the demographic profile and ocular manifestations in patients with HIV in the pre-HAART and HAART era in North India. Methods: In this single-center cross-sectional study, 100 HIV patients receiving HAART and 96 HIV patients in the pre-HAART era were enrolled. Prevalence of ocular manifestations of HIV was calculated for both cohorts. Results: The prevalence of ocular manifestations was not statistically different in the two eras (38%, SE: 4.85% in HAART era; 41.67%, SE: 5% in pre-HAART era) (p = 0.60). Mean CD4 counts were lower in the pre-HAART era compared with the HAART era (p < 0.001). In the HAART era, cytomegalovirus (CMV) retinitis and HIV retinopathy continued to remain the most common infectious and non-infectious cause of visual morbidity. Conclusions: While the introduction of HAART has resulted in a major impact on the overall health of patients with HIV, the spectrum of ocular disease remains largely unchanged in developing countries such as India. abstract_id: PUBMED:36387433 Prevalence and predictors of dyslipidemia among HAART treated and HAART naive HIV positive clients attending Debre Tabor Hospital, Debre Tabor, Ethiopia.
Background: Highly active anti-retroviral therapy has been reported to be associated with a number of side effects in human immunodeficiency virus patients, among which dyslipidemia is a common metabolic disorder. Methods: A hospital-based comparative cross-sectional study among 228 HIV positive patients was conducted from July to August 2020. Socio-demographic and clinical data were collected using structured questionnaires. Fasting venous blood samples were collected and analyzed for lipid profiles. EDTA samples were analyzed for CD4+ T cell determination. Anthropometric measurements were taken. Data were analyzed using SPSS version 22. Independent t-tests and binary logistic regression were performed. Result: A total of 228 HIV patients were enrolled in the study. Prevalence of dyslipidemia in HAART naive and HAART treated patients was 61 (53.5%) and 84 (73.7%), respectively. The prevalence of Total Cholesterol ≥200 mg/dl was 50% and 30%; High density lipoprotein cholesterol <40 mg/dl was 43.8% and 36%; Low density lipoprotein cholesterol ≥130 mg/dl was 48.3% and 28.1%; and Triglyceride ≥150 mg/dl was 59.6% and 39% among HAART treated and HAART naive, respectively. Age greater than 40 years (AOR = 3.27, 95% C.I: 1.47-7.25), blood pressure ≥140/90 (AOR = 16.13, 95% C.I: 5.81-44.75), being on HAART (AOR = 2.73, 95% C.I: 1.35-5.53) and body mass index >25 kg/m2 (AOR = 1.92, 95% C.I: 1.20-4.81) were identified as determinants of dyslipidemia. Conclusion: The mean value of lipid profile was significantly higher among HAART-treated as compared to HAART-naive HIV positive clients. abstract_id: PUBMED:31603989 HIV-1, HAART and cancer: A complex relationship. HIV infected people are at higher risk of developing cancer, although it is globally diminished in the era of highly active antiretroviral treatment (HAART). Recently, antioncogenic properties of some HAART drugs were discovered. We discuss the role of HAART in the prevention and improvement of treatment outcomes of cancers in HIV-infected people. We describe different trends in HAART-cancer relationships: cancer-predisposing as well as cancer-preventing. We cover the roles of particular drug regimens in cancer prevention. We also describe cases of cancer treatment with HAART drugs in HIV-negative people, including ongoing clinical studies that may directly point to a possible independent anti-oncogenic activity of HAART drugs. We conclude that despite potent antioncogenic activities of every class of HAART drugs reported in preclinical models, the evidence to date indicates that their independent clinical impact in HIV-infected people is limited. Improved cancer prevention strategies besides HAART are needed to reduce HIV-cancer-related mortality. abstract_id: PUBMED:29681960 First uses of HAART 300 rings for aortic valve repair in Poland - 4 case studies. Introduction: Aortic valve reconstructions using geometric annuloplasty rings HAART 300/200 open a new era in aortic valve surgery. The HAART technology resizes, reshapes, stabilizes and simplifies aortic valve repair. The HAART aortic repair rings are designed to be implanted directly into the aortic annulus (under aortic valve leaflets). Aim: We present the first 4 cases in Poland of aortic valve reconstructions using geometric annuloplasty rings HAART 300. Material And Methods: Two patients had type IA aortic insufficiency (according to the El-Khoury classification); they were treated by HAART 300 ring insertion and ascending aorta prosthesis implantation.
The third patient, a Marfan patient with type IB aortic insufficiency, was repaired by HAART 300 ring implantation followed by a remodeling (Yacoub) procedure. The fourth patient, with type II aortic insufficiency (due to RCC prolapse), was repaired by HAART 300 implantation and cusp plication. Results: All patients showed good results on 6-month postoperative 3D TTE examinations. Conclusions: The presented technique is reproducible and simplifies aortic valve reconstructions. abstract_id: PUBMED:29577081 Pharmacogenetic Associations with ADME Variants and Virologic Response to an Initial HAART Regimen in HIV-Infected Women. Background: Clinical response to highly active antiretroviral therapy (HAART) varies among different populations. A portion of this variability may be due to variation in genes involved in the absorption, distribution, metabolism, and excretion (ADME) of HAART. Design: To identify genetic factors involved in virologic responses to HAART, 13 genes in ADME pathways were analyzed in a cohort of HIV-infected women on HAART. A total of 569 HIV-positive participants from the Women's Interagency HIV Study who initiated HAART from 1994-2012 and had genotype data were included in these analyses. Methods: Admixture maximum likelihood burden testing was used to evaluate gene-level associations between common genetic variation and virologic response (achieving <80 viral copies/mL) to HAART overall and with specific drug classes. Results: Six statistically significant (P<0.05) gene-level burden tests were observed with response to specific regimen types. CYP2B6, CYP2C19 and CYP2C9 were significantly associated with response to protease inhibitor (PI)-based regimens. CYP2C9, ADH1A and UGT1A1 were significantly associated with response to triple nucleoside reverse transcriptase inhibitor (NRTI) treatment. Conclusions: Although no genome-wide associations with virologic response to HAART overall were detected in this cohort of HIV-infected women, more statistically significant gene-level burden tests were observed than would be expected by chance (two and a half expected, six observed). It is likely that variation in one of the significant genes is associated with virologic response to certain HAART regimens. Further characterization of the genes associated with response to PI-based treatment is warranted. abstract_id: PUBMED:27390719 Impact of vitamin supplements on HAART related hematological abnormalities in HIV-infected patients. Background: The human immunodeficiency virus (HIV) is one of the most life-threatening human infections. The advent of highly active antiretroviral therapy (HAART) has dramatically changed the course of HIV infection and patients' quality of life. In addition to the benefits, HAART can have numerous side effects and toxicities. Therefore, we aimed to assess the impact of short-term vitamin treatment on hematological parameters of HIV infected patients receiving HAART. Methods: This cross-sectional study was conducted on 100 confirmed HIV positive patients referred to the Shiraz HIV/AIDS research center in southwestern Iran. The first-line HAART regimen contained Zidovudine, Lamivudine, and Efavirenz. The studied population received vitamin B12 weekly and folic acid daily for at least one month. Results: After receiving HAART for at least 6 months with adherence above 90%, significant differences (p<0.05) were observed in MCV, MCH, HCT, TLC and RBC status compared to the baseline parameters.
After one month of vitamin treatment, four hematological parameters (TLC, MCV, RBC, and WBC) showed significant differences compared to the values under HAART alone. Conclusion: Combined administration of B12 and folate supplements is a beneficial adjunct for the hematologic status of HIV-infected persons receiving HAART. However, future research with a larger study population and longer follow-up periods is required. Moreover, special attention should be given to gender because the effect of vitamins on some hematologic parameters differed significantly between genders. abstract_id: PUBMED:18159353 Treatment of CMV retinitis with intravitreal ganciclovir in the HAART era. Objective: To describe the course and outcome of cytomegalovirus (CMV) retinitis among AIDS patients treated with intravitreal ganciclovir and systemic highly active antiretroviral therapy (HAART). The secondary objective was to compare the course of CMV retinitis between patients receiving HAART and those not receiving this treatment. Design: A retrospective cohort design consisting of 21 eyes from 16 patients with AIDS and CMV retinitis consecutively enrolled between January 1996 and August 1999. All patients received intravitreal ganciclovir therapy, and half of the patients began HAART as well. Duration of intravitreal therapy and ensuing disease quiescence, as well as CD4+ T cell counts at diagnosis and at cessation of ganciclovir, were calculated. Secondly, instantaneous hazards for outcomes such as CMV retinitis progression, ocular complications and mortality were compared. Setting: Tertiary care centre in Ottawa, Ontario. Results: Five of eight patients receiving HAART discontinued intravitreal ganciclovir after a mean treatment period of 428 days. During this period, their mean CD4+ count rose from 7.5 to 190 cells/μL. Subsequently, none of these patients experienced retinitis progression during follow-up periods lasting up to 820 days (mean of 617 days). Progression of CMV retinitis was 11.4 times more likely among those not receiving HAART (P=0.049). Conclusions: On initiating HAART, patients with CMV retinitis may enjoy significant recovery in CD4+ counts and sustained retinitis quiescence without specific anti-CMV therapy. Intravitreal ganciclovir injections seem well suited to offer effective CMV control during temporary periods of decreased CD4+ counts while awaiting HAART-mediated immune system reconstitution. abstract_id: PUBMED:29333525 Dyslipidemia and its Correlates among HIV Infected Children on HAART Attending Mbarara Regional Referral Hospital. Background: Abnormal lipids in HIV-infected people have been attributed to HAART and chronic HIV-associated inflammation. Little is known about dyslipidemia among children in Uganda in the era of increasing Highly Active Anti Retroviral Therapy (HAART) use. We determined the prevalence of lipid abnormalities, the correlation of the lipid abnormalities to CD4 count, HIV clinical stage and duration on HAART among HIV infected children. Methods: This was a cross-sectional, descriptive and analytical study of HIV infected children aged 1-17 years receiving HAART for more than 6 months in Mbarara Regional Referral Hospital. Consent and assent were obtained as appropriate. Sociodemographic, clinical and immunological data were collected and recorded in a questionnaire. A blood sample was taken for lipid profiling.
Dyslipidemia was defined as any low HDL (≤40 mg/dl), high LDL (>130 mg/dl), high TG (>130 mg/dl) and a high total cholesterol (>200 mg/dl) or a combination of these in the study population. The proportion of children with dyslipidemia was calculated, and logistic regression analysis was performed for associated factors. Results: The mean age was 118 months (SD 49 months), with 49.5% of the children male, and 62.1% had severe HIV disease at initiation of HAART. Mean duration of HAART was 55.6 months (SD 31.2 months). The prevalence of dyslipidemia was 74%. Among the children with dyslipidemia, 56.6% exhibited low HDL, 22% had hypertriglyceridemia, 15.6% had high LDL and 11% had hypercholesterolemia. We found a significant association between dyslipidemia and WHO clinical stage at initiation of HAART (AOR 2.9; 95% CI 1.05-8.45; p=0.040). Conclusion: There was a high prevalence of dyslipidemia associated with severe HIV disease at initiation of HAART among HIV-infected children on HAART. abstract_id: PUBMED:26316787 A narrative review of cost-effectiveness analysis of people living with HIV treated with HAART: from interventions to outcomes. Background: Since its introduction in 1996, highly active antiretroviral therapy (HAART), which involves the combination of antiretroviral drugs, has resulted in significant improvements in the morbidity, mortality, and life expectancy of HIV-infected patients. Numerous studies of the cost-effectiveness of HAART from different perspectives in HIV have been reported. Aim: To investigate the economic outcomes and relevance of HAART for people living with HIV. Materials And Methods: A narrative literature review was conducted on 22 peer-reviewed full economic evaluations of people living with HIV treated with different HAART regimens and published in English between January 2005 and December 2014. Information regarding study details, such as interventions, outcomes, and modeling methods, was extracted. The high heterogeneity of the included studies rendered a meta-analysis inappropriate; therefore, we conducted a comparative analysis of studies grouped according to the similarity of the different intervention types and outcomes. Results: Most of the economic evaluations of HAART focused on comparisons between the specific HAART regimens and others from the following perspectives: injecting drug users versus noninjecting drug users, HIV-infected adults without AIDS versus those with AIDS, regimens based on developed world guidelines versus those based on developing world guidelines, self-administered HAART versus directly observed HAART, and "ideal" versus "typical" regimens. Conclusion: In general, HAART is more cost-effective than other therapeutic regimens adopted so far. Further investigations, especially head-to-head comparisons of "ideal" and "typical" trials of different regimen combinations, are required to identify the optimal HAART regimens. Answer: The provided abstracts do not directly address the safety of highly active antiretroviral therapy (HAART) in relation to liver health. However, some of the abstracts discuss the metabolic side effects and complications associated with HAART, which may indirectly suggest potential concerns for liver health.
For instance, the abstract from PUBMED:26587072 discusses the cardiovascular risk profile of HIV/AIDS patients receiving HAART and notes that HAART treatment is associated with a significantly higher prevalence of hypercholesterolemia, increased LDL-c, and hypertension, which are risk factors for cardiovascular diseases. While this does not specifically mention liver safety, dyslipidemia is a known risk factor for non-alcoholic fatty liver disease (NAFLD), suggesting that HAART may have implications for liver health. Similarly, the abstract from PUBMED:36387433 reports that HAART-treated HIV-positive patients have a higher prevalence of dyslipidemia compared to HAART-naive patients. Dyslipidemia is a condition that can have implications for liver health, as it can contribute to the development of NAFLD and other liver conditions. The abstract from PUBMED:31603989 discusses the complex relationship between HIV-1, HAART, and cancer, noting that HIV-infected people are at higher risk of developing cancer, but the introduction of HAART has diminished this risk. While this abstract does not specifically address liver safety, it does highlight the multifaceted effects of HAART on health outcomes. None of the abstracts provided offer a clear answer to the question of HAART's safety in relation to liver health. To determine the safety of HAART for the liver, one would need to review literature specifically focused on the hepatic side effects of antiretroviral drugs, including studies on drug-induced liver injury (DILI) in the context of HAART. It is important to note that the safety profile of HAART may vary depending on the specific drugs included in the regimen, the duration of treatment, and individual patient factors such as pre-existing liver conditions or concurrent medication use.
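The associations cited in these abstracts are mostly reported as odds ratios with 95% confidence intervals. A minimal worked sketch of how such an unadjusted estimate is computed from a 2x2 exposure-outcome table, using counts reconstructed approximately from the prevalences reported in PUBMED:36387433 (84/114 treated and 61/114 naive with dyslipidemia); this is an illustration of the crude calculation only, not a reproduction of that paper's covariate-adjusted AOR:

```python
# Odds ratio with a Woolf (log-based) 95% CI from a 2x2 table.
import math

#                  dyslipidemia   no dyslipidemia
a, b = 84, 30    # HAART-treated: cases, non-cases (approximate counts)
c, d = 61, 53    # HAART-naive:   cases, non-cases (approximate counts)

or_est = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_est) - 1.96 * se_log_or)
hi = math.exp(math.log(or_est) + 1.96 * se_log_or)
print(f"OR = {or_est:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Adjusted odds ratios such as the AOR = 2.73 for being on HAART come from multivariable logistic regression rather than this single-table formula, which is why the crude estimate here differs from the published adjusted value.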
Instruction: Is a specific drinking pattern a consistent feature of the Mediterranean diet in Spain in the XXI century? Abstracts: abstract_id: PUBMED:24954423 Is a specific drinking pattern a consistent feature of the Mediterranean diet in Spain in the XXI century? Background And Aim: Over the last 50 years, people in Spain have increasingly been eating their main meal away from home and are shifting away from the typical Mediterranean diet (MD). In addition, wine consumption has decreased whereas beer intake has risen. Consequently, it is uncertain if the Mediterranean drinking pattern (MDP; moderate alcohol intake mainly from wine and during meals) is a habitual feature of the MD today. Methods And Results: Cross-sectional study conducted from 2008 to 2010 among 8894 individuals representative of the Spanish population aged 18-64 years. Consumption of alcoholic beverages and food was collected with a validated diet history. Accordance with the MD was defined as a score ≥8 on the Mediterranean Diet Adherence Screener (MEDAS) or ≥5 in the Trichopoulou index (after excluding alcohol intake from both indices). Among individuals with MEDAS-based MD accordance, only 17.1% had an MDP. After adjustment for potential confounders, this drinking pattern showed a weak association with higher MD accordance (odds ratio (OR) 1.32; 95% confidence interval (CI) 1.12-1.57). Only 14.7% of those with Trichopoulou-based MD accordance had an MDP; this pattern showed an even weaker association with higher MD accordance (OR 1.17; 95% CI 1.01-1.36). Similar results were obtained when this drinking pattern was redefined to include persons who drank wine with or outside of meals, as well as those who were primarily beer drinkers. Conclusions: The MDP is not a habitual feature of the MD in the early XXI century in Spain. abstract_id: PUBMED:38217643 Alcohol consumption patterns and adherence to the Mediterranean diet in the adult population of Spain. Purpose: The objective is to evaluate the association between various indicators of alcohol consumption and the degree of adherence to the Mediterranean diet among the Spanish adult population. Methods: A cross-sectional study including 44,834 participants ≥ 15 years of age from the 2017 National Health Survey and the 2020 European Health Survey in Spain. Alcohol patterns were defined based on (1) average intake: individuals were classified as low risk (1-20 g/day in men and 1-10 g/day in women) and high risk (> 20 g/day in men or > 10 g/day in women), (2) binge drinking, and (3) alcoholic beverage preference. Non-adherence to the Mediterranean diet was defined as scoring < 7 points on an adapted Mediterranean Diet Adherence Screener index (range 0-10). Odds ratios (OR) were estimated using logistic regression models adjusted for relevant covariates. Results: Compared to non-drinkers, low and high-risk drinkers were more likely to report non-adherence to the Mediterranean diet: ORs 1.35 (95% CI 1.23; 1.49) and 1.54 (95% CI 1.34; 1.76), respectively. Similarly, reports of binge drinking less than once a month were associated with higher likelihood of non-adherence (OR 1.17; 95% CI 1.04; 1.31). Individuals reporting no preference for a specific beverage and those with a preference for beer or for spirits had lower adherence: ORs 1.18 (95% CI 1.05; 1.33), 1.31 (95% CI 1.17; 1.46), and 1.72 (95% CI 1.17; 2.54), respectively, while a preference for wine showed no association (OR 1.01; 95% CI 0.90; 1.13).
Conclusion: Alcohol consumption, even in low amounts, is associated with lower adherence to the Mediterranean diet. Therefore, alcoholic beverages should not be included in measures that define the Mediterranean diet. abstract_id: PUBMED:30813581 Agreement among Mediterranean Diet Pattern Adherence Indexes: MCC-Spain Study. There are many different methods used to measure the degree of adherence to a Mediterranean diet (MD), limiting comparison and interpretation of their results. The concordance between different methodologies has been questioned and their evaluation recommended. The aim of this study was to evaluate the agreement among five indexes that measure adherence to a Mediterranean dietary pattern. The study population included healthy adults selected in the Multi-Case Control Spain (MCC-Spain) study recruited in 12 provinces. A total of 3640 controls were matched to cases by age and sex. To reach the aim, the following scores of adherence to a Mediterranean dietary pattern were calculated: Mediterranean diet score (MDS), alternative Mediterranean diet (aMED), relative Mediterranean diet (rMED), dietary score (DS) and literature-based adherence score (LBAS). The relative frequency of subjects with a high level of adherence to an MD varied from 22% (aMED index) to 37.2% (DS index). Similarly, a high variability was observed for the prevalence of a low level of MD adherence: from 24% (rMED) to 38.4% (aMED). The correlation among the MDS, aMED and rMED indexes was moderate, except for MDS and aMED with a high coefficient of correlation of 0.75 (95% CI 0.74-0.77). The Cohen's Kappa coefficient among indexes showed a moderate-fair concordance, except for MDS and aMED with a 0.56 (95% CI 0.55-0.59) and 0.67 (95% CI 0.66-0.68) using linear and quadratic weighting, respectively. The existing MD adherence indexes measured the same construct, although they were based on different constructing algorithms and varied in the food groups included, leading to a different classification of subjects. Therefore, concordance between these indexes was moderate or low. abstract_id: PUBMED:36678178 Mediterranean Alcohol-Drinking Pattern and Arterial Hypertension in the "Seguimiento Universidad de Navarra" (SUN) Prospective Cohort Study. Alcohol drinking patterns may determine the risk of hypertension and may also modify the detrimental effect of high alcohol intake. We prospectively evaluated the effect of the Mediterranean alcohol-drinking pattern and its interaction with the amount of alcohol consumed on the incidence of arterial hypertension. In the "Seguimiento Universidad de Navarra" (SUN) cohort, we followed up 13,805 participants, all of them initially free of hypertension, during a maximum period of 16 years. Information about diet, chronic diseases, lifestyle and newly diagnosed hypertension was collected using validated questionnaires. We used a 7-item score (0 to 9 points) that jointly considered moderate alcohol consumption, distributed over the week, with meals, and a preference for red wine and avoidance of binge-drinking. During 142,404 person-years of follow-up, 1443 incident cases of hypertension were identified. Low adherence (score < 2) to the Mediterranean alcohol-drinking pattern was significantly associated with a higher incidence of hypertension (multivariable-adjusted hazard ratio 1.81, 95% confidence interval 1.09-2.99) as compared to the high-adherence (score > 7) category. Among alcohol consumers, a high adherence to the MADP is associated with a lower incidence of hypertension.
Compared with abstinence, a high adherence did not seem to differ regarding its effect on hypertension risk. abstract_id: PUBMED:28421787 Adherence to the Mediterranean diet pattern, cognitive status and depressive symptoms in an elderly non-institutionalized population. Introduction: Scientific evidence indicates that adherence to the Mediterranean diet protects against the deterioration of cognitive status and depressive symptoms during aging. However, few studies have been conducted in elderly non-institutionalized subjects. Objective: This study evaluated the relation between adherence to the Mediterranean dietary pattern and cognitive status and depressive symptoms in an elderly population over 75 years. Methods: A cross-sectional study was conducted in a Mediterranean city (Garrucha, Spain) in 79 elderly people over 75 (36 men and 41 women). Adherence to the Mediterranean dietary pattern was determined using the Mediterranean Diet Adherence Screener (MEDAS). Cognitive function was determined by the Mini Mental State Examination (MMSE), and depressive symptoms were assessed by the Geriatric Depression Scale (GDS). Results: Most of the population showed a very high adherence to the Mediterranean diet pattern and optimal cognitive and affective status. They consumed olive oil as their main source of fat, high levels of fish and fruit, low levels of foods with added sugars, and a low consumption of red meat. A significant relation between the MEDAS and MMSE scores was found. However, no relationship was observed between the MEDAS and GDS. Conclusions: The Mediterranean diet pattern was positively related with cognitive function, although the influence of a healthy dietary pattern on the symptomatology of depression was unclear. However, an effective strategy against cognitive decline and depression would be to improve physical activity rates, establish lifelong healthy eating habits, and consume a nutritionally rich diet in order to enhance the quality of life of the elderly. abstract_id: PUBMED:20508672 Complexity of emergency care in XXI century Spain. In this article, the authors explore the factors that, in their opinion, currently explain the complexity of emergency care in Spain. Since the start of the XXI century, and in spite of the fact that accident and emergency medicine in the world is undergoing considerable scientific-technical progress, accident and emergency care in Spain is immersed in a care maelstrom acting on a terrain that is ill-prepared for the successive emergent technological advances to take root and develop. This problem is due to the persistence of numerous obstacles preventing progress, with the lack of specialisation and the corporate spirit of the management boards of some scientific societies playing a principal and direct role in this. Due to all of this, advances are frequently achieved more through inertia, or through the overflow of information from other disciplines, than due to the initiative of the professionals of emergency medicine in Spain. Similarly, there is a growing tendency amongst these professionals to move to other disciplines or fields of care that offer better working conditions or simply better professional expectations. abstract_id: PUBMED:31122086 Mediterranean food pattern vs. Mediterranean diet: a necessary approach? "Food pattern" and "diet" are sometimes used with the same meaning.
"Food pattern" can be defined by a priori and a posteriori approaches and refers to the characterisation of foods and frequency of consumption within a population combining with various characteristics. "Diet" may be represented as an individual way of life and considers the food consumption and others like culture, history, society, economy, sustainability, culinary activities, conviviality, physical activity and rest. Mediterranean diet fits on these two concepts. The question is if we are measuring Mediterranean diet as a whole, or whether we are only measuring one of its parts, neglecting the rest of components. Can this compromise the way we monitor its adherence and evolution, with the possibility of losing the perception of certain aspects? How can we preserve and promote the concept if we do not monitor all the pieces of the puzzle? abstract_id: PUBMED:38201889 High Adherence to a Mediterranean Alcohol-Drinking Pattern and Mediterranean Diet Can Mitigate the Harmful Effect of Alcohol on Mortality Risk. Background: Alcohol is a psychoactive substance with deleterious effects on human health and mortality. This study aims to investigate the joint associations between the Mediterranean Diet (MedDiet), alcohol- consumption patterns and mortality from the following: all causes, cardiovascular, neoplastic, the digestive system, and other causes. Methods: A sample of 3411 alcohol consumers aged ≥18 years was selected from two prospective cohort studies: the MICOL and NUTRIHEP Study. Cohorts were enrolled in 2005-2006, and followed up until December 2022, capturing data on alcohol consumption, diet, and mortality. Adherence to the MedDiet was measured by the relative Mediterranean score (rMED), and alcohol consumption by the Mediterranean Alcohol-drinking Pattern index (MADP). Statistical analyses included flexible parametric survival models and subdistribution hazard ratios, to consider different causes of death. Results: a significant increase in digestive-system (SHR 2.77, 95% CI 1.16; 63) and cancer mortality risk (SHR 2.25, 95% CI 1.08; 4.70) was observed among individuals with low adherence to the MADP. Low adherence to the Mediterranean pattern of alcohol consumption, combined with low adherence to the MedDiet, was associated with higher overall mortality (HR 2.29, 95% CI 1.04, 5.04), and, in particular, with higher mortality from digestive system diseases (SHR 4.38, 95% CI 1.22, 15.8). Conclusions: This study suggests that deleterious effects of alcohol on mortality vary, depending on alcohol consumption patterns and dietary context. Higher adherence to the MedDiet appears to mitigate the adverse effects of moderate alcohol consumption, particularly for wine drinkers. abstract_id: PUBMED:22525205 Dietary habits of the hypertensive population of Spain: accordance with the DASH diet and the Mediterranean diet. Objectives: Dietary treatment is appropriate for all patients with hypertension. However, only a few population-based studies have evaluated the diet of hypertensive individuals, and none of them has been conducted in Europe. This study examined accordance with the Dietary Approaches to Stop Hypertension (DASH) diet and the Mediterranean diet in the hypertensive population of Spain. Methods: A cross-sectional study conducted in 2008-2010 among 12 948 individuals representative of the Spanish population aged at least 18 years. Blood pressure was measured with validated devices under standardized conditions. Habitual food consumption was assessed with a computerized diet history. 
DASH accordance was defined as at least 4.5 points on a score using nine nutrient targets, and Mediterranean diet accordance as at least 9 points on the Mediterranean Diet Adherence Screener score. Results: Among the diagnosed hypertensive individuals, 17.3% [95% confidence interval (CI) 15.4-19.2%] had a DASH-accordant diet, and 17.2% (95% CI 15.4-19.1%) had a Mediterranean-accordant diet. The frequency of DASH accordance increased with age, was higher among women and hypercholesterolaemic individuals, and lower in current smokers. Similar results were found for Mediterranean diet accordance. Only 60% of the diagnosed hypertensive individuals reported receiving and following a diet prescribed to control hypertension; this group showed a better accordance with the DASH diet [age and sex-adjusted odds ratio (aOR) 1.43; 95% CI 1.08-1.88]. As compared with the 1518 hypertensive individuals unaware of their condition, those who were diagnosed showed a similar frequency of accordance with the DASH diet (aOR 1.08; 95% CI 0.87-1.34) and the Mediterranean diet (aOR 0.98; 95% CI 0.79-1.20). Conclusion: The diet of hypertensive individuals in Spain has a low accordance with the DASH and Mediterranean dietary patterns. The similarity in healthy-diets accordance between the diagnosed and undiagnosed hypertensive individuals suggests that nutritional interventions in hypertensive patients are poor, a problem that should be compellingly addressed. abstract_id: PUBMED:30241304 Effect of Tourism Pressure on the Mediterranean Diet Pattern. Despite proposed conceptual frameworks of eating behaviors, little is known about environmental factors contributing to changes in food habits. Few studies have reported the external influence of tourism on the inhabitants' eating patterns. The present study aimed to investigate whether tourism pressure affects Canary Islands inhabitants' adherence to the Mediterranean diet pattern. Data were obtained from a health and lifestyle population-based survey conducted in 2009 and 2015. From the reported intake frequency, a Mediterranean diet score was defined (0 to 11 points). Tourist overnight stays, which were stratified by nationality and area of destination, were used as a proxy variable to measure tourism pressure. A multilevel linear regression analysis by restricted maximum likelihood estimation was performed to examine the relationship between tourism pressure and the Mediterranean diet score. A significant negative association between the Mediterranean diet score and British tourism pressure was observed (β = -0.0064, p = 0.010), whereas German tourism pressure increased inhabitants' adherence (β = 0.0092, p = 0.042). The socioeconomic level of tourists seems to play a role in differences in the tourism pressure effect by nationality. Further investigation of other highly touristic destinations is needed to confirm these findings that could contribute to a shift in tourism and public health nutrition policies. Answer: No, a specific drinking pattern is not a consistent feature of the Mediterranean diet in Spain in the XXI century. A cross-sectional study conducted from 2008 to 2010 among 8894 individuals representative of the Spanish population aged 18-64 years found that among individuals with Mediterranean Diet Adherence Screener (MEDAS)-based Mediterranean diet (MD) accordance, only 17.1% had a Mediterranean drinking pattern (MDP). This drinking pattern showed a weak association with higher MD accordance (PUBMED:24954423). 
Additionally, a study including 44,834 participants from the 2017 National Health Survey and the 2020 European Health Survey in Spain found that alcohol consumption, even in low amounts, is associated with lower adherence to the Mediterranean diet, suggesting that alcoholic beverages should not be included in measures that define the Mediterranean diet (PUBMED:38217643). Therefore, the traditional Mediterranean drinking pattern, characterized by moderate alcohol intake mainly from wine and during meals, is not a habitual feature of the MD in Spain in the early XXI century.
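Note on the adherence cut-offs used above: the classifications reduce to simple threshold rules, which can be made concrete in a short sketch. The thresholds are taken from the abstracts (MEDAS score ≥8 for Mediterranean diet accordance in PUBMED:24954423; scoring <7 on the adapted 0-10 MEDAS index marking non-adherence in PUBMED:38217643); the Python function names and input format are hypothetical, not part of any published tool.

def medas_accordant(medas_score: int) -> bool:
    # MD accordance per PUBMED:24954423: MEDAS score >= 8
    # (that study excluded alcohol intake before scoring).
    return medas_score >= 8

def adapted_medas_non_adherent(score: int) -> bool:
    # Non-adherence per PUBMED:38217643: < 7 points on the
    # adapted MEDAS index, which ranges from 0 to 10.
    if not 0 <= score <= 10:
        raise ValueError("adapted MEDAS index ranges from 0 to 10")
    return score < 7

# A respondent scoring 6 points fails both adherence definitions.
print(medas_accordant(6), adapted_medas_non_adherent(6))  # False True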
Instruction: Is an individual prediction of maximal work rate by 6-minute walk distance and further measurements reliable in male patients with different lung diseases? Abstracts: abstract_id: PUBMED:23363649 Is an individual prediction of maximal work rate by 6-minute walk distance and further measurements reliable in male patients with different lung diseases? Background: In patients with chronic lung diseases, the work rate for endurance training is calculated from the maximal work rate (Wmax). Because the assessment bears side effects, a prediction by more easily accessible tests would be of practical use. Objective: We addressed the reliability of predicting Wmax on the basis of the 6-min walk distance (6MWD) test and a set of further parameters in patients with different lung diseases. Methods: Baseline data of a longitudinal study including 6MWD, Wmax, peripheral muscle force, lung function, fat-free mass and dyspnea (Modified Medical Research Council score) of 255 men with occupational lung diseases (104 asthma, 69 asbestosis, 42 silicosis, 40 chronic obstructive pulmonary disease) were evaluated. Results: 6MWD correlated with Wmax (r = 0.51, p < 0.05). The product of 6MWD and body weight, in particular fat-free mass, led to an improvement in the correlation of Wmax with 6MWD. Muscle force, lung function and Modified Medical Research Council score correlated moderately but significantly with Wmax (p < 0.05 each). The maximum correlation gained by including 6MWD and further parameters in the prediction equations was r = 0.76 in patients with obstructive lung function impairment and r = 0.61 in asbestosis patients. The residual standard deviations of Wmax predicted by the calculated equations ranged between 20 and 28 W, and the 95% prediction intervals of Wmax ranged between ±47 and ±65 W. Conclusions: A reliable prediction of individual Wmax by 6MWD or related measures, and therefore a replacement by other tests, is not possible. Nevertheless, it may be useful for the comparison of average values in epidemiological and clinical studies. abstract_id: PUBMED:38141576 Pulmonary function test results are correlated with 6-minute walk distance, distance-saturation product, and 6-minute walk work in patients with lymphangioleiomyomatosis. Background And Objective: Lymphangioleiomyomatosis (LAM) is a rare multicystic lung disease. Although a correlation between pulmonary function test (PFT) results and exercise capacity appears probable, it has not yet been demonstrated. The aim of this study was to assess whether PFT results correlate with 6-minute walk test (6MWT) results in patients with LAM. Methods: We conducted a retrospective study of all patients with a diagnosis of LAM followed in a French reference centre over a 13-year period. PFT and 6MWT data were collected. Distance-saturation product (DSP) and 6-minute walk work (6MWORK) were calculated. Results: A total of 62 patients were included. Their median forced expiratory volume in 1 s (FEV1) was 82.7% predicted and their median forced vital capacity (FVC) was 96.7% predicted. The median diffusing capacity of the lungs for carbon monoxide (DLCO) was 58.5% predicted and was decreased in 79% of the patients. The median 6-minute walk distance was 535 m, which was 90.9% of the 602 m predicted distance. The median DSP was 497.4 m% and the median 6MWORK was 32,910 kg.m.
The distance walked during the 6MWT was significantly correlated with FVC%predicted (R = 0.435), FEV1%predicted (R = 0.303), TLC%predicted (R = 0.345), FRC%predicted (R = 0.262), RV/TLC ratio (R = -0.271), and DLCO%predicted (R = 0.279). DSP and 6MWORK were each significantly correlated with different PFT results. Conclusion: The present study shows that PFT results are potential predictors of exercise capacity in patients with LAM. Additional studies are required to evaluate the interest of DSP and 6MWORK in LAM. abstract_id: PUBMED:28571188 Six Minute Walk Test: A Tool for Predicting Mortality in Chronic Pulmonary Diseases. Introduction: The Six Minute Walk Test (6MWT) is a simple test used to measure exercise capacity in Chronic Pulmonary Diseases (CPDs). Decreased exercise capacity significantly affects the quality of life. The 6MWT is a known tool to measure exercise capacity and quality of life in CPD. However, the role of change in follow-up Six Minute Walk Distance (6MWD) in assessing mortality and its correlation with known prognostic factors have not been tested to the best of our knowledge. Aim: To study the correlation of change in 6MWD with change in spirometry and to study the role of 6MWD in predicting mortality in CPD. Settings And Design: Prospective cohort study. Materials And Methods: A total of 139 CPD patients were grouped into obstructive, restrictive and post Tuberculosis (TB) sequelae. Optimized treatment was given and they were followed up regularly. Spirometry and 6MWT were done at the beginning and after one year. Mean decline in FEV1 and FVC was correlated with mean decline in 6MWD. Statistical Analysis: With STATA 13.1, correlations between 6MWT and spirometric values were assessed using the Wilcoxon signed-rank test. Predictors of mortality were studied using multivariate analysis. Results: Overall mean decline in 6MWD was 16.07 m/year. There was no correlation between mean change in 6MWD and change in FEV1, FVC, and FEV1/FVC. On multivariate analysis, 6MWD was a better predictor of mortality than FEV1 (p < 0.001) or FVC (p < 0.001). At a cut-off value of 240 meters, the sensitivity and specificity of 6MWT in predicting mortality were 71.05% and 81.94%, respectively. Conclusion: The 6MWD changes independently and has no correlation with change in spirometry; it is an independent predictor of mortality in CPD. abstract_id: PUBMED:26229558 Overweight-mortality paradox and impact of six-minute walk distance in lung transplantation. Unlabelled: Overweight-mortality paradox and impact of six-minute walk distance (SMWD) in lung transplantation. Background: The objective of this study was to examine the combined prognostic influence of body mass index (BMI) and SMWD on mortality in lung transplant recipients. Methods: Consecutive isolated lung transplant recipients were identified. Preoperative BMI and SMWD data were collected. The cohort was followed for all-cause mortality. Results: The study included 324 lung transplant recipients with a mean age of 57 ± 13 years; 58% were male (27% obstructive, 3% vascular, 6% cystic fibrosis, and 64% with restrictive lung diseases). In the total cohort, 37% had normal BMI, 10% were underweight, 33% were overweight, and 20% were obese. The median SMWD was 700 feet. The lower SMWD group was defined as the patients who had SMWD < 237 feet as determined by receiver operating characteristic (ROC) analysis. Based on this definition, 66 patients (20%) had lower SMWD. There were 71 deaths during a median follow-up of 2.3 years.
In multivariate analysis, both BMI and SMWD were independently associated with death. Being overweight was associated with reduced mortality risk (hazard ratio (HR) 0.50, P = 0.042) compared to the normal BMI group, and this was primarily driven by early mortality posttransplant. This paradoxical overweight-mortality relationship remained significant in the lower SMWD group (HR 0.075, P = 0.018), but not in the higher SMWD group (P = 0.552). Conclusion: In lung transplant recipients in the lung allocation score (LAS) era, pretransplant BMI and SMWD were independent predictors of mortality after the transplant. The lowest mortality risk was noted in the group of transplant recipients identified as overweight, whereas being underweight or obese was associated with increased mortality. abstract_id: PUBMED:23322886 Prognostic value of walk distance, work, oxygen saturation, and dyspnea during 6-minute walk test in COPD patients. Background: Distance walked during the 6-min walk test (6MWT) predicts mortality in COPD. The body weight of the patient affects the work required to walk. Calculated work during the 6MWT (6MWT work) may account for differences in walk distance resulting from change in body weight. Thus, 6MWT work might be a better predictor of mortality than distance walked. This study was designed to test this hypothesis and to assess if other variables measured during the 6MWT, like continuous oximetry recording, offered additional prognostic information. Methods: This was a retrospective analysis of prospectively collected data; 104 COPD patients were studied. 6MWT was performed in all cases. 6MWT work was calculated as body weight (in kg) × distance walked (in m). Receiver operating characteristic curves were used to assess the value of variables to predict mortality. Additional analysis was performed using Kaplan-Meier survival plots and Cox proportional hazards regression models. Results: Mean follow-up was 590 ± 472 d. Eleven subjects (10.6%) died. 6MWT work was not better than distance walked to predict mortality (area under the curve 0.77 for 6MWT work vs 0.80 for distance; difference 0.03, 95% CI -0.05 to 0.12, P = .45). Subjects who died had more dyspnea (measured using the Borg scale) after the 6MWT (8.5 vs 4.0, P < .001), lower baseline SpO2 (85% vs 93%, P = .001), worse oxygen saturation during the 6MWT (mean SpO2 while walking 74.0% vs 86.6%, P = .02) and walked less distance (255 m vs 480 m, P = .001). On multivariate analysis, only 6MWT distance and dyspnea after the test correlated independently with mortality (P = .005 for both variables). Conclusions: 6MWT work was not more useful than 6MWT distance to predict mortality. The study confirms that 6MWT distance and dyspnea on exertion are key elements in prognostic evaluation in COPD, while the value of exercise oxygen desaturation is less clear. abstract_id: PUBMED:37415094 Longitudinal significance of six-minute walk test in patients with nontuberculous mycobacterial pulmonary disease: an observational study. Background: The long-term exercise tolerance changes in patients with nontuberculous mycobacterial pulmonary disease (NTM-PD) are of great interest because of its chronic course. This study aimed to characterize the associations between changes over time in six-minute walking test (6MWT) parameters and clinical parameters in patients with NTM-PD. Methods: Overall, 188 patients with NTM-PD, visiting outpatient clinics at Keio University Hospital from April 2012 to March 2020, were included in the study.
Data were collected using the St. George's Respiratory Questionnaire (SGRQ), pulmonary function test (PFT), blood tests, and the 6MWT at registration and at least once after that. The association of the anchors and clinical indicators with the 6MWT parameters was assessed. Results: The median age [interquartile range] of the patients was 67 [63-74] years. The median baseline six-minute walk distance (6MWD) and final Borg scale (FBS) were 413 [361-470] m and 1 [0-2], respectively. In the correlation analysis, ΔSGRQ total/year (yr), Δforced vital capacity (FVC, % predicted)/yr, Δforced expiratory volume in 1 s (FEV1, % predicted)/yr, and Δdiffusing capacity for carbon monoxide (DLCO, % predicted)/yr correlated with both Δ6MWD/yr and ΔFBS/yr in the longitudinal analysis (|Rho| > 0.20). When stratified into three quantiles of changes in each anchor, the 6MWT parameters worsened over time in the bottom 25% group by mixed-effects model. Specifically, Δ6MWD was affected by SGRQ activity, SGRQ impacts, PFT (FVC, FEV1, and DLCO), and C-reactive protein (CRP). ΔFBS was affected by all SGRQ components, the total score, and PFT. Anchor scores and variables at baseline that worsened Δ6MWD were higher SGRQ scores, lower FVC (% predicted), lower DLCO (% predicted), higher Krebs von den Lungen-6, old age, and undergoing treatment at registration. Similarly, these clinical parameters and elevated CRP, excluding undergoing treatment at registration, worsened ΔFBS. Conclusions: The decreased walking distance and exacerbation of dyspnea on exertion over time in patients with NTM-PD may reflect a deterioration of health-related quality of life and pulmonary function. Thus, the change in 6MWT over time can be used as an indicator to accurately assess the patient's condition and tailor their healthcare environment. abstract_id: PUBMED:35062542 Development Technologies for the Monitoring of Six-Minute Walk Test: A Systematic Review. In pandemic times, monitoring the progression of some diseases is affected and rehabilitation is more complicated. Remote monitoring may help solve this problem using mobile devices that embed low-cost sensors, which can help measure different physical parameters. Many tests can be applied remotely, one of which is the six-minute walk test (6MWT). The 6MWT is a sub-maximal exercise test that assesses aerobic capacity and endurance, allowing early detection of emerging medical conditions through changes in its results. This paper presents a systematic review of the use of sensors to measure the different physical parameters during the performance of the 6MWT, focusing on various diseases, sensors, and implemented methodologies. It was performed with the PRISMA methodology, where the search was conducted in different databases, including IEEE Xplore, ACM Digital Library, ScienceDirect, and PubMed Central. After filtering the papers related to the 6MWT and sensors, we selected 31 papers that were analyzed in more detail. Our analysis discovered that the measurements of the 6MWT are primarily performed with inertial and magnetic sensors. Likewise, most research studies related to this test focus on multiple sclerosis and pulmonary diseases. abstract_id: PUBMED:31689414 Six-Minute Walk Test: Clinical Role, Technique, Coding, and Reimbursement. The 6-min walk test (6MWT) is a commonly used test for the objective assessment of functional exercise capacity for the management of patients with moderate-to-severe pulmonary disease.
Unlike pulmonary function testing, the 6MWT captures the often coexisting extrapulmonary manifestations of chronic respiratory disease, including cardiovascular disease, frailty, sarcopenia, and cancer. In contrast with cardiopulmonary exercise stress testing, this test does not require complex equipment or technical expertise. In this low-complexity, safe test, the patient is asked to walk as far as possible along a 30-m minimally trafficked corridor for a period of 6 min, with the primary outcome measure being the 6-min walk distance (6MWD) measured in meters. There has been interest in other derived indexes, such as the distance-desaturation product (the product of nadir oxygen saturation and walk distance), which in small studies has been predictive of morbidity and mortality in certain chronic respiratory conditions. Special attention to methodology is required to produce reliable and reproducible results. Factors that can affect walk distance include track layout (continuous vs straight), track length, oxygen amount and portability, learning effect, and verbal encouragement. The absolute 6MWD and change in 6MWD are predictive of morbidity and mortality in patients with COPD, pulmonary arterial hypertension, and idiopathic pulmonary fibrosis and patients awaiting lung transplant, highlighting its use in management decisions and clinical trials. As of January 2018, Current Procedural Terminology code 94620 (simple pulmonary stress test) has been deleted and replaced by two new codes, 94617 and 94618. Code 94617 includes exercise test for bronchospasm including pre- and postspirometry, ECG recordings, and pulse oximetry. Code 94618, pulmonary stress testing (eg, 6MWT), includes the measurement of heart rate, oximetry, and oxygen titration when performed. If 94620 is billed after January 2018 it will not be reimbursed. abstract_id: PUBMED:37209905 Association of remote dielectric sensing and six-minute walk distance among those with severe aortic stenosis. Background: Remote dielectric sensing (ReDS) is a novel non-invasive electromagnetic energy-incorporated technology to quantify lung fluid levels. The six-minute walk test is an established method to assess exercise capacity among those with a variety of chronic conditions related to heart and pulmonary diseases. We aimed to understand the association between ReDS value and six-minute walk distance (6MWD) in patients with severe aortic stenosis being evaluated for valve replacement. Methods: Patients who were hospitalized to receive trans-catheter aortic valve replacement were prospectively included, and simultaneous ReDS and 6MWD measurements were performed on admission. We attempted to correlate 6MWD with ReDS value. Results: A total of 25 patients (median 85 years, 11 men) were included. Median 6MWD was 168 (133, 244) meters and median ReDS value was 26% (23%, 30%). 6MWD displayed a moderate inverse correlation with ReDS value (r = -0.516, p = 0.008) and significantly distinguished ReDS value ≥30%, representing mild or greater pulmonary congestion, at a cut-off of 170 m (sensitivity 0.67 and specificity 1.00). Conclusions: 6MWD had a moderate inverse correlation with ReDS values among candidates for trans-catheter aortic valve replacement, indicating that patients with shorter 6MWD had increased pulmonary congestion as assessed by the ReDS system. abstract_id: PUBMED:23513011 Physiological response to the 6-minute walk test in chronic heart failure patients versus healthy control subjects.
Background: The distance walked in 6 minutes (6MWD) has been reported to be linearly related to peak oxygen uptake (VO2) in cardiac diseases and in lung diseases. In these patients, the VO2 during a 6-min walk test (walkVO2) has been found to be nearly equivalent to peakVO2, but with a lower respiratory exchange ratio (RER). Whether these observations translate to less functionally impaired patients or healthy control subjects is not exactly known. Methods: Thirty-two healthy control subjects and 15 chronic heart failure (CHF) patients performed a 6-min walk test and a maximal cardiopulmonary exercise test (CPET), both with measurements of gas exchange. Results: The 6MWD and peakVO2 were linearly correlated, but with an increased slope appearing above 532 m. In CHF patients, walkVO2 was similar to peakVO2, but with lower heart rate and ventilation than measured at peak exercise. In healthy control subjects, VO2, ventilation and heart rate were lower during the 6-min walk than at maximal exercise but higher than at the anaerobic threshold. The RER during the 6-min walk remained <1 in both groups. Conclusions: Above 500 m, 6MWD becomes less sensitive to any increase in peakVO2. Furthermore, CHF patients and healthy control subjects exercise respectively at maximal and high VO2, but below the anaerobic threshold (as assessed by a CPET), during the 6-min walk test. Answer: No, an individual prediction of maximal work rate (Wmax) by 6-minute walk distance (6MWD) and further measurements is not considered reliable in male patients with different lung diseases. A study that included 255 men with occupational lung diseases (asthma, asbestosis, silicosis, chronic obstructive pulmonary disease) found that although there was a moderate correlation between 6MWD and Wmax, the prediction of individual Wmax using 6MWD or related measures was not reliable. The maximum correlation achieved by including 6MWD and other parameters in prediction equations was r = 0.76 in patients with obstructive lung function impairment and r = 0.61 in asbestosis patients. However, the residual standard deviations of Wmax predicted by the calculated equations ranged between 20 and 28 W, and the 95% prediction intervals of Wmax ranged between ±47 and ±65 W. These findings suggest that while 6MWD and related measures may be useful for comparing average values in epidemiological and clinical studies, they cannot reliably predict individual Wmax in patients with different lung diseases (PUBMED:23363649).
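Two of the derived walk-test indices cited in these abstracts are defined by explicit arithmetic: 6MWT work is body weight multiplied by distance walked (PUBMED:23322886), and the distance-saturation product combines walk distance with the nadir oxygen saturation (PUBMED:38141576, PUBMED:31689414). The following is a minimal illustrative sketch; only the formulas come from the abstracts, while the Python function names and the example patient are assumptions.

def six_mwt_work(weight_kg: float, distance_m: float) -> float:
    # 6MWT work (kg.m): body weight (kg) x distance walked (m),
    # as defined in PUBMED:23322886.
    return weight_kg * distance_m

def distance_saturation_product(distance_m: float, nadir_spo2_pct: float) -> float:
    # DSP (m.%): distance walked x nadir SpO2 taken as a fraction, so a
    # hypothetical 535 m walk at a nadir SpO2 of 93% gives ~497.6, the
    # scale of the median DSP reported in PUBMED:38141576.
    return distance_m * (nadir_spo2_pct / 100.0)

# Hypothetical patient: 70 kg body weight, 400 m walked, nadir SpO2 of 93%.
print(six_mwt_work(70.0, 400.0))                 # 28000.0
print(distance_saturation_product(400.0, 93.0))  # 372.0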
Instruction: Does the position of the femoral tunnel affect the laxity or clinical outcome of the anterior cruciate ligament-reconstructed knee? Abstracts: abstract_id: PUBMED:18063177 Does the position of the femoral tunnel affect the laxity or clinical outcome of the anterior cruciate ligament-reconstructed knee? A clinical, prospective, randomized, double-blind study. Purpose: The aim of this study was to evaluate whether a change in the femoral graft insertion site between the 1-o'clock (high) and 2-o'clock (low) positions could change the laxity, the score on the 2000 International Knee Documentation Committee (IKDC) Knee Examination Form, or the score on the 2000 subjective IKDC Knee Evaluation Form in the anterior cruciate ligament (ACL)-reconstructed knee. Methods: The study was designed as a prospective, randomized, double-blind investigation. We randomized 30 patients to the low tunnel position group and 30 to the high tunnel position group. Four-stranded semitendinosus and gracilis single-bundle grafts were used. At follow-up, the patients were examined according to the IKDC evaluation form and the IKDC examination form. The exact measurements of laxity at 25 degrees and 70 degrees were performed by use of the Rolimeter (Aircast, Boca Raton, FL). Standardized radiographs were evaluated. Results: In total, 26 patients in the low tunnel position group and 25 in the high tunnel position group completed the study. At follow-up, we found no significant difference in the laxity at 25 degrees and 70 degrees or scores on the IKDC examination form. We found a significant difference in the scores on the IKDC evaluation form, favoring the low position, with a subjective score of 82.8 versus 70.4. Conclusions: A change in the femoral tunnel placement from 1 o'clock to 2 o'clock did not result in a detectable change in the sagittal laxity at 25 degrees and 70 degrees, on the pivot-shift test, or on the IKDC examination form scores. However, we found a significant difference between the two groups in the scores on the IKDC evaluation form, most evident in the subgroups dealing with the patient's subjective knee stability. We conclude that it is possible to improve the clinical result in 1-bundle ACL reconstruction by lowering the tibial tunnel angle and thereby lowering the femoral tunnel toward the 2-o'clock position. Level Of Evidence: Level I, therapeutic prospective randomized trial. abstract_id: PUBMED:37609439 High anteromedial insertion reduced anteroposterior and rotational knee laxity on mid-term follow-up after anatomic anterior cruciate ligament reconstruction. Background: The position of the femoral insertion has a great influence on the laxity of the knee joint after ACLR, especially for rotational laxity. Purpose: To compare the effects of different femoral tunnel positions on knee stability after arthroscopic anterior cruciate ligament reconstruction (ACLR). Methods: The clinical outcomes of 165 patients after autograft ACLR were analyzed retrospectively. The patients were separated into three groups according to the position of the femoral tunnel, as follows: low center (LC) group, 53 patients; high center (HC) group, 45 patients; and high anteromedial (HAM) group, 67 patients. The side-to-side differences (SSDs) in anteroposterior knee laxity measured using a KT-2000 arthrometer and the pivot shift test (PST) pre- and postoperatively were compared among the three groups and analyzed.
Results: After 5 years postoperatively, the SSD in anteroposterior knee laxity was significantly decreased compared with preoperatively in all three groups; meanwhile, the negative PST rate was significantly increased in the three groups. The postoperative SSD in anteroposterior knee laxity was significantly increased in the HC group compared with the LC and HAM groups (1.5 ± 1.3 vs 1.0 ± 1.1 vs 1.0 ± 1.0, P < 0.05). The negative postoperative PST rate was higher in both the LC and HAM groups than in the HC group (84.9% vs 91.0% vs 71.1%, P < 0.05), and there was no significant difference in the negative PST rate between the LC and HAM groups (84.9% vs 91.0%, P > 0.05). The negative postoperative PST rate was significantly higher in the HAM group than in the LC and HC groups for patients with a high degree of laxity preoperatively (31.3% vs 3.3% vs 14.4%, P > 0.05). Conclusion: Patients in the HAM group showed better control over anteroposterior laxity, rotational laxity, and subjective knee function compared to the other groups postoperatively. Therefore, the HAM point is the closest to the I.D.E.A.L point concept and is recommended as the preferred location for the femoral tunnel in ACLR. abstract_id: PUBMED:7573651 The effect of femoral tunnel position and graft tensioning technique on posterior laxity of the posterior cruciate ligament-reconstructed knee. We report the effects of femoral tunnel position and graft tensioning technique on posterior laxity of the posterior cruciate ligament-reconstructed knee. An isometric femoral tunnel site was located using a specially designed alignment jig. Additional femoral tunnel positions were located 5 mm proximal and distal to the isometric femoral tunnel. With the graft in the proximal femoral tunnel, graft tension decreased as the knee flexed; with the graft in the distal femoral tunnel, graft tension increased as the knee flexed. When the graft was placed in the isometric femoral tunnel, a nearly isometric graft tension was maintained between 0 degrees and 90 degrees of knee flexion. One technique tested was tensioning the graft at 90 degrees of knee flexion while applying an anterior drawer force of 156 N to the tibia. This technique restored statistically normal posterior stability to the posterior cruciate ligament-deficient knee between 0 degrees and 90 degrees for the distal femoral tunnel position, between 0 degrees and 75 degrees for the isometric tunnel position, and between 0 degrees and 45 degrees for the proximal tunnel position. When the graft was tensioned with the knee in full extension and without the application of an anterior drawer force, posterior translation of the reconstructed knee was significantly different from that of the intact knee between 15 degrees and 90 degrees for all femoral tunnel positions. abstract_id: PUBMED:33938969 Laterally shifted tibial tunnel can be the risk of residual knee laxity for double-bundle anterior cruciate ligament reconstruction. Purpose: To elucidate the relationship between graft tunnel position and knee laxity in cases of double-bundle ACL reconstruction. Methods: A total of 132 cases were included. Femoral and tibial tunnels were evaluated by the quadrant method on 3D-CT. As an additional tibial reference, the distances from the medial tibial spine to the tunnel center (DMS) and from Parsons' knob to the tunnel center (DPK) were evaluated; %DMS/ML and %DPK/AP were calculated (ML and AP: mediolateral and anteroposterior width of the tibial plateau).
Preoperative and postoperative (1 year from surgery) stabilities were evaluated by Lachman and pivot-shift procedures. If there was a ≥ 2 mm side-to-side difference, the subject was defined as having anterior knee laxity (AKL); if the pivot-shift phenomenon was observed with IKDC grade ≥ 1, there was rotatory knee laxity (RKL). Multiple logistic regression analysis was conducted with the prevalence of AKL or RKL as the dependent variable and with tunnel positions as the independent variables. Results: Overall, 21 subjects (15.9%) showed AKL, and 15 subjects (11.4%) showed RKL. Those with postoperative laxity showed higher %DMS/ML and higher femoral position than those without laxity. Regarding the posterolateral bundle, the logistic regression model estimated that %DMS/ML was associated with the prevalence of AKL (B = 0.608; p < 0.001) and RKL (B = 0.789; p < 0.001); %high-low femoral tunnel position (B = -0.127; p = 0.023) was associated with that of RKL. Conclusion: There was a risk of residual knee laxity in the ACL-reconstructed knee when the tibial tunnel was shifted more laterally or a higher femoral tunnel was created with regard to the posterolateral bundle. Level Of Evidence: III. abstract_id: PUBMED:31870066 Clinical results of arthroscopic anterior cruciate ligament reconstruction with short femoral tunnel. Objective: To evaluate the clinical outcome of arthroscopic reconstruction of the anterior cruciate ligament (ACL) with a short femoral tunnel. Methods: From May 2013 to June 2017, 128 patients underwent anterior cruciate ligament reconstruction with the transportal technique. Among them, 32 cases with a short femoral tunnel were included, comprising 13 males and 19 females, aged 25.8 (17 to 43) years old, with a mean history of (4.5±1.1) months. The tibial tunnels were drilled in the middle of the footprint of the ACL, and femoral tunnels were drilled by the transportal technique. Grafts were fixed with an Endobutton at the femoral side and with an interference screw at the tibial side. The changes of symptoms and signs were observed and the anterior tibial displacement was measured. The function of the knee joint was evaluated by the Lysholm score and Tegner score. Results: All patients were followed up for over 2 years. At the latest follow-up, 30 patients were negative and 2 patients were positive in the knee shift test; 28 patients were negative in the Lachman sign, 4 patients were positive in degree I; 30 patients were negative in the anterior drawer test, 1 patient was positive in degree I and 1 patient was positive in degree II. The anterior displacement of the tibia increased by (2.6±1.8) mm compared with the healthy side, which was significantly different from that before operation (t=19.77, P<0.05). The Lysholm score of 82.2±6.1 was significantly higher than that before operation (t=17.33, P=0.001). According to the Lysholm score evaluation, 15 cases got an excellent result, 10 were good, 7 were fair, and none were poor, with a significant difference compared with that before operation (z=-7.151, P<0.05). The Tegner motor function score of (7.4±0.6) was significantly different from that before operation (t=9.11, P=0.0005). After the operation, the knee joint movement ability of the patients improved significantly. Twelve patients could participate in antagonistic sports and 15 patients could participate in non-antagonistic sports. Fifteen patients were very satisfied with the curative effect, and 13 patients were satisfied with the curative effect.
Conclusions: The incidence of a short femoral tunnel in anterior cruciate ligament reconstruction with the transportal technique is 25%. At present, the clinical outcome of patients with a short tunnel is acceptable. However, due to the lack of a comparative study, the effect of a short tunnel on the curative effect is still unclear. abstract_id: PUBMED:19374302 Correlation of graft position, knee laxity and clinical outcome: comparison with native anterior cruciate ligament using magnetic resonance imaging study. Objective: To compare the sagittal obliquity of the anterior cruciate ligament graft with the normal native anterior cruciate ligament in the contralateral knee, to determine the effect of sagittal obliquity and axial femoral tunnel graft placement on stability and functional knee score (Lysholm), and to measure the size of the graft after complete ligamentization. Material And Method: Seventy single-tunnel quadruple-hamstring anterior cruciate ligament-reconstructed knees in unilateral ACL injury patients were evaluated at 18 months after surgery. At follow-up, patients were evaluated including measurement of knee laxity by using the side-to-side difference on KT-1000 arthrometers and clinical outcome by completed Lysholm functional knee questionnaires. Sagittal T1-weighted magnetic resonance images with the complete dimension of the graft from origin to insertion on each side of the knee were depicted to compare the obliquity by measuring the intersection angle of the graft line with the tibial plateau plane. The axial femoral tunnel was determined by the angle between the anteroposterior axis of the distal femur and the long axis of the femoral tunnel. The diameter of the graft was also measured. Results: Graft obliquity was on average 58 degrees with a range between 41 degrees and 69 degrees. In the contralateral native ACL, obliquity was on average 50 degrees with a range between 33 degrees and 63 degrees. The difference between the two groups was statistically significant (p < 0.0001). The average axial femoral tunnel was 36 degrees with a range between 10 degrees and 56 degrees. Knee laxity (KT-1000 arthrometer; average pre-op = 6, post-op = 3) and Lysholm knee scores (average score; pre-op 55, post-op score 89) were significantly improved after surgery (p < 0.01). There was no correlation between the degree of sagittal obliquity and axial femoral tunnel with knee laxity and functional score in this series. Graft size was increased by an average of 8% after 18 months post-operatively. Conclusion: ACL grafts in patients with appropriate tibial tunnel placement were more vertical than the native ACL. There was no significant effect of the degree of sagittal obliquity and axial femoral tunnel on anteroposterior stability and knee score. ACL graft size was increased in diameter during the post-operative period. Graft-notch distance should be considered during the operation. abstract_id: PUBMED:36902753 Femoral Tunnel Position Affects Postoperative Femoral Tunnel Widening after Anterior Cruciate Ligament Reconstruction with Tibialis Anterior Allograft. This study aims to identify potential factors for both femoral and tibial tunnel widening (TW) and to investigate the effect of TW on postoperative outcomes after anterior cruciate ligament (ACL) reconstruction with a tibialis anterior allograft. A total of 75 patients (75 knees) who underwent ACL reconstruction with tibialis anterior allografts were investigated between February 2015 and October 2017. TW was calculated as the difference in tunnel widths between the immediate and 2-year postoperative measurements.
The risk factors for TW, including demographic data, concomitant meniscal injury, hip-knee-ankle angle, tibial slope, femoral and tibial tunnel position (quadrant method), and length of both tunnels, were investigated. The patients were divided twice into two groups depending on whether the femoral or tibial TW was over or less than 3 mm. Pre- and 2-year follow-up outcomes, including the Lysholm score, International Knee Documentation Committee (IKDC) subjective score, and side-to-side difference (STSD) of anterior translation on stress radiographs, were compared between TW ≥ 3 mm and TW < 3 mm. The femoral tunnel position depth (shallow femoral tunnel position) was significantly correlated with femoral TW (adjusted R2 = 0.134). The femoral TW ≥ 3 mm group showed greater STSD of anterior translation than the femoral TW < 3 mm group. The shallow position of the femoral tunnel was correlated with femoral TW after ACL reconstruction using a tibialis anterior allograft. A femoral TW ≥ 3 mm showed inferior postoperative knee anterior stability. abstract_id: PUBMED:26015638 Trans-tibial guide wire placement for femoral tunnel in single bundle anterior cruciate ligament reconstruction. Background: Femoral tunnel location is of critical importance for a successful outcome of ACL reconstruction. The aim was to study the femoral tunnel created by placing a free-hand guide wire through the tibial tunnel, using the toggle of the guide wire in the tibial tunnel to improve femoral tunnel location. Materials And Methods: 30 cases of single-bundle quadrupled hamstring graft anterior cruciate ligament reconstruction by trans-tibial free-hand femoral tunnel creation were studied in this prospective study. The side-to-side play of the guide wire in the tibial tunnel was used to improve the tunnel location on the femoral wall. The coronal angle of the femoral tunnel was measured on the anteroposterior radiograph. The femoral tunnel location on the lateral radiograph of the knee was recorded according to the Amis method. Lysholm scoring was done preoperatively and at each follow-up. Assessment of laxity was done by Rolimeter (Aircast™) and the pivot shift test. Results: The mean coronal angle of the femoral tunnel on the postoperative radiograph was 47°. On the lateral radiograph, the femoral tunnel was found to be >60% posterior on the Blumensaat line in 67% of cases (n = 20) and in 33% of cases (n = 10) it was anterior. The mean Lysholm score improved from 74.6 preoperatively to 93.17 postoperatively with no objective evidence of laxity. Conclusion: The free-hand trans-tibial creation of the femoral tunnel leads to satisfactory coronal obliquity, but it is difficult to recreate an anatomic femoral tunnel by this method as the tunnel is consistently anterior in the sagittal plane. abstract_id: PUBMED:12382968 Effects of femoral tunnel placement on knee laxity and forces in an anterior cruciate ligament graft. The purpose of this study was to measure the effects of variation in placement of the femoral tunnel upon knee laxity, graft pretension required to restore normal anterior-posterior (AP) laxity and graft forces following anterior cruciate ligament (ACL) reconstruction. Two variants in tunnel position were studied: (1) AP position along the medial border of the lateral femoral condyle (at a standard 11 o'clock notch orientation) and (2) orientation along the arc of the femoral notch (o'clock position) at a fixed distance of 6-7 mm anterior to the posterior wall.
AP laxity and forces in the native ACL were measured in fresh-frozen cadaveric knee specimens during passive knee flexion-extension under the following modes of tibial loading: no external tibial force, anterior tibial force, varus-valgus moment, and internal-external tibial torque. One group (15 specimens) was used to determine effects of AP tunnel placement, while a second group (14 specimens) was used to study variations in o'clock position of the femoral tunnel within the femoral notch. A bone-patellar tendon-bone graft was placed into a femoral tunnel centered at a point 6-7 mm anterior to the posterior wall at the 11 o'clock position in the femoral notch. A graft pretension was determined such that AP laxity of the knee at 30 deg of flexion was restored to within 1 mm of normal; this was termed the laxity match pretension. All tests were repeated with a graft in the standard 11 o'clock tunnel, and then with a graft in tunnels placed at other selected positions. Varying placement of the femoral tunnel 1 h clockwise or counterclockwise from the 11 o'clock position did not significantly affect any biomechanical parameter measured in this study, nor did placing the graft 2.5 mm posteriorly within the standard 11 o'clock femoral tunnel. Placing the graft in a tunnel 5.0 mm anterior to the standard 11 o'clock tunnel increased the mean laxity match pretension by 16.8 N (62%) and produced a knee which was on average 1.7 mm more lax than normal at 10 deg of flexion and 4.2 mm less lax at 90 deg. During passive knee flexion-extension testing, mean graft forces with the 5.0 mm anterior tunnel were significantly higher than corresponding means with the standard 11 o'clock tunnel between 40 and 90 deg of flexion for all modes of constant tibial loading. These results indicate that AP positioning of the femoral tunnel at the 11 o'clock position is more critical than o'clock positioning in terms of restoring normal levels of graft force and knee laxity profiles at the time of ACL reconstruction. abstract_id: PUBMED:35219217 Failure rate analysis and clinical outcomes of two different femoral tunnel positions using anteromedial portal technique in anterior cruciate ligament reconstruction. Aim: To analyze two different femoral tunnel positions and to evaluate their correlation with clinical and functional outcomes and the surgical revision rate in patients who underwent primary arthroscopic anterior cruciate ligament (ACL) reconstruction with the anteromedial (AM) portal technique. Methods: From January 2015 to October 2018, we recruited 244 patients who underwent primary single-bundle ACL reconstruction, using a four-strand semitendinosus graft and the AM portal technique for femoral tunnel placement. Patients were divided into two groups based on the different femoral tunnel positions: 117 patients of group A had an ACL footprint center femoral tunnel position, compared with 127 patients of group B, with femoral tunnel placement close to the AM bundle footprint. Preoperatively and at last follow-up, all patients were assessed subjectively by Lysholm, Tegner, and International Knee Documentation Committee (IKDC) scores, while Lachman, pivot-shift, and KT-1000 tests were performed to evaluate knee joint stability. Results: Group B patients showed significantly better results in Lysholm, objective, and subjective IKDC scores compared with patients of group A (P < 0.001). A significantly higher surgical failure rate was found in group A than in group B (10.26% vs. 2.3%; P < 0.001).
A higher anterior knee laxity was recorded in patients of group A than in patients of group B (1.9 ± 1.1 vs. 1.3 ± 1 mm; P < 0.001); a reduction in mean anterior tibial translation from preoperative to final follow-up was found in group B compared with group A (3.5 ± 1.2 vs. 2.7 ± 1.1 mm; P < 0.001). No significant differences in the Tegner scale were found between the two groups. Conclusion: ACL reconstruction performed using the AM portal technique showed better and more satisfactory clinical and functional outcomes associated with a lower failure rate when the femoral tunnel had been placed more eccentrically in the footprint, in the AM bundle center position. Answer: Yes, the position of the femoral tunnel does affect the laxity and clinical outcome of the anterior cruciate ligament (ACL)-reconstructed knee. Studies have shown that different femoral tunnel positions can influence knee stability postoperatively. For instance, a study found that a high anteromedial (HAM) femoral tunnel position resulted in better control over anteroposterior laxity, rotational laxity, and subjective knee function compared to other groups, suggesting that the HAM point is the closest to the I.D.E.A.L point concept and is recommended as the preferred location for the femoral tunnel in ACL reconstruction (PUBMED:37609439). Another study indicated that a change in the femoral tunnel placement from 1 o'clock to 2 o'clock did not result in a detectable change in sagittal laxity, but there was a significant difference in the scores on the IKDC evaluation form, with the lower position showing better subjective knee stability (PUBMED:18063177). Furthermore, the effects of femoral tunnel position and graft tensioning technique on posterior laxity of the posterior cruciate ligament-reconstructed knee were reported, showing that graft tension and knee stability can be affected by the femoral tunnel position (PUBMED:7573651). Additionally, a study found that a laterally shifted tibial tunnel or a higher femoral tunnel position could be a risk for residual knee laxity in double-bundle ACL reconstruction (PUBMED:33938969). Moreover, femoral tunnel position was found to affect postoperative femoral tunnel widening, which in turn could influence postoperative knee anterior stability (PUBMED:36902753). Lastly, a study comparing two different femoral tunnel positions using the anteromedial portal technique in ACL reconstruction found that a femoral tunnel placed more eccentrically in the footprint, in the AM bundle center position, showed better clinical and functional outcomes and a lower failure rate (PUBMED:35219217). In summary, the position of the femoral tunnel is a critical factor in determining the postoperative stability and clinical outcomes of ACL-reconstructed knees.
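Several quantities in these abstracts are simple normalized or difference measures: %DMS/ML and %DPK/AP express the tibial tunnel location as a percentage of the plateau's mediolateral and anteroposterior widths (PUBMED:33938969), and tunnel widening is the 2-year tunnel width minus the immediate postoperative width, with 3 mm used as the grouping cut-off (PUBMED:36902753). A hedged sketch follows; the definitions mirror the abstracts, while the Python function names and example values are invented for illustration.

def pct_dms_ml(dms_mm: float, ml_width_mm: float) -> float:
    # Distance from the medial tibial spine to the tunnel center,
    # as a percentage of the mediolateral plateau width (PUBMED:33938969).
    return 100.0 * dms_mm / ml_width_mm

def pct_dpk_ap(dpk_mm: float, ap_width_mm: float) -> float:
    # Distance from Parsons' knob to the tunnel center,
    # as a percentage of the anteroposterior plateau width.
    return 100.0 * dpk_mm / ap_width_mm

def tunnel_widening(width_2yr_mm: float, width_immediate_mm: float) -> float:
    # Tunnel widening per PUBMED:36902753: 2-year width minus
    # immediate postoperative width.
    return width_2yr_mm - width_immediate_mm

# Invented example: a widening of 3.2 mm falls in the TW >= 3 mm group,
# the group with inferior anterior stability in PUBMED:36902753.
print(tunnel_widening(11.2, 8.0) >= 3.0)  # True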
Instruction: Is sympathectomy at T4 level better than at T3 level for treating palmar hyperhidrosis? Abstracts: abstract_id: PUBMED:18266585 Is sympathectomy at T4 level better than at T3 level for treating palmar hyperhidrosis? Purpose: We compared the results from a video-assisted thoracoscopic sympathectomy (VTS) at the T4 denervation level with those from a VTS at the T3 level for the treatment of palmar hyperhidrosis (PH). Methods: Seventy patients with PH undergoing VTS at the T3 or T4 denervation levels were prospectively followed for 6 months. The end points of this study were: absence of PH, compensatory hyperhidrosis (CH), and quality-of-life assessment. Results: Sixty-seven patients reported a complete resolution of PH after surgery. One failure occurred in the T3 group and 2 in the T4 group. When anhidrosis was obtained, we noticed totally dry hands in 26 patients in the T3 group and 6 patients in the T4 group. The other 27 patients in the T4 group and 8 in the T3 group maintained a small level of sweating and were also considered to be therapeutic successes. At 6 months, 25 patients in the T4 group had some degree of CH (71.42%) and all patients in the T3 group (100%), though the T4 group had a lower degree of severity of CH at the 6-month follow-up (P < 0.05). After the operation, quality of life was improved similarly in both groups. Conclusions: VTS at either the T3 or T4 level provides an effective treatment for PH. VTS at the T4 level is associated with a less severe form of CH. Despite the occurrence of CH, patients' quality of life is significantly improved following VTS at the T3 or T4 levels. For this reason, the T4 resection can now be used as a treatment for PH. abstract_id: PUBMED:19690657 Twenty months of evolution following sympathectomy on patients with palmar hyperhidrosis: sympathectomy at the T3 level is better than at the T2 level. Objective: To compare two surgical techniques (denervation levels) for sympathectomy using video-assisted thoracoscopy to treat palmar hyperhidrosis in the long term. Methods: From May 2003 to June 2006, 60 patients with palmar hyperhidrosis were prospectively randomized for video-assisted thoracoscopic sympathectomy at the T2 or T3 ganglion level. They were followed for a mean of 20 months and were evaluated regarding their degree of improvement of palmar hyperhidrosis, incidence and severity of compensatory hyperhidrosis and its evolution over time, and quality of life. Results: Fifty-nine cases presented resolution of the palmar hyperhidrosis. One case of therapeutic failure occurred in the T3 group. Most of the patients presented an improvement in palmar hyperhidrosis, without any difference between the groups. Twenty months later, all patients in both groups presented some degree of compensatory hyperhidrosis, but with less severity in the T3 group (p = 0.007). Compensatory hyperhidrosis developed in most patients during the first month after the operation, with incidence and severity that remained stable over time. An improvement in quality of life was seen starting from the first postoperative evaluation, but without any difference between the groups. This improvement was maintained until the end of the follow-up. Conclusion: Both techniques were effective for treating palmar hyperhidrosis. The most frequent complication was compensatory hyperhidrosis, which presented stable incidence and severity over the study period. Sympathectomy at the T3 level presented compensatory hyperhidrosis with less severity.
Nevertheless, the improvement in quality of life was similar between the groups. abstract_id: PUBMED:28985838 T3 versus T4 thoracoscopic sympathectomy for palmar hyperhidrosis: a meta-analysis and systematic review. Background: Palmar hyperhidrosis (PH) is a benign sympathetic disorder that can adversely affect the quality of life of patients. Thoracic sympathectomy (TS) at the T3 or T4 level has been used to treat PH. We aimed to determine the optimal denervation level for TS by comparing the efficacy of T3 TS versus T4 TS in PH patients. Methods: We searched the PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus, and Google Scholar databases for studies comparing T3 versus T4 TS for PH. Clinical end points included symptom resolution, patient satisfaction, and complications. Results: Of 2201 articles reviewed, 10 (T3 group, 566 patients; T4 group, 629 patients) were selected. T4 TS was associated with a lower incidence of postoperative compensatory sweating, dry hands, and gustatory sweating than T3 TS. No significant difference in symptom resolution or patient satisfaction was found between the T3 and T4 groups. Conclusions: T4 TS may be superior to T3 TS in patients with PH. However, this finding should be validated in high-quality, large-scale randomized controlled trials. abstract_id: PUBMED:24874792 Optimal level of sympathectomy for primary palmar hyperhidrosis: T3 versus T4 in a retrospective cohort study. Background: To compare the results obtained with T3 versus T4 sympathectomy in the treatment of primary palmar hyperhidrosis (PH). Methods: Medical records of patients with PH who underwent thoracoscopic sympathectomy from February 2009 to September 2012 were retrospectively reviewed. The patients were categorized into two groups: the T3 group underwent T3 sympathectomy and the T4 group underwent T4 sympathectomy. Patients were evaluated according to the results of sweating, compensatory hyperhidrosis (CH), degree of satisfaction, complications and recurrence. Mean follow-up was 19 ± 7 months. Results: A total of 274 consecutive patients with PH were included in this study. There were 169 females and 105 males, with a mean age of 29 ± 11 years. Of these, 129 patients underwent T3 sympathectomy (T3 group). The T4 group included 145 patients who underwent T4 sympathectomy. In the T4 group, 71.7% of patients did not complain of CH, compared with 25.6% of patients in the T3 group (P = 0.001), and the incidence of mild to moderate CH was higher in the T3 group than in the T4 group (64.4% vs. 26.9%; p = 0.001). The T4 group had significantly more patients in the 'very satisfied' category than the T3 group (P = 0.001). The incidence of over-dry hands was significantly lower in group T4 (0.7%, 1 out of 145) than in group T3 (8.5%, 11 out of 129). The recurrence rate was similar in the two groups (0.8% vs. 1.4%; P = 0.19). Conclusion: Video-assisted T3 or T4 sympathectomy is a safe and effective procedure for treatment of palmar hyperhidrosis. T4 sympathectomy appears to be associated with less severe dryness and CH than T3 sympathectomy at long-term follow-up. abstract_id: PUBMED:33691245 Long-Term Efficacy of T3 Versus T3+T4 Thoracoscopic Sympathectomy for Concurrent Palmar and Plantar Hyperhidrosis. Background: More than 50% of patients with palmar hyperhidrosis (PAH) also have plantar hyperhidrosis (PLH). We compared the long-term results of T3 sympathectomy with those of combined T3+T4 sympathectomy among patients with concurrent PAH and PLH.
Materials And Methods: We retrospectively analyzed the records of patients with concurrent PAH and PLH who underwent T3 alone or T3+T4 sympathectomy from January 1, 2012, to December 31, 2017. Preoperative and postoperative sweating (hyperhidrosis index) was evaluated through questionnaires, physical examination, and outpatient follow-up. The relief rates and hyperhidrosis index were used as outcome measures to compare the efficacy of the two approaches. Patients' satisfaction and side effects were also evaluated. Results: Of the 220 eligible patients, 60 underwent T3 sympathectomy (T3 group), and 160 underwent T3+T4 sympathectomy (T3+T4 group). Compared with the T3 group, the T3+T4 group showed higher symptom relief rates both for PAH (98.75% versus 93.33%, P = 0.048) and PLH (65.63% versus 46.67%, P = 0.01), and a greater postoperative decrease in both hyperhidrosis indices. The rate of severe compensatory hyperhidrosis also increased (10% versus 5%, P = 0.197), although the rates of overall satisfaction were comparable between the groups. The incidence of postoperative pneumothorax requiring chest tube placement and postoperative neuralgia was also similar. There were no cases of perioperative death, secondary operation, wound infection, or Horner syndrome in either group. Conclusions: Compared with T3 alone, T3+T4 sympathectomy achieved a higher symptom relief rate and a lower hyperhidrosis index. T3+T4 sympathectomy may be a choice for the treatment of concurrent PAH and PLH; however, patients need to be informed that this kind of surgery may increase the risk of compensatory sweating. abstract_id: PUBMED:32904609 A retrospective cohort study of T3 versus T4 thoracoscopic sympathectomy for primary palmar hyperhidrosis and primary palmar hyperhidrosis with axillary and plantar sweating. Introduction: Thoracoscopic sympathectomy (TS) has been proven to be a safe and effective treatment for primary palmar hyperhidrosis (PH). However, the complications include compensatory hyperhidrosis (CH), and over-dry hands may occur in some patients after TS. Aim: To compare the therapeutic effect of T3 and T4 TS on primary PH and primary PH with axillary and plantar sweating. Material And Methods: We retrospectively analyzed 100 patients with PH who had undergone T3 (group A, n = 49) or T4 (group B, n = 51) TS in our department, with at least 1 year of postoperative follow-up. Results: At discharge, no major complications or deaths occurred in either group. The condition of sweaty hands was fully improved in 44 of 49 patients in group A and all patients in group B, with a significant difference (p = 0.031). After 12 months of follow-up, 18 (36.7%) patients in group A and 4 (7.8%) patients in group B developed CH; 16 (48.5%) patients in group A and 24 (77.4%) patients in group B had improved axillary sweating, with a significant difference (p < 0.05). The satisfaction rate of group B was significantly higher than that of group A (p < 0.01). Conclusions: Both T3 and T4 TS were safe and effective treatments for PH patients, but the incidence of CH in T4 TS was lower than that in T3 TS. T3 TS may be more suitable for patients with severe PH, while T4 TS had a better therapeutic effect on PH patients with axillary sweating. abstract_id: PUBMED:19698838 Effect of sympathectomy level on the incidence of compensatory hyperhidrosis after sympathectomy for palmar hyperhidrosis. Objective: Palmar hyperhidrosis can be psychosocially devastating. Sympathectomy provides effective treatment.
The most common side effect after sympathectomy is compensatory hyperhidrosis, which can be debilitating. Controversy exists as to which and how many levels treated carry the lowest incidence of compensatory hyperhidrosis after sympathectomy for palmar hyperhidrosis. Methods: Retrospective review was conducted on a video-assisted thoracoscopic surgical database including all patients who underwent video-assisted thoracoscopic surgical sympathectomy for palmar hyperhidrosis. Results: Video-assisted sympathectomy was performed in 282 patients for palmar hyperhidrosis from May 2002 through July 2005; in all, 179 patients (64%) underwent division at the T2 level only and 103 at levels T2, T3, and T4. The groups were similar in age and sex distribution. The rate of compensatory hyperhidrosis was significantly less in the T2 group (23 patients, 13%) than in the T2 through T4 group (35 patients, 34%) (P = .011). The most common site of compensatory hyperhidrosis in both groups was the lower back. Patients with compensatory hyperhidrosis were older (median 31 years vs 23 years, P = .037), had a body mass index greater than 28 (P = .048), and underwent multiple-level sympathectomy (P = .004). Conclusion: Compensatory hyperhidrosis continues to occur after sympathectomy for palmar hyperhidrosis; however, a significant reduction in incidence can be achieved by dividing the sympathetic chain at a single level (T2). Patients who are older and/or have increased body mass index should be warned of their increased risk of compensatory hyperhidrosis after sympathectomy. abstract_id: PUBMED:36344285 Long-term outcomes of bilateral thoracoscopic T3 sympathectomy for primary focal hyperhidrosis in children. Background: Thoracoscopic bilateral T3 sympathectomy for primary focal palmar hyperhidrosis in children has excellent short-term outcomes. However, data in the literature on the long-term outcomes of the operation are scarce. Methods: We conducted a retrospective institutional review of all children and adolescents undergoing T3 bilateral thoracoscopic sympathectomy for primary focal palmar hyperhidrosis between June 2013 and October 2020. We composed a quality of life (QoL) questionnaire evaluating the patient's perception of how much the hyperhidrosis affected their daily life in multiple domains. The questionnaire was completed before the operation and at every postoperative follow-up visit. Results: We operated on 58 patients with a median age of 15 (6-25) years. There were no intraoperative or postoperative complications, and all patients had immediate complete postoperative resolution of their palmar hyperhidrosis. Fifty-three patients (91.4%) had long-term follow-up data available with a median of 2.5 (range 0.1-7.5) years. Two patients (3.4%) experienced recurrence of their palmar hyperhidrosis. Nine patients (15.5%) experienced compensatory hyperhidrosis and required occasional medical management with oral anticholinergics. Two patients reported regretting having undergone the operation. Overall, the mean QoL score improved remarkably, from 42/100 before the operation to 92/100 at 1 month, 89/100 at six months to a year, 97/100 between two and four years, and 80/100 ≥ 5 years after the operation. Conclusion: Thoracoscopic bilateral T3 sympathectomy has a high success rate for primary palmar hyperhidrosis in children in the mid- to long-term. Compensatory sweating and recurrence can occur years after the operation, so long-term follow-up is mandatory. Level Of Evidence: IV.
abstract_id: PUBMED:29885980 Impact of Thoracoscopic T2 Sympathectomy on Patients With Primary Palmar and Axillary Hyperhidrosis. Background: There are different techniques for thoracoscopic sympathectomy for management of primary hyperhidrosis. Each technique has the advantage of giving good surgical results, but may cause complications, which depend on the level of sympathectomy used in each technique. Methods: From October 2012 to August 2016, a total of 120 patients (with primary palmar and axillary hyperhidrosis) were enrolled in this prospective randomized study. Patients were randomized preoperatively and divided into 2 groups: group A (underwent T2, T3, and T4 sympathectomy) and group B (underwent T3 and T4 sympathectomy) with 1 year of follow-up. Improvement of palm sweating, development of complications, patient satisfaction, and quality of life were evaluated. Results: Family history was positive in 33 patients (55%) in group A and in 31 patients (52%) in group B. All patients had improvement of hand sweating. Overdry hands were significant in group A (p = 0.032). Compensatory sweating was significant in group A after the first month (p = 0.016), after 6 months (p = 0.022), and after 12 months (p = 0.025). In group B, very satisfied patients were significant after 6 and 12 months (p = 0.002 and 0.000, respectively). Conclusions: Sympathectomy limited to the T3 and T4 levels has the same therapeutic effect as T2, T3, and T4 sympathectomy, with fewer complications, greater patient satisfaction, and better quality of life. It should be the technique of choice for treatment of primary palmar and axillary hyperhidrosis. abstract_id: PUBMED:19362283 Influence of T3 or T4 sympathicotomy for palmar hyperhidrosis. Background: This retrospective study aimed to compare the efficacy of video-assisted thoracoscopic sympathicotomy at the T3 or T4 level in the treatment of palmar hyperhidrosis. Methods: Patients were operated on for palmar hyperhidrosis from April 2004 to December 2007, and classified as the T3 (n = 56) or T4 (n = 63) sympathicotomy group. Results: The rate of dryness and compensatory hyperhidrosis (CH) was significantly lower in the T4 sympathicotomy group than the T3 group (P < .01). Satisfaction rate, recurrence, and improvement of plantar sweating were of no statistical significance in either group. Conclusions: Although both sympathicotomies were effective, safe, and minimally invasive methods for the treatment of palmar hyperhidrosis, T4 appeared to be a more optimal technique with less CH. Answer: The comparison between sympathectomy at the T4 level versus the T3 level for treating palmar hyperhidrosis (PH) has been the subject of several studies, and the results suggest that there are differences in outcomes between the two levels. Several studies have reported that sympathectomy at the T4 level is associated with a lower incidence of compensatory hyperhidrosis (CH) compared to the T3 level. For instance, one study found that 71.7% of patients in the T4 group did not complain of CH compared to only 25.6% in the T3 group, and the incidence of mild to moderate CH was higher in the T3 group than in the T4 group (PUBMED:24874792). Another study reported that T4 sympathectomy was associated with a lower incidence of postoperative compensatory sweating, dry hands, and gustatory sweating than T3 sympathectomy (PUBMED:28985838).
Similarly, a study found that 25 patients in the T4 group had some degree of CH (71.42%) compared to all patients in the T3 group (100%), though the T4 group had a lower degree of severity of CH at the 6-month follow-up (PUBMED:18266585). In terms of patient satisfaction, one study showed that the T4 group had significantly more patients in the "very satisfied" category compared to the T3 group (PUBMED:24874792). Another study found that the satisfaction rate of the T4 group was significantly higher than that of the T3 group (PUBMED:32904609). However, it is important to note that the efficacy of symptom resolution and patient satisfaction between the T3 and T4 levels was not significantly different in some studies (PUBMED:28985838). Additionally, one study suggested that T3 sympathectomy may be more suitable for patients with severe PH, while T4 sympathectomy had a better therapeutic effect on PH patients with axillary sweating (PUBMED:32904609). In conclusion, while both T3 and T4 sympathectomy are effective for treating palmar hyperhidrosis, the T4 level may be associated with less severe compensatory hyperhidrosis and higher patient satisfaction, making it a potentially better option for some patients. However, the choice of level may also depend on the severity of PH and the presence of axillary sweating (PUBMED:32904609). The choice between T3 and T4 sympathectomy should be individualized based on the patient's specific condition and the potential trade-offs between the efficacy of symptom resolution and the risk of compensatory hyperhidrosis. Furthermore, a study comparing T3 sympathectomy with combined T3+T4 sympathectomy for concurrent palmar and plantar hyperhidrosis found that the combined approach achieved a higher symptom relief rate and a lower hyperhidrosis index, although it may increase the risk of compensatory sweating (PUBMED:33691245). It is also worth noting that a study focusing on children with primary focal palmar hyperhidrosis reported that bilateral T3 sympathectomy had a high success rate and significantly improved quality of life, with a low incidence of compensatory sweating and recurrence (PUBMED:36344285). In contrast, a study that compared T2 sympathectomy with T3 and T4 sympathectomy found that limiting the sympathectomy to T3 and T4 had the same therapeutic effect as including T2, but with fewer complications and greater patient satisfaction (PUBMED:29885980). Lastly, a study that compared T3 with T4 sympathicotomy found that T4 sympathicotomy had a significantly lower rate of dryness and compensatory hyperhidrosis than T3 sympathicotomy, suggesting that T4 might be a more optimal technique with less CH (PUBMED:19362283). In summary, the evidence suggests that T4 sympathectomy may offer some advantages over T3 sympathectomy, particularly in terms of a lower incidence of compensatory hyperhidrosis and possibly higher patient satisfaction. However, the decision on the level of sympathectomy should be tailored to the individual patient's needs and the surgeon's experience.
Instruction: Mini-lap cholecystectomy: a viable alternative to laparoscopic cholecystectomy for the Third World? Abstracts: abstract_id: PUBMED:9814739 Mini-lap cholecystectomy: a viable alternative to laparoscopic cholecystectomy for the Third World? Background: Laparoscopic cholecystectomy (LC) requires expensive equipment and special training. Mini-lap cholecystectomy (MLC) has no start-up costs, but no large series from a single centre has been reported, as the procedure is considered hazardous because of inadequate exposure of the surgical field. Methods: We retrospectively reviewed the outcome of 737 cholecystectomies performed through a 3-5-cm transverse subcostal incision and compared the results to published series of laparoscopic cholecystectomy. Results: The operating time (61.6 min; range 35-130), conversion rate (4%), rate of postoperative complications (3.6%), bile duct injuries (0.3%), number of analgesic doses required (3.4; range 3-8), duration of postoperative hospital stay (1.4; range 1-15 days), and the time off work (13.3 days; range 8-61) compare well with the reported results of laparoscopic cholecystectomy and MLC. Ninety-three per cent of the patients were followed up for a median period of 28.4 months and none developed biliary stricture. Conclusions: Mini-lap cholecystectomy is considered a safe, viable alternative to LC in the Third World. abstract_id: PUBMED:8480878 Mini-lap cholecystectomy. With the introduction of laparoscopic cholecystectomy, the surgical community witnessed a revolution in basic ideology. Gone are the days when surgeons spoke of wounds healing "from side to side, and not top to bottom." The surgical community has become aware of the concept of minimally invasive surgery. Despite this recent advance, surgeons have long realized that the surgical wound does contribute to morbidity and mortality. The mini-lap cholecystectomy, described as early as 1982, is an indication that surgeons are making an attempt to reduce this morbidity. Mini-lap cholecystectomy is now being suggested as an alternative to laparoscopic cholecystectomy, offering many of the same benefits without the problems inherent in laparoscopic surgery. The author has reviewed the literature for mini-lap cholecystectomy, laparoscopic cholecystectomy, and conventional cholecystectomy. Using this "historical" database, he then compares the three techniques to determine whether mini-lap cholecystectomy is a viable alternative. abstract_id: PUBMED:19944194 Mini-lap cholecystectomy: modifications and innovations in technique. Background: As with any technology-driven field, laparoscopic surgery has made tremendous progress in recent years. Since the first laparoscopic cholecystectomy was performed by Prof Dr Med Erich Mühe of Böblingen, Germany, in 1985, this procedure has overtaken open cholecystectomy as the treatment of choice in cholelithiasis. However, due to the cost involved and the surgical training needed, open cholecystectomy is still performed on a very large scale in most parts of the Third World. We tried to modify the conventional cholecystectomy to a minimal access approach (with minimal required infrastructure) to suit the majority of patients with cholelithiasis, in view of cost and morbidity. Objective: To assess the outcome of modified mini-lap cholecystectomy and report our experience with our innovations and modifications of the technique.
Patients And Methods: Between May 2006 and May 2008, two hundred patients with cholelithiasis aged between 15 and 56 years underwent mini-lap cholecystectomy in a prospective study at Government Medical College, Srinagar. Our surgical approach used a 3-5 cm oblique incision located two finger breadths below the costal margin, fashioned more laterally with a muscle cutting or splitting technique. The outcome was assessed in terms of intraoperative and postoperative parameters. The median (range) age was 38 (15-56) years and there were 143 females and 57 males in the study. All the procedures were completed successfully without any complications, though one patient needed extension of the incision as in conventional cholecystectomy. Results: All the procedures were completed successfully. The mean (range) operative time was 35 (20-110) min and the average blood loss was 30 ml. The mean (range) hospital stay was 2 (1-5) days. All patients returned to routine work within 9 days of surgery. The mean follow-up was 12 (7-14) months. Conclusions: These results confirm that mini-lap cholecystectomy by our modified approach is safe, feasible and has less morbidity and postoperative pain compared with conventional open cholecystectomy. The technique is cost-effective, easy to practice and can benefit the majority of patients who otherwise cannot afford laparoscopic surgery. Hence it can serve as an alternative to the gold standard laparoscopic cholecystectomy with almost comparable results. abstract_id: PUBMED:1534528 Laparoscopic cholecystectomy versus mini-lap cholecystectomy. Results of a prospective, randomized study. Laparoscopic cholecystectomy (LCCE) gained acceptance rapidly once several institutions demonstrated the safety of this minimally invasive treatment modality. Nevertheless, prospective randomized studies proving the advantages of this new treatment modality over open cholecystectomy are still lacking. Seventy-seven patients with symptomatic cholelithiasis were treated by LCCE (n = 40) or mini-lap CCE (n = 37) in a prospective, randomized study. As preliminary results, there were no differences in duration of anesthesia and operation time, perioperative complications or postoperative need for analgesics. Patients with LCCE had significantly less postoperative pain, less restriction of total vital capacity and a shorter postoperative hospital stay as parameters of diminished operative trauma. abstract_id: PUBMED:7485884 Post cholecystectomy admission to the intensive care unit. Comparison between open, mini-lap and laparoscopic techniques. Admissions to the intensive care unit following the three techniques for cholecystectomy, the open, mini-lap and laparoscopic methods, were studied over an 18-month period. Of 1078 cases of operative cholecystectomy performed during that period, 28 cases (2.6%) from all groups were admitted to the intensive care unit. Cases in the open and mini-lap cholecystectomy groups were admitted for postoperative monitoring because of pre-existing medical diseases and were discharged after an average period of stay of 2.4 and 4.7 days, respectively. In contrast, the average stay of patients in the laparoscopic group was 14.8 days and the majority were admitted as a result of complications related to the procedure. In this group, one patient died on the first day of admission to the unit (mortality of 8.3%). This study has shown that laparoscopic surgery, from the intensive care point of view, is a very costly surgical procedure.
abstract_id: PUBMED:8840634 Laparoscopic versus mini-lap cholecystectomy for gallstone disease. Aim: To compare the outcome of laparoscopic cholecystectomy (LC) and mini-lap cholecystectomy (MC) in patients with symptomatic gallstone disease. Method: One hundred patients undergoing LC (50) or MC (50) were evaluated for duration of surgery, morbidity, need for analgesia, duration of hospitalization and interval to return to normal work. Results: The mean operative time was significantly longer in the LC group (94 ± 17 vs 129 ± 33 min; p < 0.05). The rates of morbidity and conversion to open surgery were similar in the two groups. Patients in the MC group required more oral analgesia (p = ns). The mean post-operative hospital stay in this group was 3.3 ± 1.5 days as compared to 3.3 ± 2.7 days in the LC group (p = ns). Patients in the LC group took the same time to return to normal work (19.1 ± 3.2 days) as those in the MC group (19.5 ± 5.4 days) (p = ns). Conclusion: MC and LC are comparable procedures for treatment of gallstone disease in our country. abstract_id: PUBMED:17057572 Mini-laparotomy cholecystectomy versus laparoscopic cholecystectomy: which way to go? Purpose: The aim of this paper is to report the results of a prospective clinical trial investigating traditional laparoscopic cholecystectomy versus "mini-lap" cholecystectomy in a tertiary care University Hospital. Materials And Methods: This is a prospective, randomized, single-center observational study. Forty-four patients were allocated to each group; patients in group L underwent laparoscopic cholecystectomy, whereas patients in group M had open "mini-laparotomy" cholecystectomy with a small incision through the rectus abdominis muscle. Results: The operation lasted significantly longer in group L compared with group M, whereas patients of group L had a shorter hospital stay. There was no difference between groups regarding the postoperative day on which patients commenced eating. There was no significant difference between groups regarding doses of analgesics used during surgery or in the recovery room. However, patients in group M used significantly more opioids in the postoperative period. Time to resume normal activity was significantly shorter in group L. A very good aesthetic result was obtained in 97.7% of patients in group L and 77.3% of patients in group M. Conclusions: Cholecystectomy through a mini-laparotomy incision is a lower-cost, versatile, and safe alternative to laparoscopic cholecystectomy. abstract_id: PUBMED:8813175 Laparoscopic cholecystectomy in the third trimester of pregnancy. Laparoscopic cholecystectomy has become the method of choice for symptomatic cholelithiasis. Laparoscopic cholecystectomy during pregnancy is also becoming more accepted, as illustrated by several recent publications on this subject. In the third trimester, however, laparoscopic cholecystectomy has been less commonly attempted, with only a few case reports. We present a series of four patients who underwent laparoscopic cholecystectomy during the third trimester of pregnancy without complication. We describe a safe and effective technique for this procedure, and review the literature on this topic. abstract_id: PUBMED:38486651 Preliminary Experience with Mini-Laparotomy Cholecystectomy in Jos. Background: Different techniques have been described for removing a diseased gall bladder; however, cholecystectomy via the laparoscopic approach is currently regarded as the gold standard.
Laparoscopic surgery services are not widely available in low- and middle-income countries, and mini-laparotomy cholecystectomy may be a suitable alternative in such circumstances. This technique achieves cholecystectomy with a smaller incision and affords the advantages of the laparoscopic approach. Objective: We report our experience over a 2-year period with 24 consecutive patients from two hospitals who underwent mini-laparotomy cholecystectomy, to highlight our outcomes with the procedure. Materials And Methods: Data were obtained from the surgical theatre procedure register and medical records department of the hospital. Results: During the study period, a total of 24 mini-laparotomy cholecystectomies were performed. Fourteen (58.3%) patients had a clinical diagnosis of calculous cholecystitis whereas 10 (41.7%) patients had symptomatic gallstones. There were four males (16.7%) and 20 females (83.3%), giving a male-to-female ratio of 1:5. The ages ranged from 18 to 68 years with a mean of 46.8 years (standard deviation (SD) = 12.7 years), and the mean operating time was 56.3 min (SD = 7.5 min), ranging from 45 to 72 min. There was no conversion to the traditional large-incision cholecystectomy. There were no intra-operative or post-operative complications and there was no mortality in the study. All the patients were discharged 48 h post-op. Conclusion: Mini-laparotomy cholecystectomy offers the benefits of a minimally invasive procedure such as good cosmesis and short hospital stay. It has a relatively short operative time and a low incidence of complications and can be practised in a low-resource environment, where laparoscopic services are not available. abstract_id: PUBMED:32550326 Mini-cholecystectomy versus laparoscopic cholecystectomy: a retrospective multicentric study among patients operated in some Eastern Libyan hospitals. Objectives: This study was conducted to analyze the difference between Mini-Cholecystectomy (MC) and Laparoscopic Cholecystectomy (LC) in terms of feasibility and postoperative outcomes to determine if MC could be accepted as a good alternative procedure to LC. Material And Methods: A retrospective comparative study of 206 consecutively operated patients with chronic cholecystitis (138 LC and 68 MC), in Al-Jalaa, Ajdabiya and Almrg Teaching hospitals between January 2014 and December 2015 was performed. All cases within the two groups were balanced for age, sex, co-morbidities, ultrasound and intraoperative findings. Exclusion criteria were acute cholecystitis, preoperative jaundice, liver cirrhosis, suspicion of malignancy, previous upper abdominal surgery and pregnancy. Results: Mean age of the patients in the study was around 37 years. Female patients represented 88.84%. Intraoperative complications occurred in about 2% of the patients, with bleeding in three cases (one in MC, two in LC), and injury to the bile ducts occurred in one patient who underwent LC. Operative duration was longer in LC (mean values 64 minutes for LC and 45 minutes for MC). The rate of conversion to classical cholecystectomy in LC was 5% while it was 0% in MC. Only one case of wound infection was registered in the LC group. Postoperative hospital stay was not significantly longer for LC versus MC (1.97 days for MC and 2.63 days for LC). Conclusion: Mini-cholecystectomy is a feasible technique, which can be considered as a good alternative method for gallbladder removal for surgeons who have no experience with laparoscopic techniques and in peripheral hospitals where LC is not available.
Answer: Mini-lap cholecystectomy (MLC) has been suggested as a viable alternative to laparoscopic cholecystectomy (LC) in the Third World, where the cost and required training for LC can be prohibitive. A retrospective review of 737 cholecystectomies performed through a 3-5-cm transverse subcostal incision showed that MLC had operating times, conversion rates, postoperative complications, bile duct injuries, analgesic requirements, duration of postoperative hospital stay, and time off work comparable to those of LC, with no development of biliary strictures upon follow-up. This suggests that MLC is a safe and viable alternative to LC in the Third World (PUBMED:9814739). Further studies have supported the notion that MLC is a cost-effective and feasible procedure with less morbidity and postoperative pain compared to conventional open cholecystectomy. It has been modified to suit patients in the Third World, offering almost comparable results to the gold standard LC (PUBMED:19944194). Additionally, a prospective randomized study found no significant differences in the duration of anesthesia, operation time, perioperative complications, or postoperative need for analgesics between LC and MLC. However, LC patients experienced less postoperative pain, less restriction of total vital capacity, and a shorter postoperative hospital stay (PUBMED:1534528). Comparative studies have also shown that MLC and LC are comparable in terms of operative time, morbidity, need for analgesia, duration of hospitalization, and interval to return to normal work (PUBMED:8840634). Moreover, a prospective clinical trial reported that MLC is a lower-cost, versatile, and safe alternative to LC, with a shorter operation time and a very good aesthetic result in a significant number of patients (PUBMED:17057572). In conclusion, MLC is presented as a feasible, safe, and cost-effective alternative to LC, particularly in low-resource settings where laparoscopic services may not be widely available. It offers benefits such as good cosmesis, short hospital stay, and low incidence of complications, making it a suitable option for the Third World (PUBMED:38486651; PUBMED:32550326).
Instruction: Do "clicker" educational sessions enhance the effectiveness of a social norms marketing campaign? Abstracts: abstract_id: PUBMED:21186455 Do "clicker" educational sessions enhance the effectiveness of a social norms marketing campaign? Objective: social norms campaigns are a cost-effective way to reduce high-risk drinking on college campuses. This study compares effectiveness of a "standard" social norms media (SNM) campaign for those with and without exposure to additional educational sessions using audience response technology ("clickers"). Methods: American College Health Association's National College Health Assessment questions are used to evaluate actual and perceived use. Additional survey questions assess individual exposure to the interventions. Results: the authors find "clicker" technology to be more effective than social norms poster media alone in reducing misperceptions of normative alcohol use for those students who attended clicker sessions. Conclusion: poster SNM campaigns may be most effective when supported by group "clicker" heath-related sessions. abstract_id: PUBMED:20619177 Effectiveness of social norms media marketing in reducing drinking and driving: A statewide campaign. This research evaluated the efficacy of a high-intensity social norms media marketing campaign aimed at correcting normative misperceptions and reducing the prevalence of drinking and driving among 21-to-34-year-olds in Montana. A quasi-experimental design was used, such that regions of Montana were assigned to one of three experimental groups: social norms media marketing campaign, buffer, and control. Four random samples of Montanans between the ages of 21 and 34 were assessed at four time points over 18 months via phone surveys. Findings suggest that the social norms media campaign was successful at exposing the targeted population to social norms messages in the counties within the intervention region. Moreover, results demonstrate the campaign reduced normative misperceptions, increased use of designated drivers, and decreased drinking and driving among those young adults in counties within the intervention region. Social norms media marketing can be effective at changing drinking-related behaviors at the population level. This research provides a model for utilizing social norms media marketing to address other behaviors related to public health. abstract_id: PUBMED:15590342 Done 4: analysis of a failed social norms marketing campaign. College students commonly believe their peers engage in higher levels of dangerous drinking than is actually the case. Social norms marketing campaigns attempt to correct these misperceptions, decrease the perceived normative pressure to drink, and thereby drive down high-risk alcohol consumption. In this case study, we critically examined "Done 4," an unsuccessful social norms marketing campaign conducted as part of a comprehensive prevention trial at a large urban university. As part of this analysis, undergraduate marketing students were shown the principal print advertisement used in the campaign and asked to complete an advertising analysis questionnaire. The results of this case study suggest that the advertisement was poorly constructed, which decreased its effectiveness and led to confusion about the social norms message. We discuss implications of these findings for future prevention campaigns and new research. 
abstract_id: PUBMED:29936905 Evaluation of a Social Norms Sexual Violence Prevention Marketing Campaign Targeted Toward College Men: Attitudes, Beliefs, and Behaviors Over 5 Years. The article evaluates the effectiveness of a 5-year social norms sexual violence prevention marketing campaign designed specifically for men at a large public university in the Southeastern United States. From 2010 through 2014, 4,158 men were asked about their self-reported attitudes, beliefs, and behavior related to sexual violence as well as their perception of their peers' attitudes, beliefs, and behaviors. Using social norms theory, a social norms marketing campaign was developed to target highly discrepant attitudes, beliefs, and behaviors. Results indicate that both self-reported attitudes and beliefs and perceptions of peer attitudes and beliefs improved over time, and the discrepancy between the two narrowed over time. In addition, the frequency of positive and prosocial self-reported behavior increased on five of the six indicators. This study represents a significant contribution to the literature, as it uses 5 years of data and includes behavioral indicators to add evidence for the efficacy of targeting misperceptions of social norms as a mechanism to engage men in gender-based violence prevention. abstract_id: PUBMED:12817831 A failed norms social marketing campaign. Objective: In this article we test the efficacy of an intensive norms social marketing campaign to reduce heavy drinking among college students living in a residence hall. Method: We employed a pretest-posttest nonequivalent comparison group design. The study was conducted in two (experimental and comparison) comparable residence halls located in a large urban public university. We attempted a census at each hall, and pre- and postintervention data were collected in public areas of each residence hall. Relative sample sizes were approximately 60% in the experimental hall (both waves) and 38% in the comparison hall. Results: The campaign successfully corrected students' misperceptions of drinking norms but had no effects, or counterintuitive effects, on drinking behaviors. Conclusions: Despite the popularity of this approach, universities would be prudent to proceed with care before adopting this approach wholesale. abstract_id: PUBMED:36138397 Development and evaluation of a social marketing campaign to address methamphetamine use in Los Angeles County. Background: This study describes the development and impact of a social marketing campaign in early 2020 intended to prevent and reduce methamphetamine use in Los Angeles County (LAC). We used social marketing principles and the transtheoretical model to design the campaign, which was intended to avoid stigmatization of methamphetamine users and communicate compassion, empathy, and support. Methods: To evaluate its impact, we collected cross-sectional online survey data post-campaign (n = 1,873) from LAC residents in population segments considered higher risk for methamphetamine use. We examine associations between campaign exposure and outcomes using bivariate analyses and binary logistic regression models, which control for the impact of the COVID-19 pandemic on methamphetamine use or likelihood of use. Results: The analyses revealed that campaign exposure was associated with having more negative attitudes toward methamphetamine, calling LAC's substance abuse service helpline, using methamphetamine fewer days, and considering abstaining.
Frequency of exposure to campaign advertisements was positively associated with calling the helpline, suggesting a campaign dose effect. COVID-19-related factors were associated with using methamphetamine in the past 30 days. Conclusions: Social marketing campaigns hold promise for impacting methamphetamine prevention and cessation behaviors. This study adds to the limited literature on mass marketing interventions to address this major health issue. abstract_id: PUBMED:34305506 Effectiveness of 'Mois sans tabac 2016': A French social marketing campaign against smoking. Introduction: In October 2016, the first edition of Mois sans tabac (Tobacco-Free Month) was launched, a campaign that invited French smokers to challenge themselves to quit smoking for the whole month of November. We aimed to study the effectiveness of this social marketing intervention on quit attempts (QA) in the general French population, and to study possible differences according to sociodemographic characteristics. Methods: This study used data from the 2017 Health Barometer survey, a random survey conducted by telephone on 25319 individuals. It included 6341 respondents who reported that they were daily smokers when the Mois sans tabac campaign was launched in 2016. The association between self-declared exposure to the campaign and making a QA was studied using multivariate logistic regressions. Results: Exposure to the 2016 Mois sans tabac campaign is associated with a QA lasting at least 24 hours in the final quarter of 2016 (AOR=1.32; 95% CI: 1.07-1.63, p<0.01), with a QA lasting at least 30 days (AOR=1.95; 95% CI: 1.31-2.91, p<0.001), and being abstinent at the time of the interview in 2017 (AOR=2.39; 95% CI: 1.37-4.15, p<0.01). A dose-effect relationship is observed between the frequency of exposure to the campaign and QA, which is mostly explained by the number of sources of exposure (television, radio, posters, the press, the internet and social networks). Although certain priority groups (e.g. manual workers, the unemployed) had poorer recall of the campaign than other groups, the impact of self-reported exposure to the campaign on QA in unemployed people or those with less than high school educational level appears to have been greater. Conclusions: These analyses suggest the effectiveness of the 2016 Mois sans tabac intervention, in a context of strengthening public tobacco control policies in France, which may have contributed to the drop in smoking observed between 2016 and 2019. abstract_id: PUBMED:34116390 Evaluating a social marketing campaign on healthy nutrition and lifestyle among primary-school children: A mixed-method research design. Italy has one of the highest levels of childhood overweight and obesity in Europe. Therefore, preventing children from becoming overweight is a major public health challenge. Here, we used a mixed-method research approach, including a quasi-experimental design and three surveys, to create a formative evaluation of a social marketing campaign on healthy nutrition and lifestyle in Italian primary school children. The social marketing campaign was organized around the 4 Ps of the marketing mix (product: the educational activities; place: the involved schools and supermarkets; promotion: the in-person and technology-based communication; and price: hours spent by the targeted children in fulfilling the educational activities). The campaign involved primary-school children across four Italian cities.
The findings suggest that social marketing education campaigns can be effective tools to improve children's knowledge about healthy food and lifestyle, reduce their sedentary behavior, and increase their consumption of healthy food. Also, increasing children's acceptance of healthful nourishment is a valuable tool to improve the dietary habits of the entire family. With regard to the educational program assessment method, this study can inform societal interventions, especially those involving children, through the integration of different qualitative and quantitative research methods, which collect data from different subjects and perspectives. abstract_id: PUBMED:37170200 An application of social marketing for promoting HIV testing in Iran. Background: It has been estimated that 60,000 Iranians have been infected with HIV/AIDS and only 36% of them are aware of their status. This study aimed to design, implement and evaluate a social marketing campaign to promote HIV testing in Boyer-Ahmad County, Kohgiluyeh and Boyer-Ahmad Province, southwest of Iran. Materials And Methods: This was a quasi-experimental pretest-posttest study without a control group, developed based on a social marketing assessment and response tool. To design the intervention, formative research was conducted, comprising four focus group discussion sessions with 42 participants from the target community along with seven in-depth semi-structured personal interviews with health care providers involved in the HIV/AIDS Program. Data analysis was done manually using content analysis and the main content was formulated for the campaign. Afterward, the slogan and messages of the campaign were developed. The campaign's materials, including banners, posters, pamphlets, referral forms, and short messages, were designed, pretested, and revised. Ultimately, the campaign was conducted for one month in October 2019. To determine the effectiveness of the campaign, the rate of referrals to the Center for Behavioral Health Counseling Services (CBHCS), for three months before and after the campaign, was compared. Results: Qualitative findings identified that the majority of the interviewees mentioned that the main reasons for the low rate of referrals to get tested for HIV were lack of awareness and information about HIV/AIDS and its diagnosis, as well as about the free and confidential tests available from the CBHCS. Moreover, the stigma associated with HIV/AIDS was another important reason for low referrals for testing. The rate of referrals for HIV testing in the three months leading up to the campaign was 18, 32, and 23 people, and in the three months after the campaign it was 64, 81, and 44 individuals, respectively. Conclusion: It can be concluded that the social marketing campaign was successful in persuading people to get tested for HIV.
Methods: Random samples of students were surveyed most semesters to monitor message saturation, dosage, and believability, along with drinking attitudes, behaviors and harm related to celebratory events. The NCHA was conducted biennially from 2000 onward to assess overall progress. Results: Perceived drinking norms declined along with measures of actual drinking intensity, frequency, and frequency of intense drinking. Use of protective behaviors most often addressed in campaign messages increased while driving after drinking declined. Importantly, reported adverse effects of drinking on academic performance declined substantially. Conclusions: The evidence suggests the social norms approach has been effective at reducing adverse academic outcomes of drinking. Changes in perceptions, attitudes, behaviors and outcomes appear to be continuing. Answer: Yes, "clicker" educational sessions do enhance the effectiveness of a social norms marketing campaign. The study referenced in abstract PUBMED:21186455 found that "clicker" technology was more effective than social norms poster media alone in reducing misperceptions of normative alcohol use for those students who attended clicker sessions. The conclusion drawn from this study was that poster social norms marketing (SNM) campaigns may be most effective when supported by group "clicker" health-related sessions.
Instruction: A hundred-year-old question: is the moss calyptra covered by a cuticle? Abstracts: abstract_id: PUBMED:27379125 Comparative Cuticle Development Reveals Taller Sporophytes Are Covered by Thicker Calyptra Cuticles in Mosses. The calyptra is a maternal structure that protects the sporophyte offspring from dehydration, and positively impacts sporophyte survival and fitness in mosses. We explore the relationship between cuticle protection and sporophyte height as a proxy for dehydration stress in Funariaceae species with sporophytes across a range of sizes. Calyptrae and sporophytes from four species were collected from laboratory-grown populations at two developmental stages. Tissues were embedded, sectioned, and examined using transmission electron microscopy. Cuticle thickness was measured from three epidermal cells per organ for each individual and compared statistically. All four species have cuticles consisting of a cuticle proper and a cuticular layer on the calyptra and sporophyte at both developmental stages. Across species, shorter sporophytes are associated with smaller calyptra and thinner calyptra cuticles, whereas taller sporophytes are associated with larger calyptra and thicker calyptra cuticles. Independent of size, young sporophytes have a thin cuticle that thickens later during development, while calyptrae have a mature cuticle produced early during development that persists throughout development. This study adds to our knowledge of maternal effects influencing offspring survival in plants. Released from the pressures to invest in protection for their sporophyte offspring, maternal resources can be allocated to other processes that support sporophyte reproductive success. Using a comparative developmental framework enables us to broaden our understanding of cuticle development across species and provides structural evidence supporting the waterproofing role of the moss calyptra. abstract_id: PUBMED:21486928 A hundred-year-old question: is the moss calyptra covered by a cuticle? A case study of Funaria hygrometrica. Background And Aims: The maternal gametophytic calyptra is critical for moss sporophyte development and ultimately sporogenesis. The calyptra has been predicted to protect the sporophyte apex, including the undifferentiated sporogenous region and seta meristem, from desiccation. We investigate the hypothesis that this waterproofing ability is due to a waxy cuticle. The idea that moss calyptrae are covered by a cuticle has been present in the literature for over a century, but, until now, neither the presence nor the absence of a cuticle has been documented for any calyptra. Methods: The epidermis of the calyptra, leafy gametophyte and sporophyte sporangia of the moss Funaria hygrometrica were examined using scanning and transmission electron microscopy. Thicknesses of individual cuticle layers were quantified and compared statistically. The immunochemistry antibody (LM19) specific for pectins was used to locate cell wall material within the cuticle. Key Results: A multi-layered cuticle is present on the calyptra of F. hygrometrica, including layers analogous to the cuticular layer, cell wall projections, electron-lucent and electron-dense cuticle proper observed in vascular plants. The calyptra rostrum has a cuticle that is significantly thicker than the other tissues examined and differs by specialized thickenings of the cuticular layer (cuticular pegs) at the regions of the anticlinal cell walls. 
This is the first documentation of cuticular pegs in a moss. Conclusions: The calyptra and its associated cuticle represent a unique form of maternal care in embryophytes. This organ has the potential to play a critical role in preventing desiccation of immature sporophytes and thereby may have been essential for the evolution of the moss sporophyte. abstract_id: PUBMED:22210839 The cuticle on the gametophyte calyptra matures before the sporophyte cuticle in the moss Funaria hygrometrica (Funariaceae). Premise Of The Study: In vascular plants, leaf primordia prevent desiccation of the shoot apical meristem. Lacking leaves, the undifferentiated moss sporophyte apex is covered by the calyptra, a cap of maternal gametophyte tissue that is hypothesized to function in desiccation protection. Herein, we compare cuticle development on the calyptra and sporophyte to assess the calyptra's potential to protect the sporophyte from desiccation. As the first comprehensive study of moss sporophyte cuticle development, this research broadens our perspectives on cuticle development and evolution across embryophytes. Methods: Calyptrae and sporophytes at nine developmental stages were collected from a laboratory-grown population of the moss Funaria hygrometrica. Tissues were embedded, sectioned, then examined using transmission electron microscopy. Epidermal cells were measured for thickness of the cuticle layers, cell wall thickness, and lumen size. Key Results: The calyptra cuticle develops precociously and reaches maturity before the sporophyte cuticle. Calyptrae are covered by a four-layered cuticle at all stages, whereas sporophyte cuticle maturation is delayed until sporangium formation. The development and thickening of the sporophyte cuticle occurs in an acropetal wave. Conclusions: A multilayered calyptra cuticle at the earliest developmental stages is consistent with its ability to protect the immature sporophyte from desiccation. Young sporophytes lack a complex cuticle and thus may require protection, whereas in older sporophytes a mature cuticle develops. The moss calyptra is not a vestigial structure, but rather the calyptra's role in preventing desiccation offers a functional explanation for calyptra retention during the 450 Myr of moss evolution. abstract_id: PUBMED:27489161 The moss Funaria hygrometrica has cuticular wax similar to vascular plants, with distinct composition on leafy gametophyte, calyptra and sporophyte capsule surfaces. Background And Aims: Aerial surfaces of land plants are covered with a waxy cuticle to protect against water loss. The amount and composition of cuticular waxes on moss surfaces have rarely been investigated. Accordingly, the degree of similarity between moss and vascular plant waxes, and between maternal and offspring moss structure waxes, is unknown. To resolve these issues, this study aimed at providing a comprehensive analysis of the waxes on the leafy gametophyte, gametophyte calyptra and sporophyte capsule of the moss Funaria hygrometrica. Methods: Waxes were extracted from the surfaces of leafy gametophytes, gametophyte calyptrae and sporophyte capsules, separated by gas chromatography, identified qualitatively with mass spectrometry, and quantified with flame ionization detection. Diagnostic mass spectral peaks were used to determine the isomer composition of wax esters. Key Results: The surfaces of the leafy gametophyte, calyptra and sporophyte capsule of F. hygrometrica were covered with 0.94, 2.0 and 0.44 μg cm⁻² wax, respectively.
While each wax mixture was composed of mainly fatty acid alkyl esters, the waxes from maternal and offspring structures had unique compositional markers. β-Hydroxy fatty acid alkyl esters were limited to the leafy gametophyte and calyptra, while alkanes, aldehydes and diol esters were restricted to the sporophyte capsule. Ubiquitous fatty acids, alcohols, fatty acid alkyl esters, aldehydes and alkanes were all found on at least one surface. Conclusions: This is the first study to determine wax coverage (μg cm(-2)) on a moss surface, enabling direct comparisons with vascular plants, which were shown to have an equal amount or more wax than F. hygrometrica. Wax ester biosynthesis is of particular importance in this species, and the ester-forming enzyme(s) in different parts of the moss may have different substrate preferences. Furthermore, the alkane-forming wax biosynthesis pathway, found widely in vascular plants, is active in the sporophyte capsule, but not in the leafy gametophyte or calyptra. Overall, wax composition and coverage on F. hygrometrica were similar to those reported for some vascular plant species, suggesting that the underlying biosynthetic processes in plants of both lineages were inherited from a common ancestor. abstract_id: PUBMED:23471009 Dehydration protection provided by a maternal cuticle improves offspring fitness in the moss Funaria hygrometrica. Background And Aims: In bryophytes the sporophyte offspring are in contact with, nourished from, and partially surrounded by the maternal gametophyte throughout their lifespan. During early development, the moss sporophyte is covered by the calyptra, a cap of maternal gametophyte tissue that has a multilayered cuticle. In this study the effects on sporophyte offspring fitness of removing the maternal calyptra cuticle, in combination with dehydration stress, are experimentally determined. Methods: Using the moss Funaria hygrometrica, calyptra cuticle waxes were removed by chemical extraction and individuals were exposed to a short-term dehydration event. Sporophytes were returned to high humidity to complete development and then aspects of sporophyte survival, development, functional morphology, and reproductive output were measured. Key Results: It was found that removal of calyptra cuticle under low humidity results in significant negative impacts to moss sporophyte fitness, resulting in decreased survival, increased tissue damage, incomplete sporophyte development, more peristome malformations, and decreased reproductive output. Conclusions: This study represents the strongest evidence to date that the structure of the calyptra cuticle functions in dehydration protection of the immature moss sporophyte. The investment in a maternal calyptra with a multilayered cuticle increases offspring fitness and provides a functional explanation for calyptra retention across mosses. The moss calyptra may represent the earliest occurrence of maternal protection via structural provisioning of a cuticle in green plants. abstract_id: PUBMED:26110389 Molecular Mechanisms Underlying Hull-Caryopsis Adhesion/Separation Revealed by Comparative Transcriptomic Analysis of Covered/Naked Barley (Hordeum vulgare L.). The covered/naked caryopsis trait of barley is an important agronomic trait because it is directly linked to dietary use. The formation of covered/naked caryopsis is controlled by an NUD transcription factor, which is involved in pericarp cuticle development.
However, the molecular mechanism underlying this trait remains largely unknown. In this study, comparative transcriptomes of grains three weeks after anthesis of Tibetan Hulless barley landrace Dulihuang and covered barley Morex were analyzed using the RNA-seq technique. A total of 4031 differentially expressed genes (DEGs) were identified. The Nud gene was overexpressed in Morex, with trace expression in Dulihuang. Among seventeen cuticle related DEGs, sixteen were down regulated and one up regulated in Morex. These results suggest that the Nud gene in covered caryopsis might down regulate cuticle related genes, which may cause a permeable cuticle over pericarp, leading to a hull-caryopsis organ fusion. A functional cuticle covering the pericarp of naked caryopsis might be the result of deletion or low expression level of the Nud gene. The functional cuticle defines a perfect boundary to separate the caryopsis from the hull in naked barley. abstract_id: PUBMED:32722473 The Role of Cutinsomes in Plant Cuticle Formation. The cuticle commonly appears as a continuous lipophilic layer located at the outer epidermal cell walls of land plants. Cutin and waxes are its main components. Two methods for cutin synthesis are considered in plants. One that is based on enzymatic biosynthesis, in which cutin synthase (CUS) is involved, is well-known and commonly accepted. The other assumes the participation of specific nanostructures, cutinsomes, which are formed in physicochemical self-assembly processes from cutin precursors without enzyme involvement. Cutinsomes are formed in ground cytoplasm or, in some species, in specific cytoplasmic domains, lipotubuloid metabolons (LMs), and are most probably translocated via microtubules toward the cuticle-covered cell wall. Cutinsomes may additionally serve as platforms transporting cuticular enzymes. Presumably, cutinsomes enrich the cuticle in branched and cross-linked esterified polyhydroxy fatty acid oligomers, while CUS1 can provide both linear chains and branching cutin oligomers. These two systems of cuticle formation seem to co-operate on the surface of aboveground organs, as well as in the embryo and seed coat epidermis. This review focuses on the role that cutinsomes play in cuticle biosynthesis in S. lycopersicum, O. umbellatum and A. thaliana, which have been studied so far; however, these nanoparticles may be commonly involved in this process in different plants. abstract_id: PUBMED:38126886 Whole-Course Intraperitoneal Robot-Assisted Choledochal Cyst Excision with a Hidden Incision in Children Under 1 Year Old. Background: Whole-course intraperitoneal robot-assisted choledochal cyst resection in children under 1 year of age is controversial due to its technical challenges. The Pfannenstiel incision is currently widely used in adults for its cosmetic effects but is rarely used in children. Materials and Methods: We conducted a prospective, single-center study to assess the feasibility, safety, and cosmesis of whole-course intraperitoneal robot-assisted choledochal cyst resection with Pfannenstiel incision in children under 1 year of age. Results: Ten patients were treated with our surgical protocol, and there was no conversion to laparotomy. The average total operation time was 223 minutes. The average duration of anesthesia was 260.2 minutes. The average docking time between the robot arm and Trocar was 17.5 minutes. The average intraoperative blood loss was 16 mL. No postoperative complications occurred in the 10 patients.
The mean time to start drinking water after surgery was 2.4 days. The mean postoperative drainage tube removal time was 2.6 days. The average length of stay was 8.5 days. The scar assessment scale total scores of the 2 observers were (6.8 ± 1.23) and (7.4 ± 1.84), respectively. For every patient, there were only four abdominal surgical scars, of which 75% were hidden by underpants and 25% were not covered. Conclusion: It is feasible and safe to perform whole-course intraperitoneal robot-assisted choledochal cyst resection with Pfannenstiel incision in children under 1 year old. It also provides a hidden-incision effect and is worth promoting. abstract_id: PUBMED:34258605 Covered Stent Treatment of an Extracranial Internal Carotid Artery Pseudoaneurysm in a Three Year Old Child with 12 Years of Follow Up: A Case Report. Introduction: Extracranial internal carotid artery (ICA) pseudoaneurysms in children, although uncommon, are life threatening. Covered stents are a good alternative treatment, as they avoid the risk of open surgery and preserve the ICA. Until recently, long term outcomes were unknown. Report: A three year old boy was hospitalised with an enlarged swelling on the left side of his neck and severe respiratory distress. He had been treated a month prior for a left deep cervical abscess, with bacteriological culture positive for Staphylococcus aureus. Computed tomography angiography (CTA) revealed a large pseudoaneurysm originating from the left ICA, approximately 2 cm above the bifurcation. A balloon expanded covered stent (Jostent Graftmaster; Abbott Vascular, Redwood City, CA, USA) was deployed via a left femoral approach, after selective angiography, to seal the carotid rupture without incident. Control angiography revealed immediate exclusion of the pseudoaneurysm and patent ICA. The bacteriological culture of the residual haematoma was negative. The child was discharged with full recovery and without neurological sequelae, under platelet anti-aggregation. He has been followed up and has remained asymptomatic for 12 years, with CTA confirmed ICA patency, without deformation or evidence of significant restenosis. Discussion: This is the first report of the long term outcome of a covered stent in a child treated at three years of age, with a 12 year follow up. The good performance of the covered stent in this case reinforces its adoption as a first line option in the treatment of extracranial ICA pseudoaneurysms in children. abstract_id: PUBMED:19283099 Cuticle Ultrastructure of Criconemella curvata and Criconemella sphaerocephala (Nemata: Criconematidae). Cuticle ultrastructure of Criconemella curvata and C. sphaerocephala females is presented; males were available only in the second species. Ultrathin sections revealed three major zones: cortical, median, and basal. The cortical zone in the females consists of an external and internal layer. In C. curvata the external layer is trilaminate and at each annule it is covered by a multilayered cap. In C. sphaerocephala the trilaminate layer is lacking and the external cortical layer includes an osmophilic coating. In both species the internal layer consists of alternate striated and unstriated sublayers. The median zone is fibrous with a central lacuna and the zone is interrupted between the annules. The basal zone of the cuticle is striated and narrower between each annule. The cuticle of the C.
sphaerocephala male is typical of Tylenchida, except under both lateral fields; the striated layer becomes forked at the first incisure and the innermost two prongs of the fork overlap each other, resulting in a continuous striated band. Answer: Yes, the moss calyptra is covered by a cuticle. This was confirmed in a study of the moss Funaria hygrometrica, where the presence of a multi-layered cuticle on the calyptra was documented for the first time. The study found that the calyptra's cuticle includes layers analogous to the cuticular layer, cell wall projections, electron-lucent and electron-dense cuticle proper observed in vascular plants. The calyptra rostrum, in particular, has a cuticle that is significantly thicker than other tissues examined and features specialized thickenings of the cuticular layer at the regions of the anticlinal cell walls, known as cuticular pegs (PUBMED:21486928).
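The anatomical studies in this record all reduce to the same quantitative comparison: cuticle thickness measured from a few epidermal cells per organ, then compared statistically between maternal (calyptra) and offspring (sporophyte) tissue. A minimal sketch of that comparison is shown below; the measurement values are invented placeholders for illustration, not data from the cited papers.

```python
import numpy as np
from scipy import stats

# Placeholder thickness measurements (micrometers), three epidermal
# cells per individual as in PUBMED:27379125; the values are invented
# for illustration only.
calyptra_um = np.array([1.9, 2.1, 2.0, 2.3, 1.8, 2.2])
sporophyte_um = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 1.0])

# Welch's t-test (unequal variances) for the calyptra-vs-sporophyte
# cuticle thickness comparison.
t_stat, p_value = stats.ttest_ind(calyptra_um, sporophyte_um,
                                  equal_var=False)
print(f"mean calyptra   = {calyptra_um.mean():.2f} um")
print(f"mean sporophyte = {sporophyte_um.mean():.2f} um")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```

The cited papers do not name their exact statistical test, so Welch's t-test here is an assumption chosen only to make the comparison concrete.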
Instruction: Do junior house officers routinely test visual acuity and perform ophthalmoscopy? Abstracts: abstract_id: PUBMED:12193006 Do junior house officers routinely test visual acuity and perform ophthalmoscopy? Our objective was to assess junior house officer (JHO) practice of visual acuity testing and ophthalmoscopy in clerking patients. Design: Cross-sectional questionnaire-based study using a standardised structured interview technique. Setting: A Scottish university teaching hospital employing 65 JHOs. Participants: All medical and surgical JHOs from this hospital were interviewed over a three month period. Main Outcome Measures: Questionnaire-based data on the subjective responses studying current practice of visual acuity testing, and direct ophthalmoscopy with and without topical mydriatics. Results: 18.5% and 4.6% of participants perform daily ophthalmoscopy and visual acuity testing, respectively. Most do not routinely use the Snellen chart (80.0%) or topical mydriatics during ophthalmoscopy (75.4%). JHOs claimed these were not easily available in the wards. Conclusions: The majority of JHOs fail to test visual acuity or perform ophthalmoscopy in clerking patients. This study highlights the poor availability of Snellen charts, functioning ophthalmoscopes, and topical mydriatics in the wards. This warrants further investigation. abstract_id: PUBMED:26500789 To Assess Sleep Quality among Pakistani Junior Physicians (House Officers): A Cross-sectional Study. Background: Sleep deprivation among junior physicians (house officers) is of growing concern. In developed countries, duty hours are now mandated, but in developing countries, junior physicians are highly susceptible to developing sleep impairment due to long working hours, on-call duties and shift work schedule. Aim: We undertook the study to assess sleep quality among Pakistani junior physicians. Subjects And Methods: A cross-sectional study was conducted at private and public hospitals in Karachi, Pakistan, from June 2012 to January 2013. The study population comprised junior doctors (house physicians and house surgeons). A consecutive sample of 350 physicians was drawn from the above-mentioned study setting. The subjects completed two validated self-administered questionnaires, that is, the Pittsburgh Sleep Quality Index (PSQI) and the Epworth Sleepiness Scale (ESS). Results: A total of 334 physicians completely filled out the questionnaire with a response rate of 95.4% (334/350). Of 334 physicians, 36.8% (123/334) were classified as "poor sleepers" (global PSQI score > 5). Poor sleep quality was associated with female gender (P = 0.01), excessive daytime sleepiness (P < 0.01), lower total sleep time (P < 0.001), increased sleep onset latency (P < 0.001), and increased frequency of sleep disturbances (P < 0.001). Abnormal ESS scores (ESS > 10) were more prevalent among poor sleepers (P < 0.01), signifying an increased level of daytime hypersomnolence. Conclusion: Sleep quality among Pakistani junior physicians is significantly poor. Efforts must be directed towards proper sleep hygiene education. Regulations regarding duty hour limitations need to be considered. abstract_id: PUBMED:23730112 A novel computerized visual acuity test for children. Purpose: To investigate the efficacy of a computerized visual acuity test, the SNU visual acuity test for children. Methods: Fifty-six children, ranging from 1 to 5 years of age, were included.
In a dark room, children gazed at and followed a circular dot with 50% contrast moving at a fixed velocity of 10 pixels/sec on a computer monitor. Eye movement was captured using a charge coupled device camera and was expressed as coordinates on a graph. Movements of the eye and dot were superimposed on a graph and analyzed. Minimum visualized dot diameters were compared to the Teller visual acuity. Results: Ten eyes (8.9%) of six children failed to perform the Teller visual acuity test, and two eyes (1.8%) of one patient failed to perform the SNU visual acuity test. The observed Teller visual acuity and SNU visual acuity were significantly correlated (p < 0.001). Visual angle degrees converted from the Teller visual acuity and SNU visual acuity were also significantly correlated (p < 0.001). Conclusion: The SNU visual acuity test using moving targets correlated well with Teller visual acuity and was more applicable than the Teller acuity test. Therefore, the SNU visual acuity test has potential clinical applications for children. abstract_id: PUBMED:30252088 Character Strengths of Junior Australian Army Officers. Introduction: Armed forces view their officers' character as foundational to their ability to lead the personnel entrusted to them. The character of junior officers is of particular interest, because they must increasingly make quick, morally-laden decisions while dispersed among civilians without time to consult their commanders. However, little is systematically known about the character of officers. Accordingly, the present study was aimed at mapping Australian Army junior officers' perceptions of the chief aspects of their own character and also those of their main role models, specifically, their senior officers, including their trustworthiness as an essential aspect of effective leadership. The present study also tested whether these character perceptions were aligned with four core values of the Australian Army - courage, initiative, teamwork, and respect - which are intended to shape the character of its personnel. Materials And Methods: The respondents (N = 171 lieutenants) ranked how well each of 24 positive character strengths applied to themselves as individuals. To test the alignment of these rankings with the respondents' perception of their leaders' character, respondents nominated the five top strengths of their effective leaders. This study was approved by the Australian Defence Human Research Ethics Committee (ADHREC 009-2013). Results: With regard to the first aim, five character strengths - integrity, leadership, good judgment, trustworthy, and teamworker - were ranked by the respondents as being their chief personal strengths at frequencies significantly above those expected from random allocation. With regard to the second aim, the respondents aligned the rankings of their character strengths with those of their effective leaders through the entire list, not just the highest-ranking items. Nevertheless, there were two significant differences. The respondents assigned their leaders higher ranks for wisdom than for themselves, but assigned lower ranks to their leaders for being trustworthy. With regard to the third aim, the respondents' perceptions of their chief character strengths were not well aligned with the four core values of the Australian Army. Teamworker was given a significant top ranking, but courage, initiative, and respectful were not. Thus, the respondents did not appear to respond according to organizational demand characteristics.
Conclusions: The present study achieved its aims. First, it revealed five character strengths that junior officers in the Australian Army tended to see as their own chief strengths. Second, the junior officers saw their character strengths as aligned with those of their senior officers. By and large, this alignment extended across all the character strengths, regardless of their specific ranking. Subject to further experimental testing, this finding provides correlational evidence that junior officers may model their character on what they see in their senior officers. Third, in contrast, the junior officers' rankings were not well aligned with the Australian Army's stated core values. The junior officers saw teamworker as one of their chief strengths, but not courage, initiative, or respectful. This pattern does not imply that the junior officers rejected strengths as valuable but only that they are not among the junior officers' own chief strengths. abstract_id: PUBMED:12736623 Acuity, ophthalmoscopy, and visually evoked potentials in the prediction of visual outcome in infants with bilateral optic nerve hypoplasia. Purpose: To determine whether Teller Acuity Cards, transient visually evoked potentials (VEPs), and optic disc size estimated from ophthalmoscopy were predictive of acuity outcome in infants and young children with bilateral optic nerve hypoplasia (ONH). Methods: Twenty-eight infants (mean age, 7 months) with bilateral ONH underwent clinical assessment, including ophthalmoscopy. All but a few of these patients underwent neuroimaging studies, analysis of transient VEPs to multiple stimuli, and repeated acuity assessment. Acuity outcome was assessed, on average, 28 months later. Results: Acuity outcome was significantly correlated with the estimated optic disc diameter, initial acuity, and VEP signal-to-noise ratios (r = 0.80, 0.71, 0.69, respectively; all P < .001). Multiple regression analysis showed that the initial acuity and estimated optic disc diameter accounted for 73% of the variation in acuity outcome. VEPs to white-black gratings segregated infants by 6 months of age into those whose acuity outcome was better or worse than 5.6 cycles/degree (20/100). Conclusions: Acuity outcome was predicted in infants with bilateral ONH with a linear equation using initial acuity and estimated optic disc diameter. Additionally, analysis of VEPs may segregate infants with a good visual outcome from those with a poor visual outcome. Longer follow-up will be necessary to determine final Snellen acuity. abstract_id: PUBMED:25598621 Perceived stress and stressors among house officers. Background: House officer training has always been regarded as a highly stressful environment for doctors. The objectives of our study were to assess perceived stress and sources of stress among house officers. Materials And Methods: A cross-sectional questionnaire-based survey was carried out among house officers working in Civil Hospital, Karachi, Pakistan, and Jinnah Postgraduate Medical Centre, Karachi, Pakistan, during November and December, 2013. Perceived stress was assessed using the Perceived Stress Scale. A 15-item questionnaire was used to assess sources of stress and was graded on a Likert scale (1 = very low, 5 = very high). To calculate the difference of means for stressors by gender of house officers, a t-test with 95% confidence interval was used. Results: The overall response rate was 81.5% (269 out of 330).
One hundred twenty-nine (47.9%) were found to be under stress, of whom 32 (24.8%) were males and 97 (75.2%) were females. The top five stressors reported by house officers were night calls, workload, time pressure, working alone, and coping with diagnostic uncertainty. Significant differences for stressors by gender were found for night calls (P < 0.05), unrealistically high expectation by others (P < 0.05), financial issues (P < 0.05), and lack of senior support (P < 0.05). Conclusion: The majority of house officers working in Civil Hospital, Karachi, and Jinnah Postgraduate Medical Centre, Karachi, were under a high level of stress. Therefore, immediate steps should be taken for control of stress and its management. abstract_id: PUBMED:10420377 Entoptic phenomena and potential visual acuity. REVIEW OF PREOPERATIVE EVALUATION OF VISUAL FUNCTION: In the presence of media opacities, the prediction of the potential postoperative visual acuity often constitutes a complicated diagnostic problem. Its solution requires the experience of the ophthalmologist, who should take into account various elements from the patient's visual aggravation progress, the dissociation between far and near vision under mesopic and scotopic conditions, the actual grating acuity and the eventual hand movement and light-perception, as well as elements from the patient's objective examination, i.e., from the biomicroscopy with direct and indirect illumination through the still transparent media and from the binocular ophthalmoscopy of the fundus. METHODS REVIEWED: Purkinje figure and luminous darting points visualization. POSSIBLE PREDICTIONS: Visualization of light-induced entoptic imagery, such as the Purkinje figure and luminous darting points, helps as a supplementary diagnostic procedure for the prediction of a low, but socially useful, postoperative visual acuity. abstract_id: PUBMED:7211248 New visual acuity test for pre-school children. A new test chart was developed for the measurement of visual acuity of pre-school children. The symbols of the test are circle, square, apple and house. These were so designed that each symbol measures visual acuity similarly. This feature of the test was verified experimentally. The visual acuity values measured by the individual symbols correlated highly with the visual acuity values measured with the whole test (0.82-0.86). The correlation between the visual acuity values measured repeatedly, the reliability of the new test, was found to be 0.94 for adult subjects. The new visual acuity test thus fulfils the statistical criteria of a good visual acuity test. Because both children and nurses seem to like the new test, it may be useful in the assessment of visual acuity in pre-school children. abstract_id: PUBMED:37426939 Advances in dynamic visual acuity test research. The dynamic visual acuity test (DVAT) is a functional evaluation tool for the impairment and compensation of the vestibular system, which could reflect the Vestibulo-ocular reflex (VOR) function. We present an overview of DVAT research, displaying recent advances in test methods, application, and influencing factors; and discussing the clinical value of DVAT to provide a reference for clinical application. There are two primary types of DVAT: dynamic-object DVAT and static-object DVAT.
For the latter, in addition to the traditional bedside DVAT, there are numerous other approaches, including Computerized DVAT (cDVAT), DVAT on a treadmill, DVAT on a rotary, head thrust DVA (htDVA) and functional head impulse testing (fHIT), gaze shift dynamic visual acuity with walking (gsDVA), translational dynamic visual acuity test (tDVAT), pediatric DVAT. The results of DVAT are affected by subject factors [occupation, static visual acuity (SVA), age, eyeglass lenses], testing methods, caffeine, and alcohol. DVAT has numerous clinical applications, such as screening for vestibular impairment, assessing vestibular rehabilitation, predicting fall risk, and evaluating ophthalmology-related disorders, vestibular disorders, and central system disorders. abstract_id: PUBMED:7223847 Visual acuity and test performance. Evaluation of scholastic achievement (American College Testing Service) test scores confirms previous reports that persons with poor visual acuity perform better on these tests than individuals with normal or superior acuity. Answer: No, junior house officers do not routinely test visual acuity and perform ophthalmoscopy. A cross-sectional questionnaire-based study revealed that only 18.5% of participants perform daily ophthalmoscopy and 4.6% perform visual acuity testing regularly. The majority of junior house officers fail to test visual acuity or perform ophthalmoscopy when clerking patients. The study also highlighted the poor availability of Snellen charts, functioning ophthalmoscopes, and topical mydriatics in the wards, which may contribute to this lack of routine testing (PUBMED:12193006).
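A note on units: the abstracts in this record move between Snellen fractions (20/100), Teller grating acuity in cycles/degree, and visual angle. These notations are interconvertible, and the sketch below shows the standard arithmetic, assuming the usual convention that 30 cycles/degree corresponds to 1 arcmin of resolution (20/20). The function names are illustrative and not drawn from any cited study.

```python
import math

def grating_to_snellen_denominator(cpd: float) -> float:
    """Convert grating acuity (cycles/degree) to a 20/X Snellen equivalent.

    Assumes the common convention that 30 cycles/degree resolves 1 arcmin,
    i.e. 20/20 vision; then X = 20 * 30 / cpd = 600 / cpd.
    """
    return 600.0 / cpd

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """logMAR = log10(minimum angle of resolution in arcmin)."""
    mar_arcmin = denominator / numerator  # 20/100 -> MAR of 5 arcmin
    return math.log10(mar_arcmin)

if __name__ == "__main__":
    # PUBMED:12736623 uses 5.6 cycles/degree as its 20/100 cutoff:
    cpd = 5.6
    denom = grating_to_snellen_denominator(cpd)  # ~107, rounded to 20/100
    print(f"{cpd} cpd ~ 20/{denom:.0f}")
    print(f"logMAR of 20/100 = {snellen_to_logmar(20, 100):.2f}")  # 0.70
```

On the abstract's own example, 5.6 cycles/degree maps to 20/107, consistent with the quoted 20/100 cutoff.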
Instruction: Normal postural responses preceding shoulder flexion: co-activation or asymmetric activation of transverse abdominis? Abstracts: abstract_id: PUBMED:24867902 Normal postural responses preceding shoulder flexion: co-activation or asymmetric activation of transverse abdominis? Background And Objectives: It is suggested that activation of the transverse abdominis muscle has a stabilizing effect on the lumbar spine by raising intra-abdominal pressure without added disc compression. However, its feedforward activity has remained a controversial issue. In addition, research regarding bilateral activation of trunk muscles during a unilateral arm movement is limited. The aim of this study was to evaluate bilateral anticipatory activity of trunk muscles during unilateral arm flexion. Materials And Methods: Eighteen healthy subjects (aged 25 ± 3.96 years) participated in this study and performed 10 trials of rapid arm flexion in response to a visual stimulus. The electromyographic activity of the right anterior deltoid (AD) and bilateral trunk muscles including the transverse abdominis/internal oblique (TA/IO), superficial lumbar multifidus (SLM) and lumbar erector spinae (LES) was recorded. The onset latency and anticipatory activity of the recorded trunk muscles were calculated. Results: The first muscle activated in anticipation of the right arm flexion was the left TA/IO. The right TA/IO activated significantly later than all other trunk muscles (P < 0.0005). In addition, anticipatory activity of the right TA/IO was significantly lower than all other trunk muscles (P < 0.0005). There was no significant difference in either onset latency or anticipatory activity among other trunk muscles (P > 0.05). Conclusion: Healthy subjects showed no bilateral anticipatory co-activation of TA/IO in unilateral arm elevation. Further investigations are required to delineate normal muscle activation pattern in healthy subjects prior to prescribing bilateral activation training of transverse abdominis for subjects with chronic low back pain. abstract_id: PUBMED:31631946 The effectiveness of transverse abdominis training on balance, postural sway and core muscle recruitment patterns: a pilot study comparison across age groups. [Purpose] This pilot study aims to determine whether improvements in postural sway, particularly among older adults, can be augmented immediately after training participants to activate and isolate the transverse abdominis (TrA) muscle. [Participants and Methods] Fifty-six participants (in three age groups) took part in a single-session TrA training intervention. Aspects of postural sway, balance and muscle activation patterns were measured before and after training and compared. [Results] There was significant improvement across four of six postural sway variables for the combined sample of all age groups. Older adults improved more than younger and middle-age participants in two important postural sway variables. No marked differences were evident in static reach distance across all age groups. There were no differences between groups with regard to surface electromyography (sEMG) amplitudes despite the emergence of different activation patterns among age groups. [Conclusion] Immediate effects were induced in postural sway measures after the single-session training intervention.
By improving neuromuscular control of the TrA and maximizing the efficiency of related proximal core muscles, center of pressure (COP) sway velocities decreased during single-limb standing (SLS). abstract_id: PUBMED:21269831 Activation of transversus abdominis varies with postural demand in standing. Transversus abdominis (TrA) is a multifunctional muscle, being involved in pressure regulation within the abdominal cavity and thereby in direction independent stabilization of the spine and resistance to imposed trunk flexion moments. Indirect evidence suggests a role of TrA also in postural control of the erect human trunk. The main purpose here was to investigate if the magnitude of TrA activation is related to postural demand. Eleven healthy males performed seven different symmetrical static bilateral arm positions holding 3 kg in each hand. The arm positions were selected to systematically vary the height of the centre of mass (COM) keeping imposed moments constant and vice versa. EMG was recorded bilaterally with fine-wire intramuscular electrodes from TrA and obliquus internus (OI) and with surface electrodes from rectus abdominis (RA) and erector spinae (ES). Intra-abdominal pressure (IAP) was measured via a pressure transducer in the gastric ventricle. TrA was the only muscle that displayed activation co-varying with the vertical position of the COM. Further, TrA activation increased, together with IAP and ES activation, with imposed flexion moment, i.e. with arms extended horizontally forward. In contrast to OI, RA and ES, TrA activation was independent of the direction of the imposed moment (arms held inclined forward or backward). In conclusion, TrA activation level is uniquely associated with increased postural demand caused by elevated COM. Also, TrA appears to assist in counteracting trunk flexion via increased IAP, and contribute to general spine stabilization when the trunk is exposed to moderate flexion and extension moments. abstract_id: PUBMED:37240987 Effects of Sensorimotor Training on Transversus Abdominis Activation in Chronic Low Back Pain Patients. (1) Background: The aim of this study was to investigate and compare the effect of sensorimotor training on transversus abdominis activation. (2) Methods: Seventy-five patients with chronic low back pain were randomly assigned to one of three groups (whole body vibration training using Galileo®, coordination training using Posturomed®, or physiotherapy (control)). Transversus abdominis activation was measured by using sonography pre- and post-intervention. Second, changes in clinical function tests and their correlation with the sonographic measurements were determined. (3) Results: All three groups showed an improvement in activation of the transversus abdominis post-intervention, with the Galileo® demonstrating the largest improvement. There were no relevant (r > 0.5) correlations between activation of the transversus abdominis muscle and any clinical tests. (4) Conclusions: The present study provides evidence that sensorimotor training on the Galileo® significantly improves the activation of the transversus abdominis muscle. abstract_id: PUBMED:23968681 Effects of limiting anterior displacement of the center of foot pressure on anticipatory postural control during bilateral shoulder flexion. In bilateral shoulder flexion with the arms moving from the sides of the body to the horizontal level while standing, no preceding activation of the triceps surae (TS) with respect to focal muscles has been found.
Considering that preceding activation would offer a useful indicator of anticipatory postural control, it was attempted to induce preceding activation by limiting the anterior displacement range of the center of foot pressure in the anteroposterior direction (CoPap). Subjects were 13 healthy young adults. The 50% anterior range of CoPap displacement caused by shoulder flexion was calculated, and the floor inclined by the subject's weight when CoPap extended beyond that range. Subjects were instructed not to incline the floor during shoulder flexion. Under the limitation condition, the ankle and knee joints plantarflexed and extended at 1.1°, respectively, with no hip movement; that is, the whole body inclined backward by pivoting at the ankle. This limitation resulted in preceding muscle activation of TS as well as erector spinae and biceps femoris, and no significant differences in onset time were seen between these muscles. These results demonstrated that by limiting CoPap anterior displacement, preceding activation of TS could be induced with backward inclination of the whole body. abstract_id: PUBMED:31913747 Patterns of muscle activation and modulation of ankle intrinsic stiffness in different postural operating conditions. Intrinsic stiffness describes the dynamic relationship between imposed angular perturbations to a joint and the resulting torque response, due to intrinsic mechanical properties of muscles and joint, and inertia of the limbs. Recently, we showed that ankle intrinsic stiffness changes substantially with sway in normal standing. In the present study, we documented how ankle intrinsic stiffness changes with postural operating conditions. Subjects stood on an apparatus while subjected to ankle position perturbations in five conditions: normal standing, toe-up and toe-down standing, and backward and forward lean. In each condition, ankle intrinsic stiffness was estimated while its modulation with sway was accounted for. The results demonstrated that ankle intrinsic stiffness varies widely, from 0.08 to 0.75 of critical stiffness, across postural operating conditions; however, it is always smaller than the critical stiffness. Therefore, other contributions are necessary to ensure stable standing. The mean intrinsic stiffness was highest in forward lean and lowest in backward lean. Moreover, within each operating condition, the intrinsic stiffness changed with center-of-pressure position in one of three ways, each associated with a distinct muscle activation pattern; these include 1) monotonically increasing stiffness-center of pressure relation, associated with a progressive increase in triceps surae activation, 2) decreasing-increasing stiffness-center of pressure relation, associated with initial activation of tibialis anterior and later activation of triceps surae, and 3) monotonically decreasing stiffness-center of pressure relation, associated with decreasing activation of tibialis anterior. Thus intrinsic stiffness varies greatly within and across postural operating conditions, and a correct understanding of postural control requires accounting for such variations. NEW & NOTEWORTHY: Ankle intrinsic stiffness changes with sway in normal standing. We quantified such changes in different postural operating conditions and demonstrated that the intrinsic stiffness changes in a manner associated with different activation patterns of ankle plantarflexors and dorsiflexors, emerging in different operating conditions.
Large modulations of the intrinsic stiffness within and across postural operating conditions show that the stiffness importance and contribution change and must be accounted for in the study of postural control. abstract_id: PUBMED:30352387 Antagonist co-activation during short and medium latency responses in subjects with chronic ankle instability. Purpose: This study aims to evaluate bilateral ankle antagonist co-activation during compensatory postural responses, including short latency responses (SLR) and medium latency responses (MLR), in response to a unilateral sudden ankle inversion in subjects with chronic ankle instability (CAI). Methods: Twenty-four participants with CAI and 20 controls participated in the present study. Bilateral electromyography of peroneus longus (PL), peroneus brevis (PB), tibialis anterior (TA) and soleus (SOL) muscles was collected during a unilateral sudden inversion perturbation (30°) to assess antagonist co-activation of SOL/TA and TA/P pairs during SLR and MLR in both support and perturbed positions. Results: Compared to the control group, participants with CAI presented: (1) bilateral increased antagonist co-activation of SOL/TA MLR in the support position; and (2) decreased antagonist co-activation of TA/P MLR of uninjured limb in the support position and of injured limb in the perturbed position. Conclusions: CAI involves deregulation of ankle antagonist co-activation of MLR in both injured and uninjured limbs. abstract_id: PUBMED:35526023 Age-related changes in the activation timing of postural muscles to the prime mover muscle for bilateral arm flexion during standing. Background: We aimed to obtain the standard values of age-related changes in the activation timing of postural muscles to the prime mover muscle (anterior deltoid [AD]) for bilateral arm flexion during standing. Methods: The study participants were 276 children (aged 3-14 years) and 32 adults (aged 20-26 years). In response to a visual stimulus, participants raised both arms from a fully extended position as quickly as possible, stopped their arms voluntarily at a horizontal level at the shoulder, and maintained that position for 2 s. Ten test trials were performed. By using surface electromyography, the duration from the burst onset of the postural muscles to that of AD was measured as the starting time of the postural muscles (rectus abdominis [RA], erector spinae [ES], rectus femoris [RF], biceps femoris [BF], tibialis anterior [TA], gastrocnemius medialis [GcM], and soleus [SOL]). The starting time was presented as a negative value when the burst onset of the postural muscles preceded that of AD, which was defined as the preceding activation. A positive value for the starting time was defined as delayed activation. Results: In adults, the burst onsets of ES and BF significantly preceded that of AD. In ES, the starting time preceded the onset of AD in those aged ≥ 5-6 years; no difference with adults was found at age 13-14 years. On the other hand, in BF, significant delayed activation was found at ages 3-4 to 11-12 years. While the starting time decreased with age, no significant preceding activation similar to adults was found, even at age 13-14 years. In TA, no significant difference with the onset of AD was found at age 3-6 years, and significant delayed activation was found at age ≥ 7-8 years. Significant delayed activation in GcM, SOL, RA, and RF was observed in all age groups, and no age-related changes were observed in children.
Conclusion: These findings could provide standard values from childhood to adolescence for age-related changes in anticipatory postural muscle activity during voluntary movement while standing and contribute to applications in the fields of sports and rehabilitation. abstract_id: PUBMED:24108589 Asymmetric C(sp2)-H activation. A "niche" topic in the past decade, asymmetric C-H bond activation has been attracting growing interest over the last few years. Particularly significant advances have been achieved in the field of direct, stereoselective transformations of C(sp2)-H bonds. This Concept article intends to showcase different types of asymmetric C(sp2)-H bond activation reactions, emphasising both the nature of the stereo-discriminating step and the variability of valuable scaffolds that could be rapidly constructed by means of such strategies. abstract_id: PUBMED:16150608 The effect of postural correction on muscle activation amplitudes recorded from the cervicobrachial region. In clinical practice, postural correction is a common treatment approach for individuals with neck and shoulder pain. As chronic static muscle use is thought to be associated with the onset of some neck and shoulder pain syndromes, it is important to understand the impact a postural correction program might have on muscle activation amplitudes in the neck and shoulder regions. Normalized surface electromyographic data were recorded from the levator scapulae, upper trapezius, supraspinatus, posterior deltoid, masseter, rhomboid major, cervical erector spinae, and sternocleidomastoid muscles of the dominant side of each of eighteen healthy subjects. Subjects performed five repetitions of each of four seated typing postures (habitual, corrected, head-forward and slouched) and four standing postures (habitual, corrected, and head-forward and slouched). Repeated-measures analysis of variance models (alpha=0.05) revealed that, in sitting, postural correction tended to decrease the level of muscle activation required in all muscles studied during seated computer work; however, this finding was not statistically significant. Corrected posture in sitting did, however, produce a statistically significant reduction in muscle activity compared to forward head posture. Corrected posture in standing required more muscle activity than habitual or forward head posture in the majority of cervicobrachial and jaw muscles, suggesting that a graduated approach to postural correction exercises might be required in order to train the muscles to appropriately withstand the requirements of the task. A surprising finding was that muscle activity levels and postural changes had the largest impact on the masseter muscle, which demonstrated activation levels in the order of 20% maximum voluntary electrical activation. Answer: The normal postural responses preceding shoulder flexion do not involve bilateral anticipatory co-activation of the transverse abdominis (TrA). Instead, there is asymmetric activation of the TrA muscle. A study evaluating bilateral anticipatory activity of trunk muscles during unilateral arm flexion found that the first muscle activated in anticipation of the right arm flexion was the left transverse abdominis/internal oblique (TA/IO). The right TA/IO activated significantly later than all other trunk muscles and had significantly lower anticipatory activity than all other trunk muscles.
There was no significant difference in either onset latency or anticipatory activity among other trunk muscles, indicating that healthy subjects showed no bilateral anticipatory co-activation of TA/IO in unilateral arm elevation (PUBMED:24867902). This finding suggests that during unilateral arm movements, such as shoulder flexion, the activation pattern of the TrA is not symmetrical or co-activated across both sides of the body. Instead, the muscle on the opposite side of the arm being flexed activates first, which may be part of a strategy to stabilize the trunk and maintain balance during the movement. Further research is needed to delineate the normal muscle activation pattern in healthy subjects before prescribing bilateral activation training of the transverse abdominis for subjects with chronic low back pain (PUBMED:24867902).
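Most of the EMG studies in this record hinge on one derived quantity: the onset of a postural muscle's burst relative to the prime mover (the anterior deltoid), with negative relative latencies read as anticipatory (feedforward) activation. The cited abstracts do not specify their onset-detection algorithms, so the sketch below uses one common convention (rectified signal exceeding baseline mean + 3 SD for a minimum hold time); the parameters are assumptions, not values from the papers.

```python
import numpy as np

def burst_onset(emg: np.ndarray, fs: float, baseline_s: float = 0.5,
                k: float = 3.0, hold_ms: float = 25.0) -> float:
    """Return the burst onset time (s) of a raw EMG trace.

    Onset = first sample where the rectified, demeaned signal stays above
    baseline mean + k*SD for `hold_ms` milliseconds. The threshold rule
    and parameters are illustrative assumptions, not taken from the
    cited studies.
    """
    rect = np.abs(emg - emg.mean())
    n_base = int(baseline_s * fs)            # quiet-standing baseline window
    thr = rect[:n_base].mean() + k * rect[:n_base].std()
    hold = int(hold_ms / 1000.0 * fs)
    above = rect > thr
    for i in range(n_base, len(rect) - hold):
        if above[i:i + hold].all():
            return i / fs
    return float("nan")                      # no burst detected

def relative_latency(postural: np.ndarray, prime_mover: np.ndarray,
                     fs: float) -> float:
    """Negative values = the postural muscle fired before the prime mover,
    i.e. anticipatory (feedforward) activation."""
    return burst_onset(postural, fs) - burst_onset(prime_mover, fs)
```

Applied to the PUBMED:24867902 protocol, a negative value for the left TA/IO relative to the right AD would mark the anticipatory activation described in the answer, and a positive value for the right TA/IO would mark its delayed onset.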
Instruction: Is the prosthesis length in malleostapedotomy for otosclerosis revision surgery predictable? Abstracts: abstract_id: PUBMED:24836596 Is the prosthesis length in malleostapedotomy for otosclerosis revision surgery predictable? Hypothesis: The prosthesis length in malleostapedotomy for otosclerosis revision surgery can be calculated if the prosthesis length of previous conventional incus stapedotomy is known. Background: Malleostapedotomy is used in revision surgery for otosclerosis in case of severe incus erosion and malleus or incus fixation. Methods: Fifteen cadaveric human temporal bone specimens were investigated by means of micro-CT followed by 3D analysis. The distance between the incus and the stapes footplate as well as distances between the malleus and the footplate were measured and compared. Results: The required length of virtually bent prostheses corresponded to approximately 1.5 times the length of virtual stapes prostheses in 93% of cases. The addition of 2 mm predicted the required prosthesis length almost correctly in 80% of cases. Conclusion: The clinical practice will show whether a preoperative calculation of expected prosthesis length in MS based on the length of the formerly used stapes prosthesis is possible and helpful. abstract_id: PUBMED:25862214 Design-Dependent Calculation of the Prosthesis Length in Malleostapedotomy. Objective: The results of a recently published micro-CT study suggested a correlation of the distance between long incus process and stapes footplate and the required prosthesis length in malleostapedotomy. The goal of this study was to test the reliability of that assumption. Methods: Rectangular and bent prostheses were tested in 11 cadaveric human temporal bone specimens; 1 of them showed a stapedial artery. Prosthesis length was calculated based on the distance between long incus process and stapes footplate. The rate of acceptable prosthesis insertion into the vestibule was investigated. Results: In both prosthesis designs, the insertion depth into the vestibule did not exceed 1.0 mm. Two bent prostheses (18%) and one rectangular prosthesis (9%) did not pass the footplate level. Conclusion: A rough estimation of the required prosthesis length in malleostapedotomy seems possible if the distance between long incus process and stapes footplate is known and a design-dependent equation exists. abstract_id: PUBMED:29658107 Malleostapedotomy in stapes revision surgery: Is an endoscopic approach possible? Objectives/hypothesis: The purpose of the present study was to show our preliminary results regarding the application of endoscopic ear surgery in malleostapedotomy. Study Design: Case series. Methods: Six patients (four females and two males) who underwent endoscopic ear surgery as revision surgery for otosclerosis were enrolled in this study. For endoscopic malleostapedotomy surgery, rigid angled endoscopes at 0° and 30° with a length of 14 cm and an outer diameter of 3 and 4 mm were used. A superelastic nitinol stapes prosthesis was used to connect the malleus neck to the oval window. Results: During endoscopic ear surgery, a lateral ossicular chain fixation was present in all cases. The mean operative time in the study group was 81 minutes (range, 73-89 minutes). A statistically significant difference between preoperative and postoperative hearing findings was observed (P = .007). No cases of sensorineural hearing loss or persistent vertigo were observed in the study sample.
No cases of prosthesis displacement or prosthesis extrusion occurred after a mean follow-up of 11.5 months. Conclusions: In 100% of treated cases, the endoscope proved to be adequate for the visualization of the malleus handle and incus, and of their possible abnormalities. Despite the difficulty related to endoscopic single-handed work, none of the surgical steps of endoscopic malleostapedotomy was considered infeasible by endoscopic ear surgery. Level Of Evidence: 4. abstract_id: PUBMED:29779038 Malleostapedotomy with the self-fixing and articulated titanium piston. Objective: To analyze the results of malleostapedotomy performed by applying the self-fixing and articulated titanium piston according to Häusler. Study Design: Retrospective case review. Setting: Tertiary referral center. Patients And Interventions: This study concerns a retrospective analysis of the results of malleostapedotomy with the use of a self-fixing articulated titanium piston in 16 ears of 16 consecutively treated patients between 2005 and 2009. The medical files were used for the acquisition of data on medical and surgical history and to obtain pre- and postoperative audiometry. Diagnosis and outcomes of mainly revision surgeries are presented and compared to the literature. Main Outcome Measures: Effect of (revision) malleostapedotomy by evaluating postoperative audiometry and air-bone gap closure. Results: The postoperative air-bone gap closure was ≤ 10 dB in 9/16 (56%) ears and ≤ 20 dB in 13/16 (81%) ears. The mean postoperative air-bone gap was 14.3 dB HL (0.5-2.0 kHz) and 17.3 dB HL (0.5-4.0 kHz). Postoperatively, there was no increase in bone conduction thresholds larger than 3 dB (0.5-2.0 kHz), and postoperative dizziness was absent or very limited and transient. Conclusions: The malleostapedotomy procedure has become surgically less demanding over time owing to the technical improvements in currently available pistons. The design of the self-fixing and articulated titanium piston used in the present group of patients allows a safe and straightforward malleostapedotomy procedure. Present hearing outcomes match results presented in the literature. abstract_id: PUBMED:11698795 Malleostapedotomy in revision surgery for otosclerosis. Purpose: The purpose of this study was to analyze the results of malleostapedotomy and to compare them with those of a conventional incus stapedotomy in a series of 82 consecutive surgical revisions in otosclerotic patients. Materials And Methods: 82 consecutive revision stapes surgery cases over 5 years were evaluated. The preoperative and postoperative audiometric data of 80 (97.5%) of the patients were obtained. Results: 71 of the patients underwent a functional revision procedure as malleostapedotomy (56, 79%) or as incus stapedotomy (15, 21%). The most common cause of failure of primary surgery was a displaced or malfunctioning prosthesis (86.2%). Pathologic changes of the oval window were found in 80% of the cases. Problems of the incus were identified in 80% and abnormality of the malleus in 48.6% of the cases. The functional success rate of malleostapedotomy (closure within 10 dB) was found to be higher than that of traditional incus stapedotomy (p < 0.05). Overclosure was seen in 12 patients (17%) and a significant sensorineural hearing loss in 2 patients (3%). There were no dead ears in this series.
The postoperative hearing results after first revision surgery were better than those after multiple surgical procedures (p < 0.05). Conclusions: Malleostapedotomy yields better functional hearing results than incus stapedotomy in revision surgery for otosclerosis. The detection of many malleus fixations was the result of the systematic exposure of the anterior malleal process and ligament through an endaural approach with superior canaloplasty. abstract_id: PUBMED:26124693 The Relationship between Stapes Prosthesis Length and Rate of Stapedectomy Success. Objectives: To identify whether measurement of the prosthesis length is mandatory in patients requiring otosclerosis surgeries and to assess their postoperative audiometric outcomes. In addition, evaluation of prosthesis length used in revision compared to primary stapedectomy was carried out. Study Design: Case series with chart review. Methods: Chart review of 393 patients undergoing primary (321) versus revision stapedectomy (72) was performed in a tertiary referral center. The indication for surgery was the presence or persistence/recurrence of an air-bone gap (ABG) greater than 20 dB. Air and bone conduction thresholds (ACT and BCT, respectively), ABG as well as pure tone averages (PTAs) were determined for all patients, and the results were compared preoperatively and postoperatively. Results: Prosthesis length used ranged from 3.0 to 6.0 mm without differences between primary and revision groups. Of the revision surgeries, 62.5% were stapedectomies versus stapedotomies (P < 0.001). Patients showed a significant decrease in speech discrimination score, with increased air and bone conduction thresholds as well as mean ABG and PTA before the revision surgeries, reflecting failure of the first procedure. Prosthesis length changes occurred in 73.5% of the cases, with an average absolute change of 0.55 mm. Prosthesis length did not affect postoperative audiometric results between primary and revision groups, in all surgeries combined. When grouping stapes surgery into accurately versus inaccurately measured incus-footplate distance, significant differences were observed in prosthesis length employed (P < 0.01). Hearing outcomes were also better in the group in which an accurately measured prosthesis was chosen, as opposed to a "standard-length" prosthesis. Conclusions: This study corroborates postoperative success rates of revision surgeries, which show smaller improvements in hearing compared to a primary intervention. Accurate intra-operative measurement of prosthesis length was correlated with better audiometric results postoperatively. abstract_id: PUBMED:22992269 From incus bypass to malleostapedotomy: technical improvements and results. Objective: To assess results of malleostapedotomy using a Fisch Storz titanium piston with at least 10 months' follow up. Methods: Using a prospective database, the indications, surgical technique, and pre- and post-operative audiometric data for 60 patients undergoing malleostapedotomy between 2002 and 2010 were evaluated. Diagnoses and primary and revision surgeries were compared with reference to the literature. Results: Sixty endaural malleostapedotomies were performed, 28 as a primary intervention and 32 as revision surgery. In 68 per cent, the underlying pathology was otosclerosis. The most common reason for revision surgery (i.e. in 59 per cent) was prosthesis dysfunction.
Overall, the mean air-bone gap (0.5-3 kHz) for the primary intervention and revision surgery groups was 9.4 and 11.3 dB, respectively; an air-bone gap of less than 20 dB was obtained in 100 and 81 per cent of patients, respectively. There was no significant audiological difference between the primary and revision surgeries groups, and no deafness. Conclusion: Malleostapedotomy shows comparable results to standard incus-stapedotomy and may be preferable in the presented situations. abstract_id: PUBMED:17245050 Malleostapedotomy: the Marburg experience. Background: The routine surgical procedure for patients with otosclerosis is incus stapedotomy. In case of otosclerosis with incus necrosis or a bony fixation of the malleus and incus, malleostapedotomy is performed. Patients And Methods: Between May 2002 and September 2003, malleostapedotomy was performed in 6 out of 34 patients with otosclerosis. In 2 primary cases, a middle ear dysplasia was found. The malleus was fixed in 2 further primary cases. Two revision surgeries were performed with incus necrosis present. A titanium piston was used, which was fixed at the malleus handle and introduced into an opening of the footplate. Results: The preoperative air-bone gap was reduced from 36 dB(A) to 13 dB(A) after surgery for an average checkup time of 3 months. The length of the prostheses varied from 6.3 to 7.5 mm. No patient showed a hearing loss or vertigo after surgery. Conclusion: Malleostapedotomy is the technique of choice in case of an additional pathology of the ossicular chain in patients with otosclerosis. Larger numbers of patients and long-term investigations are needed to compare the results of malleostapedotomy with those of a conventional incus stapedotomy. abstract_id: PUBMED:32306629 The comparison of malleostapedotomy in primary and revision stapes surgery for otosclerosis. Objective: To compare the effectiveness of primary malleostapedotomy with revision malleostapedotomy for otosclerosis. Methods: From April 2002 to December 2017, 70 consecutive patients with otosclerosis who underwent malleostapedotomy were reviewed. Depending on the primary malleostapedotomy (P-MS) or revision malleostapedotomy (R-MS), the patients were divided into the P-MS group or the R-MS group. The intraoperative findings and hearing results before and after surgery were compared between the two groups. All data were analyzed using SPSS 23. Results: In total, 73 malleostapedotomies were performed in 73 ears of 70 patients, including 38 P-MS and 35 R-MS. There was no significant difference between the two groups in sex ratio, age and operated ears (P>0.05 for all). The most common finding at P-MS was incus fixation (50.0%, 19/38) versus prosthesis displacement for R-MS (60.0%, 21/35). Overall, the air-bone gap (ABG) improvement in P-MS was (18.1±8.2) dB in 0.5-3 kHz and (18.3±8.5) dB in 0.5-4 kHz, with no significant difference from R-MS (P>0.05). 31.4% of R-MS in 0.5-3 kHz and 22.9% of R-MS in 0.5-4 kHz achieved an ABG<10 dB, significantly lower than those of P-MS (65.8% in 0.5-3 kHz and 57.9% in 0.5-4 kHz; P<0.05). Failure (postoperative ABG>30 dB) occurred in 11.4% in R-MS and 0 in P-MS (for 0.5-3 kHz and 0.5-4 kHz). The incidence of postoperative sensorineural hearing loss (>10 dB increase in bone conduction) in the R-MS group was 8.6% in 0.5-3 kHz and 0.5-4 kHz, with no significant difference from P-MS (P>0.05). 80.0% (20/25) of first R-MS achieved ABG<20 dB, compared to 37.5% (3/8) of second R-MS with ABG<20 dB.
Conclusions: Although both P-MS and R-MS can significantly improve hearing, with similar risk of inner ear damage, R-MS is less effective and poses a higher risk of failure than P-MS. For patients with insufficient hearing improvement after first R-MS, conventional hearing aids or implantable hearing devices may be considered as an alternative. abstract_id: PUBMED:30054728 Predictable prosthesis length on a high-resolution CT scan before a stapedotomy. Objective: (1) To assess the correlation between preoperative high-resolution CT (HRCT) imaging measurement from the long process of the incus to the footplate and the length of intraoperative selected prosthesis. (2) To determine if HRCT has a predictive value of prolapsed facial nerve during stapedotomy. Materials And Methods: We evaluated in our tertiary care center, in a retrospective case series, 94 patients undergoing primary stapedotomy. Preoperative temporal bone HRCT scans were reformatted in the plane of the stapes to measure on the same section the distance between the long process of the incus and the footplate. Measurement was performed by otolaryngology resident and neurotologist. We analyze the interobserver correlation and the mean length measured on the HRCT to the selected prosthesis size intraoperatively. Results: Mean HRCT measurement of the incus long process/footplate distance assessed by the otolaryngology resident and neurotologist was 4.34 and 4.38 mm, respectively. Interobserver correlation was statistically significant [intraclass correlation coefficient (ICC) of 0.679 (p &lt; 0.001)]. Mean selected prosthesis length intraoperatively was 4.36 mm. Correlation between the mean selected length prosthesis and the mean HRCT measurement was also statistically significant [ICC of 0.791 (p &lt; 0.001)]. Postoperatively, a statistical improvement was shown in air conduction (p &lt; 0.001), bone conduction (p &lt; 0.001) and air-bone gap reduction (p &lt; 0.001). 2 cases of facial nerve covering one-half of the oval window were identified by HRCT and confirmed intraoperatively. Conclusion: HRCT is a valuable tool to predict preoperatively the length of the stapedotomy prosthesis. Moreover, it might be helpful to identify a potential prolapsed facial nerve, to confirm the diagnosis of otosclerosis and to rule out other abnormalities. Ultimately, it may optimize the stapedotomy procedure planning. Answer: The prosthesis length in malleostapedotomy for otosclerosis revision surgery appears to be predictable to some extent. Studies have suggested that the prosthesis length required for malleostapedotomy can be estimated based on the known distance between the long incus process and the stapes footplate, and that a design-dependent equation may facilitate this estimation (PUBMED:25862214). Additionally, a micro-CT study indicated that the length of virtually bent prostheses corresponded approximately to the 1.5-fold of virtual stapes prostheses in 93% of cases, and adding 2 mm predicted the required prosthesis length almost correctly in 80% of cases (PUBMED:24836596). Furthermore, high-resolution CT scans have been used to measure the distance between the long process of the incus and the footplate preoperatively, showing a statistically significant correlation between the measured distance and the intraoperatively selected prosthesis length (PUBMED:30054728). This suggests that preoperative imaging can be a valuable tool in predicting the necessary prosthesis length for malleostapedotomy. 
However, it is important to note that while these methods can provide a rough estimation, individual variations and surgical factors may still influence the final prosthesis length required. The success of the prediction also depends on the surgeon's experience and the specific circumstances of the revision surgery, such as the presence of incus erosion or malleus fixation (PUBMED:24836596, PUBMED:25862214, PUBMED:29658107, PUBMED:29779038, PUBMED:11698795, PUBMED:26124693, PUBMED:22992269, PUBMED:17245050, PUBMED:32306629). Therefore, while predictive methods can be helpful, they may not be infallible, and intraoperative judgment remains crucial.
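To make the estimation heuristics in the answer above concrete, here is a minimal sketch of the two length rules reported in the cited micro-CT study (PUBMED:24836596). The function name and the packaging as code are illustrative assumptions of this summary, not something proposed in the sources, and the output is no substitute for the intraoperative measurement the answer emphasizes.

```python
def estimate_malleostapedotomy_length_mm(stapes_prosthesis_mm: float) -> dict:
    """Rough estimates of the malleostapedotomy prosthesis length from the
    incus-stapedotomy (stapes) prosthesis length, using the two heuristics
    reported in the micro-CT study (PUBMED:24836596)."""
    return {
        "1.5x_rule": 1.5 * stapes_prosthesis_mm,      # held in ~93% of virtual cases
        "plus_2mm_rule": stapes_prosthesis_mm + 2.0,  # almost correct in ~80% of cases
    }

# With the ~4.36 mm mean stapes prosthesis length reported in PUBMED:30054728,
# both rules land inside the 6.3-7.5 mm malleostapedotomy range of PUBMED:17245050:
print(estimate_malleostapedotomy_length_mm(4.36))
# -> roughly {'1.5x_rule': 6.54, 'plus_2mm_rule': 6.36}
```

That the two independently reported rules agree with the prosthesis lengths actually used in the Marburg series is a reassuring cross-check, though, as the answer notes, neither rule replaces intraoperative judgment.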
Instruction: Consenting for current genetic research: is Canadian practice adequate? Abstracts:
abstract_id: PUBMED:32979650 Electronic consenting for conducting research remotely: A review of current practice and key recommendations for using e-consenting. Background: Electronic approaches are becoming more widely used to obtain informed consent for research participation. Electronic consent (e-consent) provides an accessible and versatile approach to the consenting process, which can be enhanced with audio-visual and interactive features to improve participant engagement and comprehension of study procedures. Best practice guidance underpinned by ethical principles is required to ensure effective implementation of e-consent for use in research. Aim: To identify the key considerations for successful and ethical implementation of e-consent in the recruitment of participants to research projects which are conducted remotely. Methods: Electronic database searches of CINAHL, Medline, Embase, DARE, HTA, PubMed, the Cochrane Library, Scopus, Web of Science, NHS Evidence, and hand-searches of reference lists were performed. Primary research studies of adult (≥ 18 years old) research participants using e-consent, published in English-language, peer-reviewed journals between 2010-2020, were eligible for inclusion. Results: Of the initial 665 identified studies, 18 met the inclusion criteria: 6 cohort studies, 5 qualitative studies, 4 randomised control trials, 2 mixed-methods studies and one case-control study. Critical appraisal of included studies using Critical Appraisal Skills Program (CASP) tools suggested a low to moderate risk of bias in most studies (n = 15). Key practice recommendations for researchers using e-consent were identified around five primary themes: 1) accessibility and user-friendliness of e-consent, 2) user engagement and comprehension, 3) customisability to participant preferences and demographics, 4) data security and 5) impact on research teams. Conclusion: E-consenting approaches are generally well received by participants, with most studies reporting user-friendly interfaces and sufficient participant comprehension of consenting documentation. Implications For Practice: E-consent may facilitate remotely-conducted research by offering a feasible and robust alternative to face-to-face consenting approaches; however, paper-based options should still be offered, based on participant preference. Customising e-consenting platforms may improve accessibility for individuals with specific needs, and increase engagement with study information. Research teams must offer prospective participants opportunities to discuss study information in real-time.
abstract_id: PUBMED:25410958 Consenting for current genetic research: is Canadian practice adequate? Background: In order to ensure an adequate and ongoing protection of individuals participating in scientific research, the impacts of new biomedical technologies, such as Next Generation Sequencing (NGS), need to be assessed. In this light, a necessary reexamination of the ethical and legal structures framing research could lead to requisite changes in informed consent modalities. This would have implications for Institutional Review Boards (IRBs), who bear the responsibility of guaranteeing that participants are verifiably informed, and in sufficient detail, to understand the reality of genetic research as it is practiced now. Current literature allowed the identification of key emergent themes related to the consent process when NGS was used in a research setting. Methods: We examined the subjects of secondary use, sharing of materials and data, and recontacting participants as outlined in the Canadian Informed Consent templates and the accompanying IRB instructions for the conduct of genetic research. The research ethics policy applied by the three Canadian research agencies (Tri-Council Policy Statement, 2nd Edition) was used to frame our content analysis. We also obtained IRB-approved consent forms for genetic research projects on brain and mental health disorders as an example of a setting where participants might present higher-than-average vulnerability. Results: Eighty percent of documents addressed different modalities for the secondary use of material and/or data, although the message was not conveyed in a systematic way. Information on the sharing of genetic sequencing data in a manner completely independent of the material from which it originated was absent. Grounds for recontacting participants were limited, and mainly mentioned to obtain consent for secondary use. A feature of the IRB-approved consent documents for genetic studies on brain and mental health disorders using NGS technologies offered a complete explanation on sharing material and data and the use of databases. Conclusions: The results of our work show that in Canada, many NGS research needs are already dealt with. Our analysis led us to propose the addition of well-defined categories for future use, adding options on the sharing of genetic data, and widening the grounds on which research participants could consent to be recontacted.
abstract_id: PUBMED:35830287 Current practice in diagnostic genetic testing of the epilepsies. Epilepsy genetics is a rapidly developing field, in which novel disease-associated genes, novel mechanisms associated with epilepsy, and precision medicine approaches are continuously being identified. In the past decade, advances in genomic knowledge and analysis platforms have begun to make clinical genetic testing accessible for, in principle, people of all ages with epilepsy. For this reason, the Genetics Commission of the International League Against Epilepsy (ILAE) presents this update on clinical genetic testing practice, including current techniques, indications, yield of genetic testing, recommendations for pre- and post-test counseling, and follow-up after genetic testing is completed. We acknowledge that the resources vary across different settings but highlight that genetic diagnostic testing for epilepsy should be prioritized when the likelihood of an informative finding is high. Results of genetic testing, in particular the identification of causative genetic variants, are likely to improve individual care. We emphasize the importance of genetic testing for individuals with epilepsy as we enter the era of precision therapy.
abstract_id: PUBMED:26242468 Genomic Testing: a Genetic Counselor's Personal Reflection on Three Years of Consenting and Testing. Whole exome sequencing (WES) is increasingly used in research and clinical genetics as the cost of sequencing decreases and the interpretation improves. Genetic counselors need to be prepared to counsel a diverse patient population for this complex test. This commentary is a reflection of one genetic counselor's experiences in counseling, consenting, and returning results for clinical and research WES for over 120 participants and patients. She reflects on how she overcame the initial challenges and concerns of counseling for WES and how her counseling evolved from a teaching-based counseling model to an interactive, patient-centered counseling model. Her insights are offered to prepare other genetic counselors for the growing use of genomic testing.
abstract_id: PUBMED:37713927 Overcoming recruitment hurdles in clinical trials: An investigation of remote consenting modalities and their impact on workload, workflow, and usability. Travel constraints can hinder the recruitment of eligible research participants in clinical trials, causing research timeline extensions, added costs, underpowered results, and early termination. Remote consenting can help solve these issues by allowing researchers and potential participants to connect remotely. This controlled experimental study investigates the effect of remote consenting on workload, workflow, usability, and barriers and facilitators to its implementation. Using a mixed experimental design, simulated consenting sessions were conducted with three different modalities (remote paper consent, eConsent, and teleconsent) with 23 researchers and 93 research participants. Each session involved a single researcher who experienced all three modalities, while three different research participants were assigned to each modality individually. Research participants and researchers completed surveys measuring workload and usability. Remote consenting allows researchers and participants to connect at their preferred location and time, and teleconsenting was found to be the preferred modality by the researchers, primarily due to its ability to exchange visual cues. However, challenges such as training requirements and technology dependence need to be addressed for widespread implementation. Future research should aim to eliminate these barriers and improve remote consenting modalities to facilitate clinical research participation.
abstract_id: PUBMED:29462836 Current practice for genetic counselling by nurses: An integrative review. Aim: To examine current practice of genetic counselling by nurses. Background: Recent debate argues that genetic counselling is a specialist advanced practice role, whilst others argue it is the role of all nurses. Current evidence is required to determine if genetic counselling could be included in all nurses' scope of practice. Design: Integrative literature review. Data Sources: A search of electronic databases (CINAHL, Medline, PubMed, Scopus), and reference lists published between January 2012 and March 2017, was undertaken. Review Methods: Studies were critically appraised for methodological quality using the Critical Appraisal Skills Programme. Data from each study were extracted and categorized according to their primary findings. Results: The inclusion criteria were met in 10 studies. Main findings were identified: role of genetic counselling, current knowledge, need for further education, and client satisfaction with nurse genetic counsellors. Conclusion: This paper concludes that some nurses do engage in genetic counselling, but how they engage is not consistent, nor is there consensus about what should be the scope of practice. Further investigation into credentialing, role recognition support and education for nurse genetic counselling is strongly recommended. As nurses are widely available, nurses can make a significant contribution to supporting those affected by genetic problems.
abstract_id: PUBMED:34755558 Consenting patients for elective procedures during the pandemic: Are we consenting for risk of nosocomial COVID-19 infection. Introduction: Nosocomial COVID-19 increases morbidity and mortality in patients undergoing surgical procedures. This study assesses the consenting process in patients admitted for surgical procedures with regard to risks of contracting nosocomial COVID-19 infection during the three lockdown periods in the United Kingdom. Methods: Retrospective review of consecutive surgical patients admitted to our tertiary referral centre for surgical procedures during the lockdown periods in the United Kingdom. Data from our hospital's electronic theatre database cross-referenced with the online surgical operative, admission and discharge records were reviewed by three independent reviewers. Discussion: A total of 180 patients (104 males and 76 females) were studied. No patients tested positive perioperatively for COVID-19. The first lockdown had a significantly larger proportion of consultants consenting (P < 0.001). Surgeons consented patients for risk of COVID-19 infection in 34.4% of cases, COVID-19-related illness in 33.9%, inpatient Intensive Care Unit (ITU) admission secondary to COVID-19 infection and risk of death due to COVID-19 in 0.0%, and risk of death secondary to inpatient COVID infection in 1.1%. Conclusion: As surgical activity continues and COVID-19 persists, surgeons should be vigilant and ensure proper documentation for consent regarding COVID-19-related complications in line with the Royal College of Surgeons of England guidelines.
abstract_id: PUBMED:17002856 Variations in consenting practice for laparoscopic cholecystectomy. Introduction: To compare the variations in consenting practice amongst trainees and consultant surgeons for laparoscopic cholecystectomy with specific reference to the documentation of significant risks of surgery. Patients And Methods: A proforma was devised which included significant and/or commonly recognised complications of laparoscopic cholecystectomy. This was then cross-referenced with the consent forms for the 80 patients included in the study and the documented risks explained in each case were noted. Results: The results showed that there is considerable variation between the three grades of clinicians involved in obtaining a patient's consent for laparoscopic cholecystectomy. There was a clear difference in emphasis of the significant complications depending on the seniority of the consenter. Over 80% of the consents in this study were still being obtained by junior staff. Conclusions: More often than not, patients are not provided with consistent information to make an informed choice. We suggest that a preprinted consent form will provide a more uniform approach to consenting practice for laparoscopic cholecystectomy.
abstract_id: PUBMED:21770783 Patients' perceptions of the process of consenting to electroconvulsive therapy. Background: The concept of valid consent has become important for electroconvulsive therapy (ECT). However, many patients feel that they do not have enough information before consenting and a significant minority feel coerced into consenting. Little is known about what factors account for these views. Aim: To explore patients' perceptions about how they consented to ECT. Method: Twelve participants were interviewed about their experiences of consenting to ECT. Interviews were subjected to a thematic analysis. Results: Participants' perceptions of consenting to ECT were complex, and interpersonal factors were found to be important. Many participants felt that they had consented without adequate information from medical sources and that they had little choice but to agree. Two participants consented to ECT as either a form of self-harm or hoping it would kill them. Conclusion: Consenting to ECT is more complex than currently recognised and involves interpersonal and systemic factors. As a result, people may consent because they feel that they have little choice. Implications for practice are discussed.
abstract_id: PUBMED:8009517 Genetic disease in general practice. An interview study among general practitioners in Oppland and Oslo. We interviewed 51 Norwegian general practitioners selected at random about their knowledge and practice of medical genetics. Of these doctors, 29 worked in Oslo and 22 in Oppland county. About 90% of the doctors working in Oslo knew where to refer patients for genetic counselling, while 55% of the doctors working in the area outside Oslo had this knowledge. We believe that this difference is because Oslo has a municipal genetic clinic and is the only area in Norway with an adequate genetic counselling service according to WHO standards. This article presents the results from this survey, and discusses genetic diseases in general practice and the organisation of medical genetic services in Norway.
Answer: Based on the abstract from PUBMED:25410958, Canadian practice for consenting in current genetic research appears to be adequate in some areas but requires improvements in others. The study examined the informed consent templates and IRB instructions for genetic research in Canada, particularly focusing on issues such as secondary use, sharing of materials and data, and recontacting participants. The findings indicated that 80% of documents addressed the secondary use of material and/or data, but the information was not systematically conveyed. There was a lack of information on the sharing of genetic sequencing data independent of the material from which it originated. Additionally, the grounds for recontacting participants were limited and mainly mentioned to obtain consent for secondary use. However, the IRB-approved consent documents for genetic studies on brain and mental health disorders using Next Generation Sequencing (NGS) technologies provided a complete explanation on sharing material and data and the use of databases. The study concluded that many needs of NGS research are already addressed in Canada, but there is room for improvement. The authors proposed adding well-defined categories for future use, options on the sharing of genetic data, and widening the grounds on which research participants could consent to be recontacted. In summary, while Canadian practice for consenting in genetic research has addressed several key areas, there is a need for more systematic communication and expanded consent options to fully meet the requirements of modern genetic research.
Instruction: Latent tuberculous infection in the United States and Canada: who completes treatment and why? Abstracts:
abstract_id: PUBMED:30226174 Simple Estimates for Local Prevalence of Latent Tuberculosis Infection, United States, 2011-2015. We used tuberculosis genotyping results to derive estimates of prevalence of latent tuberculosis infection in the United States. We estimated <1% prevalence in 1,981 US counties, 1%-<3% in 785 counties, and >3% in 377 counties. This method for estimating prevalence could be applied in any jurisdiction with an established tuberculosis surveillance system.
abstract_id: PUBMED:37917562 Tuberculosis Testing and Latent Tuberculosis Infection Treatment Practices Among Health Care Providers - United States, 2020-2022. CDC recommends testing persons at increased risk for tuberculosis (TB) infection as part of routine health care, using TB blood tests, when possible, and, if a diagnosis of latent TB infection (LTBI) is made, prescribing a rifamycin-based, 3- or 4-month treatment regimen (short-course) to prevent the development of TB disease. In 2022, approximately three quarters (73%) of reported TB cases in the United States occurred among non-U.S.-born persons. To assess TB-related practices among health care providers (HCPs) in the United States, CDC analyzed data from the 2020-2022 Porter Novelli DocStyles surveys. Approximately one half (53.3%) of HCPs reported routinely testing non-U.S.-born patients for TB, and of those who did, 35.7% exclusively ordered recommended blood tests, 44.2% exclusively ordered skin tests, and 20.2% ordered TB skin tests and blood tests. One third (33.0%) of HCPs reported prescribing recommended short-course LTBI treatment regimens, and 4.0% reported doing none of the treatment practices available for patients with LTBI (i.e., prescribing short-course regimens, longer course regimens, or referring patients to a health department). Further efforts are needed to identify and overcome barriers for providers to test for and treat persons at risk for TB.
abstract_id: PUBMED:33137172 Prevalence of Latent Tuberculosis Infection Among Non-US-Born Persons by Country of Birth-United States, 2012-2017. Background: Most tuberculosis (TB) disease in the United States (US) is attributed to reactivation of remotely acquired latent TB infection (LTBI) in non-US-born persons who were likely infected with Mycobacterium tuberculosis in their countries of birth. Information on LTBI prevalence by country of birth could help guide local providers and health departments to scale up the LTBI screening and preventive treatment needed to advance progress toward TB elimination. Methods: A total of 13 805 non-US-born persons at high risk of TB infection or progression to TB disease were screened for LTBI at 16 clinical sites located across the United States with a tuberculin skin test, QuantiFERON Gold In-Tube test, and T-SPOT.TB test. Bayesian latent class analysis was applied to test results to estimate LTBI prevalence and associated credible intervals (CrIs) for each country or world region of birth. Results: Among the study population, the estimated LTBI prevalence was 31% (95% CrI, 26%-35%). Country-of-birth-level LTBI prevalence estimates were highest for persons born in Haiti, Peru, Somalia, Ethiopia, Vietnam, and Bhutan, ranging from 42% to 55%. LTBI prevalence estimates were lowest for persons born in Colombia, Malaysia, and Thailand, ranging from 8% to 13%. Conclusions: LTBI prevalence in persons born outside the US varies widely by country. These estimates can help target community outreach efforts to the highest-risk groups.
abstract_id: PUBMED:26866439 The Prevalence of Latent Tuberculosis Infection in the United States. Rationale: Individuals with latent tuberculosis infection (LTBI) represent a reservoir of infection, many of whom will progress to tuberculosis (TB) disease. A central pillar of TB control in the United States is reducing this reservoir through targeted testing and treatment. Objectives: To estimate the prevalence of LTBI in the United States using the tuberculin skin test (TST) and an IFN-γ release assay. Methods: We used nationally representative data from the 2011-2012 National Health and Nutrition Examination Survey (n = 6,083 aged ≥6 yr). LTBI was measured by both the TST and QuantiFERON-TB Gold In-Tube test (QFT-GIT). Weighted population, prevalence, and multiple logistic regression were used. Measurements And Main Results: The estimated prevalence of LTBI in 2011-2012 was 4.4% as measured by the TST and 4.8% by QFT-GIT, corresponding to 12,398,000 and 13,628,000 individuals, respectively. Prevalence declined slightly since 2000 among the U.S. born but remained constant among the foreign born. Earlier birth cohorts consistently had higher prevalence than more recent ones. Higher risk groups included the foreign born, close contact with a case of TB disease, and certain racial/ethnic groups. Conclusions: After years of decline, the prevalence of LTBI remained relatively constant between 2000 and 2011. A large reservoir of 12.4 million still exists, with foreign-born persons representing an increasingly larger proportion of this reservoir (73%). Estimates and risk factors for LTBI were generally similar between the TST and QFT-GIT. The updated estimates of LTBI and associated risk groups can help improve targeted testing and treatment in the United States.
abstract_id: PUBMED:36703605 Diagnosis, Treatment, and Prevention of Tuberculosis Among People Experiencing Homelessness in the United States: Current Recommendations. Objective: Tuberculosis (TB) is a public health problem, especially among people experiencing homelessness (PEH). The Advisory Council for the Elimination of Tuberculosis issued recommendations in 1992 for TB prevention and control among PEH. Our goal was to provide current guidelines and information in one place to inform medical and public health providers and TB programs on TB incidence, diagnosis, and treatment among PEH. Methods: We reviewed and synthesized diagnostic and treatment recommendations for TB disease and latent TB infection (LTBI) as of 2022 and information after 1992 on the magnitude of homelessness in the United States, the incidence of TB among PEH, the role of public health departments in TB case management among PEH, and recently published evidence. Results: In 2018, there were 1.45 million estimated PEH in the United States. During the past 2 decades, the incidence of TB was >10 times higher and the prevalence of LTBI was 7 to 20 times higher among PEH than among people not experiencing homelessness. TB outbreaks were common in overnight shelters. Permanent housing for PEH and the use of rapid TB diagnostic tests, along with isolation and treatment, reduced TB exposure among PEH. The use of direct observation enhanced treatment adherence among PEH, as did involvement of social workers to help secure shelter, food, safety, and treatment for comorbidities, especially HIV and substance use disorders. Testing and treatment for LTBI prevented progression to TB disease, and shorter LTBI regimens helped improve adherence. Federal agencies and the National Health Care for the Homeless Council have helpful resources. Conclusion: Improvements in TB diagnosis, treatment, and prevention among PEH are possible by following existing recommendations and using client-centered approaches.
abstract_id: PUBMED:24142915 Estimated rate of reactivation of latent tuberculosis infection in the United States, overall and by population subgroup. We estimated the rate of reactivation tuberculosis (TB) in the United States, overall and by population subgroup, using data on TB cases and Mycobacterium tuberculosis isolate genotyping reported to the Centers for Disease Control and Prevention during 2006-2008. The rate of reactivation TB was defined as the number of non-genotypically clustered TB cases divided by the number of person-years at risk for reactivation due to prevalent latent TB infection (LTBI). LTBI was ascertained from tuberculin skin tests given during the 1999-2000 National Health and Nutrition Examination Survey. Clustering of TB cases was determined using TB genotyping data collected by the Centers for Disease Control and Prevention and analyzed via spatial scan statistic. Of the 39,920 TB cases reported during 2006-2008, 79.7% were attributed to reactivation. The overall rate of reactivation TB among persons with LTBI was estimated as 0.084 (95% confidence interval (CI): 0.083, 0.085) cases per 100 person-years. Rates among persons with and without human immunodeficiency virus coinfection were 1.82 (95% CI: 1.74, 1.89) and 0.073 (95% CI: 0.070, 0.075) cases per 100 person-years, respectively. The rate of reactivation TB among persons with LTBI was higher among foreign-born persons (0.098 cases/100 person-years; 95% CI: 0.096, 0.10) than among persons born in the United States (0.082 cases/100 person-years; 95% CI: 0.080, 0.083). Differences in rates of TB reactivation across subgroups support current recommendations for targeted testing and treatment of LTBI.
abstract_id: PUBMED:20843412 Tuberculosis in humans and its epidemiology, diagnosis and treatment in the United States. Tuberculosis (TB) is a pulmonary and systemic disease caused by Mycobacterium tuberculosis complex species. TB is spread from person to person by airborne transmission. Several factors determine the probability of transmission, including the infectiousness of the source patient and the nature of the environment where exposure occurs. This initial infection (primary TB) rapidly progresses to disease in some persons (especially children and immunocompromised persons), but resolves spontaneously in most individuals. This condition in which the organism lies dormant is known as latent TB infection (LTBI). In the United States, the diagnosis of LTBI is made with either the tuberculin skin test or an interferon-gamma release assay. LTBI is treated with isoniazid (INH; usually for 9 months) to prevent progression to TB disease. Up to 5% of immunocompetent persons will progress to TB disease at some time in the future, even decades after infection, if they are not treated for LTBI. Pulmonary TB disease is diagnosed using a combination of chest radiography and microscopic examination, culture and nucleic acid amplification testing of sputum. Treatment of drug-susceptible TB consists of at least 6 months of an INH and rifampin-containing regimen (with ethambutol and pyrazinamide for the first 2 months). In the United States, drug-resistant TB is relatively rare (approximately 1% of all patients), and is treated with an 18-24 month individualized regimen based on drug susceptibility test results.
abstract_id: PUBMED:12466510 Global drug-resistance patterns and the management of latent tuberculosis infection in immigrants to the United States. Background: In the United States, an increasingly disproportionate burden of tuberculosis among the foreign-born population has led to calls for improvements in the detection and treatment of latent infection in new immigrants. Current treatment guidelines do not take into account global differences in drug-resistance patterns or their implications for the treatment of immigrants. The use of multinational surveillance systems to guide the management of latent infection according to region-specific drug-resistance profiles could improve the efficiency of efforts to reduce the burden of tuberculosis in immigrants to the United States. Methods: We constructed a decision-analysis model by using a hypothetical cohort of all documented immigrants entering the United States from developing nations. Region-specific drug-resistance profiles were derived from data on 30,388 cases of infection. The model examined the effectiveness and cost effectiveness of four strategies: no intervention or tuberculin skin testing followed by treatment with isoniazid, treatment with rifampin, or treatment with rifampin plus pyrazinamide for those with a positive test result. Results: A strategy of detecting and treating latent tuberculosis infection was cost-saving among immigrants from Mexico, Haiti, sub-Saharan Africa, South Asia, and developing nations in East Asia and the Pacific. This strategy was highly cost effective among immigrants from other developing nations. Rifampin plus pyrazinamide was the preferred strategy for treating latent infection in immigrants from Vietnam, Haiti, and the Philippines. Conclusions: For new immigrants to the United States from developing nations, a strategy of detecting and treating latent tuberculosis infection would lead to substantial health and economic benefits. Because of the high prevalence of resistance to isoniazid, treatment with a rifampin-containing regimen should be strongly considered for immigrants from Vietnam, Haiti, and the Philippines.
abstract_id: PUBMED:28704234 CE: Tuberculosis: A New Screening Recommendation and an Expanded Approach to Elimination in the United States. Nurses play a critical role in the diagnosis and treatment of tuberculosis and in the prevention of tuberculosis transmission through infection control practices. To eliminate tuberculosis in the United States, however, an expanded approach to testing and treating people with latent tuberculosis infection must be implemented. Recently, the U.S. Preventive Services Task Force (USPSTF) issued a new recommendation statement on latent tuberculosis infection testing that expands nurses' opportunities to identify at-risk populations for tuberculosis prevention. In combination with newer testing methodologies and shorter treatment regimens, implementation of the USPSTF recommendation has the potential to remove previously existing barriers to screening and treatment of both patients and health care providers. This article provides a general overview of tuberculosis transmission, pathogenesis, and epidemiology; presents preventive care recommendations for targeted testing among high-risk groups; and discusses the USPSTF recommendation's applicability to public health and primary care practice in the United States.
abstract_id: PUBMED:11958239 Treatment of latent tuberculosis infection in contacts of new tuberculosis cases in the United States. Background: Few data are available describing treatment completion rates among recently infected contacts of tuberculosis (TB) cases, a group at high risk for development of active TB. Methods: Health department records were reviewed for all contacts of 360 culture-positive pulmonary TB cases reported from five health departments in the United States in 1996. Results: Of 2,267 contacts who completed screening, 630 (28%) had newly documented positive skin tests (121 with skin test conversion). Treatment of latent TB infection was documented to have been recommended for 447 (71%). Among these, treatment was documented to be initiated for 398 (89%). Of these, 203 (51%) were documented to have completed a 6-month course of treatment, and 78 (20%) received directly observed treatment. Treatment was recommended more often for contacts <15 years of age, skin test converters, close contacts, and contacts of smear-positive cases. Treatment completion rates were higher for skin test converters. Conclusions: In this study, fewer than one third of all persons with newly documented positive skin tests detected during contact investigations were proven to have completed treatment. Achieving high rates of completion of therapy for latent TB infection in recently infected contacts of active cases of pulmonary TB is essential to maximize public health prevention efforts aimed at eliminating TB.
Answer: The completion of treatment for latent tuberculosis infection (LTBI) in the United States varies among different groups, and several factors influence whether individuals complete their treatment. Among contacts of tuberculosis (TB) cases, treatment completion rates are less than ideal. In a study of contacts of 360 culture-positive pulmonary TB cases, only about one-third of those with newly documented positive skin tests were documented to have completed treatment. Treatment completion rates were higher for skin test converters, indicating that those who had recent conversion of their skin test from negative to positive were more likely to complete treatment. Factors that influenced the recommendation and completion of treatment included the age of the contact, the closeness of contact, and whether the index case was smear-positive (PUBMED:11958239). In the general population, the prevalence of LTBI in the United States was estimated to be 4.4% as measured by the tuberculin skin test (TST) and 4.8% by the QuantiFERON-TB Gold In-Tube test (QFT-GIT), with foreign-born persons representing a larger proportion of this reservoir (PUBMED:26866439). However, the rate of reactivation TB among persons with LTBI was higher among foreign-born persons than among U.S.-born persons, supporting targeted testing and treatment of LTBI in these subgroups (PUBMED:24142915).
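As a back-of-envelope check on how these figures fit together (my own arithmetic, not a calculation reported in the abstracts): PUBMED:24142915 attributes 79.7% of the 39,920 TB cases from 2006-2008 to reactivation and reports a reactivation rate of 0.084 per 100 person-years, which together imply a latent reservoir of roughly

\[
\frac{0.797 \times 39{,}920~\text{cases}}{0.00084~\text{per person-year}} \approx 3.8 \times 10^{7}~\text{person-years}
\;\;\Rightarrow\;\;
\frac{3.8 \times 10^{7}}{3~\text{years}} \approx 12.6~\text{million persons with LTBI},
\]

consistent in magnitude with the 12.4 million-person reservoir estimated from NHANES data in PUBMED:26866439, although this rough calculation ignores the cohort details of the original analysis.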
Health care providers (HCPs) play a crucial role in testing and treating LTBI. Approximately half of HCPs reported routinely testing non-U.S.-born patients for TB, but only one-third reported prescribing recommended short-course LTBI treatment regimens. Barriers exist for providers to test for and treat persons at risk for TB, indicating a need for further efforts to identify and overcome these barriers (PUBMED:37917562). For immigrants to the United States, detecting and treating LTBI is cost-effective and can lead to substantial health and economic benefits. However, due to the high prevalence of resistance to isoniazid, treatment with a rifampin-containing regimen should be considered for immigrants from certain countries with high resistance rates (PUBMED:12466510). In summary, treatment completion for LTBI in the United States is influenced by factors such as recent infection, the risk profile of the individual, and the practices of healthcare providers. Efforts to improve completion rates must address these factors and consider the specific needs of high-risk groups, including non-U.S.-born individuals and contacts of TB cases. Additionally, targeted testing and treatment of LTBI, especially among foreign-born populations and recent contacts of TB cases, are essential components of TB control strategies. The use of short-course LTBI treatment regimens and the involvement of healthcare providers in recommending and managing these treatments are critical to improving adherence and completion rates. However, there is a need for increased education and support for healthcare providers to ensure that they are following the recommended guidelines for testing and treating LTBI (PUBMED:37917562). People experiencing homelessness (PEH) also represent a group with a higher incidence of TB and a higher prevalence of LTBI. Efforts to improve diagnosis, treatment, and prevention among PEH include using rapid TB diagnostic tests, direct observation of treatment, and client-centered approaches to secure shelter, food, safety, and treatment for comorbidities. Testing and treatment for LTBI in this population are crucial for preventing progression to TB disease, and shorter LTBI regimens have been shown to improve adherence (PUBMED:36703605). Overall, the completion of LTBI treatment in the United States is multifaceted and requires a comprehensive approach that includes targeted testing, consideration of drug-resistance patterns, adherence to recommended treatment regimens, and addressing the specific needs of high-risk populations.
Instruction: Respiratory muscle strength test: is it realistic in young children? Abstracts:
abstract_id: PUBMED:29342462 Reference Values for Respiratory Muscle Strength in Children and Adolescents. Background: Measurement of respiratory muscle function is important in the diagnosis of respiratory muscle disease, respiratory failure, to assess the impact of chronic diseases, and/or to evaluate respiratory muscle function after treatment. Objectives: To establish reference values for maximal inspiratory and expiratory pressure, and the tension-time index at rest in healthy children and adolescents aged 8-19 years, as well as to present sex- and age-related reference centiles normalized for demographic and anthropometric determinants. Methods: In this cross-sectional observational study, demographic, anthropometric, and spirometric data were assessed, as well as data on respiratory muscle strength (PImax and PEmax) and work of breathing at rest (TT0.1), in a total of 251 children (117 boys and 134 girls; mean age 13.4 ± 2.9 years). Reference values are presented as reference centiles developed by use of the lambda, mu, sigma method. Results: Boys had significantly higher PImax and PEmax values. Next to sex and age, fat-free mass appeared to be an important predictor of respiratory muscle strength. Reference centiles demonstrated a slight, almost linear increase in PImax with age in boys, and a less steep increase with age in girls. TT0.1 values did not differ between boys and girls and decreased linearly with age. Conclusion: This study provides reference values for respiratory muscle strength and work of breathing at rest. In addition to sex and age, fat-free mass was found to be an important predictor of respiratory muscle strength in boys and girls.
abstract_id: PUBMED:33563044 Assessing the Reliability of Handheld Dynamometry and Ultrasonography to Measure Quadriceps Strength and Muscle Thickness in Children, Adolescents, and Young Adults. Aims: Muscle strength testing is a core aspect in the pediatric physical and occupational therapy clinical examination and evaluation, and quadriceps muscle strength is associated with the ability to perform gross motor skills. The aim of this study was to assess the reliability of handheld dynamometry (HHD) and ultrasonography to measure quadriceps muscle strength and thickness in children, adolescents, and young adults. Methods: In 30 participants (6-26 years), without motor impairment, quadriceps strength was measured with HHD in seated and supine with the knee flexed to 90° and 35°. Rectus femoris (RF) and vastus lateralis (VL) muscle thickness was measured using ultrasonography. Typical error of measurement (TEM), coefficient of variation (CV%), and intra-class correlation coefficient (ICC) were used to assess the error and reliability of measures. Results: The average CV% among three trials of strength testing was 7.07%-9.94% and improved when using the top two trials (4.49%-5.61%). The average TEM was 2.21 Nm for strength and 0.03 cm for muscle thickness. Intra-rater and inter-rater reliability of muscle thickness was good to excellent (0.91-1.00). Conclusions: The results of this study suggest quadriceps muscle strength and thickness measurements using HHD and ultrasonography are reliable in children, adolescents, and young adults.
abstract_id: PUBMED:36118657 Reliability, minimal detectable change and measurement errors in knee extension muscle strength measurement using a hand-held dynamometer in young children.
[Purpose] Few studies have assessed the reliability of muscle strength measurement using a hand-held dynamometer, specifically in Japanese young children. This study aims to investigate the reliability, minimal detectable change and measurement error in knee extension muscle strength measurements in young children using a hand-held dynamometer. [Participants and Methods] The study participants comprised 36 kindergarten-enrolled children of 3-6 years age. An experienced physiotherapist measured their isometric knee extension strength. Appropriate orientation and practices were performed prior to measurement. The same physiotherapist took the muscle measurements twice; and the maximum value was used as the muscle strength value. The measurements were repeated at intervals of approximately 30 min to verify reproducibility. [Results] The initial isometric knee extension strength was 10.6 ± 3.3 kgf and the Intra-class correlation coefficient (1,1) was 0.765. The standard error of measurement was 1.6 kgf and the minimal detectable change was 4.4 kgf. No significant systematic errors were observed. [Conclusion] Isometric knee extension strength measurement using a hand-held dynamometer in young children has good reliability with a low risk of systematic errors. abstract_id: PUBMED:19526369 Is grip strength a predictor for total muscle strength in healthy children, adolescents, and young adults? The primary purpose of this study was to examine whether grip strength is related to total muscle strength in children, adolescents, and young adults. The second purpose was to provide reference charts for grip strength, which could be used in the clinical and research setting. This cross-sectional study was performed at primary and secondary schools and the University of Applied Sciences. Three hundred and eighty-four healthy Dutch children, adolescents, and young adults at the age of 8 to 20 years participated. Isometric muscle strength was measured with a handheld dynamometer of four muscle groups (shoulder abductors, grip strength, hip flexors, and ankle dorsiflexors). Total muscle strength was a summing up of shoulder abductors, hip flexors, and ankle dorsiflexors. All physical therapists participated in a reliability study. The study was started when intratester and intertester reliability was high (Pearson correlation coefficient &gt;0.8). Grip strength was strongly correlated with total muscle strength, with correlation coefficients between 0.736 and 0.890 (p &lt; 0.01). However, the correlation was weaker when controlled for weight (0.485-0.564, p &lt; 0.01). Grip strength is related to total muscle strength. This indicates, in the clinical setting, that grip strength can be used as a tool to have a rapid indication of someone's general muscle strength. The developed reference charts are suitable for evaluating muscle strength in children, adolescents, and young adults in clinical and research settings. abstract_id: PUBMED:25991978 Respiratory muscle strength in asthmatic children. Introduction: Changes in the respiratory system of asthmatics are also due to the mechanical disadvantage caused by the increased airway resistance. Objective: The study aims to evaluate the respiratory muscle strength and nutritional status of asthmatic children. Method: This is a prospective descriptive and transversal study with 50 children aged 7 to 12 years, who were placed into 2 groups, asthmatic and non-asthmatic. 
Respiratory muscle strength was evaluated on the basis of maximal inspiratory pressure (MIP) and maximal expiratory pressure (MEP). The nutritional status was evaluated by measuring the anthropometric data, including height, weight, and body mass index (BMI). The findings were subjected to analysis of variance, chi-square, and Student's t test, and p-values &lt; 0.05 was considered statistically significant. Results: In our comparisons, we observed statistically significantly lower values for age, weight, and height in asthmatic patients: 8.52 ± 1.49 years, 30.62 ± 7.66 kg, and 129.85 ± 10.24 cm, respectively, vs. non-asthmatic children(9.79 ± 1.51 years, 39.92 ± 16.57 kg, and 139.04 ± 11.62 cm, respectively). There was no significant increase in MIP and MEP between the groups: MIP was -84.96 ± 27.52 cmH2O for the asthmatic group and -88.56 ± 26.50 cmH2O for the non-asthmatic group, and MEP was 64.48 ± 19.23 cmH2O for asthmatic children and +66.72 ± 16.56 cmH2O for non-asthmatics. Conclusion: There was no statistically significant difference between groups, but we observed that MIP and MEP were slightly higher in the non-asthmatic group than in the asthmatic group. abstract_id: PUBMED:37402166 Longitudinal changes of grip strength and forearm muscle thickness in young children. Background: Grip strength is a marker of future health conditions and is mainly generated by the extrinsic flexor muscles of the fingers. Therefore, whether or not there is a relationship between grip strength and forearm muscle size is vital in considering strategies for grip strength development during growth. Thus, this study aimed to examine the association between changes in grip strength and forearm muscle thickness in young children. Methods: Two hundred eighteen young children (104 boys and 114 girls) performed maximum voluntary grip strength and ultrasound-measured muscle thickness measurements in the right hand. Two muscle thicknesses were measured as the perpendicular distance between the adipose tissue-muscle interface and muscle-bone interface of the radius (MT-radius) and ulna (MT-ulna). All participants completed the first measurement and underwent a second measurement one year after the first one. Results: There were significant (P &lt; 0.001) within-subject correlations between MT-ulna and grip strength [r = 0.50 (0.40, 0.60)] and MT-radius and grip strength [r = 0.59 (0.49, 0.67)]. There was no significant between-subject correlation between MT-ulna and grip strength [r = 0.07 (-0.05, 0.20)], but there was a statistically significant (P &lt; 0.001) between-subject relationship between MT-radius and grip strength [r = 0.27 (0.14, 0.39)]. Conclusion: Although we cannot infer causation from the present study, our findings suggest that as muscle size increases within a child, so does muscle strength. Our between-subject analysis, however, suggests that those who observed the greatest change in muscle size did not necessarily get the strongest. abstract_id: PUBMED:30899735 The relationship between peripheral muscle strength and respiratory function and respiratory muscle strength in athletes. The aim of this study is to determine the relationship between peripheral muscle strength, respiratory function and respiratory muscle strength in athletes. The study included a total of 150 elite athletes (judo, rowing, gymnastics) (age, 16.94±2.37 years; length, 167.70±12.23 cm; body weight, 62.87±17.33 kg; body mass index, 21.95±2.92 kg/m2). 
Isomed 2000 isokinetic dynamometer was used to assess peripheral muscle strength. The strength of the dominant side knee flexor and extensor muscles was evaluated at 60°/sec and 180°/sec. At the end of the evaluation; knee flexion and extension peak torque (PT) values and flexion and extension PT values rates were obtained at 60°/sec and 180°/sec. Respiratory function and respiratory muscle strength of the athletes were evaluated using a digital spirometer. To assess respiratory function, forced vital capacity maneuver and maximal minute ventilation test; to assess the strength of the respiratory muscles, maximum inspiratory pressure and maximal expiratory pressure tests were performed. There was a strong relationship between muscle strength of knee flex-or and extensor muscles and respiratory function (r=-0.268/0.813, P&lt;0.05). There was a strong moderate correlation between knee flexor and extensor muscle strength and all parameters of respiratory muscle strength (r=0.206/0.411, P&lt;0.05). The knee flexor and extensor muscle strength, respiratory function and respiratory muscle strength develops parallel to each other and the codevelopment of these parameters together with special exercises to increase respiratory muscle strength will improve the performance of the athletes. abstract_id: PUBMED:30568328 Correlation between skeletal muscle mass index and parameters of respiratory function and muscle strength in young healthy adults according to gender. [Purpose] The purpose was to determine the correlation between the skeletal muscle mass index and parameters of respiratory function and muscle strength in young healthy adults as predictors of sarcopenia in association with aging and respiratory diseases. [Participants and Methods] Participants were 41 males and 37 females with a mean age of 19.5 ± 1.5 years. The following were measured: body composition (skeletal muscle mass index), respiratory function (vital capacity, inspiratory reserve volume, expiratory reserve volume, inspiratory capacity, forced vital capacity, one-second forced expiratory volume, peak expiratory flow rate), and respiratory muscle strength (maximum inspiratory pressure, maximum expiratory pressure). Correlations between the skeletal muscle mass index and parameters of respiratory function and respiratory muscle strength were assessed using Pearson's coefficient. [Results] The total skeletal muscle mass index showed a positive correlation with all items. The male skeletal muscle mass index showed a positive correlation with respiratory function excluding inspiratory reserve volume, expiratory reserve volume, maximum inspiratory pressure, and maximum expiratory pressure. The female skeletal muscle mass index showed a positive correlation with all respiratory functions including inspiratory reserve volume and expiratory reserve volume, but was not associated with respiratory muscle strength. [Conclusion] The skeletal muscle mass index showed a positive correlation with respiratory function and respiratory muscle strength. Gender-based features were correlated with respiratory muscle strength in males and lung capacity in females. abstract_id: PUBMED:28858191 Maximal Respiratory Muscle Strength and Vital Capacity in Children With Early Onset Scoliosis. Study Design: Descriptive cross-sectional study. 
Objective: To measure respiratory muscle function in children with early onset scoliosis (EOS), determine the frequency of respiratory muscle weakness, and correlate these measures with vital capacity, body mass index, and Cobb angle. Summary Of Background Data: Progressive restrictive respiratory disease is common among children with moderate to severe EOS. Reduced respiratory muscle strength is associated with the loss of lung function in adolescents and adults with scoliosis. We hypothesized that reduced inspiratory and expiratory respiratory muscle strength also occur in children with EOS and correlate with reduced vital capacity, poor nutritional status, and severity of the spine deformity. Methods: We measured maximum inspiratory pressure (MIP) and maximum expiratory pressure (MEP), forced vital capacity (FVC), body mass index IBMI), and Cobb angle in 49 children with EOS but with no diagnosis of underlying muscle weakness. We measured these indices in 12 children serially over 18 months to see if abnormal respiratory muscle function was sustained. Results: FVC averaged 51% (SD 21) of predicted norms with 26 subjects having an FVC &lt;50% predicted. The mean MIP was 57% (SD 25) and the mean MEP was 53% (SD 23) of predicted norms. Thirteen (27%) of the group had MIP values &gt;2SD below the age and sex-based norms. BMI ranged between 1% and 99% of age-based norms. Cobb angle averaged 59° (SD27). MIP% and MEP% significantly correlated with FVC% (r =0.37, P = 0.01 and r = 0.52, P &lt; 0.001 respectively)) but not with BMI or Cobb angle. Reduced MIP% and MEP% were sustained over 7 to 41 months. Conclusion: Respiratory muscle weakness is common and persistent in children with EOS and correlates with reductions in vital capacity. Mechanisms for abnormal respiratory function are unclear but must be determined to develop surgical treatment strategies that preserve respiratory muscle function in children with EOS throughout childhood. Level Of Evidence: 2. abstract_id: PUBMED:36200343 Peripheral and respiratory muscle strength in children and adolescents with CHD: systematic review and meta-analysis. Patients with CHD are less active if compared with controls and have limited functional capacity, related to muscle weakness and fatigue. The aim of this study was to evaluate the peripheral and respiratory muscle strength of children and adolescents with CHD with systematic review and meta-analysis. The review included observational and randomised control trial studies which evaluated peripheral and respiratory muscle strength in children and adolescents with CHD under 18 years old. The peripheral muscle strength was evaluated through dynamometry and respiratory muscle strength through manovacuometry. In studies that compared patients with CHD and respective control groups, it was possible to perform a meta-analysis. A total of 5634 articles met the criteria of eligibility, 15 were included in the systematic review, and 4 were included in the meta-analysis. Twelve studies assessed peripheral muscle strength with a reduction in patients with CHD. In the meta-analysis, patients with CHD had lower muscle strength than controls (-34.07 nm; 95% CI, -67.46 to -0.68; I2 47%; p for heterogeneity = 0.05), and the meta-analysis of the handgrip muscle strength showed no significant difference between patients with CHD and controls (0.08 nm; 95% CI, -6.39 to 6.55; I2 98%; p for heterogeneity &lt;0.00001). 
The meta-analysis in the present study showed reduced lower-limb muscle strength in patients with CHD in comparison to controls. In contrast, no difference was found regarding handgrip strength. Also, the review showed lower respiratory muscle strength in patients with CHD, yet no meta-analysis of this outcome could be performed. Answer: Yes, respiratory muscle strength tests are realistic in young children. Several studies have demonstrated the feasibility and reliability of measuring respiratory muscle strength in this population. For instance, a study aimed at establishing reference values for maximal inspiratory and expiratory pressure, as well as the tension-time index at rest in healthy children and adolescents, successfully assessed respiratory muscle strength (PImax and PEmax) in children aged 8-19 years. The study found that sex, age, and fat-free mass were important predictors of respiratory muscle strength, indicating that such measurements can be realistically obtained in young children (PUBMED:29342462). Another study investigated the reliability, minimal detectable change, and measurement errors in knee extension muscle strength measurement using a hand-held dynamometer in young children aged 3-6 years. The study reported good reliability with a low risk of systematic errors, suggesting that muscle strength measurement, including that of respiratory muscles, can be reliably performed in young children (PUBMED:36118657). Furthermore, research on asthmatic children, who may have compromised respiratory muscle function, also supports the feasibility of respiratory muscle strength tests in young children. The study evaluated respiratory muscle strength based on maximal inspiratory pressure (MIP) and maximal expiratory pressure (MEP) in children aged 7 to 12 years, showing that such measurements can be realistically conducted in this age group (PUBMED:25991978). In summary, the evidence suggests that respiratory muscle strength tests are not only realistic but also reliable in young children, and can provide valuable information for clinical and research purposes.
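The pooled estimates quoted in the record above (for example, -34.07 Nm; 95% CI, -67.46 to -0.68; I² 47%) are the standard outputs of an inverse-variance meta-analysis. The sketch below shows how such a pooled mean difference, its 95% confidence interval, and I² can be computed under a DerSimonian-Laird random-effects model; the study-level numbers are invented for illustration and are not the data behind the cited review.

```python
import math

# Each tuple: (mean difference in Nm, standard error) for one hypothetical study.
# These numbers are invented for illustration only.
studies = [(-28.0, 9.0), (-52.0, 14.0), (-15.0, 11.0), (-40.0, 18.0)]

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = [1.0 / se**2 for _, se in studies]
fixed = sum(wi * d for wi, (d, _) in zip(w, studies)) / sum(w)

# Cochran's Q and I^2 quantify between-study heterogeneity.
q = sum(wi * (d - fixed) ** 2 for wi, (d, _) in zip(w, studies))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird between-study variance tau^2, then random-effects pooling.
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1.0 / (se**2 + tau2) for _, se in studies]
pooled = sum(wi * d for wi, (d, _) in zip(w_re, studies)) / sum(w_re)
se_pooled = math.sqrt(1.0 / sum(w_re))

print(f"pooled MD = {pooled:.2f} Nm, "
      f"95% CI ({pooled - 1.96 * se_pooled:.2f}, {pooled + 1.96 * se_pooled:.2f}), "
      f"I^2 = {i2:.0f}%")
```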
Instruction: Unresectable pancreatic cancer: is a multi-modality approach a promising therapeutical alternative? Abstracts: abstract_id: PUBMED:9261631 Unresectable pancreatic cancer: is a multi-modality approach a promising therapeutical alternative? Background/aims: The aim of this study was to evaluate the combination of immunochemotherapy and stop-flow upper abdominal chemotherapy in prolonging survival in patients with unresectable pancreatic cancer. Methodology: Thirty unresectable pancreatic cancer patients were treated with immunochemotherapy in combination with stop-flow upper abdominal chemotherapy, in an attempt to improve survival time. Results: The results obtained in this study indicate that this kind of treatment is feasible, safe and effective for patients suffering from Stage III and IV pancreatic duct carcinoma. Twenty per cent of the patients within this group were able to undergo radical resection and remain alive and free of disease, with a mean survival rate of 16 months. Conclusion: The multi-modality approach used in this study achieved promising results for pancreatic cancer patients and is recommended as a promising therapeutic alternative. abstract_id: PUBMED:30411405 Optimal promising zone designs. Clinical trials with adaptive sample size reassessment based on an unblinded analysis of interim results are perhaps the most popular class of adaptive designs (see Elsäßer et al., 2007). Such trials are typically designed by prespecifying a zone for the interim test statistic, termed the promising zone, along with a decision rule for increasing the sample size within that zone. Mehta and Pocock (2011) provided some examples of promising zone designs and discussed several procedures for controlling their type-1 error. They did not, however, address how to choose the promising zone or the corresponding sample size reassessment rule, and proposed instead that the operating characteristics of alternative promising zone designs could be compared by simulation. Jennison and Turnbull (2015) developed an approach based on maximizing expected utility whereby one could evaluate alternative promising zone designs relative to a gold-standard optimal design. In this paper, we show how, by eliciting a few preferences from the trial sponsor, one can construct promising zone designs that are both intuitive and achieve the Jennison and Turnbull (2015) gold-standard for optimality. abstract_id: PUBMED:11561609 Combined-modality therapy in pancreatic cancer: current status and future directions. The use of chemotherapy with concurrent radiation therapy remains a standard treatment option for patients with unresectable or resected adenocarcinoma of the pancreas. This treatment strategy is based in large part on data from serial Gastrointestinal Tumor Study Group (GITSG) trials, which have included 5-fluorouracil (5-FU). Unfortunately, the majority of patients continue to succumb to the disease process. Recently, there has been a resurgence in clinical trials investigating alternative combined modality treatment strategies for patients with pancreatic cancer. In this review, we will summarize both the mature and more recent data pertaining to combined modality therapy for patients with unresectable or resected pancreatic cancer. Strategies utilizing concurrent gemcitabine, alternative radiation therapy techniques, and/or altered sequencing of therapies will be highlighted.
Such modifications to the approach in use since the 1980s will need to be fully considered as clinical trials utilizing chemoradiotherapy regimens and new systemic agents or novel targeted therapies are designed. abstract_id: PUBMED:9880768 Evolving trends in combined modality therapy for pancreatic cancer. Only 5%-15% of patients with pancreatic adenocarcinoma undergo potentially curative resection. Evidence that postoperative adjuvant therapy improves outcome is limited to a single randomized trial utilizing split-course chemoradiation. More aggressive regimens have since been developed and are associated with, at best, a modest improvement in patient outcome. The potentially significant morbidity associated with pancreaticoduodenectomy, which can compromise the delivery of postoperative chemoradiation, has led to the investigation of preoperative regimens. Although such an approach is feasible, its ultimate impact warrants further evaluation. Among the 4% of patients who present with unresectable or locally advanced disease, combined modality therapy has produced the most promising results. However, only modest improvements in survival have so far been achieved. Combined modality therapy with radioisotope implantation appears to have the greatest potential for improving local control and survival in these patients. Intraoperative radiation therapy (IORT) may be associated with lower morbidity than radioisotope implantation, but its impact may be limited by the radiobiological disadvantage associated with single-dose boost therapy. The problem of distant metastasis remains significant. New chemotherapeutic agents have the potential to produce better results than those achieved with 5-fluorouracil. Continued advances in surgery, radiation, and systemic therapy should lead to the increased use of modern combined modality interventions with an associated further improvement in patient outcome. abstract_id: PUBMED:37953242 Tumor treating fields: narrative review of a promising treatment modality for cancer. Background And Objective: Tumor treating fields (TTFields) therapy has emerged as a potentially effective treatment for various malignancies by delivering low-intensity, intermediate-frequency electrical fields that disrupt many processes inside cells, resulting in the interruption of cell division in cancer cells. Additionally, TTFields therapy has been found to be synergistic with existing therapeutic approaches. In this review, we provide an introduction and background to the primary mechanisms of TTFields and discuss the emerging preclinical and clinical outcomes of this novel cancer treatment technology. Methods: We performed a literature search on PubMed, ClinicalTrials.Gov, and Google Scholar using the terms 'TTFields' and 'cancer'. We included studies, review articles, and editorials published in English from 1st January 2000 to 1st October 2023. All obtained publications were reviewed and their key references were cross-checked to ensure a balanced and high-quality review. Key Content And Findings: Clinical studies reported to date have demonstrated the survival advantage of TTFields therapy in newly diagnosed glioblastoma (GBM) and non-small cell lung cancer (NSCLC), and meaningful clinical activity in recurrent GBM (rGBM) and malignant pleural mesothelioma.
Moreover, TTFields therapy has exhibited promising safety profiles across a diverse range of cancers including pancreatic cancer, hepatocellular carcinoma (HCC), ovarian cancer, NSCLC, and gastric cancer, when combined with cytotoxic chemotherapy and/or immunotherapy regimens, suggesting broad applicability as an added treatment modality. Conclusions: Based on preclinical and clinical studies, TTFields therapy shows promise as a potential treatment option for patients with a number of different malignancies, offering a favorable safety profile and the potential for significant clinical benefit. Further research is warranted to establish the optimal treatment parameters and identify specific patient subgroups that may derive the greatest advantage from this treatment modality. abstract_id: PUBMED:25341941 Combined modality therapy in pancreatic adenocarcinoma: review and updates on a controversial issue. Due to its extremely high mortality rates, strong efforts continue to be made to develop new therapies in the treatment of pancreatic adenocarcinoma. The use of combined modality chemoradiotherapy for the treatment of pancreatic adenocarcinoma remains an approach with both promise and controversy. This article reviews the conflicting data with regard to the role of combined modality therapy in pancreatic adenocarcinoma and provides an update on current studies in the field. abstract_id: PUBMED:11937010 Current and future strategies for combined-modality therapy in pancreatic cancer. Treatment of pancreatic cancer remains a challenging task that often requires a multidisciplinary approach to confer optimal response and, ideally, maximize survival. A combination of locoregional approaches such as surgery and radiotherapy, along with systemic therapies for eradication of micrometastases, should be considered both for patients who are operative candidates and for those with locally advanced, unresectable disease. How best to combine these modalities in terms of schedule, timing, and choice of agents is a question that continues to be actively investigated. Some of these data are equivocal or conflicting; thus standards of care for combined-modality treatment have not been uniformly accepted to date. This article provides an overview of combined-modality therapy, focusing on the major studies that have guided our current approach to the treatment of pancreatic cancer and examining new strategies that are likely to improve outcomes and survival for patients in the future. abstract_id: PUBMED:7754633 Preoperative combined modality therapy for pancreatic cancer. Combined modality therapy can be administered prior to surgical resection for patients with operable pancreatic cancer. One important criterion used to select patients for this treatment sequence is the absence of arterial vascular encasement by tumor on thin-section CT scanning; the absence of peritoneal seeding on surgical staging or laparoscopy has been another important parameter used in identifying patients with "localized" disease. Preoperative treatment with infusional chemoradiation uses multiple fields of irradiation delivering a dose of 50.4 Gy in 28 fractions over 5 1/2 weeks. This is done in conjunction with a continuous infusion of 5-fluorouracil intravenously at a dosage of 300 mg/m²/day with each day of radiotherapy treatment. The initial results of this protocol indicate acceptable toxicity and no evidence of increased perioperative morbidity or mortality compared to series using operation alone.
We conclude that nearly all patients eligible for this combined modality treatment approach complete therapy as prescribed with acceptable toxicity. abstract_id: PUBMED:6355662 Combined modality treatment of pancreatic cancer: implications for the surgeon. Since the incidence of pancreatic cancer is still increasing and the disease has a poor prognosis, there is great interest in improving treatment results by combined modality approaches. This paper considers the most appropriate studies to analyze the status of treatment and future implications for surgeons. With new radiation sources and more sophisticated treatment plans, intra- and post-operative radiotherapy now has an established role in local tumor control. Combination chemotherapy has yielded response rates of 40-45%, and improved chemotherapy will play a role in the treatment and perhaps in the prevention of disseminated disease. Although it seems likely that chemotherapy combined with newer radiotherapeutic techniques could improve treatment results in advanced pancreatic cancer, treatment-related and limiting toxicity still must be defined. There are suggestions that more surgeons become involved in the combined modality approach, as both radiotherapy and chemotherapy may be more valuable in primary management. The unsatisfactory results of surgical treatment imply the need for adjuvant treatment, which must be tested in randomized multicenter trials. Future efforts will require an interdisciplinary approach to this disease. abstract_id: PUBMED:22443336 Combined modality treatments in pancreatic cancer. Introduction: Of all the carcinomas, pancreatic carcinoma (PC) has the highest mortality rate, with 1- and 5-year survival rates of 25% and less than 5%, respectively. This is regardless of the stage at diagnosis. Areas Covered: In this review, relevant literature assessing the evidence regarding preoperative and adjuvant chemoradiotherapy (CRT) is discussed. Furthermore, new therapeutic approaches are summarized, while the future direction regarding the multimodality approach to PC is also discussed. Expert Opinion: The role of combined-modality therapy for PC is continuously evolving. There have been several recent developments, as well as the completion of major, multi-institutional clinical trials. One of the challenges for the busy clinician is to appreciate the variation in staging, surgical expertise, and application of either definitive CRT or neo-adjuvant CRT for local and/or borderline disease. Answer: The multi-modality approach for treating unresectable pancreatic cancer does appear to be a promising therapeutic alternative. A study evaluating the combination of immunochemotherapy and stop-flow upper abdominal chemotherapy in patients with unresectable pancreatic cancer showed that this treatment was feasible, safe, and effective for patients with Stage III and IV pancreatic duct carcinoma. Notably, 20% of the patients in this study were able to undergo radical resection and remained alive and free of disease, with a mean survival rate of 16 months, suggesting that a multi-modality approach can improve outcomes for some patients with advanced pancreatic cancer (PUBMED:9261631). The use of combined modality therapy, which includes chemotherapy with concurrent radiation therapy, has been a standard treatment option for patients with unresectable or resected adenocarcinoma of the pancreas. However, despite this approach, the majority of patients continue to succumb to the disease.
Recent clinical trials have investigated alternative combined modality treatment strategies, including concurrent gemcitabine, alternative radiation therapy techniques, and altered sequencing of therapies, which may offer new avenues for improving patient outcomes (PUBMED:11561609). For patients with locally advanced disease, combined modality therapy has produced the most promising results, although only modest improvements in survival have been achieved so far. New chemotherapeutic agents and advances in surgery, radiation, and systemic therapy are expected to lead to increased use of modern combined modality interventions and further improvements in patient outcomes (PUBMED:9880768). Tumor treating fields (TTFields) therapy has also emerged as a potentially effective treatment modality for various malignancies, including pancreatic cancer. When combined with cytotoxic chemotherapy and/or immunotherapy regimens, TTFields therapy has shown a promising safety profile and potential for significant clinical benefit (PUBMED:37953242). In summary, while the multi-modality approach for unresectable pancreatic cancer is promising and continues to evolve, further research and clinical trials are necessary to optimize treatment strategies and improve survival outcomes for patients with this challenging disease (PUBMED:25341941, PUBMED:11937010, PUBMED:22443336).
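One of the retrieved abstracts in this record (PUBMED:30411405) concerns promising zone designs, in which an interim test statistic determines whether the trial's sample size is increased. The sketch below illustrates the conditional-power calculation such rules are typically built on; the zone boundaries (0.36 and 0.80) and all trial numbers are hypothetical placeholders rather than values taken from the cited papers, and a real design would also need a procedure that preserves the type-1 error.

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def conditional_power(z1: float, n1: int, n_total: int, alpha: float = 0.025) -> float:
    # Probability of rejecting at the final analysis of n_total subjects,
    # given interim statistic z1 at n1 subjects, assuming the current trend
    # (drift estimated from the interim data itself).
    theta = z1 / sqrt(n1)
    z_crit = nd.inv_cdf(1 - alpha)
    num = z_crit * sqrt(n_total) - z1 * sqrt(n1) - theta * (n_total - n1)
    return 1 - nd.cdf(num / sqrt(n_total - n1))

def reassess_sample_size(z1, n1, n_planned, n_max, cp_low=0.36, target=0.80):
    # Toy promising-zone rule: if interim conditional power falls in
    # [cp_low, target), raise the final size (capped at n_max) until the
    # target is met; otherwise keep the planned size.
    cp = conditional_power(z1, n1, n_planned)
    if not (cp_low <= cp < target):
        return n_planned, cp
    n = n_planned
    while n < n_max and conditional_power(z1, n1, n) < target:
        n += 1
    return n, cp

# Hypothetical interim look: z = 1.4 after 100 of 200 planned subjects.
print(reassess_sample_size(z1=1.4, n1=100, n_planned=200, n_max=400))
```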
Instruction: Is acid base determination an accurate predictor of pyloric stenosis? Abstracts: abstract_id: PUBMED:11115037 Is acid base determination an accurate predictor of pyloric stenosis? Objective: To determine if acid base status predicts which vomiting patients have pyloric stenosis. Design: Retrospective chart review. Setting: Tertiary paediatric hospital. Methodology: We compared the clinical and biochemical parameters of 100 patients with a discharge diagnosis of pyloric stenosis and 84 patients of a similar age who presented to the emergency department with vomiting and who had an acid base determination. Patients were included from January 1995 to January 1997. Clinical correlates consisted of age, duration of vomiting, weight loss, gestation, and family history of pyloric stenosis. Biochemical correlates were pH, bicarbonate, base excess (BE), chloride, potassium, and sodium. Results: Independent variables of significance were pH, BE, chloride, bicarbonate, potassium, weight loss (all of which had a P value < 0.0001), and sex (P = 0.006). Each variable was placed in a logistic regression equation with pyloric stenosis as the dependent variable. Variables of significance were pH (P = 0.0001), BE (P = 0.0001), and chloride (P = 0.009). A model for predicting pyloric stenosis using these variables was then created with pH > 7.45, chloride < 98, and BE > +3, with a positive predictive value of 88%. Conclusion: Acid base determination is a useful screening tool when considering pyloric stenosis. This model now needs to be validated on a prospective series of patients with vomiting. abstract_id: PUBMED:1434024 Acid base balance in the digestive system The mechanisms of acid base balance in digestive organs, including the stomach and intestine as well as the liver, are described in the present paper. The stomach secretes a large amount of acid as well as sodium bicarbonate, so that hydrogen ions are lost in severe vomiting states such as pyloric stenosis, resulting in metabolic alkalosis and hypokalemia. In the diarrheal condition, sodium bicarbonate is lost in large amounts, causing metabolic acidosis and hypokalemia. Hepatic failure induces respiratory alkalosis, the mechanisms of which have not yet been clarified. In any case, urgent correction of acid base imbalance would be crucial. It remains obscure to date, however, how systemic acid base imbalance affects the function of the digestive system. This issue would be a promising field in the investigation of digestive diseases. abstract_id: PUBMED:5755924 Diurnal variations in the acid-base balance of blood. N/A abstract_id: PUBMED:5974012 Acid-base equilibrium in infants with pylorospasm and pyloric stenosis N/A abstract_id: PUBMED:1213827 Electrolyte and acid base problems in hypertrophic pyloric stenosis in infancy. N/A abstract_id: PUBMED:1555094 Serum electrolytes and capillary blood gases in the management of hypertrophic pyloric stenosis. The purpose of this study was to assess whether the concentration of serum chloride and other variables, namely serum sodium, potassium and bicarbonate, can be used to predict metabolic acid-base status in infants with hypertrophic pyloric stenosis (HPS) and to assess whether such a prediction is influenced by the state of hydration. One hundred and sixty-three infants with HPS who had at least one set of serum electrolyte and capillary blood gas estimations performed after admission were studied retrospectively.
A further 25 infants who had their electrolyte and blood gases measured after at least 12 h of rehydration and correction of the serum sodium and chloride abnormalities were studied prospectively. Stepwise multiple regression analysis, using standard bicarbonate as the dependent variable, revealed serum chloride concentration to be the most powerful independent predictor of standard bicarbonate level (r = -0.69, P less than 0.0001). Other variables did not improve the correlation significantly. In the retrospective study of untreated patients, the calculation of a 90 per cent prediction interval for the model indicated that if the serum chloride level is less than 96 mmol/l, one could be 95 per cent confident (one tail) that the patient was alkalaemic. To be 95 per cent confident that the patient was not alkalaemic, the serum chloride level would have to be greater than 121 mmol/l. The prospective study found that following rehydration a serum chloride level greater than or equal to 106 mmol/l more accurately predicted absence of alkalaemia. We conclude that predictability of acid-base status from measurement of serum chloride depends on the state of hydration of the patient. abstract_id: PUBMED:13754861 Disorders of mineral and acid-base equilibrium in pyloric stenoses in relation to acidity conditions of the gastric juice N/A abstract_id: PUBMED:642247 Disorders of the acid-base equilibrium in spastic hypertrophic pyloric stenosis in infants N/A abstract_id: PUBMED:1064256 Some important blood gas and acid-base disorders in pediatrics N/A abstract_id: PUBMED:5994611 Gastric secretion, water-salt balance and acid-base equilibrium in children with congenital pylorostenosis N/A Answer: Yes, acid base determination is an accurate predictor of pyloric stenosis. A study conducted to determine if acid base status predicts which vomiting patients have pyloric stenosis found that certain biochemical parameters were significant independent variables. These included pH, base excess (BE), chloride, bicarbonate, potassium, and weight loss, with pH, BE, and chloride being particularly significant (PUBMED:11115037). The study created a model for predicting pyloric stenosis using variables such as pH > 7.45, chloride < 98, and BE > +3, which had a positive predictive value of 88%. This suggests that acid base determination can be a useful screening tool when considering pyloric stenosis, although the model needs to be validated on a prospective series of patients with vomiting (PUBMED:11115037). Another study also highlighted the importance of serum chloride concentration as a powerful independent predictor of standard bicarbonate level in infants with hypertrophic pyloric stenosis (HPS). It was found that predictability of acid-base status from measurement of serum chloride depends on the state of hydration of the patient (PUBMED:1555094). The mechanisms of acid base balance in digestive organs, including the stomach, have been described in the literature. The stomach secretes a large amount of acid as well as sodium bicarbonate, which can lead to metabolic alkalosis and hypokalemia in severe vomiting states such as pyloric stenosis (PUBMED:1434024). This further supports the idea that acid base determination can reflect the underlying condition of pyloric stenosis.
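The decision rules reported in this record reduce to simple threshold checks, sketched below. Treating the three cut-offs from PUBMED:11115037 as a conjunction is an assumption (the abstract does not state how they combine), the chloride thresholds follow PUBMED:1555094, and the snippet is an illustration of the published cut-offs, not a clinical tool.

```python
from dataclasses import dataclass

@dataclass
class AcidBasePanel:
    ph: float           # capillary/arterial pH
    chloride: float     # mmol/L
    base_excess: float  # mmol/L

def pyloric_stenosis_screen(panel: AcidBasePanel) -> bool:
    # Cut-offs from PUBMED:11115037 (PPV ~88% in that cohort). Combining
    # all three criteria as a conjunction is an assumption.
    return (panel.ph > 7.45
            and panel.chloride < 98
            and panel.base_excess > 3)

def alkalaemia_from_chloride(chloride: float, rehydrated: bool) -> str:
    # Heuristic from PUBMED:1555094: prediction depends on hydration state.
    if not rehydrated:
        if chloride < 96:
            return "likely alkalaemic (95% one-tailed confidence)"
        if chloride > 121:
            return "likely not alkalaemic"
    elif chloride >= 106:
        return "likely not alkalaemic"
    return "indeterminate from chloride alone"

print(pyloric_stenosis_screen(AcidBasePanel(ph=7.49, chloride=92, base_excess=6.0)))
print(alkalaemia_from_chloride(94, rehydrated=False))
```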
Instruction: Is omitting the 3rd hour measurement in the 100 g oral glucose tolerance test feasible? Abstracts: abstract_id: PUBMED:26124045 Is omitting the 3rd hour measurement in the 100 g oral glucose tolerance test feasible? Background: To evaluate the diagnostic value of the 3rd hour plasma glucose level in the 100 g oral glucose tolerance test (OGTT). Methods: Records of all pregnant patients with an abnormal 50 g glucose challenge test (GCT) between January 2005 and December 2013 were reviewed (n=1963). The 100 g OGTT results were analyzed separately for both Carpenter & Couston (CC) and National Diabetes Data Group (NDDG) criteria. Results: The number of patients diagnosed with gestational diabetes mellitus (GDM) was 297 (15.1%) according to CC criteria and 166 (8.4%) according to NDDG criteria. The 1st hour plasma glucose level showed the highest correlation with GDM diagnosis (ρ=0.595 for CC and ρ=0.567 for NDDG). However, the 3rd hour plasma glucose level showed the weakest correlation with GDM diagnosis (ρ=0.216 for CC and ρ=0.213 for NDDG). The 3rd hour value of the 100 g OGTT was one of the two elevated measurements in 10.8% of patients when CC criteria are used and in 13.8% of patients when NDDG criteria are used. Conclusion: Omitting the 3rd hour plasma glucose measurement in the 100 g OGTT results in unacceptable rates of underdiagnosed patients. abstract_id: PUBMED:15346816 A 2 hour versus the 3 hour 100 g glucose tolerance test for diagnosing gestational diabetes mellitus. Aim: To determine how the omission of the third hour glucose measurement of the 100 g oral glucose tolerance test (GTT) affects the diagnosis of gestational diabetes mellitus (GDM). Methods: Retrospective chart review of 876 women delivered in a tertiary care hospital in Israel during a three-year period, who underwent a 100 g 3 hour oral GTT following an abnormal 50 g glucose screen. GDM was diagnosed according to the "criterion standard" accepted in the Fourth International Workshop Conference on GDM. The results of the 100 g 3 hour oral GTT were then retrospectively re-evaluated by omission of the third hour plasma glucose measurement from the "criterion standard". Results: GDM was diagnosed in 28.4% of the study patients, while the omission of the third hour glucose measurement resulted in a 26.4% diagnosis of GDM. The perinatal data of the 18 omitted cases suggest that their exclusion from the GDM group would not have altered substantially the perinatal outcome of the study cohort. Conclusions: A 100 g 2 hour oral GTT is a simple and economic alternative to the 100 g 3 hour oral GTT. abstract_id: PUBMED:1121678 Oral glucose tolerance test using 100 g of glucose and 100 g of oligosaccharides N/A abstract_id: PUBMED:15776861 Usefulness of the third-hour plasma glucose measurement of the oral glucose tolerance test for the diagnosis of gestational diabetes mellitus Objectives: To assess the utility of the third-hour plasma glucose measurement of the oral glucose tolerance test (OGTT) for the diagnosis of gestational diabetes mellitus (GDM), and to analyse whether the rate of insulin therapy, type of delivery, and perinatal morbidity and mortality differ according to whether the third-hour plasma glucose measurement was required for the diagnosis of GDM. Methods: Historical cohort study. Subjects: Women diagnosed with GDM from 1993 to March 2002 in the pregnancy clinic of a periurban primary care centre in Camas (Seville, Spain), recruited by consecutive sampling (n=112).
Intervention: To apply the screening and diagnosis criteria from the Third International Conference on GDM. Obstetric history, baseline anthropometric parameters, treatment, type of delivery, and perinatal complications were assessed. Statistical analysis: A χ² test with Fisher's correction was used. Results: 63% of women were diagnosed with GDM with a normal third-hour plasma glucose measurement (Group 1). In 21.4%, this value was elevated (> 145 mg/dL) but was not necessary for the diagnosis (Group 2). In 15.1%, the third-hour plasma glucose measurement was essential to diagnose GDM (Group 3). Insulin therapy was recommended in 17% (Group 3) and 5% (Groups 1 and 2). There were no significant differences between groups in perinatal complications or cesarean delivery incidence. Discussion: We suggest that the omission of the third-hour plasma glucose measurement of the OGTT is not recommended, because 15.1% of cases would be undiagnosed and there would be a substantial decrease in sensitivity. In addition, that group had a higher rate of insulin therapy. abstract_id: PUBMED:34474639 Low glucose at 3-hour 100 gram oral glucose tolerance test: implications for glucose control. Objective: To assess the clinical significance of a low 180-minute glucose value in a 100 g oral glucose tolerance test (OGTT) and a single high abnormal value. Methods: A retrospective cohort study. The study group included women with 180-minute plasma glucose levels of ≤60 mg/dL and one abnormal value in the OGTT. The control group comprised women with one abnormal value in the OGTT and a normal 180-minute glucose value. The primary outcome was glycemic control, defined as fasting blood glucose measurements > 90 mg/dL or post-prandial glucose values > 140 mg/dL or 120 mg/dL (one-hour and two-hour post-prandial, respectively) in > 30% of the measurements. Secondary outcomes were the rate of insulin treatment and the perinatal outcome consisting of birthweight, large-for-gestational-age, and polyhydramnios. Results: Three hundred and one women were included, 143 in the study group and 158 in the control group. Pre-pregnancy BMI, first trimester fasting glucose levels, previous GDM, and familial diabetes were similar for both groups. Suboptimal glycemic control was more prevalent among the women in the study group (14% vs. 5.1%, respectively, p = .01). The need for insulin treatment was similar in both groups. Conclusions: Women with one abnormal value and 180-minute hypoglycemia in the OGTT are at increased risk for suboptimal glycemic control. abstract_id: PUBMED:981741 Glucose tolerance tests with oral glucose challenges of 50 and 100 grams (author's transl) Seventy-six subjects (63 females and 13 males) with an average age of 50.5 years, all of them with a familial and/or obstetric history of diabetes mellitus, were submitted to glucose tolerance tests with oral glucose challenges of 50 and 100 g. The tests were all analysed by different criteria in current usage in the medical literature (Wilkerson, Fajans and Conn, British Diabetes Association and University Group Diabetes Program). We concluded that the 100 g glucose challenge gives a greater index of positivity by all the criteria referred to above when compared with the 50 g challenge. Wilkerson's criteria are less sensitive than the others when applied to 100 g glucose challenges. abstract_id: PUBMED:35046763 The Detection of Postprandial Hypoglycemia with 5-Hour Oral Glucose Tolerance Test. Introduction: Postprandial hypoglycemia (PH) is a poorly understood phenomenon.
The five-hour oral glucose tolerance test (5-OGTT) is often a useful laboratory investigation to understand and establish a diagnosis of PH. The aim of this study is to present the patterns observed during 5-OGTTs performed in cases with PH in a tertiary hospital in Nepal. Methods: 5-OGTTs were performed on 52 patients who complained of postprandial neuroglycopenic symptoms, at the Nepal Medicity hospital over the 2-year period from 2017 to 2019. Anthropometry, medical history, and serum glucose, insulin and cortisol were obtained. The homeostatic model assessment score for insulin resistance (HOMA-IR), based on fasting glucose and insulin levels, was calculated. Data were analyzed using SPSS (Version 20.0). Results: 21 (40.4%) of the 52 patients developed hypoglycemia [blood glucose < 55 mg/dL (3.1 mmol/L)]; among them, nine patients developed hypoglycemia at 3 hours, 11 at 4 hours and one at 5 hours post glucose load. The fasting insulin level in patients who developed hypoglycemia was 12.1 ± 5.8 μU/ml, compared with the insulin level analyzed at the point of the hypoglycemic episode, which was 6.4 ± 1.8 μU/ml (P < 0.005). Conclusion: The level of insulin is disproportionately high in the setting of hypoglycemia, where it was expected to be nearly absent. A disturbance in the physiological balance between insulin sensitivity and insulin secretion may be the possible cause of PH. abstract_id: PUBMED:3663546 Evaluation of the WHO criteria for 75 g oral glucose tolerance test in pregnancy. A group of pregnant women at high risk of developing diabetes in pregnancy had paired oral glucose tolerance tests (OGTT) using a 100 g load followed by a 75 g load. When the World Health Organization (WHO) criteria and the National Diabetes Data Group (NDDG) criteria were compared, the 2-h plasma glucose value after the 100 g load was the most discriminative in differentiating those with normal glucose tolerance, impaired glucose tolerance and gestational diabetes mellitus. When only the 2-h plasma glucose values were assessed, the WHO test (75 g: 8 mmol/l) agreed with the NDDG test (100 g load: 9.2 mmol/l) in the diagnosis of glucose intolerance in only 60% of subjects. Using the same criteria at 2 h (8 mmol/l), the agreement between these tests was 47%. Reducing the glucose load from 100 g to 75 g produced a reduced glucose response in 49% of the subjects, with a significant decrease in the area under the glucose response curve. abstract_id: PUBMED:6754492 Minor physiological relevance of oral glucose tolerance test. The physiological relevance of the oral glucose tolerance test was evaluated in ten healthy nonobese subjects and nine subjects with slightly impaired glucose tolerance. In random order, all subjects received a 50 g oral glucose tolerance test or a standardized breakfast meal of equivalent carbohydrate content. Changes in plasma glucose, insulin, and pancreatic glucagon concentrations were measured. In both groups, plasma glucose increased significantly during the oral glucose tolerance test and the meal test, but the incremental glucose area (0-60 min) of the oral glucose tolerance test was about 350% and 120% greater than that of the meal test (p less than 0.001) in the normals and the patients with impaired glucose tolerance, respectively. In both groups, insulin responded almost similarly to the oral glucose tolerance test and the meal test, whereas plasma glucagon declined significantly during the oral glucose tolerance test only (p less than 0.001).
Glucagon remained unchanged during the meal test in the normals and increased slightly (p less than 0.05) in the group with impaired glucose tolerance. These data show that the response of glucose, insulin and glucagon to an oral glucose tolerance test is in various respects different from that obtained by the more physiological stimulation with a breakfast meal. abstract_id: PUBMED:29179745 Reproducibility of the 75 g oral glucose tolerance test for the diagnosis of gestational diabetes mellitus in a sub-Saharan African population. Objective: To evaluate the reproducibility of the 75 g oral glucose tolerance test and factors associated with non-reproducible results in Cameroonian pregnant women. Results: Twenty-seven of the 84 participants (32.1%) who did the first oral glucose tolerance test were diagnosed with gestational diabetes mellitus. There was no difference in the means of the glycaemic responses at T0 (p = 0.64), T30 (p = 0.08), T60 (p = 0.86), T90 (p = 0.51), and T120 (p = 0.34) between the two oral glucose tolerance tests. Age (p = 0.001) and BMI (p = 0.001) were significantly associated with non-reproducible results. The reproducibility of the oral glucose tolerance test in this study was 74.2%, and the kappa statistic was 0.46. In conclusion, the results of the oral glucose tolerance test were reproducible in only 74.2% of pregnant women in this study. This highlights that a single oral glucose tolerance test for the diagnosis of gestational diabetes mellitus should be interpreted with caution. Answer: Omitting the 3rd hour measurement in the 100 g oral glucose tolerance test is not considered feasible based on the evidence provided. The 3rd hour plasma glucose level, although showing the weakest correlation with gestational diabetes mellitus (GDM) diagnosis compared to the 1st hour plasma glucose level, is still an important component of the test. According to one study, omitting the 3rd hour plasma glucose measurement would result in unacceptable rates of underdiagnosed patients: in 10.8% of patients under Carpenter & Couston criteria and 13.8% under National Diabetes Data Group criteria, the 3rd hour value was one of only two elevated measurements (PUBMED:26124045). Another study suggests that while a 2 hour oral GTT could be a simpler and more economic alternative, the omission of the third hour glucose measurement did not substantially alter the perinatal outcome of the study cohort (PUBMED:15346816). However, another study emphasizes that it is not recommended to omit the third-hour plasma glucose measurement because 15.1% of GDM diagnoses would be missed, and this group had a higher rate of insulin therapy (PUBMED:15776861). Lastly, a study found that women with one abnormal value and 180-minute hypoglycemia in the OGTT are at increased risk for suboptimal glycemic control, indicating the importance of the 3rd hour measurement in managing GDM (PUBMED:34474639). Therefore, the evidence suggests that the 3rd hour measurement is crucial for accurate diagnosis and management of GDM, and its omission could lead to underdiagnosis and inadequate treatment.
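The answer above turns on how the conventional two-or-more-elevated-values rule interacts with dropping the 3rd hour sample, which the sketch below makes concrete. The CC and NDDG thresholds used here are commonly cited values that do not appear in the abstracts themselves, so treat them as assumptions to verify before reuse.

```python
# Commonly cited 100 g OGTT thresholds in mg/dL (fasting, 1 h, 2 h, 3 h).
# Not taken from the abstracts above; verify against current guidelines.
CRITERIA = {
    "CC":   (95, 180, 155, 140),
    "NDDG": (105, 190, 165, 145),
}

def gdm_diagnosis(values_mg_dl, criteria="CC", use_third_hour=True):
    # GDM is conventionally diagnosed when >= 2 values meet or exceed their
    # thresholds. Dropping the 3rd hour removes one chance to reach two
    # elevated values, which is why omission under-diagnoses.
    pairs = list(zip(values_mg_dl, CRITERIA[criteria]))
    if not use_third_hour:
        pairs = pairs[:3]
    elevated = sum(v >= t for v, t in pairs)
    return elevated >= 2

# A patient elevated only at 1 h and 3 h: diagnosed with the full test,
# missed when the 3rd hour is omitted (the 10.8-13.8% group described above).
patient = (90, 185, 150, 142)
print(gdm_diagnosis(patient, "CC", use_third_hour=True))   # True
print(gdm_diagnosis(patient, "CC", use_third_hour=False))  # False
```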