Instruction: Is increased apolipoprotein B a major factor enhancing the risk of coronary artery disease in type 2 diabetes? Abstracts: abstract_id: PUBMED:12421026 Is increased apolipoprotein B a major factor enhancing the risk of coronary artery disease in type 2 diabetes? Objectives: An association of Apolipoprotein B (Apo B) with coronary artery disease (CAD) independent of LDL cholesterol (LDLc) concentrations has been reported in white population. This analysis was taken up to study whether the higher CAD risk in Asian Indians with diabetes could be explained by possible alterations in Apo B and Apolipoprotein A1 (Apo A1) concentrations. Methods: The study group consisted of four hundred and forty seven men aged > or = 25 years, 167 with CAD and 280 with no CAD, classified by coronary angiography. Plasma lipid profile including total cholesterol, LDLc, Apo A1 and Apo B were done. Glucose tolerance was evaluated in all. Results: Age, BMI, Apo B, and Apo A1 were significantly associated with CAD in a multiple regression analysis. Hyper Apo B was more common than hyper LDLc in CAD (73.6% vs 20.4%, chi2 = 157, P < 0.001). Apo B concentrations were increased in diabetic subjects even in the presence of normal levels of LDLc and in the absence of CAD. Conclusions: The study has shown that the apolipoproteins B and A1 provide better information regarding the risk of CAD. Apo B abnormalities exist in large percentages of CAD subjects despite having normal levels of LDLc. Diabetes per se enhances the Apo B concentrations and this could probably be one of the mechanisms of accelerated CAD in diabetes. Hyper Apo B may be an index of CAD risk. abstract_id: PUBMED:9819100 Triglyceride as a risk factor for coronary artery disease. The data for an independent association between triglyceride concentrations and risk for coronary artery disease (CAD) are equivocal, unlike the data for low-density lipoprotein (LDL) cholesterol and high-density lipoprotein (HDL) cholesterol, which show strong, consistent, and opposing correlations with CAD risk. There is some evidence for triglyceride as an independent risk factor in certain subgroups, for example, women 50-69 years of age (Framingham Heart Study) and in patients with noninsulin-dependent diabetes. However, the evidence is stronger for triglyceride as a synergistic CAD risk factor. For example, patients with the "lipid triad" of high LDL cholesterol, low HDL cholesterol, and high triglyceride accounted for most of the event reduction with lipid-lowering therapy in the Helsinki Heart Study. An important confounder of the correlation between triglyceride and CAD risk is the heterogeneity of triglyceride-rich lipoproteins: the larger triglyceride-rich particles are thought not to be associated with CAD risk, whereas the smaller (and denser) particles are believed to be atherogenic. At present, measurement of fasting triglyceride levels and triglyceride assessment in conjunction with LDL cholesterol and HDL cholesterol concentrations are the most practical methods of evaluating hypertriglyceridemia in CAD risk, although postprandial lipemia may prove a better indicator of atherogenicity. Management of hypertriglyceridemia should initially focus on nonpharmacologic therapy (i.e., diet, exercise, weight control, and alcohol reduction). In diabetic patients, meticulous glycemic control is also important. However, if this approach proves inadequate, there are several pharmacologic options. 
Fibrates may be effective in decreasing triglyceride and increasing HDL cholesterol. Nicotinic acid (niacin) has been shown to decrease triglyceride, increase HDL cholesterol, lower LDL cholesterol, and decrease lipoprotein(a); it also decreases fibrinogen. The statins appear to be effective in decreasing triglyceride and LDL cholesterol in hypertriglyceridemia; however, they do not normalize metabolism of apolipoprotein B, and HDL cholesterol may remain low. Therefore, combination with a fibrate or niacin may be appropriate. Attention to hypertriglyceridemia with respect to increased CAD risk represents an important step in assessing global risk for CAD development. abstract_id: PUBMED:8964190 Intra-abdominal fat: is it a major factor in developing diabetes and coronary artery disease? Abdominal obesity has emerged as a strong and independent predictor for non-insulin dependent diabetes mellitus (NIDDM). Adiposity located centrally in the abdominal region, and particularly visceral as opposed to subcutaneous fat, is also distinctly associated with hyperlipidemia, compared with generalized distributions of body fat. These lipoprotein abnormalities are characterized by elevated very low density lipoprotein (VLDL) and low density lipoprotein (LDL) levels, small dense LDL with elevated apolipoprotein B levels, and decreased high density lipoprotein2b (HDL2b) levels. This is the same pattern seen in both familial combined hyperlipidemia and NIDDM. The pronounced hyperinsulinemia of upper-body obesity supports the overproduction of VLDL and the increased LDL turnover. We have proposed that an increase in the size of the visceral fat depot is a precursor to the increased lipolysis and elevated free fatty acid (FFA) flux and metabolism and to subsequent overexposure of hepatic and extrahepatic tissues to FFA, which then, in part, promotes aberrations in insulin actions and dynamics. The resultant changes in glucose/insulin homeostasis, lipoprotein metabolism, and vascular events then lead to metabolic morbidities such as glucose intolerance, NIDDM, dyslipidemia, and increased risk for coronary heart disease. abstract_id: PUBMED:9740500 Intermediate-density lipoproteins, diabetes and coronary artery disease. The results of various studies suggest that hypertriglyceridaemia is associated with an increased risk of coronary artery disease. It is unclear, however, which particular triglyceride (TG)-rich lipoproteins contribute to the risk. Different types of TG-rich lipoprotein differ in function, composition, size and density. TG-rich lipoproteins in the range Svedberg flotation (Sf) 12-60 have been shown to be associated with angiographic severity in both diabetic and non-diabetic individuals. A study in people with type 2 diabetes found that those with moderate coronary artery disease had higher levels of both Sf 12-60 and Sf 60-400. Multivariate analysis showed that this association was independent of both low-density (LDL) and high-density lipoprotein (HDL). The association was not seen in patients with severe coronary artery disease, suggesting that these lipoproteins may only be involved in the early stages of atherogenesis. Further research has indicated that the risk correlates positively to the postprandial levels of apolipoprotein B48 in the Sf 20-60 fraction. This suggests that elevated levels of chylomicron remnants are involved in progression of coronary artery disease. abstract_id: PUBMED:18193043 Newly identified loci that influence lipid concentrations and risk of coronary artery disease. 
To identify genetic variants influencing plasma lipid concentrations, we first used genotype imputation and meta-analysis to combine three genome-wide scans totaling 8,816 individuals and comprising 6,068 individuals specific to our study (1,874 individuals from the FUSION study of type 2 diabetes and 4,184 individuals from the SardiNIA study of aging-associated variables) and 2,758 individuals from the Diabetes Genetics Initiative, reported in a companion study in this issue. We subsequently examined promising signals in 11,569 additional individuals. Overall, we identify strongly associated variants in eleven loci previously implicated in lipid metabolism (ABCA1, the APOA5-APOA4-APOC3-APOA1 and APOE-APOC clusters, APOB, CETP, GCKR, LDLR, LPL, LIPC, LIPG and PCSK9) and also in several newly identified loci (near MVK-MMAB and GALNT2, with variants primarily associated with high-density lipoprotein (HDL) cholesterol; near SORT1, with variants primarily associated with low-density lipoprotein (LDL) cholesterol; near TRIB1, MLXIPL and ANGPTL3, with variants primarily associated with triglycerides; and a locus encompassing several genes near NCAN, with variants strongly associated with both triglycerides and LDL cholesterol). Notably, the 11 independent variants associated with increased LDL cholesterol concentrations in our study also showed increased frequency in a sample of coronary artery disease cases versus controls. abstract_id: PUBMED:28969633 Relation between low-density lipoprotein cholesterol/apolipoprotein B ratio and triglyceride-rich lipoproteins in patients with coronary artery disease and type 2 diabetes mellitus: a cross-sectional study. Background: The low-density lipoprotein cholesterol/apolipoprotein B (LDL-C/apoB) ratio has conventionally been used as an index of the LDL-particle size. Smaller LDL-particle size is associated with triglyceride (TG) metabolism disorders, often leading to atherogenesis. We investigated the association between the LDL-C/apoB ratio and TG metabolism in coronary artery disease (CAD) patients with diabetes mellitus (DM). Methods: In the cross-sectional study, the LDL-C/apoB ratio, which provides an estimate of the LDL-particle size, was calculated in 684 consecutive patients with one additional risk factor. The patients were classified into 4 groups based on the presence or absence of CAD and DM, as follows: CAD (-) DM (-) group, n = 416; CAD (-) DM (+) group, n = 118; CAD (+) DM (-) group, n = 90; CAD (+) DM (+) group, n = 60. Results: A multi-logistic regression analysis after adjustments for coronary risk factors revealed that the CAD (+) DM (+) condition was an independent predictor of the smallest LDL-C/apoB ratio among the four groups. Furthermore, multivariate regression analyses identified elevated TG-rich lipoprotein (TRL)-related markers (TG, very-LDL fraction, remnant-like particle cholesterol, apolipoprotein C-II, and apolipoprotein C-III) as being independently predictive of a smaller LDL-particle size in both the overall subject population and a subset of patients with a serum LDL-C level < 100 mg/dL. In the 445 patients followed up for at least 6 months, multi-logistic regression analyses identified increased levels of TRL-related markers as being independently predictive of a decreased LDL-C/apoB ratio, which is indicative of smaller LDL-particle size. Conclusions: The association between disorders of TG metabolism and LDL heterogeneity may account for the risk of CAD in patients with DM. 
Combined evaluation of TRL-related markers and the LDL-C/apoB ratio may be of increasing importance in the risk stratification of CAD patients with DM. Further studies are needed to investigate the useful clinical indices and outcomes of these patients. Clinical Trial Registration UMIN (http://www.umin.ac.jp/) Study ID: UMIN000028029 retrospectively registered 1 July 2017. abstract_id: PUBMED:18059210 The metabolic syndrome and dyslipidemia among Asian Indians: a population with high rates of diabetes and premature coronary artery disease. South Asians have high rates of diabetes and the highest rates of premature coronary artery disease in the world, both occurring about 10 years earlier than in other populations. The metabolic syndrome (MS), which appears to be the antecedent or "common soil" for both of these conditions, is also common among South Asians. Because South Asians develop metabolic abnormalities at a lower body mass index and waist circumference than other groups, conventional criteria underestimate the prevalence of MS by 25% to 50%. The proposed South Asian Modified National Cholesterol Education Program criteria that use abdominal obesity as an optional component and the South Asian-specific waist circumference recommended by the International Diabetes Federation appear to be more appropriate in this population. Furthermore, Asian Indians have at least double the risk of coronary artery disease than that of whites, even when adjusted for the presence of diabetes and MS. This increased risk appears to be due to South Asian dyslipidemia, which is characterized by high serum levels of apolipoprotein B, lipoprotein (a), and triglycerides and low levels of apolipoprotein A1 and high-density lipoprotein (HDL) cholesterol. In addition, the HDL particles are small, dense, and dysfunctional. MS needs to be recognized as a looming danger to South Asians and treated with aggressive lifestyle modifications beginning in childhood and at a lower threshold than in other populations. abstract_id: PUBMED:19491209 Apolipoprotein B but not LDL cholesterol is associated with coronary artery calcification in type 2 diabetic whites. Objective: Evidence favors apolipoprotein B (apoB) over LDL cholesterol as a predictor of cardiovascular events, but data are lacking on coronary artery calcification (CAC), especially in type 2 diabetes, where LDL cholesterol may underestimate atherosclerotic burden. We investigated the hypothesis that apoB is a superior marker of CAC relative to LDL cholesterol. Research Design And Methods: We performed cross-sectional analyses of white subjects in two community-based studies: the Penn Diabetes Heart Study (N = 611 type 2 diabetic subjects, 71.4% men) and the Study of Inherited Risk of Coronary Atherosclerosis (N = 803 nondiabetic subjects, 52.8% men) using multivariate analysis of apoB and LDL cholesterol stratified by diabetes status. Results: In type 2 diabetes, apoB was associated with CAC after adjusting for age, sex, and medications [Tobit regression ratio of increased CAC for 1-SD increase in apoB; 1.36 (95% CI 1.06-1.75), P = 0.016] whereas LDL cholesterol was not [1.09 (0.85-1.41)]. In nondiabetic subjects, both were associated with CAC [apoB 1.65 (1.38-1.96), P < 0.001; LDL cholesterol 1.56 (1.30-1.86), P < 0.001]. 
In combined analysis of diabetic and nondiabetic subjects, apoB provided value in predicting CAC scores beyond LDL cholesterol, total cholesterol, the total cholesterol/HDL cholesterol and triglyceride/HDL cholesterol ratios, and marginally beyond non-HDL cholesterol. Conclusions: Plasma apoB, but not LDL cholesterol, levels were associated with CAC scores in type 2 diabetic whites. ApoB levels may be particularly useful in assessing atherosclerotic burden and cardiovascular risk in type 2 diabetes. abstract_id: PUBMED:17932584 Hypertriglyceridemic waist: a useful screening phenotype in preventive cardiology? The worldwide increase in the prevalence and incidence of type 2 diabetes represents a tremendous challenge for the Canadian health care system, especially if we consider that this phenomenon may largely be explained by the epidemic of obesity. However, despite the well-recognized increased morbidity and mortality associated with an elevated body weight, there is now more and more evidence highlighting the importance of intra-abdominal adipose tissue (visceral adipose tissue) as the fat depot conveying the greatest risk of metabolic complications. In this regard, body fat distribution, especially visceral adipose tissue accumulation, has been found to be a key correlate of a cluster of diabetogenic, atherogenic, prothrombotic and inflammatory metabolic abnormalities now often referred to as the metabolic syndrome. This dysmetabolic profile is predictive of a substantially increased risk of coronary artery disease (CAD) even in the absence of hyperglycemia, elevated low-density lipoprotein cholesterol or hypertension. For instance, some features of the metabolic syndrome (hyperinsulinemia, elevated apolipoprotein B and small low-density lipoprotein particles--the so-called atherogenic metabolic triad) have been associated with a more than 20-fold increase in the risk of ischemic heart disease in middle-aged men enrolled in the Quebec Cardiovascular Study. This cluster of metabolic complications has also been found to be predictive of a substantially increased risk of CAD beyond the presence of traditional risk factors. These results emphasize the importance of taking into account in daily clinical practice the presence of metabolic complications associated with abdominal obesity together with traditional risk factors to properly evaluate the cardiovascular risk profile of patients. From a risk assessment standpoint, on the basis of additional work conducted by several groups, there is now evidence that the simultaneous presence of an elevated waist circumference and fasting triglyceride levels (a condition that has been described as hypertriglyceridemic waist) may represent a relevant first-step approach to identify a subgroup of individuals at higher risk of being carriers of the features of the metabolic syndrome. Moreover, a moderate weight loss in initially abdominally obese patients is associated with a selective mobilization of visceral adipose tissue, leading to improvements in the metabolic risk profile predictive of a reduced risk of CAD and type 2 diabetes. In conclusion, hypertriglyceridemic waist as a marker of visceral obesity and related metabolic abnormalities is a useful and practical clinical phenotype to screen persons at risk for CAD and type 2 diabetes. 
abstract_id: PUBMED:7810485 Relation of angiographically defined coronary artery disease and plasma concentrations of insulin, lipid, and apolipoprotein in normolipidemic subjects with varying degrees of glucose tolerance. We investigated the association between hyperinsulinemia and changes in lipid, lipoprotein, and apolipoprotein that would increase the risk of coronary artery disease (CAD) independent of glucose tolerance. A coronary angiogram was recorded in 127 male subjects, including 41 with normal glucose tolerance, 41 with impaired glucose tolerance, and 45 with non-insulin-dependent diabetes mellitus (NIDDM). Subjects were divided into 2 groups according to results: the group with CAD (n = 94) and the group with normal coronary arteries (n = 33). All subjects were normolipidemic (total cholesterol < 230 mg/dl and triglycerides < 150 mg/dl). The CAD group had a significantly lower plasma level of high-density lipoprotein (HDL) cholesterol and apolipoprotein A-I (apo A-I) and a higher level of apolipoprotein B (apo B) than the normal group with normal glucose tolerance. In considering subjects with impaired glucose tolerance or NIDDM, the CAD group had a significantly lower plasma level of HDL cholesterol and apo A-I and a significantly higher plasma level of total cholesterol, triglycerides, and apo B than the normal group. In each of the subjects with normal and impaired glucose tolerance, and NIDDM, the elevation of plasma insulin concentration during both the complete test period and the early phase of an oral glucose challenge was significantly higher in the CAD than in the normal group. In all subjects, graded reductions in HDL cholesterol and apo A-I and graded increases in plasma total cholesterol, triglycerides, and apo B were observed with increasing tertiles of the postglucose challenge measurements of insulinemia.(ABSTRACT TRUNCATED AT 250 WORDS) Answer: Yes, increased apolipoprotein B (Apo B) is considered a major factor enhancing the risk of coronary artery disease (CAD) in type 2 diabetes. Studies have shown that Apo B concentrations are increased in diabetic subjects even when levels of low-density lipoprotein cholesterol (LDLc) are normal and in the absence of CAD. This suggests that diabetes per se enhances Apo B concentrations, which could be one of the mechanisms of accelerated CAD in diabetes. Hyper Apo B is more common than hyper LDLc in CAD subjects, indicating that Apo B abnormalities exist in a large percentage of CAD subjects despite having normal levels of LDLc. Therefore, Apo B may serve as an index of CAD risk (PUBMED:12421026). Additionally, evidence supports that apolipoprotein B (apoB) is a superior marker of coronary artery calcification (CAC) relative to LDL cholesterol, especially in type 2 diabetes, where LDL cholesterol may underestimate atherosclerotic burden. In type 2 diabetic whites, apoB levels were associated with CAC scores, suggesting that apoB levels may be particularly useful in assessing atherosclerotic burden and cardiovascular risk in type 2 diabetes (PUBMED:19491209). Furthermore, the metabolic syndrome and dyslipidemia, which are common among South Asians, are characterized by high serum levels of apolipoprotein B, among other lipids, and are predictive of a substantially increased risk of CAD even in the absence of hyperglycemia, elevated LDL cholesterol, or hypertension (PUBMED:18059210). 
In summary, increased Apo B is a significant factor that enhances the risk of CAD in individuals with type 2 diabetes, and it provides better information regarding the risk of CAD than LDL cholesterol alone.
Instruction: Intramedullary fixation of failed plated femoral diaphyseal fractures: are bone grafts necessary? Abstracts: abstract_id: PUBMED:18784586 Intramedullary fixation of failed plated femoral diaphyseal fractures: are bone grafts necessary? Background: Nonunited fracture shaft femur after plate fixation is a common problem in third world countries because of economic reasons. Management of such a problem is still controversial and is associated with many surgical details, due not only to the nonunited fracture itself, but also to the broken implant which is not easy to remove. Methods: This study is a randomized prospective study presenting 40 patients with aseptic nonunited fracture shaft femur associated with failed plating managed by the removal of hardware, and intramedullary fixation using an interlocking nail with or without autogenous iliac bone graft. Results: There was no statistically significant difference between patients with and without iliac autogenous bone graft regarding the demographic data, the preoperative condition, and the postoperative course including time needed for bone union and return to work. The statistically significant difference was in the intraoperative blood loss and the duration of surgery with less blood loss and shorter duration of surgery occurring in the group treated by reamed intramedullary nail without iliac bone graft. Conclusion: In cases with aseptic nonunited fracture shaft femur after failed plating, intramedullary reamed nailing without autogenous bone graft produced similar results as with bone graft, but with less operating time and blood loss. abstract_id: PUBMED:22295492 Treatment of femoral neck fractures after the fixation of ipsilateral femoral shaft by antegrade intramedullary nail Objective: To investigate the treatment of femoral neck fractures after the fixation of ipsilateral femoral shaft fracture by antegrade intramedullary nail. Methods: A retrospective study on 12 patients with femoral neck fractures after the fixation of ipsilateral femoral shaft fracture by antegrade intramedullary nail, which were identified intraoperatively or postoperatively from January 2000 to January 2010. All the patients were treated with 2 supplemental screws placed anteriorly and posteriorly to the intramedullary nail separately. All the patients were periodically followed up, and fracture union and functional recovery were evaluated. Results: All the patients were followed up, and the duration ranged from 10 to 36 months (averaged 16.5 months). The mean healing time was 3.6 months in femoral neck fractures and 5.4 months in femoral shaft fractures. No osteonecrosis of femoral head was found. According to Harris scoring system for hip function, 7 patients got an excellent result, 3 good, 2 fair. Conclusion: Treatment of femoral neck fractures after the fixation of ipsilateral femoral shaft by antegrade intramedullary nail with 2 screws placed anteriorly and posteriorly to the intramedullary nail separately is feasible, and has the advantages of reliable fixation, less trauma and high rate of fracture healing. abstract_id: PUBMED:29292893 Interlock plate fixation for the treatment of femoral hypertrophic nonunions after intramedullary nailing fixation Objective: To investigate the clinical effect of locking plate-assisted intramedullary nailing in treating femoral hypertrophic nonunions after intramedullary fixation. 
Methods: From January 2006 to December 2015, clinical data of 40 patients with femoral nonunions after intramedullary nail internal fixation treated with interlock plate internal fixation were retrospectively analyzed. Among patients, there were 22 males and 18 females, aged from 21 to 60 years old with an average age of (35.0±2.2) years. The time of bone nonunion ranged from 9 to 24 months with an average of (14.1±1.5) months. Operative time, blood loss, hospital stay, complications, bone healing time and recovery of function were observed, and Evanich scoring was applied to evaluate clinical effects. Results: All patients were followed up from 12 to 24 months with an average of (15.2±2.7) months. Operative time ranged from 105.1 to 130.2 min with an average of (112.5±10.2) min; blood loss ranged from 207.0 to 250.2 ml with an average of (220.6±14.7) ml; hospital stay ranged from 10 to 15 days with an average of (12.2±1.5) d. All patients obtained bone healing from 4 to 12 months after additional plate internal fixation, with an average of (6.2±1.9) months. No implant failure or infection occurred after operation. According to Evanich scoring of the knee joint, the total score was 83.2±5.6; 22 cases obtained excellent results, 17 good and 1 fair. Conclusions: Limited incision approach locking plate with original intramedullary nail fixation for femoral hypertrophic nonunions subsequent to intramedullary fixation could achieve good results, increase the stability of the fracture, and provide an environment for callus growth. It had the advantages of a high cure rate, less trauma and fewer complications, and allowed earlier functional exercise to promote good recovery of the knee joint. abstract_id: PUBMED:10217235 Effect of femoral fracture and intramedullary fixation on lung capillary leak. Background: Pulmonary injury is an important complication in the trauma patient with long-bone fractures. The purpose of this study was to determine the effect of femoral fracture or fracture and intramedullary fixation on lung capillary leak. The contribution of leukocytes to lung injury in this model was also determined. Methods: The pulmonary capillary filtration coefficient was determined in lungs of rats after femur fracture or fracture and reamed or unreamed intramedullary fixation. Pulmonary arterial vascular resistance and lung neutrophil content were also determined. Results: Fracture alone did not cause lung injury, whereas fracture and intramedullary fixation elicited lung capillary leak. Fracture alone and intramedullary fixation increased pulmonary vascular resistance, whereas unreamed intramedullary fixation caused lung leukosequestration. Conclusion: Femoral fracture alone does not cause an increase in pulmonary microvascular permeability. Femoral fracture and intramedullary fixation causes lung capillary leak, which is not increased by reaming the femoral canal. abstract_id: PUBMED:31823542 Research progress of augmentation plate for femoral shaft nonunion after intramedullary nail fixation Objective: To review the history, current situation, and progress of augmentation plate (AP) for femoral shaft nonunion after intramedullary nail fixation. Methods: The results of the clinical studies about the AP in treatment of femoral shaft nonunion after intramedullary nail fixation in recent years were widely reviewed and analyzed. Results: The AP has been successfully applied to femoral shaft nonunion after intramedullary nail fixation since 1997. 
According to breakage of the previous nailing, AP is divided into two categories: AP with retaining the previous intramedullary nail and AP with exchanging intramedullary nail. AP is not only suitable for simple nonunion, but also for complex nonunion with severe deformity. Compared with exchanging intramedullary nail, lateral plate, and dual plate, AP has less surgical trauma, shorter healing time, higher healing rate, and faster returning to society. However, there are still some problems with the revision method, including difficulty in bicortical screw fixation, lack of anatomic plate suitable for femoral shaft nonunion, and lack of postoperative function and quality of life assessment. Conclusion: Compared with other revision methods, AP could achieve higher fracture healing rate and better clinical prognosis for patients with femoral shaft nonunion. However, whether patients benefit from AP in terms of function and quality of life remain uncertain. Furthermore, high-quality randomized controlled clinical studies are needed to further confirm that AP are superior to the other revision fixations. abstract_id: PUBMED:25636533 Intramedullary fixation of a femoral shaft fracture with preservation of an existing hip resurfacing prosthesis. Femoral neck fractures have been reported as a cause for failure in patients with a hip resurfacing arthroplasty. However, the incidence and management of fractures of the femoral shaft with an ipsilateral hip resurfacing arthroplasty is relatively absent in current literature. Although, the gold standard for the fixation of a closed femoral shaft fracture is with the use of an intramedullary nail, this can be a challenge in the presence of a hip resurfacing arthroplasty. We describe the case of anterograde intramedullary nail fixation for a femoral shaft fracture in a patient with an ipsilateral hip resurfacing arthroplasty in situ. abstract_id: PUBMED:24077687 Rigid intramedullary nail fixation of femoral fractures in adolescents: what evidence is available? Background: Femoral fracture in adolescents is a significant injury. It is generally agreed that operative fixation is the treatment of choice, and rigid intramedullary nail fixation is a treatment option. However, numerous types of rigid nails to fix adolescent femoral fractures have been described. Hence, the aim of this paper was to collate and evaluate the available evidence for managing diaphyseal femoral fractures in adolescents using rigid intramedullary nails. Materials And Methods: A literature search was undertaken using the healthcare database website ( http://www.library.nhs.uk/hdas ). Medline, CINAHL, Embase, and the Cochrane Library databases were searched to identify prospective and retrospective studies of rigid intramedullary nail fixation in the adolescent population. Results: The literature search returned 1,849 articles, among which 51 relevant articles were identified. Of these 51 articles, 23 duplicates were excluded, so a total of 28 articles were reviewed. First-generation nails had a high incidence of limb length discrepancy (Küntscher 5.8 %, Grosse-Kempf 9 %), whilst second-generation nails had a lower incidence (Russell-Taylor 1.7 %, AO 2.6 %). Avascular necrosis was noted with solid Ti nails (2.6 %), AO femoral nails (1.3 %) and Russell-Taylor nails (0.85 %). These complications have not been reported with the current generation of nails. Conclusions: Rigid intramedullary nail fixation of femoral fractures in adolescents is a useful procedure with good clinical results. 
A multiplanar design and lateral trochanteric entry are key to a successful outcome of titanium alloy nail fixation. abstract_id: PUBMED:35790362 Hip Fractures after Intramedullary Nailing Fixation for Atypical Femoral Fractures: Three Cases. Secondary hip fractures (SHFs) rarely occur after intramedullary nailing (IMN) fixation without femoral neck fixation for atypical femoral fractures (AFFs). We report three cases of older Japanese women who sustained SHFs presumably caused by osteoporosis and peri-implant stress concentration around the femoral neck after undergoing IMN without femoral neck fixation for AFF. All cases were fixed with malalignment. In AFF patients, postoperative changes due to postoperative femoral bone malalignment may affect the peri-implant mechanical environment around the femoral neck, which can result in insufficiency fractures. At the first AFF surgery, we recommend femoral neck fixation after adequate reduction is achieved. abstract_id: PUBMED:30526163 Intramedullary nails with cannulated screw fixation for the treatment of unstable femoral neck fractures. Objective: Unstable femoral neck fractures are typically high-angled shear fractures caused by high-energy trauma. Internal fixation of femoral neck fractures with placement of parallel cannulated screws in an inverted triangle configuration is commonly performed in the clinical setting. This study was performed to investigate the primary results of intramedullary nailing with cannulated screws for the treatment of unstable femoral neck fractures in young and middle-aged patients. Methods: In total, 96 consecutive patients with no history of hip surgery using inverted triangular cannulated compression screws or construction nails with cannulated screws were reviewed. Their demographic and radiological data were retrospectively collected from our institutional database. Results: Inverted cannulated screws had an excellent effect on decreasing the blood loss volume and incision size, but intramedullary nails exhibited superior advantages in decreasing screw exit and shortening the hospital stays. The Harris hip scores were comparable between the two groups. Conclusions: Intramedullary fixation with cannulated screws has advantages in treating complicated femoral neck fractures. Besides cannulated screws, intramedullary fixation with cannulated screws might be another method to treat unstable femoral neck fractures in young and middle-aged patients. The study was registered in ClinicalTrials.gov. Unique Protocol ID: 11156458. The ClinicalTrial number is NCT03550079. abstract_id: PUBMED:34749907 Hip fractures following intramedullary nailing fixation for femoral fractures. Introduction: Proximal peri-implant femoral fractures occur following intramedullary nailing (IMN) fixation for trochanteric, femoral shaft, and distal femoral fractures. However, analyses of secondary hip fractures (SHFs) using large clinical samples are lacking. Therefore, we aimed to report the incidence and clinical outcomes of SHF after nailing fixation (IMN or cephalomedullary nailing [CMN]) for overall femoral fractures. In addition, we focused on IMN for femoral shaft fractures and investigated the risk factors for SHF. Methods: This multicenter, retrospective, cohort study included 2,293 patients aged > 60 years who underwent nailing fixation for femoral fractures. The primary outcome was the incidence of SHF. In the assessment of clinical outcomes, we evaluated reoperation and the regaining of walking ability following SHF management. 
In addition, we conducted multivariable logistic regression analyses to examine the association between risk factors and SHF. Results: Seventeen (0.7%) patients had SHFs, including 12 femoral neck fractures and 5 trochanteric fractures. Antegrade IMN was the most common type of nailing fixation. Multivariable analysis demonstrated that the absence of femoral head fixation was significantly associated with the incidence of SHF following IMN for femoral shaft fractures (odds ratio, 17.0; 95% confidence interval, 1.9-2265.7; p=0.006). In the assessment of clinical outcomes, there were two reoperations (16.7%) in the secondary femoral neck fracture group. Patients with secondary trochanteric fractures tended to have a lower probability of regaining walking ability than those with secondary femoral neck fractures (20% vs. 50%). Conclusions: In this multicenter study, the incidence of SHF after nail fixation for femoral fractures was 0.7%. The absence of femoral head fixation was significantly associated with SHF, and the clinical outcomes were poor. Therefore, femoral head fixation at the initial IMN fixation for femoral fractures may be a fixation option for surgeons to consider as an SHF prevention measure. Answer: Intramedullary fixation of failed plated femoral diaphyseal fractures does not necessarily require bone grafts. A randomized prospective study with 40 patients who had aseptic nonunited fracture shaft femur associated with failed plating managed by the removal of hardware and intramedullary fixation using an interlocking nail with or without autogenous iliac bone graft found no statistically significant difference between patients with and without iliac autogenous bone graft regarding demographic data, preoperative condition, postoperative course including time needed for bone union, and return to work. The significant difference was in the intraoperative blood loss and the duration of surgery, with less blood loss and shorter duration of surgery occurring in the group treated by reamed intramedullary nail without iliac bone graft. This suggests that intramedullary reamed nailing without autogenous bone graft produced similar results as with bone graft but with less operating time and blood loss (PUBMED:18784586).
Instruction: Doing no harm? Abstracts: abstract_id: PUBMED:33048314 Complicity in Harm Reduction. At first glance, it seems difficult to object to any program that merits the label "harm reduction." If harm is bad, as everyone recognizes, then surely reducing it is good. What's the problem? The problem, we submit, is twofold. First, there's more to "harm reduction," as that term is typically used, than simply the reduction of harm. Some of the wariness about harm-reduction programs may result from the nebulous "more." Thus, part of our task is to provide a clear definition of harm reduction. Next, we turn to a second problem: a worry about complicity. Those who object to harm reduction programs fear that participation in such programs would make them complicit in activities they deem immoral. In this paper we argue that this fear is largely unwarranted. We use supervised injection sites (SISs)-safe spaces for the use of risky drugs-as our paradigmatic case of harm reduction. These SISs are generally offered in the hope of reducing harm to both the drug user and the public. For this reason, our analysis focuses on complicity in harm. We draw upon the work of Gregory Mellema as our framework. Mellema offers three ways one can be complicit in harm caused by another: by enabling, facilitating or condoning it. We argue that one who operates an SIS is not complicit in any of these ways, while also laying out the conditions that must be met if one is to argue that harm reduction entails complicity in non-consequentialist wrongdoing. abstract_id: PUBMED:20508804 Measuring self-harm behavior with the self-harm inventory. Self-harm behavior is exhibited by a substantial minority of the general population and may be particularly prevalent among adolescents and clinical samples, both in psychiatric and primary care settings. A number of measures are currently available for the assessment of self-harm behavior. These vary a great deal in terms of their content, response options, targeted clinical audience, time to complete, and availability. The Self-Harm Inventory, a measure that we developed for the assessment of self-harm behavior, is one-page in length, takes five or less minutes to complete, and is free-of-charge. Studies indicate that the Self-Harm Inventory does the following: 1) screens for the lifetime prevalence of 22 self-harm behaviors; 2) detects borderline personality symptomatology; and 3) predicts past mental healthcare utilization. Hopefully, more efficient assessment of self-harm behavior will lead to more rapid intervention and resolution. abstract_id: PUBMED:33027597 Young Peoples' Perspectives on the Role of Harm Reduction Techniques in the Management of Their Self-Harm: A Qualitative Study. Objective: Self-harm is a common phenomenon amongst young people, often used to regulate emotional distress. Over the last decade harm reduction approaches to self-harm have been introduced as a means to minimize risk and reinforce alternative coping strategies. However, there is a stark absence of research into the perceived usefulness of such techniques amongst adolescents, and previous studies have highlighted ethical concerns about advocating 'safer' forms of self-harm. This study aimed to investigate the perceived usefulness of harm reduction techniques for adolescents who self-harm. Method: We purposively recruited current clients of a British early intervention program supporting young people in managing self-harm. 
We conducted semi-structured interviews and analyzed transcripts using thematic analysis. Results: Eleven interviews with service users aged 14-15 years identified three main themes: (1) Controlling the uncontrollable; (2) Barriers to practising safer self-harm; and (3) Developing a broad repertoire of harm reduction techniques. Participants expressed mixed views regarding the usefulness of such approaches. Some described greater competence and empowerment in self-harm management, whilst others described the utility of harm reduction methods as either short-lived or situation-specific, with the potential for misuse of anatomical knowledge to cause further harm to high-risk adolescents. Conclusion: The findings from our sample suggest harm reduction techniques have a place in self-harm management for some individuals, but their usage should be monitored and offered alongside alternative strategies and therapeutic support. Our study highlights the need for further research on who would benefit from these techniques and how they can be implemented successfully. Highlights: Harm reduction can help people who self-harm manage distress and maintain autonomy. People who self-harm have a broad repertoire of harm reduction techniques. Harm reduction can help reduce long-term damage and frequency of self-harm. abstract_id: PUBMED:37979016 The harm threshold and Mill's harm principle. The Harm Threshold (HT) holds that the state may interfere in medical decisions parents make on their children's behalf only when those decisions are likely to cause serious harm to the child. Such a high bar for intervention seems incompatible with both parental obligations and the state's role in protecting children's well-being. In this paper, I assess the theoretical underpinnings for the HT, focusing on John Stuart Mill's Harm Principle as its most plausible conceptual foundation. I offer (i) a novel, text-based argument showing that Mill's Harm Principle does not give justificatory force to the HT; and (ii) a positive account of some considerations which, beyond significant harm, would comprise an intervention principle normatively grounded in Mill's ethical theory. I find that substantive recommendations derived from Mill's socio-political texts are less laissez-faire than they have been interpreted by HT proponents. Justification for state intervention owes not to the severity of a harm, but to whether that harm arises from the failure to satisfy one's duty. Thus, a pediatric intervention principle derived from Mill ought not to be oriented around the degree of harm caused by a parent's healthcare decision, but rather, the kind of harm-specifically, whether the harm arises from violation of parental obligation. These findings challenge the interpretation of Mill adopted by HT proponents, eliminating a critical source of justification for a protected domain of parental liberty and reorienting the debate to focus on parental duties. abstract_id: PUBMED:34172102 Harm minimisation for the management of self-harm: a mixed-methods analysis of electronic health records in secondary mental healthcare. Background: Prevalence of self-harm in the UK was reported as 6.4% in 2014. Despite sparse evidence for effectiveness, guidelines recommend harm minimisation: a strategy in which people who self-harm are supported to do so safely. Aims: To determine the prevalence, sociodemographic and clinical characteristics of those who self-harm and practise harm minimisation within a London mental health trust. 
Method: We included electronic health records for patients treated by South London and Maudsley NHS Trust. Using an iterative search strategy, we identified patients who practise harm minimisation, then classified the approaches using a content analysis. We compared the sociodemographic characteristics with that of a control group of patients who self-harm and do not use harm minimisation. Results: In total 22 736 patients reported self-harm, of these 693 (3%) had records reporting the use of harm-minimisation techniques. We coded the approaches into categories: (a) 'substitution' (>50% of those using harm minimisation), such as using rubber bands or using ice; (b) 'simulation' (9%) such as using red pens; (c) 'defer or avoid' (7%) such as an alternative self-injury location; (d) 'damage limitation' (9%) such as using antiseptic techniques; the remainder were unclassifiable (24%). The majority of people using harm minimisation described it as helpful (>90%). Those practising harm minimisation were younger, female, of White ethnicity, had previous admissions and were less likely to have self-harmed with suicidal intent. Conclusions: A small minority of patients who self-harm report using harm minimisation, primarily substitution techniques, and the large majority find harm minimisation helpful. More research is required to determine the acceptability and effectiveness of harm-minimisation techniques and update national clinical guidelines. abstract_id: PUBMED:37481649 Gambling harm prevention and harm reduction in online environments: a call for action. Background: Gambling is increasingly offered and consumed in online and mobile environments. The digitalisation of the gambling industry poses new challenges on harm prevention and harm reduction. The digital environment differs from traditional, land-based gambling environments. It increases many risk-factors in gambling, including availability, ease-of-access, but also game characteristics such as speed and intensity. Furthermore, data collected on those gambling in digital environments makes gambling offer increasingly personalised and targeted. Main Results: This paper discusses how harm prevention and harm reduction efforts need to address gambling in online environments. We review existing literature on universal, selective, and indicated harm reduction and harm prevention efforts for online gambling and discuss ways forward. The discussion shows that there are several avenues forward for online gambling harm prevention and reduction at each of the universal, selective, and indicated levels. No measure is likely to be sufficient on its own and multi-modal as well as multi-level interventions are needed. Harm prevention and harm reduction measures online also differ from traditional land-based efforts. Online gambling providers utilise a variety of strategies to enable, market, and personalise their products using data and the wider online ecosystem. Conclusion: We argue that these same tools and channels should also be used for preventive work to better prevent and reduce the public health harms caused by online gambling. abstract_id: PUBMED:33123962 Toward a Philosophy of Harm Reduction. In this paper, I offer a prolegomenon to the philosophy of harm reduction. 
I begin with an overview of the philosophical literature on both harm and harm reduction, and a brief summary of harm reduction scholarship outside of philosophy in order to make the case that philosophers have something to contribute to understanding harm reduction, and moreover that engagement with harm reduction would improve philosophical scholarship. I then proceed to survey and assess the nascent and still modest philosophy of harm reduction literature that has begun to emerge. I pay particular attention to two Canadian philosophers who have called for the expansion of harm reduction beyond the realm of so-called "vice" (that is, addiction, intoxicants and sex work). Finally, I sketch some of the most interesting and important philosophical issues that I think the philosophy of harm reduction must grapple with going forward. abstract_id: PUBMED:30826976 The harm of medical disorder as harm in the damage sense. Jerome Wakefield has argued that a disorder is a harmful dysfunction. This paper develops how Wakefield should construe harmful in his harmful dysfunction analysis (HDA). Recently, Neil Feit has argued that classic puzzles involved in analyzing harm render Wakefield's HDA better off without harm as a necessary condition. Whether or not one conceives of harm as comparative or non-comparative, the concern is that the HDA forces people to classify as mere dysfunction what they know to be a disorder. For instance, one can conceive of cases where simultaneous disorders prevent each other from being, in any traditional sense, actually harmful; in such cases, according to the HDA, neither would be a disorder. I argue that the sense of harm that Wakefield should employ in the HDA is dispositional, similar to the sense of harm used when describing a vial of poison: "Be careful! That's poison. It's harmful." I call this harm in the damage sense. Using this sense of harm enables the HDA to avoid Feit's arguments, and thus it should be preferred to other senses when analyzing harmful dysfunction. abstract_id: PUBMED:33716854 The Co-occurrence of Self-Harm and Aggression: A Cognitive-Emotional Model of Dual-Harm. There is growing evidence that some individuals engage in both self-harm and aggression during the course of their lifetime. The co-occurrence of self-harm and aggression is termed dual-harm. Individuals who engage in dual-harm may represent a high-risk group with unique characteristics and pattern of harmful behaviours. Nevertheless, there is an absence of clinical guidelines for the treatment and prevention of dual-harm and a lack of agreed theoretical framework that accounts for why people may engage in this behaviour. The present work aimed to address this gap in the literature by providing a narrative review of previous research of self-harm, aggression and dual-harm, and through doing so, presenting an evidence-based theory of dual-harm - the cognitive-emotional model of dual-harm. This model draws from previous studies and theories, including the General Aggression Model, diathesis-stress models and emotional dysregulation theories. The cognitive-emotional model highlights the potential distal, proximal and feedback processes of dual-harm, the role of personality style and the possible emotional regulation and interpersonal functions of this behaviour. In line with our theory, various clinical and research implications for dual-harm are suggested, including hypotheses to be tested by future studies. abstract_id: PUBMED:36411711 Research progress in dual harm. 
Growing evidence supports an association between self-harm and aggression, and the two behaviors frequently co-occur. The co-occurrence of self-harm and aggression is termed as dual harm. Existing evidence reveals that rather than a simple co-occurrence of self-harm and aggression, dual-harm may be an independent behavior different from a sole harm behavior. Identification of characteristics and influencing factors for dual harm may help develop effective prevention strategies for the affected population. A review of the literature yields a high incidence of dual harm among the clinical and forensic populations. Individuals with dual harm engage in an earlier, more frequent, and a wider range of harmful behaviors, with the increased use of lethal methods. Common factors that affect dual harm include childhood adversities, emotional dysregulation, personality traits, mental disorders, and biological factors. A comprehensive theory to explain dual harm is still not available. Currently, two main theories for explaining mechanism of dual harm are the two-stage model of countervailing forces and the cognitive-emotional model. An in-depth exploration of characteristics, influencing factors, and theories for dual harm is of great importance for the prevention of dual harm. Answer: The concept of "doing no harm" is a fundamental ethical principle in various fields, including medicine, public health, and social services. However, when it comes to harm reduction strategies, the interpretation of this principle can become complex. Harm reduction refers to policies, programs, and practices that aim to minimize the negative health, social, and legal impacts associated with certain behaviors, such as drug use or self-harm, without necessarily eliminating the behaviors themselves (PUBMED:33048314; PUBMED:33027597; PUBMED:34172102). In the context of harm reduction, "doing no harm" may not mean preventing all harmful behaviors outright but rather reducing the risks and potential damage associated with them. For example, supervised injection sites (SISs) are designed to provide a safer environment for drug users, thereby reducing harm to both the individual and the public (PUBMED:33048314). Similarly, harm minimization strategies for self-harm, such as the use of less dangerous methods or the implementation of coping strategies, aim to reduce the severity and frequency of self-harm incidents (PUBMED:33027597; PUBMED:34172102). The Self-Harm Inventory is a tool developed to assess self-harm behavior, which can help in identifying individuals at risk and providing timely interventions (PUBMED:20508804). The harm threshold concept, based on John Stuart Mill's Harm Principle, suggests that state intervention in parental medical decisions for children should be based not on the severity of harm but on whether the harm arises from a violation of parental obligations (PUBMED:37979016). In the realm of online gambling, harm prevention and reduction efforts are needed to address the unique challenges posed by the digital environment, such as increased availability and targeted marketing strategies (PUBMED:37481649). Furthermore, the co-occurrence of self-harm and aggression, termed dual-harm, requires a deeper understanding of the characteristics and influencing factors to develop effective prevention strategies (PUBMED:36411711; PUBMED:33716854). 
Overall, "doing no harm" in the context of harm reduction is about mitigating the adverse effects of certain behaviors and promoting safer practices, rather than an absolute prohibition of those behaviors. It involves a nuanced approach that balances individual autonomy with public health and safety considerations.
Instruction: Is pregnancy a teachable moment for diet and physical activity behaviour change? Abstracts: abstract_id: PUBMED:27287546 Is pregnancy a teachable moment for diet and physical activity behaviour change? An interpretative phenomenological analysis of the experiences of women during their first pregnancy. Objectives: Pregnancy may provide a 'teachable moment' for positive health behaviour change, as a time when women are both motivated towards health and in regular contact with health care professionals. This study aimed to investigate whether women's experiences of pregnancy indicate that they would be receptive to behaviour change during this period. Design: Qualitative interview study. Methods: Using interpretative phenomenological analysis, this study details how seven women made decisions about their physical activity and dietary behaviour during their first pregnancy. Results: Two women had required fertility treatment to conceive. Their behaviour was driven by anxiety and a drive to minimize potential risks to the pregnancy. This included detailed information seeking and strict adherence to diet and physical activity recommendations. However, the majority of women described behaviour change as 'automatic', adopting a new lifestyle immediately upon discovering their pregnancy. Diet and physical activity were influenced by what these women perceived to be normal or acceptable during pregnancy (largely based on observations of others) and internal drivers, including bodily signals and a desire to retain some of their pre-pregnancy self-identity. More reasoned assessments regarding benefits for them and their baby were less prevalent and influential. Conclusions: Findings suggest that for women who conceived relatively easily, diet and physical activity behaviour during pregnancy is primarily based upon a combination of automatic judgements, physical sensations, and perceptions of what pregnant women are supposed to do. Health professionals and other credible sources appear to exert less influence. As such, pregnancy alone may not create a 'teachable moment'. Statement of contribution What is already known on this subject? Significant life events can be cues to action with relation to health behaviour change. However, much of the empirical research in this area has focused on negative health experiences such as receiving a false-positive screening result and hospitalization, and in relation to unequivocally negative behaviours such as smoking. It is often suggested that pregnancy, as a major life event, is a 'teachable moment' (TM) for lifestyle behaviour change due to an increase in motivation towards health and regular contact with health professionals. However, there is limited evidence for the utility of the TM model in predicting or promoting behaviour change. What does this study add? Two groups of women emerged from our study: the women who had experienced difficulties in conceiving and had received fertility treatment, and those who had conceived without intervention. The former group's experience of pregnancy was characterized by a sense of vulnerability and anxiety over sustaining the pregnancy which influenced every choice they made about their diet and physical activity. For the latter group, decisions about diet and physical activity were made immediately upon discovering their pregnancy, based upon a combination of automatic judgements, physical sensations, and perceptions of what is normal or 'good' for pregnancy. 
Among women with relatively trouble-free conception and pregnancy experiences, the necessary conditions may not be present to create a 'teachable moment'. This is due to a combination of a reliance on non-reflective decision-making, perception of low risk, and little change in affective response or self-concept. abstract_id: PUBMED:34993005 Understanding pregnancy as a teachable moment for behaviour change: a comparison of the COM-B and teachable moments models. Objectives: Theoretical models have informed the understanding of pregnancy as a 'teachable moment' for health behaviour change. However, these models have not been developed specifically for, nor widely tested, in this population. Currently, no pregnancy-specific model of behaviour change exists, which is important given it is a unique yet common health event. This study aimed to assess the extent to which factors influencing antenatal behaviour change are accounted for by the COM-B model and Teachable Moments (TM) model and to identify which model is best used to understand behaviour change during pregnancy. Design: Theoretical mapping exercise. Methods: A deductive approach was adopted; nine sub-themes identified in a previous thematic synthesis of 92 studies were mapped to the constructs of the TM and COM-B models. The sub-themes reflected factors influencing antenatal health behaviour. Findings: All sub-themes mapped to the COM-B model constructs, whereas the TM model failed to incorporate three sub-themes. Missed factors were non-psychological, including practical and environmental factors, social influences, and physical pregnancy symptoms. In contrast to the COM-B model, the TM model provided an enhanced conceptual understanding of pregnancy as a teachable moment for behaviour change, however, neither model accounted for the changeable salience of influencing factors throughout the pregnancy experience. Conclusions: The TM and COM-B models are both limited when applied within the context of pregnancy. Nevertheless, both models offer valuable insight that should be drawn upon when developing a pregnancy-specific model of behaviour change. abstract_id: PUBMED:38378517 A cross-sectional analysis of factors associated with the teachable moment concept and health behaviors during pregnancy. Background: Pregnancy is often associated with a change in health behaviors, leading some to suggest that pregnancy could be a teachable moment for lifestyle change. However, the prevalence and underlying mechanism of this phenomenon is not well understood. The aim of this study is to explore the prevalence of a teachable moment during pregnancy, the psychosocial factors that are associated with experiencing such a moment, and its association with actual health behaviors. Methods: In this cross-sectional study, 343 pregnant Dutch women completed an online questionnaire. Participants reported on their intentions to change lifestyle due to pregnancy, their current health behaviors, and several psychosocial factors that were assumed to be linked to perceiving a teachable moment during pregnancy: perceived risk, affective impact, changed self-concept, and social support. Multivariable linear and logistic regression were applied to the data analysis. Results: Results demonstrate that 56% of the women experienced a teachable moment based on intentions to change their health behavior. 
Multivariate regression analyses revealed that changed self-concept (β = 0.21; CI = 0.11-0.31), positive affect (β = 0.28; CI = 0.21-0.48), and negative affect (β = 0.12; CI = 0.00-0.15) were associated with higher intentions to change health behavior. Conversely, more perceived risk was associated with lower intentions to change health behavior (β = -0.29; CI = -0.31 to -0.13). Multivariate regression analyses showed a positive association between intentions to change health behavior and diet quality (β = 0.11; CI = 0.82-1.64) and physical activity (OR = 2.88; CI = 1.66-5.00). Conclusions: This study suggests that pregnancy may be experienced as a teachable moment, thereby providing an important window of opportunity for healthcare professionals to efficiently improve health behaviors and health in pregnant women and their children. Results suggest that healthcare professionals should link communication about pregnancy-related health behaviors to a pregnant woman's change in identity, affective impact (predominantly positive affective impact) and risk perception to stimulate the motivation to change health behavior positively. abstract_id: PUBMED:35672937 Putting the 'teachable moment' in context: A view from critical health psychology. The concept of 'Teachable Moment' (TM) is an increasingly used term within mainstream health psychology in relation to interventions and health behaviour change. It refers to a naturally occurring health event where individuals may be motivated to change their behaviours from unhealthy ones to healthier choices. Pregnancy is seen as a key time for behaviour change interventions, partly due to the idea that the mother has increased motivations to protect her unborn child. This paper proposes a Critical Health Psychological (CHP) re-examination of the concept and explores the 'teachable moment' within a wider framing of contemporary parenting ideologies in order to offer a more critical, nuanced and contextual consideration of pregnancy and the transition to motherhood. The paper locates these discussions using an example of alcohol usage in pregnancy. In doing so, this paper is the first of its kind to consider the 'teachable moment' from a critical health psychological perspective. abstract_id: PUBMED:26626592 Beyond the 'teachable moment' - A conceptual analysis of women's perinatal behaviour change. Background: Midwives are increasingly expected to promote healthy behaviour to women and pregnancy is often regarded as a 'teachable moment' for health behaviour change. This view focuses on motivational aspects, whereas a richer analysis of behaviour change may be achieved by viewing the perinatal period through the lens of the Capability-Opportunity-Motivation Behaviour framework. This framework proposes that behaviour has three necessary determinants: capability, opportunity, and motivation. Aim: To outline a broader analysis of perinatal behaviour change than is afforded by the existing conceptualisation of the 'teachable moment' by using the Capability-Opportunity-Motivation Behaviour framework. Findings: Research suggests that the perinatal period can be viewed as a time in which capability, opportunity or motivation naturally change such that unhealthy behaviours are disrupted, and healthy behaviours may be adopted.
Moving away from a sole focus on motivation, an analysis utilising the Capability-Opportunity-Motivation Behaviour framework suggests that changes in capability and opportunity may also offer opportune points for intervention, and that lack of capability or opportunity may act as barriers to behaviour change that might be expected based solely on changes in motivation. Moreover, the period spanning pregnancy and the postpartum could be seen as a series of opportune intervention moments, that is, personally meaningful episodes initiated by changes in capability, opportunity or motivation. Discussion: This analysis offers new avenues for research and practice, including identifying discrete events that may trigger shifts in capability, opportunity or motivation, and whether and how interventions might promote initiation and maintenance of perinatal health behaviours. abstract_id: PUBMED:33201632 Teachable moments: the right moment to make patients change their lifestyle Healthcare professionals can play a significant role in the prevention of lifestyle-related diseases. Yet there is still relatively little attention to lifestyle counseling, partly because of limited available time and doubts about its effectiveness. During so-called 'teachable moments', patients may be more receptive towards lifestyle advices and more motivated to change their lifestyle. For example during pregnancy, disease diagnoses, abnormal test results or even the corona crisis, patients may suddenly face lifestyle change differently. In this paper, we provide guidelines to healthcare professionals regarding utilization of these situations. General practitioners or specialists can create a potential teachable moment by discussing risk perception, emotions and self-image with the patient. Subsequently, paramedics can encourage patients to change health behaviors by increasing their motivation, self-efficacy and lifestyle-related skills. Recognizing and making optimal use of potential teachable moments can contribute to desired behavior change of patients with relatively little time investment. abstract_id: PUBMED:36521198 A review of the behaviour change techniques used in physical activity promotion or maintenance interventions in pregnant women. Background: The proportion of women meeting the recommended physical activity requirement is low. Evidence suggests behaviour change techniques (BCTs) can be effective in initiating and maintaining behaviour change and improving physical activity. Purpose: To synthesise the evidence related to the attributes of BCT-based physical activity interventions targeted at pregnant women. Methods: A systematic search of studies was made. Randomised controlled trials aiming to improve or maintain physical activity in pregnant women were included. Trials were categorised into 'very promising', 'quite promising', or 'non-promising' according to the intervention effectiveness. One-way analysis of variance was used to determine the difference in mean BCTs implemented in promising/ non-promising studies. Findings: A total of 18,966 studies were identified and 10 studies were included. 'Problem solving', 'social support (unspecified)', 'graded tasks', 'goal setting (behaviour)', 'instruction on how to perform a behaviour', 'self-monitoring of behaviour', 'demonstration of the behaviour', and 'action planning' were rated as promising BCTs. Discussion: Specific types of BCTs might be associated with physical activity promotion or maintenance during pregnancy. 
More high-quality randomised controlled trials investigating the effectiveness of individual or combinations of BCTs on physical activity in pregnant women are needed. abstract_id: PUBMED:26537206 Behaviour change techniques to change the postnatal eating and physical activity behaviours of women who are obese: a qualitative study. Objective: To explore the experiences of postnatal women who are obese [body mass index (BMI) ≥ 30 kg/m(2) ] in relation to making behaviour changes and use of behaviour change techniques (BCTs). Design: Qualitative interview study. Setting: Greater Manchester, UK. Population Or Sample: Women who were 1 year postnatal aged ≥18 years, who had an uncomplicated singleton pregnancy, and an antenatal booking BMI ≥ 30 kg/m(2) . Methods: Eighteen semi-structured, audio-recorded interviews were conducted by a research midwife with women who volunteered to be interviewed 1 year after taking part in a pilot randomised controlled trial. The six stages of thematic analysis were followed to understand the qualitative data. The Behavior Change Technique Taxonomy (version 1) was used to label the behaviour change techniques (BCTs) reported by women. Main Outcome Measures: Themes derived from 1-year postnatal interview transcripts. Results: Two themes were evident: 1. A focused approach to postnatal weight management: women reported making specific changes to their eating and physical activity behaviours, and 2. Need for support: six BCTs were reported as helping women make changes to their eating and physical activity behaviours; three were reported more frequently than others: Self-monitoring of behaviour (2.3), Prompts/cues (7.1) and Social support (unspecified; 3.1). All of the BCTs required support from others for their delivery; food diaries were the most popular delivery method. Conclusion: Behaviour change techniques are useful to postnatal women who are obese, and have the potential to improve their physical and mental wellbeing. Midwives and obstetricians should be aware of such techniques, to encourage positive changes. abstract_id: PUBMED:26209211 Efficacy of physical activity interventions in post-natal populations: systematic review, meta-analysis and content coding of behaviour change techniques. This systematic review and meta-analysis reports the efficacy of post-natal physical activity change interventions with content coding of behaviour change techniques (BCTs). Electronic databases (MEDLINE, CINAHL and PsychINFO) were searched for interventions published from January 1980 to July 2013. Inclusion criteria were: (i) interventions including ≥1 BCT designed to change physical activity behaviour, (ii) studies reporting ≥1 physical activity outcome, (iii) interventions commencing later than four weeks after childbirth and (iv) studies including participants who had given birth within the last year. Controlled trials were included in the meta-analysis. Interventions were coded using the 40-item Coventry, Aberdeen & London - Refined (CALO-RE) taxonomy of BCTs and study quality assessment was conducted using Cochrane criteria. Twenty studies were included in the review (meta-analysis: n = 14). Seven were interventions conducted with healthy inactive post-natal women. Nine were post-natal weight management studies. Two studies included women with post-natal depression. Two studies focused on improving general well-being. Studies in healthy populations but not for weight management successfully changed physical activity. 
Interventions increased frequency but not volume of physical activity or walking behaviour. Efficacious interventions always included the BCTs 'goal setting (behaviour)' and 'prompt self-monitoring of behaviour'. abstract_id: PUBMED:32469872 The effectiveness of smoking cessation, alcohol reduction, diet and physical activity interventions in changing behaviours during pregnancy: A systematic review of systematic reviews. Background: Pregnancy is a teachable moment for behaviour change. Multiple guidelines target pregnant women for behavioural intervention. This systematic review of systematic reviews reports the effectiveness of interventions delivered during pregnancy on changing women's behaviour across multiple behavioural domains. Methods: Fourteen databases were searched for systematic reviews published from 2008, reporting interventions delivered during pregnancy targeting smoking, alcohol, diet or physical activity as outcomes. Data on behaviour change related to these behaviours are reported here. Quality was assessed using the JBI critical appraisal tool for umbrella reviews. Consistency in intervention effectiveness and gaps in the evidence-base are described. Results: Searches identified 24,388 results; 109 were systematic reviews of behaviour change interventions delivered in pregnancy, and 36 reported behavioural outcomes. All smoking and alcohol reviews identified reported maternal behaviours as outcomes (n = 16 and 4 respectively), whereas only 16 out of 89 diet and/or physical activity reviews reported these behaviours. Most reviews were high quality (67%) and interventions were predominantly set in high-income countries. Overall, there was consistent evidence for improving healthy diet behaviours related to increasing fruit and vegetable consumption and decreasing carbohydrate intake, and fairly consistent evidence for increase in some measures of physical activity (METs and VO2 max) and for reductions in fat intake and smoking during pregnancy. There was a lack of consistent evidence across reviews reporting energy, protein, fibre, or micronutrient intakes; smoking cessation, abstinence or relapse; any alcohol behaviours. Conclusions: The most consistent review evidence is for interventions improving dietary behaviours during pregnancy compared with other behaviours, although the majority of diet reviews prioritised reporting health-related outcomes over behavioural outcomes. Heterogeneity between reported behaviour outcomes limits ability to pool data in meta-analysis and more consistent reporting is needed. Limited data are available for alcohol interventions in pregnancy or interventions in low- or middle-income-countries, which are priority areas for future research. Answer: Pregnancy has been proposed as a potential 'teachable moment' for health behavior change, particularly regarding diet and physical activity, due to increased motivation towards health and regular contact with healthcare professionals (PUBMED:27287546). However, the evidence on whether pregnancy acts as a teachable moment for behavior change is mixed. Some studies suggest that pregnancy can indeed be a time when women are more receptive to changing their health behaviors. A cross-sectional analysis found that 56% of pregnant women experienced a teachable moment based on intentions to change their health behavior, with changed self-concept, positive affect, and negative affect being associated with higher intentions to change health behavior (PUBMED:38378517). 
This indicates that pregnancy may provide an important window of opportunity for healthcare professionals to promote health behaviors. On the other hand, qualitative research indicates that for women who conceived without difficulty, decisions about diet and physical activity during pregnancy were often made immediately upon discovering their pregnancy and were based on automatic judgments, physical sensations, and perceptions of what is normal or 'good' for pregnancy, rather than on reasoned assessments of benefits (PUBMED:27287546). This suggests that the necessary conditions for creating a teachable moment may not always be present, as health professionals and other credible sources appear to exert less influence on these women's behavior. Theoretical models such as the COM-B model and Teachable Moments (TM) model have been used to understand behavior change during pregnancy. While all sub-themes identified in a thematic synthesis mapped to the COM-B model constructs, the TM model failed to incorporate some factors, such as practical and environmental factors, social influences, and physical pregnancy symptoms (PUBMED:34993005). Neither model fully accounted for the changeable salience of influencing factors throughout the pregnancy experience. A critical health psychology perspective suggests that the concept of a teachable moment should be examined within a wider framing of contemporary parenting ideologies, offering a more nuanced and contextual consideration of pregnancy and the transition to motherhood (PUBMED:35672937). In summary, while pregnancy may be a teachable moment for some women, particularly those who experience a change in self-concept or affective impact, it is not universally so. The influence of automatic judgments, physical sensations, and social perceptions, as well as the lack of a pregnancy-specific behavior change model, suggest that the effectiveness of interventions may vary.
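Several of the abstracts above (for example, PUBMED:38378517) report multivariable regressions of behaviour-change intentions on psychosocial factors, and of health behaviours on those intentions. The sketch below shows, in Python with statsmodels, how models of that general form could be specified; it is a minimal illustration under assumed data, not a reproduction of the published analysis, and the file name and column names (intention, self_concept, pos_affect, neg_affect, perceived_risk, meets_pa_guideline) are hypothetical placeholders rather than variables from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per participant; column names are
# illustrative stand-ins for the constructs named in the abstracts.
df = pd.read_csv("pregnancy_survey.csv")  # assumed file, not from the study

# Linear model: intention to change behaviour regressed on psychosocial factors
# (the kind of model behind reported betas for self-concept, affect and risk).
intention_model = smf.ols(
    "intention ~ self_concept + pos_affect + neg_affect + perceived_risk + age",
    data=df,
).fit()
print(intention_model.summary())  # coefficients with 95% confidence intervals

# Logistic model: meeting a physical-activity guideline regressed on intention
# (the kind of model behind a reported odds ratio such as OR = 2.88).
pa_model = smf.logit("meets_pa_guideline ~ intention + age", data=df).fit()
print(np.exp(pa_model.params).round(2))  # exponentiated coefficients = odds ratios
```

Exponentiating the logistic coefficients is what yields odds ratios of the kind quoted in the abstract; the confidence intervals reported there would come from the corresponding model output.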
Instruction: Cardiac resynchronization therapy: an option for inotrope-supported patients with end-stage heart failure? Abstracts: abstract_id: PUBMED:21111981 Role of cardiac resynchronization in end-stage heart failure patients requiring inotrope therapy. Background: Outcomes among inotrope-treated heart failure (HF) patients receiving cardiac resynchronization therapy (CRT) have not been well characterized, particularly in those requiring intravenous inotropes at the time of implant. Methods: We analyzed 759 consecutive CRT-defibrillator recipients who were categorized as never on inotropes (NI; n = 585), weaned from inotropes before implant (PI; n = 124), or on inotropes at implant (II; n = 50). Survival free from heart transplant or ventricular assist device and overall survival were compared using the Social Security Death Index. A patient cohort who underwent unsuccessful CRT implantation and received a standard defibrillator (SD; n = 94) comprised a comparison group. Propensity score analysis was used to control for intergroup baseline differences. Results: Compared with the other cohorts, II patients had more comorbidities. Both survival endpoints differed significantly (P < .001) among the 4 cohorts; II patients demonstrated shorter survival than NI patients, with the PI and SD groups having intermediate survival. After adjusting for propensity scores, overall differences and patterns in survival endpoints persisted (P < .01), but the only statistically significant pairwise difference was overall survival between the NI and II groups at 12 months (hazard ratio 2.95, 95% confidence interval 1.05-8.35). CRT recipients ever on inotropes (PI and II) and SD patients ever requiring inotropes (n = 17) experienced similar survival endpoints. Among II patients, predictors of hospital discharge free from inotropes after CRT included male gender, older age, and ability to tolerate β-blockade. Conclusions: Inotrope-dependent HF patients show significantly worse survival despite CRT than inotrope-naïve patients, in part because of more comorbid conditions at baseline. CRT may not provide a survival advantage over a standard defibrillator among patients who have received inotropes before CRT. Weaning from inotropes and initiating neurohormonal antagonists before CRT should be an important goal among inotrope-dependent HF patients. abstract_id: PUBMED:33447714 Upgrade of cardiac resynchronization therapy by utilizing additional His-bundle pacing in patients with inotrope-dependent end-stage heart failure: a case series. Background: His-bundle pacing (HBP) alone may become an alternative to conventional cardiac resynchronization therapy (CRT) utilizing right ventricular apical (RVA) and left ventricular (LV) pacing (BiVRVA+LV) in selected patients, but the effects of CRT utilizing HBP and LV pacing (BiVHB+LV) on cardiac resynchronization and heart failure (HF) are unclear. Case Summary: We presented two patients with inotrope-dependent end-stage HF in whom the upgrade from conventional BiVRVA+LV to BiVHB+LV pacing by the addition of a lead for HBP improved their HF status. Patient 1 was a 32-year-old man with lamin A/C cardiomyopathy, atrial fibrillation, and complete atrioventricular (AV) block. Patient 2 was a 70-year-old man with ischaemic cardiomyopathy complicated by AV block and worsening of HF resulting from ablation for ventricular tachycardia storm. The HF status of both patients improved dramatically following the upgrade from BiVRVA+LV to BiVHB+LV pacing. 
Discussion: End-stage HF patients suffer from diffuse intraventricular conduction defect not only in the LV but also in the right ventricle (RV). The resulting dyssynchrony may not be sufficiently corrected by conventional BiVRVA+LV pacing or HBP alone. Right ventricular apical pacing itself may also impair RV synchrony. An upgrade to BiVHB+LV pacing could be beneficial in patients who become non-responsive to conventional BiV pacing as the His-Purkinje conduction defect progresses. abstract_id: PUBMED:15701469 Cardiac resynchronization therapy: an option for inotrope-supported patients with end-stage heart failure? Background: Patients with refractory heart failure requiring inotropic support have a very poor prognosis. Cardiac resynchronization therapy (CRT) offers symptomatic and possibly a survival benefit for patients with stable chronic heart failure (CHF) and a prolonged QRS, but its role in the management of end-stage heart failure requiring inotropic support has not been evaluated. Methods: We performed a retrospective observational study of patients undergoing CRT at our institution. Results: We identified 10 patients who required inotropic support for refractory CHF and who underwent CRT while on intravenous inotropic agents. Patients had been in hospital for 30+/-29 days and had received inotropic support for 11+/-6 days prior to CRT. All patients were weaned from inotropic support (2+/-2 days post-CRT) and all patients survived to hospital discharge (12+/-13 days post-CRT). Furosemide dose fell from 160+/-38 mg on admission to 108+/-53 mg on discharge (p<0.01). Serum creatinine fell from 192+/-34 micromol/l prior to CRT to 160+/-37 micromol/l on discharge (p<0.05). Serum sodium was 131+/-4 mmol/l prior to CRT and remained low at 132+/-5 mmol/l on discharge. At short-term follow up (mean 47 days), all patients were alive; mean furosemide dose was 130+/-53 mg (p=0.056 versus pre-CRT). Serum creatinine was 157+/-36 micromol/l and serum sodium had increased to 138+/-6 mmol/l (p<0.05 and p<0.01, respectively, versus pre-CRT). Conclusion: CRT may offer a new therapeutic option for inotrope-supported CHF patients with a prolonged QRS. abstract_id: PUBMED:20819622 Echocardiographic mapping of left ventricular resynchronization during cardiac resynchronization therapy procedures. Background: Cardiac resynchronization therapy (CRT) is an effective electrical therapy for patients with moderate to severe heart failure and cardiac dyssynchrony. This study aimed to investigate the degree of acute left ventricular (LV) resynchronization with biventricular pacing (BVP) at different LV sites and to examine the feasibility of performing transthoracic echocardiography (TTE) to quantify acute LV resynchronization during CRT procedure. Methods: Fourteen patients with NYHA Class III-IV heart failure, LV ejection fraction ≤35%, QRS duration ≥120 ms and septal-lateral delay (SLD) ≥60 ms on tissue Doppler imaging (TDI), underwent CRT implant. TDI was obtained from three apical views during BVP at each accessible LV site and SLD during BVP was derived. Synchronicity gain index (Sg) by SLD was defined as (1 + (SLD at baseline - SLD at BVP)/SLD at baseline). Results: Seventy-two sites were studied. Positive resynchronization (R+, Sg > 1) was found in 42 (58%) sites. R+ was more likely in posterior or lateral than anterior LV sites (66% vs. 36%, P < 0.001). Concordance of empirical LV lead implantation sites and sites with R+ was 50% (7/14).
Conclusions: The degree of acute LV resynchronization by BVP depends on LV lead location and empirical implantation of the LV lead results in only 50% concordance with R+. Performing TTE during CRT implantation is feasible to identify LV sites with positive resynchronization. abstract_id: PUBMED:21344234 Role of imaging in cardiac resynchronization therapy Several multicenter randomized clinical trials have established cardiac resynchronization as a safe and effective way to treat heart failure patients. This is reflected in the Focus Update of the European guidelines that describes a class IA indication in patients with NYHA class II-IV heart failure with LVEF≤35% and QRS≥120 ms (NYHA III/IV) or ≥150 ms (NYHA II). If applied in clinical practice, this patient selection results in ineffective treatment in about one third of patients implanted. Since the pathophysiological basis of the disease, a disorganized electromechanical function in patients with left bundle branch block (LBBB), is amenable to analysis with imaging methods, imaging has always played an important role in patient selection. None of the parameters used proved to be reliable for the prediction of cardiac resynchronization therapy success in the multicenter PROSPECT trial. Following the publication of PROSPECT in 2008, several new studies using echocardiography and cardiac magnetic resonance imaging were published. New publications are evaluated and analyzed in the context of earlier ones. abstract_id: PUBMED:17599447 Cardiac resynchronization therapy in patients with end-stage inotrope-dependent class IV heart failure. Although cardiac resynchronization therapy (CRT) is beneficial in patients with drug-refractory New York Heart Association (NYHA) class III/IV heart failure (HF) and left ventricular (LV) dyssynchrony, CRT efficacy is not well established in patients with more advanced HF on inotropic support. Ten patients (age 55 +/- 13 years) with inotrope-dependent class IV HF (nonischemic [n = 6] and ischemic [n = 4]) received a CRT implantable cardioverter-defibrillator device. QRS duration was 153 +/- 25 ms (left bundle branch block [n = 7], intraventricular conduction delay [n = 2], and QRS <120 ms [n = 1]). The indication for CRT was based on either electrocardiographic criteria (n = 9) or echocardiographic evidence of LV dyssynchrony (n = 1). Intravenous inotropic therapy consisted of dobutamine (n = 6; 4.3 +/- 1.9 microg/kg/min) or milrinone (n = 4; 0.54 +/- 0.19 microg/kg/min) as inpatient (n = 3) or outpatient (n = 7) therapy for 146 +/- 258 days before CRT. One patient required ventilatory support before and during device implantation. All patients were alive at follow-up 1,088 +/- 284 days after CRT. Three patients underwent successful orthotopic cardiac transplantation after 56, 257, and 910 days of CRT. HF improved in 9 patients to NYHA classes II (n = 5) and III (n = 4). Intravenous inotropic therapy was discontinued in 9 of 10 patients after 15 +/- 14 days of CRT. LV volumes decreased (end-diastolic from 226 +/- 78 to 212 +/- 83 ml; p = 0.08; end-systolic from 174 +/- 65 to 150 +/- 78 ml; p <0.01). LV ejection fraction increased (23.5 +/- 4.3% to 32.0 +/- 9.1%; p <0.05). No implantable cardioverter-defibrillator shocks were recorded, and antitachycardia therapy for ventricular tachyarrhythmias was delivered in 1 patient. In conclusion, patients with end-stage inotrope-dependent NYHA class IV HF and LV dyssynchrony may respond favorably to CRT with long-term clinical benefit and improved LV function.
abstract_id: PUBMED:22681865 Comparison of outcomes for patients with nonischemic cardiomyopathy taking intravenous inotropes versus those weaned from or never taking inotropes at cardiac resynchronization therapy. Mixed cohorts of patients with ischemic and nonischemic end-stage heart failure (HF) with a QRS duration of ≥120 ms and requiring intravenous inotropes do not appear to benefit from cardiac resynchronization therapy (CRT). However, CRT does provide greater benefit to patients with nonischemic cardiomyopathy and might, therefore, be able to reverse the HF syndrome in such patients who are inotrope dependent. To address this question, 226 patients with nonischemic cardiomyopathy who received a CRT-defibrillator and who had a left ventricular ejection fraction of ≤35% and QRS of ≥120 ms were followed up for the outcomes of death, transplantation, and ventricular assist device placement. Follow-up echocardiograms were performed in patients with ≥6 months of transplant- and ventricular assist device-free survival after CRT. The patients were divided into 3 groups: (1) never took inotropes (n = 180), (2) weaned from inotropes before CRT (n = 30), and (3) dependent on inotropes at CRT implantation (n = 16). At 47 ± 30 months of follow-up, the patients who had never taken inotropes had had the longest transplant- and ventricular assist device-free survival. The inotrope-dependent patients had the worst outcomes, and the patients weaned from inotropes experienced intermediate outcomes (p <0.0001). Reverse remodeling and left ventricular ejection fraction improvement followed a similar pattern. Among the patients weaned from and dependent on inotropes, a central venous pressure <10 mm Hg on right heart catheterization before CRT was predictive of greater left ventricular functional improvement, more profound reverse remodeling, and longer survival free of transplantation or ventricular assist device placement. In conclusion, inotrope therapy before CRT is an important marker of adverse outcomes after implantation in patients with nonischemic cardiomyopathy, with inotrope dependence denoting irreversible end-stage HF unresponsive to CRT. abstract_id: PUBMED:35587165 Renal denervation in patients who do not respond to cardiac resynchronization therapy. Cardiac resynchronization therapy (CRT) reduces the morbidity and mortality in advanced heart failure (HF) in about two-thirds of the patients. Approximately one-third of the patients do not respond to CRT. Overactivity of the sympathetic nervous system is associated with advanced HF and deteriorates the hemodynamic state. We tested the hypothesis that controlling sympathetic overactivity by renal denervation (RDN) could be beneficial in nonresponders to CRT. In our HeartF-RDN study (ClinicalTrials.gov NCT02638324), RDN could not reverse the progression of HF in subjects with New York Heart Association (NYHA) class III-IV symptoms. abstract_id: PUBMED:21238890 Heart failure patients selection for cardiac resynchronization therapy. Cardiac resynchronization therapy (CRT) is an established treatment for refractory chronic heart failure (CHF) patients with ventricular dyssynchrony. The patient selection for this therapy remains the basis for response improvement. Various parameters, methods and technologies for the identification of appropriate patients are under research.
The influences of age and gender, disease progress stage such as mild and late stage CHF including right ventricular dysfunction, dyssynchrony and scar identified by imaging techniques like echocardiography, magnetic resonance and nuclear imaging, and atrial fibrillation on CRT benefits were respectively discussed. This review summarizes the current advancement in these areas. abstract_id: PUBMED:27525239 Cardiac Resynchronization Therapy Defibrillator Treatment in a Child with Heart Failure and Ventricular Arrhythmia. Cardiac resynchronization therapy (CRT) is a new treatment for refractory heart failure. However, most patients with heart failure treated with CRT are adults, middle-aged or older with idiopathic or ischemic dilated cardiomyopathy. We treated a 12-year-old boy, who was transferred after cardiac arrest, with dilated cardiomyopathy, left bundle-branch block, and ventricular tachycardia. We performed cardiac resynchronization therapy with a defibrillator (CRT-D). After CRT-D, left ventricular ejection fraction improved from 22% to 44% assessed by echocardiogram 1 year postoperatively. On electrocardiogram, QRS duration was shortened from 206 to 144 ms. The patient's clinical symptoms also improved. For pediatric patients with refractory heart failure and ventricular arrhythmia, CRT-D could be indicated as an effective therapeutic option. Answer: Cardiac resynchronization therapy (CRT) has been explored as a treatment option for patients with end-stage heart failure who are supported by inotropes. Studies have shown varying outcomes for this patient population when treated with CRT. One study found that inotrope-dependent heart failure patients showed significantly worse survival despite CRT compared to inotrope-naïve patients, partly due to more comorbid conditions at baseline. CRT did not provide a survival advantage over a standard defibrillator among patients who had received inotropes before CRT. The study suggested that weaning from inotropes and initiating neurohormonal antagonists before CRT should be an important goal among inotrope-dependent heart failure patients (PUBMED:21111981). Another study presented two patients with inotrope-dependent end-stage heart failure who experienced dramatic improvement in their heart failure status after upgrading from conventional CRT to CRT utilizing His-bundle pacing and left ventricular pacing (BiVHB+LV). This suggests that an upgrade to BiVHB+LV pacing could be beneficial in patients who become non-responsive to conventional CRT as the His-Purkinje conduction defect progresses (PUBMED:33447714). A retrospective observational study identified 10 patients who required inotropic support for refractory heart failure and underwent CRT while on intravenous inotropic agents. All patients were weaned from inotropic support post-CRT and survived to hospital discharge, indicating that CRT may offer a new therapeutic option for inotrope-supported patients with a prolonged QRS (PUBMED:15701469). However, another study concluded that inotrope therapy before CRT is an important marker of adverse outcomes after implantation in patients with nonischemic cardiomyopathy, with inotrope dependence denoting irreversible end-stage heart failure unresponsive to CRT (PUBMED:22681865). In summary, while CRT may offer benefits to certain inotrope-supported patients with end-stage heart failure, the overall prognosis for these patients remains guarded, and the therapy may not provide a survival advantage in all cases. 
The decision to use CRT in this population should be individualized, taking into account the patient's specific clinical situation and the presence of comorbidities.
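The survival comparisons summarized in this answer (never on inotropes, weaned before implant, inotrope-dependent at implant) were made in the abstracts with Kaplan-Meier curves, log-rank tests, and adjusted Cox models. The following Python sketch, using the lifelines library, illustrates that general workflow under assumed data; the file name and column names (months, event, inotrope_group, inotrope_dependent, ischemic, lvef) are hypothetical placeholders and the code is not a reconstruction of any of the cited analyses.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical follow-up data: time to death/transplant/VAD in months ("months"),
# an event indicator ("event", 1 = event occurred), and a per-patient
# inotrope-exposure label ("never", "weaned", "dependent"). Columns are illustrative.
df = pd.read_csv("crt_followup.csv")  # assumed file

# Kaplan-Meier estimate per inotrope group, as in the cohort comparisons above.
kmf = KaplanMeierFitter()
for group, sub in df.groupby("inotrope_group"):
    kmf.fit(sub["months"], event_observed=sub["event"], label=group)
    print(group, "median survival:", kmf.median_survival_time_)

# Log-rank test across the three groups.
result = multivariate_logrank_test(df["months"], df["inotrope_group"], df["event"])
print("log-rank p-value:", result.p_value)

# Cox proportional hazards model adjusting for baseline covariates (coded numerically),
# analogous in spirit to the adjusted hazard ratios reported in the abstracts.
cph = CoxPHFitter()
cph.fit(df[["months", "event", "inotrope_dependent", "ischemic", "age", "lvef"]],
        duration_col="months", event_col="event")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```

In practice the propensity-score adjustment mentioned in PUBMED:21111981 would add a further step (estimating treatment-group propensities and matching or adjusting on them) before fitting the Cox model; the sketch above covers only the core survival comparison.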
Instruction: Are behaviour risk factors for traumatic dental injuries in childhood different between males and females? Abstracts: abstract_id: PUBMED:25793950 Are behaviour risk factors for traumatic dental injuries in childhood different between males and females? Aim: Examination of the risk factors for childhood traumatic dental injuries for male and female patients has been elusive. The present study aimed to examine whether males and females are differentially vulnerable to Traumatic Dental Injuries in relation to emotion regulation, attention deficiency hyperactive disorder symptomatology and behaviour problems. Materials And Methods: An institutional ethical review board approved the case-control study carried out at the Gazi University, Faculty of Dentistry, Turkey. A total of 80 patients with traumatic dental injuries and 80 patients with other dental problems participated in the study. Patients' parents filled in two scales: Conners' Rating Scales-Revised Attention Deficiency Hyperactive Disorder-Index, Oppositional Behavior, Hyperactivity, Anxious-Shy, Social Problems, Inattentive and Hyperactive-Impulsive subscales; and Emotion Regulation Checklist, with two subscales of Emotional Lability and Emotion Regulation. Multiple logistic regression analyses were performed separately for male and female patients. Results: Oppositional behaviour, hyperactivity and social problems were found to be risk factors for male patients. Being anxious/shy was a protective factor for both males and females. Classification accuracy for males and females was calculated to be 79.2% and 85.2%, respectively. Conclusion: Several risk factors for childhood traumatic dental injuries were found to differ for male and female patients. abstract_id: PUBMED:32257006 Risk factors and patterns of traumatic dental injuries among Indian adolescents. Background/purpose: Dental injuries in children have functional, esthetic, and psychological effects, with consequences for the child, parent, and dentist. This study assessed the pattern of traumatic dental injuries and their relationship with predisposing factors among 12- and 15-year-old school children in Kanpur, India. Materials And Methods: A cross-sectional study was conducted on 1100 boys and girls aged 12 or 15 years. Anterior permanent teeth were examined based on the modified Ellis classification. Type of damage, size of incisal overjet, and adequacy of lip coverage were also recorded. Chi-square tests and multiple regression analysis were used for statistical analysis. Results: The prevalence of traumatic dental injuries to anterior teeth was 10.9%. Age and gender distribution indicated that most injuries occurred in the 15-year-old age group (11.3%) and among boys (11.5%). The gender-related difference was statistically significant (p < 0.024). Maxillary central incisors (83.7%) were frequently involved. The predominant injury type was enamel fracture (68.3%) mainly due to falls (52.5%). Increased overjet, inadequate lip coverage, type of school, and gender were significant contributing factors for traumatic dental injuries. Conclusion: The study reveals the frequency and causes of traumatic injuries to anterior teeth, which assists in identifying risk groups and treatment needs in order to establish effective preventive strategies. abstract_id: PUBMED:30207628 Risk factors for traumatic dental injuries in the Brazilian population: A critical review.
Background/aims: Strategies for the prevention of traumatic dental injuries (TDI) should consider the risk factors involved for each population studied. The aim of this study was to perform a critical review regarding the risk factors for TDI in the Brazilian population. Materials And Methods: A systematic literature search was performed in the MEDLINE, Scopus, Web of Science, Lilacs, and BBO databases using MeSH terms, synonyms, and keywords, with no language or date restrictions. In the first step, all relevant studies identified, regardless of the type of statistical analysis performed, were grouped according to their geographic location. In a second step, the studies using Andreasen's criteria to classify the injuries and multivariate analysis to identify the risk factors for TDI in Brazilian subjects were included for data extraction. Results: The search strategy initially identified 3373 articles. However, only 108 articles assessed TDI with predisposing factors and were included in the first step. From those, 28 were deemed eligible for inclusion in the second step. No consensus related to the relationship between gender and TDI in the primary dentition was achieved. Nonetheless, males were found to be more prone to trauma in the permanent dentition. Overjet, inadequate lip sealing and anterior open bite increased the risk for TDI, both in primary and permanent dentitions. Social environment was related to trauma only in primary dentition. For permanent dentition, dental caries, obesity, binge drinking, and drug use were identified as considerable risk factors for TDI. Conclusion: The risk factors for TDI in the Brazilian population are similar to those found worldwide. However, some differences can be observed, such as gender and socioeconomic indicators as predisposing factors. abstract_id: PUBMED:35900466 The risk factors and pattern of traumatic dental injuries in 10-12-year olds in Kano, Nigeria. Background: Traumatic dental injuries (TDIs) rank among the most common conditions in children and adolescents. Nigerian dental trauma data are largely based on studies that were conducted in the southern parts of Nigeria. This study was designed to identify the risk factors and the pattern of TDIs among school-age children in northern Nigeria. Objectives: The objective of the study was to identify the risk factors for and to determine the pattern of dental injuries among 10-12-year-old males in Kano, northern Nigeria. Materials And Methods: Six hundred and ninety-six 10-12-year olds were selected through a multistage sampling of school children, street children and rehabilitated children in Kano and examined for TDIs using the WHO protocols. Data analysis was carried out using SPSS version 20. Statistical significance was considered when P < 0.05. Results: Six hundred and ninety-four 10-12-year olds participated in the study; The prevalence of TDIs was 6.6%. Being a street-child was associated with 30% higher risk for dental injuries (adjusted odds ratio [aOR] = 1.3; 95% confidence interval [CI] = 0.60 - 3.1; P = 0.48), whereas living as a rehabilitated street child (aOR = 0.41; 95% CI = 0.19 - 0.88; P = 0.02) and older age were associated with a reduced risk (aOR = 0.63; 95% CI = 0.39 - 1.01; P = 0.06) to injuries. The most common type of trauma was enamel-dentine injuries or Ellis II, and the most common cause was falls. Street children and low-age groups had more single-tooth injuries (85.7% and 85.0%, respectively). 
The commonly injured teeth were the maxillary right and left central incisors. Conclusion: Living on the street and young age were associated with a higher likelihood of injuries in male adolescents in Kano. The maxillary central incisors were the commonly affected teeth. abstract_id: PUBMED:25032172 Traumatic dental injuries among 12-15-year-old school children in Panchkula. Background: Traumatic dental injury (TDI) in children and adolescents has become one of the most serious dental public health problems. Despite such a high prevalence of dental trauma, very little attention has been paid to TDI, its etiology, and prevention. Objectives: To determine the prevalence of anterior tooth traumatic dental injuries in 12-15-year-old school children of Panchkula district, India, and to find any correlation with the cause, gender, extent of overbite as well as over-jet, and previous treatment. Patients And Methods: A multistage sample of 12-15-year-old school children (n = 810) in Panchkula district, Haryana, was selected. The children were screened using WHO criteria for oral examination and a trained dental surgeon examined the children. Those with clinical TDI were examined further for the type of traumatic injuries using the Ellis classification modified by Holland. Overjet and overbite were recorded. After examination, questions regarding the cause of trauma and its treatment were asked. Data were subjected to statistical analysis using the Chi-square and Mantel-Haenszel tests by SPSS version 20.0. Results: The results showed that out of 810 children, 86 (10.2%) had TDI. Males had a higher prevalence of trauma than females (P < 0.05). The most common cause of trauma was falls (51.11%) followed by sports injuries (41.86%). Enamel-dentin fracture without pulpal involvement was the most common type of trauma and the most frequently involved teeth were the maxillary central incisors. A significant association was observed between overjet and overbite and trauma. Only 3.5% of the children affected with trauma had received treatment. Conclusions: The prevalence of traumatic injuries to permanent incisors in 12-15-year-old Panchkula school children was relatively high. TDI was associated with gender, overjet, and lip competence. There was a great unmet treatment need. abstract_id: PUBMED:28561901 Psychosocial factors and traumatic dental injuries among adolescents. Objectives: To examine the association of traumatic dental injuries (TDI) and psychosocial factors in adolescents and to identify psychological profiles associated with TDI. Methods: A cross-sectional study was conducted involving 531 students aged 13-16 years. Data were collected through oral examination and a structured interview with the adolescents, in conjunction with a questionnaire answered by their mothers. Associations between TDI and independent variables were analysed using a model-based approach, while an exploratory data analysis was applied to identify homogenous clusters of adolescents in relation to their sense of coherence (SoC), perception of parental support and their mothers' SoC. These clusters were examined further for associations with TDI and psychosocial variables. Results: The prevalence of TDI was 15.8%. Adolescents with high TDI prevalence were males, nonfirstborns, or those frequently engaging in physical activity. In addition, both their own SoC and that of their mother were low and they reported low parental support. They were also prone to complaining about the behaviour of their peer group.
The hierarchical cluster analysis (HCA) demonstrated three homogenous clusters. The cluster with the highest scores for all psychological variables included adolescents with low TDI prevalence, low paternal punishment, spacious home environment, high Family Affluence Scale (FAS) score, good school grades, few complaints about schoolmates and higher maternal education. Conclusions: Psychosocial factors appear to influence an adolescent's risk of TDI. High parental support, high own and maternal SoC and a higher socioeconomic status (SES) are typical of adolescents with low TDI experience. abstract_id: PUBMED:29327631 The association between adverse childhood experiences and adult traumatic brain injury/concussion: a scoping review. Background: Adverse childhood experiences are significant risk factors for physical and mental illnesses in adulthood. Traumatic brain injury/concussion is a challenging condition where pre-injury factors may affect recovery. The association between childhood adversity and traumatic brain injury/concussion has not been previously reviewed. The research question addressed is: What is known from the existing literature about the association between adverse childhood experiences and traumatic brain injury/concussion in adults? Methods: All original studies of any type published in English since 2007 on adverse childhood experiences and traumatic brain injury/concussion outcomes were included. The literature search was conducted in multiple electronic databases. Arksey and O'Malley and Levac et al.'s scoping review frameworks were used. Two reviewers independently completed screening and data abstraction. Results: The review yielded six observational studies. Included studies were limited to incarcerated or homeless samples, and individuals at high-risk of or with mental illnesses. Across studies, methods for childhood adversity and traumatic brain injury/concussion assessment were heterogeneous. Discussion: A positive association between adverse childhood experiences and traumatic brain injury occurrence was identified. The review highlights the importance of screening and treatment of adverse childhood experiences. Future research should extend to the general population and implications on injury recovery. Implications for rehabilitation Exposure to adverse childhood experiences is associated with increased risk of traumatic brain injury. Specific types of adverse childhood experiences associated with risk of traumatic brain injury include childhood physical abuse, psychological abuse, household member incarceration, and household member drug abuse. Clinicians and researchers should inquire about adverse childhood experiences in all people with traumatic brain injury as pre-injury health conditions can affect recovery. abstract_id: PUBMED:28965363 Work-related traumatic dental injuries: Prevalence, characteristics and risk factors. Background/aims: The prevalence of work-related oral trauma is underestimated because minor dental injuries are often not reported in patients with several injuries in different parts of the body. In addition, little data are available regarding their characteristics. The aim of this epidemiological study was to determine the prevalence, types, and characteristics of occupational traumatic dental injuries (TDIs) in a large working community. Materials And Methods: Work-related TDIs that occurred during the period between 2011 and 2013 in the District of Genoa (Northwest of Italy, 0.86 million inhabitants) were analyzed. 
Patients' data were obtained from the National Institute for Insurance against Accidents at Work database. Results: During the 2 year period, 112 TDIs (345 traumatized teeth) were recorded. The prevalence was 5.6‰ of the total amount of occupational trauma. The highest prevalence was found in the fourth and fifth decades of life (OR=3.6, P < .001), and males were injured more often than females (70.5% vs 29.5%, OR=2.8, P < .001). Service and office workers represented 52% of the sample, and construction/farm/factory workers and craftsmen were 48%. TDIs involved only teeth and surrounding tissue in 66% of cases, or in combination with another maxillofacial injury in 34%. They were statistically associated with construction/farm/factory workers group (Chi squared P < .01). Crown fracture was recorded in 34.5% of cases, subluxation/luxation in 10.7%, avulsion in 9%, root fracture in 3.8%, and concussion in 3.5%. Thirty-two subjects (28.6%, 133 teeth, OR=4.3, P < .001) presented at least 1 traumatized tooth with previous dental treatment. Among 212 (61.4%) traumatized teeth, 67.5% were upper incisors, 17.5% were lower incisors, 3.3% were upper canines, 1.9% were lower canines, and 9.9% were bicuspids and molars. Conclusions: Work-related TDIs had a low overall prevalence, and fractures were the most frequent dental injury. Age, gender, and preexisting dental treatments represented risk factors for work-related TDIs. abstract_id: PUBMED:37638637 Risk factors associated with traumatic dental injuries in individuals with special healthcare needs-A systematic review and meta-analysis. Background/aim: Individuals with special healthcare needs (SHCN) are more likely to sustain traumatic dental injuries (TDIs) due to distinct risk factors. The aim of this review was to assess various risk factors associated with TDIs in individuals with SHCN. Materials And Methods: The protocol was designed according to the recommendations of the Cochrane-handbook, Joanna Briggs Institute, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and registered in PROSPERO (CRD42022357422). A comprehensive search was performed in PubMed, LILACS, Web of Science, EMBASE and Scopus using a pre-defined strategy without any limitation of language and year of publication. It was last updated on 25 April 2023. Studies addressing the TDIs in individuals with SHCN were included. Data extraction and analyses were performed, risk of bias (ROB) assessment was done using the Joanna Briggs Institute's critical appraisal tool, and a meta-analysis was performed using random-effects model. Results: A total of 21 studies were included in the review. They were categorized according to the target disease/condition: cerebral palsy (n = 5), ADHD and autism spectrum disorders (n = 5), visually impaired (n = 4), and multiple disorders (n = 7). The studies showed variability in the design and methods; however, 17 out of 21 studies showed moderate to low ROB. Increased overjet and lip incompetence were the main risk factors reported in the studies. The commonest injuries were observed to be enamel and enamel and dentine fractures. Conclusion: The overall pooled prevalence of TDI in individuals with special healthcare needs was 23.16% with 20.98% in males and 27.06% in females. Overjet >3 mm and inadequate lip coverage were found to be associated with a higher risk of TDI in all the categories of individuals with special healthcare needs except ADHD and ASD. 
Falls at home in cerebral palsy, falls while walking and self-harm in ADHD and ASD, falls at home and collision in visual impairment, and unspecified falls in multiple disorders could be identified as the most common causes of TDI. abstract_id: PUBMED:35528492 Prevalence, Etiology, and Risk Factors of Traumatic Dental Injuries in Children with Special Needs of Puducherry. Background/aim: Dental trauma draws attention in special children due to its risks and consequences. The aim of this study was to estimate the prevalence, etiology, and risk factors of traumatic dental injuries in children with special healthcare needs of Puducherry. Materials And Methods: A cross-sectional survey among 121 special children aged 4-18 years was carried out in Puducherry. The children were examined for the presence of traumatic dental injuries (TDI) and associated risk factors for the occurrence of trauma and classified according to the WHO epidemiological field survey classification. Results: Prevalence of TDI was 40.5%. The majority of them were restricted to enamel fractures and the most commonly affected were permanent maxillary central incisors. The most common cause of injury was "falling over" and home was the most frequent place of injury. The risk factors associated with TDI were mesoprosopic facial form, convex facial profile, increased overjet, Angle's Class II molar relationship, posterior facial divergence, and incompetent lips, all of which were statistically significant. Conclusion: The study concludes that TDI is more prevalent in CSHCN; it is therefore necessary to create awareness, provide health education, and conduct periodic screening for efficient treatment. How To Cite This Article: Martin AG, GS Prathima, Sanguida A, et al. Prevalence, Etiology, and Risk Factors of Traumatic Dental Injuries in Children with Special Needs of Puducherry. Int J Clin Pediatr Dent 2022;15(1):104-108. Answer: Yes, behavior risk factors for traumatic dental injuries (TDIs) in childhood appear to differ between males and females. A study conducted at Gazi University, Faculty of Dentistry, Turkey, found that oppositional behavior, hyperactivity, and social problems were risk factors for male patients, while being anxious/shy was a protective factor for both males and females. The classification accuracy for identifying these risk factors was 79.2% for males and 85.2% for females, indicating that there are indeed differences in the risk factors for TDIs between genders (PUBMED:25793950).
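The Gazi University study behind this answer fitted multiple logistic regression models separately for boys and girls and reported classification accuracy for each. The short Python sketch below illustrates what such a sex-stratified logistic regression looks like with statsmodels; the data file and predictor names (oppositional, hyperactivity, social_problems, anxious_shy, emotional_lability) are hypothetical placeholders chosen to echo the scales named in the abstract, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical case-control data: tdi = 1 for trauma cases, 0 for controls,
# plus behaviour-scale scores and a "sex" column. All names are illustrative.
df = pd.read_csv("dental_trauma_cases_controls.csv")  # assumed file
predictors = ["oppositional", "hyperactivity", "social_problems",
              "anxious_shy", "emotional_lability"]

# One logistic regression per sex, mirroring the sex-stratified analysis.
for sex, sub in df.groupby("sex"):
    X = sm.add_constant(sub[predictors])
    model = sm.Logit(sub["tdi"], X).fit(disp=False)
    odds_ratios = np.exp(model.params)             # OR > 1 suggests a risk factor
    accuracy = ((model.predict(X) >= 0.5) == sub["tdi"]).mean()
    print(sex, "classification accuracy:", round(accuracy, 3))
    print(odds_ratios.round(2))
```

A protective factor such as anxious/shy behaviour would show up here as an odds ratio below 1, while the classification-accuracy figures quoted in the abstract correspond to the in-sample accuracy computed in the loop.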
Instruction: Is Hemoglobin Level in Patients with Nasopharyngeal Carcinoma Still a Significant Prognostic Factor in the Era of Intensity-Modulated Radiotherapy Technology? Abstracts: abstract_id: PUBMED:26313452 Is Hemoglobin Level in Patients with Nasopharyngeal Carcinoma Still a Significant Prognostic Factor in the Era of Intensity-Modulated Radiotherapy Technology? Background: Hemoglobin (Hb) levels are regarded as an important determinant of outcome in a number of cancers treated with radiotherapy. However, for patients treated with intensity modulated radiotherapy (IMRT), information regarding the prognostic value of hemoglobin level is scarce. Patients And Methods: A total of 650 patients with nasopharyngeal carcinoma (NPC), enrolled between May, 2005, and November, 2012, were included in this study. The prognostic significance of hemoglobin level (anemia or no-anemia) at three different time points was investigated, including before treatment, during treatment and at the last week of treatment. Univariate and multivariate analyses were conducted using the log-rank test and the Cox proportional hazards model, respectively. Results: The 5-year OS (overall survival) rates of patients who were anemic and non-anemic before treatment were 89.1% and 80.7% (P = 0.01), respectively. The 5-year DMFS (distant metastasis-free survival) rates of patients who were anemic and non-anemic before treatment were 88.9% and 78.2% (P = 0.01), respectively. The 5-year OS rates of patients who were anemic and non-anemic during treatment were 91.7% and 83.3% (P = 0.004). According to multivariate analysis, the pre-treatment Hb level predicted a decreased DMFS (P = 0.007, HR = 2.555, 95% CI 1.294-5.046). In addition, the mid-treatment Hb level predicted a decreased OS (P = 0.013, HR = 2.333, 95% CI 1.199-4.541). Conclusions: Hemoglobin level is a useful prognostic factor in NPC patients receiving IMRT. It is important to control the level of hemoglobin both before and during chemoradiotherapy. abstract_id: PUBMED:30620231 Clinical outcome and prognostic analysis of young adults nasopharyngeal carcinoma patients of a nonendemic area in intensity-modulated radiotherapy era. Aim: To investigate the clinical outcome and prognostic factors of young adult nasopharyngeal carcinoma (NPC) patients in the era of intensity-modulated radiotherapy. Methods: We retrospectively analyzed the clinical outcome and the prognostic factors of young adult NPC patients who were admitted to our hospital from January 2010 to December 2013. A Cox regression model was used to identify factors associated with survival. The acute and late toxicities were also evaluated. Results: A total of 165 patients were included; the median follow-up time for all the patients was 65 months (4-96 months). The 5-year overall survival (OS), distant metastasis-free survival, progression-free survival and local-regional recurrence-free survival were 85.9, 82.4, 76.4 and 92.4%, respectively. N stage was an independent prognostic factor for OS (p = 0.009) and distant metastasis-free survival (p = 0.008). Cumulative cisplatin >200 mg/m2 was an independent prognostic factor for OS (p = 0.032). Conclusion: Young adults with NPC can achieve reasonable local-regional control and OS in the era of intensity-modulated radiotherapy with tolerable toxicities. abstract_id: PUBMED:24183063 Prognostic value of parapharyngeal extension in nasopharyngeal carcinoma treated with intensity modulated radiotherapy.
Background And Purpose: The development of improved diagnostic and therapeutic techniques has revolutionized the management of nasopharyngeal carcinoma (NPC). The purpose of this study is to re-evaluate the prognostic value of parapharyngeal extension in NPC in the IMRT era. Material And Methods: We retrospectively reviewed data from 749 biopsy-proven non-metastatic NPC patients. All patients were examined with magnetic resonance imaging (MRI) and received intensity-modulated radiotherapy (IMRT) as the primary treatment. Results: The incidence of parapharyngeal extension was 72.1%. A significant difference was observed in the disease-free survival (DFS; 70.3% vs. 89.1%, P<0.001), distant metastasis-free survival (DMFS; 79.3% vs. 92.0%, P<0.001), and local relapse-free survival (LRFS; 92.8% vs. 99.0%, P=0.002) of patients with and without parapharyngeal extension. Parapharyngeal extension was an independent prognostic factor for DFS and DMFS in multivariate analysis (P=0.001 and P=0.015, respectively), but not LRFS. The difference in DMFS between patients with and without parapharyngeal space extension was statistically significant in patients with cervical lymph node metastasis (P<0.001). Conclusions: In the IMRT era, parapharyngeal extension remains a poor prognosticator for DMFS in NPC, especially in patients with positive lymph node metastasis. Additional therapeutic improvements are required to achieve favorable distant control in NPC with parapharyngeal extension. abstract_id: PUBMED:30147337 Prognostic value of nutritional markers in nasopharyngeal carcinoma patients receiving intensity-modulated radiotherapy: a propensity score matching study. Purpose: To investigate the prognostic value of nutritional markers for survival in nasopharyngeal carcinoma (NPC) patients receiving intensity-modulated radiotherapy (IMRT), with or without chemotherapy. Patients And Methods: This retrospective study included 412 NPC patients who received IMRT-based treatment. Weight loss (WL) during treatment, hemoglobin level (Hb) and serum albumin level (Alb) before treatment were measured. The prognostic values of these markers for overall survival (OS), locoregional recurrence-free survival (LRFS) and distant metastasis-free survival (DMFS) were analyzed using the Kaplan-Meier method and Cox proportional hazards regression analysis. Propensity score matching was performed to reduce the effect of confounders. Results: WL, Hb and Alb were significantly correlated with each other and with inflammatory markers. Adjusted Cox regression analysis showed that critical weight loss (CWL) (WL≥5%) was an independent prognostic factor for OS (HR: 2.399, 95% CI: 1.267-4.540, P=0.007) and LRFS (HR: 2.041, 95% CI: 1.052-3.960, P=0.035), while low pretreatment Hb was independently associated with poor DMFS (HR: 2.031, 95% CI: 1.144-3.606, P=0.016). However, no significant correlation was found between Alb and survival in our study cohort. The prognostic value of these markers was further confirmed in the propensity-matched analysis. Conclusion: CWL, Hb and Alb have a significant impact on survival in NPC patients undergoing IMRT. They can be utilized in combination with the conventional staging system to predict the prognosis of NPC patients treated with IMRT. abstract_id: PUBMED:35116818 The maximum diameter of the cervical lymph node was not a prognostic factor for local-regional advanced nasopharyngeal carcinoma treated with intensity-modulated radiotherapy. Background: Cervical lymph node metastasis is an important prognostic factor.
However, the prognostic significance of the maximum diameter of cervical lymph nodes before treatment has always been controversial. The aim of this study was to analyze the relationship between treatment outcomes and the maximum diameter of lymph nodes (Dmax) in loco-regional advanced nasopharyngeal carcinoma (NPC) after intensity-modulated radiotherapy. Methods: From Jan. 2012 to Dec. 2017, 163 patients with locally advanced NPC treated with intensity-modulated radiotherapy were retrospectively analyzed. The T-stage distribution was 6.7% in T1, 23.3% in T2, 38.7% in T3, and 31.3% in T4. The N-classifications were 6.1% in N0, 23.3% in N1, 47.9% in N2, and 22.7% in N3. TNM stages were III in 51.5% and IVa in 48.5%. All patients received intensity-modulated radiotherapy to the nasopharynx and neck. The dose was 66-70.4 Gy, 2-2.2 Gy per fraction over 6-7 weeks to the primary tumor and lymph nodes and 54-60 Gy to clinical target volumes (CTVs). One hundred fifty patients received induction chemotherapy and/or concurrent chemotherapy. The maximum diameter of the lymph node was measured on the axial or coronal MRI image. Results: The median follow-up time was 31 months (range, 6.1-79.3 months). Six cases developed neck recurrence and 9 cases developed nasopharynx recurrence. The lymph node diameter was 0-12 cm (median 2.9 cm). The three-year overall survival (OS) rate was 77.8%. The three-year local failure-free rate (L-FFR), distant failure-free rate (D-FFR) and disease-free survival (DFS) rate were 88.1%, 77.6% and 63.9%, respectively. Multivariate analysis showed that Dmax was not a prognostic factor for OS, L-FFR, D-FFR, or DFS. Both uni- and multivariate analyses demonstrated that N-classification and age were significant prognostic factors for OS, while the maximum diameter of lymph nodes, T-classification, N-classification and AJCC classification were significant prognostic factors for OS in univariate analyses only in local-regional advanced NPC. Conclusions: The maximum diameter of the lymph nodes was not a prognostic factor for local-regional advanced NPC treated with intensity-modulated radiotherapy. abstract_id: PUBMED:33079493 Failure patterns and prognostic factors for cervical node-negative nasopharyngeal carcinoma in the intensity-modulated radiotherapy era. Background: To evaluate the failure patterns and prognostic factors in patients with cervical node-negative nasopharyngeal carcinoma (NPC) in the intensity-modulated radiotherapy (IMRT) era. Methods: Patients with cervical node-negative NPC treated with IMRT at the Sun Yat-sen University Cancer Center between February 2001 and December 2008 were retrospectively reviewed. The failure patterns, prognostic factors, and efficacy of additional chemotherapy were assessed. Results: The median follow-up time was 78 months for 298 patients. The 5-year local recurrence-free survival (LRFS), nodal recurrence-free survival (NRFS), distant metastasis-free survival (DMFS), failure-free survival (FFS), and overall survival (OS) were 95.2%, 99.3%, 94.8%, 89.8%, and 92.9%, respectively. The rate of treatment failure remained high in patients with T4 disease (35.4%, 17/48), including eight cases of local recurrence, two of nodal recurrence, and seven of distant metastasis. Multivariate analyses showed that the primary gross tumor volume (GTVp) was significantly associated with LRFS, DMFS, FFS, and OS.
Subgroup analysis revealed that patients with GTVp ≤ 42.5 cc had better 5-year LRFS (98.7% vs 81.4%, P < .001), 5-year DMFS (97.8% vs 82.5%, P < .001), 5-year FFS (96.1% vs 65.4%, P < .001), and 5-year OS (96.6% vs 78.2%, P < .001) than patients with GTVp > 42.5 cc. However, additional chemotherapy showed no significant survival benefit in stratification analysis. Conclusions: Cervical node-negative NPC has a good prognosis in the IMRT era, and the primary tumor volume is the most important prognostic factor. Further exploration is needed to determine the optimal treatment strategy for patients with a high tumor burden. abstract_id: PUBMED:28937797 Modified-Nutrition Index is a Significant Prognostic Factor for the Overall Survival of the Nasopharyngeal Carcinoma Patients who Undergo Intensity-modulated Radiotherapy. Purpose: To explore whether the modified-nutrition index (m-NI) is a prognostic factor for the overall survival (OS) in nasopharyngeal carcinoma (NPC) patients who undergo intensity-modulated radiotherapy (IMRT). Methods: Clinical data were prospectively collected from NPC patients who underwent IMRT at our hospital between October 2008 and December 2014. The patient nutritional status before radiotherapy was evaluated using the m-NI, based on eight nutrition indicators including body mass index, arm muscle circumference, albumin, total lymphocyte count, red blood cell count, hemoglobin, serum pre-albumin, and transferrin. The independent prognostic value of m-NI for the OS was evaluated. Results: A total of 323 patients (229 males, 94 females) were included in this study, and the follow-up rate was 99.7% (322/323). The 1-, 3-, and 5-yr OS rates between malnutrition and normal nutrition groups by using the m-NI were 93.0% vs. 96.9%, 76.4% vs. 82.8%, and 61.8% vs. 77.1%, respectively. A regression analysis showed that the m-NI was the significant prognostic value for the OS in NPC. Conclusions: The m-NI before radiotherapy is a significant prognostic factor for the OS in NPC patients. Further validation of our instrument is needed in other NPC patients. abstract_id: PUBMED:33783535 Outcomes of patients with nasopharyngeal carcinoma treated with intensity-modulated radiotherapy. Nasopharyngeal cancer shows a good response to intensity-modulated radiotherapy. However, there is no clear evidence for the benefits of routine use of image-guided radiotherapy. The purpose of this study was to perform a retrospective investigation of the treatment outcomes, treatment-related complications and prognostic factors for nasopharyngeal cancer treated with intensity-modulated radiotherapy and image-guided radiotherapy techniques. Retrospective analysis was performed on 326 consecutive nasopharyngeal cancer patients treated between 2004 and 2015. Potentially significant patient-related and treatment-related variables were analyzed. Radiation-related complications were recorded. The 5-year overall survival and disease-free survival rates of these patients were 77.9% and 70.5%, respectively. Age, AJCC (American Joint Committee on Cancer) stage, retropharyngeal lymphadenopathy, treatment interruption and body mass index were independent prognostic factors for overall survival. Age, AJCC stage, retropharyngeal lymphadenopathy, image-guided radiotherapy and body mass index were independent prognostic factors for disease-free survival. In conclusion, intensity-modulated radiotherapy significantly improves the treatment outcomes of nasopharyngeal cancer. 
With the aid of image-guided radiotherapy, the advantage of intensity-modulated radiotherapy might be further amplified. abstract_id: PUBMED:35116319 Prognostic value of hepatitis B viral infection in patients with nasopharyngeal carcinoma in the intensity-modulated radiotherapy era. Background: Whether hepatitis B virus (HBV) infection poses a risk to patients with nasopharyngeal carcinoma (NPC) in the intensity-modulated radiotherapy (IMRT) era remains unclear. Methods: 953 patients with non-metastatic, newly diagnosed NPC who underwent serologic hepatitis B surface antigen (HBsAg) testing and were treated with IMRT were retrospectively reviewed. 171 patients had HBV infection (HBsAg seropositive). The propensity score matching method (PSM) and stabilized inverse probability of treatment weighting (IPTW) were used to address confounding. The survival rates were evaluated by Kaplan-Meier analysis and the survival curves were compared by the log-rank test. Prognostic factors were explored by multivariate analysis. Results: No significant survival differences were observed between the HBsAg-negative group and the HBsAg-positive group [5-year overall survival (OS), 87.7% vs. 83.9%, P=0.181; locoregional recurrence-free survival (LRFS), 83.5% vs. 78.3%, P=0.109; distant metastasis-free survival (DMFS), 80.2% vs. 77.9%, P=0.446; progression-free survival (PFS), 77.4% vs. 71.4%, P=0.153], consistent with the results of the PSM and IPTW analyses. Further analyses revealed that HBV infection was an independent prognostic factor for poor OS [multivariate analysis; hazard ratio (HR), 3.74; 95% confidence interval (CI), 1.45-9.68; P=0.006], LRFS (HR, 2.86; 95% CI, 1.37-5.95; P=0.005) in patients with stage N1, DMFS (HR, 2.65; 95% CI, 1.15-6.09; P=0.022) and PFS (HR, 2.63; 95% CI, 1.34-5.14; P=0.005). Among HBsAg-positive patients, liver protection improved OS (90.3% vs. 77.2%; P=0.022). Conclusions: HBV infection is an independent risk factor for patients with stage N1 NPC in the IMRT era. Hepatic protection may benefit the survival of HBsAg-positive patients. abstract_id: PUBMED:32104077 Management of Chemotherapy for Stage II Nasopharyngeal Carcinoma in the Intensity-Modulated Radiotherapy Era: A Review. Nasopharyngeal carcinoma is an endemic disease with a high prevalence in Southeast Asia, Mediterranean countries, and Northern Africa. With substantial advances in screening and diagnosis, increasingly more early-stage (stage I~II) patients are being diagnosed. The undebated treatment modality for stage I patients is radiotherapy alone. However, controversies exist for patients with stage II disease, mostly revolving around the management of chemotherapy. Meanwhile, the use of intensity-modulated radiotherapy for the treatment of nasopharyngeal carcinoma has increased recently, which has drastically improved survival outcomes. Thus, many oncologists have considered omitting chemotherapy for stage II patients in the intensity-modulated radiotherapy era. Unfortunately, prospective studies comparing concurrent radio-chemotherapy with intensity-modulated radiotherapy alone are limited. Notably, stage II nasopharyngeal carcinoma consists of three subgroups, among which stage T2N1M0 disease is unique and potentially warrants additional treatment including chemotherapy. Additionally, molecular biology techniques are advancing at an incredible speed.
Instead of adopting a one-size-fits-all recommendation, exploring potential predictive biomarkers to select patients who are likely to derive benefit from chemotherapy is a better choice. In this review, we summarize the data from studies and reviews regarding chemotherapy for stage II nasopharyngeal carcinoma in the intensity-modulated radiotherapy era and discuss the utility of chemotherapy. Ultimately, we conclude that IMRT alone may be sufficient for stage II nasopharyngeal carcinoma, although this needs to be verified by prospective studies in the near future; the evidence collected thus far has yet to show that concurrent chemoradiotherapy without induction or adjuvant chemotherapy is necessary for patients with stage II disease. Answer: Yes, hemoglobin level is still a significant prognostic factor in patients with nasopharyngeal carcinoma (NPC) in the era of intensity-modulated radiotherapy (IMRT) technology. A study including 650 patients with NPC treated with IMRT found that hemoglobin levels at three different time points (before treatment, during treatment, and at the last week of treatment) were significant for prognosis. Specifically, patients with anemia before treatment had lower 5-year overall survival (OS) and distant metastasis-free survival (DMFS) rates compared to those without anemia. Additionally, mid-treatment hemoglobin levels were predictive of decreased OS. Multivariate analysis confirmed that pre-treatment hemoglobin level was a predictor of decreased DMFS, and mid-treatment hemoglobin level predicted a decreased OS (PUBMED:26313452). Moreover, another study that investigated the prognostic value of nutritional markers, including hemoglobin level, in NPC patients receiving IMRT, with or without chemotherapy, supported the significance of hemoglobin levels. It was found that low pre-treatment hemoglobin levels were independently associated with poor DMFS (PUBMED:30147337). These findings suggest that monitoring and managing hemoglobin levels both before and during chemoradiotherapy are important for the prognosis of NPC patients undergoing IMRT.
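The NPC studies cited above rely on the same survival-analysis toolkit: Kaplan-Meier estimates, log-rank tests for univariate comparisons, and Cox proportional hazards models for multivariate hazard ratios. The sketch below illustrates that generic workflow in Python with the lifelines library; the input file and all column names (time_months, event, anemic, age, n_stage) are hypothetical placeholders, not data or code from any of the cited studies.

# Minimal survival-analysis sketch (hypothetical data and column names).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("npc_cohort.csv")  # hypothetical cohort file, one row per patient
anemic = df["anemic"] == 1

# Kaplan-Meier survival curves for the anemic and non-anemic groups.
kmf = KaplanMeierFitter()
kmf.fit(df.loc[anemic, "time_months"], df.loc[anemic, "event"], label="anemic")
print(kmf.survival_function_.tail())
kmf.fit(df.loc[~anemic, "time_months"], df.loc[~anemic, "event"], label="non-anemic")
print(kmf.survival_function_.tail())

# Univariate comparison of the two curves with the log-rank test.
lr = logrank_test(
    df.loc[anemic, "time_months"],
    df.loc[~anemic, "time_months"],
    event_observed_A=df.loc[anemic, "event"],
    event_observed_B=df.loc[~anemic, "event"],
)
print("log-rank p-value:", lr.p_value)

# Multivariate Cox proportional hazards model; the summary reports hazard
# ratios and 95% confidence intervals of the kind quoted in the abstracts.
cph = CoxPHFitter()
cph.fit(df[["time_months", "event", "anemic", "age", "n_stage"]],
        duration_col="time_months", event_col="event")
cph.print_summary()

Which covariates actually belong in such a model depends on the study design; the snippet only shows the mechanics behind the reported hazard ratios.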
Instruction: Adrenocorticotropic hormone--secreting islet cell tumors: are they always malignant? Abstracts: abstract_id: PUBMED:8259429 Adrenocorticotropic hormone--secreting islet cell tumors: are they always malignant? Purpose: To evaluate the frequency with which benign occult islet cell tumors cause ectopic adrenocorticotropic hormone (ACTH) syndrome. Materials And Methods: Ten patients with Cushing syndrome due to the production of ACTH by a pancreatic islet cell tumor were studied. In addition, 53 cases of ACTH-secreting islet cell tumors in the English-language literature were reviewed. Results: All 10 of the authors' patients had malignant islet cell tumors. Liver metastases were present in all 10 patients at presentation. Five patients are dead, four patients are alive with liver metastases, and one patient is alive without gross evidence of residual tumor after distal pancreatectomy and right hepatectomy. Eight of the 10 islet cell carcinomas produced gastrin in addition to ACTH. In the 53 reported cases of ectopic ACTH production, there was only one benign adenoma with a prolonged follow-up. Conclusion: When ectopic ACTH production is caused by an islet cell tumor, the tumor is large and malignant and has usually metastasized to the liver by the time Cushing syndrome is diagnosed. No occult ACTH-producing islet cell tumor was encountered in the authors' experience or in a review of the literature. abstract_id: PUBMED:3004244 Functioning oncocytic islet-cell carcinoma. Report of a case with electron-microscopic and immunohistochemical confirmation. A case of a 58-year-old woman with an unusual variant of malignant islet-cell tumor showing oncocytic features is described. On light microscopy, the tumor appeared to be composed of solid nests of uniform cells with abundant, eosinophilic cytoplasm and round nuclei with granular chromatin. Ultrastructurally, the cells contained numerous abnormal mitochondria, dilated rough endoplasmic reticulum, and scattered dense-core neurosecretory granules, often associated with cytoplasmic filaments. Tumor cells were focally immunoreactive for insulin, glucagon, and somatostatin and diffusely immunoreactive for alpha 1-antitrypsin as assayed by the avidin--biotin technique. The tumor was immunonegative for human chorionic gonadotropin, gastrin, adrenocorticotropic hormone, and serotonin. The patient exhibited some of the clinical features associated with glucagonoma syndrome, including diabetes mellitus and chronic diarrhea. The tumor behaved in a malignant fashion, with widespread lymphatic involvement and bony metastases at the time of presentation. This report of an oncocytic islet-cell carcinoma supports the concept of oncocytic differentiation in islet-cell tumors in a fashion analogous to oncocytic carcinoids. abstract_id: PUBMED:187526 Canine Zollinger-Ellison syndrome. The unusual finding of peptic esophagitis and duodenal ulceration in a dog was associated with a malignant pancreatic islet cell tumor producing gastrin and ACTH. The finding of a gastrinoma in a non-human species introduces the potential for developing an animal model for the study of the protean genetic, biochemical, physiologic and metabolic aspects of the Zollinger-Ellison syndrome. abstract_id: PUBMED:16418841 Multihormonality and entrapment of islets in pancreatic endocrine tumors.
We analyzed pancreatic endocrine tumors (PETs) from 200 patients for the incidence of multihormonality and entrapped islets and correlated the results with clinicopathological features. Our series included 86 cases (43%) of functioning PET and 114 cases (57%) of nonfunctioning PET. Classified according to the WHO classification, there were 32 well-differentiated benign PETs, 85 well-differentiated PETs with uncertain behavior, and 83 well-differentiated malignant PETs. All tumors were immunostained for pancreatic hormones (insulin, glucagon, somatostatin, and pancreatic polypeptide) and for additional hormones such as gastrin, vasoactive intestinal polypeptide, calcitonin, serotonin, and adrenocorticotropic hormone. Multihormonality was found in 34% of all PETs and it was a frequent finding in the tumors of the uncertain behavior group (38.8%). Islet entrapment was found in 57 tumors (28.5%) and was significantly more frequent in PETs with uncertain and malignant behavior than in benign ones (p=0.01). In 57 cases, we also investigated whether ductule entrapment accompanied islet entrapment. Of these 57 tumors, 45 (79%) had accompanying ductule entrapment. Ductule entrapment did not show a significant correlation with malignancy and was a more frequent finding in nonfunctioning tumors. We conclude that the incidence of multihormonality in PETs is not as high as suggested previously and that islet entrapment may reflect aggressive tumor growth and may be a complementary criterion for predicting the biological behavior of PETs. abstract_id: PUBMED:237800 Amelioration of hypoglycemia in a patient with malignant insulinoma during the development of the ectopic ACTH syndrome. A patient with a functioning islet cell carcinoma is described who had amelioration of her hypoglycemia during the development of ectopic ACTH syndrome. Moon facies and hyperkalemic metabolic acidosis were also present in this patient, features uncommonly seen in the ectopic ACTH syndrome. At autopsy, she was found to have active tuberculosis. Prophylactic antituberculous therapy should be given to high-risk patients with the ectopic ACTH syndrome. High doses of ACTH may be palliative in refractory hypoglycemic states. abstract_id: PUBMED:21417063 Ectopic production of multiple hormones (ACTH, MSH and gastrin) by a single malignant tumor. N/A abstract_id: PUBMED:205304 Clinically silent gross hypergastrinaemia from a multiple hormone-secreting pancreatic apudoma. A patient is described who had a malignant pancreatic islet cell apudoma secreting corticotrophin (ACTH) and melanocyte-stimulating hormone (MSH), both of which were clinically active, and very large quantities of immunoreactive gastrins, which were biologically active but clinically silent (normal gastric acid secretion and no peptic ulceration). The presence of parietal cell antibodies, with no increase in the plasma concentrations of hormones which can inhibit gastric acid secretion (secretin, GIP and VIP), suggests that many of the parietal cells may have been blocked by the autoantibodies. abstract_id: PUBMED:35846 The APUD cell system and its neoplasms: observations on the significance and limitations of the concept. N/A abstract_id: PUBMED:8606627 The glucagonoma syndrome. Clinical and pathologic features in 21 patients. The glucagonoma syndrome is a rare disorder characterized by weight loss, necrolytic migratory erythema (NME), diabetes, stomatitis, and diarrhea.
We identified 21 patients with the glucagonoma syndrome evaluated at the Mayo Clinic from 1975 to 1991. Although NME and diabetes help identify patients with glucagonomas, other manifestations of malignant disease often lead to the diagnosis. If the diagnosis is made after the tumor is metastatic, the potential for cure is limited. The most common presenting symptoms of the glucagonoma syndrome were weight loss (71%), NME (67%), diabetes mellitus (38%), cheilosis or stomatitis (29%), and diarrhea (29%). Although only 8 of the 21 patients had diabetes at presentation, diabetes eventually developed in 16 patients, 75% of whom required insulin therapy. Symptoms other than NME or diabetes mellitus led to the diagnosis of an islet cell tumor in 7 patients. The combination of NME and diabetes mellitus led to a more rapid diagnosis (7 months) than either symptom alone (4 years). Ten patients had diabetes mellitus before the onset of NME. No patients had NME clearly preceding diabetes mellitus. Increased levels of secondary hormones, such as gastrin (4 patients), vasoactive intestinal peptide (1 patient), serotonin (5 patients), insulin (6 patients, clinically significant in 1 only), human pancreatic polypeptide (2 patients), calcitonin (2 patients) and adrenocorticotropic hormone (2 patients), contributed to clinical symptoms leading to the diagnosis of an islet cell tumor before the onset of the full glucagonoma syndrome in 2 patients. All patients had metastatic disease at presentation. Surgical debulking, chemotherapy, somatostatin, and hepatic artery embolization offered palliation of NME, diabetes, weight loss, and diarrhea. Despite the malignant potential of the glucagonomas, only 9 of 21 patients had tumor-related deaths, occurring an average of 4.91 years after diagnosis. Twelve patients were still alive, with an average follow-up of 3.67 years. abstract_id: PUBMED:2544919 Ectopic adrenocorticotropic hormone syndrome: localization studies in 28 patients. A variety of imaging procedures were performed in 28 patients with ectopic adrenocorticotropic hormone (ACTH) syndrome in an attempt to localize the ACTH-producing tumor. Diagnosis was made on the basis of removal of an ACTH-producing tumor or biopsy of metastases in the 19 patients with a proved source and the absence of ACTH gradients in bilateral samples of the inferior petrosal sinuses in the nine patients in whom an ACTH-secreting tumor had not been localized. Eleven bronchial carcinoids, two thymic carcinoids, three pheochromocytomas, and three islet-cell tumors constituted the proved sources. The condition has been cured in eight patients, six are alive with residual tumor, and five have died. Of the nine patients with undetected sites of ACTH production, one has died of pneumocystis pneumonia and eight are being treated medically or with bilateral adrenalectomy. Computed tomography (CT) of the chest and abdomen was the most helpful study in the detection of these tumors. Selective arteriography (bronchial and visceral), systemic and portal venous sampling, and iodine-131 meta-iodobenzylguanidine scintigraphy failed to demonstrate tumors when findings at CT were negative. Bronchial carcinoids constituted most of the ACTH-secreting tumors in this study (58%) and in a review of four large series (47%). To assure early detection of these potentially malignant tumors, pulmonary CT should be performed every 6 months, even after hypercortisolism has been medically or surgically controlled.
Answer: Adrenocorticotropic hormone (ACTH)-secreting islet cell tumors are typically associated with malignancy. A study that evaluated ten patients with Cushing syndrome due to the production of ACTH by a pancreatic islet cell tumor found that all patients had malignant islet cell tumors, with liver metastases present at the time of diagnosis. The study also reviewed 53 cases from the English-language literature and found only one benign adenoma with a prolonged follow-up, suggesting that benign occurrences are extremely rare (PUBMED:8259429). Additionally, another case report described a patient with a malignant islet-cell tumor showing oncocytic features, which was immunonegative for ACTH, indicating that not all malignant islet-cell tumors secrete ACTH (PUBMED:3004244). Furthermore, a case of a dog with a malignant pancreatic islet cell tumor producing gastrin and ACTH was reported, which suggests that these tumors can also occur in non-human species (PUBMED:187526). In a study of 200 patients with pancreatic endocrine tumors (PETs), multihormonality was found in 34% of all PETs, and while the study did not specifically focus on ACTH-secreting tumors, it did include immunostaining for ACTH among other hormones (PUBMED:16418841). Another case report described a patient with a functioning islet cell carcinoma who experienced amelioration of hypoglycemia during the development of ectopic ACTH syndrome (PUBMED:237800). Moreover, a patient with a malignant pancreatic islet cell apudoma was reported to secrete ACTH and MSH, both of which were clinically active, along with large quantities of immunoreactive gastrins (PUBMED:205304). In summary, while there may be rare exceptions, the available evidence strongly suggests that ACTH-secreting islet cell tumors are usually malignant and often present with metastases at the time of diagnosis (PUBMED:8259429).
Instruction: Are inflammatory and coagulation biomarkers related to sleep characteristics in mid-life women? Abstracts: abstract_id: PUBMED:21120127 Are inflammatory and coagulation biomarkers related to sleep characteristics in mid-life women?: Study of Women's Health across the Nation sleep study. Study Objectives: Inflammation and pro-coagulation biomarkers may be a link between sleep characteristics and risk for cardiometabolic disorders. We tested the hypothesis that worse sleep characteristics would be associated with C-reactive protein (CRP), fibrinogen, factor VIIc, and plasminogen activator inhibitor (PAI)-1 in a multi-ethnic subsample of mid-life women enrolled in the Study of Women's Health across the Nation. Design: Cross-sectional. Measurements And Results: African American, Chinese, and Caucasian women (N=340) participated in 3 days of in-home polysomnographic (PSG) monitoring and had measures of inflammation and coagulation. Regression analyses revealed that each of the biomarkers was associated with indicators of sleep disordered breathing after adjusting for age, duration between sleep study and blood draw, site, menopausal status, ethnicity, residualized body mass index, smoking status, and medications that affect sleep or biomarkers. Among African American women, those who had higher levels of CRP had shorter PSG-sleep duration and those who had higher levels of fibrinogen had less efficient sleep in multivariate models. Conclusions: These results suggest that inflammation and pro-coagulation processes may be an important pathway connecting sleep disordered breathing and cardiometabolic disorders in women of these ethnic groups and that inflammation may be a particularly important pathway in African Americans. abstract_id: PUBMED:29617910 Sleep characteristics and inflammatory biomarkers among midlife women. Study Objectives: Research suggests that sleep disturbances are associated with elevated levels of inflammation. Some evidence indicates that women may be particularly vulnerable; increased levels of inflammatory biomarkers with sleep disturbances are primarily observed among women. Midlife, which encompasses the menopause transition, is typically reported as a time of poor sleep. We tested whether poorer objectively measured sleep characteristics were related to a poorer inflammatory profile in midlife women. Methods: Two hundred ninety-five peri- and postmenopausal women aged 40-60 completed 3 days of wrist actigraphy, physiologic hot flash monitoring, questionnaires (e.g., the Berlin sleep apnea risk questionnaire), and a blood draw for the assessment of inflammatory markers, including C-reactive protein (CRP), interleukin-6 (IL-6), and von Willebrand factor (VWF) antigen. Associations of objective (actigraphy) sleep with inflammatory markers were tested in regression models. Sleep efficiency was inverse log transformed. Covariates included age, race/ethnicity, education, body mass index, sleep apnea risk, homeostatic model assessment (a measure of insulin resistance), systolic blood pressure, low-density lipoprotein cholesterol, and physical activity. Results: In separate models controlling for age, race/ethnicity, and education, lower sleep efficiency was associated with higher IL-6 [b(SE) = .02 (.10), p = .003] and VWF [b(SE) = .02 (.08), p = .002]. More minutes awake after sleep onset was associated with higher VWF [b(SE) = .12 (.06), p = .01]. Findings persisted in multivariable models.
Conclusions: Lower sleep efficiency and more minutes awake after sleep onset were independently associated with higher circulating levels of VWF. Lower sleep efficiency was associated with higher circulating levels of IL-6. These findings suggest that sleep disturbances are associated with greater circulating inflammation in midlife women. abstract_id: PUBMED:18328671 Self-reported symptoms of sleep disturbance and inflammation, coagulation, insulin resistance and psychosocial distress: evidence for gender disparity. Self-reported ratings of sleep quality and symptoms of poor sleep have been linked to increased risk of coronary heart disease (CHD), Type 2 diabetes and hypertension, with recent evidence suggesting stronger associations in women. At this time, the mechanisms of action that underlie these gender-specific associations are incompletely defined. The current study examined whether gender moderates the relation of subjective sleep and sleep-related symptoms to indices of inflammation, coagulation, insulin resistance (IR) and psychosocial distress, factors associated with increased risk of cardiovascular and metabolic disorders. Subjects were 210 healthy men and women without a history of sleep disorders. The Pittsburgh Sleep Quality Index (PSQI) was used to assess sleep quality and frequency of sleep symptoms. In multivariate-adjusted models, overall poor sleep quality, more frequent problems falling asleep (>2 nights/week) and longer periods to fall asleep (>30 min) were associated with greater psychosocial distress, higher fasting insulin, fibrinogen and inflammatory biomarkers, but only for women. The data suggest that subjective ratings of poor sleep, greater frequency of sleep-related symptoms, and a longer period of time to fall asleep are associated with a mosaic of biobehavioral mechanisms in women and that these gender-specific associations have direct implications for recent observations suggesting gender differences in the association between symptoms of poor sleep and cardiovascular disease. abstract_id: PUBMED:18930805 Effects of gender and dementia severity on Alzheimer's disease caregivers' sleep and biomarkers of coagulation and inflammation. Background: Being a caregiver for a spouse with Alzheimer's disease is associated with increased risk for cardiovascular illness, particularly for males. This study examined the effects of caregiver gender and severity of the spouse's dementia on sleep, coagulation, and inflammation in the caregiver. Methods: Eighty-one male and female spousal caregivers and 41 non-caregivers participated (mean age of all participants 70.2 years). Full-night polysomnography (PSG) was recorded in each participant's home. Severity of the Alzheimer's disease patient's dementia was determined by the Clinical Dementia Rating (CDR) scale. The Role Overload scale was completed as an assessment of caregiving stress. Blood was drawn to assess circulating levels of D-dimer and Interleukin-6 (IL-6). Results: Male caregivers who were caring for a spouse with moderate to severe dementia spent significantly more time awake after sleep onset than female caregivers caring for spouses with moderate to severe dementia (p=.011), who spent a similar amount of time awake after sleep onset to caregivers of low dementia spouses and to non-caregivers. Similarly, male caregivers caring for spouses with worse dementia had significantly higher circulating levels of D-dimer (p=.034) than females caring for spouses with worse dementia.
In multiple regression analysis (adjusted R(2)=.270, p<.001), elevated D-dimer levels were predicted by a combination of the CDR rating of the patient (p=.047) as well as greater time awake after sleep onset (p=.046). Discussion: The findings suggest that males caring for spouses with more severe dementia experience more disturbed sleep and have greater coagulation, the latter being associated with the disturbed sleep. These findings may provide insight into why male caregivers of spouses with Alzheimer's disease are at increased risk for illness, particularly cardiovascular disease. abstract_id: PUBMED:29073400 Sleep restriction and delayed sleep associate with psychological health and biomarkers of stress and inflammation in women. Study Objectives: Despite strong associations between sleep duration and health, there is no clear understanding of how volitional chronic sleep restriction (CSR) alters the physiological processes that lead to poor health in women. We focused on biochemical and psychological factors that previous research suggests are essential to uncovering the role of sleep in health. Design: Cross-sectional study. Setting: University-based. Participants: Sixty female participants (mean age, 19.3; SD, 2.1 years). Measurements: We analyzed the association between self-reported volitional CSR and time to go to sleep on a series of sleep and psychological health measures as well as biomarkers of immune functioning/inflammation (interleukin [IL]-1β), stress (cortisol), and sleep regulation (melatonin). Results: Across multiple measures, poor sleep was associated with decreased psychological health and a reduced perception of self-reported physical health. Volitional CSR was related to increased cortisol and increased IL-1β levels. We separately looked at individuals who experienced CSR with and without delayed sleep time and found that IL-1β levels were significantly elevated in CSR alone and in CSR combined with a late sleep time. Cortisol, however, was only elevated in those women who experienced CSR combined with a late sleep time. We did not observe any changes in melatonin across groups, and melatonin levels were not related to any sleep measures. Conclusions: New to our study is the demonstration of how an increase in a proinflammatory process and an increase in hypothalamic-pituitary-adrenal axis activity both relate to volitional CSR, with and without a delayed sleep time. We further show how these mechanisms relate back to psychological and self-reported health in young adult women. abstract_id: PUBMED:22711587 Inflammation, coagulation and risk of locomotor disability in elderly women: findings from the British Women's Heart and Health Study. This study investigated associations between chronic inflammation and coagulation and incident locomotor disability using prospective data from the British Women's Heart and Health Study. Locomotor disability was assessed using self-reported questionnaires in 1999/2000, and 3 and 7 years later. Scores for inflammation and coagulation were obtained from summation of quartile categories of all available biomarkers from blood samples taken at baseline. 534 women developed locomotor disability after 3 years, 260 women after 7 years, while 871 women remained free of locomotor disability over the whole study period. 
After adjustment for demographic characteristics, lifestyle factors and health conditions, we found associations between inflammation and incident locomotor disability after three (OR per unit increase in score = 1.08, 95 % confidence interval (CI): 1.03, 1.13) and 7 years (OR = 1.10, 95 % CI: 1.03, 1.18) and between coagulation and incident locomotor disability after 3 (OR = 1.06, 95 % CI: 0.98, 1.14) and 7 years (OR = 1.09, 95 % CI: 1.00, 1.18). This corresponds to ORs between 1.8 and 2.4 comparing women with highest to lowest inflammation or coagulation scores. These results support the role of inflammation and coagulation in the development of locomotor disability in elderly women irrespective of their lifestyle factors and underlying age-related chronic diseases. abstract_id: PUBMED:19955705 Sleep and biomarkers of atherosclerosis in elderly Alzheimer caregivers and controls. Background: Perturbed sleep might contribute to cardiovascular disease by accelerating atherosclerosis. Sleep is poor in Alzheimer caregivers who are also a group at increased cardiovascular risk. Objective: To test the hypothesis that impaired sleep relates to elevated levels of biomarkers of atherosclerosis in community-dwelling elderly and that this association would possibly be stronger in caregivers than in non-caregiving controls. Methods: We studied 97 Alzheimer caregivers and 48 non-caregiving controls (mean age 71 +/- 8 years, 72% women) who underwent wrist actigraphy at their homes. Measures of objective sleep were averaged across 3 consecutive nights. The Pittsburgh Sleep Quality Index was administered by an interviewer to rate subjective sleep quality. Morning fasting blood samples were collected to determine measures of inflammation, coagulation and endothelial dysfunction. Results: There were independent associations between decreased subjective sleep quality and increased levels of fibrin D-dimer (p = 0.022, DeltaR(2) = 0.029) and von Willebrand factor antigen (p = 0.029, DeltaR(2) = 0.034) in all participants. Percent sleep (p = 0.025) and subjective sleep quality (p = 0.017) were lower in caregivers than in controls. In caregivers, the correlation between decreased percent sleep and elevated levels of interleukin-6 (p = 0.042, DeltaR(2) = 0.039) and C-reactive protein (p < 0.10, DeltaR(2) = 0.027) was significantly stronger than in controls. Conclusion: Perceived impairment in sleep related to increased coagulation activity and endothelial dysfunction in all participants, whereas objectively impaired sleep related to inflammation activity in caregivers. The findings provide one explanation for the increased cardiovascular risk in elderly poor sleepers and dementia caregivers in particular. abstract_id: PUBMED:37274178 Coagulation biomarkers for ischemic stroke. A State of the Art lecture titled "coagulation biomarkers for ischemic stroke" was presented at the International Society on Thrombosis and Haemostasis (ISTH) Congress in 2022. Ischemic stroke (IS) is a common disease with major morbidity and mortality. It is a challenge to determine which patients are at risk for IS or have poor clinical outcome after IS. An imbalance of coagulation markers may contribute to the progression and prognosis of IS. Therefore, we now discuss studies on the association of selected coagulation biomarkers from the hemostasis, inflammation, and immunothrombosis systems with the risk of IS, stroke severity at the acute phase, and clinical outcome after treatment. 
We report on coagulation biomarker-associated risk of IS, stroke severity, and outcomes following IS derived from prospective population studies, case-control studies, and acute-phase IS studies. We found indications that many coagulation and inflammation biomarkers are associated with IS, but it is too early to conclude that any of these biomarkers can be applied in a therapeutic setting to predict patients at risk of IS, stroke severity at the acute phase, and clinical outcome after treatment. The strongest evidence for a role in IS was found for beta-thromboglobulin, von Willebrand factor, factor VIII, fibrinogen, thrombin-activatable fibrinolysis inhibitor, D-dimer, and neutrophil extracellular traps, and therefore, they are promising candidates. Further research and validation in large-size populations using well-defined study designs are warranted. Finally, we provide a selection of recent data relevant to this subject that was presented at the 2022 ISTH Congress. abstract_id: PUBMED:33799528 Coagulation and Fibrinolysis in Obstructive Sleep Apnoea. Obstructive sleep apnoea (OSA) is a common disease which is characterised by repetitive collapse of the upper airways during sleep resulting in chronic intermittent hypoxaemia and frequent microarousals, consequently leading to sympathetic overflow, enhanced oxidative stress, systemic inflammation, and metabolic disturbances. OSA is associated with increased risk for cardiovascular morbidity and mortality, and accelerated coagulation, platelet activation, and impaired fibrinolysis serve as the link between OSA and cardiovascular disease. In this article we briefly describe physiological coagulation and fibrinolysis focusing on processes which could be altered in OSA. Then, we discuss how OSA-associated disturbances, such as hypoxaemia, sympathetic system activation, and systemic inflammation, affect these processes. Finally, we critically review the literature on OSA-related changes in markers of coagulation and fibrinolysis, discuss potential reasons for discrepancies, and comment on the clinical implications and future research needs. abstract_id: PUBMED:24076375 Child abuse is related to inflammation in mid-life women: role of obesity. Objective: Elevated inflammation biomarkers are associated with incident cardiovascular disease. Several studies suggest that childhood abuse may be associated with inflammation later in life. This study examined whether childhood abuse predicted elevated levels of C-reactive protein (CRP) and whether the association was due to body size. Methods: Participants were 326 (104 Black, 222 White) women from the Pittsburgh site of the Study of Women's Health Across the Nation (SWAN). SWAN included a baseline assessment of pre-menopausal or early peri-menopausal women in mid-life (mean age=45.7), and CRP, depressive symptoms, body mass index (BMI), and other covariates were measured over 7 annual follow-up visits. The Childhood Trauma Questionnaire, a standardized measure that retrospectively assesses abuse and neglect in childhood and adolescence, was administered at year 8 or 9 of follow-up. Results: Approximately 37% of the participants reported a history of abuse or neglect.
Generalized estimating equations showed that sexual and emotional abuse, emotional and physical neglect, and the total number of types of abuse were associated with higher CRP levels over 7 years, adjusting for race, age, education, smoking status, use of hormone therapy, depressive symptoms, occurrence of heart attack or stroke, and medications for hypertension. The coefficients for indirect effects for emotional and sexual abuse, physical neglect, and total number of types of abuse on CRP levels through BMI were significant. A history of emotional abuse and neglect was related to percent change in CRP over the 7 years but not through percent change in BMI over the 7 years. Conclusion: A history of childhood abuse and neglect retrospectively reported is related to overall elevated inflammation in mid-life women, perhaps through obesity. A history of some types of abuse and neglect (emotional) may be related to change in inflammation, independent of simultaneously measured change in BMI. Answer: Yes, inflammatory and coagulation biomarkers are related to sleep characteristics in mid-life women. Studies have found associations between worse sleep characteristics and elevated levels of C-reactive protein (CRP), fibrinogen, factor VIIc, and plasminogen activator inhibitor (PAI)-1, particularly in African American women who had higher levels of CRP and shorter polysomnographic (PSG) sleep duration, and those with higher levels of fibrinogen had less efficient sleep (PUBMED:21120127). Additionally, lower sleep efficiency and more minutes awake after sleep onset were independently associated with higher circulating levels of von Willebrand factor (VWF) and interleukin-6 (IL-6), suggesting that sleep disturbances are associated with greater circulating inflammation in midlife women (PUBMED:29617910). Furthermore, subjective ratings of poor sleep and sleep-related symptoms were associated with higher fasting insulin, fibrinogen, and inflammatory biomarkers, but only for women, indicating a gender disparity in the relationship between sleep disturbance and inflammation, coagulation, insulin resistance, and psychosocial distress (PUBMED:18328671). In the context of caregiving, male caregivers of spouses with more severe dementia experienced more disturbed sleep and had greater coagulation, as indicated by higher circulating levels of D-dimer, which was associated with the disturbed sleep (PUBMED:18930805). Moreover, volitional chronic sleep restriction (CSR) was related to increased cortisol and interleukin (IL)-1β levels, with IL-1β levels significantly elevated in CSR alone and in CSR combined with a late sleep time, while cortisol was only elevated in those women who experienced CSR combined with a late sleep time (PUBMED:29073400). These findings demonstrate the complex interplay between sleep, inflammation, and coagulation, and their potential impact on the health of mid-life women.
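The sleep studies summarized in this answer estimate their associations with covariate-adjusted regression models of inflammatory or coagulation markers (often log-transformed because of skew) on objective sleep measures. The following minimal sketch, written in Python with statsmodels, shows what such a model looks like; the input file and column names (il6, sleep_efficiency, wake_after_sleep_onset, age, bmi, race) are hypothetical and do not come from any of the cited studies.

# Minimal covariate-adjusted regression sketch (hypothetical data and columns).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("midlife_sleep_cohort.csv")  # hypothetical cohort file

# Inflammatory markers are right-skewed, so they are commonly log-transformed.
df["log_il6"] = np.log(df["il6"])

# Linear model: log(IL-6) regressed on sleep measures, adjusted for age,
# body mass index, and race/ethnicity (treated as categorical).
model = smf.ols(
    "log_il6 ~ sleep_efficiency + wake_after_sleep_onset + age + bmi + C(race)",
    data=df,
).fit()
print(model.summary())  # coefficients (SE) and p-values, as reported above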
Instruction: Can optical coherence tomography predict the outcome of laser photocoagulation for diabetic macular edema? Abstracts: abstract_id: PUBMED:18050810 Can optical coherence tomography predict the outcome of laser photocoagulation for diabetic macular edema? Background And Objective: To assess the outcome of laser photocoagulation in patients with diabetic macular edema. Patients And Methods: Forty-seven patients (51 eyes) with clinically significant macular edema (CSME) undergoing grid laser photocoagulation were included. Clinical examination and optical coherence tomography (OCT) were performed at baseline and 3 to 4 months after treatment. The central foveal thickness, mean inner macular thickness (average retinal thickness in fovea and inner macular circle), and mean macular thickness were calculated. Based on the greatest OCT thickness at baseline, patients were grouped according to mild (< 300 microm; Group 1), moderate (300 to 399 microm; Group 2), and severe (> or = 400 microm; Group 3) macular edema. Results: Group 2 showed significant reductions in central foveal thickness (23 microm, P = .02), mean inner macular thickness (18 microm, P = .02), and mean macular thickness (9 microm, P = .04) with slight improvement in visual acuity. Groups 1 and 3 did not show any significant change in macular thickness values and there was a statistically insignificant worsening of visual acuity in these groups. Conclusions: Patients with moderate macular thickening of 300 to 400 microm benefit most from laser treatment. OCT may help in choosing the appropriate treatment for CSME based on the degree of macular thickening. Long-term studies are warranted to confirm these findings. abstract_id: PUBMED:22762054 Is laser photocoagulation still effective in diabetic macular edema? Assessment with optical coherence tomography in Nepal. Aim: To find out the outcome of laser photocoagulation in clinically significant macular edema (CSME) by optical coherence tomography (OCT). Methods: It was a prospective, non-controlled, case series study enrolling 81 eyes of 64 patients with CSME between August 2008 and January 2010. All patients received modified grid photocoagulation with a frequency-doubled Nd:YAG laser. Each patient was evaluated in terms of best-corrected visual acuity (BCVA) and regression or progression of maculopathy after laser therapy at 1, 3 and 6 months. Spearman's correlation test was used to show the correlation between BCVA and total macular volume (TMV). Analysis of variance (ANOVA) was used to compare among groups and an independent t-test was used to compare within each group. Results: There was a high correlation between BCVA and TMV (P≤0.001). BCVA improved in 50.6%, remained static in 39.5% and deteriorated in 9.9% of patients after 6 months of treatment. The baseline TMV (mean±SD) was 9.26±1.83, 10.4±2.38, 11.5±3.05, 8.89±0.75 and 9.47±1.98 mm(3) for the different OCT patterns, ST (sponge-like thickening), CMO (cystoid macular edema), SFD (subfoveal detachment) and VMIA (vitreomacular interface abnormality), and for the average TMV, respectively (P=0.04). After 6 months of laser treatment, the mean TMV decreased from 9.47±1.98 mm(3) to 8.77±1.31 mm(3) (P=0.01). In ST there was a significant decrease in TMV (P=0.01); further, within these groups at 6 months, the values were significantly different (P=0.01). Conclusion: OCT showed the different morphological variants of CSME, and the response to treatment differed among them.
TMV decreased the most and was hence accompanied by improvement in vision after 6 months of laser treatment. In the era of anti-vascular endothelial growth factor (anti-VEGF) agents, the efficacy of laser may seem overshadowed, but it remains the first-line treatment in developing nations like Nepal, where anti-VEGF drugs may not be easily available or affordable. abstract_id: PUBMED:21654890 Macular laser photocoagulation guided by spectral-domain optical coherence tomography versus fluorescein angiography for diabetic macular edema. Background: The aim of this study was to compare the efficacy of spectral-domain optical coherence tomography (SD-OCT) and fluorescein angiography (FA) in the guidance of macular laser photocoagulation for diabetic macular edema. Methods: This was a prospective interventional clinical comparative pilot study. Forty eyes from 24 consecutive patients with diabetic macular edema were allocated to receive laser photocoagulation guided by SD-OCT or FA. Best-corrected visual acuity (BCVA), central macular thickness, and retinal volume were assessed at baseline and two months after treatment. Results: Subjects treated using FA-guided laser improved BCVA from the logarithm of the minimum angle of resolution (logMAR) 0.52 ± 0.2 to 0.37 ± 0.2 (P < 0.001), and decreased mean central macular thickness from 397.25 ± 139.1 to 333.50 ± 105.7 μm (P < 0.001) and retinal volume from 12.61 ± 1.6 to 10.94 ± 1.4 mm(3) (P < 0.001). Subjects treated using SD-OCT guided laser had improved BCVA from 0.48 ± 0.2 to 0.33 ± 0.2 logMAR (P < 0.001), and decreased mean central macular thickness from 425.90 ± 149.6 to 353.4 ± 140 μm (P < 0.001) and retinal volume from 12.38 ± 2.1 to 11.53 ± 1.1 mm(3) (P < 0.001). No significant differences between the groups were found in two-month BCVA (P = 0.505), two-month central macular thickness (P = 0.660), or two-month retinal volume (P = 0.582). Conclusion: The short-term results of this pilot study suggest that SD-OCT is a safe and effective technique and could be considered as a valid alternative to FA in the guidance of macular laser photocoagulation treatment for diabetic macular edema. abstract_id: PUBMED:32285237 Efficacy of navigated focal laser photocoagulation in diabetic macular edema planned with en face optical coherence tomography versus fluorescein angiography. Aim: To analyze the efficacy of navigated focal laser photocoagulation (FLP) of microaneurysms in diabetic macular edema (DME) planned using en face optical coherence tomography (OCT) as compared with fluorescein angiography (FA). Methods: Twenty-six eyes of 21 DME patients (12 males, 9 females, 69.5 ± 12.3 years) with mean BCVA of 0.52 ± 0.44 LogMAR were included. En face OCT images of the deep capillary plexus slab and FA images were used to plan FLP targeting of leaky microaneurysms. The primary outcome measures were central retinal thickness (CRT) and macular volume. The secondary outcome measure was best-corrected visual acuity (BCVA). Results: The difference in the change of CRT and macular volume between en face OCT and FA-planned FLP after 1 month and at the end of follow-up was not statistically significant (p > 0.05), except for a higher CRT reduction in the en face OCT-planning group (p = 0.007) at the end of mean follow-up of 2.6 ± 0.9 months. There was no difference in BCVA change between the two planning options (p = 0.42). Conclusion: En face OCT is a non-inferior alternative to FA in the planning of navigated FLP of microaneurysms in DME.
abstract_id: PUBMED:19254904 Optical coherence tomographic patterns in diabetic macular oedema: prediction of visual outcome after focal laser photocoagulation. Aim: To identify optical coherence tomography (OCT) patterns predictive of visual outcome in diabetic macular oedema (DMO) patients who undergo focal laser photocoagulation. Methods: This study involved 70 eyes (45 patients) with clinically significant macular oedema that underwent focal laser photocoagulation using the Early Treatment Diabetic Retinopathy Study protocol. Preoperative macular OCT images were retrospectively examined. OCT features were classified into four patterns: diffuse retinal thickening (DRT); cystoid macular oedema (CMO), serous retinal detachment and vitreomacular interface abnormalities (VMIA). Changes in retinal thickness, retinal volume and visual acuity (VA) after focal laser photocoagulation were evaluated and compared with respect to their OCT features. Results: After focal laser photocoagulation, changes in retinal thickness and retinal volume were significantly different for different OCT types (p = 0.002 and p<0.001). The change in VA from baseline was not significantly different between groups (p = 0.613). The DRT pattern was associated with a greater reduction in retinal thickening and better VA improvement than the CMO or VMIA patterns. Proportions of patients with persistent DMO (central macular thickness >250 microm after laser treatment) were greater for the CMO and VMIA patterns than DRT pattern. Conclusion: DRT patients achieved a greater reduction in retinal thickening and greater VA increases than CMO and VMIA patients. We suggest that classifying DMO structural patterns using OCT might allow visual outcome to be predicted after laser photocoagulation. abstract_id: PUBMED:22973860 Variability in photocoagulation treatment of diabetic macular oedema. Purpose: To establish whether differences in the assessment of diabetic macular oedema (DME) with either optical coherence tomography (OCT) or stereoscopic biomicroscopy lead to variability in the photocoagulation treatment of DME. Methods: The differences in the assessment of DME with either OCT or stereoscopic biomicroscopy were analysed by calculating the surface areas and the overlap of retinal thickening. Photocoagulation treatment plans of retinal specialists were compared by evaluating the number and location of planned laser spots. Results: The threshold for and dosage of photocoagulation differ depending upon whether the basis of retinal thickness diagnosis is clinical observation or OCT. The overlap in laser spot location based on the assessment of DME with OCT or biomicroscopy averages 51%. Among retinal specialists, the treatment plans differed in the laser spot count by six- to 11-fold. Conclusion: Diabetic macular oedema photocoagulation treatment threshold and dosage of laser spots differ depending on whether thickness assessments are based on stereoscopic slit-lamp biomicroscopy or OCT. In addition, retinal specialists differed in the number and placement of planned laser spots even when given identical information concerning DME and treatable lesions. This variability in the photocoagulation treatment of DME could lead to differences in patient outcome and laser study results. abstract_id: PUBMED:24695064 Fluorescein angiography versus optical coherence tomography-guided planning for macular laser photocoagulation in diabetic macular edema. 
Purpose: To compare laser photocoagulation plans for diabetic macular edema (DME) using fluorescein angiography (FA) versus an optical coherence tomography (OCT) thickness map superimposed on the retina. Methods: Fourteen eyes with DME undergoing navigated laser photocoagulation with a navigated photocoagulator had FA taken using the same instrument. The optical coherence tomography central retinal thickness map was imported to the photocoagulator and aligned onto the retina at the same magnification. Three retina specialists placed laser spot marks separately on the FA and OCT images in a masked fashion. The spots placed by each physician were compared between FA and OCT and among physicians. The area of dye leakage on FA and of increased central retinal thickness on OCT of the same eye were also compared. Results: The average number of spots using the FA and OCT templates was 36.64 and 40.61, respectively (P = 0.0201). The average area of dye leakage was 7.45 mm(2), whereas the average area of increased central retinal thickness on OCT of the same eye was 10.92 mm(2) (P = 0.013). Conclusion: There is variability in the treatment planning for macular photocoagulation, with a tendency to place more spots when guided by OCT than by FA. Integration of an OCT map aligned to the retina may have an impact on the treatment plan once such information is available. abstract_id: PUBMED:19174719 Association of the extent of diabetic macular edema as assessed by optical coherence tomography with visual acuity and retinal outcome variables. Purpose: To determine whether the extensiveness of diabetic macular edema using a 10-step scale based on optical coherence tomography explains pretreatment variation in visual acuity and predicts change in macular thickness or visual acuity after laser photocoagulation. Methods: Three hundred twenty-three eyes from a randomized clinical trial of two methods of laser photocoagulation for diabetic macular edema were studied. Baseline number of thickened optical coherence tomography subfields was used to characterize diabetic macular edema on a 10-step scale from 0 to 9. Associations were explored between baseline number of thickened subfields and baseline fundus photographic variables, visual acuity, central subfield mean thickness (CSMT), and total macular volume. Associations were also examined between baseline number of thickened subfields and changes in visual acuity, CSMT, and total macular volume at 3.5 and 12 months after laser photocoagulation. Results: For baseline visual acuity, the number of thickened subfields explained no more variation than did CSMT, age and fluorescein leakage. A greater number of thickened subfields was associated with a greater baseline CSMT, total macular volume, area of retinal thickening, and degree of thickening at the center of the macula (r = 0.64, 0.77, 0.61-0.63, and 0.45, respectively) and with a lower baseline visual acuity (r = 0.38). Baseline number of thickened subfields showed no association with change in visual acuity (r < or = 0.01-0.08) and weak associations with change in CSMT and total macular volume (r from 0.11 to 0.35). Conclusion: This optical coherence tomography based assessment of the extensiveness of diabetic macular edema did not explain additional variation in baseline visual acuity above that explained by other known important variables nor predict changes in macular thickness or visual acuity after laser photocoagulation.
abstract_id: PUBMED:17017196 Serial optical coherence tomography of subthreshold diode laser micropulse photocoagulation for diabetic macular edema. Background And Objective: To use serial optical coherence tomography (OCT) to evaluate low-intensity, high-density subthreshold diode laser micropulse photocoagulation treatment of clinically significant diabetic macular edema. Patients And Methods: Eighteen consecutive eyes of 14 patients with clinically significant diabetic macular edema and a minimum foveal thickness of 223 microm or greater were prospectively evaluated by OCT preoperatively and 1, 4, and 12 weeks following treatment. Results: Overall, estimated macular edema 3 months postoperatively (minimum foveal thickness--223 microm) was reduced a mean of 24% (P = .02). Eleven eyes treated for recurrent or persistent clinically significant diabetic macular edema following prior treatment more than 3 months before study entry were most improved, with a mean reduction in estimated macular edema 3 months postoperatively of 59%. No treatment complications were observed. No patient demonstrated laser lesions following treatment. Conclusions: Low-intensity, high-density subthreshold diode laser micropulse photocoagulation can reduce or eliminate clinically significant diabetic macular edema measured by OCT. Further study is warranted. abstract_id: PUBMED:25017009 Optical coherence tomography-guided selective focal laser photocoagulation: a novel laser protocol for diabetic macular edema. Background: To compare the anatomic and functional results between optical coherence tomography (OCT)-guided selective focal laser photocoagulation (OCT-laser) and conventional modified Early Treatment Diabetic Retinopathy Study (mETDRS) laser treatment for diabetic macular edema (DME). Methods: We analyzed treatment outcomes in 47 consecutive eyes treated with OCT-laser compared to 31 matched eyes treated with mETDRS. In the OCT-laser group, we identified 'significant actively-leaking microaneurysms on OCT' (SALMO) which are responsible for edema in OCT B-scan images, and thoroughly ablated them with photocoagulation. Best-corrected visual acuity (BCVA) and retinal thickness by OCT were compared at baseline and 12 months after treatment between two groups. Results: OCT-laser treatment resulted in significant improvements in BCVA, central subfield thickness (CST), and maximum retinal thickness (MRT) from baseline at 12 months from the time of therapy (+2.5 letter score, p = 0.04; -45.56 μm in CST, p < 0.001; -91.6 μm in MRT, p < 0.001). The mean number of treated 'SALMO' was 5.6 ± 4.0 (range 1-26), while the number of MAs in 'treatable lesions' by fluorescein angiography (FA) in the same eye was 16.3 ± 11.8 (range 2-42). There was no difference between OCT-laser and mETDRS groups in changes of these parameters from baseline at 12 months (p = 0.56, p = 0.89, p = 0.43 respectively). Fundus autofluorescence (FAF) and OCT revealed less tissue damage in OCT-laser-treated eyes, compared to eyes treated with mETDRS (p < 0.001). Conclusions: OCT-laser shows similar anatomic and functional outcomes compared to conventional laser (modified ETDRS), with significantly less retinal damages. Answer: Optical coherence tomography (OCT) can be useful in predicting the outcome of laser photocoagulation for diabetic macular edema (DME). Studies have shown that OCT can help assess the degree of macular thickening and the morphological variants of DME, which can influence the response to laser treatment. 
For instance, patients with moderate macular thickening (300 to 400 microm) have been found to benefit most from laser treatment, showing significant reductions in macular thickness and slight improvement in visual acuity (PUBMED:18050810). Additionally, different OCT patterns of DME, such as sponge-like thickening (ST) and cystoid macular edema (CMO), have been associated with varying responses to laser therapy, with some patterns showing more significant decreases in total macular volume (TMV) and improvement in vision after treatment (PUBMED:22762054). Moreover, OCT-guided laser photocoagulation has been compared to fluorescein angiography (FA)-guided treatment, with studies suggesting that OCT is a safe and effective technique that could be considered a valid alternative to FA in guiding macular laser photocoagulation for DME (PUBMED:21654890). The efficacy of navigated focal laser photocoagulation planned with en face OCT has also been shown to be non-inferior to that planned with FA (PUBMED:32285237). Furthermore, OCT patterns have been found to be predictive of visual outcomes in DME patients undergoing focal laser photocoagulation. For example, the diffuse retinal thickening (DRT) pattern was associated with a greater reduction in retinal thickening and better visual acuity improvement than the CMO or vitreomacular interface abnormalities (VMIA) patterns (PUBMED:19254904). However, variability in photocoagulation treatment of DME exists, and the threshold for and dosage of photocoagulation can differ depending on whether assessments are based on clinical observation or OCT findings. This variability could lead to differences in patient outcomes (PUBMED:22973860). Additionally, while OCT can provide valuable information, it may not explain additional variation in baseline visual acuity above that explained by other known important variables nor predict changes in macular thickness or visual acuity after laser photocoagulation (PUBMED:19174719). In summary, OCT can be a useful tool in predicting and guiding the outcome of laser photocoagulation for DME, although variability in treatment approaches and patient responses should be considered.
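A brief note on the acuity units cited in the abstracts and answer above: BCVA is reported on the logMAR scale, where lower values indicate better vision and a change of 0.1 logMAR corresponds to roughly one line on a standard acuity chart. As an illustrative conversion only (assuming conventional 20-ft Snellen notation, which the abstracts do not state), the FA-guided group in PUBMED:21654890 improved approximately as follows:

\[
\text{Snellen denominator} \approx 20 \times 10^{\text{logMAR}}, \qquad 0.52 \;\rightarrow\; \sim 20/66, \qquad 0.37 \;\rightarrow\; \sim 20/47 .
\]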
Instruction: Declining syphilis prevalence among pregnant women in northern Botswana: an encouraging sign for the HIV epidemic? Abstracts: abstract_id: PUBMED:16326844 Declining syphilis prevalence among pregnant women in northern Botswana: an encouraging sign for the HIV epidemic? Objectives: To evaluate trends in syphilis prevalence among antenatal women in a high HIV prevalence setting in northern Botswana. Methods: Laboratory logbooks of antenatal syphilis testing for 1992-2003 in Francistown, Botswana's second largest city, were reviewed, and a consecutive sample of 750 women per year from 1992-2003 were analysed. VDRL result and age were recorded. A positive result was considered a case. Results: Overall syphilis prevalence (VDRL positive) among pregnant women in Francistown decreased from 12.4% in 1992 to 4.3% in 2003 (p< or =0.001). The downward trend in overall syphilis prevalence began in 1997. There was no change in syphilis prevalence from 1992-6. Beginning in 1997, there has been a significant decrease in syphilis prevalence in all age groups. Conclusions: Syphilis in pregnant women in Francistown has been decreasing for the last 6 years, despite extremely high HIV prevalence (stable at > or =40% since 1996) in the same population. Reasons contributing to the decline in syphilis rates may include nationwide implementation of syndromic management of sexually transmitted diseases (STDs) in 1992, improved access to health care, and less risky sexual behaviour. There is evidence from other sources indicating that risky sexual behaviour in Botswana has decreased during the HIV epidemic. abstract_id: PUBMED:25243015 High prevalence and incidence of sexually transmitted infections among women living in Kwazulu-Natal, South Africa. Background And Objectives: Sexually transmitted infections (STIs) contribute largely to the burden of health in South Africa and are recognized as major contributors to the human immunodeficiency virus (HIV) epidemic. Young women are particularly vulnerable to STIs. The purpose of this secondary analysis was to examine the risk factors associated with prevalent and incident STIs among women who had participated in three clinical trials. Methods: A total of 5,748 women were screened and 2293 sexually active, HIV negative, non-pregnant women were enrolled in three clinical trials in Kwazulu-Natal, South Africa. The prevalence of individual STIs Chlamydia trachomatis (CT), Neisseria gonorrhea (NG), syphilis, and Trichomonas vaginalis (TV) was assessed at screening; and incident infections were evaluated over a 24 month period. Results: Overall, the combined study population of all three trials had a median age of 28 years (inter-quartile range (IQR):22-37), and a median duration of follow-up of 12 months. Prevalence of STIs (CT, NG, TV, or syphilis) was 13% at screening. The STI incidence was estimated to be 20/100 women years. Younger women (<25 years, p < 0.001), women who were unmarried (p < 0.001) and non-cohabiting women (p < 0.001) were shown to be at highest risk for incident STIs. Conclusions: These results confirm the extremely high prevalence and incidence of STIs among women living in rural and urban communities of KwaZulu-Natal, South Africa, where the HIV epidemic is also particularly severe. These findings strongly suggest an urgent need to allocate resources for STI and HIV prevention that mainly target younger women. Trial Registration: Clinical Trials.gov, NCT00121459. 
abstract_id: PUBMED:36284561 Trends of HIV and syphilis prevalence among pregnant women in antenatal clinics in Togo: Analysis of sentinel serosurveillance results between 2008 and 2016. Introduction: The aim of our work was to analyse the trends of HIV infection and syphilis among pregnant women in prenatal consultation (PNC) in healthcare facilities in Togo. Methods: This was an analytical retrospective study, covering the period from 2008 to 2016 and focusing on pregnant women aged 15 to 49 seen in PNC for the first time in maternal and child health services in Togo. Results: During the study period, 41,536 pregnant women were registered across the survey years 2008, 2009, 2010, 2014 and 2016 (8079, 8572, 8430, 7920 and 8535, respectively). The mean age of the patients was 26 ± 6 years in 2008, 2009 and 2010. The overall HIV prevalence decreased from 3.4% in 2008 to 2.9% in 2016 (p = 0.0145). Among 15-19 year-olds it fell from 1% in 2008 to 0.5% in 2016, and among 20-24 year-olds from 3.6% in 2008 to 1.4% in 2016 (p < 0.0001). Between 2008 and 2016, HIV prevalence in rural areas was about half that in urban areas, a statistically significant difference. The prevalence of syphilis decreased significantly from 1.3% in 2008 to 0.6% in 2016 (p < 0.0001). It was low and not associated with age in 2008, and stood at 0.2% and 0.4% in 2016 in the 15-19 and 20-24 age groups, respectively. Syphilis prevalence remained low between 2008 and 2016 in both urban and rural areas. Conclusion: Our study documents a relatively low prevalence of syphilis and HIV among pregnant women in Togo, with a significant decrease among adolescents and young women, attesting to the effectiveness of increased screening and comprehensive prevention of sexually transmitted infections (STIs) and HIV, including the antiretroviral treatment as prevention (TASP) approach, and of the neonatal syphilis elimination programme in the country. abstract_id: PUBMED:22337102 Declining syphilis trends in concurrence with HIV declines among pregnant women in Zambia: observations over 14 years of national surveillance. Background: Zambia has a serious HIV epidemic and syphilis infection remains prevalent in the adult population. We investigated syphilis trends using national antenatal clinic (ANC) sentinel surveillance data in Zambia and compared the findings with population-based data. Methods: The analyses are based on ANC data from 22 sentinel sites from five survey rounds conducted between 1994 and 2008. The data comprised information from interviews and syphilis and HIV test results. The syphilis estimates for 2002 and 2008 were compared with data from the Demographic and Health Surveys 2001/2002 and 2007, which are nationally representative data, and also included syphilis testing and HIV. Results: The overall syphilis prevalence dropped during the period 1994-2008 among both urban and rural women aged 15 to 49 years (9.8% to 2.8% and 7.5% to 3.2%, respectively). However, provincial variations were striking. The decline was steep irrespective of educational level, but among those with the highest level the decline started earlier and was steeper than among those with low education. The comparison with Zambia Demographic and Health Surveys 2001/2002 and 2007 findings also showed an overall reduction in syphilis prevalence among urban and rural men and women in the general population. Conclusions: The syphilis prevalence declined by 65% in urban and 59% in rural women.
Provincial variations need to be further studied to better guide specific sexually transmitted infection prevention and control programmes in different geographical settings. The national ANC-based HIV and syphilis surveillance system provided good proxies of syphilis prevalence and trends. abstract_id: PUBMED:23601556 Sexually transmitted infections among HIV-infected women in Thailand. Background: Data on sexually transmitted infections (STI) prevalence among HIV-infected women in Thailand are limited. We studied, among HIV-infected women, prevalence of STI symptoms and signs; prevalence and correlates of having any STI; prevalence and correlates of Chlamydia trachomatis (CT) or Neisseria gonorrhoeae (GC) among women without CT and/or GC symptoms or signs; and number of women without CT and/or GC symptoms or signs needed to screen (NNS) to detect one woman with CT and/or GC overall, among pregnant women, and among women ≤25 years. Methods: During October 2004-September 2006, HIV-infected women at 3 obstetrics and gynecology clinics were asked about sexual behaviors and STI symptoms, physically examined, and screened for chlamydia, gonorrhea, trichomoniasis, and syphilis. Multivariate logistic regression was used to identify correlates of infections. NNS was calculated using standard methods. Results: Among 1,124 women, 526 (47.0%) had STI symptoms or signs, 469 (41.7%) had CT and/or GC symptoms or signs, and 133 (11.8%) had an STI. Correlates of having an STI included pregnancy and having STI signs. Among 469 women and 655 women with vs. without CT and/or GC symptoms or signs, respectively, 43 (9.2%) vs. 31 (4.7%), 2 (0.4%) vs. 9 (1.4%), and 45 (9.6%) vs. 38 (5.8%) had CT, GC, or "CT or GC", respectively; correlates included receiving care at university hospitals and having sex with a casual partner within 3 months. NNS for women overall and women ≤25 years old were 18 (95% CI, 13-25) and 11 (95% CI, 6-23), respectively; and for pregnant and non-pregnant women, 8 (95% CI, 4-24) and 19 (95% CI, 14-27), respectively. Conclusions: STI prevalence among HIV-infected women, including CT and GC among those without symptoms or signs, was substantial. Screening for CT and GC, particularly for pregnant women, should be considered. abstract_id: PUBMED:35761327 Prevalence trends and risk factors associated with HIV, syphilis, and hepatitis C virus among pregnant women in Southwest China, 2009-2018. Objective: This study investigated prevalence trends and identified the associated factors of HIV, syphilis and hepatitis C virus (HCV) among pregnant women in the Guangxi Zhuang Autonomous Region (Guangxi), Southwest China. Methods: Serial cross-sectional surveys were performed annually among pregnant women in Guangxi from 2009 to 2018. Blood specimens were collected to test the prevalence of HIV, syphilis and HCV. Cochran-Armitage analysis was used to assess the trends of HIV, syphilis and HCV prevalence, as well as the sociodemographic and behavioural data. In this study, we used zero-inflated negative binomial (ZINB) regression models to identify factors associated with HIV, syphilis and HCV infection. Results: A total of 23,879 pregnant women were included in the study. The prevalence of HIV, syphilis and HCV was 0.24%, 0.85% and 0.19%, respectively. There was a decrease in HIV prevalence from 0.54% to 0.10%, a decrease in HCV prevalence from 0.40% to 0.05% and a decrease in syphilis prevalence from 1.53% to 0.30%. 
The findings based on the ZINB model revealed that pregnant women who had a history of STI had significantly increased risks of HIV (OR 6.63; 95% CI 1.33-32.90) and syphilis (OR 9.06; 95% CI 3.85-21.30) infection, while pregnant women who were unmarried/widowed/divorced were more likely to have HIV (OR 2.81; 95% CI 1.20-6.54) and HCV (OR 58.12; 95% CI, 3.14-1076.99) infection. Furthermore, pregnant women whose husband had a history of STI (OR 5.62; 95% CI 1.24-25.38) or drug use (OR 7.36; 95% CI 1.25-43.43) showed an increased risk of HIV infection. Conclusions: There was a relatively low prevalence of HIV, syphilis and HCV among pregnant women. Although decreasing trends in HIV, syphilis and HCV infections were observed, effort is needed to promote STI testing in both premarital medical check-ups and antenatal care, especially targeting couples with a history of STI or drug use. abstract_id: PUBMED:23970623 High syphilis but low HIV prevalence rates among pregnant women in New Caledonia. Sexually transmitted infections have been described as one of the major health problems in several countries of the Pacific Region. The objective of the study was to estimate the prevalence of pregnant women infected with HIV and/or syphilis in New Caledonia. HIV and syphilis test results were obtained from women attending antenatal clinics. From 2008 to 2011, 3353 pregnant women were tested with a mean prevalence of active syphilis found at 5.6/100,000. No pregnant women tested positive for HIV. Despite available resources and public health strategies similar to those existing in France, active syphilis prevalence is high in New Caledonia. Surprisingly, HIV seroprevalence remains far below the figures reported in mainland countries. However, social and economic changes as well as the looming referendum on independence scheduled in 2014 may have a potential negative impact on public health resources. The need for action to control syphilis and other curable sexually transmitted infections is pressing in order to prevent further spread of HIV in New Caledonia. abstract_id: PUBMED:8031913 Syphilis and HIV infection among displaced pregnant women in rural Mozambique. A cross-sectional study was conducted among displaced pregnant women in Mozambique to determine the prevalence and correlates of HIV infection and syphilis. Between September 1992 and February 1993, 1728 consecutive antenatal attendees of 14 rural clinics in Zambézia were interviewed, examined, and tested for HIV and syphilis antibodies. The seroprevalence of syphilis and HIV were 12.2% and 2.9%, respectively. Reported sexual abuse was frequent (8.4%) but sex for money was uncommon. A positive MHA-TP result was significantly associated with unmarried status, history of past STD, HIV infection, and current genital ulcers, vaginal discharge, or genital warts. Significant correlates of HIV seropositivity included anal intercourse, history of past STD, and syphilis. In summary, displaced pregnant women had a high prevalence of syphilis but a relatively low HIV seroprevalence suggesting recent introduction of HIV infection in this area or slow spread of the epidemic. A syphilis screening and treatment programme is warranted to prevent perinatal transmission and to reduce the incidence of chancres as a cofactor for HIV transmission. abstract_id: PUBMED:28893207 Hepatitis B virus and HIV co-infection among pregnant women in Rwanda. 
Background: Hepatitis B virus (HBV) affects people worldwide, but the local burden, especially in pregnant women and their newborn babies, is unknown. In Rwanda, HIV-infected individuals who are also infected with HBV are supposed to be initiated on ART immediately. HBV is easily transmitted from mother to child during delivery. We sought to estimate the prevalence of chronic HBV infection among pregnant women attending ante-natal clinics (ANC) in Rwanda and to determine factors associated with HBV and HIV co-infection. Methods: This study used a cross-sectional survey targeting pregnant women in sentinel sites. Pregnant women were tested for hepatitis B surface antigen (HBsAg) and HIV infection. A series of tests were done to ensure high sensitivity. Multivariable logistic regression was used to identify independent predictors of HBV-HIV co-infection among variables collected during ANC sentinel surveillance; these included age, marital status, education level, occupation, residence, pregnancy and syphilis infection. Results: The prevalence of HBsAg among 13,121 pregnant women was 3.7% (95% CI: 3.4-4.0%) and was similar across the different socio-demographic characteristics that were assessed. The proportion of HIV infection among HBsAg-positive pregnant women was 4.1% [95% CI: 2.5-6.3%]. The prevalence of HBV-HIV co-infection was higher among women aged 15-24 years compared to those aged 25-49 years [aOR = 6.9 (95% CI: 1.8-27.0)]. Women residing in urban areas appeared more likely to have HBV-HIV co-infection than women residing in rural areas [aOR = 4.3 (95% CI: 1.2-16.4)]. Women with more than two pregnancies also appeared more likely to have the co-infection than those with two or fewer [aOR = 6.9 (95% CI: 1.7-27.8)]. Women with a positive RPR test appeared to be at higher risk of HBV-HIV co-infection [aOR = 24.9 (95% CI: 5.0-122.9)]. Conclusion: Chronic HBV infection is a public health problem among pregnant women in Rwanda. Understanding that HBV-HIV co-infection may be more prominent in younger women from urban residences will help inform and strengthen HBV prevention and treatment programmes among HIV-infected pregnant women, which is crucial for this population. abstract_id: PUBMED:36992554 Prevalence of Syphilis in Pregnant Women Living With Human Immunodeficiency Virus (HIV) From South Africa Using a Molecular-Based Assay. Background: Syphilis is one of the most common sexually transmitted infections (STIs), and it remains a significant public health concern, particularly in low-resource settings including sub-Saharan Africa. There are limited data on the prevalence of syphilis among pregnant women living with HIV in South Africa. This study determined the prevalence of syphilis infection in pregnant women living with HIV by the polymerase chain reaction (PCR). Methods: This was a cross-sectional study that included 385 pregnant women living with HIV recruited from the antenatal clinic at the King Edward VIII Hospital in Durban, South Africa, between October 2020 and April 2021. Treponema pallidum was detected using the Applied Biosystems™ TaqMan® Assays from stored DNA samples extracted from vaginal swabs. Results: The prevalence of syphilis was 5.2% (20/385). The overall median (Q1-Q3) age of the women was 30.0 years (25.0-36.0). Of the women who tested positive for syphilis, 60.0% had reported STI symptoms (p = 0.030) and, of those, 65.0% did not perceive themselves at risk of contracting STIs (p = 0.019).
Women who reported having STI symptoms were more likely to test positive for syphilis when compared to women who reported not having any STI symptoms (OR: 2.810; 95% CI 1.119-7.052; p = 0.028). Women who perceived themselves as being at risk of contracting STIs were less likely to test positive for syphilis when compared to women who did not perceive themselves at risk of contracting STIs (OR: 0.328; 95% CI 0.128-0.842; p = 0.020). Conclusion: The study has indicated syphilis is prevalent among pregnant women living with HIV in Durban, South Africa however STI risk perception is low. Educational programs on STIs are essential among pregnant women attending antenatal care clinics in Durban. Answer: The declining syphilis prevalence among pregnant women in northern Botswana, as observed in Francistown from 1992 to 2003, could be considered an encouraging sign for the HIV epidemic (PUBMED:16326844). The study found that syphilis prevalence among pregnant women decreased from 12.4% in 1992 to 4.3% in 2003, despite the extremely high HIV prevalence in the same population. This decline in syphilis rates may be attributed to the nationwide implementation of syndromic management of sexually transmitted diseases (STDs), improved access to health care, and less risky sexual behavior. Evidence suggests that risky sexual behavior in Botswana has decreased during the HIV epidemic. The decrease in syphilis prevalence among pregnant women could be indicative of broader improvements in sexual health practices and awareness, which are also critical in controlling the spread of HIV. Since syphilis and HIV share common modes of transmission, a reduction in syphilis prevalence could reflect a reduction in behaviors that lead to HIV transmission as well. However, it is important to note that while the decline in syphilis prevalence is a positive development, the HIV prevalence remained stable at > or =40% since 1996 in the same population. This indicates that while efforts to reduce syphilis transmission have been successful, additional strategies may be needed to achieve similar reductions in HIV prevalence. In conclusion, the declining syphilis prevalence among pregnant women in northern Botswana is an encouraging sign that could suggest a positive impact on the HIV epidemic, but it does not necessarily translate to a direct reduction in HIV prevalence. Continued efforts in STD prevention, improved healthcare access, and promotion of safe sexual behaviors remain crucial in the fight against both syphilis and HIV.
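As a rough illustration of the magnitude of the Francistown decline discussed above (a back-of-the-envelope calculation, not a figure reported in the abstract), the relative reduction implied by the reported prevalences is

\[
\frac{12.4\% - 4.3\%}{12.4\%} \approx 0.65,
\]

i.e., about a 65% relative fall in antenatal syphilis prevalence between 1992 and 2003, comparable in size to the declines reported for urban (65%) and rural (59%) women in Zambia over 1994-2008 (PUBMED:22337102).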
Instruction: Can a tool that automates insulin titration be a key to diabetes management? Abstracts: abstract_id: PUBMED:22568777 Can a tool that automates insulin titration be a key to diabetes management? Background: Most patients who use insulin do not achieve optimal glycemic control and become susceptible to complications. Numerous clinical trials have shown that frequent insulin dosage titration is imperative to achieve glycemic control. Unfortunately, implementation of such a paradigm is often impractical. We hypothesized that the Diabetes Insulin Guidance System (DIGS™) (Hygieia, Inc., Ann Arbor, MI) software, which automatically advises patients on adjustment of insulin dosage, would provide safe and effective weekly insulin dosage adjustments. Subjects And Methods: In a feasibility study we enrolled patients with type 1 and type 2 diabetes, treated with a variety of insulin regimens and having suboptimal glycemic control. The 12-week intervention period followed a 4-week baseline run-in period. During the intervention, DIGS processed patients' glucose readings and provided insulin dosage adjustments on a weekly basis. If approved by the study team, the adjusted insulin dosage was communicated to the patients. Insulin formulations were not changed during the study. The primary outcome was the fraction of DIGS dosage adjustments approved by the study team, and the secondary outcome was improved glycemic control. Results: Forty-six patients were recruited, and eight withdrew. The DIGS software recommended 1,734 insulin dosage adjustments, of which 1,731 (99.83%) were approved. During the run-in period the weekly average glucose was stable at 174.2±36.7 mg/dL (9.7±2.0 mmol/L). During the following 12 weeks, DIGS dosage adjustments resulted in progressive improvement in average glucose to 163.3±35.1 mg/dL (9.1±1.9 mmol/L) (P<0.03). Mean glycosylated hemoglobin decreased from 8.4±0.8% to 7.9±0.9% (P<0.05). Concomitantly, the frequency of hypoglycemia decreased by 25.2%. Conclusions: The DIGS software provided patients with safe and effective weekly insulin dosage adjustments. Widespread implementation of DIGS may improve the outcome and reduce the cost of implementing effective insulin therapy. abstract_id: PUBMED:36602040 Evaluation of a Digital Health Tool for Titration of Basal Insulin in People With Type 2 Diabetes: Rationale and Design of a Randomized Controlled Trial. Background: Optimal insulin titration is essential in helping people with type 2 diabetes mellitus (T2DM) to achieve adequate glycemic control. Barriers of people with diabetes to implementation of titration include lack of self-efficiency and self-management skills, increased diabetes-related distress, low treatment satisfaction, poor well-being, as well as concerns about hypoglycemia and insulin overdose. My Dose Coach is a digital health tool for optimizing titration of basal insulin that combines a smartphone app for patients with T2DM and a Web portal for health care professionals. Methods/design: This is a prospective, open-label, multicenter, randomized controlled parallel study conducted in approximately 50 centers in Germany that are specialized in the treatment of diabetes. Patients in the intervention group will use the titration app and will be registered on the Web portal by their treating physician. Control group patients will continue their current basal insulin titration without using the app. The primary outcome is the mean change in HbA1c levels at the 12-week follow-up. 
The secondary outcome measures include patient-reported outcomes such as diabetes-related distress, self-management, empowerment, self-efficacy, treatment satisfaction, and psychological well-being as well as fasting blood glucose values. Conclusion: This digital health tool has been previously implemented in several independent pilot studies. The findings from this multicenter randomized controlled trial can provide further evidence supporting the effectiveness of this tool in patients with T2DM and serve as a basis for its clinical integration. Trial Registration: German Register for Clinical Studies-DRKS-ID: DRKS00024861. abstract_id: PUBMED:37954005 Use of smartphone application versus written titration charts for basal insulin titration in adults with type 2 diabetes and suboptimal glycaemic control (My Dose Coach): multicentre, open-label, parallel, randomised controlled trial. Background: The majority of people with type 2 diabetes who require insulin therapy use only basal insulin in combination with other anti-diabetic agents. We tested whether using a smartphone application to titrate insulin could improve glycaemic control in people with type 2 diabetes who use basal insulin. Methods: This was a 12-week, multicentre, open-label, parallel, randomised controlled trial conducted in 36 diabetes practices in Germany. Eligible participants had type 2 diabetes, a BMI ≥25.0 kg/m2, were on basal insulin therapy or were initiating basal insulin therapy, and had suboptimal glycaemic control (HbA1c >7.5%; 58.5 mmol/mol). Block randomisation with 1:1 allocation was performed centrally. Participants in the intervention group titrated their basal insulin dose using a smartphone application (My Dose Coach) for 12 weeks. Control group participants titrated their basal insulin dose according to a written titration chart. The primary outcome was the baseline-adjusted change in HbA1c at 12 weeks. The intention-to-treat analysis included all randomised participants. Results: Between 13 July 2021 and 21 March 2022, 251 study participants were randomly assigned (control group: n = 123; intervention group: n = 128), and 236 completed the follow-up phase (control group: n = 119; intervention group: n = 117). Regarding the HbA1c a model-based adjusted between-group difference of -0.31% (95% CI: 0.01%-0.69%; p = 0.0388) in favour of the intervention group was observed. There were 30 adverse events reported: 16 in the control group, 14 in the intervention group. Of these, 15 adverse events were serious. No event was considered to be related to the investigational device. Interpretation: Study results suggest that utilizing this digital health smartphone application for basal insulin titration may have resulted in a comparatively greater reduction in HbA1c levels among individuals with type 2 diabetes, as compared to basal insulin titration guided by a written titration schedule. No negative effect on safety outcomes was observed. Funding: Sanofi-Aventis Deutschland GmbH. abstract_id: PUBMED:31981212 A Practitioner's Toolkit for Insulin Motivation in Adults with Type 1 and Type 2 Diabetes Mellitus: Evidence-Based Recommendations from an International Expert Panel. Aim: To develop an evidence-based expert group opinion on the role of insulin motivation to overcome insulin distress during different stages of insulin therapy and to propose a practitioner's toolkit for insulin motivation in the management of diabetes mellitus (DM). 
Background: Insulin distress, an emotional response of the patient to the suggested use of insulin, acts as a major barrier to insulin therapy in the management of DM. Addressing patient-, physician- and drug-related factors is important to overcome insulin distress. Strengthening of communication between physicians and patients with diabetes and enhancing the patients' coping skills are prerequisites to create a sense of comfort with the use of insulin. Insulin motivation is key to achieving targeted goals in diabetes care. A group of endocrinologists came together at an international meeting held in India to develop tool kits that would aid a practitioner in implementing insulin motivation strategies at different stages of the journey through insulin therapy, including pre-initiation, initiation, titration and intensification. During the meeting, emphasis was placed on the challenges and limitations faced by both physicians and patients with diabetes during each stage of the journey through insulinization. Review Results: After review of evidence and discussions, the expert group provided recommendations on strategies for improved insulin acceptance, empowering behavior change in patients with DM, approaches for motivating patients to initiate and maintain insulin therapy and best practices for insulin motivation at the pre-initiation, initiation, titration and intensification stages of insulin therapy. Conclusions: In the management of DM, bringing in positive behavioral change by motivating the patient to improve treatment adherence helps overcome insulin distress and achieve treatment goals. abstract_id: PUBMED:35767186 Practical Guidance on Basal Insulin Initiation and Titration in Asia: A Delphi-Based Consensus. The global health burden of diabetes is on the rise and has affected more than half a billion people worldwide, particularly in Southeast Asia, North Africa, Africa, and the Western Pacific, Middle East, and South and Central America regions of the International Diabetes Federation (IDF). Despite many new treatments being available for the management of diabetes, glycemic control remains suboptimal in Asia, compared to the rest of the world. Delay in timely insulin initiation and inadequate titration of insulin are regarded to be some of the important reasons for inadequate glycemic control. Additionally, Asian populations have a distinct phenotype, including a younger age of onset and higher glycemic excursions, suggestive of a lower beta-cell function, as compared to non-Asians. Although there are multiple local and international guidelines on insulin initiation and titration, some of these guidelines can be complex. There is an unmet need for guideline recommendations on basal insulin initiation and titration to be simplified and customized for the Asian population with type 2 diabetes mellitus (T2DM). A unified approach would increase adoption of basal insulin initiation by primary care and family medicine physicians, which in turn would help reduce the inertia to insulin initiation. With this background, a consensus-seeking meeting was conducted with 14 experts from seven Asian countries to delineate appropriate practices for insulin initiation and titration in the Asian context. The key objective was to propose a simple insulin titration algorithm, specific for the Asian population, to improve glycemic control and optimize therapeutic outcomes of people with T2DM on basal insulin. 
Following a detailed review of literature and current guidelines, and potential barriers to insulin initiation and titration, the experts proposed a simplified insulin titration algorithm based on both physician- and patient-led components. The consensus recommendations of the experts related to basal insulin initiation and titration have been summarized in this article, along with the proposed titration algorithm for optimizing glycemic control in the Asian population with T2DM. abstract_id: PUBMED:30900198 Appropriate Titration of Basal Insulin in Type 2 Diabetes and the Potential Role of the Pharmacist. A substantial proportion of patients with suboptimal control of their type 2 diabetes experience delays in treatment intensification. Additionally, patients often experience overuse of basal insulin, commonly referred to as "over-basalization," whereby basal insulin continues to be uptitrated in order to meet targets, when addition of a mealtime bolus insulin dose may be a more appropriate option. In order to overcome these challenges, there is a need to develop the capacity of allied healthcare professionals to provide appropriate support to these patients, such as during initiation or titration of basal insulin. Pharmacists play an integral role in healthcare delivery, with patients seeing their pharmacist, on average, seven times more often than their primary care physician. This places pharmacists in a unique position to provide diabetes education and care, which may help patients avoid clinical inertia. Nevertheless, the management of the disease with basal insulin is becoming increasingly complex, with growing numbers of treatment options (such as recent second-generation longer-acting basal insulin formulations) and frequently updated titration algorithms. The two most common titration schedules specify either increasing doses by a set amount every 2-3 days or a treat-to-target strategy. Neither schedule has been shown to be superior, and the decision to use one or the other should be based on a discussion between the clinician and patient after assessment of mental and physical acumen, comfort of both parties, and follow-up plans. This review article discusses basal insulin therapy options and titration algorithms from the unique perspective of the pharmacist in order to help ensure that optimal antidiabetes therapy is initiated, appropriately titrated, and maintained.Funding: Sanofi US, Inc. abstract_id: PUBMED:33460017 Comparison of Patient-Led and Physician-Led Insulin Titration in Japanese Type 2 Diabetes Mellitus Patients Based on Treatment Distress, Satisfaction, and Self-Efficacy: The COMMIT-Patient Study. Introduction: In Japan, patient-led insulin titration is rare in type 2 diabetes mellitus (T2DM) patients. Few studies have compared the effects of patient-led versus physician-led insulin titration on patient-reported outcomes in Japanese T2DM patients. This study aimed to compare the effects of patient-led and physician-led insulin titration in Japanese insulin-naïve T2DM patients on safety, glycemic control, and patient-reported outcomes (emotional distress, treatment satisfaction, and self-efficacy). Methods: Ultimately, 125 insulin-naïve Japanese T2DM patients were randomly assigned to either a patient-led insulin self-titration group or a physician-led insulin titration group and monitored for 24 weeks. The primary endpoint was a change in emotional distress as measured using the Problem Areas in Diabetes scale (PAID). 
Secondary endpoints included treatment satisfaction, as measured with the Diabetes Treatment Satisfaction Questionnaire (DTSQ), self-efficacy as measured using the Insulin Therapy Self-Efficacy Scale (ITSS), glycated hemoglobin (HbA1c) levels, fasting plasma glucose levels, body weight, insulin daily dose, and frequency of hypoglycemia. Results: There was no significant difference between the groups in PAID and DTSQ scores. The results for the primary endpoint should be interpreted taking into account that the sample size for the power calculation was not reached. ITSS scores were significantly higher in the patient-led self-titration group. HbA1c and fasting plasma glucose levels were significantly decreased in both groups, but the decrease was significantly larger in the patient-led self-titration group. Although the insulin daily dose was significantly higher in the patient-led self-titration group, severe hypoglycemia did not occur in either group, and the frequency of hypoglycemia was similar in both groups. Conclusion: Self-measurement of blood glucose and self-titration of insulin enhanced the patients' self-efficacy without compromising their emotional distress or treatment satisfaction. Also, insulin self-titration was found to be safe and effective; it resulted in better glycemic control without severe hypoglycemia. Trial Registration: University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR) (registration number: UMIN000020316). abstract_id: PUBMED:33586058 Similar glycaemic control and risk of hypoglycaemia with patient- versus physician-managed titration of insulin glargine 300 U/mL across subgroups of patients with T2DM: a post hoc analysis of ITAS. Aims: The Italian Titration Approach Study (ITAS) demonstrated comparable HbA1c reductions and similarly low hypoglycaemia risk at 6 months in poorly controlled, insulin-naïve adults with T2DM who initiated self- or physician-titrated insulin glargine 300 U/mL (Gla-300) in the absence of sulphonylurea/glinide. The association of patient characteristics with glycaemic and hypoglycaemic outcomes was assessed. Methods: This post hoc analysis investigated whether baseline patient characteristics and previous antihyperglycaemic drugs were associated with HbA1c change and hypoglycaemia risk in patient- versus physician-managed Gla-300 titration. Results: HbA1c change, incidence of hypoglycaemia (any type) and nocturnal rates were comparable between patient- and physician-managed arms in all subgroups. Hypoglycaemia rates across subgroups (0.03 to 3.52 events per patient-year) were generally as low as observed in the full ITAS population. Small increases in rates of 00:00-pre-breakfast and anytime hypoglycaemia were observed in the ≤ 10-year diabetes duration subgroup in the patient- versus physician-managed arm (heterogeneity of effect; p < 0.05). Conclusions: Comparably fair glycaemic control and similarly low hypoglycaemia risk were achieved in almost all patient subgroups with patient- versus physician-led Gla-300 titration. These results reinforce the efficacy and safety of Gla-300 self-titration across a range of phenotypes of insulin-naïve people with T2DM. Clinical Trial Registration: EudraCT 2015-001167-39. abstract_id: PUBMED:36966543 Studies are needed to support optimal insulin dose titration in gestational diabetes mellitus: A systematic review. Background And Aims: We aimed to summarise the existing literature on insulin dose titration in gestational diabetes.
Methods: Databases: Medline, EMBASE, CENTRAL and CINAHL were systematically searched for trials and observational studies comparing insulin titration strategies in gestational diabetes. Results: No trials comparing insulin dose titration strategies were identified. Only one small (n = 111) observational study was included. In this study, patient-led daily basal insulin titration was associated with higher insulin doses, tighter glycaemic control, and lower birthweight, vs weekly clinician-led titration. Conclusions: There is a paucity of evidence to support optimal insulin titration in gestational diabetes. Randomized trials are required. abstract_id: PUBMED:33790599 Efficacy and Safety of a Decision Support Intervention for Basal Insulin Self-Titration Assisted by the Nurse in Outpatients with T2DM: A Randomized Controlled Trial. Objective: The main aim of this study was to evaluate a combined fasting blood glucose based dosage self-titration setting and decision supported telephone coaching intervention on glycemic control and diabetes self-management skills, compared to the conventional care. Methods: A 12-week, single-blinded, randomized controlled trial was conducted on adults with type 2 diabetes (T2DM) primarily treated with basal insulin. After randomization, the intervention group (IG, n = 426) received a basal insulin self-titration decision support intervention administered by the Diabetes Specialty Nurses while the control group (CG, n = 423) received conventional care for 12 weeks, both included five telephone interviews. The primary efficacy endpoint was the effect of intervention on glycemic control, measured as the change in glycated hemoglobin (HbA1c) from baseline to Week 12 (after intervention) compared to the control group. Other endpoints included comparisons of the effects of intervention on fasting plasma glucose (FPG), postprandial plasma glucose (PPG), body weight, Michigan diabetes knowledge test (MDKT), diabetes empowerment scale-short Form (DES-DSF), and summary of diabetes self-care activities (SDSCA). Changes in the primary and secondary outcomes were compared using the t-test for continuous variables with a normal distribution and χ2-test for categorical variables. Results: The IG showed more improvements on mean HbA1c, compared to the CG (-2.8% vs -1.8%), so did the FPG, PPG, MDKT, DES-DSF and SDSCA (all P<0.01) after the 12-week follow up. Though the final mean insulin dose in the IG was higher than the CG at the end of the study (0.32 U/kg vs 0.28 U/kg), the changes of body weight were similar between the two groups (0.46kg vs 0.40kg, P=0.246), and the proportion of patients with hypoglycemia events during the whole trial were similar (20.65% vs 17.73%, P=0.279). Conclusion: Decision supporting of basal insulin glargine self-titration assisted by Diabetes Specialty Nurses is effective and safe in patients with T2DM. Decision supported telephone coaching intervention offers ongoing encouragement, guidance, and determination of relevant sources of decisional conflict, facilitating adjusting the insulin dose. Answer: Yes, a tool that automates insulin titration can be a key to diabetes management. The Diabetes Insulin Guidance System (DIGS™) software, which automatically advises patients on adjustment of insulin dosage, was shown to provide safe and effective weekly insulin dosage adjustments. 
In a feasibility study, 99.83% of the DIGS software's recommended insulin dosage adjustments were approved by the study team, and patients experienced progressive improvement in average glucose levels and a decrease in mean glycosylated hemoglobin, with a concomitant decrease in the frequency of hypoglycemia (PUBMED:22568777). Additionally, the My Dose Coach smartphone application for titrating basal insulin in people with type 2 diabetes was evaluated in a randomized controlled trial. The study suggested that utilizing this digital health smartphone application for basal insulin titration may result in a greater reduction in HbA1c levels compared to titration guided by a written titration schedule, without a negative effect on safety outcomes (PUBMED:37954005). Moreover, a decision support intervention for basal insulin self-titration assisted by nurses in outpatients with type 2 diabetes demonstrated that such an intervention is effective and safe, leading to improvements in glycemic control and diabetes self-management skills (PUBMED:33790599). These findings indicate that automated tools for insulin titration can play a significant role in diabetes management by improving glycemic control and potentially reducing the burden on patients and healthcare providers.
Instruction: Do 0-10 numeric rating scores translate into clinically meaningful pain measures for children? Abstracts: abstract_id: PUBMED:21127278 Do 0-10 numeric rating scores translate into clinically meaningful pain measures for children? Background: Self-reported pain scores are used widely in clinical and research settings, yet little is known about their interpretability in children. In this prospective, observational study we evaluated the relationship between 0 to 10 numerical rating scale (NRS) pain scores and other self-reported, clinically meaningful outcomes, including perceived need for medicine (PNM), pain relief (PR), and perceived satisfaction (PS) with treatment in children postoperatively. Methods: This study included children ages 7 to 16 years undergoing surgery associated with postoperative pain. One to 4 observations were recorded in each child within the first 24 hours postoperatively. At each assessment, children rated their pain with the NRS, stated their PNM, and rated their satisfaction with pain management. Assessments were repeated within 1 to 2 hours, and children additionally rated their PR as the same, better, or worse in comparison with the earlier assessment. Receiver operator characteristic curves were developed to examine potential NRS cut-points for PNM and PS, and the minimum clinically significant difference (MCSD) in pain score associated with PR was calculated. Results: Three hundred ninety-seven observations (including 189 pairs) were recorded in 113 children. NRS scores associated with PNM were significantly higher than "no need" (median 6 vs. 3; P < 0.001). NRS scores >4 had good sensitivity (0.81) and specificity (0.70) to discriminate PNM, but with a large number of false positives and negatives (e.g., 42% of children with scores >4 did not need analgesia). The MCSD in NRS scores was -1 (95% confidence interval [CI] -0.5 to 1) or +1 (CI 0.5 to 2.7) in relation to feel "a little better" or "worse," respectively (P < 0.001 vs. the same). NRS scores >6 had a sensitivity of 0.82 and specificity of 0.76 in discriminating dissatisfaction with treatment, yet 46% and 24% of children with scores >6, respectively, were somewhat to very satisfied with their analgesia. Conclusions: This study provides important information regarding the clinical interpretation of NRS pain scores in children. Data further support the NRS as a valid measure of pain intensity in relation to the child's PNM, PR, and PS in the acute postoperative setting. However, the variability in scores in relation to other clinically meaningful outcomes suggests that application of cut-points for individual treatment decisions is inappropriate. abstract_id: PUBMED:33749760 Associations of Pain Numeric Rating Scale Scores Collected during Usual Care with Research Administered Patient Reported Pain Outcomes. Objective: The purpose of this study is to examine the extent to which numeric rating scale (NRS) scores collected during usual care are associated with more robust and validated measures of pain, disability, mental health, and health-related quality of life (HRQOL). Design: We conducted a secondary analysis of data from a prospective cohort study. Subjects: We included 186 patients with musculoskeletal pain who were prescribed long-term opioid therapy. Setting: VA Portland Health Care System outpatient clinic. Methods: All patients had been screened with the 0-10 NRS during routine outpatient visits. 
They also completed research visits that assessed pain, mental health and HRQOL every 6 months for 2 years. Accounting for nonindependence of repeated measures data, we examined associations of NRS data obtained from the medical record with scores on standardized measures of pain and its related outcomes. Results: NRS scores obtained in clinical practice were moderately associated with pain intensity scores (B's = 0.53-0.59) and modestly associated with pain disability scores (B's = 0.33-0.36) obtained by researchers. Associations between pain NRS scores and validated measures of depression, anxiety, and health related HRQOL were low (B's = 0.09-0.26, with the preponderance of B's < .20). Conclusions: Standardized assessments of pain during usual care are moderately associated with research-administered measures of pain intensity and would be improved from the inclusion of more robust measures of pain-related function, mental health, and HRQOL. abstract_id: PUBMED:29673262 Psychometric properties of the Numeric Pain Rating Scale and Neck Disability Index in patients with cervicogenic headache. Background: Self-reported disability and pain intensity are commonly used outcomes in patients with cervicogenic headaches. However, there is a paucity of psychometric evidence to support the use of these self-report outcomes for individuals treated with cervicogenic headaches. Therefore, it is unknown if these measures are reliable, responsive, or result in meaningful clinically important changes in this patient population. Methods: A secondary analysis of a randomized clinical trial (n = 110) examining the effects of spinal manipulative therapy with and without exercise in patients with cervicogenic headaches. Reliability, construct validity, responsiveness and thresholds for minimal detectable change and clinically important difference values were calculated for the Neck Disability Index and Numeric Pain Rating Scale. Results: The Neck Disability Index exhibited excellent reliability (ICC = 0.92; [95 % CI: 0.46-0.97]), while the Numeric Pain Rating Scale exhibited moderate reliability (ICC = 0.72; [95 % CI: 0.08-0.90]) in the short term. Both instruments also exhibited adequate responsiveness (area under the curve; range = 0.78-0.93) and construct validity ( p < 0.001) in this headache population. Conclusions: Both instruments seem well suited as short-term self-report measures for patients with cervicogenic headaches. Clinicians and researchers should expect at least a 2.5-point reduction on the numeric pain rating scale and a 5.5-point reduction on the neck disability index after 4 weeks of intervention to be considered clinically meaningful. abstract_id: PUBMED:26733318 The psychometric properties of an Arabic numeric pain rating scale for measuring osteoarthritis knee pain. Purpose: The aims of this study were to translate the numeric rating scale (NRS) into Arabic and to evaluate the test-retest reliability and convergent validity of an Arabic Numeric Pain Rating Scale (ANPRS) for measuring pain in osteoarthritis (OA) of the knee. Methods: The English version of the NRS was translated into Arabic as per the translation process guidelines for patient-rated outcome scales. One hundred twenty-one consecutive patients with OA of the knee who had experienced pain for more than 6 months were asked to report their pain levels on the ANPRS, visual analogue scale (VAS), and verbal rating scale (VRS). A second assessment was performed 48 h after the first to assess test-retest reliability. 
The test-retest reliability was calculated using the intraclass correlation coefficient, ICC(2,1). The convergent validity was assessed using the Spearman rank correlation coefficient. In addition, the minimum detectable change (MDC) and standard error of measurement (SEM) were also assessed. Results: The repeatability of ANPRS was good to excellent (ICC 0.89). The SEM and MDC were 0.71 and 1.96, respectively. Significant correlations were found with the VAS and VRS scores (p < 0.01). Conclusions: The Arabic numeric pain rating scale is a valid and reliable scale for measuring pain levels in OA of the knee. Implications for Rehabilitation: The Arabic Numeric Pain Rating Scale (ANPRS) is a reliable and valid instrument for measuring pain in osteoarthritis (OA) of the knee, with psychometric properties in agreement with other widely used scales. The ANPRS is well correlated with the VAS and NRS scores in patients with OA of the knee. The ANPRS appears to measure pain intensity similar to the VAS, NRS, and VRS and may provide additional advantages to Arab populations, as Arabic numbers are easily understood by this population. abstract_id: PUBMED:35005354 Optimizing Numeric Pain Rating Scale administration for children: The effects of verbal anchor phrases. Background: The 0-10 Verbal Numeric Rating Scale (VNRS) is commonly used to obtain self-reports of pain intensity in school-age children, but there is no standard verbal descriptor to define the most severe pain. Aims: The aim of this study was to determine how verbal anchor phrases defining 10/10 on the VNRS are associated with children's reports of pain. Methods and Results: Study 1. Children (N = 131, age 6-11) rated hypothetical pain vignettes using six anchor phrases; scores were compared with criterion ratings. Though expected effects of age and vignette were found, no effects were found for variations in anchors. Study 2. Pediatric nurses (N = 102) were asked how they would instruct a child to use the VNRS. Common themes of "the worst hurt you could ever imagine" and "the worst hurt you have ever had" to define 10/10 were identified. Study 3. Children's hospital patients (N = 27, age 8-14) rated pain from a routine injection using four versions of the VNRS. Differences in ratings ranging from one to seven points on the scale occurred in the scores of 70% of children when the top anchor phrase was changed. Common themes in children's descriptions of 10/10 pain intensity were "hurts really bad" and "hurts very much." Discussion: This research supports attention to the details of instructions that health care professionals use when administering the VNRS. Use of the anchor phrase "the worst hurt you could ever imagine" is recommended for English-speaking, school-age children. Details of administration of the VNRS should be standardized and documented in research reports and in clinical use. abstract_id: PUBMED:31330252 PROMIS 4-item measures and numeric rating scales efficiently assess SPADE symptoms compared with legacy measures. Objective: The 5 SPADE (sleep, pain, anxiety, depression, and low energy/fatigue) symptoms are among the most prevalent and disabling symptoms in clinical practice. This study evaluates the minimally important difference (MID) of Patient-Reported Outcomes Measurement Information System (PROMIS) measures and their correspondence with other brief measures to assess SPADE symptoms.
Study Design And Setting: Three hundred primary care patients completed a 4-item PROMIS scale, a numeric rating scale (NRS), and a non-PROMIS legacy scale for each of the 5 SPADE symptoms. Optimal NRS cutpoints were examined, and cross-walk units for converting legacy measure scores to PROMIS scores were determined. PROMIS scores corresponding to standard deviation (SD) and standard error of measurement (SEM) changes in legacy scores were used to estimate MID. Results: At an NRS ≥5, the mean PROMIS T-score exceeded 55 (the operational threshold for a clinically meaningful symptom) for each SPADE symptom. Correlations were high (0.70-0.86) between each PROMIS scale and its corresponding non-PROMIS legacy scale. Changes in non-PROMIS legacy scale scores of 0.35 SD and 1 SEM corresponded to mean PROMIS T-scores of 2.92 and 3.05 across the 5 SPADE symptoms, with changes in 0.2 and 0.5 SD corresponding to mean PROMIS T-scores of 1.67 and 4.16. Conclusion: A 2-step screening process for SPADE symptoms might use single-item NRS scores, proceeding to PROMIS scales for NRS scores ≥5. A PROMIS T-score change of three points represents a reasonable MID estimate, with two to four points approximating lower and upper bounds. abstract_id: PUBMED:31403125 Correlations among algometry, the visual analogue scale, and the numeric rating scale to assess chronic pelvic pain in women. Objective: To investigate the correlation between the numerical rating scale, visual analogue scale, and pressure threshold by algometry in women with chronic pelvic pain. Study Design: This was a cross-sectional study. We included 47 patients with chronic pelvic pain. All subjects underwent a pain assessment that used three different methods and were divided according to the cause of pain (endometriosis versus non-endometriosis). Moreover, we assessed the agreement between the scales (visual, analogue and algometry) using the intraclass correlation coefficient (ICC). Results: The ICC for the numeric rating scale and the visual analogue scale regarding pain (0.992), dysmenorrhea (1.00) and dyspareunia (0.996) were strong. The agreement between the scales was excellent (p ≤0.01). The correlation between algometry and the scales showed a moderate and inverse association, and this correlation was statistically significant: as the scores on the numeric rating scale and the visual analogue scale regarding dyspareunia increased, the algometry thresholds decreased. Conclusions: The assessment of women with chronic pelvic pain should combine pressure algometry and the numeric rating scale or the visual analogue scale, because of their inverse correlations and satisfactory reliability and sensitivity, to make pain assessment less subjective and more accurate. abstract_id: PUBMED:33547939 Validity and reliability of a novel numeric rating scale to measure skin-pain in adults with atopic dermatitis. Little is known about the measurement properties of numeric rating scales (NRS) for pain in AD. We evaluated a novel NRS for skin-pain and existing NRS for average overall-pain in adults with AD. Self-administered questionnaires and skin-examination were performed in 463 AD patients (age 18-97 years) in a dermatology practice setting. Numeric rating scales skin-pain and average overall-pain had moderate correlations with each other, and multiple clinician-reported and patient-reported AD severity outcomes (Spearman correlations, P < 0.0001). 
There were significant and stepwise increases of NRS skin-pain and average overall-pain scores with patient-reported global severity (Wilcoxon rank-sum test, P < 0.0001). Floor-effects were observed for NRS skin-pain and average overall-pain. Changes from baseline in NRS skin-pain and average overall-pain showed weak-moderate correlations with changes of POEM, vIGA-AD*BSA, SCORAD, and DLQI. Using an anchoring approach, the optimal interpretability band for NRS skin-pain was clear = 0, mild = 1-3, moderate = 5-6, severe = 7-9, and very severe = 10 (weighted kappa = 0.4923). The thresholds for minimally clinically important difference for NRS skin-pain ranged from 2.2 to 2.9. NRS skin-pain and average overall-pain showed moderate-good reliability. Numeric rating scales skin-pain and average overall-pain had sufficient validity, reliability, responsiveness, and interpretability in adults with AD, and were inherently feasible as single-items for use in clinical trials and practice. abstract_id: PUBMED:29530726 Minimal clinically important change in the Toronto Western Spasmodic Torticollis Rating Scale. Objectives: To characterize the minimal clinically important change (MCIC) after treatment in cervical dystonia patients using the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS). Methods: Changes in the TWSTRS from an observational study of abobotulinumtoxinA in the routine management of cervical dystonia (NCT01314365) were analyzed using the Patient Global Impression of Change (PGIC) as anchor. Results: For the overall population (N = 304, baseline TWSTRS-Total score 43.4 ± 19.4), the MCIC for the TWSTRS Total score was -11.9 (95%CI: -13.9, -10.0; p < 0.0001). However, thresholds ranged from -3.2 to -18.0 dependent on baseline severity. TWSTRS-Total scores improved linearly by 3 points for every one-point PGIC increase. There was similar linearity between the graded PGIC categories and TWSTRS subscale scores (severity, disability, and pain). Conclusions: A 3-point change is the minimal clinically important change after treatment using TWSTRS as endpoint with higher cutoffs for greater baseline disease severity. For an average trial population (TWSTRS-total: 40-45), a 12-point decrease is clinically meaningful. abstract_id: PUBMED:30123858 Psychometric properties of the brief pain inventory modified for proxy report of pain interference in children with cerebral palsy with and without cognitive impairment. Introduction: Cerebral palsy (CP) is the most common cause of physical disability in children and is often associated with secondary musculoskeletal pain. Cerebral palsy is a heterogeneous condition with wide variability in motor and cognitive capacities. Although pain scales exist, there remains a need for a validated chronic pain assessment tool with high clinical utility for use across such a heterogeneous patient population with and without cognitive impairment. Objectives: The purpose of this study was an initial assessment of several psychometric properties of the 12-item modified brief pain inventory (BPI) pain interference subscale as a proxy-report tool in a heterogeneous sample of children with CP with and without cognitive impairment. Methods: Participants (n = 167; 47% male; mean age = 9.1 years) had pain assessments completed through caregiver report in clinic before spasticity treatment (for a subgroup, the modified BPI was repeated after procedure). 
To measure concurrent validity, we obtained pain intensity ratings (Numeric Rating Scale of pain) and pain intensity, duration, and frequency scores (Dalhousie Pain Interview). Results: Modified BPI scores were internally consistent (Cronbach α = 0.96) and correlated significantly with Numeric Rating Scale intensity scores (rs = 0.67, P < 0.001), Dalhousie Pain Interview pain intensity (rs = 0.65, P < 0.001), pain frequency (rs = 0.56, P = 0.02), and pain duration scores (rs = 0.42, P = 0.006). Modified BPI scores also significantly decreased after spasticity treatment (pretest [scored 0-10; 3.27 ± 2.84], posttest [2.27 ± 2.68]; t (26) = 2.14, 95% confidence interval [0.04-1.95], P = 0.04). Conclusion: Overall, the modified BPI produced scores with strong internal consistency and that had concurrent validity as a proxy-report tool for children with CP. Answer: The use of 0-10 numeric rating scores (NRS) does translate into clinically meaningful pain measures for children, but with some limitations and considerations. A study on children postoperatively found that NRS scores were valid measures of pain intensity in relation to the child's perceived need for medicine (PNM), pain relief (PR), and perceived satisfaction (PS) with treatment. However, the study also highlighted that there was variability in scores in relation to other clinically meaningful outcomes, suggesting that applying cut-points for individual treatment decisions may be inappropriate (PUBMED:21127278). Another study focused on optimizing the administration of the Numeric Pain Rating Scale for children, recommending the use of the anchor phrase "the worst hurt you could ever imagine" for English-speaking, school-age children. This study emphasized the importance of standardizing and documenting the details of administration of the NRS in both research and clinical settings (PUBMED:35005354). In terms of psychometric properties, the Numeric Pain Rating Scale exhibited moderate reliability in the short term for patients with cervicogenic headaches, and both the Numeric Pain Rating Scale and the Neck Disability Index were considered well-suited as short-term self-report measures for this patient population (PUBMED:29673262). Furthermore, the Numeric Pain Rating Scale was found to be a valid and reliable scale for measuring pain levels in osteoarthritis of the knee when translated into Arabic, suggesting its utility across different languages and conditions (PUBMED:26733318). In summary, 0-10 numeric rating scores can be meaningful in assessing pain in children, but their interpretation should be contextualized within the broader clinical picture, and the administration of the scale should be consistent and well-documented. Clinicians should be aware of the potential for variability and the need to consider additional factors beyond the numeric score when making treatment decisions.
Instruction: Are lean smokers at increased risk of lung cancer? Abstracts: abstract_id: PUBMED:30925812 The Effects of Dietary Supplements on Asthma and Lung Cancer Risk in Smokers and Non-Smokers: A Review of the Literature. Smoking is one of the major global causes of death. Cigarette smoke and secondhand (passive) smoke have been causally related to asthma and lung cancer. Asthma is a potential risk factor for developing lung cancer in both smokers and non-smokers. Prospective studies and randomized controlled trials (RCTs) of dietary supplements and lung cancer risk in adult smokers and non-smokers have yielded inconsistent results. A few prospective studies have shown that long-term use of high doses of some supplements, such as retinol, β-carotene, B vitamins, and vitamin E, increases lung cancer risk in current and former smokers. Limited evidence from RCTs suggests that vitamin D supplementation is effective in improving lung function and reducing asthma risk in current/former smokers. The relationship between dietary supplements and lung cancer risk has never before been examined in asthmatic smokers and non-smokers. This short review aims to examine the evidence from existing studies for the effects of dietary supplements on asthma/lung cancer risk and mortality in smokers and non-smokers. abstract_id: PUBMED:37410540 Lung cancer risk score for ever and never smokers in China. Background: Most lung cancer risk prediction models were developed in European and North-American cohorts of smokers aged ≥ 55 years, while less is known about risk profiles in Asia, especially for never smokers or individuals aged < 50 years. Hence, we aimed to develop and validate a lung cancer risk estimate tool for ever and never smokers across a wide age range. Methods: Based on the China Kadoorie Biobank cohort, we first systematically selected the predictors and explored the nonlinear association of predictors with lung cancer risk using restricted cubic splines. Then, we separately developed risk prediction models to construct a lung cancer risk score (LCRS) in 159,715 ever smokers and 336,526 never smokers. The LCRS was further validated in an independent cohort over a median follow-up of 13.6 years, consisting of 14,153 never smokers and 5,890 ever smokers. Results: A total of 13 and 9 routinely available predictors were identified for ever and never smokers, respectively. Of these predictors, cigarettes per day and quit years showed nonlinear associations with lung cancer risk (P for non-linearity < 0.001). The curve of lung cancer incidence increased rapidly above 20 cigarettes per day and then was relatively flat until approximately 30 cigarettes per day. We also observed that lung cancer risk declined sharply within the first 5 years of quitting, and then continued to decrease but at a slower rate in the subsequent years. The 6-year areas under the receiver operating characteristic curve for the ever and never smokers' models were 0.778 and 0.733, respectively, in the derivation cohort, and 0.774 and 0.759 in the validation cohort. In the validation cohort, the 10-year cumulative incidence of lung cancer was 0.39% and 2.57% for ever smokers with low (< 166.2) and intermediate-high LCRS (≥ 166.2), respectively. Never smokers with a high LCRS (≥ 21.2) had a higher 10-year cumulative incidence rate than those with a low LCRS (< 21.2; 1.05% vs. 0.22%). An online risk evaluation tool (LCKEY; http://ccra.njmu.edu.cn/lckey/web) was developed to facilitate the use of LCRS.
Conclusions: The LCRS can be an effective risk assessment tool designed for ever and never smokers aged 30 to 80 years. abstract_id: PUBMED:36554027 Dietary Antioxidants and Lung Cancer Risk in Smokers and Non-Smokers. Smoking is considered a major risk factor in the development of lung diseases worldwide. Active smoking and secondhand (passive) smoke (SHS) are related to lung cancer (LC) risk. Oxidative stress (OS) and/or lipid peroxidation (LP) induced by cigarette smoke (CS) are found to be involved in the pathogenesis of LC. Meta-analyses and other case-control/prospective cohort studies are inconclusive and have yielded inconsistent results concerning the protective role of dietary vitamins C and E, retinol, and iron intake against LC risk in smokers and/or non-smokers. Furthermore, the role of vitamins and minerals as antioxidants with the potential to protect LC cells against CS-induced OS in smokers and non-smokers has not been fully elucidated. Thus, this review aims to summarize the available evidence reporting the relationships between dietary antioxidant intake and LC risk in smokers and non-smokers that may be used to provide suggestions for future research. abstract_id: PUBMED:31201228 ATM rs189037 significantly increases the risk of cancer in non-smokers rather than smokers: an updated meta-analysis. Rs189037 (G>A) is an important functional variant within the ataxia telangiectasia mutated (ATM) gene, which might affect ATM's expression and involvement in several human cancers. Increasing evidence reveals that smoking-related cancers have distinct molecular characteristics from non-smoking cancers. Until now, the role of ATM rs189037 in cancer risk stratified by smoking status has remained unclear. To evaluate the association between ATM rs189037 and cancer risk based on smoking status, we performed this meta-analysis by a comprehensive literature search via the PubMed, Embase, Web of Science and CNKI databases, updated till January 2019. Multivariate-adjusted odds ratios (ORs) and 95% confidence intervals (CIs) were extracted from eligible studies if available, to assess the relationship strengths. A total of seven eligible studies were included, comprising 4294 cancer patients (smokers: 1744 [40.6%]) and 4259 controls (smokers: 1418 [33.3%]). Results indicated a significant association of ATM rs189037 with cancer risk. In non-smokers, compared with the GG genotype, the AA genotype conferred a 1.40-fold risk of overall cancer (OR = 1.40, 95% CI = 1.15-1.70, P heterogeneity = 0.433, I² = 0.0%). Subgroup analysis in lung cancer (LC) also exhibited a significant result (OR = 1.41, 95% CI = 1.15-1.73, P heterogeneity = 0.306, I² = 17.0%) only in non-smokers. However, the association was not observed in smokers, either for overall cancer or for LC. Our findings highlight that ATM rs189037 significantly increases cancer susceptibility in non-smokers, rather than in smokers. The association is prominent in LC. abstract_id: PUBMED:36741370 The association of BTLA gene polymorphisms with non-small lung cancer risk in smokers and never-smokers. Introduction: Lung cancer is the predominant cause of death among cancer patients and non-small cell lung cancer (NSCLC) is the most common type. Cigarette smoking is the prevailing risk factor for NSCLC; nevertheless, this cancer is also diagnosed in never-smokers. B and T lymphocyte attenuator (BTLA) belongs to the immunological checkpoints, which are key regulatory molecules of the immune response.
A growing body of evidence highlights the important role of BTLA in cancer. In our previous studies, we showed a significant association between BTLA gene variants and susceptibility to chronic lymphocytic leukemia and renal cell carcinoma in the Polish population. The present study aimed to analyze the impact of BTLA polymorphic variants on the susceptibility to NSCLC and NSCLC patients' overall survival (OS). Methods: Using TaqMan probes we genotyped seven BTLA single-nucleotide polymorphisms (SNPs): rs2705511, rs1982809, rs9288952, rs9288953, rs1844089, rs11921669 and rs2633582 with the use of the ViiA 7 Real-Time PCR System. Results: We found that rs1982809 within BTLA is associated with NSCLC risk, where carriers of the rs1982809G allele (AG+GG genotypes) were more frequent in patients compared to controls. In subgroup analyses, we also noticed that rs1982809G carriers were significantly overrepresented in never-smokers, but not in smokers, compared to controls. Additionally, the global distribution of the haplotypes differed between the never-smokers and smokers, where haplotypes A G G C A, C G A C G, and C G A T G were more frequent in never-smoking patients. Furthermore, the presence of the rs1982809G allele (AG+GG genotypes) as well as the presence of the rs9288953T allele (CT+TT genotypes) increased NSCLC risk in female patients. After stratification by histological type, we noticed that rs1982809G and rs2705511C carriers were more frequent among adenocarcinoma patients. Moreover, rs1982809G and rs2705511C correlated with the more advanced stages of NSCLC (stage II and III), but not with stage IV. Furthermore, we showed that rs2705511 and rs1982809 significantly modified OS, while rs9288952 tended to be associated with patients' survival. Conclusion: Our results indicate that BTLA polymorphic variants may be considered low-penetrance risk factors for NSCLC, especially in never-smokers and in females, and are associated with the OS of NSCLC patients. abstract_id: PUBMED:35900139 Risk-stratified Approach for Never- and Ever-Smokers in Lung Cancer Screening: A Prospective Cohort Study in China. Rationale: Over 40% of lung cancer cases occurred in never-smokers in China. However, high-risk never-smokers were precluded from benefiting from lung cancer screening as most screening guidelines did not consider them. Objectives: We sought to develop and validate prediction models for 3-year lung cancer risks for never- and ever-smokers, named the China National Cancer Center Lung Cancer models (China NCC-LCm2021 models). Methods: 425,626 never-smokers and 128,952 ever-smokers from the National Lung Cancer Screening program were used as the training cohort and analyzed using multivariable Cox models. Models were validated in two independent prospective cohorts: one included 369,650 never-smokers and 107,678 ever-smokers (841 and 421 lung cancers), and the other included 286,327 never-smokers and 78,469 ever-smokers (503 and 127 lung cancers). Measurements and Main Results: The areas under the receiver operating characteristic curves in the two validation cohorts were 0.698 and 0.673 for never-smokers and 0.728 and 0.752 for ever-smokers. Our models had higher areas under the receiver operating characteristic curves than other existing models and were well calibrated in the validation cohort. The China NCC-LCm2021 threshold of ≥0.47% was suggested for never-smokers and ≥0.51% for ever-smokers.
Moreover, we provided a range of threshold options with corresponding expected screening outcomes, screening targets, and screening efficiency. Conclusion: The China NCC-LCm2021 models can accurately reflect individual risk of lung cancer, regardless of smoking status. Our models can significantly increase the feasibility of conducting centralized lung cancer screening programs because we provide justified thresholds to define the high-risk population of lung cancer and threshold options to adapt to different configurations of medical resources. abstract_id: PUBMED:23921082 Lung cancer in never smokers: disease characteristics and risk factors. It is estimated that approximately 25% of all lung cancer cases are observed in never-smokers and its incidence is expected to increase due to smoking prevention programs. Risk factors described for the development of lung cancer include second-hand smoking, radon exposure, occupational exposure to carcinogens and to cooking oil fumes and indoor coal burning. Other factors reported are infections (HPV and Mycobacterium tuberculosis), hormonal and dietary factors and diabetes mellitus. Having an affected relative also increases the risk for lung cancer while recent studies have identified several single nucleotide polymorphisms associated with increased risk for lung cancer development in never smokers. Distinct clinical, pathological and molecular characteristics are observed in lung cancer in never smokers; it is observed more frequently in females, adenocarcinoma is the predominant histology, and it has a different pattern of molecular alterations. The purpose of this review is to summarize our current knowledge of this disease. abstract_id: PUBMED:7503599 Are lean smokers at increased risk of lung cancer? The Israel Civil Servant Cancer Study. Background: Whether leanness is related to an increased risk of lung cancer is controversial. Objective: To examine the association of leanness with lung cancer incidence in a sample of Israeli men. Methods: The 23-year lung cancer incidence (1963 through 1986) was determined by linkage to the Israel Cancer Registry in 9975 male civil servants aged 40 through 69 years at initial examination in 1963. In 198,298 person-years of follow-up, 153 cases of lung cancer were identified. In 1963, body mass index (BMI) and cigarette smoking status were determined; in the 1968 reexamination, lung function tests were performed and BMI was reassessed. Results: Adjusted for age, smoking, and city by Cox regression, BMI was exponentially inversely related to lung cancer incidence, with a relative risk of 2.3 (95% confidence interval [CI], 1.4 to 3.8) comparing the lowest fifth of BMI (< 22.93 kg/m2) with the highest. The association was evident in light, moderate, and heavy smokers. Among smokers, the adjusted relative risk was 3.7 (95% CI, 1.9 to 7.3) for the lowest fifth of BMI. The associations were stronger for men in the lowest 10th of the BMI distribution (< 21.38 kg/m2). Controlling for lung function did not materially change the results. The adjusted population-attributable fraction associated with the lowest fifth of BMI among smokers was 20.4% (95% CI, 10.1% to 29.9%). Survival analysis showed that the association of BMI with lung cancer persisted throughout follow-up. Conclusions: The association shown between thinness and lung cancer incidence, particularly in smokers, was not attributable to the confounding factors studied, preclinical weight loss, or competing risks.
Thinness in smokers may lead to, or may reflect, enhanced host susceptibility. abstract_id: PUBMED:29454784 Coronary Artery Calcium Scores and Atherosclerotic Cardiovascular Disease Risk Stratification in Smokers. Objectives: This study assessed the utility of the pooled cohort equation (PCE) and/or coronary artery calcium (CAC) for atherosclerotic cardiovascular disease (ASCVD) risk assessment in smokers, especially those who were lung cancer screening eligible (LCSE). Background: The U.S. Preventive Services Task Force recommended and the Centers for Medicare & Medicaid Services currently pays for annual screening for lung cancer with low-dose computed tomography scans in a specified group of cigarette smokers. CAC can be obtained from these low-dose scans. The incremental utility of CAC for ASCVD risk stratification remains unclear in this high-risk group. Methods: Of 6,814 MESA (Multi-Ethnic Study of Atherosclerosis) participants, 3,356 (49.2% of total cohort) were smokers (2,476 former and 880 current), and 14.3% were LCSE. Kaplan-Meier, Cox proportional hazards, area under the curve, and net reclassification improvement (NRI) analyses were used to assess the association between PCE and/or CAC and incident ASCVD. Incident ASCVD was defined as coronary death, nonfatal myocardial infarction, or fatal or nonfatal stroke. Results: Smokers had a mean age of 62.1 years, 43.5% were female, and all had a mean of 23.0 pack-years of smoking. The LCSE sample had a mean age of 65.3 years, 39.1% were female, and all had a mean of 56.7 pack-years of smoking. After a mean of 11.1 years of follow-up 13.4% of all smokers and 20.8% of LCSE smokers had ASCVD events; 6.7% of all smokers and 14.2% of LCSE smokers with CAC = 0 had an ASCVD event during the follow-up. One SD increase in the PCE 10-year risk was associated with a 68% increase risk for ASCVD events in all smokers (hazard ratio: 1.68; 95% confidence interval: 1.57 to 1.80) and a 22% increase in risk for ASCVD events in the LCSE smokers (hazard ratio: 1.22; 95% confidence interval: 1.00 to 1.47). CAC was associated with increased ASCVD risk in all smokers and in LCSE smokers in all the Cox models. The C-statistic of the PCE for ASCVD was higher in all smokers compared with LCSE smokers (0.693 vs. 0.545). CAC significantly improved the C-statistics of the PCE in all smokers but not in LCSE smokers. The event and nonevent net reclassification improvements for all smokers and LCSE smokers were 0.018 and -0.126 versus 0.16 and -0.196, respectively. Conclusions: In this well-characterized, multiethnic U.S. cohort, CAC was predictive of ASCVD in all smokers and in LCSE smokers but modestly improved discrimination over and beyond the PCE. However, 6.7% of all smokers and 14.2% of LCSE smokers with CAC = 0 had an ASCVD event during follow-up. abstract_id: PUBMED:34431436 Fruits and Vegetables and Lung Cancer Risk in Never Smokers. A Multicentric and Pooled Case-Control Study. Background: The etiology of lung cancer in never smokers is partly unknown. We aimed to assess the effect of fruits and vegetables consumption on lung cancer risk in never smokers. Methods: We pooled five multicenter case-control studies performed in Northwestern Spain. Cases and controls were all never smokers. All lung cancer cases had anatomopathological confirmed diagnoses. We performed a multivariate logistic regression to analyze the effect of different types of fruits and vegetables consumption on lung cancer risk. 
Results: A total of 438 cases and 781 controls were included. We observed that consumption one to six times per week showed a negative association with lung cancer risk for kiwis (OR 0.67; 95%CI 0.46-0.95), oranges (OR 0.55; 95%CI 0.37-0.80), turnip tops (OR 0.48; 95%CI 0.34-0.66), "berza gallega" (OR 0.70; 95%CI 0.51-0.97) and broccoli (OR 0.55; 95%CI 0.35-0.83) compared to consumption less than once a week. On the other hand, we found an increased risk for lung cancer with a daily consumption of tomatoes, carrots and potatoes. Conclusions: Oranges, kiwis, turnip tops, berza gallega and broccoli may play a protective role in lung cancer development in never smokers, while tomatoes, carrots and potatoes might have some association with this disease. Answer: Yes, lean smokers are at increased risk of lung cancer. A study examining the association of leanness with lung cancer incidence in a sample of Israeli men found that body mass index (BMI) was exponentially inversely related to lung cancer incidence. Specifically, the relative risk of lung cancer was higher for those in the lowest fifth of BMI compared to the highest, and this association was evident across light, moderate, and heavy smokers. Among smokers, those in the lowest fifth of BMI had a significantly higher adjusted relative risk for lung cancer. The study concluded that the association between thinness and lung cancer incidence in smokers was not attributable to the confounding factors studied, preclinical weight loss, or competing risks, suggesting that thinness in smokers may lead to or reflect enhanced host susceptibility to lung cancer (PUBMED:7503599).
Instruction: Is there a polysomnographic signature of augmentation in restless legs syndrome? Abstracts: abstract_id: PUBMED:29860190 Polysomnographic findings in restless legs syndrome (RLS) patients with severe augmentation. Background And Objectives: Augmentation can occur frequently in restless legs syndrome (RLS) patients treated with dopaminergic agents. Video-polysomnographic (PSG) data from augmented RLS patients are scant. The aim of this study was to evaluate PSG findings in augmented RLS patients and compare them with those of non-augmented RLS patients. Patients And Methods: Valid PSG data were analyzed from 99 augmented and 84 non-augmented RLS inpatients who underwent one night of PSG. Results: Both patient groups showed a high subjective burden of RLS symptoms. The mean scores on the International RLS Study Group Rating Scale (IRLS) were significantly higher in the group with augmentation than in the group without. The periodic leg movement index (PLMI) was increased in both groups, mostly on account of the PLM in wakefulness (PLMW). Both groups presented a reduced sleep efficiency and an increased sleep latency. The levodopa equivalent dose (LED) was significantly higher in the augmented group. Conclusions: Our study confirms that RLS patients with augmentation have disturbed sleep due to a high amount of leg movements and fragmented sleep. Overall, however, polysomnographic characteristics were not different between insufficiently treated RLS and severely augmented RLS patients, implying that augmentation could represent a severe form of RLS and not a different phenomenon. abstract_id: PUBMED:24767724 Polysomnographic record and successful management of augmentation in restless legs syndrome/Willis-Ekbom disease. Background: Dopamine agonists (DAs) represent the first-line treatment in restless legs syndrome (RLS); however, in the long term, a substantial proportion of patients will develop augmentation, which is a severe drug-related exacerbation of symptoms and the main reason for late DA withdrawal. Polysomnographic features and mechanisms underlying augmentation are unknown. No practice guidelines for management of augmentation are available. Methods: A clinical case series of 24 consecutive outpatients affected by RLS with clinically significant augmentation during treatment with immediate-release DA was performed. All patients underwent a full-night polysomnographic recording during augmentation. A switchover from immediate-release DAs (l-dopa, pramipexole, ropinirole, rotigotine) to the long-acting, extended-release formula of pramipexole was performed. Results: Fifty percent of patients presented more than 15 periodic limb movements per hour of sleep during augmentation, showing longer sleep latency and shorter total sleep time than subjects without periodic limb movements. In all patients, resolution of augmentation was observed within two to four weeks, during which immediate-release dopamine agonists could be completely withdrawn. Treatment efficacy of extended-release pramipexole has persisted, thus far, over a mean follow-up interval of 13 months. Conclusions: Pramipexole extended release could be an easy, safe, and fast pharmacological option to treat augmentation in patients with restless legs syndrome. As such, it warrants further prospective and controlled investigations. This observation supports the hypothesis that the duration of action of the drug plays a key role in the mechanism of augmentation.
abstract_id: PUBMED:25129261 Is there a polysomnographic signature of augmentation in restless legs syndrome? Objective: Augmentation of restless legs syndrome (RLS) is a potentially severe side-effect of dopaminergic treatment. Data on objective motor characteristics in augmentation are scarce. The aim of this study was to investigate in detail different variables of leg movements (LM) in untreated, treated, and augmented RLS patients. Methods: Forty-five patients with idiopathic RLS [15 untreated, 15 treated (non-augmented), 15 augmented] underwent RLS severity assessment, one night of video-polysomnography with extended electromyographic montage, and a suggested immobilization test (SIT). Results: Standard LM parameters as well as periodicity index (PI) and muscle recruitment pattern did not differ between the three groups. The ultradian distribution of periodic leg movements (PLM) in sleep during the night revealed significant differences only during the second hour of sleep (P <0.05). However, augmented patients scored highest on RLS severity scales (P <0.05) and were the only group with a substantial number of PLM during the SIT. Conclusion: This study demonstrates that polysomnography is of limited usefulness for the diagnosis and evaluation of RLS augmentation. In contrast, the SIT showed borderline differences in PLM, and differences on subjective scales were marked. According to these results, augmentation of RLS is a phenomenon that predominantly manifests in wakefulness. abstract_id: PUBMED:36090852 Polysomnographic nighttime features of Restless Legs Syndrome: A systematic review and meta-analysis. Background: Restless Legs Syndrome (RLS) is a common sleep disorder. Polysomnographic (PSG) studies have been used to explore the night sleep characteristics of RLS, but their relationship with RLS has not been fully analyzed and researched. Methods: We searched the Cochrane Library electronic literature, PubMed, and EMBASE databases to identify research literature comparing the differences in polysomnography between patients with RLS and healthy controls (HCs). Results: This review identified 26 studies for meta-analysis. Our research found that the rapid eye movement sleep (REM)%, sleep efficiency (SE)%, total sleep time (TST) min, and N2 were significantly decreased in patients with RLS compared with HCs, while sleep latency (SL) min, stage shifts (SS), awakenings number (AWN), wake time after sleep onset (WASO) min, N1%, rapid eye movement sleep latency (REML), and arousal index (AI) were significantly increased. Additionally, there was no significant difference among N3%, slow wave sleep (SWS)%, and apnea-hypopnea index (AHI). Conclusion: Our findings demonstrated that architecture and sleep continuity had been disturbed in patients with RLS, which further illustrates the changes in sleep structure in patients with RLS. In addition, further attention to the underlying pathophysiological mechanisms of RLS and its association with neurodegenerative diseases is needed in future studies. abstract_id: PUBMED:37840917 Exploring the causes of augmentation in restless legs syndrome. Long-term drug treatment for Restless Legs Syndrome (RLS) patients can frequently result in augmentation, which is the deterioration of symptoms with an increased drug dose. The cause of augmentation, especially derived from dopamine therapy, remains elusive. Here, we review recent research and clinical progress on the possible mechanism underlying RLS augmentation. 
Dysfunction of the dopamine system highly possibly plays a role in the development of RLS augmentation, as dopamine agonists improve desensitization of dopamine receptors, disturb receptor interactions within or outside the dopamine receptor family, and interfere with the natural regulation of dopamine synthesis and release in the neural system. Iron deficiency is also indicated to contribute to RLS augmentation, as low iron levels can affect the function of the dopamine system. Furthermore, genetic risk factors, such as variations in the BTBD9 and MEIS1 genes, have been linked to an increased risk of RLS initiation and augmentation. Additionally, circadian rhythm, which controls the sleep-wake cycle, may also contribute to the worsening of RLS symptoms and the development of augmentation. Recently, Vitamin D deficiency has been suggested to be involved in RLS augmentation. Based on these findings, we propose that the progressive reduction of selective receptors, influenced by various pathological factors, reverses the overcompensation of the dopamine intensity promoted by short-term, low-dose dopaminergic therapy in the development of augmentation. More research is needed to uncover a deeper understanding of the mechanisms underlying the RLS symptom and to develop effective RLS augmentation treatments. abstract_id: PUBMED:17230457 Augmentation of restless legs syndrome with long-term tramadol treatment. Restless legs syndrome (RLS) augmentation, defined as a kind of suppression of the circadian rhythm of the disease in which sensory and motor symptoms appear earlier during the day (and over previously unaffected body parts), with a progressive phase advance until, backwards, the symptoms may cover the entire day, has been described only after treatment with dopaminergic drugs. We report clinical and polysomnographic accounts of a patient developing RLS augmentation after long-term treatment with tramadol, an opioid agonist with selectivity for mu-receptor and added norepinephrine and serotonin reuptake inhibition properties. Polysomnographic measures showed an improvement of RLS and a disappearance of diurnal sensory and motor RLS symptoms after tramadol was stopped. Our case confirms a recent retrospective report of augmentation of RLS after treatment with tramadol, and begs the question whether augmentation is truly restricted to dopaminergic drugs. abstract_id: PUBMED:26106453 Augmentation in Restless Legs Syndrome: Treatment with Gradual Medication Modification. Dopaminergic drugs can cause augmentation during the treatment of restless legs syndrome (RLS). We previously reported that sudden withdrawal of dopaminergic treatment was poorly tolerated. We now report our experience with gradual withdrawal of the dopaminergic drug during the drug substitution process using a retrospective chart review with comparison to previous data. Seven patients with RLS and dopaminergic drug-induced augmentation were treated with a gradual withdrawal of the offending drug and replacement with an alternative medication. Compared to sudden withdrawal, measured outcomes were similar but gradual tapering was better tolerated. We conclude that for augmentation in RLS, gradual tapering of the augmentation-inducing dopaminergic drug is better tolerated than sudden withdrawal. The optimal approach to treating augmentation has not been established and may differ between patients. Further study with direct comparison of strategies and a larger patient population is needed to confirm our preliminary observations. 
abstract_id: PUBMED:16200540 Polysomnographic and pharmacokinetic findings in levodopa-induced augmentation of restless legs syndrome. Augmentation, defined as a loss of circadian recurrence with progressively earlier daily onset and increase in the duration, intensity, and anatomy of symptoms, not compatible with the half-life of the drug, is associated with dopaminergic treatment in restless legs syndrome (RLS) patients. The pathogenesis of augmentation is unclear. We describe a patient with idiopathic RLS who developed augmentation after 8 months of levodopa treatment. Videopolysomnographic and pharmacokinetic studies with monitoring of plasma levodopa levels demonstrated marked motor hyperactivity during augmentation, with anarchic discharges of motor unit potentials, tonic grouped discharges and flexor spasms, associated with painful dysesthesia. Symptoms and signs of augmentation were related to low plasma levodopa levels, abating 75 minutes after oral levodopa administration and reappearing after 3 hours, closely mirroring the rapid rise and fall of plasma levodopa concentration. This case is the first report in which RLS augmentation is shown to be characterized by motor hyperkinesias paralleling the plasma levodopa pharmacokinetic profile. abstract_id: PUBMED:32998091 The Frontal Assessment Battery in RLS patients with and without augmentation. Objective: We assessed frontal executive functions in patients with RLS/WED with and without augmentation and compared the results to healthy controls. Methods: We recruited 38 patients with RLS/WED. A total of 23 patients were treated with dopaminergic therapy and showed no signs of augmentation and 15 patients had a history of augmentation (AUG). Results were compared to 21 healthy controls. All individuals were assessed by the Frontal Assessment Battery (FAB) and the MMSE. Furthermore, impulsivity was assessed during a semi-structured interview. Results: Patients with AUG performed worse in the FAB than healthy controls and RLS/WED patients without AUG (p = 0.001, η² = 0.201). When we assessed the subtests of the FAB separately, we found a significant difference in the subtest assessing inhibitory control (p = 0.008, η² = 0.138). Conclusions: Our findings suggest impaired executive function in RLS/WED patients with augmentation compared to RLS/WED patients without augmentation and healthy controls. Long-term neuroplastic changes within the prefrontal cortex may be the underlying cause of these results. However, further studies in a larger sample size and with a more extensive neuropsychological test battery are needed to confirm our preliminary results. abstract_id: PUBMED:30055794 French consensus: Augmentation syndrome in restless legs syndrome. Augmentation syndrome is one of the most severe complications of RLS. It is characterised by a worsening of treated symptoms, principally an increase in the severity of symptoms and an earlier onset time. Augmentation syndrome occurs primarily with dopaminergic treatments. It is crucial that the patient be sufficiently well informed to prevent its occurrence and that the prescription of excessively high doses of dopaminergic agonists be avoided. In the presence of augmentation syndrome confirmed using the diagnostic criteria, the specialist treating the restless legs syndrome should quickly modify the patient's treatment. In this article, our expert group proposes a practical strategy for the diagnosis, prevention and treatment of augmentation syndrome.
Answer: Based on the provided abstracts, there does not appear to be a clear polysomnographic signature of augmentation in restless legs syndrome (RLS). Several studies have investigated the polysomnographic (PSG) features of RLS patients with augmentation, but the findings suggest that PSG may have limited usefulness for the diagnosis and evaluation of RLS augmentation. One study found that standard leg movement parameters, periodicity index, and muscle recruitment pattern did not differ between untreated, treated (non-augmented), and augmented RLS patients. The study also noted that the ultradian distribution of periodic leg movements (PLM) in sleep revealed significant differences only during the second hour of sleep, but the augmented patients scored highest on RLS severity scales and were the only group with a substantial number of PLM during the suggested immobilization test (SIT), indicating that augmentation predominantly manifests in wakefulness (PUBMED:25129261). Another study confirmed that RLS patients with augmentation have disturbed sleep due to a high amount of leg movements and fragmented sleep. However, the polysomnographic characteristics were not different between insufficiently treated RLS and severely augmented RLS patients, suggesting that augmentation could represent a severe form of RLS rather than a different phenomenon (PUBMED:29860190). Additionally, a systematic review and meta-analysis found that RLS patients had disturbed sleep architecture and sleep continuity compared to healthy controls, but this was not specifically linked to augmentation (PUBMED:36090852). Furthermore, a case series reported that 50% of patients presented more than 15 periodic limb movements per hour of sleep during augmentation, showing longer sleep latency and shorter total sleep time than subjects without periodic limb movements. However, this study focused on the management of augmentation rather than establishing a PSG signature (PUBMED:24767724). In summary, while PSG studies have shown that RLS patients with augmentation experience disturbed sleep, there is no definitive polysomnographic signature that can be used to diagnose or evaluate augmentation in RLS. The phenomenon of augmentation seems to be more evident in subjective severity and wakefulness rather than in specific PSG parameters (PUBMED:25129261; PUBMED:29860190).
Instruction: Breast cancer in Japanese men: does sex affect prognosis? Abstracts: abstract_id: PUBMED:15550865 Breast cancer in Japanese men: does sex affect prognosis? Background: Breast cancer has received much less investigative attention in Asian men than in Caucasian men. We examined whether the prognosis of Japanese men with breast cancer differs from that of Japanese women with the disease. Methods: The clinicopathological features of 14 Japanese men with breast cancer were reviewed and age- and stage-matched case-control analysis of these men and 140 female patients was performed. Results: Disease-free survival (p=0.94) and overall survival (p=0.62) did not differ significantly between the sexes. Five-year disease-free survival was 77% for the men and 75% for the women, and the 5-year overall survival was 92% for the men and 86% for the women. The disease recurred in 2 men but none died of breast cancer, although 3 died of other causes during the median follow-up period of 7 years. There were no significant differences in p53 mutation (p=0.20) or erbB-2 oncoprotein overexpression (p=0.33) between the men and women studied. Conclusion: Survival rates of Japanese male and female breast cancer patients are similar when age and stage of the disease are taken into consideration. However, comorbid disease mortality is likely the major contributor to clinical outcome in Japanese male breast cancer. abstract_id: PUBMED:37520334 An appropriate treatment interval does not affect the prognosis of patients with breast Cancer. Purpose: Major public health emergencies may lead to delays or alterations in the treatment of patients with breast cancer at each stage of diagnosis and treatment. How much do these delays and treatment changes affect treatment outcomes in patients with breast cancer? Methods: This review summarized relevant research in the past three decades and identified the effect of delayed treatment on the prognosis of patients with breast cancer in terms of seeking medical treatment, neoadjuvant treatment, surgery, postoperative chemotherapy, radiotherapy, and targeted therapies. Results: Delay in seeking medical help for ≥12 weeks affected the prognosis. Surgical treatment within 4 weeks of diagnosis did not affect patient prognosis. Starting neoadjuvant chemotherapy within 8 weeks after diagnosis, receiving surgical treatment at 8 weeks or less after the completion of neoadjuvant chemotherapy, and receiving radiotherapy 8 weeks after surgery did not affect patient prognosis. Delayed chemotherapy did not increase the risk of relapse in patients with luminal A breast cancer. Every 4 weeks of delay in the start of postoperative chemotherapy in patients with luminal B, triple-negative, or HER2-positive breast cancer treated with trastuzumab will adversely affect the prognosis. Targeted treatment delays in patients with HER2-positive breast cancer should not exceed 60 days after surgery or 4 months after diagnosis. Radiotherapy within 8 weeks after surgery did not increase the risk of recurrence in patients with early breast cancer who were not undergoing adjuvant chemotherapy. Conclusion: Different treatments have different time sensitivities, and the careful evaluation and management of these delays will be helpful in minimizing the negative effects on patients. abstract_id: PUBMED:26195940 Application of a 70-Gene Expression Profile to Japanese Breast Cancer Patients. 
Background: As data on using MammaPrint®, a 70-gene expression profile for molecular subtyping of breast cancer, are limited in Japanese patients, we aimed to determine the gene profiles of Japanese patients using MammaPrint and to investigate its possible clinical application for selecting adjuvant treatments. Patients And Methods: 50 women treated surgically at our institution were examined. The MammaPrint results were compared with the St Gallen 2007 and intrinsic subtype risk categorizations. Results: Of 38 cases judged to be at intermediate risk based on the St Gallen 2007 Consensus, 11 (29%) were in the high-risk group based on MammaPrint. 1 of the 30 luminal A-like tumors (3%) was judged as high risk based on MammaPrint results, whereas 7 of the 20 tumors (35%) categorized as luminal B-like or triple negative were in the low-risk group. There have been no recurrences to date in the MammaPrint group, and this is possibly attributable to most of the high-risk patients receiving chemotherapy that had been recommended on the basis of their MammaPrint results. Conclusions: Our results indicate that MammaPrint is applicable to Japanese patients and that it is of potential value in current clinical practice for devising individualized treatments. abstract_id: PUBMED:15371463 Improvement in the prognosis of Japanese breast cancer patients from 1946 to 2001--an institutional review. Background: Breast cancer has emerged as one of the most frequent malignancies among Japanese women; however, the long-term survival of Japanese breast cancer patients is uncertain. Methods: We analyzed the chronological changes in the clinical and pathological characteristics, treatment procedures and the long-term prognosis of 15 416 Japanese women with 16 217 primary breast cancers treated in the Cancer Institute Hospital in Tokyo between 1946 and 2001. Results: Our analysis revealed a chronological increase in the mean patient age, postmenopausal patients and non-invasive carcinomas. Operative procedures became less extensive, with approximately 45% of breast cancer patients in 2000-2001 receiving breast-conserving treatment. Radiotherapy to the regional lymph nodes decreased, while postoperative chemotherapy and hormonal treatments have become more frequent. The survival rate has improved steadily during the past 5 decades. The 10-year crude overall survival rate improved from 61% before 1960 to 83% in the 1990s. Conclusions: The survival rate of Japanese women with breast cancer has dramatically improved during the past 5 decades. abstract_id: PUBMED:20571962 The relevance of intrinsic subtype to clinicopathological features and prognosis in 4,266 Japanese women with breast cancer. Background: Estrogen receptor (ER), progesterone receptor (PgR), and HER2 expression status in breast cancer function as prognostic and predictive factors that enable individualized treatment. Intrinsic subtype classification has also been performed based on these and other biological and prognostic characteristics. However, clinical analysis of such subtypes in a large number of Japanese breast cancer patients has not yet been reported. Methods: Between January 2003 and December 2007, 4,266 patients with primary breast cancer were registered. Four subtypes based on immunohistochemically evaluated ER/PgR/HER2 status, clinicopathological features, and prognosis were analyzed retrospectively. 
Results: The following subtype distribution was observed: luminal A type (ER+ and/or PgR+, HER2-), 3,046 cases (71%); luminal B type (ER+ and/or PgR+, HER2+), 321 cases (8%); HER2 type (ER-, PgR-, HER2+), 398 cases (9%); and triple negative (TN) type (ER-, PgR-, HER2-), 501 cases (12%). The HER2+ subtypes (luminal B and HER2 types) had a significantly higher incidence of lymph node metastasis and lymphatic permeation, while the hormone receptor negative subtypes (HER2 and TN types) showed a significantly higher nuclear grade. Overall, patients with HER2-type and TN-type disease had a significantly poorer prognosis than other subtypes. Conclusion: Intrinsic breast cancer subtypes are associated with clinicopathological features and prognosis in Japanese women. Long-term clinical observation of the relationship between each subtype and therapies used should provide useful information for selecting appropriately tailored treatments. abstract_id: PUBMED:2195998 Natural history of breast cancer among Japanese and Caucasian females. Breast cancer among Japanese females is characterized by its relatively low incidence and better prognosis than among Caucasian females. The annual mortality due to breast cancer among Japanese is about one-fifth that among Caucasians. Comparison of case distribution by histological type indicates that the ratio of well-differentiated carcinoma is slightly higher among Japanese, while the ratio of poorly differentiated carcinoma is slightly higher among Caucasian females. It is noteworthy that the incidence of in situ and invasive lobular carcinoma among Japanese is much lower than among Caucasian females. The age distribution shows that breast cancer is more frequent among middle-aged females in Japan, but more common among aged females in the West. Breast cancer among Japanese females shows a better prognosis than among Caucasian females as a whole, and even with equal tumor size and lymph node metastasis. It seems that postmenopausal breast cancer among Caucasians has a worse prognosis than the premenopausal one, although no remarkable difference in prognosis is found between premenopausal and postmenopausal patients in Japan. This suggests that menopausal status is a critical factor for prognosis among Caucasians, but not among Japanese. As mentioned above, the morbidity and mortality rates of breast cancer among Japanese females are very low, but recently, both morbidity and mortality rates in Japan have been steeply increasing. For example, the mortality rate of breast cancer in Japan almost doubled during the past 20 years. Moreover, the biological behavior of breast cancer among Japanese females has been changing recently, and time-trend data clearly indicate that breast cancer in Japan will, in the future, become much more like that in the West; indeed, it is already becoming westernized. abstract_id: PUBMED:15538045 Clinicopathological feature and long-term prognosis of apocrine carcinoma of the breast in Japanese women. Because of the rarity of apocrine carcinoma and the lack of standardized criteria for its diagnosis, definitive conclusions about its clinicopathologic features and prognosis have not been reached. We retrospectively examined data on 2091 curatively treated Japanese patients with primary breast carcinoma. Among them, 33 (1.6%) who had been diagnosed with apocrine carcinoma were reviewed.
Compared with non-apocrine carcinoma, apocrine carcinoma was characterized by significantly lower rates of ER and PR positivity and by significantly more frequent unilateral multicentric breast carcinoma. The clinicopathological factors influencing the 12-year survival rate were lymph node metastasis, lymphatic involvement and vascular involvement. There was no difference in survival rates at 10 years after operation between apocrine carcinoma and non-apocrine carcinoma. Our results show that the distinctive hormone receptor pattern and unilateral multicentricity are the only typical clinicopathological features of apocrine carcinoma. abstract_id: PUBMED:17714947 The prevalence of intrinsic subtypes and prognosis in breast cancer patients of different races. A recent report indicated that a high prevalence of basal-like breast tumors (estrogen receptor [ER]-negative, progesterone receptor [PR]-negative, human epidermal growth factor receptor [HER] 2-negative, and cytokeratin 5/6-positive and/or HER1-positive) could contribute to a poor prognosis in African American women with breast cancer. It has been reported that Japanese women with breast cancer have a significantly better survival rate than other races in the USA. These findings suggest that breast cancers in Japanese women have favorable biological characteristics. To clarify this hypothesis, we conducted a cohort study to investigate the prevalence of intrinsic subtypes and prognosis for each subtype in 793 Japanese patients. This study revealed a very low prevalence (only 8%) of basal-like breast tumors with aggressive biological characteristics in Japanese patients. Survival analysis showed a significantly poorer prognosis in patients with basal-like tumors than in those with luminal A tumors (ER- and/or PR-positive, and HER2-negative) with favorable biological characteristics. These findings support the hypothesis that breast cancers in Japanese women have more favorable biological characteristics and a better prognosis than those in other races. In conclusion, the prevalence of basal-like breast tumors could influence the prognosis of breast cancer patients of different races. The prevalence of intrinsic subtypes should be taken into account when analyzing survival data in a multi-racial/international clinical study. abstract_id: PUBMED:33037391 Incidence of contralateral and ipsilateral breast cancers and prognosis in BRCA1/2 pathogenic variant carriers based on the Japanese HBOC Consortium registration. This study aimed to clarify the breast cancer prognosis in Japanese patients with BRCA1/2 pathogenic variants. We analyzed 2235 women with breast cancer who underwent BRCA1/2 genetic testing between 1996 and 2018 using data from the Japanese hereditary breast and ovarian cancer syndrome registry. The cumulative risk for contralateral and ipsilateral breast cancers and time to death since the first breast cancer were stratified based on the BRCA1/2 variant status. The median follow-up was 3.0 years (0.1-34.1 years) after the first breast cancer. The annual average risks of contralateral breast cancer in BRCA1 and BRCA2 and non-BRCA1/2 pathogenic variant carriers were 4.0%, 2.9%, and 1.9%, respectively (P = 0.001). The annual average risks of ipsilateral breast cancer in the three groups were 2.7%, 1.4%, and 1.1%, respectively (P = 0.06) (an illustrative note on converting these annual risks into cumulative risks appears after the answer below). BRCA1 pathogenic variant carriers had significantly higher risks of contralateral (hazard ratio 1.91, P < 0.001) and ipsilateral (hazard ratio 2.00, P = 0.02) breast cancers than non-BRCA1/2 pathogenic variant carriers.
The time to death did not differ significantly by BRCA1/2 variant status (P = 0.28). The prognosis of breast cancer patients who received standard treatment was comparable regardless of the BRCA1/2 variant status. abstract_id: PUBMED:21119029 Estrogen receptor-positive breast cancer in Japanese women: trends in incidence, characteristics, and prognosis. Background: The incidence of breast cancer in Japanese women has doubled in all age groups over the past two decades. Patients And Methods: We examined the characteristics of the tumors treated in three time periods between 1982 and 2010. Estrogen receptor (ER), progesterone receptor (PgR) and HER2 status were assessed by immunohistochemistry. Correlation of hormone receptor levels with clinicopathological factors and prognosis was analyzed in ER-positive, HER2-negative breast cancer in two age groups (≤50 years versus >50 years). Results: The frequency of ER-positive breast cancer in women aged 50 years or younger increased greatly over the interval studied (1982-1991: 52.5%, 1992-2001: 72.6%, 2002-2010: 87.1%, P < 0.0001). The frequency of ER-positive tumors also significantly increased in women over 50 years of age (1982-1991: 69.4%, 1992-2001: 73.3%, 2002-2010: 78.6%, P = 0.029). In ER-positive, HER2-negative breast cancer, tumor grade was negatively correlated with expression levels of ER and PgR. Prognosis for patients with ER-positive, HER2-negative disease significantly improved over time, due to advances in adjuvant therapies. Conclusion: It is necessary to establish risk factors, both genetic and environmental, capable of predicting the risk of ER-positive breast cancer and thus enable the efficient selection of candidates for hormone receptor-targeted chemoprevention. Answer: Based on the study reported in abstract PUBMED:15550865, the prognosis of Japanese men with breast cancer does not significantly differ from that of Japanese women when age and stage of the disease are taken into consideration. The study reviewed the clinicopathological features of 14 Japanese men with breast cancer and performed a case-control analysis with 140 female patients matched for age and stage. The results showed that disease-free survival (p=0.94) and overall survival (p=0.62) were not significantly different between the sexes. The 5-year disease-free survival was 77% for men and 75% for women, while the 5-year overall survival was 92% for men and 86% for women. The study concluded that survival rates are similar for male and female Japanese breast cancer patients when matched for age and stage, suggesting that sex does not affect prognosis significantly. However, it was noted that comorbid disease mortality is likely a major contributor to clinical outcomes in Japanese male breast cancer patients.
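[Editorial note - illustration only] The contralateral- and ipsilateral-risk figures in PUBMED:33037391 are reported as annual average risks. Assuming, purely for illustration, a roughly constant annual risk r, the cumulative risk over n years can be approximated as shown below; this constant-hazard assumption is a simplification and is not part of the registry analysis itself, so the study's own cumulative-risk estimates should be preferred where available.

```latex
\text{Cumulative risk}(n) \;\approx\; 1 - (1 - r)^{n}
% Example with the reported BRCA1 contralateral figure, r = 0.04:
% 1 - (1 - 0.04)^{10} \approx 0.34, i.e. roughly a one-in-three risk over 10 years.
```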
Instruction: The management of pre-hypertension in primary care: Is it adequate? Abstracts: abstract_id: PUBMED:25875919 The management of pre-hypertension in primary care: Is it adequate? Background: Pre-hypertension (pHT) is frequently diagnosed in the primary care setting, but its management by primary care physicians (PCPs) is not well characterized. Methods: All individuals aged 30-45 years who were insured by Clalit Health services in the Tel Aviv district and had their blood pressure (BP) measured from January 2006 to December 2010 were evaluated. Individuals were divided into three groups based on their initial BP value: optimal (< 120/80 mmHg), normal (systolic BP 120-129 or diastolic 80-84 mmHg) and borderline (130-139/85-89 mmHg). Groups were compared regarding clinical and laboratory follow-up performed by their PCP. Results: Of the 20,214 individuals included in the study, 6576 (32.5%) had values in the pHT range. Of these, 2126 (32.3% of those with pHT) had BP values defined as "borderline" and 4450 (67.6% of those with pHT) had BP values defined as "normal". The number of follow-up visits by the PCP and repeat BP measurement were similar in those with "optimal" BP and pHT. A third and fourth BP measurement were recorded more frequently in those with pHT. In those with pHT, there were more recorded BP measurements than in those with borderline BP (3.35 ± 3 vs. 3.23 ± 2.6), but the time from the initial to the second measurement and a record of a third and fourth measurement were the same in the two groups. Conclusion: Identification of pHT does not lead to a significant change in follow-up by PCPs, irrespective of BP values in the pHT range. abstract_id: PUBMED:26724242 Effectiveness of a Stroke Risk Self-Management Intervention for Adults with Prehypertension. Purpose: The aim of this study was to evaluate the effectiveness of a community-based intervention for prehypertensive adults, to enhance stroke risk awareness and to adopt a preventive lifestyle for primary stroke prevention. Methods: This was a single-blinded, repeated measures quasi-experimental study with 47 participants (23 in the experimental group and 24 in the control group) recruited through convenience sampling from two urban areas. The stroke risk self-management intervention consisted of three weekly, 2-hour, face-to-face sessions and two booster telephone sessions, utilizing strategies to enhance motivation for behavioral changes based on the Self-Determination Theory. All participants completed a pretest, a 1-month and a 3-month post test of stroke risk awareness and preventive lifestyle including blood pressure self-monitoring, healthy diet, and regular physical activity. Data were analyzed using descriptive statistics, chi-square test, two sample t test, repeated measures analysis of variance, and Friedman test with PASW Statistics 18.0. Results: After the intervention, significant improvements were found in the experimental group for stroke risk awareness, blood pressure self-monitoring and regular physical activity, and were sustained over time. Conclusions: Our preliminary results indicate that the stroke risk self-management intervention is feasible and associated with improvement in self-management of stroke risk factors for primary stroke prevention among a prehypertensive population. abstract_id: PUBMED:20606923 Screening for hypertension among older adults: a primary care "high risk" approach. 
Background: Recommendations for early detection and management of elevated blood pressure through opportunistic clinic-based screening may be inadequate for the rural population in India as access to health facilities is limited. Materials And Methods: Sixteen Health Aides (trained primary care workers) were trained to measure blood pressure using a standardized training procedure. Six of those assessed as competent in the initial evaluation were each allotted a stratified random sample of about 150 persons aged 50 years or over in the village under their care, whose blood pressure they measured during their regular scheduled visits. Results: 14/16 of the health aides (83%) met the stipulated criteria in the simulation study using a module from the British Hypertension Society. In the field survey of 920 individuals, where 20% of the population was evaluated by a blinded investigator, the weighted Kappa for agreement, using normal, pre-hypertension and hypertension as categories, ranged between 62% and 89% (a brief formula for the weighted kappa appears in the editorial note following these abstracts). Only 75/286 (25%) of those detected to be hypertensive knew their status prior to the study. All those detected with hypertension were referred to a physician at a referral facility. 70% of those referred were evaluated at the referral facility and 64% of them were initiated on treatment for hypertension within 3 months. Conclusion: Using primary care workers to screen for hypertension through the model suggested here will ensure that the population over 50 years of age will be screened once every 2 years without burdening the worker. This screening process will enable the health system to identify and cater to the needs of this vulnerable population. abstract_id: PUBMED:22324864 Storied experiences of nurse practitioners managing prehypertension in primary care. Purpose: The purpose of this study was to explore the nurse practitioner (NP) experience with caring for prehypertensive patients. Lifestyle modifications are the primary recommendation for management of prehypertension. Given the historical foundation of health promotion and disease prevention as a fundamental component of NP professional identity, gaining insight into the experience of caring for prehypertensive patients in the current healthcare environment is valuable to the profession, patients, and communities. Therefore, the NP role in health promotion and disease prevention related to prehypertension was explored as well. Data Sources: Narrative inquiry was the chosen methodology to gather narrative accounts of eight NPs caring for prehypertensive patients in primary care. The three-dimensional narrative inquiry space was used to guide the researcher during data analysis. Conclusions: Three themes emerged from the NPs' narratives: realities of practice, ambiguous role identity, and bridging models. Time constraints, financial considerations, and bridging the nursing and medical models while adapting to practice environments were barriers identified as components of the NP experience caring for patients with prehypertension. Implications For Practice: This study revealed that caring for prehypertensive patients is a complex and multilayered experience. abstract_id: PUBMED:31496664 Mobile health technology (WeChat) for the hierarchical management of community hypertension: protocol for a cluster randomized controlled trial. Purpose: The prevalence of hypertension continues to increase worldwide, raising an urgent need for novel and efficient methods for controlling hypertension.
As the Internet and smartphones become more popular, their multiple functions and large user base make mobile health (mHealth) technology a potential tool for hypertension management. We aim to evaluate the use of mHealth technology to improve blood pressure and self-management behavior in people with hypertension and prehypertension. Intervention: The mHealth intervention measures include health education, behavior promotion, group chatting and long-term blood pressure monitoring hierarchically delivered via the WeChat application among 242 participants. The frequency, intensity and content of the hierarchical intervention are determined based on the cardiovascular risk stratification of the intervention subjects. Study Design: This cluster randomized controlled trial was carried out in two subdistricts in Guangzhou, China, among 492 smartphone users with hypertension or prehypertension, from August 2018 to September 2019. The intervention group received hierarchical intervention through WeChat for six months, while the control group received usual care in the community healthcare center during this period. Indicators are measured at three time points for each group, and a telephone follow-up is planned for two years after the intervention. The primary outcome is systolic blood pressure; secondary outcomes include BMI, CPAT score, improvements in behavior and diet, and scores of self-efficacy and self-management. Feasibility is evaluated by intervention participation. The cost-effectiveness is evaluated by ICER. Conclusion: This study aims to evaluate the effect of the WeChat-based hierarchical management mode on improving blood pressure and self-management behavior in a population with hypertension and prehypertension, based on health-related knowledge, self-efficacy and medication adherence. If successful, the management mode will serve as a feasible, economical and efficient hypertension management mode suitable for the community. Clinical trial identifier: ChiCTR1900023002. abstract_id: PUBMED:26354334 A Risk Score to Predict Hypertension in Primary Care Settings in Rural India. We used the data of 297 participants (15-64 years old) from a cohort study (2003-2010) who were free from hypertension at baseline to develop a risk score that primary health care workers can use to predict hypertension in rural India. Age ≥35 years, current smoking, prehypertension, and central obesity were significantly associated with incident hypertension. The optimal cutoff value of ≥3 had a sensitivity of 78.6%, specificity of 65.2%, positive predictive value of 41.1%, and negative predictive value of 90.8%. The area under the receiver operating characteristic curve of the risk score was 0.802 (95% confidence interval = 0.748-0.856) (an illustrative sketch of this cutoff and its predictive values appears after the answer below). This simple and easy-to-administer risk score could be used to predict hypertension in primary care settings in rural India. abstract_id: PUBMED:25302227 Awareness and Approach towards Hypertension Management among General Practitioners of Western Vadodara. Background: Hypertension (HTN) is a major risk factor contributing to premature mortality from cardiovascular and cerebrovascular disease. To decrease morbidity and mortality from HTN, timely diagnosis of the disease and its complications, urgent treatment and referrals are required. General Practitioners (GPs) are the first tier of the health care system in India and have a wide scope of practice.
It is important to know the awareness and approach of primary care physicians to hypertension in their daily practice as compared to standard practice recommendations and guidelines, to identify targets for improvements. With this objective we decided to interview them personally and analyse their approach. Materials And Methods: We conducted a cross-sectional survey in 80 general practitioners (GPs) of the western part of Vadodara city with the use of a questionnaire prepared from JNC-7 guidelines and standard medical books. Seventy-seven [97.55%] GPs completed the questionnaire and their responses were statistically analysed. Results: Twenty percent of GPs were not applying the BP cuff properly for BP measurement. Only 18% and 16.6% could diagnose isolated diastolic hypertension (IDH) and isolated systolic hypertension (ISH), respectively, and 21% and 29% would have considered treatment of IDH and ISH, respectively. 48% considered treating pre-hypertension using non-pharmacological measures. Only 21% used thiazide diuretics for uncomplicated HTN and 50% used beta-blockers in coronary artery disease patients. Conclusion: In our study, most of the GPs in western Vadodara are well aware and updated about the initial lab investigations, non-pharmacological measures and complications of HTN but lack an effective approach towards history taking for HTN, technique for measurement of blood pressure, and diagnosis and treatment of IDH and ISH. Pre-hypertension and systolic and diastolic hypertension are under-treated and thiazide diuretics are underutilized. This study can be used to identify targets and approaches to improve hypertension management at the primary care level. abstract_id: PUBMED:18854471 Prehypertension and hypertension in a primary care practice. Objective: To assess the prevalence of prehypertension and the prevalence and treatment of hypertension in a family practice population. Design: Cross-sectional study. Setting: An academic family practice unit. Participants: Practice patients aged 30 to 80 years who had visited the clinic at least once during the 2 years before the study and had at least 1 blood pressure (BP) measurement recorded on their charts during that time period. Main Outcome Measures: Most recent BP recorded on the chart; presence or absence of a diagnosis of hypertension recorded on the chart; number and class of prescribed antihypertensive medications. Results: Of the 1388 patients who met the inclusion criteria, 389 had a diagnosis of hypertension. Of the 999 who did not have a diagnosis of hypertension, 306 (30.6%) met the criteria for prehypertension used in this study (systolic BP of 130 to 139 mm Hg or diastolic BP of 85 to 89 mm Hg). Men and older patients (60 to 80 years of age) were more likely to have prehypertension than other patients were. Of the patients with hypertension, 254 (65%) had achieved a BP level of < 140/90 mm Hg. The majority of hypertensive patients were prescribed 1 or 2 medications. Only 4.5% were using more than 2 different medications. Conclusion: A large proportion of a family practice's patients need close surveillance of BP because of the prevalence of prehypertension. Despite the improvement in the management of hypertension, only 65% of hypertensive patients had achieved the recommended target BP. Family physicians could be treating their hypertensive patients more aggressively with medications; only 4.4% of patients were using more than 2 different antihypertensive medications, despite 35% not being at target.
Hypertension surveillance and treatment to achieve target BP levels continue to be important issues in primary care. abstract_id: PUBMED:28804050 Diagnostic Errors in Primary Care Pediatrics: Project RedDE. Objective: Diagnostic errors (DEs), which encompass failures of accuracy, timeliness, or patient communication, cause appreciable morbidity but are understudied in pediatrics. Pediatricians have expressed interest in reducing high-frequency/subacute DEs, but their epidemiology remains unknown. The objective of this study was to investigate the frequency of two high-frequency/subacute DEs and one missed opportunity for diagnosis (MOD) in primary care pediatrics. Methods: As part of a national quality improvement collaborative, 25 primary care pediatric practices were randomized to collect 5 months of retrospective data on one DE or MOD: elevated blood pressure (BP) and abnormal laboratory values (DEs), or adolescent depression evaluation (MOD). Relationships between DE or MOD proportions and patient age, gender, and insurance status were explored with mixed-effects logistic regression models. Results: DE or MOD rates in pediatric primary care were found to be 54% for patients with elevated BP (n = 389), 11% for patients with abnormal laboratory values (n = 381), and 62% for adolescents with an opportunity to evaluate for depression (n = 400). When examining the number of times a pediatrician may have recognized an abnormal condition but either knowingly or unknowingly did not act according to recommended guidelines, providers did not document recognition of an elevated BP in 51% of patients with elevated BP, and they did not document recognition of an abnormal laboratory value without a delay in 9% of patients with abnormal laboratory values. Conclusions: DEs and MODs occur at an appreciable frequency in pediatric primary care. These errors may contribute to care delays and patient harm. abstract_id: PUBMED:17666199 Total cardiovascular risk management. Hypertension usually clusters with other cardiovascular risk factors, such as insulin resistance, visceral obesity, and dyslipidemia, greatly increasing an individual's risk for cardiovascular morbidity and death. Despite universal recognition that reduction in blood pressure and other cardiovascular risk factors is essential to improving long-term cardiovascular health, <25% of patients diagnosed with hypertension have adequate blood pressure control. Total cardiovascular risk is increased in the presence of risk factors, target organ damage, comorbid conditions, and the metabolic syndrome and may, to some extent, be prenatally determined. Individuals with "borderline" normal blood pressure and blood glucose are also at increased risk for cardiovascular disease, giving rise to the concept of individuals with "prehypertension" and "prediabetes." International treatment guidelines are now incorporating the concept of global cardiovascular risk assessment and management to improve long-term outcomes. Multifactorial intervention has proved to be highly effective at reducing cardiovascular risk and events in patients with type 2 diabetes mellitus, and studies suggest that reducing an array of risk factors by relatively small amounts can be more beneficial than achieving large reductions in a single risk factor. Clearly, isolated treatment of hypertension is no longer sufficient; risk factors and target organ damage need to be actively searched for and treated if long-term cardiovascular health is to be improved. 
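[Editorial note - illustration only] The agreement figures in PUBMED:20606923 above are weighted kappa values. One common linear-weight formulation, given observed cell proportions O_ij, chance-expected proportions E_ij (products of the row and column marginals), and k ordered categories (here normal, pre-hypertension and hypertension, so k = 3), is shown below. Values near 1 indicate near-perfect agreement between the health aides and the blinded investigator, so the reported range of 0.62-0.89 corresponds to substantial-to-excellent agreement. The choice of linear weights is an assumption for illustration; the abstract does not state which weighting scheme was used.

```latex
\kappa_w \;=\; 1 \;-\; \frac{\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\, O_{ij}}
                            {\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\, E_{ij}},
\qquad
w_{ij} \;=\; \frac{|i-j|}{k-1}.
```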
Answer: The management of pre-hypertension in primary care appears to be inadequate based on the evidence from the provided abstracts. A study evaluating the management of pre-hypertension (pHT) by primary care physicians (PCPs) found that the identification of pHT did not lead to a significant change in follow-up by PCPs, irrespective of blood pressure values in the pHT range (PUBMED:25875919). This suggests that primary care may not be addressing pre-hypertension as aggressively as might be necessary to prevent progression to hypertension. Furthermore, another study highlighted that despite the improvement in the management of hypertension, only 65% of hypertensive patients had achieved the recommended target blood pressure, indicating that family physicians could be treating their hypertensive patients more aggressively with medications (PUBMED:18854471). This could also imply that the management of pre-hypertension, which is a precursor to hypertension, may not be optimal. Additionally, a study on diagnostic errors in primary care pediatrics found that DE or MOD rates in pediatric primary care were 54% for patients with elevated blood pressure, indicating that elevated blood pressure, which includes pre-hypertension, may not be adequately recognized or managed in pediatric primary care (PUBMED:28804050). Moreover, a study on total cardiovascular risk management emphasized that isolated treatment of hypertension is insufficient and that risk factors and target organ damage need to be actively searched for and treated to improve long-term cardiovascular health (PUBMED:17666199). This suggests that a more comprehensive approach to managing pre-hypertension, including addressing other cardiovascular risk factors, is necessary for effective primary care. In contrast, an intervention study showed that a community-based intervention for prehypertensive adults could lead to significant improvements in stroke risk awareness, blood pressure self-monitoring, and regular physical activity (PUBMED:26724242). This indicates that targeted interventions can be effective, but such programs may not be widely implemented in primary care settings. Overall, the evidence suggests that the current management of pre-hypertension in primary care may not be adequate and that there is a need for more proactive and comprehensive approaches to prevent the progression to hypertension and associated cardiovascular risks.
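[Editorial note - illustration only] The risk score in PUBMED:26354334 combines four risk factors (age ≥35 years, current smoking, prehypertension, central obesity) with an optimal cutoff of ≥3, and reports sensitivity, specificity, PPV and NPV. The sketch below is not the published scoring algorithm: it assumes, purely for illustration, one point per factor, and shows how PPV/NPV follow from sensitivity, specificity and incidence via Bayes' rule.

```python
# Illustrative sketch only: assumes one point per risk factor, which the
# abstract does not specify; the published score may weight factors differently.

def risk_points(age_ge_35: bool, current_smoker: bool,
                prehypertension: bool, central_obesity: bool) -> int:
    """Count how many of the four risk factors are present (hypothetical 1 point each)."""
    return sum([age_ge_35, current_smoker, prehypertension, central_obesity])

def high_risk(points: int, cutoff: int = 3) -> bool:
    """Flag a participant as high risk at the reported optimal cutoff of >= 3."""
    return points >= cutoff

def ppv_npv(sens: float, spec: float, incidence: float) -> tuple:
    """Predictive values from sensitivity, specificity and incidence (Bayes' rule)."""
    ppv = sens * incidence / (sens * incidence + (1 - spec) * (1 - incidence))
    npv = spec * (1 - incidence) / (spec * (1 - incidence) + (1 - sens) * incidence)
    return ppv, npv

if __name__ == "__main__":
    print(high_risk(risk_points(True, False, True, True)))  # 3 points -> True
    # With the reported sensitivity/specificity and an assumed incidence of ~24%,
    # these formulas roughly reproduce the reported PPV (41.1%) and NPV (90.8%).
    print(ppv_npv(0.786, 0.652, 0.24))
```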
Instruction: Coronary ostium topography: an implication for transcatheter aortic valve implantation? Abstracts: abstract_id: PUBMED:33768497 Successful transcatheter aortic valve in valve implantation for degenerated trifecta bioprosthesis in a patient with a coronary anomaly. We report a case of transcatheter aortic valve implantation in a 79-year-old woman with a coronary anomaly who underwent surgical aortic valve replacement with a 23-mm Abbott Trifecta bioprosthesis. The procedure was performed in response to severe aortic stenosis caused by a bicuspid aortic valve. Computed tomography showed an anomalous origin of the right coronary artery from the left coronary sinus, with an interarterial course. Although the virtual transcatheter valve-to-coronary ostium distance of the right coronary artery was short, the right coronary artery ostium was just behind the stent post. The externally mounted leaflet was unable to reach the coronary orifice beyond the stent post. This case highlights a successful transcatheter aortic valve implantation for stented bioprostheses with externally mounted leaflets when the virtual transcatheter valve to coronary ostium distance is shortened by a coronary anomaly. abstract_id: PUBMED:30546625 Silent coronary obstruction following transcatheter aortic valve implantation: Detection by transesophageal echocardiography. In several recent guidelines, transcatheter aortic valve implantation (TAVI) has been recommended as a therapeutic option for inoperable or high surgical risk patients with severe aortic stenosis. TAVI has various specific complications that seldom occur in surgical aortic valve replacement. Among them, coronary obstruction (CO) is an infrequent but serious complication. Previous case series have reported symptomatic CO cases diagnosed by hemodynamic instability, electrocardiographic changes, and abnormal findings on aortography. We report a case of silent CO in an 86-year-old female. Monitoring of coronary flow by transesophageal echocardiography led to a diagnosis of CO. Silent CO is probably an underdiagnosed complication of TAVI. <Learning objective: Coronary obstruction is an infrequent but serious complication of transcatheter aortic valve implantation (TAVI). Previous case series have reported only symptomatic coronary obstruction cases diagnosed by hemodynamic instability, electrocardiographic changes, and abnormal findings on aortography. Transesophageal echocardiography monitoring of coronary ostium flow is useful for detecting coronary obstruction. Silent coronary obstruction is probably an underdiagnosed complication of TAVI.>. abstract_id: PUBMED:36840437 Cardiac arrest caused by coronary occlusion during transcatheter aortic valve implantation: a unique cause. Coronary artery occlusion (CAO) is a rare but life-threatening complication of transcatheter aortic valve implantation (TAVI). The mechanism of CAO is the displacement of the native calcified valve leaflet over the coronary ostium. Here, we report on a woman who experienced sudden cardiac arrest and abrupt CAO during TAVI, which was caused by two distinct obstructions: a ruptured aortic plaque or a partial tear of the aortic intima blocking the upper 2/3 of the left main trunk (LMT) ostium, and the transcatheter heart valve (THV) blocking the lower 1/3 of the LMT ostium. She was eventually successfully treated with the chimney stenting technique. Aortography, in addition to coronary angiography, was used to ascertain the CAO.
In patients presenting with abrupt cardiac arrest or cardiogenic shock due to LMT occlusion, prompt identification is essential, and the causes of CAO may be varied and rare. The identification of CAO relies not only on CAG but also on aortography, especially if the locations and origins of obstructions are special. Supportive therapy with an attempt at percutaneous revascularization is necessary. Pre-procedural assessment is crucial prior to TAVI interventions. In cases with high risk of CAO, upfront coronary artery protection can be provided. abstract_id: PUBMED:22967136 Coronary ostium topography: an implication for transcatheter aortic valve implantation? Objectives: Shorter distances from coronary ostia to the calcified aortic valve may result in occlusion with potential infarction during transcatheter aortic valve implantation. We hypothesized that preoperative CT-scan measurements might predict coronary occlusion. Methods: Distances from the coronary ostia to the calcified aortic valve were measured during open heart aortic valve replacement in 60 consecutive patients. Distances were compared with preoperative CT-scan measurements of the coronary ostium distances (n = 15). Results: The distances of the lower lip of the left and the right coronary artery ostia measured from the aortic annulus were 14.7 ± 3.9 mm and 13.4 ± 4.0 mm, respectively. The left, right and noncoronary cusp heights were 13.9 ± 2.5 mm, 12.8 ± 3.0 mm and 13.3 ± 3.1 mm, respectively. Coronary ostia topography indicated variations from the middle to the noncoronary commissure in 40% for the left and 63% for the right coronary ostium. CT-scan based measurements resulted in a distance of 12.8 ± 3.5 mm for the left and 13.9 ± 4.0 mm for the right coronary ostium, compared to 14.2 ± 4.2 mm and 13.5 ± 4.3 mm measured intraoperatively. A mild correlation between both measurements could be observed (r = 0.374, P = 0.188, left and r = 0.46, P = 0.09, n = 15) (an illustrative sketch of this kind of paired-measurement comparison appears after the answer below). Conclusions: CT-scan-based measurements differed from the intraoperative measurements; however, preoperative CT-scan evaluation may be a useful tool to identify patients with a short coronary ostium distance. abstract_id: PUBMED:29468106 A Case of Acute Left Main Coronary Obstruction Following Transcatheter Aortic Valve Implantation. Transcatheter aortic valve implantation (TAVI) is a highly effective procedure in selected patients with severe degenerative aortic valve stenosis at high risk for conventional surgery. Coronary occlusion is a periprocedural life-threatening complication that despite its low frequency (<1%) is poorly predictable and requires immediate diagnosis and treatment. Herein, we report a coronary obstruction after transcatheter implantation of a valve prosthesis, followed by coronary intervention with successful recanalization. abstract_id: PUBMED:36187005 Coronary access following ACURATE neo implantation for transcatheter aortic valve-in-valve implantation: Ex vivo analysis in patient-specific anatomies. Background: Coronary access after transcatheter aortic valve implantation (TAVI) with supra-annular self-expandable valves may be challenging or unfeasible. There is little data concerning coronary access following transcatheter aortic valve-in-valve implantation (ViV-TAVI) for degenerated surgical bioprostheses. Aims: To evaluate the feasibility and challenge of coronary access after ViV-TAVI with the supra-annular self-expandable ACURATE neo valve. Materials And Methods: Sixteen patients underwent ViV-TAVI with the ACURATE neo valve.
Post-procedural computed tomography (CT) was used to create 3D-printed life-sized patient-specific models for bench-testing of coronary cannulation. The primary endpoint was the feasibility of diagnostic angiography and PCI. Secondary endpoints included the incidence of challenging cannulation for both diagnostic catheters (DC) and guiding catheters (GC). The association of challenging cannulation with aortic and transcatheter/surgical valve geometry was evaluated using pre- and post-procedural CT scans. Results: Diagnostic angiography and PCI were feasible for 97% and 95% of models, respectively. All non-feasible procedures occurred in ostia that underwent prophylactic "chimney" stenting. DC cannulation was challenging in 17% of models and was associated with a narrower SoV width (30 vs. 35 mm, p < 0.01), STJ width (28 vs. 32 mm, p < 0.05) and shorter STJ height (15 vs. 17 mm, p < 0.05). GC cannulation was challenging in 23% of models and was associated with narrower STJ width (28 vs. 32 mm, p < 0.05), smaller transcatheter-to-coronary distance (5 vs. 9.2 mm, p < 0.05) and a worse coronary-commissural overlap angle (14.3° vs. 25.6°, p < 0.01). Advanced techniques to achieve GC cannulation were required in 22/64 (34%) of cases. Conclusion: In this exploratory bench analysis, diagnostic angiography and PCI were feasible in almost all cases following ViV-TAVI with the ACURATE neo valve. Prophylactic coronary stenting, higher implantation, narrower aortic sinus dimensions and commissural misalignment were associated with an increased challenge of coronary cannulation. abstract_id: PUBMED:35079312 Left coronary ostial stenosis developing 15 months after transcatheter aortic valve replacement with balloon-expandable valve. We present the case of an 82-year-old man whose left coronary ostium became obstructed 15 months after transcatheter aortic valve replacement (TAVR) with a balloon-expandable valve. The patient underwent TAVR for symptomatic severe aortic stenosis with no complications. Fifteen months after the initial TAVR, the patient complained of chest pain while exercising, and exercise stress myocardial perfusion scintigraphy demonstrated the development of regional myocardial ischemia in the region of the left coronary artery. Coronary angiography implied severe stenosis in the ostium of the left coronary artery. Computed tomography angiography and intravascular ultrasonography indicated a soft tissue component along the stent struts, which was considered to have caused delayed coronary obstruction. Our report emphasizes the importance of having a low threshold for clinically suspecting delayed coronary obstruction in patients who have undergone TAVR, even several years after the procedure. <Learning objective: Delayed coronary obstruction (DCO) should be suspected in patients presenting with new ischemic symptoms after transcatheter aortic valve replacement (TAVR). DCO may occur even in the case of TAVR with a balloon-expandable prosthetic valve, on antithrombotic regimens, and several years after the initial procedure.>. abstract_id: PUBMED:31854041 Transcatheter aortic valve implantation 10 years after valve-in-valve transcatheter aortic valve implantation for failing aortic valve homograft root replacement. Valve-in-valve transcatheter aortic valve implantation (ViV-TAVI) is an established therapy for a degenerated surgical bioprosthesis. TAVI-in-TAVI following ViV-TAVI has not been previously performed.
We report a high-risk patient presenting with severe left ventricular failure secondary to undiagnosed critical aortic stenosis due to degeneration of the implanted transcatheter heart valve more than a decade after initial ViV-TAVI for a failing stentless aortic valve homograft. Successful TAVI-in-TAVI reversed the clinical and echocardiographic changes of decompensated heart failure with no evidence of coronary obstruction. abstract_id: PUBMED:25589972 In vitro study of coronary flow occlusion in transcatheter aortic valve implantation. Background: Transcatheter aortic valve implantation (TAVI) has been developed recently for patients with high morbidity who are believed to be unable to tolerate standard surgical aortic valve replacement. Although it seems promising, TAVI is associated with complications such as potential obstruction of the coronary ostia, mitral valve insufficiency, and stent migration. Impairment of coronary blood flow after TAVI is catastrophic and is believed to be associated with the close proximity of the coronary orifice to the aortic leaflets and valve stent. However, few data were available on the anatomic relationship between the valve stent and aortic root structures, including the coronary arterial ostia and aortic leaflets. Methods: The aortic roots were observed in 40 heart specimens. The width of each aortic leaflet, the height from the aortic sinus annulus to the sinutubular junction (STJ), the distance from the aortic sinus annulus to its corresponding coronary ostium, and the distance from each coronary arterial ostium to its corresponding STJ level were measured. Moreover, the relationships among the valve stent, aortic leaflets and coronary ostia were evaluated before and after stent implantation and after opening of the aorta. Results: Approximately three quarters of the coronary ostia were located below the STJ level. The mean distances from the left, right and posterior aortic sinus annulus to the related STJ level were comparable: 18.5±2.7, 18.9±2.6 and 18.7±2.6 mm, respectively. Meanwhile, the heights from the aortic sinus annulus to the corresponding coronary ostium were 16.6±2.8 mm and 17.2±3.1 mm for the left and right sides, respectively. Conclusions: Most of the coronary ostia were located below the STJ level and could be covered by the leaflets. This highlights the need for modified stents to prevent occlusion of coronary flow after TAVI. abstract_id: PUBMED:28433356 Aortic sinus flow stasis likely in valve-in-valve transcatheter aortic valve implantation. Objective: Valve-in-valve procedures using transcatheter aortic valves are increasingly performed to treat degenerated bioprosthetic surgical aortic valves because they are less invasive than redo aortic valve replacement. The objective of this study is to quantify the changes in aortic sinus blood flow dynamics before and after a valve-in-valve procedure to gain insight into mechanisms for clinical and subclinical thrombosis of leaflets. Methods: A detailed description of the sinus hemodynamics for valve-in-valve implantation was performed in vitro. A Medtronic Hancock II (Medtronic Inc, Minneapolis, Minn) porcine bioprosthesis was modeled as a surgical aortic valve, and Medtronic CoreValve and Edwards Sapien (Edwards Lifesciences, Irvine, Calif) valves were used as the transcatheter aortic valves. High-resolution particle image velocimetry was used to compare the flow patterns from these 2 valves within both the left coronary and noncoronary sinuses in vitro.
Results: Velocity and vorticity within the surgical valve sinuses reached peak values of 0.7 m/s and 1000 s⁻¹, with a 70% decrease in peak fluid shear stress near the aortic side of the leaflet in the noncoronary sinus. With the introduction of transcatheter aortic valves, peak velocity and vorticity were reduced to approximately 0.4 m/s and 550 s⁻¹ and 0.58 m/s and 653 s⁻¹ without coronary flow and 0.60 m/s and 631 s⁻¹ and 0.81 m/s and 669 s⁻¹ with coronary flow for the CoreValve and Sapien valve-in-valve implantations, respectively. Peak shear stress was approximately 38% higher along the aortic side of the coronary versus noncoronary transcatheter aortic valve leaflet. Conclusions: Decreased flow and shear stress in valve-in-valve procedures indicate a higher risk of leaflet thrombosis secondary to flow stasis, perhaps more so in the noncoronary sinus. Answer: Coronary ostium topography is indeed an important consideration for transcatheter aortic valve implantation (TAVI), as it can have implications for the risk of coronary obstruction (CO), which is a serious but infrequent complication of the procedure. Shorter distances from the coronary ostia to the calcified aortic valve may result in occlusion with potential infarction during TAVI (PUBMED:22967136). Preoperative CT-scan measurements might predict coronary occlusion by evaluating the distance of the coronary ostia, although CT-scan-based measurements may differ from intraoperative measurements (PUBMED:22967136). Coronary artery occlusion (CAO) during TAVI is a rare but life-threatening complication, often caused by the displacement of the native calcified valve leaflet over the coronary ostium (PUBMED:36840437). In some cases, silent CO may occur without typical symptoms, and transesophageal echocardiography monitoring of coronary ostium flow can be useful for detecting this complication (PUBMED:30546625). The feasibility of coronary access after TAVI, especially in valve-in-valve procedures, can be influenced by factors such as the height of the sinotubular junction (STJ), the width of the aortic sinus, and the distance between the transcatheter valve and the coronary ostia. Bench-testing using 3D-printed life-sized patient-specific models can help evaluate the challenge of coronary cannulation after TAVI (PUBMED:36187005). In some cases, delayed coronary obstruction (DCO) can occur months or even years after TAVI, emphasizing the importance of clinical vigilance for new ischemic symptoms in patients who have undergone the procedure (PUBMED:35079312). Additionally, the risk of leaflet thrombosis secondary to flow stasis in valve-in-valve procedures may be higher, particularly in the noncoronary sinus (PUBMED:28433356). Overall, coronary ostium topography is a critical factor in TAVI planning and execution, and careful pre-procedural assessment, including CT imaging, is crucial to minimize the risk of coronary complications (PUBMED:22967136).
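[Editorial note - illustration only] PUBMED:22967136 compares CT-based and intraoperative measurements of coronary ostium distance and reports a mild Pearson correlation (r = 0.374 on the left side, n = 15). The sketch below shows how such a paired comparison can be computed; the distances are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch only: the values below are hypothetical placeholders,
# not data from PUBMED:22967136.
import numpy as np

ct_mm = np.array([12.1, 14.0, 10.5, 15.2, 13.3, 11.8, 16.0, 12.9])       # CT-based distances (mm)
intraop_mm = np.array([13.0, 14.8, 11.9, 14.1, 15.0, 12.2, 15.5, 14.4])  # intraoperative distances (mm)

r = np.corrcoef(ct_mm, intraop_mm)[0, 1]   # Pearson correlation between the two methods
bias = np.mean(ct_mm - intraop_mm)         # mean CT-minus-intraoperative difference (agreement bias)

print(f"Pearson r = {r:.2f}, mean bias = {bias:.1f} mm")
```

A Bland-Altman-style analysis of the paired differences would arguably say more about the clinical interchangeability of the two measurement methods than the correlation coefficient alone.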
Instruction: Is gender responsible for everything? Abstracts: abstract_id: PUBMED:31372452 Nigeria's preparedness for internet of everything: A survey dataset from the work-force population. The article presents statistical facts on Nigeria's preparedness for Internet of everything. Copies of a structured questionnaire were administered to 163 workers in Lagos State. Using descriptive statistics and charts (bar chart and histogram), the paper revealed that most of the respondents are aware of the concept of internet of everything, perceive that Nigeria is prepared for an internet enabled society and already have devices that can help them access the internet from where they are. Moreover, the challenges of cost, modern technology and signal coverage appear to be the greatest areas that should be addressed in the drive for an internet enabled society in Nigeria. abstract_id: PUBMED:31580979 The practice and perceptions of RRI-A gender perspective. Little is known to date about the practice and perceptions of RRI among researchers in Europe as well as the integration of the gender dimension into everyday RRI practices. This lack was addressed by two large-scale surveys that were launched in the course of the EU-funded MoRRI project (Monitoring the evolution and benefits of RRI, Contract number RTD-B6-PP-00964-2013, Duration 09/2013-03/2018). The analysis shows that the institutional environment positively influences the degree of RRI activities and the general attitudes towards more responsible research and innovation: researchers working in an institutional environment that systematically supports the practice of RRI are more active in RRI practices than researchers who do not rely on such structures. For the gender equality dimension, this means that institutions with a gender equality plan (GEP) in place are more inclined to support female researchers than institutions without such institutional incentives. Furthermore, researchers with experiences in EU-funded projects are more likely to be engaged in RRI activities. Even if female researchers have a stronger inclination to engage with society than their male counterparts, gender competence proves to be the relevant distinguishing criterion. Gender competent researchers are more often involved in other RRI activities. abstract_id: PUBMED:37525468 Why and how to incorporate issues of race/ethnicity and gender in research integrity education. With the increasing focus on issues of race/ethnicity and sex/gender across the spectrum of human activity, it is past time to consider how instruction in research integrity should incorporate these topics. Until very recently, issues of race/ethnicity and sex/gender have not typically appeared on any conventional lists of research integrity or responsible conduct of research (RCR) topics in the United States or, likely, other countries as well. However, I argue that not only can we incorporate these issues, we should do so to help accomplish some of the central goals of instruction in research integrity. I also offer some initial suggestions about where and how to incorporate them within familiar topics of instruction. abstract_id: PUBMED:15132489 Kind of a drag: gender, race, and ambivalence in The Birdcage and To Wong Foo, Thanks for Everything! Julie Newmar. This paper examines the ways in which two Hollywood films featuring drag queens, To Wong Foo, Thanks for Everything!
Julie Newmar and The Birdcage, offer a kind of "both/and" look at the complexities of gender, sexuality, race, and culture, simultaneously challenging some institutionalized attitudes (especially heterosexism) while reinforcing others (especially sexism and racism)--making the use of drag as a locus of discovery in both films, at best, ambivalent. abstract_id: PUBMED:35438550 TSIUNAS: A Videogame for Preventing Gender-Based Violence. Background: Gender-based violence (GBV) is a public health problem worldwide. Nonetheless, in rural areas and low-income countries, the problem may be more difficult to eradicate because there are stereotypes that reinforce negative attitudes toward women, which increase the severity of the problem. Goal: This article presents the development of "Tsiunas," a videogame designed to increase GBV awareness. By implementing game situations that represent attitudes and beliefs that justify unequal and violent relationships between men and women, this videogame attempts to transform attitudes and thoughts entrenched in a patriarchal society model. Results: Tsiunas was evaluated in two phases to: (1) validate the usability and stability of the game and (2) validate the potential change in students' perception regarding GBV situations and recognition of co-responsible masculinities. Both evaluations were carried out through surveys. The results showed that students had a high level of acceptance and appropriation of the content and message of the videogame. Conclusions: The findings allowed us to conclude that the game situations presented in Tsiunas influenced changes of opinion in men and women regarding entrenched beliefs about patriarchal patterns, tolerance levels of violence against women, and attitudes toward violence against women. Likewise, the videogame supported the recognition of co-responsible masculinities. abstract_id: PUBMED:31441281 Who is Responsible for Responsible Innovation? Lessons From an Investigation into Responsible Innovation in Health Comment on "What Health System Challenges Should Responsible Innovation in Health Address? Insights From an International Scoping Review". Responsible innovation in health (RIH) takes the ideas of responsible research and innovation (RRI) and applies them to the health sector. This comment takes its point of departure from Lehoux et al, which describes a structured literature review to determine the system-level challenges that health systems in countries at different levels of human development face. This approach offers interesting insights from the perspective of RRI, but it also raises the question of whether and how RRI can be steered and achieved across healthcare systems. This includes the question of who, if anybody, is responsible for responsible innovation and which insights can be drawn from the systemic nature of RIH. abstract_id: PUBMED:33155825 The Meaning of "Doing Everything". Evaluating a high-risk patient for a high-risk operation is complicated. Discussing the benefits and burdens with your patient is only a part of the process. Should the decision be not to operate, explaining and planning the nonoperative path forward with all of its inherent challenges is crucial. Because the inevitable is likely to happen, the patient as well as their family must be prepared. If they are not, then the result may be exactly what the patient was hoping to avoid in the first place by declining the operation.
Critical to this conversation is understanding the nuances of "doing everything" when dealing with a patient facing a life-limiting condition. abstract_id: PUBMED:27038814 Is Fear to Intervene with Problem Gamblers Related to Interveners' Gender and Status? A Study with VLT Operators. We assess how video lottery terminal (VLT) operators self-perceive their ability to recognize a problem gambler, to what extent they are approached by problem gamblers seeking assistance, how many detections and interventions they report, and the reasons they give for not intervening with clients who show signs of problem gambling. We also examine how these variables are related to the operators' gender and status in the establishment. 177 VLT operators anonymously completed a structured questionnaire at the beginning of a responsible gambling training class held in different French-speaking Swiss towns. The operators felt confident in their ability to detect problem gambling behaviors, were rarely approached by problem gamblers seeking assistance, and reported fewer interventions compared to the number of detections. This reluctance to intervene was mainly attributed to the fear of potential negative reactions from the client. Female staff were the most reluctant to intervene and the most fearful of potential negative reactions from the client. Responsible gambling training programs should include coping strategies for dealing with potential negative reactions from clients. Our findings suggest that staff gender and status are two individual characteristics that should be taken into account when planning responsible gambling training. abstract_id: PUBMED:27883302 Affirmative and Responsible Health Care for People with Nonconforming Gender Identities and Expressions. Although recent changes in health care delivery have improved routine and gender-affirming care for transgender people, common approaches to care are still often based on a binary (i.e., male/female) gender framework that can make patients with gender-nonconforming (GNC) identities and expressions feel marginalized. Binary representation perpetuates invisibility, discrimination, and victimization, and subsequent poorer health, among GNC patients. In response, clinicians and health care systems should extend their efforts to provide gender-affirming and responsible care to GNC people. This article reviews terminology related to gender, reviews the limited research on GNC communities (and necessary directions for future research), and provides strategies for health care professionals and systems to ensure provision of gender-affirming and responsible care to GNC patients. abstract_id: PUBMED:36146274 PINE: Post-Quantum Based Incentive Technique for Non-Cooperating Nodes in Internet of Everything. The Internet of Everything (IoE) is a smart system that interconnects smart entities by incorporating low-cost or low-energy gadgets that are useful for communication with people, processes, data, and devices/things. In such an instantaneously connected environment, network-enabled heterogeneous devices may exhibit non-cooperative behaviour which may lead to the degradation of the network.
To address this performance degradation, the proposed Post-quantum based Incentive technique for Non-cooperating nodes in internet of Everything (PINE) protocol provides an end-to-end reliable solution by incorporating location-aware post-quantum encryption in these networks while addressing the non-cooperative behaviour of the nodes by employing an effective strategy in a bi-directional multi-hop relay environment. This proposed protocol further aims to evaluate the consequences of non-cooperative nodes by considering various metrics, namely, number of nodes, message size, execution time, memory consumption, average residual energy, percentage of selfish nodes, and blackhole nodes detection, aiming to achieve significant accuracy in an IoE environment. Answer: The question "Is gender responsible for everything?" is quite broad and can be interpreted in various ways. However, based on the provided abstracts, we can infer that gender plays a significant role in various aspects of society and research, but it is not solely responsible for everything. For instance, gender perspectives are important in the practice and perceptions of responsible research and innovation (RRI), where institutions with gender equality plans are more supportive of female researchers (PUBMED:31580979). Additionally, gender competence among researchers leads to more involvement in RRI activities, indicating that gender does have a notable impact in the research environment. In the context of research integrity education, incorporating issues of race/ethnicity and gender is argued to be essential to achieve the central goals of instruction in research integrity (PUBMED:37525468). This suggests that gender is a critical factor to consider in educational settings to ensure responsible conduct in research. Gender-based violence (GBV) is another area where gender plays a crucial role. The development of a videogame called "Tsiunas" aimed to transform attitudes and thoughts entrenched in a patriarchal society model, indicating that gender stereotypes and attitudes contribute to GBV and need to be addressed (PUBMED:35438550). In the realm of healthcare, providing affirmative and responsible care to people with nonconforming gender identities and expressions is necessary to avoid marginalization and discrimination (PUBMED:27883302). This shows that gender considerations are vital in healthcare delivery. However, gender is not the only factor at play in these contexts. Other factors such as institutional support, societal attitudes, and individual competencies also contribute to the outcomes in research, education, healthcare, and societal issues. In summary, while gender is a significant and influential factor in many areas, it is not responsible for everything. It interacts with a multitude of other factors to shape experiences and outcomes in various domains (PUBMED:31372452, PUBMED:31580979, PUBMED:37525468, PUBMED:35438550, PUBMED:27883302).
Instruction: Parental criticism and adolescent depression: does adolescent self-evaluation act as a mediator? Abstracts: abstract_id: PUBMED:19703331 Parental criticism and adolescent depression: does adolescent self-evaluation act as a mediator? Background: A better understanding of relationships between adolescent depression and family functioning may help in devising ways to prevent development of depression and design effective therapeutic interventions. Aims: This study explored the relationship of parental emotional attitudes, (perceived criticism and expressed emotion) to adolescent self-evaluation and depression. Methods: A sample of 28 clinic-referred adolescents and their mothers participated. The Five Minute Speech Sample was used to measure parental expressed emotion, and the adolescents completed the Children's Depression Inventory, Self-Perception Profile for Children global self-worth scale, a self-criticism scale and a perceived parental criticism scale. Results: There was partial support for a model of adolescent negative self-evaluation as a mediator in the relationship between parental emotional attitudes and adolescent depressive symptoms. The data also supported an alternative hypothesis whereby adolescent depressive symptoms are related to negative self-evaluation. Conclusions: The overall pattern of results emphasizes the significance of adolescents' perceptions of parental criticism, rather than actual levels, in understanding the relationship between parental emotional attitudes and adolescent depressive symptoms. abstract_id: PUBMED:35891892 Parental psychological control and adolescents depression during the COVID-19 pandemic: the mediating and moderating effect of self-concept clarity and mindfulness. During the COVID-19 pandemic, the mental health state of adolescents had caused widespread concern, especially the various problems caused by the relationship between adolescents and their parents in the long isolation at home. Based on the mindfulness reperceiving model and Rogers's Self-theory, this study aimed to explore the roles of adolescents' self-concept clarity and mindfulness level in the relationship between parental psychological control and adolescent depression. A total of 1,100 junior high school students from China completed the questionnaires regarding parental psychological control, depression, self-concept clarity, and mindfulness. Moderated mediation analyses suggest that parental psychological control affects adolescent depression via self-concept clarity. The association between parental psychological control and depression is moderated by self-concept clarity. The effect was stronger among adolescents with high mindfulness levels than those with low. This study suggests that it is necessary to consider both parental factors and adolescents' factors in the future. The interventions on self-concept or mindfulness may ameliorate adolescent mental problems more effectively. abstract_id: PUBMED:32846324 Non-suicidal self-injury in adolescence: Longitudinal evidence of recursive associations with adolescent depression and parental rejection. Introduction: Non-suicidal self-injury (NSSI) has been acknowledged as a major public health concern among adolescents. However, the complex association between parental rejection and NSSI is not entirely understood and the existing literature does not address the underlying mechanism of adolescent depressive symptoms in explaining the process. 
Methods: Three waves of data (called T1, T2 and T3) were collected 6 months apart, between November 2018 and 2019, in a sample of 1987 Chinese adolescents (56.1% males; ages 10 to 14, M = 12.32, SD = 0.53). Two separate autoregressive cross-lagged models were used to examine the bidirectional association between parental rejection and NSSI as well as the role of depressive symptoms in bidirectional mediation. Results: There was strong evidence of bidirectional effects between parental rejection and NSSI at both 6-month intervals. Parental rejection at T1/T2 positively predicted NSSI at T2/T3, and, vice versa, NSSI at T1/T2 positively predicted parental rejection at T2/T3. Furthermore, we found that the reciprocal association between parent rejection and NSSI was mediated by adolescent depressive symptoms. Conclusions: The present study found reciprocal associations between parental rejection and NSSI, and further demonstrated that the bidirectional process was mediated by depressive symptoms. The findings from this study are of great interest as they help to inform the development of future prevention and intervention strategies for NSSI. abstract_id: PUBMED:32012072 Tracking and Predicting Depressive Symptoms of Adolescents Using Smartphone-Based Self-Reports, Parental Evaluations, and Passive Phone Sensor Data: Development and Usability Study. Background: Depression carries significant financial, medical, and emotional burden on modern society. Various proof-of-concept studies have highlighted how apps can link dynamic mental health status changes to fluctuations in smartphone usage in adult patients with major depressive disorder (MDD). However, the use of such apps to monitor adolescents remains a challenge. Objective: This study aimed to investigate whether smartphone apps are useful in evaluating and monitoring depression symptoms in a clinically depressed adolescent population compared with the following gold-standard clinical psychometric instruments: Patient Health Questionnaire (PHQ-9), Hamilton Rating Scale for Depression (HAM-D), and Hamilton Anxiety Rating Scale (HAM-A). Methods: We recruited 13 families with adolescent patients diagnosed with MDD with or without comorbid anxiety disorder. Over an 8-week period, daily self-reported moods and smartphone sensor data were collected by using the Smartphone- and OnLine usage-based eValuation for Depression (SOLVD) app. The evaluations from teens' parents were also collected. Baseline depression and anxiety symptoms were measured biweekly using PHQ-9, HAM-D, and HAM-A. Results: We observed a significant correlation between the self-evaluated mood averaged over a 2-week period and the biweekly psychometric scores from PHQ-9, HAM-D, and HAM-A (0.45≤|r|≤0.63; P=.009, P=.01, and P=.003, respectively). The daily steps taken, SMS frequency, and average call duration were also highly correlated with clinical scores (0.44≤|r|≤0.72; all P<.05). By combining self-evaluations and smartphone sensor data of the teens, we could predict the PHQ-9 score with an accuracy of 88% (23.77/27). When adding the evaluations from the teens' parents, the prediction accuracy was further increased to 90% (24.35/27). Conclusions: Smartphone apps such as SOLVD represent a useful way to monitor depressive symptoms in clinically depressed adolescents, and these apps correlate well with current gold-standard psychometric instruments. 
This is a first study of its kind that was conducted on the adolescent population, and it included inputs from both teens and their parents as observers. The results are preliminary because of the small sample size, and we plan to expand the study to a larger population. abstract_id: PUBMED:37941014 'I am tired, sad and kind': self-evaluation and symptoms of depression in adolescents. Introduction: Although self-evaluation i.e., negative perceptions of the self is a common depression symptom in adolescents, little is known about how this population spontaneously describe their self and available data on adolescent self-evaluation is limited. This study aimed to generate and report on a list of words used by healthy adolescents and those with elevated depression symptoms to describe their self-evaluation. Linguistic analysis (LIWC) was then used to compare self-evaluation between the two groups. Methods: Adolescents aged 13-18 years (n = 549) completed a measure of depression symptoms (the Mood and Feelings Questionnaire) and a measure of self-evaluation (the Twenty Statements Test). Responses were then collated and presented in a freely accessible resource and coded using Linguistic Inquiry Word Count (LIWC) analysis. Results: Self-evaluation words generated by adolescents were uploaded to a publicly accessible site for future research: https://doi.org/10.15125/BATH-01234 . Adolescents with elevated depression symptoms described themselves as 'Tired' and 'Sad' more than healthy adolescents. However, there was no difference between groups in respect to their use of specific positive, prosocial self-evaluation 'words' (i.e., 'Caring' and 'Kind). Following Linguistic Inquiry Word Count (LIWC) analysis, adolescents with elevated depression symptoms generated significantly more words than healthy adolescents, generated more words classified as negative emotion, anxiety and sadness and generated fewer words classified positive emotion than healthy adolescents. Conclusions: As predicted by the cognitive model of depression, our findings suggest that adolescents with elevated symptoms of depression generated more negative self-evaluation words than healthy adolescents; however they also generated prosocial positive self-evaluation words at the same rate as non-depressed adolescents. These novel data therefore identify an 'island' of resilience that could be targeted and amplified by psychological treatments for adolescent depression, and thus provide an additional technique of change. abstract_id: PUBMED:33185477 How Does Parental Smartphone Addiction Affect Adolescent Smartphone Addiction?: Testing the Mediating Roles of Parental Rejection and Adolescent Depression. Little has been known about the mechanisms underlying parental smartphone addiction (PSA) and adolescent smartphone addiction (ASA). This study examined whether PSA predicts ASA and investigated the mediating roles of parental rejection (PR) and adolescent depression (ADP) among a sample of 4,415 parent-child dyads. Analysis of a serial multiple-mediator model indicated that PSA positively predicted ASA (B = 0.13, SE = 0.02, 95% confidence interval [CI] = 0.09-0.16). In addition, PR and ADP sequentially mediated the link between PSA and ASA (B = 0.01, 95% boot CI = 0.01-0.02). Implications of the findings and directions for future research are discussed. abstract_id: PUBMED:25642779 The role of parental self-efficacy in adolescent school-refusal. 
Parental characteristics such as psychopathology and parenting practices are understood to be implicated in school-refusal presentations. Expanding upon these largely affective and behavioral factors, the present study sought to examine the role of a parenting cognitive construct--parenting self-efficacy--in understanding school-refusal. School-refusing adolescents (n = 60, 53% male) and school-attending adolescents (n = 46, 39% male) aged 12-17 years (M = 13.93, SD = 1.33), along with a parent, participated in the study. Participants completed study measures of demographics, psychopathology, overall family functioning, and parenting self-efficacy. As expected, parents of school-refusing adolescents were found to have lower levels of parental self-efficacy than parents of school-attending adolescents. Parenting self-efficacy was inversely associated with parent and adolescent psychopathology as well as family dysfunction. Logistic regression analyses determined parenting self-efficacy to be a predictor of school-refusal. However, upon controlling for related constructs including family dysfunction, adolescent depression, and parent depression, the predictive capacity of parenting self-efficacy was eliminated. Taken together, the results highlight the likely complex relationships between parental self-efficacy, familial psychopathology, and dysfunctional family processes within this population. Research is required to further delineate these dynamic relationships among families of school-refusing adolescents. abstract_id: PUBMED:34861833 Self-evaluation as an active ingredient in the experience and treatment of adolescent depression; an integrated scoping review with expert advisory input. Background: Negative self-perception is one of the most common symptoms of depression in young people and has been found to be strongly associated with severity of depression symptoms. Psychological treatments for adolescent depression are only moderately effective. Understanding the role and importance of these self-perceptions may help to inform and improve treatments. The aim of this review was to examine self-evaluation as a characteristic of adolescent depression, and as an active ingredient in treatment for adolescent depression. Methods: We conducted a scoping review which included quantitative and qualitative studies of any design that reported on self-evaluation as a characteristic of, or focus of treatment for, adolescent depression. Participants were required to be 11-24 years and experiencing elevated symptoms of depression or a diagnosis. We also met with 14 expert advisory groups of young people with lived experience, clinicians, and researchers, for their input. Findings from 46 peer-reviewed research studies are presented alongside views of 64 expert advisors, to identify what is known and what is missing in the literature. Results: Three overarching topics were identified following the review and reflections from advisors: 1) What does it look like? 2) Where does it come from? and 3) How can we change it? The literature identified that young people view themselves more negatively and less positively when depressed; however, expert advisors explained that view of self is complex and varies for each individual.
Literature identified preliminary evidence of a bidirectional relationship between self-evaluation and depression, however, advisors raised questions regarding the influences and mechanisms involved, such as being influenced by the social environment, and by the cognitive capacity of the individual. Finally, there was a consensus from the literature and expert advisors that self-evaluation can improve across treatment. However, research literature was limited, with only 11 identified studies covering a diverse range of interventions and self-evaluation measures. Various barriers and facilitators to working on self-evaluation in treatment were highlighted by advisors, as well as suggestions for treatment approaches. Conclusions: Findings indicate the importance of self-evaluation in adolescent depression, but highlight the need for more research on which treatments and treatment components are most effective in changing self-evaluation. abstract_id: PUBMED:29799126 The combined influence of cognitions in adolescent depression: Biases of interpretation, self-evaluation, and memory. Objectives: Depression is characterized by a range of systematic negative biases in thinking and information processing. These biases are believed to play a causal role in the aetiology and maintenance of depression, and it has been proposed that the combined effect of cognitive biases may have greater impact on depression than individual biases alone. Yet little is known about how these biases interact during adolescence when onset is most common. Methods: In this study, adolescents were recruited from the community (n = 212) and from a Child And Adolescent Mental Health Service (n = 84). Participants completed measures of depressive symptoms, interpretation bias, self-evaluation, and recall memory. These included the Mood and Feelings Questionnaire, Ambiguous Scenarios Test for Depression in Adolescents, Self-Description Questionnaire, and an immediate recall task. The clinically referred sample also took part in a formal diagnostic interview. Results: Individual cognitive biases were significantly intercorrelated and associated with depression severity. The combination of cognitive biases was a stronger predictor of depression severity than individual biases alone, predicting 60% of the variance in depression severity across all participants. There were two significant predictors, interpretation bias and negative self-evaluation; however, almost all of the variance was explained by negative self-evaluation. Conclusions: The findings support the interrelationship and additive effect of biases in explaining depression and suggest that understanding the way in which cognitive biases interact could be important in advancing methods of identification, early intervention, and treatment. Practitioner Points: A combination of biases was a better predictor of depression symptom severity than individual biases. Interpretation and self-evaluation were better predictors of depression symptom severity than recall. Negative self-evaluation was the strongest individual predictor of depression symptom severity. Negative self-evaluation was able to classify depressed from non-depressed adolescents. The cross-sectional design of the study precludes any conclusions about the potential causal role of these variables. Different tasks were used to assess different types of cognitive bias meaning that the possible linear operation along an information processing 'pathway' could not be examined. 
abstract_id: PUBMED:24757914 Self-efficacy: a mediator of smoking behavior and depression among college students. Cigarette smoking is a growing problem among adolescents. This correlational study tested theoretical relationships between the dependent variable (smoking behavior) and the independent variables (depression and smoking resistance self-efficacy) in a convenience sample of 364 college students ages 18 to 21 years recruited from a large urban public college. An a priori mediational model tested the role of smoking resistance self-efficacy as a mediator in the relationship between smoking behavior and depression. Findings showed there was a statistically significant positive relationship between depression and smoking behavior (r = 0.122, p = 0.01). There was a statistically significant negative relationship between smoking resistance self-efficacy and smoking behavior (r = -0.744, p = 0.01). Additionally, smoking resistance self-efficacy was a mediator of the relationship between depression and smoking behavior (beta = -0.757, p = 0.001). This study identifies a need for further theory-driven study of the relation of adolescent depression and smoking behavior. The findings of this study have implications for nursing interventions targeted to both current smokers and smoking initiation prevention programs. Answer: Yes, adolescent self-evaluation acts as a mediator in the relationship between parental criticism and adolescent depression. The study by PUBMED:19703331 found partial support for a model where negative self-evaluation by the adolescent mediates the relationship between parental emotional attitudes, such as perceived criticism and expressed emotion, and adolescent depressive symptoms. This suggests that how adolescents perceive themselves can influence the impact of parental criticism on their depressive symptoms. The study emphasizes the significance of adolescents' perceptions of parental criticism in understanding the relationship between parental emotional attitudes and adolescent depressive symptoms. Additionally, PUBMED:35891892 supports the notion that self-concept clarity mediates the effect of parental psychological control on adolescent depression, with the association being moderated by mindfulness. This indicates that a clear self-concept can buffer the negative impact of parental psychological control on depression, particularly among adolescents with higher levels of mindfulness. Furthermore, PUBMED:32846324 provides evidence that adolescent depressive symptoms mediate the bidirectional relationship between parental rejection and non-suicidal self-injury (NSSI), which is a behavior often associated with adolescent depression. This suggests that depressive symptoms can be both a consequence and a contributing factor to the negative cycle of parental rejection and NSSI. In summary, adolescent self-evaluation, including aspects like self-concept clarity and negative self-perceptions, acts as a mediator in the relationship between various forms of parental criticism or rejection and adolescent depression. These findings highlight the importance of considering adolescents' self-evaluation in the context of parental influences when addressing adolescent depression.
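The question in this record turns on a statistical mediation test: whether the effect of parental criticism on depressive symptoms runs through the adolescent's negative self-evaluation. A common way to quantify this is the product-of-coefficients (indirect-effect) approach with a bootstrap confidence interval. The sketch below is a minimal, hypothetical illustration on simulated data; the variable names and effect sizes are assumptions, and it does not reproduce the models or data of the cited studies (e.g., PUBMED:19703331).

```python
# Minimal product-of-coefficients mediation sketch on SIMULATED data.
# Variable names (criticism, self_eval, depression) are hypothetical and are not
# taken from any of the cited studies.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Simulate a mediation structure: criticism -> negative self-evaluation -> depression.
criticism = rng.normal(size=n)
self_eval = 0.5 * criticism + rng.normal(size=n)                      # path a
depression = 0.4 * self_eval + 0.1 * criticism + rng.normal(size=n)   # paths b and c'

# Path a: mediator regressed on the predictor.
a_model = sm.OLS(self_eval, sm.add_constant(criticism)).fit()
# Paths b and c': outcome regressed on mediator and predictor together.
X = sm.add_constant(np.column_stack([self_eval, criticism]))
b_model = sm.OLS(depression, X).fit()

a = a_model.params[1]
b = b_model.params[1]
indirect = a * b  # indirect (mediated) effect

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_b = sm.OLS(self_eval[idx], sm.add_constant(criticism[idx])).fit().params[1]
    Xb = sm.add_constant(np.column_stack([self_eval[idx], criticism[idx]]))
    b_b = sm.OLS(depression[idx], Xb).fit().params[1]
    boot.append(a_b * b_b)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

A bootstrap interval that excludes zero is the usual evidence that the indirect (mediated) path is non-trivial; the cited studies used their own model variants, so this only illustrates the general technique.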
Instruction: Radiation-Associated Toxicities in Obese Women with Endometrial Cancer: More Than Just BMI? Abstracts: abstract_id: PUBMED:26146653 Radiation-Associated Toxicities in Obese Women with Endometrial Cancer: More Than Just BMI? Purpose: The study characterizes the impact of obesity on postoperative radiation-associated toxicities in women with endometrial cancer (EC). Material And Methods: A retrospective study identified 96 women with EC referred to a large urban institution's radiation oncology practice for postoperative whole pelvic radiotherapy (WPRT) and/or intracavitary vaginal brachytherapy (ICBT). Demographic and clinicopathologic data were obtained. Toxicities were graded according to RTOG Acute Radiation Morbidity Scoring Criteria. Follow-up period ranged from 1 month to 11 years (median 2 years). Data were analyzed by χ(2), logistic regression, and recursive partitioning analyses. Results: 68 EC patients who received WPRT and/or ICBT were analyzed. Median age was 52 years (29-73). The majority were Hispanic (71%). Median BMI at diagnosis was 34.5 kg/m(2) (20.5-56.6 kg/m(2)). BMI was independently associated with radiation-related cutaneous (p = 0.022) and gynecologic-related (p = 0.027) toxicities. Younger women also reported more gynecologic-related toxicities (p = 0.039). Adjuvant radiation technique was associated with increased gastrointestinal- and genitourinary-related toxicities but not gynecologic-related toxicity. Conclusions: Increasing BMI was associated with increased frequency of gynecologic and cutaneous radiation-associated toxicities. Additional studies to critically evaluate the radiation treatment dosing and treatment fields in obese EC patients are warranted to identify strategies to mitigate the radiation-associated toxicities in these women. abstract_id: PUBMED:28620815 Radiation-related toxicities and outcomes in endometrial cancer: are obese women at a disadvantage? Objective: To assess the impact of body mass index (BMI) on radiotherapy toxicities in endometrial cancer patients. Methods: This was a retrospective cohort study of women diagnosed with endometrial cancer between January 2006 and December 2014 at the Royal Cornwall Hospital Trust. Women who received radiotherapy as part of their treatment, including external beam radiotherapy (EBRT) and/or vaginal brachytherapy were included. Radiation-related toxicities were graded according to the Radiation Therapy Oncology Group (RTOG) guidelines. Toxicity outcomes were compared across BMI groups-non-obese (BMI <30 kg/m2) and obese (BMI ≥30 kg/m2)-according to radiotherapy treatment received (EBRT, brachytherapy or a combination). Results: Of a total of 159 women who received radiotherapy, 110 were eligible for inclusion in the study. Sixty-three women had a BMI <30 kg/m2 and 47 women were obese. Obese women had poorer Eastern Cooperative Oncology Group performance status (P = 0.021) and more comorbidities (P < 0.001) compared to the non-obese group. Total (any) toxicity rates were 60.3, 72.7 and 52.0% for EBRT and brachytherapy (N = 63), single-mode EBRT (N = 22) and brachytherapy (N = 25), respectively. BMI was not associated with the incidence of acute and late radiation toxicities in the different radiotherapy groups, and there were no differences in individual complications between the BMI groups. Conclusion: When comparing obese to non-obese women, obesity does not negatively impact the incidence of radiation toxicities in endometrial cancer. 
However, toxicities remain an important challenge as they are common and negatively influence the quality of life (QoL) of survivors. Future studies need to further explore the role of BMI and possible interventions to improve toxicities and QoL. abstract_id: PUBMED:25930924 (P144) Radiation-Associated Toxicities in Obese Women With Endometrial Cancer: More Than Just BMI? N/A abstract_id: PUBMED:851161 Carcinoma of the endometrium: radiation followed immediately by operation. A prospective study was established in August, 1967, to treat all adenocarcinomas of the endometrium by protocols of preoperative radiation followed immediately by operation. Two hundred and ninety-five women have been treated, 220 of whom had Stage I disease. In these cases, factors known to be associated with survival were studied, and their influence upon survival was noted. Preoperative radium followed immediately by operation was the primary method of therapy. Life tables demonstrated a five-year survival rate of 91 per cent with a low complication rate in patients with Stage I disease. Cell type, degree of differentiation, and depth of myometrial invasion were the primary factors influencing survival. abstract_id: PUBMED:34000661 Distinct clinical and genetic mutation characteristics in sporadic and Lynch syndrome-associated endometrial cancer in a Chinese population. Background: The diagnosis of Lynch syndrome-associated endometrial cancer patients is significant for early warning of their relatives. The purpose of this study was to provide diagnostic indicators of Lynch syndrome-associated endometrial cancer by screening the differential clinical and genetic characteristics. Methods: Clinical information and hysterectomy specimens were collected from 377 eligible patients with endometrial cancer. The MLH1 methylation level was detected by an EZ DNA Methylation-Gold Kit. According to the above experimental results, the patients were then divided into sporadic endometrial cancer and suspected Lynch syndrome-associated endometrial cancer groups. A total of 62 samples were randomly selected for whole-exome sequencing. IBM SPSS Statistics 21 was used to compare the clinical data between the sporadic and suspected Lynch syndrome-associated endometrial cancer groups, and the relationship between the specific high-frequency-mutation genes and the clinical data. Results: According to the results of MMR immunohistochemistry and MLH1 methylation, the sporadic endometrial cancer group included 361 patients and the suspected Lynch syndrome-associated endometrial cancer group included 16 patients in this study. In the clinical analysis, the average age of the suspected Lynch syndrome-associated endometrial cancer patients was 45.50 ± 11.50 years, which was significantly younger than the 51.17 ± 10.03 years of the sporadic endometrial cancer patients (P = 0.028). The average BMI of the suspected Lynch syndrome-associated endometrial cancer patients was 23.43 kg/m2 (CI: 20, 30), which was lower than the 26.50 kg/m2 of the sporadic endometrial cancer patients (P = 0.028). Combined with the WES data, MASP2, NADK and RNF223 were identified as three specific mutation sites related to age, FIGO stage and histology. Conclusions: Compared with the sporadic endometrial cancer patients, the suspected Lynch syndrome-associated endometrial cancer patients were younger and less obese. Mutations in MASP2, NADK and RNF223 might be regarded as genetic endometrial cancer features related to clinical characteristics.
abstract_id: PUBMED:26743834 Medically inoperable endometrial cancer in patients with a high body mass index (BMI): Patterns of failure after 3-D image-based high dose rate (HDR) brachytherapy. Background And Purpose: High BMI is a reason for medical inoperability in patients with endometrial cancer in the United States. Definitive radiation is an alternative therapy for these patients; however, data on patterns of failure after definitive radiotherapy are lacking. We describe the patterns of failure after definitive treatment with 3-D image-based high dose rate (HDR) brachytherapy for medically inoperable endometrial cancer. Materials And Methods: Forty-three consecutive patients with endometrial cancer FIGO stages I-III were treated definitively with HDR brachytherapy with or without external beam radiation therapy. Cumulative incidence of failures was estimated and prognostic variables were identified. Results: Mean follow up was 29.7 months. Median BMI was 50.2 kg/m2 (range: 25.1-104 kg/m2). The two-year overall survival was 65.2%. The two-year cumulative incidence of pelvic and distant failures was 8.3% and 13.5%, respectively. Grade 3 disease was associated with a higher risk of all-failures (Hazard Ratio [HR]: 4.67, 95% CI: 1.04-20.9, p=0.044). The incidence of acute Grade 3 GI/GU toxicities was 4.6%. Conclusions: Pelvic failure at two years was less than 10%. Patients with grade 3 disease were more likely to experience disease failure and may warrant closer follow up. abstract_id: PUBMED:17096437 Treatment effects, disease recurrence, and survival in obese women with early endometrial carcinoma: a Gynecologic Oncology Group study. Background: The objective was to examine whether rates of disease recurrence, treatment-related adverse effects, and survival differed between obese or morbidly obese and nonobese patients. Methods: Data from patients who participated in a randomized trial of surgery with or without adjuvant radiation therapy were retrospectively reviewed. Results: Body mass index (BMI) data were available for 380 patients, of whom 24% were overweight (BMI, 25-29.9), 41% were obese (BMI, 30-39.9), and 12% were morbidly obese (BMI, ≥40). BMI did not significantly differ based on age, performance status, histology, tumor grade, myometrial invasion, or lymphovascular-space involvement. BMI > 30 was more common in African Americans (73%) than non-African Americans (50%). Patients with a BMI ≥ 40 compared with BMI < 30 (hazards ratio [HR], 0.42; 95% confidence interval [CI], 0.09-1.84; P = .246) did not have lower recurrence rates. Compared with BMI < 30, there was no significant difference in survival in patients with BMI 30-39.9 (HR, 1.48; 95% CI, 0.82-2.70; P = .196); however, there was evidence for decreased survival in patients with BMI ≥ 40 (HR, 2.77; 95% CI, 1.21-6.36; P = .016). Unadjusted and adjusted BMI hazards ratios for African Americans versus non-African Americans in the current study differed, thus suggesting a confounding effect of BMI on race. Eight (67%) of 12 deaths among 45 morbidly obese patients were from noncancerous causes. For patients who received adjuvant radiation therapy, increased BMI was significantly associated with less gastrointestinal (R, -0.22; P = .003) and more cutaneous (R, 0.17; P = .019) toxicities. Conclusions: In the current study, obesity was associated with higher mortality from causes other than endometrial cancer but not disease recurrence.
Increased BMI was also associated with more cutaneous and less gastrointestinal toxicity in patients who received adjuvant radiation therapy. Future recommendations include lifestyle intervention trials to improve survival in obese endometrial cancer patients. abstract_id: PUBMED:27681755 Simultaneous Integrated Boost Volumetric Modulated Arc Therapy in the Postoperative Treatment of High-Risk to Intermediate-Risk Endometrial Cancer: Results of ADA II Phase 1-2 Trial. Purpose: A prospective phase 1-2 clinical trial aimed at determining the recommended postoperative dose of simultaneous integrated boost volumetric modulated arc therapy (SIB-VMAT) in a large series of patients with high-risk and intermediate-risk endometrial cancer (HIR-EC) is presented. The study also evaluated the association between rate and severity of toxicity and comorbidities and the clinical outcomes. Methods And Materials: Two SIB-VMAT dose levels were investigated for boost to the vaginal vault, whereas the pelvic lymph nodes were always treated with 45 Gy. The first cohort received a SIB-VMAT dose of 55 Gy in 25 consecutive 2.2-Gy fractions, and the subsequent cohort received higher doses (60 Gy in 2.4-Gy fractions). Results: Seventy consecutive HIR-EC patients, roughly half of whom were obese (47.1%) or overweight (37.1%), with Charlson Age-Comorbidity Index >2 (48.5%), were enrolled. Thirty-one patients (44.3%) were administered adjuvant chemotherapy before starting radiation therapy. All patients (n=35 per dose level) completed irradiation without any dose-limiting toxicity. Proctitis (any grade) was associated with radiation therapy dose (P=.001); not so enterocolitis. Grade ≥2 gastrointestinal (GI) and genitourinary (GU) toxicity were recorded in 17 (24.3%) and 14 patients (20.0%), respectively, and were not associated with radiation dose. As for late toxicity, none of patients experienced late grade ≥3 GI or grade ≥2 GU toxicity. The 3-year late grade ≥2 GI and GU toxicity-free survival were 92.8% and 100%, respectively, with no difference between the 2 dose levels. With a median follow-up period of 25 months (range, 4-60 months), relapse/progression of disease was observed in 10 of 70 patients (14.2%). The 3-year cumulative incidence of recurrence was 1.5% (95% confidence interval (CI): 0.2-10.7), whereas the 3-year disease-free survival was 81.3% (95% CI: 65.0-90.0). Conclusions: This clinical study showed the feasibility of this technique and its good profile in terms of acute and late toxicity at the recommended doses even in aged and frail patients. abstract_id: PUBMED:37357678 Distinct Lipid Phenotype of Cancer-Associated Fibroblasts (CAFs) Isolated From Overweight/Obese Endometrial Cancer Patients as Assessed Using Raman Spectroscopy. Obesity is strongly linked with increased risk and poorer prognosis of endometrial cancer (EC). Cancer-associated fibroblasts (CAFs) are activated fibroblasts that form a large component of the tumor microenvironment and undergo metabolic reprogramming to provide critical metabolites for tumor growth. However, it is still unknown how obesity, characterized by a surplus of free fatty acids drives the modifications of CAFs lipid metabolism which may provide the mechanistic link between obesity and EC progression. The present study aims to evaluate the utility of Raman spectroscopy, an emerging nondestructive analytical tool to detect signature changes in lipid metabolites of CAFs from EC patients with varying body mass index. 
We established primary cultures of fibroblasts from human EC tissues, and CAFs of overweight/obese and nonobese women using antibody-conjugated magnetic beads isolation. These homogeneous fibroblast cultures expressed fibroblast markers, including α-smooth muscle actin and vimentin. Analysis was made in the Raman spectra region best associated with cancer progression biochemical changes in lipids (600-1800 cm-1 and 2800-3200 cm-1). Direct band analysis and ratiometric analysis were conducted to extract information from the Raman spectrum. Present results demonstrated minor shifts in the CH2 symmetric stretch of lipids at 2879 cm-1 and CH3 asymmetric stretching from protein at 2932 cm-1 in the overweight/obese CAFS compared to nonobese CAFs, indicating increased lipid content and a higher degree of lipid saturation. Principal component analysis showed that CAFs from overweight/obese and nonobese EC patients can be clearly distinguished indicating the capability of Raman spectroscopy to detect changes in biochemical components. Our results suggest Raman spectroscopy supported by chemometric analysis is a reliable technique for characterizing metabolic changes in clinical samples, providing an insight into obesity-driven alteration in CAFs, a critical stromal component during EC tumorigenesis. abstract_id: PUBMED:33221024 Factors associated with endometrial cancer and hyperplasia among middle-aged and older Hispanics. Objective: While disparities in endometrial hyperplasia and endometrial cancer are well documented in Blacks and Whites, limited information exists for Hispanics. The objective is to describe the patient characteristics associated with endometrial hyperplasia symptoms, endometrial hyperplasia with atypia and endometrial cancer, and assess factors contributing to racial/ethnic differences in disease outcomes. Methods: This single-center, retrospective study included women aged ≥50 years with ≥ two encounters for endometrial hyperplasia symptoms, endometrial hyperplasia with atypia and endometrial cancer between 2012 and 2016. Multivariate logistic regression models evaluated the predictors of endometrial cancer and hyperplasia. Results: We included 19,865 women (4749 endometrial hyperplasia symptoms, 71 endometrial hyperplasias with atypia, 201 endometrial cancers) with mean age of 60.45 years (SD 9.94). The odds of endometrial hyperplasia symptoms were higher in non-Hispanic Blacks (Odds Ratio [OR] 1.56, 95% Confidence Interval [CI] 1.20-1.72), Hispanics (OR 1.35, 95% CI 1.22-1.49), family history of female cancer (OR 1.25, 95% CI 1.12-1.39), hypertension (OR 1.24, 95% CI 1.14-1.35), and birth control use (OR 1.29, 95% CI 1.15-1.43). Odds of endometrial cancer and atypical hyperplasia increased for ages 60-64 (OR 7.95, 95% CI 3.26-19.37; OR 3.66, 95% 1.01-13.22) and being obese (OR 1.61, 95% CI 1.08-2.41; OR: 6.60, 95% CI 2.32-18.83). Odds of endometrial cancer increased with diabetes (OR 1.68, 95% CI 1.22-2.32). Conclusion(s): Patients with obesity and diabetes had increased odds of endometrial cancer and hyperplasia with atypia. Further study is needed to understand the exogenous estrogen effect contributing to the increased incidence among Hispanics. Answer: The impact of obesity on radiation-associated toxicities in women with endometrial cancer (EC) appears to be multifaceted and cannot be solely attributed to body mass index (BMI). Several studies have investigated the relationship between obesity and radiation-related toxicities in EC patients. 
One study found that increasing BMI was independently associated with increased frequency of gynecologic and cutaneous radiation-associated toxicities in EC patients who received postoperative whole pelvic radiotherapy (WPRT) and/or intracavitary vaginal brachytherapy (ICBT). Younger women also reported more gynecologic-related toxicities. The study suggested that additional research is needed to evaluate radiation treatment dosing and fields in obese EC patients to mitigate these toxicities (PUBMED:26146653). Conversely, another study concluded that obesity does not negatively impact the incidence of radiation toxicities when comparing obese to non-obese women with EC. However, toxicities were common and negatively influenced the quality of life of survivors, indicating that toxicities remain an important challenge (PUBMED:28620815). In terms of treatment outcomes, a study on medically inoperable EC in patients with high BMI treated with 3-D image-based high dose rate (HDR) brachytherapy showed that pelvic failure at two years was less than 10%. However, patients with grade 3 disease were more likely to experience disease failure, suggesting the need for closer follow-up in this subgroup (PUBMED:26743834). Another study indicated that obesity was associated with higher mortality from causes other than endometrial cancer but not with disease recurrence. Increased BMI was also associated with more cutaneous and less gastrointestinal toxicity in patients who received adjuvant radiation therapy (PUBMED:17096437). Overall, these findings suggest that while BMI is a factor in radiation-associated toxicities in obese women with EC, it is not the only consideration. Other factors such as age, comorbidities, performance status, and the presence of grade 3 disease also play roles in the incidence and severity of toxicities and treatment outcomes. Therefore, it is clear that radiation-associated toxicities in obese women with EC involve more than just BMI, and a comprehensive approach considering various patient-specific factors is warranted to optimize treatment and mitigate toxicities (PUBMED:26146653; PUBMED:28620815; PUBMED:26743834; PUBMED:17096437).
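Several abstracts in this record report adjusted odds ratios from multivariable logistic regression (for example, the odds of endometrial cancer or toxicity associated with obesity, diabetes, or age group). The sketch below shows, on simulated data, how such an odds ratio and its 95% confidence interval are typically obtained by exponentiating a fitted logistic-regression coefficient; the variable names (bmi, age, toxicity) and effect sizes are hypothetical assumptions and are not taken from the cited studies.

```python
# Minimal sketch of deriving an adjusted odds ratio from logistic regression on
# SIMULATED data; names and coefficients are hypothetical, not from the studies above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
bmi = rng.normal(30, 6, n)
age = rng.normal(55, 10, n)

# Simulate a binary toxicity outcome whose log-odds rise with BMI and age.
log_odds = -6 + 0.12 * bmi + 0.03 * age
toxicity = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Multivariable logistic regression: toxicity ~ BMI + age.
X = sm.add_constant(np.column_stack([bmi, age]))
fit = sm.Logit(toxicity, X).fit(disp=False)

or_point = np.exp(fit.params[1])      # odds ratio per 1-unit increase in BMI
or_ci = np.exp(fit.conf_int()[1])     # 95% CI transformed to the odds-ratio scale
print(f"OR per BMI unit = {or_point:.2f}, 95% CI [{or_ci[0]:.2f}, {or_ci[1]:.2f}]")
```

The same mechanics (coefficient and confidence limits exponentiated) underlie the ORs quoted in the abstracts, although each study adjusted for its own set of covariates.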
Instruction: The use of allograft bone in spine surgery: is it safe? Abstracts: abstract_id: PUBMED:37305828 The histological assessment of new bone formation with zolendronic acid loaded bone allograft in rabbit femoral bone defect. The aim of this experimental study was to evaluate the effect of zolendronic acid (ZOL) combined with bone allograft prepared using the Marburg Bone Bank System on bone formation in the implant remodeling zone. Femoral bone defects with a diameter of 5 mm and a depth of 10 mm were created in 32 rabbits. Animals were divided into 2 similar groups: Group 1 (control), where defects were filled with bone allograft, and Group 2, where allograft was combined with ZOL. Eight animals from each group were sacrificed at 14- and 60-days post-surgery and bone defect healing was assessed using histopathological and histomorphometric analyses after 14 and 60 days. The results showed that new bone formation within the bone allograft was significantly greater in the control group than in the ZOL-treated group after 14 and 60 days (p<0.05). In conclusion, local co-administration of ZOL on heat-treated allograft inhibits allograft resorption and new bone formation in the bone defect. abstract_id: PUBMED:33489583 Bone Allograft Prosthesis Composite to Revise a Failed Massive Allo-Prosthesis: Case Report and 10 Years of Follow-Up. An 18-year-old male patient with a high-grade osteosarcoma was initially treated with resection and reconstruction using an osteochondral allograft. The allograft collapsed after five years, and thus a revision with a constrained knee prosthesis was performed. After one year, the implant failed due to a fracture, requiring another revision with a new allo-prosthetic composite. The long-term results were satisfactory. Allo-prosthetic composites may offer good long-term results after sarcoma resection. The failure of a massive bone allograft does not preclude the use of another allograft to maintain the bone stock and preserve the function. abstract_id: PUBMED:33189329 The choice between allograft or demineralized bone matrix is not unambiguous in trauma surgery. In fracture surgery, large bone defects and non-unions often require bone transplantation, and alternatives to autograft bone substitutes in the form of allografts from bone banks and the derivate demineralised bone matrix (DBM) are widely used. With a focus on efficacy, clinical evidence, safety, cost, and patient acceptance, this review evaluated the difference between allogeneic allograft or DBM as a bone substitute in trauma surgery. The efficacy in supporting bone healing from allograft and DBM is highly influenced by donor characteristics and graft processing. Mechanical stability is achieved from a structural graft. Based on the existing literature it is difficult to identify where DBM is useful in trauma surgery, and the level of evidence for the relevant use of allograft bone in trauma is low. The risk of transmitting diseases is negligible, and the lowest risk is from DBM due to the extensive processing procedures. A cost comparison showed that DBM is significantly more expensive. The experiences of dental patients have shown that many patients do not want to receive allografts as a bone substitute. It is not possible to definitively conclude whether it makes a difference if allograft or DBM is used in trauma surgery. It is ultimately the surgeon's individual choice, but this article may be useful in providing considerations before a decision is made. 
abstract_id: PUBMED:29703460 Comparison and Use of Allograft Bone Morphogenetic Protein Versus Other Materials in Ankle and Hindfoot Fusions. Bone grafting is a common procedure in foot and ankle surgery. Because autogenous graft use results in comorbidity to the patient, the search has been ongoing for the ideal substitute. A novel processing technique for allograft using bone marrow, which retains many of the growth factors, has shown promise in the spinal data and early reports of foot and ankle surgery. We performed a retrospective, comparative study of patients undergoing hindfoot and ankle arthrodesis, with a total of 68 patients included. Of the 68 patients, 29 (42.65%) received a bone morphogenetic protein allograft and 39 (57.35%) did not. The patient demographics and social and medical history were similar between the 2 groups and both groups had a similar time to union (p = .581). Of the 29 patients in the bone morphogenetic protein allograft group, 3 (10.3%) experienced nonunion and 4 (13.8%) developed a complication. Of the 39 patients undergoing other treatment, 7 (17.9%) experienced nonunion and 14 (35.9%) developed a complication. The difference for nonunion was not statistically significant (p = .5). However, the difference in the overall complication rate was statistically significant (p = .04). We found that this novel bone graft substitute is safe and can be used for foot and ankle arthrodesis. abstract_id: PUBMED:30828203 Bone stock reconstruction for huge bone loss using allograft-bones, bone marrow, and teriparatide in an infected total knee arthroplasty. Bone stock reconstruction using allograft-bones, bone marrow (BM), and teriparatide (TPTD) is reported. Huge and extensive bone losses occurred in the medullary cavity of the femur and tibia of a 55-year-old female rheumatoid arthritis patient with severe osteoporosis after debridement of her infected total knee arthroplasty. Because of the risks of unstable prosthetic fixation and intra-operation fracture, we first reconstructed the bone stock. Chipped allograft bones mixed with BM were implanted in the bone defects, and TPTD was administrated for the osteoporosis therapy. Good bone formation was found by computed tomography after 4 months. Bone turnover markers and bone mineral density (BMD) were increased at 6 months. We confirmed good bone formation at the re-implantation surgery. The newly formed bone harvested during the re-implantation surgery showed active osteoblast-like lining cells. TPTD is known to enhance allograft bone union, mesenchymal stem cell differentiation into osteoblasts, and BMD. This tissue engineering-based technique might be improved by the various effects of TPTD. This method without any laboratory cell culture might be a good option for bone stock reconstruction surgery in ordinary hospitals. abstract_id: PUBMED:24890134 Overlapping allograft for primary or salvage bone tumor reconstruction. Background: Compared with end-to-end allograft coaptation, overlapping allograft offer a superior union rate by increasing the contact area. However, reports on overlapping allograft are scarce. Therefore, we attempted to confirm the usefulness of this technique either after primary tumor resection or in salvaging a failed reconstruction. Methods: We analyzed the outcome of 35 overlapping allografts reconstructions. Indications were primary reconstruction of a skeletal defect (n = 19) and salvage of a failed reconstruction (n = 16). 
Graft survival, union rate, and time to union were evaluated as a function of clinical variables such as age, use of chemotherapy, type of junction, method of fixation, length of overlapped bone, and method of overlapping. Results: All 35 overlapping allografts showed union at a mean of 5.6 months (range, 3-14 months). One allograft was removed with local recurrence at 19 months post-operatively. Average length of overlapped bone was 3.5 cm (range, 1.4-6.5 cm). Patient age <15-years (P = 0.001) and circumferential overlapping (P = 0.011) shortened the time to union. Conclusions: In terms of graft failure rate, union rate, and time to union, overlapping allograft is an excellent technique, which overcomes the limitations of end-to-end fixation. abstract_id: PUBMED:26335550 Structural bone allograft fractures in oncological procedures. Purpose: We report our experience analysing the risk of fracture amongst allografts in limb-preserving surgery for bone tumours. Methods: We retrospectively reviewed our experience with bone allograft and its major complications when used for limb -preserving operations for bone tumours. Forty-one structural allografts were performed in 39 patients between 1992 and 2012. Minimum follow-up was 20 months. Massive allografts have a high complication rate. Results: Excluding infection and nonunion, five acute fractures were found. All fractures occurred after the graft-host junction was united. Local factors-such as graft preservation, weight bearing, fixation to the host or systemic factors such as adjuvant treatments (chemotherapy or radiotherapy)-influence fracture rate. In our study, four patients achieved consolidation with internal fixation and autologous iliac-crest graft, whilst only one required graft exchange. Discussion: There is no general consensus as to when to treat fractures using open reduction and internal fixation or by exchanging the allograft. Higher fracture rate in relation to systemic treatment was found. Conclusions: Massive structural allograft reconstruction still has a place in limb-preserving surgery, with an acceptable fracture rate and a durable solution. abstract_id: PUBMED:18805063 The use of allograft bone in spine surgery: is it safe? Background Context: Allograft bone is commonly used in various spinal surgeries. The large amount of recalled allograft tissue, particularly in recent years, has increased concerns regarding the safety of allograft bone for spinal surgery. An analysis of allograft recall and its safety in spinal surgery has not been reported previously. Purpose: To determine 1) the number and types of allograft recall and the reasons for recall, 2) the types of disease transmission to spine patients, and 3) assess the safety of allograft bone in spinal surgery. Study Design/setting: Retrospective review. Methods: A retrospective review of all Food and Drug Administration (FDA) data from 1994 to June 2007 was reviewed to determine the amount and types of recalled allograft tissue. The literature and data from the Center for Disease Control were reviewed to determine the number and types of disease transmissions from allograft bone that have occurred to spine surgery patients during the study period. Results: There were 59,476 musculoskeletal allograft tissue specimens recalled by FDA during the study period, which accounts for 96.5% of all allograft tissue recalled in the United States. Improper donor evaluation, contamination, and recipient infections are the main reasons for allograft recall. 
There has been one case of human immunodeficiency virus infection transmission to a spine surgery patient in 1988. This is the only reported case of viral transmission. There are no reports of bacterial disease transmission from the use of allograft bone to spine surgery patients. Conclusions: The precise number of allografts used in spine surgery annually and the precise incidence of disease transmission to spine surgery patients linked to the use of allograft tissue is unknown. Musculoskeletal allograft tissue accounts for the majority of recalled tissue by FDA. Despite the large number of allograft recalls in this country, there is only one documented case in the literature of disease transmission to a spine surgery patient. There appears to be no overt risk associated with the use of allograft bone in spine surgery. However, as discussed in this article, there are certain aspects regarding the use of allograft bone that should be considered. abstract_id: PUBMED:36977641 Bone revascularization: structural allograft intramedullary vs extramedullary. Experimental work Introduction: successful treatment in patients with significant bone defects secondary to infection, non-union and osteoporotic fractures resulting from previous trauma is challenging. In the current literature we did not find any reports that compare the use of intramedullary allograft boards versus the same ones placed lateral to the lesion. Material And Methods: we worked on a sample of 20 rabbits (2 groups of 10 rabbits each). Group 1 underwent surgery using the extramedullary allograft placement technique, while group 2 with the intramedullary technique. Four months after surgery, imaging and histology studies were performed to compare between groups. Results: the analysis of the imaging studies showed a statistically significant difference between both groups with greater resorption and bone integration of the intramedullary placed allograft. Regarding histology, there were no statistically significant differences, but there was a significant prediction with a p value < 0.10 in favor of the intramedullary allograft. Conclusion: through our work we were able to show the great difference between the allograft placement technique with respect to imaging and histological analysis using revascularization markers. Although the intramedullary placed allograft shows us greater bone integration, the extramedullary graft will provide more support and structure in patients who require it. abstract_id: PUBMED:24729101 Vancomycin iontophoresis of allograft bone. Objectives: The most concerning infection of allografts and operative procedures is methicillin resistant Staphylococcus aureus (MRSA) and no current iontophoresed antibiotics effectively combat this microbe. It was initially hypothesised that iontophoresis of vancomycin through bone would not be effective due to its large molecular size and lack of charge. The aim of this study was to determine whether this was a viable procedure and to find the optimum conditions for its use. Methods: An iontophoresis cell was set up with varying concentrations of Vancomycin within the medulla of a section of sheep tibia, sealed from an external saline solution. The cell was run for varying times, Vancomycin concentrations and voltages, to gain information on optimisation of conditions for impregnating the graft. Each graft was then sectioned and dust ground from the exposed surface. 
The dust was serially washed to extract the Vancomycin and concentrations measured and plotted for all variables tested. Results: Vancomycin was successfully delivered and impregnated to the graft using the iontophoresis technique. The first order fit to the whole data set gave a significant result (p = 0.0233), with a significant concentration (p = 0.02774) component. The time component was the next most significant (p = 0.0597), but did not exceed the 95% confidence level. Conclusions: Iontophoresis is an effective method for delivering Vancomycin to allograft bone. The concentrations of the vancomycin solution affected the bone concentration, but results were highly variable. Further study should be done on the effectiveness of delivering different antibiotics using this method. Cite this article: Bone Joint Res 2014;3:101-7. Answer: The use of allograft bone in spine surgery appears to be safe based on the available evidence. A retrospective review of all Food and Drug Administration (FDA) data from 1994 to June 2007 found that there were 59,476 musculoskeletal allograft tissue specimens recalled by the FDA during the study period, which accounts for 96.5% of all allograft tissue recalled in the United States. The main reasons for allograft recall were improper donor evaluation, contamination, and recipient infections. Despite the large number of recalls, there has been only one documented case of disease transmission to a spine surgery patient, which was a human immunodeficiency virus infection transmission in 1988. There are no reports of bacterial disease transmission from the use of allograft bone to spine surgery patients. Therefore, the review concluded that there is no overt risk associated with the use of allograft bone in spine surgery, although certain aspects regarding its use should be considered (PUBMED:18805063). It is important to note that while the risk of disease transmission is low, the safety of allograft bone also depends on the quality of the donor screening, the processing of the allograft, and the surgical technique used. Surgeons should be aware of the potential risks and benefits when considering the use of allograft bone in spine surgery and should discuss these with their patients.
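The ankle and hindfoot fusion abstract above (PUBMED:29703460) compares complication counts of 4/29 versus 14/39 and reports p = .04. As an illustration of how such a 2x2 comparison can be checked, the sketch below runs Fisher's exact test on those counts; the published p-value came from the authors' own analysis, and the choice of test here is an assumption, so the printed value is illustrative rather than a reproduction of their result.

```python
# Minimal sketch of a 2x2 comparison of complication rates (4/29 vs 14/39) as quoted
# in the ankle/hindfoot fusion abstract. Test choice (Fisher's exact) is an assumption;
# the authors' reported p-value may have come from a different test.
from scipy.stats import fisher_exact

# Rows: graft group; columns: complication yes / no.
table = [[4, 29 - 4],
         [14, 39 - 14]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```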
Instruction: Are there racial/ethnic disparities in VA PTSD treatment retention? Abstracts: abstract_id: PUBMED:25421265 Are there racial/ethnic disparities in VA PTSD treatment retention? Background: Chronic posttraumatic stress disorder (PTSD) can result in significant social and physical impairments. Despite the Department of Veterans Affairs' (VA) expansion of mental health services into primary care clinics to reach larger numbers of Veterans with PTSD, many do not receive sufficient treatment to clinically benefit. This study explored whether the odds of premature mental health treatment termination varies by patient race/ethnicity and, if so, whether such variation is associated with differential access to services or beliefs about mental health treatments. Methods: Prospective national cohort study of VA patients who were recently diagnosed with PTSD (n = 6,788). Self-administered surveys and electronic VA databases were utilized to examine mental health treatment retention across racial/ethnic groups in the 6 months following the PTSD diagnosis controlling for treatment need, access factors, age, gender, treatment beliefs, and facility factors. Results: African American and Latino Veterans were less likely to receive a minimal trial of pharmacotherapy and African American Veterans were less likely to receive a minimal trial of any treatment in the 6 months after being diagnosed with PTSD. Controlling for beliefs about mental health treatments diminished the lower odds of pharmacotherapy retention among Latino but not African American Veterans. Access factors did not contribute to treatment retention disparities. Conclusions: Even in safety-net healthcare systems like VA, racial and ethnic disparities in mental health treatment occur. To improve treatment equity, clinicians may need to more directly address patients' treatment beliefs. More understanding is needed to address the treatment disparity for African American Veterans. abstract_id: PUBMED:27799020 A Prospective Study of Racial and Ethnic Variation in VA Psychotherapy Services for PTSD. Objectives: To determine whether there are racial or ethnic disparities in receipt of U.S. Department of Veterans Affairs (VA) psychotherapy services for veterans with posttraumatic stress disorder (PTSD), the authors examined the odds of receipt of any psychotherapy and of individual psychotherapy among self-identified racial and ethnic groups for six months after individuals were diagnosed as having PTSD. Methods: Data were from a national prospective cohort study of 6,884 veterans with PTSD. Patients with no mental health care in the prior year were surveyed immediately following receipt of a PTSD diagnosis. VA databases were used to determine mental health service use. Analyses controlled for treatment need, access to services, and treatment beliefs. Results: Among veterans with PTSD initially seen in VA mental health treatment settings, Latino veterans were less likely than white veterans to receive any psychotherapy, after the analyses controlled for treatment need, access, and beliefs. Among those initially seen in mental health settings who received some psychotherapy services, Latinos, African Americans, and Asian/Pacific Islanders were less likely than white veterans to receive any individual therapy. These racial-ethnic differences in psychotherapy receipt were due to factors occurring between VA health care networks as well as factors occurring within networks. Drivers of disparities differed across racial and ethnic groups. 
Conclusions: Inequity in psychotherapy services for some veterans from racial and ethnic minority groups with PTSD were due to factors operating both within and between health care networks. abstract_id: PUBMED:34369806 Racial Disparities in Clinical Outcomes of Veterans Affairs Residential PTSD Treatment Between Black and White Veterans. Objective: Racial disparities across various domains of health care are a long-standing public health issue that affect a variety of clinical services and health outcomes. Mental health research has shown that prevalence rates of posttraumatic stress disorder (PTSD) are high for Black veterans compared with White veterans, and some studies suggest poorer clinical outcomes for Black veterans with PTSD. The aim of this study was to examine the impact of racial disparities longitudinally in the U.S. Department of Veterans Affairs (VA) residential rehabilitation treatment programs (RRTPs). Methods: Participants included 2,870 veterans treated nationally in VA PTSD RRTPs in fiscal year 2017. Veterans provided demographic data upon admission to the program. Symptoms of PTSD and depression were collected at admission, discharge, and 4-month follow-up. Hierarchical linear modeling was used to examine symptom change throughout and after treatment. Results: Black veterans experienced attenuated PTSD symptom reduction during treatment as well as greater depression symptom recurrence 4 months after discharge, relative to White veterans. Conclusions: This study adds to the body of literature that has documented poorer treatment outcomes for Black compared with White veterans with PTSD. Although both Black and White veterans had an overall reduction in symptoms, future research should focus on understanding the causes, mechanisms, and potential solutions to reduce racial disparities in mental health treatment. abstract_id: PUBMED:34353373 Racial/ethnic equity in substance use treatment research: the way forward. Background: Opioid use and opioid-related overdose continue to rise among racial/ethnic minorities. Social determinants of health negatively impact these communities, possibly resulting in poorer treatment outcomes. Research is needed to investigate how to overcome the disproportionate and deleterious impact of social determinants of health on treatment entry, retention, drug use and related outcomes among racial/ethnic minorities. The current commentary provides recommendations that may help researchers respond more effectively to reducing health disparities in substance use treatment. We begin with recommendations of best research practices (e.g., ensuring adequate recruitment of racial/ethnic minorities in research, central components of valid analysis, and adequate methods for assessing effect sizes for racial/ethnic minorities). Then, we propose that more NIDA research focuses on issues disproportionately affecting racial/ethnic minorities. Next, techniques for increasing the number of underrepresented racial/ethnic treatment researchers are suggested. We then recommend methods for infusing racial/ethnic expertise onto funding decision panels. This commentary ends with a case study that features NIDA's National Drug Abuse Treatment Clinical Trials Network (CTN). Conclusions: The proposed recommendations can serve as guidelines for substance use research funders to promote research that has the potential to reduce racial/ethnic disparities in substance use treatment and to increase training opportunities for racial/ethnic minority researchers. 
abstract_id: PUBMED:32839050 The role of perceived treatment need in explaining racial/ethnic disparities in the use of substance abuse treatment services. Objective: The current study examined the role of perceived treatment need in explaining racial/ethnic disparities in treatment utilization for a substance use disorder (SUD). Methods: We pooled data from the National Survey on Drug Use and Health survey for years 2014-2017. The analytic sample included adult white, Black, and Latino participants with a past-year SUD (n = 16,393). Multivariable logistic regressions examined racial/ethnic disparities in perceived treatment need-the perception of needing mental health and/or SUD treatment services within the past 12 months-and utilization of past-year substance use, mental health, and any treatment. Results: Latinos with SUD were less likely to perceive a need for treatment than whites. Black and Latino participants, relative to white participants, had lower odds of past-year treatment utilization, regardless of treatment type. In models stratified by perceived treatment need, racial/ethnic differences in the use of past-year SUD treatment and any treatment service were only significant among persons without a perceived need for treatment. We found no disparities in use of mental health treatment. Conclusions: Adults with SUD have low perceived treatment need overall but especially among Latinos. Furthermore, Black and Latino disparities in SUD treatment use may be driven in part by lower perceived need for treatment. Interventions that promote better perceived need and delivery models that strengthen the integration of SUD treatment in mental health services may help to reduce these disparities. abstract_id: PUBMED:31289768 Racial/Ethnic Disparities in Mortality Across the Veterans Health Administration. Purpose: Equal-access health care systems such as the Veterans Health Administration (VHA) reduce financial and nonfinancial barriers to care. It is unknown if such systems mitigate racial/ethnic mortality disparities, such as those well documented in the broader U.S. population. We examined racial/ethnic mortality disparities among VHA health care users, and compared racial/ethnic disparities in VHA and U.S. general populations. Methods: Linking VHA records for an October 2008 to September 2009 national VHA user cohort, and National Death Index records, we assessed all-cause, cancer, and cardiovascular-related mortality through December 2011. We calculated age-, sex-, and comorbidity-adjusted mortality hazard ratios. We computed sex-stratified, age-standardized mortality risk ratios for VHA and U.S. populations, then compared racial/ethnic disparities between the populations. Results: Among VHA users, American Indian/Alaskan Natives (AI/ANs) had higher adjusted all-cause mortality, whereas non-Hispanic Blacks had higher cause-specific mortality versus non-Hispanic Whites. Asians, Hispanics, and Native Hawaiian/Other Pacific Islanders had similar, or lower all-cause and cause-specific mortality versus non-Hispanic Whites. Mortality disparities were evident in non-Hispanic-Black men compared with non-Hispanic White men in both VHA and U.S. populations for all-cause, cardiovascular, and cancer (cause-specific) mortality, but disparities were smaller in VHA. VHA non-Hispanic Black women did not experience the all-cause and cause-specific mortality disparity present for U.S. non-Hispanic Black women. Disparities in all-cause and cancer mortality existed in VHA but not in U.S. 
population AI/AN men. Conclusion: Patterns in racial/ethnic disparities differed between VHA and U.S. populations, with fewer disparities within VHAs equal-access system. Equal-access health care may partially address racial/ethnic mortality disparities, but other nonhealth care factors should also be explored. abstract_id: PUBMED:34198132 Racial/ethnic disparities in the use of medications for opioid use disorder (MOUD) and their effects on residential drug treatment outcomes in the US. Background: This study examines racial/ethnic disparities in the use of medications for opioid use disorder (MOUD) in residential treatment and the influence of race/ethnicity on the association between MOUD use and treatment retention and completion. Methods: Data were extracted from SAMHSA's 2015-2017 Treatment Episode Dataset-Discharge (TEDS-D) datasets for adult opioid admissions/discharges to short-term (ST) (30 days or less) (N = 83,032) or long-term (LT) (> 30 days) residential treatment settings (N=61,626). Logistic regression estimated the likelihood of MOUD use among racial/ethnic groups and the moderation of race/ethnicity on the probability of treatment completion and retention, controlling for background factors. Results: After adjusting for covariates, compared to Whites, MOUD use was less likely for Blacks in ST (OR = 0.728) and LT settings (OR = 0.725) and slightly less likely for Hispanics in ST settings (OR = 0.859) but slightly more likely for Hispanics in LT settings (OR = 1.107). In ST settings, compared to Whites, the positive effect of MOUD on retention was enhanced for Blacks (OR = 1.191) and Hispanics (OR = 1.234), and the positive effect on treatment completion was enhanced for Hispanics (OR = 1.144). In LT settings, the negative association between MOUD and treatment completion was enhanced for Hispanics (OR = 0.776). Conclusions: Access to medications for opioid use disorder in short term residential treatment is particularly beneficial for Blacks and Hispanics, though adjusted models indicate they are less likely to receive it compared to Whites. Results are mixed for long-term residential treatment. Residential addiction treatment may represent an important setting for mitigating low rates of medication initiation and early discontinuation for minority patients. abstract_id: PUBMED:33223802 Disparities in Cardiovascular Care and Outcomes for Women From Racial/Ethnic Minority Backgrounds. Purpose Of Review: Racial, ethnic, and gender disparities in cardiovascular care are well-documented. This review aims to highlight the disparities and impact on a group particularly vulnerable to disparities, women from racial/ethnic minority backgrounds. Recent Findings: Women from racial/ethnic minority backgrounds remain underrepresented in major cardiovascular trials, limiting the generalizability of cardiovascular research to this population. Certain cardiovascular risk factors are more prevalent in women from racial/ethnic minority backgrounds, including traditional risk factors such as hypertension, obesity, and diabetes. Female-specific risk factors including gestational diabetes and preeclampsia as well as non-traditional psychosocial risk factors like depressive and anxiety disorders, increased child care, and familial and home care responsibility have been shown to increase risk for cardiovascular disease events in women more so than in men, and disproportionately affect women from racial/ethnic minority backgrounds. 
Despite this, minimal interventions to address differential risk have been proposed. Furthermore, disparities in treatment and outcomes that disadvantage minority women persist. The limited improvement in outcomes over time, especially among non-Hispanic Black women, is an area that requires further research and active interventions. Summary: Understanding the lack of representation in cardiovascular trials, differential cardiovascular risk, and disparities in treatment and outcomes among women from racial/ethnic minority backgrounds highlights opportunities for improving cardiovascular care among this particularly vulnerable population. abstract_id: PUBMED:36806517 Exploring racial/ethnic disparities in rehabilitation outcomes after TBI: A Veterans Affairs Model Systems study. Background: Almost one-third of the U.S. military population is comprised of service members and veterans (SMVs) of color. Research suggests poorer functional and psychosocial outcomes among Black and Hispanic/Latine vs. White civilians following traumatic brain injury (TBI). Objective: This study examined racial/ethnic differences in 5-year functional independence and life satisfaction trajectories among SMVs who had undergone acute rehabilitation at one of five Veterans Affairs (VA) TBI Model Systems (TBIMS) Polytrauma Rehabilitation Centers (PRCs). Methods: Differences in demographic and injury-related factors were assessed during acute rehabilitation among White (n = 663), Black (n = 89), and Hispanic/Latine (n = 124) groups. Functional Independence Measure (FIM) Motor, FIM Cognitive, and Satisfaction with Life Scale (SWLS) scores were collected at 1, 2, and 5 years after injury. Racial/ethnic comparisons in these outcome trajectories were made using hierarchical linear modeling. Results: Black SMVs were less likely than White and Hispanic/Latine SMVs to have been deployed to a combat zone; there were no other racial/ethnic differences in any demographic or injury-related variable assessed. In terms of outcomes, no racial/ethnic differences emerged in FIM Motor, FIM cognitive, or SWLS trajectories. Conclusion: The absence of observable racial/ethnic differences in 5-year outcome trajectories after TBI among SMVs from VA TBIMS PRCs contrasts sharply with previous research identifying disparities in these same outcomes and throughout the larger VA health care system. Individuals enrolled in VA PRCs are likely homogenized on key social determinants of health that would otherwise contribute to racial/ethnic disparities in outcome trajectories. abstract_id: PUBMED:37219714 Racial and Ethnic Disparities in Stroke Reperfusion Therapy in the USA. Racial and ethnic inequities in stroke care are ubiquitous. Acute reperfusion therapies, i.e., IV thrombolysis (IVT) and mechanical thrombectomy (MT), are central to acute stroke care and are highly efficacious at preventing death and disability after stroke. Disparities in the use of IVT and MT in the USA are pervasive and contribute to worse outcomes among racial and ethnic minority individuals with ischemic stroke. A meticulous understanding of disparities and underlying root causes is necessary in order to develop targeted mitigation strategies with lasting effects. This review details racial and ethnic disparities in the use of IVT and MT after stroke and highlights inequities in the underlying process measures as well as the contributing root causes. 
Furthermore, this review spotlights the systemic and structural inequities that contribute to race-based differences in the use of IVT and MT, including geographic and regional differences and differences based on neighborhood, zip code, and hospital type. In addition, recent promising trends suggesting improvements in racial and ethnic IVT and MT disparities and potential approaches for future solutions to achieve equity in stroke care are briefly discussed. Answer: Yes, there are racial/ethnic disparities in VA PTSD treatment retention. African American and Latino Veterans were found to be less likely to receive a minimal trial of pharmacotherapy, and African American Veterans were less likely to receive a minimal trial of any treatment in the 6 months after being diagnosed with PTSD. Controlling for beliefs about mental health treatments diminished the lower odds of pharmacotherapy retention among Latino but not African American Veterans. Access factors did not contribute to treatment retention disparities (PUBMED:25421265). Additionally, Latino veterans were less likely than white veterans to receive any psychotherapy, and among those who received some psychotherapy services, Latinos, African Americans, and Asian/Pacific Islanders were less likely than white veterans to receive any individual therapy (PUBMED:27799020). Black veterans also experienced attenuated PTSD symptom reduction during treatment and greater depression symptom recurrence 4 months after discharge, relative to White veterans (PUBMED:34369806). These findings indicate that even in safety-net healthcare systems like the VA, racial and ethnic disparities in mental health treatment occur, and more understanding is needed to address these disparities.
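To make the kind of adjusted comparison behind these retention findings concrete, the sketch below shows one way a multivariable logistic regression of treatment retention on race/ethnicity and covariates could be set up. It is only an illustration under stated assumptions: the data are simulated and the variable names (retained, race_ethnicity, ptsd_severity, treatment_beliefs) are hypothetical stand-ins, not the VA cohorts analysed in PUBMED:25421265 or PUBMED:27799020.

```python
# Illustrative sketch only: simulated data and hypothetical variable names,
# not the VA PTSD cohorts cited above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "retained": rng.integers(0, 2, n),                      # 1 = minimal trial of treatment received
    "race_ethnicity": rng.choice(["White", "Black", "Latino"], n),
    "age": rng.normal(45, 12, n).round(),
    "female": rng.integers(0, 2, n),
    "ptsd_severity": rng.normal(50, 10, n),                  # proxy for treatment need
    "treatment_beliefs": rng.normal(0, 1, n),                # belief-scale score
})

# Logistic regression of retention on race/ethnicity, adjusting for covariates,
# with White veterans as the reference category.
model = smf.logit(
    "retained ~ C(race_ethnicity, Treatment(reference='White'))"
    " + age + female + ptsd_severity + treatment_beliefs",
    data=df,
)
res = model.fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals, the form in which
# the disparities are reported in the abstracts above.
or_table = np.exp(pd.concat([res.params, res.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```

Fitting the model with and without the belief covariate mirrors the step in which controlling for treatment beliefs attenuated the Latino-White difference in pharmacotherapy retention but not the African American-White difference.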
Instruction: Does a reduction in general practitioners' use of diagnostic tests lead to more hospital referrals? Abstracts: abstract_id: PUBMED:7619582 Does a reduction in general practitioners' use of diagnostic tests lead to more hospital referrals? Background: Individual feedback on general practitioners' requests for tests can improve the quality of their test ordering behaviour. Little is known of the side effects on hospital referral behaviour when the use of tests is reduced through feedback. Aim: A study was undertaken to explore changes in general practitioners' hospital referral rates in a region where their use of diagnostic tests is reduced through feedback. Method: Trends in test requests and in first referrals to specialists were compared among 64 general practitioners in the Maastricht region of the Netherlands, where routine feedback on test ordering behaviour is provided by the diagnostic coordinating centre. Results: Reduction in diagnostic test use was not accompanied by a higher hospital referral rate, not even for specialties related to tests discussed in feedback. Good responders to feedback had decreased hospital referral rates in contrast to increased rates for poor responders (P < 0.01). Conclusion: Reducing the volume of general practitioners' diagnostic tests through feedback does not lead to more specialist referrals. Together with lower test use, fewer hospital referrals were seen. abstract_id: PUBMED:29727249 Reasons for and Frequency of End-of-Life Hospital Admissions: General Practitioners' Perspective on Reducing End-of-Life Hospital Referrals. Background: Many palliative care patients are admitted to hospital shortly before death even though the acute hospital setting is not considered ideal for end-of-life care (EOLC). Objectives: This study aimed to evaluate General Practitioners' (GPs') perspective on the frequency of and reasons for hospital referrals of these patients. Methods: Cross-sectional survey involving a stratified random sample of 2000 GPs in Switzerland in 2014. GP characteristics, frequency and type of end-of-life transfers, reasons for referrals, confidence in EOLC, and regional palliative care provision were assessed. Multivariate regression analysis was performed to identify the variables associated with frequency of hospital referrals at the end of life. Results: The questionnaire was completed by 579 (31%) GPs. Frequent hospital referrals shortly before death were reported by 38%. GPs were less likely to report frequent hospitalizations when they felt confident in palliative care competencies, especially in anticipation of crisis. GPs were more likely to report frequent hospitalizations as being due to relatives' wishes, difficulties in symptom control, inadequate or absent care network, and the expense of palliative care at home. Conclusions: The results suggest that adequate support of and a care network for palliative patients and their caregivers are crucial for continuous home-based EOLC. Timely recognition of the advanced palliative phase as well as the involvement of well-trained GPs who feel confident in palliative care, together with adequate financial support for outpatient palliative care, might diminish the frequency of transitions shortly before death. abstract_id: PUBMED:23508316 What use do general practitioners make of geriatric tests and scales? Justification: The complexity of care for elderly patients justifies the use of screening, diagnostic and follow-up tools for multiple pathologies.
Numerous tests and scales have been developed and validated, and are recommended by the Haute autorité de santé (HAS). Are these tools useful, and are they suited to general practice consultations? What are the obstacles to their adoption by general practitioners? Objective: To determine how frequently general practitioners use a series of twelve geriatric tests and scales. Secondary objective: To assess the perceived value of these tests and the obstacles to their use in practice. Method: Cross-sectional postal survey of a representative sample of general practitioners in Meurthe-et-Moselle concerning their use of 12 validated tests or scales. Results: 84 of the 145 general practitioners contacted participated in the survey (response rate 58%). The most frequently used tools were the MMSE, the AGGIR scale, the clock-drawing test and Dubois' five-word test (48, 43, 38 and 36% of regular users, respectively). Thirty-five percent of the general practitioners never use tests or scales, while 37% use them at least once a month. Of the practitioners who had received training, 85.5% use the tools more frequently. One practitioner in two (51%) considers these tools unsuitable for their practice, although nearly all general practitioners (90%) acknowledge their value for the screening, diagnosis and follow-up of geriatric conditions. The main obstacles are the time-consuming nature of the tools, the absence of a specific billing code, and the lack of training. Conclusion: In the absence of tests and scales designed jointly by geriatricians and general practitioners, the existing tools, although recognized as useful, are underused by the general practitioners of Meurthe-et-Moselle. abstract_id: PUBMED:35858954 Consensus among clinicians on referrals' priority and use of digital decision-making support systems. The growing demand for referrals is a main policy concern in health systems. One approach involves the development of demand management tools in the form of clinical prioritization to regulate patient referrals from primary care to specialist care. For clinical prioritization to be effective, it is critical that general practitioners (GPs) assess patient priority in the same way as specialists. The progressive development of IT tools in clinical practice, in the form of electronic referrals support systems (e-RSS), can facilitate clinical prioritization. In this study, we tested whether higher use of e-RSS or higher use of high-priority categories was associated with the degree of agreement, and therefore consensus, on clinical priority between GPs and specialists. We found that higher use by GPs of the e-RSS tool was positively associated with a greater degree of priority agreement with specialists, while higher use of the high-priority categories was associated with a lower degree of priority agreement with specialists. Furthermore, female GPs, GPs in association with others, and GPs using a specific electronic medical record showed higher agreement with specialists. Our study therefore supports the use of electronic referral systems to improve clinical prioritization and manage the demand for specialist visits and diagnostic tests. It also shows that there is scope for reducing excessive use by GPs of high-priority categories. abstract_id: PUBMED:23008681 A comparison of psychiatric referrals within the teaching hospital with those from primary care and general hospitals in Saudi Arabia.
Objective: This study aims to examine the pattern of psychiatric referrals within a teaching hospital, with particular reference to (1) age and gender, (2) source of referrals and (3) diagnosis of referred patients. Method: Four hundred and twenty seven referrals (n=427) for psychiatric consultation within KKUH were selected prospectively by systematic randomization over a period of one year, and were compared with a general hospital (n=138) and primary health care (n=402) psychiatric referrals to a mental health facility. Results: The age of referred patients across the three settings differed significantly, and male patients were slightly over-represented in the teaching hospital referrals. Pediatric clinics in the teaching hospital constituted significant sources of psychiatric referrals as compared to the general hospitals. Schizophrenic disorders and acute psychoses were significantly less common among patients referred within the teaching hospital, whereas anxiety and mood disorders were much more common among teaching hospital and primary care patients. The number of personality disorders diagnosed in teaching hospital settings was significant. Conclusions: In Saudi Arabia, sources of psychiatric referrals and diagnostic patterns of mental disorders differ across the three levels, and this is comparable to international research on psychiatric referrals. Besides exploring other aspects of the referral process, researchers at the three settings should carry out follow-up studies to assess the impact of psychiatric consultations on the global outcome of referred consultees. abstract_id: PUBMED:37528362 Diagnosing knee osteoarthritis in patients, differences between general practitioners and orthopedic surgeons: a retrospective cohort study. Background: Knee complaints are one of the most common reasons to consult a general practitioner in the Netherlands and contribute to the increasing burden on general practitioners. A proportion of patients referred to orthopedic outpatient clinics are potentially referred unnecessarily. We believe osteoarthritis is not always considered by general practitioners as the cause of atraumatic knee complaints. This may impede early recognition and timely care of osteoarthritis complaints and lead to unnecessary referrals. Methods: The aim of this study was to compare the frequency of (differential) diagnosis of osteoarthritis mentioned in referral letters of general practitioners with the frequency of osteoarthritis mentioned as the orthopedic diagnosis at the outpatient clinic. We therefore conducted a retrospective cohort study based on data collected from referral letters and the corresponding outpatient clinic reports of patients aged 45 years or older with atraumatic knee complaints referred to a regional hospital in Nijmegen, The Netherlands in the period from 1-6-2019 until 1-01-2020. Results: A total of 292 referral letters were included. In younger patients (45-54 years), osteoarthritis was mentioned less frequently and meniscal lesions more frequently in referral letters when compared to diagnoses made at the outpatient clinic. Differences in the differential diagnosis of osteoarthritis as well as meniscal lesions between orthopedic surgeons and general practitioners were found (both p < 0.001, McNemar). Matching diagnoses were present in 58.2% when all referral letters were analyzed (n = 292) and 75.2% when only referrals containing a differential diagnosis were analyzed (n = 226).
Matching diagnoses were present in 31.6% in the younger age categories (45-54 years). A linear trend showing fewer matching diagnoses in younger patient categories was observed (p < 0.001). Conclusions: Osteoarthritis was mentioned less frequently among the differential diagnoses in general practitioner referral letters than it was diagnosed at the outpatient clinic, especially in younger patients (45-54 years). Matching diagnoses were also evidently less frequent in younger than in older patients, partly explained by underdiagnosis of osteoarthritis in younger patients in this cohort. Better recognition of osteoarthritis in younger patients and changing the diagnostic approach of general practitioners might improve efficacy in knee care. Future research should focus on the effectiveness of musculoskeletal triage, the need for multidisciplinary educational programs for patients and the promotion of conservative treatment modalities among general practitioners. abstract_id: PUBMED:32407577 The effect of a dermato-oncological training programme on the diagnostic skills and quality of referrals for suspicious skin lesions by general practitioners. Background: The rising incidence rates of skin cancer (SC) lead to an enormous burden on healthcare systems. General practitioners (GPs) might play an important part in SC care, but research has shown poor clinical recognition of SC, leading to a high rate of potentially unnecessary referrals. Objectives: The aim of this study was to evaluate if a dermato-oncological training programme (DOTP) for GPs improved their diagnostic skills and quality of referrals. Methods: Out of 194 GPs in the Nijmegen area, 83 (42·8%) followed a DOTP on SC. Referrals from both a trained cohort (TC) and two cohorts of untrained GPs [untrained present cohort (UPC) and untrained historical cohort (UHC)] were included. Data on diagnostic skills, quality of referrals and the number of potentially unnecessary referrals were evaluated. Results: A total number of 1662 referrals were analysed. The referral diagnosis was correct more often in the TC (70·3%) compared with the UPC (56·2%; P < 0·001) and the UHC (51·6%; P < 0·001). Furthermore, the TC also provided a better lesion description, mentioned a diagnosis more often in their referral letters and more often performed diagnostics before referral. In addition, fewer potentially unnecessary referrals were identified in the TC compared with the UPC (62·7% vs. 73·7%; P < 0·001) and the UHC (75·2%; P < 0·001). Conclusions: GPs who followed a DOTP had better diagnostic skills and quality of referrals than untrained GPs, leading to fewer potentially unnecessary referrals. This might enable more efficient use of the limited capacity in secondary dermatological care and consequently lead to lower healthcare costs. abstract_id: PUBMED:37599892 Psychiatric referrals to the general hospital emergency department: are we being effective? Introduction: General hospital emergency departments (GHEDs) are notoriously overcrowded. This is caused, in part, by ineffective referrals, that is to say referrals that do not require medical examination or other interventions in the context of a general hospital. This study aims to investigate the contribution of psychiatric referrals to this issue, to identify potential determinants of these referrals and to offer means to reduce them. Materials And Methods: Retrospective data were collected from psychiatric admission files within a GHED of a tertiary-care city hospital over a 1-year period.
Two experienced clinicians separately reviewed each file to determine rationale of referrals according to predetermined criteria. Results: A total of 2,136 visits included a psychiatric examination, 900 (42.1%) were determined "effective," and 1,227 (57.4%) were deemed "potentially ineffective." The leading causes for potentially ineffective referrals to a GHED were psychiatric illness exacerbation (43.4%), and suicidal ideations (22%). Most referrals (66.9%) were initiated by the patient or their family, and not by a primary care physician or psychiatrist. Conclusion: More than half of the psychiatric referrals did not necessarily require the services of a general hospital, and may be more suitable for referral to a dedicated psychiatric facility. Ineffective referrals to the GHED pose a burden on general hospital resources, and may be less effective for the psychiatric patients. This calls for clear guidelines for the provision of optimal emergency treatment for mental-health patients. abstract_id: PUBMED:10448998 Use of diagnostic imaging services in the Central Region by general practitioners. Aim: To record general practitioners use of diagnostic imaging facilities in areas with different levels of access. Method: A random sample of general practitioners from three different areas kept a prospective record for one month of all patients referred to diagnostic imaging facilities, the outcomes of these investigations and the total number of patients seen. Results: Referrals for diagnostic imaging services were significantly different between the localities (p = 0.014). The area with the least restrictions on access and the highest referral rate showed the greatest variation between doctors. There was a significant difference (p = 0.047) between the localities for patients who could not be referred because of restricted access and the cost of the services. The majority (68%) of patients who were unable to access these services were referred to the hospital for admission, for an outpatient appointment or to the accident and emergency department. There was little difference between the localities in either the presence or absence of pathology, or in the proportion of patients where the diagnosis was changed as a result of the diagnostic imaging investigation. Conclusion: Improved access to diagnostic imaging services for patients of general practitioners would result in better patient management and prevent unnecessary hospital referrals. abstract_id: PUBMED:2558207 A diagnostic centre for general practitioners: results of individual feedback on diagnostic actions. A diagnostic centre, managing diagnostic tests for general practice, can improve the service provided by primary health care and the communication between general practitioners and specialists. In addition, it can evaluate the use and misuse of tests. This paper describes the work of a diagnostic centre in the Netherlands serving 80 general practitioners. Following the introduction of individual feedback to general practitioners on their use of diagnostic tests there was a decrease in the number of requests for tests. Answer: No, a reduction in general practitioners' (GPs) use of diagnostic tests does not necessarily lead to more hospital referrals. 
A study conducted among 64 general practitioners in the Maastricht region of the Netherlands, where routine feedback on test ordering behavior is provided by the diagnostic coordinating center, showed that a reduction in diagnostic test use was not accompanied by a higher hospital referral rate. In fact, good responders to feedback had decreased hospital referral rates in contrast to increased rates for poor responders (PUBMED:7619582). This suggests that reducing the volume of GPs' diagnostic tests through feedback can be associated with fewer hospital referrals rather than more.
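One way to picture the responder comparison in the Maastricht study is a per-GP before/after contrast of referral rates between feedback responders and non-responders. The sketch below is only an illustration under assumed structure: the numbers are simulated, the responder labels are arbitrary, and the non-parametric test is a convenience choice rather than the analysis actually reported in PUBMED:7619582.

```python
# Illustrative sketch: simulated GP-level data; the original study's definitions
# of "good" and "poor" responders and its statistics are not reproduced here.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
n_gps = 64
gps = pd.DataFrame({
    "responder": rng.choice(["good", "poor"], n_gps),
    "referrals_before": rng.poisson(30, n_gps),  # first referrals per 1,000 patients, pre-feedback
    "referrals_after": rng.poisson(30, n_gps),   # same measure after feedback on test ordering
})
gps["change"] = gps["referrals_after"] - gps["referrals_before"]

good = gps.loc[gps["responder"] == "good", "change"]
poor = gps.loc[gps["responder"] == "poor", "change"]

# Two-sided non-parametric comparison of referral-rate changes between groups.
stat, p = mannwhitneyu(good, poor, alternative="two-sided")
print(gps.groupby("responder")["change"].describe())
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```

A negative median change in the "good" responder group alongside a positive change in the "poor" group would correspond to the pattern the study describes.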
Instruction: Biliary reconstruction in liver transplantation: is a biliary tutor necessary? Abstracts: abstract_id: PUBMED:18053502 Biliary reconstruction in liver transplantation: is a biliary tutor necessary? Objectives: To assess the incidence and type of biliary complications in liver transplantation after biliary reconstruction with or without a biliary tutor. Material And Method: A prospective, non-randomized study of 128 consecutive patients undergoing elective liver transplantation was performed. Retransplantations, emergency transplantations, hepaticojejunostomy and patients who died within 3 months of causes other than biliary complications were excluded. Group I (n = 64) underwent termino-terminal choledochocholedochostomy with a Kehr tube and group II (n = 64) underwent choledochocholedochostomy without Kehr tube. Complications, therapeutic procedures, reoperations and survival free of biliary complications were analyzed. Results: The overall rate of biliary complications was 15% (17% in group I and 14% in group II). Types of complication (overall and in groups I and II, respectively) consisted of fistulas 4% (6% vs. 3%), stenosis 8% (4% vs. 12%), and Kehr dysfunction 3%. The mean number of therapeutic procedures, including endoscopic retrograde cholangiopancreatography, percutaneous transhepatic cholangiography, trans-Kehr cholangiography and drainage of collections, was 2.1 vs. 2 per complicated patient. The overall reoperation rate was 5% (2% vs. 9%) (p < 0.05). One-year survival free of biliary complications was 85% vs. 82% (Log Rank = 0.5). Conclusions: No statistically significant differences were found in complications after choledocho-choledocho anastomosis with or without a biliary tutor. However, the patient group that did not receive a biliary tutor required more complex procedures for treatment of complications, as well as a greater number of reoperations. abstract_id: PUBMED:31228080 Is Surgery Necessary? Endoscopic Management of Post-transplant Biliary Complications in the Modern Era. Background: Biliary complications are common following liver transplantation (LT) and traditionally managed with Roux-en-Y hepaticojejunostomy. However, endoscopic management has largely supplanted surgical revision in the modern era. Herein, we evaluate our experience with the management of biliary complications following LT. Methods: All LTs from January 2013 to June 2018 at a single institution were reviewed. Patients with biliary bypass prior to, or at LT, were excluded. Patients were grouped by biliary complication of an isolated stricture, isolated leak, or concomitant stricture and leak (stricture/leak). Results: A total of 462 grafts were transplanted into 449 patients. Ninety-five (21%) patients had post-transplant biliary complications, including 56 (59%) strictures, 28 (29%) leaks, and 11 (12%) stricture/leaks. Consequently, the overall stricture, leak, and stricture/leak rates were 12%, 6%, and 2%, respectively. Endoscopic management was pursued for all stricture and stricture/leak patients, as well as 75% of leak patients, reserving early surgery only for those patients with an uncontrolled leak and evidence of biliary peritonitis. Endoscopic management was successful in the majority of patients (stricture 94%, leak 90%, stricture/leak 90%). Only six patients (5.6%) received additional interventions-two required percutaneous transhepatic cholangiography catheters, three underwent surgical revision, and one was re-transplanted. 
Conclusions: Endoscopic management of post-transplant biliary complications resulted in long-term resolution without increased morbidity, mortality, or graft failure. Successful endoscopic treatment requires collaboration with a skilled endoscopist. Moreover, multidisciplinary transplant teams must develop treatment protocols based on the local availability and expertise at their center. abstract_id: PUBMED:9512802 Biliary atresia and biliary cysts. The authors present a review of the classification, aetiology, presentation, treatment and long-term outcome of children and adults with biliary atresia and choledochal cyst disease. Biliary atresia should be suspected in any infant with jaundice beyond the second week of life. Although the aetiology and pathogenesis remain unclear, early management with portoenterostomy has significantly improved the course of this disease. Recent advances in immunosuppression have made liver transplantation a valuable and necessary adjunct to biliary bypass. With choledochal cyst disease, adults, unlike children, often present with acute biliary tract symptoms or pancreatitis. The treatment of choice remains extrahepatic cyst excision and biliary bypass. This treatment has excellent long-term results that minimize the development of malignancy. abstract_id: PUBMED:2183702 Biliary atresia and its complications. Infants with idiopathic perinatal fibroinflammatory obliteration of the lumen of the extrahepatic biliary tree ("biliary atresia") invariably died of biliary cirrhosis before surgical techniques were devised to permit drainage of bile into the duodenum. Survival rates in operated patients now approach 75 percent at 10 years. While definitive diagnosis of biliary atresia without the use of cholangiography at laparotomy is difficult, because other disorders have similar clinical features, early diagnosis is important. The earlier surgery is undertaken, the more successful it is. With delay, irreversible changes occur in the liver that produce portal hypertension. This and liver failure eventually make liver transplantation necessary even in some operated patients. Hepatic disease associated with biliary atresia is in part due to delay in diagnosis, but complications of surgical therapy, such as ascending cholangitis, also play a role. With prolonged survival and as numbers of liver transplant recipients rise, new therapy-related complications, such as those associated with immunosuppression, will become more important in surgically treated biliary atresia. abstract_id: PUBMED:12655251 Cholangiopathy and the biliary cast syndrome. Biliary casts are uncommon but are more frequently described in liver transplant patients. To our knowledge there have been only two published cases describing biliary casts in non-liver transplant patients. The aetiology of cast development is not fully known but is likely to be multifactorial with the presence of biliary sludge being a prerequisite for cast formation. Bile duct damage and ischaemia, biliary infection, fasting, parenteral nutrition, abdominal surgery and possibly other factors, are all thought to be implicated in cast pathogenesis via sludge development. Endoscopic management has been shown to be effective in a minority of cases and may act as a temporary measure in others but surgical removal of casts is usually necessary. 
With a greater understanding and improvement in liver transplant surgical techniques and the management of post-operative complications, the development and severity of biliary sludge and casts have decreased. abstract_id: PUBMED:28889961 Analysis of the reversibility of biliary cirrhosis in young rats submitted to biliary obstruction. Background/purpose: Biliary atresia and other liver biliary obstructions are relevant conditions in pediatric surgery due to their progression to biliary cirrhosis and indication for liver transplantation. It is known that the period during which biliary obstruction persists determines the development of cirrhosis and its reversibility after a biliary drainage procedure. However, no time or histological markers of biliary cirrhosis reversibility have been established. Materials And Methods: One hundred and twenty-nine young Wistar rats underwent surgery for ligation of the common bile duct and were maintained for up to 8 weeks. A subset of these animals underwent biliary drainage surgery at 2, 3, 4, 5, or 6 weeks after the initial procedure. After cyst formation at the site of obstruction, cyst-jejunal anastomosis was performed to restore bile flow. After biliary obstruction and drainage, liver samples were collected for histological and molecular analysis of the genes responsible for collagen deposition and fibrosis. Results: The mortality rates were 39.8% and 56.7% after the first and second procedures, respectively. Ductular proliferation (p=0.001) and collagen deposition increased according to the period under obstruction (p=0.0001), and both alterations were partially reduced after biliary drainage. There were no significant differences in the values of desmin and α-actin according to the period during which the animals remained under biliary obstruction (p=0.09 and p=0.3, respectively), although increased values of transforming growth factor beta 1 (TGFβ1) occurred after 8 weeks (p=0.000). Desmin levels decreased, and α-actin and TGFβ1 levels increased according to the period under obstruction. The molecular alterations were partially reversed after biliary drainage. Conclusions: The histologic and molecular changes in the liver parenchyma promoted by biliary obstruction in the young animal can be partially reversed by a biliary drainage procedure. abstract_id: PUBMED:26542028 Percutaneous Treatment of Intrahepatic Biliary Leak: A Modified Occlusion Balloon Technique. Purpose: To report a novel modified occlusion balloon technique to treat biliary leaks. Methods: A 22-year-old female patient underwent liver transplantation with biliary-enteric anastomosis. She developed thrombosis of the common hepatic artery and extensive ischemia in the left hepatic lobe. Resection of segments II and III was performed, and a biliary-cutaneous leak originating at the resection plane was identified in the early postoperative period. Initial treatment with percutaneous transhepatic drainage was unsuccessful. Therefore, an angioplasty balloon was coaxially inserted within the biliary drain and positioned close to the leak. Results: The fistula output abruptly decreased after the procedure and stopped on the 7th day. At the 3-week follow-up, cholangiography revealed complete resolution of the leakage. Conclusion: This novel modified occlusion balloon technique was effective and safe. However, greater experience and more cases are necessary to validate the technique.
abstract_id: PUBMED:36038334 Emphasis on the treatment of biliary liver disease Biliary liver disease refers to a group of diseases in which lesions of the biliary system cause liver damage. Early detection and standardized treatment can improve patient outcomes and prolong survival. Treatment includes pharmacological, non-pharmacological, and combination therapy. However, because a variety of diseases with different treatment characteristics are involved, multidisciplinary cooperation is required when necessary, and several problems in treatment remain to be solved. abstract_id: PUBMED:18557140 Iatrogenic lesions of the biliary tract. Lesions of the biliary tract can occur during different types of surgery, not only biliary surgery itself. Five types of surgery are mainly responsible for these events: biliary surgery, liver surgery, portal hypertension surgery, pancreatic surgery and gastric surgery. We will concentrate on biliary surgery itself, excluding the specific problems of liver transplant surgery. Surgery for biliary lithiasis, tumours, inflammatory conditions, bilio-digestive anastomosis, placement of prostheses or tumour intubation, and biliary re-interventions can all cause complications specific to the biliary tree. abstract_id: PUBMED:25362914 Biliary atresia: a severe illness Biliary atresia is a serious disease of unknown cause affecting newborns. Inflammation and progressive destruction of the bile ducts lead to jaundice, dark urine and acholia between the second and sixth weeks of life. Neonatal cholestasis can be due to several different diseases; prompt diagnosis of biliary atresia and early referral for surgical treatment are therefore necessary to allow restoration of bile flow. Eighty percent of children normalize serum bilirubin after portoenterostomy (Kasai operation) if they are operated on before 45 days of life. When the Kasai operation fails, liver transplantation is the only possibility. Biliary atresia must be diagnosed before the first month of life and must be considered a surgical emergency. Answer: The question of whether a biliary tutor is necessary in biliary reconstruction during liver transplantation has been addressed in a study where the incidence and type of biliary complications were assessed after biliary reconstruction with or without a biliary tutor (PUBMED:18053502). In this prospective, non-randomized study of 128 consecutive patients undergoing elective liver transplantation, two groups were compared: Group I (n = 64) underwent termino-terminal choledochocholedochostomy with a Kehr tube (biliary tutor), and Group II (n = 64) underwent choledochocholedochostomy without a Kehr tube. The results showed that the overall rate of biliary complications was similar between the two groups (17% in Group I and 14% in Group II). However, the group without a biliary tutor required more complex procedures for the treatment of complications and a greater number of reoperations. Despite these findings, there were no statistically significant differences in one-year survival free of biliary complications between the two groups (85% vs. 82%). In the modern era, endoscopic management has largely supplanted surgical revision for post-transplant biliary complications (PUBMED:31228080). This study found that endoscopic management was successful in the majority of patients with biliary complications following liver transplantation, including strictures and leaks.
Only a small percentage of patients required additional interventions such as percutaneous transhepatic cholangiography catheters, surgical revision, or re-transplantation. Based on these studies, a biliary tutor may not be strictly necessary, since there are no significant differences in the rate of biliary complications or in survival free of biliary complications; however, its absence may lead to more complex procedures and a greater number of reoperations. Endoscopic management has become a successful alternative to surgical revision for managing biliary complications post-transplantation.
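The "survival free of biliary complications" comparison reported in PUBMED:18053502 is a time-to-event contrast between the Kehr-tube and no-tube groups. The following sketch shows how such a Kaplan-Meier and log-rank comparison could be run with the lifelines library; the follow-up times are simulated and the group labels are placeholders, so it is a sketch of the method, not a re-analysis of the transplant cohort.

```python
# Illustrative sketch with simulated follow-up data; group sizes mirror the
# 64-per-arm design above, but event times are invented.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n_per_group = 64

def simulate(group):
    t = rng.exponential(60, n_per_group)        # months to first biliary complication
    event = t <= 12                             # observed within one year of follow-up
    return pd.DataFrame({
        "group": group,
        "time": np.minimum(t, 12),              # censor at 12 months
        "event": event.astype(int),             # 1 = complication observed
    })

df = pd.concat([simulate("Kehr tube"), simulate("no tube")], ignore_index=True)

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["time"], event_observed=sub["event"], label=name)
    surv_12m = kmf.survival_function_at_times(12).iloc[0]
    print(f"{name}: 12-month complication-free survival = {surv_12m:.2f}")

a = df[df["group"] == "Kehr tube"]
b = df[df["group"] == "no tube"]
res = logrank_test(a["time"], b["time"],
                   event_observed_A=a["event"], event_observed_B=b["event"])
print(f"log-rank p-value = {res.p_value:.3f}")
```

A non-significant log-rank result, as in the study (p = 0.5), would indicate no detectable difference in complication-free survival between the two reconstruction strategies.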
Instruction: Child protection in Sweden: are routine assessments reliable? Abstracts: abstract_id: PUBMED:27559222 The Voice of the Child in Social Work Assessments: Age-Appropriate Communication with Children. This article describes a child-centred method for engaging with children involved in the child protection and welfare system. One of the primary arguments underpinning this research is that social workers need to be skilled communicators to engage with children about deeply personal and painful issues. There is a wide range of research that maintains play is the language of children and the most effective way to learn about children is through their play. Considering this, the overarching aim of this study was to investigate the role of play skills in supporting communication between children and social workers during child protection and welfare assessments. The data collection was designed to establish the thoughts and/or experiences of participants in relation to a Play Skills Training (PST) programme designed by the authors. The key findings of the study reveal that the majority of social work participants rate the use of play skills in social work assessments as a key factor to effective engagement with children. Of particular importance, these messages address how social work services can ensure in a child-centred manner that the voice of children is heard and represented in all assessments of their well-being and future care options. abstract_id: PUBMED:29637146 Audit of child maltreatment medical assessments in a culturally diverse, metropolitan setting. Objective: Child maltreatment (CM) is a major public health problem globally. While there is evidence for the value of medical examination in the assessment of CM, little is known about the quality of clinical assessments for CM. South Western Sydney (SWS) has a large metropolitan population with many vulnerable subgroups. We aimed to describe acute presentations of CM in SWS over a 3-year period-with a focus on the quality of the clinical assessments. We wanted to determine whether the cases assessed fulfilled established minimum standards for clinical assessment of CM and whether the assessments were performed in a child-friendly manner. Design: We gathered data from the acute child protection database on all children <16 years referred for assessment between 2013 and 2015. We performed simple descriptive analysis on the data. We measured the assessment, report writing and follow-up against criteria for minimum standards for CM assessments, and identified whether assessments were child-friendly from available clinical information. Results: There were 304 children referred; 279 seen for acute assessment; most (73%) were for sexual abuse, 75 (27%) were for physical abuse/neglect. Over half the assessments identified other health concerns; joint assessments performed by paediatric and forensic doctors were better at identifying these health concerns than solo assessments. Most assessments were multidisciplinary and used protocols; half were not followed up; a third were performed after-hours and a third had no carer present during assessments. Conclusions: We identified strengths and weaknesses in current CM assessments in our service. Locally relevant standards for CM assessments are achievable in the acute setting, more challenging is addressing appropriate medical and psychosocial follow-up for these children. 
While we have established baseline domains for measuring a child-friendly approach to CM assessments, more should be done to ensure these vulnerable children are assessed in a timely, child-friendly manner, with appropriate follow-up. abstract_id: PUBMED:33071905 Cognitive Assessment of Children Who Are Deafblind: Perspectives and Suggestions for Assessments. The overall goal of a cognitive assessment is to improve communication, learning, and quality of life for a child who is deafblind. This article will give a brief description and perspective on different evaluation approaches as a basis for reliable cognitive assessments and offer suggestions on how to improve the quality of a cognitive assessment in our clinical practice. The assessor should be aware of the limitations of norm-referenced tests if standardized normative measures are applied to evaluate the cognitive functions of a child who is deafblind. However, if engaging a child with deafblindness in a standardized normative assessment, special considerations and assessment concessions would be required. Furthermore, key issues on how to improve the quality of a cognitive assessment by affording multiple assessment pathways for cognitive assessments will be addressed. Particular attention is given to the following assessment approaches: multi-method, multi-informant assessment, ecological assessment, and dynamic assessment. The use of multiple assessment pathways is necessary to reveal the genuine cognitive abilities and potentials of a child with deafblindness. abstract_id: PUBMED:17062480 Child protection in Sweden: are routine assessments reliable? Aim: To study the validity of the decision not to investigate mandatory reports of suspected child maltreatment. Methods: Written files of 220 reports indicating possible child maltreatment were analysed and re-evaluated. As a measure of the justification for the decisions, a 5-y follow-up study was done. Results: We determined that 76% of the reports still indicated child maltreatment after the initial assessment was done. In the follow-up study, 45% of the children had been investigated. The social worker used the family as the only source of information in 74% of the cases, in 6% someone outside the family was contacted, and in 11% no further information in addition to the report was collected. In 9%, data on information sources were missing. Conclusion: The findings are rather discouraging, as they challenge the belief that a report is a means of ensuring that maltreatment does not continue. The study shows that, depending upon the way in which the initial assessments are made, maltreated children may run a risk of not being identified, even though the maltreatment has been reported. This suggests that there may be a need for national guidelines concerning the reporting of maltreatment. abstract_id: PUBMED:28499473 Child protection, a question of society Child protection is a sector undergoing major changes in which local authorities play a central role. There are several different types of child protection measures covering different needs: monitoring in the home, foster family, placement in a children's home or a stay in a mother-and-baby centre for young mothers. For all these children and adolescents, leaving care is a key moment which requires support. abstract_id: PUBMED:32680347 Identifying the interactional processes in the first assessments in child mental health. 
Background: A comprehensive assessment is essential to contemporary practice in child and adolescent mental health. In addition to determining diagnosis and management, it is seen as important for clinical engagement and forming a therapeutic relationship. However, there has been little research on the processes which occur during this interaction, particularly in first assessments. Method: Twenty-eight naturally occurring child mental health initial assessments were video recorded and subjected to the basic principles of the conversation analytic method. Results: Several processes were identified in a typical child and adolescent mental health assessment. These included introductions, reasons for attendance, problem presentation, decision-making and session closure. Conclusions: Initial assessments provide a platform for all future engagement with services and an understanding of the processes occurring within this setting is important for the eventual outcomes, particularly in respect to new ways of working such as the Choice and Partnership Approach (CAPA). abstract_id: PUBMED:19846995 Child protection medical assessments: why do we do them? Introduction: Child protection guidelines highlight the importance of medical assessments for children suspected of having been abused. Aim: To identify how medical assessments might contribute to a diagnosis of child abuse and to the immediate outcome for the child. Method: Review of all notes pertaining to medical assessments between January 2002 and March 2006. Results: There were 4549 child protection referrals during this period, of which 848 (19%) proceeded to a medical examination. 742 (88%) case notes were reviewed. Of the medical examinations, 383 (52%) were for alleged physical abuse, 267 (36%) for sexual abuse and 20 (3%) for neglect. 258 (67%) of the physical abuse cases were considered to have diagnostic or supportive findings as compared to 61 (23%) of the sexual abuse cases (chi2=146.31, p<0.001). In diagnostic or supportive examinations or where other potentially abusive concerns were identified, 366 (73%) proceeded to further multi-agency investigation and 190 (41%) to case conference. 131 (69%) of these resulted in the registration of the child on the child protection register. Other health concerns were identified in 121 (31%) of physical and 168 (63%) of sexual abuse cases. Conclusion: In this case series, 465 (63%) out of 742 examinations showed signs diagnostic or supportive of alleged abuse or highlighted other abusive concerns. This endorses the view that medical examination is an important component in the assessment of child abuse as it provides information to support or refute an allegation and helps to identify the health and welfare needs of vulnerable children. abstract_id: PUBMED:35819167 Social workers' perceptions of assessing the parental capacity of parents with intellectual disabilities in child protection investigations. Parental capacity is one of the main aspects assessed by social workers as part of child protection investigations. The aim of this study is to explore the social workers' perceptions of assessing the parental capacity of parents with intellectual disabilities in child protection investigations. Four focus group interviews were conducted with twelve social workers in May-October 2021. Data were analysed using an inductive, conventional qualitative content analysis. One overarching theme, "Parental capacity in relation to the child's needs was assessed, not disability of parents" was created. 
Three main categories and ten sub-categories were identified exploring the social workers' perceptions of assessing parental capacity. The study shows that social workers perceive the assessment of parental capacity in parents with intellectual disabilities as demanding and complex, in which the assessment of what is good enough is perceived as the most difficult task. abstract_id: PUBMED:34266552 Mother and Child Protection: constantly changing missions Since its creation in 1945, Mother and Child Protection (PMI) has undergone many transformations. Initially, after the war, it was subject to the imperative need to offer children real protection, and then it was built up by reshaping its principles of action in line with the various social and economic contingencies. Linked to the notion of the child's needs, but also to the major idea of prevention, the PMI has never ceased to adjust to the profile of families in order to provide them with an appropriate response. abstract_id: PUBMED:33427578 Attachment goes to court: child protection and custody issues. Attachment theory and research are drawn upon in many applied settings, including family courts, but misunderstandings are widespread and sometimes result in misapplications. The aim of this consensus statement is, therefore, to enhance understanding, counter misinformation, and steer family-court utilisation of attachment theory in a supportive, evidence-based direction, especially with regard to child protection and child custody decision-making. The article is divided into two parts. In the first, we address problems related to the use of attachment theory and research in family courts, and discuss reasons for these problems. To this end, we examine family court applications of attachment theory in the current context of the best-interest-of-the-child standard, discuss misunderstandings regarding attachment theory, and identify factors that have hindered accurate implementation. In the second part, we provide recommendations for the application of attachment theory and research. To this end, we set out three attachment principles: the child's need for familiar, non-abusive caregivers; the value of continuity of good-enough care; and the benefits of networks of attachment relationships. We also discuss the suitability of assessments of attachment quality and caregiving behaviour to inform family court decision-making. We conclude that assessments of caregiver behaviour should take center stage. Although there is dissensus among us regarding the use of assessments of attachment quality to inform child custody and child-protection decisions, such assessments are currently most suitable for targeting and directing supportive interventions. Finally, we provide directions to guide future interdisciplinary research collaboration. Answer: The reliability of routine assessments in child protection in Sweden has been called into question by a study that analyzed written files of 220 reports indicating possible child maltreatment. The study found that 76% of the reports still indicated child maltreatment after the initial assessment was done. However, during the 5-year follow-up, only 45% of the children had been investigated. The study also revealed that in 74% of the cases, the social worker used the family as the only source of information, and in 11% of the cases, no further information was collected beyond the report. 
These findings indicate that maltreated children may not be identified even though the maltreatment has been reported, pointing to a need for national guidelines concerning the reporting of maltreatment (PUBMED:17062480). Taken together, the evidence suggests that routine assessments in Sweden may not reliably identify and protect children from maltreatment, and that improvements in the assessment process may be necessary to ensure the safety and well-being of children.
Instruction: Extrahepatic biliary anatomy at laparoscopic cholecystectomy: is aberrant anatomy important? Abstracts: abstract_id: PUBMED:8378825 The importance of extrahepatic biliary anatomy in preventing complications at laparoscopic cholecystectomy. Major biliary complications of laparoscopic cholecystectomy may be prevented by an understanding of extrahepatic biliary ductal and arterial anatomic relationships. The common patterns of anatomic variations important to the surgeon performing laparoscopic cholecystectomy are reviewed with respect to recently reported biliary injury during this procedure. Recommendations for delineating biliary anatomy and avoiding laparoscopic complications are reviewed. abstract_id: PUBMED:36595208 Biliary Anatomy Quiz: Test Your Knowledge. One of the most common surgical procedures performed in the USA is the cholecystectomy. Understanding biliary anatomy, which includes the gallbladder and extrahepatic biliary tree, is essential for every general surgeon. This quiz includes clinically relevant anatomy and radiology questions for the current and future surgeon at every level of training, and we hope it will be a useful adjunct to one's review. abstract_id: PUBMED:28585107 A Technique to Define Extrahepatic Biliary Anatomy Using Robotic Near-Infrared Fluorescent Cholangiography. Background: Bile duct injury is a rare but serious complication of minimally invasive cholecystectomy. Traditionally, intraoperative cholangiogram has been used in difficult cases to help delineate anatomical structures, however, new imaging modalities are currently available to aid in the identification of extrahepatic biliary anatomy, including near-infrared fluorescent cholangiography (NIFC) using indocyanine green (ICG).1-5 The objective of the study was to evaluate if this technique may aid in safe dissection to obtain the critical view. Methods: Thirty-five consecutive multiport robotic cholecystectomies using NIFC with ICG were performed using the da Vinci Firefly Fluorescence Imaging System. All patients received 2.5 mg ICG intravenously at the time of intubation, followed by patient positioning, draping, and establishment of pneumoperitoneum. No structures were divided until the critical view of safety was achieved. Real-time toggling between NIFC and bright-light illumination was utilized throughout the case to define the extrahepatic biliary anatomy. Results: ICG was successfully administered to all patients without complication, and in all cases the extrahepatic biliary anatomy was able to be identified in real-time 3D. All procedures were completed without biliary injury, conversion to an open procedure, or need for traditional cholangiography to obtain the critical view. Specific examples of cases where x-ray cholangiography or conversion to open was avoided and NIFC aided in safe dissection leading to the critical view are demonstrated, including (1) evaluation for aberrant biliary anatomy, (2) confirmation of non-biliary structures, and (3) use in cases where the infundibulum is fused to the common bile duct. Conclusion: NIFC using ICG is demonstrated as a useful technique to rapidly identify and aid in the visualization of extrahepatic biliary anatomy. Techniques that selectively utilize this technology specifically in difficult cases where the anatomy is unclear are demonstrated in order to obtain the critical view of safety. abstract_id: PUBMED:8378821 Laparoscopic anatomy of the biliary tree. 
A thorough knowledge of the anatomy of the extrahepatic biliary tree and its frequent anatomic variations is essential for performance of a safe laparoscopic cholecystectomy. The surgeon should have an appreciation for the distortions in the anatomy as a result of retraction on the gallbladder and how the direction of retraction alters the spatial relationships between the cystic duct and common bile duct. The steps in the operative procedure have been outlined to provide good exposure and optimize the identification of structures. Good exposure will enable the surgeon to identify anatomic variants; however, a thorough knowledge of these variants is necessary for safe performance of the operation. abstract_id: PUBMED:15943723 Extrahepatic biliary anatomy at laparoscopic cholecystectomy: is aberrant anatomy important? Background: The prevention of major duct injury at cholecystectomy relies on the accurate dissection of the cystic duct and artery, and avoidance of major adjacent biliary and vascular structures. Innumerable variations in the anatomy of the extrahepatic biliary tree and associated vasculature have been reported from radiographical and anatomical studies, and are cited as a potential cause of bile duct injury at cholecystectomy. Methods: A photographic study of the dissected anatomy of 186 consecutive cholecystectomies was undertaken and each photo analysed to assess the position of the cystic duct and artery, the common bile duct and any anomalous structures. Results: The anatomy in the region of the gallbladder neck was relatively constant. Anatomical variations were uncommon and anomalous ducts were not seen. Vascular variations were the only significant abnormalities found in the present series. Conclusion: Anatomy in the region of the gallbladder neck varies mostly in vascular patterns. Aberrant ducts or duct abnormalities are rarely seen during cholecystectomy, highlighting the principle that careful dissection and identification are the key to safe cholecystectomy. abstract_id: PUBMED:24679417 Anatomy and embryology of the biliary tract. Working knowledge of extrahepatic biliary anatomy is of paramount importance to the general surgeon. The embryologic development of the extrahepatic biliary tract is discussed in this article as is the highly variable anatomy of the biliary tract and its associated vasculature. The salient conditions related to the embryology and anatomy of the extrahepatic biliary tract, including biliary atresia, choledochal cysts, gallbladder agenesis, sphincter of Oddi dysfunction, and ducts of Luschka, are addressed. abstract_id: PUBMED:32864277 Risk of Gallstone Formation in Aberrant Extrahepatic Biliary Tract Anatomy: A Review of Literature. The age-long mnemonic of '5Fs' (fat, female, fertile, forty, and fair) has traditionally been used in medical school instructions to describe the risk factors for gallstone disease. However, evidence suggests that aberrant extrahepatic biliary tract (EHBT) anatomy may contribute significantly to the risk of gallstone disease. This review explores the anatomy and embryological bases of EHBT variations as well as the prevalence of these variations. Also, we discuss the risk factors for gallstone formation and the relationship between gallstone disease and aberrant EHBT anatomy. abstract_id: PUBMED:10685157 Embryology, anatomy, and surgical applications of the extrahepatic biliary system.
As technology has improved and the ability to apply this technology in the surgical arena has grown, surgeons have been able to perform more sophisticated operative procedures. Hepatobiliary surgeons are now able to use laparoscopy, immunosuppressive drugs, and technical advances in cryosurgery to accomplish magnificent results. The success and safety of laparoscopic cholecystectomy, orthotopic liver transplantation, and trisegmentectomy for hepatic tumors depend on a high regard for and an accurate knowledge of the anatomy and some of the common embryologic anomalies of the biliary tree. The blood supply, ductal variations, and gallbladder anatomy of this area are often the source of major challenge to unprepared and unaware surgeons. The authors have attempted to stimulate an interest in, a respect for, and perhaps some desire to learn more about the important and fascinating anatomy of this region. abstract_id: PUBMED:31083792 Intraoperative detection of aberrant biliary anatomy via intraoperative cholangiography during laparoscopic cholecystectomy. Background: Laparoscopic cholecystectomy (LC) is the standard of treatment for symptomatic cholelithiasis. Although intraoperative cholangiography (IOC) is widely used as an adjunct to LC, there is still no worldwide consensus on the value of its routine use. Anatomical studies have shown that variations of the biliary tree are present in approximately 35% of patients with variations in right hepatic second-order ducts being especially common (15-20%). Approximately, 70-80% of all iatrogenic bile duct injuries are a consequence of misidentification of biliary anatomy. The purpose of this study was to assess the adequacy of and the reporting of IOCs during LC. Methods: IOCs obtained from 300 consecutive LCs between July 2014 and July 2016 were analysed retrospectively by two surgical trainees and confirmed by a radiologist. Biliary tree anatomy was classified from IOC films as described by Couinaud (1957) and correlated with documented findings. The accuracy of intraoperative reporting was assessed. Biliary anatomy was correlated to clinical outcome. Results: A total of 95% of IOCs adequately demonstrated biliary anatomy. Aberrant right sectoral ducts were identified in 15.2% of the complete IOCs, and 2.6% demonstrated left sectoral or confluence anomalies. Only 20.4% of these were reported intraoperatively. Bile leaks occurred in two patients who had IOCs (0.73%) and two who did not (7.4%). Conclusion: Surgeons generally demonstrate biliary anatomy well on IOC but reporting of sectoral duct variation can be improved. Further research is needed to determine whether anatomical variation is related to ductal injury. abstract_id: PUBMED:21206646 Anatomy relevant to cholecystectomy. This review discusses anatomical facts that are of relevance to the performance of a safe cholecystectomy. Misinterpretation of normal anatomy and anatomical variations contribute to the occurrence of major postoperative complications like biliary injuries following a cholecystectomy, the incidence being higher with laparoscopic cholecystectomy. A look at the basic anatomy is therefore important for biliary and minimally invasive surgeons. This includes normal anatomy and variations of the biliary apparatus as well as the arterial supply to the gallbladder. Specific anatomical distortions due to the laparoscopic technique, their contribution in producing injury and a preventive strategy based on this understanding are discussed. 
Investigative modalities that may help in assessing anatomy are considered. Newer insights into the role of anatomic illusions as well as the role of a system-based approach to preventing injuries is also discussed. Answer: Yes, aberrant extrahepatic biliary anatomy is important during laparoscopic cholecystectomy. Understanding the extrahepatic biliary anatomy, including its common variations, is crucial for surgeons to prevent major biliary complications during the procedure (PUBMED:8378825). Anatomical variations are cited as a potential cause of bile duct injury at cholecystectomy (PUBMED:15943723). Although anatomical variations in the region of the gallbladder neck are relatively uncommon, vascular variations are significant abnormalities that can be encountered, and careful dissection and identification are key to a safe cholecystectomy (PUBMED:15943723). Moreover, evidence suggests that aberrant extrahepatic biliary tract (EHBT) anatomy may contribute significantly to the risk of gallstone disease (PUBMED:32864277). Intraoperative cholangiography (IOC) has been used to detect aberrant biliary anatomy, which is present in approximately 35% of patients, and to prevent bile duct injuries that often result from misidentification of biliary anatomy (PUBMED:31083792). Additionally, new imaging modalities such as near-infrared fluorescent cholangiography (NIFC) using indocyanine green (ICG) have been demonstrated as useful techniques to rapidly identify and aid in the visualization of extrahepatic biliary anatomy, especially in difficult cases where the anatomy is unclear (PUBMED:28585107). In summary, aberrant anatomy is important in laparoscopic cholecystectomy because it can impact the risk of complications, including bile duct injuries and gallstone formation. Surgeons must be aware of these variations and utilize appropriate techniques to delineate the biliary anatomy to ensure the safety and success of the procedure.
Instruction: Umbilical cord prolapse. Is the time from diagnosis to delivery critical? Abstracts: abstract_id: PUBMED:9513874 Umbilical cord prolapse. Is the time from diagnosis to delivery critical? Objective: To review the peripartum clinical course of patients whose pregnancies are complicated by umbilical cord prolapse at a large teaching hospital and to evaluate the time from diagnosis to delivery and its impact on neonatal outcome. Study Design: The computerized perinatal database at Hartford Hospital was used to identify all cases of umbilical cord prolapse from 1988 to 1994. Each maternal and neonatal chart was reviewed, and the following variables were evaluated: gestational age, fetal presentation, status of membranes, time from diagnosis to delivery, mode of delivery, type of anesthesia and neonatal outcome. Results: A total of 65 cases of umbilical cord prolapse were identified from 26,545 deliveries. There were 48 cases of frank cord prolapse and 17 of occult prolapse. Cord prolapse occurred with artificial rupture of membranes in 51% of cases and in 74% of patients at term. There were 59 cesarean births and 6 vaginal deliveries (5 in the occult prolapse group). The mean time from diagnosis to delivery was 20 minutes (range, 2-77). None of the neonates with an occult cord prolapse had a five-minute Apgar score < 7, while 9 (19%) of the neonates with frank prolapse had a five-minute Apgar score < 7. In the frank prolapse group, there were five cases of neonatal asphyxia, all at a gestational age of > or = 36 weeks, and all were delivered by cesarean section. The mean delivery time for these affected neonates was 11 minutes (range, 5-16). Conclusion: Our review indicated that umbilical cord prolapse continues to be associated with poor perinatal outcomes in some cases despite emergency delivery in a modern, high-risk obstetric unit. The asphyxiated neonate had a shorter-than-average time from diagnosis to delivery, suggesting that the time from diagnosis to delivery may not be the only critical determinant of neonatal outcome, particularly with frank cord prolapse. Occult cord prolapse was associated with less perinatal morbidity when compared to frank prolapse. abstract_id: PUBMED:29405969 Ultrasound screening of umbilical cord abnormalities and delivery management. With the improvement of prenatal diagnoses of foetuses, the prevalence of stillbirth due to foetal anomaly after mid-gestation decreased, whereas that of stillbirth associated with umbilical cord factors tended to increase. Prenatal detection of umbilical cord abnormalities and appropriate management during the antenatal period and delivery based on the ultrasound diagnosis will improve the perinatal morbidity and mortality rates. In the present review, the strategy to reduce the incidence of foetal compromise due to umbilical cord problems is discussed considering the current knowledge regarding the physiological and pathological aspects of umbilical cord abnormalities. abstract_id: PUBMED:17990422 Umbilical cord prolapse--a review of diagnosis to delivery interval on perinatal and maternal outcome. Objective: To determine the significance of the Diagnosis to Delivery Interval (DDI) on perinatal outcome and maternal complications in patients with umbilical cord prolapse. Methods: This was a case series of 44 patients identified with "Umbilical cord prolapse" during a 10-year period at the Aga Khan University Hospital. 
Data was retrieved for gestational age, foetal presentation, DDI, incision to delivery time, delivery method, apgar score, birth weight and outcome, and maternal complications. The influence of DDI on perinatal mortality, apgar scores at 5 minutes, neonatal intensive care unit (NICU) admission and maternal complications resulting from mode of delivery with cord prolapse was assessed. Results: The hospital-based incidence of cord prolapse was 1.4 per 1000 deliveries. The mean DDI was 18 minutes, with 64% of women delivering within this time. Of the 13 (29%) neonates transferred to NICU with < 7 apgar score at 5 minutes, 10/13 (76%) delivered within the mean DDI. There were 4 perinatal deaths, of which 2 were term pregnancies with birth asphyxia, whereas 2 were < or = 28 weeks. There was no statistically significant impact of DDI on 5-minute apgar scores, perinatal mortality, NICU admissions and maternal complications in patients with cord prolapse. Conclusions: DDI may not be the only critical determinant of neonatal outcome. Most neonates with poor apgar scores had DDI within the average time. Artificial rupture of membranes should be performed cautiously with preexisting CTG trace abnormalities. In-utero resuscitative measures may help reduce further cord compression and improve outcome. abstract_id: PUBMED:32862427 Bradycardia-to-delivery interval and fetal outcomes in umbilical cord prolapse. Introduction: Umbilical cord prolapse is a major obstetric emergency associated with significant perinatal complications. However, there is no consensus on the optimal decision-to-delivery interval, as many previous studies have shown poor correlation between the interval and umbilical cord arterial blood gas or perinatal outcomes. We aim to investigate whether bradycardia-to-delivery or decision-to-delivery interval was related to poor cord arterial pH or adverse perinatal outcome in umbilical cord prolapse. Material And Methods: This was a retrospective study conducted at a university tertiary obstetric unit in Hong Kong. All women with singleton pregnancy complicated by cord prolapse during labor between 1995 and 2018 were included. Women were categorized into three groups. Group 1: persistent bradycardia; Group 2: any type of decelerations without bradycardia; and Group 3: normal fetal heart rate. The main outcome was cord arterial blood gas results of the newborns in different groups. Maternal demographic data and perinatal outcomes were reviewed. Correlation analysis between cord arterial blood gas result and time intervals including bradycardia-to-delivery, deceleration-to-delivery, and decision-to-delivery were performed for the different groups with Spearman test. Results: There were 34, 30, and 50 women in Groups 1, 2, and 3, respectively. Cord arterial pH and base excess did not correlate with decision-to-delivery interval in any of the groups, but they were inversely correlated with bradycardia-to-delivery interval in Group 1 (Spearman's ρ = -.349; P = .043 and Spearman's ρ = -.558; P = .001, respectively). The cord arterial pH drops at 0.009 per minute with bradycardia-to-delivery interval in Group 1 (95% CI 0.0180-0.0003). The risk of significant acidosis (pH < 7) was 80% when bradycardia-to-delivery interval was >20 minutes, and 17.2% when the interval was <20 minutes. Conclusions: There is significant correlation between bradycardia-to-delivery interval and cord arterial pH in umbilical cord prolapse with fetal bradycardia but not in cases with decelerations or normal heart rate.
The drop of cord arterial pH is rapid and urgent delivery is essential in such situations. abstract_id: PUBMED:34135046 Evaluation of the risk of umbilical cord prolapse in the second twin during vaginal delivery: a retrospective cohort study. Objective: This study aimed to evaluate the success rate of vaginal delivery, the reasons for unplanned caesarean delivery, the rate of umbilical cord prolapse and the risk of umbilical cord prolapse in twin deliveries. Design: Retrospective cohort study. Setting: Single institution. Participants: This study included 455 women pregnant with twins (307 dichorionic and 148 monochorionic) who attempted vaginal delivery from January 2009 to August 2018. The following criteria were considered for vaginal delivery: diamniotic twins, cephalic presentation of the first twin, no history of uterine scar, no other indications for caesarean delivery, no major structural abnormality in either twin and no fetal aneuploidy. Results: The rate of vaginal delivery of both twins was 89.5% (407 of 455), caesarean delivery of both twins was 7.7% (35 of 455) and caesarean delivery of only the second twin was 2.9% (13 of 455). The major reasons for unplanned caesarean delivery were arrest of labour and non-reassuring fetal heart rate pattern. The rate of umbilical cord prolapse in the second twin was 1.8% (8 of 455). Multivariate analysis revealed that abnormal umbilical cord insertion in the second twin (velamentous or marginal) was the only significant factor for umbilical cord prolapse in the second twin (OR, 5.05, 95% CI 1.139 to 22.472, p=0.033). Conclusions: Abnormal umbilical cord insertion in the second twin (velamentous or marginal) was a significant factor for umbilical cord prolapse during delivery. Antenatal assessment of the second twin's umbilical cord insertion using ultrasonography would be beneficial. abstract_id: PUBMED:37349738 Decision-to-delivery interval and neonatal outcomes in intrapartum umbilical cord prolapse. Background: Rapid delivery is important in cases of umbilical cord prolapse to prevent hypoxic injury to the fetus/neonate. However, the optimal decision-to-delivery interval remains controversial. Objective: The aim of the study was to investigate the association between the decision-to-delivery interval in women with umbilical cord prolapse, stratified by fetal heart rate pattern at diagnosis, and neonatal outcome. Study Design: The database of a tertiary medical center was retrospectively searched for all cases of intrapartum cord prolapse between 2008 and 2021. The cohort was divided into three groups according to findings on the fetal heart tracing at diagnosis: 1) bradycardia; 2) decelerations without bradycardia; and 3) reassuring heart rate. The primary outcome measure was fetal acidosis. The correlation between cord blood indices and decision-to-delivery interval was analyzed using Spearman's rank correlation coefficient. Results: Of the total 103,917 deliveries performed during the study period, 130 (0.13%) were complicated by intrapartum umbilical cord prolapse. Division by fetal heart tracing yielded 22 women (16.92%) in group 1, 41 (31.53%) in group 2, and 67 (51.53%) in group 3. The median decision-to-delivery interval was 11.0 min (IQR 9.0-15.0); the interval was more than 20 min in 4 cases. The median cord arterial blood pH was 7.28 (IQR 7.24-7.32); pH was less than 7.2 in 4 neonates. 
There was no correlation of cord arterial pH with decision-to-delivery interval (Spearman's ρ = -0.113; P = 0.368) or with fetal heart rate pattern (Spearman's ρ = .425; P = .079, ρ = -.205; P = .336, ρ = -.324; P = .122 for groups 1-3, respectively). Conclusion: Intrapartum umbilical cord prolapse is a relatively rare obstetric emergency with an overall favorable neonatal outcome if managed in a timely manner, regardless of the immediately preceding fetal heart rate. In a clinical setting which includes a high obstetric volume and a rapid, protocol-based response, there is apparently no significant correlation between decision-to-delivery interval and cord arterial pH. abstract_id: PUBMED:7445828 Prolapse of umbilical cord - new aspects (author's transl) In an evaluation of more than 20,000 births, 39 recorded cases of prolapse of the umbilical cord were investigated in greater detail for the risk of perinatal mortality and morbidity. Particular attention was given to several variants of breech presentation in comparison to vertex presentations as well as to the time of rupture. The interval between diagnosis of prolapse of the umbilical cord and termination of birth was also in the focus of interest. The authors concluded that the overall definition of "breech presentation" was quite inadequate and should be replaced, at least, by differentiation between real breech presentation and combined breech-footling presentations, in order to make allowance for the highly differentiated risks involved. Therefore, a demand is made for strict differentiation of approach in keeping with presentation. Delivery should be terminated immediately in cases of prolapse of the umbilical cord in concomitance with real breech presentation or vertex presentation, whereas no direct time pressure existed in cases of combined breech-footling presentation. The approach taken to all kinds of breech presentation, above all real breech presentation, should be identical with that taken to vertex presentation for adequately programmed delivery which has been a growing demand. All conclusions proposed are discussed in great detail and substantiated as well as supported by comprehensive literature data. abstract_id: PUBMED:12883608 Results of delivery in umbilical cord prolapse. Objective: To review the peripartum clinical course of patients whose pregnancies were complicated by umbilical cord prolapse and to evaluate its impact on neonatal outcome. Methods: All cases of cord prolapse managed in King Khalid University Hospital, Riyadh, Kingdom of Saudi Arabia between 1990-2000 were identified. There were 111 patients identified among 55,789 deliveries. Each maternal and fetal chart was reviewed for parity, age, gestational age, fetal presentation, status of membranes, time from diagnosis to delivery, mode of delivery, baby weight, Apgar scores and cord blood hydrogen ion concentration (pH). The data collected was analyzed using Gold Stat Software Package, and statistical significance was established by using analysis of variance and Chi-square. Results: The incidence of cord prolapse was found to be one in 503 cases (1.99 per thousand deliveries) in our study. Seventy-two (64.9%) of the fetuses were in vertex presentation and 39 (35.1%) were non-vertex, including breech and transverse presentations. Ninety one point nine percent were singletons and 8% were twins. At the time of diagnosis in 15 (13.5%) membranes were artificially ruptured and in 96 (86.5%), they were spontaneously ruptured.
The cervix was fully dilated in 10% and minimally dilated in 100 (90%). Regarding mode of delivery, 7 (6.5%) were vaginal deliveries and 104 (93.5%) were cesarean sections. The interval from diagnosis to delivery ranged from 10 minutes to >20 minutes. Six (5.4%) of the babies were delivered in 10 minutes, 49 (44.1%) in 20 minutes and 56 (50.5%) in more than 20 minutes. Apgar score was less than 7 in 44 (39.6%) of the babies at one minute and in 5 (4.5%) of the babies at 5 minutes. Cord PH was less than 7 in 2 (1.8%) cases and more than 7 in 109 (98.2%). Forty-one (36.9%) of the babies were admitted in neonatal intensive care unit. There was no perinatal mortality in our study group. Conclusion: In our review, we found that cord prolapse is not associated with higher rates of perinatal mortality or morbidity and our study supports clinical management of cord prolapse by cesarean section. The interval from diagnosis to delivery may not be the only determinant of neonatal outcome. abstract_id: PUBMED:32856717 Umbilical Cord Prolapse: A Review of the Literature. Importance: Umbilical cord prolapse is a rare occurrence and is a life-threatening emergency for the fetus. These events are unpredictable and unpreventable. Umbilical cord prolapse requires swift diagnosis and management for optimal outcome. Objective: The aim of this review is to describe the incidence, risk factor, pathophysiology, diagnosis, and management of this rare but potentially life-threatening event. Evidence Acquisition: A PubMed, Web of Science, and CINAHL search was undertaken with no limitations on the number of years searched. Results: There were 200 articles identified, with 53 being the basis of review. Multiple risk factors for a umbilical cord prolapse have been suggested including fetal malpresentation or abnormal lie, prematurity, multifetal gestation, and polyhydramnios. The diagnosis is largely made by examination and found after rupture of membranes, and most often, examination is prompted by fetal heart rate decelerations. The management of umbilical cord prolapse is expedited delivery; however, there are rare specific scenarios in which immediate delivery is not possible and efforts should be made to relieve cord compression. Conclusions: Rapid identification of an umbilical cord prolapse facilitates management and increases likelihood of an optimal outcome. The management is an expedited delivery with efforts to relieve cord compression until delivery can be achieved. Relevance: Umbilical cord prolapse is a rare but a life-threatening obstetrical emergency. abstract_id: PUBMED:20609529 Management of umbilical cord prolapse and neonatal outcomes Objective: To evaluate the obstetrical management of umbilical cord prolapse and the neonatal outcomes. Methods: Retrospective study of 57 prolapses of umbilical cord between 1998 and 2009. Arterial pH of umbilical cord, Apgar score and diagnosis delivery time (DDT) were analyzed. Results: The incidence of the cord prolapse was of 1.25 for 1000 deliveries. Cord prolapse occurred with the artificial rupture of membranes in 24 cases (42%) out of 57. There were 48 caesarean births. There were three hydramnios and seven cases of twin pregnancy. The mean pH in the umbilical arteries was 7.15 ± 0.13 in 27 cases. The mean Apgar for the 57 newborns was 6 ± 3 at 1 min and 8 ± 3 at 5 min. The mean DDT was 18 ± 8 min (range: 3-44). In 17 cases out of 27, the mean arterial umbilical pH was 7.07 ± 0.09. 
Fifteen newborns (26%) had a 5-minute Apgar score less than 7 and were admitted to the intensive care unit. The mean Apgar score in the nine vaginal deliveries was 8 ± 4 min. In cases of cephalic presentation without associated foetal or maternal pathologies, there was a tendency toward a better pH when the DDT was shorter. In non-cephalic presentations (14 cases), the mean Apgar score was 8 ± 3 at 5 min. The mean pH measured in eight cases was 7.20 ± 0.13 with mean DDT of 20 minutes. Conclusion: The umbilical cord prolapse remains a serious event for the newborns. The reduction of the DDT in cephalic presentation seems to be correlated with a better neonatal state. Caesarean section is the preferred mode of delivery. Answer: The time from diagnosis to delivery in cases of umbilical cord prolapse is a critical factor that has been extensively studied to understand its impact on neonatal outcomes. Several studies have investigated this relationship, with varying conclusions. One study found that despite emergency delivery in a modern, high-risk obstetric unit, umbilical cord prolapse continues to be associated with poor perinatal outcomes in some cases. The asphyxiated neonates in this study had a shorter-than-average time from diagnosis to delivery, suggesting that the time from diagnosis to delivery may not be the only critical determinant of neonatal outcome, particularly with frank cord prolapse (PUBMED:9513874). Another study indicated that the Diagnosis to Delivery Interval (DDI) may not be the sole determinant of neonatal outcome. Most neonates with poor Apgar scores had a DDI within the average time, and artificial rupture of membranes should be performed cautiously with preexisting CTG trace abnormalities. In-utero resuscitative measures may help reduce further cord compression and improve outcome (PUBMED:17990422). A retrospective study found a significant correlation between bradycardia-to-delivery interval and cord arterial pH in umbilical cord prolapse with fetal bradycardia but not in cases with decelerations or normal heart rate. The drop of cord arterial pH is rapid, and urgent delivery is essential in such situations (PUBMED:32862427). Another study concluded that in a clinical setting with a high obstetric volume and a rapid, protocol-based response, there is apparently no significant correlation between decision-to-delivery interval and cord arterial pH (PUBMED:37349738). Lastly, a review of peripartum clinical courses of patients with umbilical cord prolapse found that the condition is not associated with higher rates of perinatal mortality or morbidity, supporting clinical management by cesarean section. The interval from diagnosis to delivery may not be the only determinant of neonatal outcome (PUBMED:12883608). In summary, while rapid delivery following the diagnosis of umbilical cord prolapse is generally considered important to prevent hypoxic injury to the fetus, the time from diagnosis to delivery may not be the only factor influencing neonatal outcomes. Other factors, such as fetal heart rate pattern at diagnosis, in-utero resuscitative measures, and the clinical setting, may also play significant roles.
Instruction: Travelers with immune-mediated inflammatory diseases: are they different? Abstracts: abstract_id: PUBMED:34458071 Common Dermatologic Conditions in Returning Travelers. Purpose Of Review: Travel medicine practitioners often are confronted with returning travelers with dermatologic disorders that could be of infectious causes or inflammatory or allergic. Some dermatologic processes are the result of exposure to insects or acquired due to environmental exposures. There is a broad range of dermatosis of infectious and non-infectious etiologies that clinicians need to consider in the differential diagnosis of dermatosis in travelers. Recent Findings: With increasing international travel to tropical destinations, many individuals may be exposed to rickettsia (i.e., African tick bite fever, scrub typhus, or Mediterranean spotted fever), parasitic infections (i.e., cutaneous larva migrans, cutaneous leishmaniasis, African trypanosomiasis, or American trypanosomiasis), viral infections (i.e., measles or Zika virus infection), bacterial (i.e., Buruli ulcer) or ectoparasites (scabies or tungiasis), and myiasis. Cutaneous lesions provide clinical clues to the diagnosis of specific exposures during travel among returned travelers. Summary: Dermatologic disorders represent the third most common health problem in returned travelers, after gastrointestinal and respiratory illness. Many of these conditions may pose a risk of severe complications if there is any delay in diagnosis. Therefore, clinicians caring for travelers need to become familiar with the most frequent infectious and non-infectious skin disorders in travelers. abstract_id: PUBMED:32212152 Eye diseases in travelers. Travelling has been growing in popularity over the last several decades. Eye diseases, e.g. decreased visual acuity, inflammatory or degenerative lesions, chronic diseases or eye trauma, affect all groups of travelers. The main risk factors contributing to the manifestation or exacerbation of common ocular diseases include exposure to dry air (inside the airplane cabin or in air-conditioned hotel rooms), exposure to chlorinated or salty water (swimming/bathing in swimming pools or in the sea), and sudden changes in the weather conditions. In addition, travelers to tropical destinations are at risk of ocular diseases which are rarely seen in temperate climate, e.g. onchocerciasis, loiasis, gnatostomosis, African trypanosomosis, or trachoma. The most common condition of the eye seen in travelers is conjunctivitis; it may be either of cosmopolitan (bacterial or viral infections, allergic inflammation) or tropical etiology, e.g. arboviral infections (zika, chikungunya). Given the fact that a large proportion of the general population have decreased visual acuity and many of them wear contact lenses rather than glasses, keratitis has become a common health problem among travelers as well; the major risk factors in such cases include sleeping in contact lenses, prolonged exposure to air-conditioning, working with a computer or swimming/bathing in microbiologically contaminated water (e.g. Acanthoamoeba protozoa). Conditions affecting the cornea, conjunctiva or lens may also occur due to excessive exposure to solar radiation, especially if travelers wear glasses without a UV protection. abstract_id: PUBMED:25528864 Travelers with immune-mediated inflammatory diseases: are they different? 
Background: Patients with immune-mediated inflammatory diseases (IMIDs) increasingly benefit from improved health due to new therapeutic regimens allowing increasing numbers of such patients to travel overseas. This study aims to assess the proportion of IMID travelers seeking advice at the Travel Clinic of the University of Zurich, Switzerland, and to determine whether demographics, travel, and vaccination patterns differ between IMID- and non-IMID travelers. Methods: Pre-travel visits and differences between IMID- and non-IMID travelers were assessed; logistic regression was used to adjust for confounders. Results: Among 22,584 travelers who visited the Zurich Travel Clinic in a 25-month period, 1.8% suffered from an IMID, with gastroenterological and rheumatic conditions being the most common; 34.2% were using immunosuppressive or immunomodulatory medication. The reasons for travel and the destinations did not differ between IMID- and non-IMID travelers, Thailand and India being the most common destinations. IMID travelers stayed less often for longer than 1 month abroad and traveled less frequently on a low budget. Inactivated vaccines were similarly administered to both groups, while live vaccines were given half as often to IMID travelers. Conclusions: The increasing numbers of IMID patients, many using immunosuppressive or immunomodulatory therapy, show similar travel patterns as non-IMID travelers. Thus, they are exposed to the same travel health risks, vaccine-preventable infections being one among them. Particularly, in view of the fact that live attenuated vaccines are less often administered to IMID patients more data are needed on the safety and immunogenicity of vaccines and on travel-specific risks to be able to offer evidence-based pre-travel health advice. abstract_id: PUBMED:30922527 Tungiasis, a rare case of plantar inflammatory disease, a review of travelers skin lesions for emergency providers. Parasitic infections while common in underdeveloped nations are rarely seen in developed urban centers. We report a case of a thirty-three-year-old male with no past medical history who presented to the emergency department with a chief complaint of "eggs coming out of my foot" after returning home from Brazil. Based on clinical presentation, travel history, and appearance of the lesion, diagnosis was most consistent with tungiasis infection which was confirmed by the pathology examination. It is important to make the appropriate diagnosis when skin lesions are found in returning travelers and emergency providers should take broad differential diagnosis into consideration. abstract_id: PUBMED:29983013 Sleep and immune system Sleep is a process that occupies one third part of the life of the human being, and it is essential in order for the individual to be able to maintain body homeostasis. It emerges as an important regulator of the immune system since, during sleep, the necessary functions to maintain its balance are carried out. On the other hand, decreased sleep has deleterious effects that alter the metabolism and produce an increase in the secretion of C-reactive protein, interleukin (IL)-6 and tumor necrosis factor (TNF). These cytokines activate NF-κB; therefore, sleep disturbance can be a risk factor for the development of chronic inflammatory and metabolic diseases. Pro-inflammatory cytokines IL-1, IL-6 and TNF increase non-rapid eye movement sleep, whereas anti-inflammatory cytokines such as IL-4 and IL-10 decrease it. 
Sleep can modify the immune system function by inducing changes in the hypothalamus-pituitary-adrenal axis and the sympathetic nervous system. In turn, the circadian rhythm of hormones such as cortisol and adrenaline, which have a nocturnal decrease, favors different activities of the immune system. The purpose of the present review is to address different aspects of sleep and their relationship with the immune system. abstract_id: PUBMED:22981182 Acute pulmonary schistosomiasis in travelers: case report and review of the literature. We report the case of an American traveler who developed acute pulmonary schistosomiasis after swimming in a lake in Madagascar, and we review the literature on acute pulmonary schistosomiasis. Schistosomiasis is one of the world's most prevalent parasitic diseases, with three species (Schistosoma mansoni, Schistosoma haematobium and Schistosoma japonicum) causing the greatest burden of disease. Pulmonary manifestations may develop in infected travelers from non-endemic areas after their first exposure. The pathophysiology of acute pulmonary disease is not well-understood, but is related to immune response, particularly via inflammatory cytokines. Diagnosis of schistosomiasis may be either through identification of characteristic ova in urine or stool or through serology. Anti-inflammatory drugs can provide symptomatic relief; praziquantel, the mainstay of chronic schistosomiasis treatment, is likely not effective against acute disease; the only reliable prevention remains avoidance of contaminated freshwater in endemic areas, as there is no vaccine. Travelers who have been exposed to potentially contaminated freshwater in endemic areas should seek testing and, if infected, treatment, in order to avoid severe manifestations of acute schistosomiasis and prevent complications of chronic disease. Clinicians are reminded to elicit a detailed travel and exposure history from their patients. abstract_id: PUBMED:35732819 Effects of helminths on the human immune response and the microbiome. Helminths have evolved sophisticated immune regulating mechanisms to prevent rejection by their mammalian host. Our understanding of how the human immune system responds to these parasites remains poor compared to mouse models of infection and this limits our ability to develop vaccines as well as harness their unique properties as therapeutic strategies against inflammatory disorders. Here, we review how recent studies on human challenge infections, self-infected individuals, travelers, and endemic populations have improved our understanding of human type 2 immunity and its effects on the microbiome. The heterogeneity of responses between individuals and the limited access to tissue samples beyond the peripheral blood are challenges that limit human studies on helminths, but also provide opportunities to transform our understanding of human immunology. Organoids and single-cell sequencing are exciting new tools for immunological analysis that may aid this pursuit. Learning about the genetic and immunological basis of resistance, tolerance, and pathogenesis to helminth infections may thus uncover mechanisms that can be utilized for therapeutic purposes. abstract_id: PUBMED:16696213 Obesity and immune function The perspective of obesity as a low grade systemic inflammatory condition has triggered a new interest on the many overlapping areas between this pathology and the immune system. 
White adipose tissue production of proteins related to immune function has shown that many of these adipokines are implicated in the etiopathogenesis of some of the major metabolic diseases such as diabetes, hypertension, and cardiovascular disease, which share with obesity an important role in the Metabolic Syndrome. In addition, dysregulation of the immune system may be present due to a dysregulation of the factors produced by adipose tissue. Weight loss through diet or surgery has proved to be beneficial for the recovery of the physiological levels of some of these pro-inflammatory molecules, but further studies are needed to clarify to what extent risk reduction can be pursued in this way. abstract_id: PUBMED:2693882 Travelers' diarrhea Travelers' diarrhea is an acute infection of the gastrointestinal tract which has been known for many years, but which only recently has acquired epidemiologic and economic importance. The etiology is multiple and includes bacteria, viruses and protozoa, but the most frequently found agent is enterotoxigenic E. coli (ETEC). The symptomatology is characterized by acute watery diarrhea, without mucus, inflammatory cells or blood, and usually without fever. The syndrome is normally self-limiting without any specific antibiotic therapy, while antidiarrheal agents reduce the severity and duration of symptoms. Antimicrobial therapy is indicated in persistent forms only after ascertainment of the etiologic agent. abstract_id: PUBMED:16628572 Fecal cytokines and markers of intestinal inflammation in international travelers with diarrhea due to Noroviruses. The intestinal immune and inflammatory responses to Norovirus (NoV) are poorly defined. The objective of this study was to investigate fecal cytokine and lactoferrin profiles in response to NoV gastroenteritis in travelers. Both fecal cytokines and fecal lactoferrin were measured for NoV-associated diarrhea (N = 7), mixed infection of NoV and enterotoxigenic E. coli (ETEC)-associated diarrhea (N = 10) and in pathogen-negative diarrhea cases (N = 19). Both IL-2 and IFN-gamma were significantly increased in NoV-associated diarrhea specimens, suggesting a predominant Th1 immune response to NoV infection in the gut. When a mixed infection of NoV and ETEC occurred, a combined Th1/Th2 response was observed suggesting a dual immune response secondary to infection by both pathogens. Intestinal inflammation associated with increased fecal lactoferrin, important in bacterial enteric infection, was not found in NoV-associated gastroenteritis. Answer: Travelers with immune-mediated inflammatory diseases (IMIDs) do exhibit some differences compared to non-IMID travelers. According to a study assessing the proportion of IMID travelers seeking advice at the Travel Clinic of the University of Zurich, Switzerland, it was found that 1.8% of the 22,584 travelers who visited the clinic over a 25-month period suffered from an IMID, with gastroenterological and rheumatic conditions being the most common. Notably, 34.2% of these IMID travelers were using immunosuppressive or immunomodulatory medication (PUBMED:25528864).
When it came to vaccinations, inactivated vaccines were administered similarly to both groups, but live vaccines were given half as often to IMID travelers, likely due to concerns about the safety of live vaccines in immunocompromised individuals (PUBMED:25528864). These findings suggest that while IMID patients travel with similar patterns as non-IMID travelers, their condition and the medications they take may influence the type of pre-travel health advice and vaccinations they receive. Given that IMID patients are exposed to the same travel health risks, including vaccine-preventable infections, there is a need for more data on the safety and immunogenicity of vaccines and on travel-specific risks to provide evidence-based pre-travel health advice to this particular group of travelers (PUBMED:25528864).
Instruction: Does self-reported sleep quality predict poor cognitive performance among elderly living in elderly homes? Abstracts: abstract_id: PUBMED:23621835 Does self-reported sleep quality predict poor cognitive performance among elderly living in elderly homes? Objectives: Sleep complaints are common among elderly, especially institutionalized elderly, as they experience poorer sleep quality and higher use of sedative hypnotics, when compared to community-dwelling elderly. Recent findings suggest that there may be a relationship between poor quality of sleep and cognitive deficits. This study aimed at studying the relation between sleep quality and cognitive performance in older adults living in elderly homes. Method: 100 elderly living in an elderly home in El Mansoura, Egypt, were recruited in this study, 50 cases with subjective poor quality of sleep and 50 controls with subjective good quality of sleep as assessed by Pittsburgh sleep quality index (PSQI). Each participant went through comprehensive geriatric assessment (CGA), including geriatric depression scale (GDS), assessment of cognitive function by mini mental state examination (MMSE). Results: 52% of poor sleepers showed impaired MMSE, while only 24% of good sleepers had impaired MMSE. Both orientation and (attention and calculation) were more affected (P = 0.027 and 0.035, respectively). Linear correlation coefficient between PSQI and different variables revealed significant negative correlation with total MMSE score, attention and calculation. Conclusion: Poor quality of sleep is related to cognitive impairment among elderly living in elderly homes and this problem should be taken in consideration among this group of elders. abstract_id: PUBMED:36091504 Association of sleep quality and nap duration with cognitive frailty among older adults living in nursing homes. Background: Sleep status, including sleep quality and nap duration, may be associated with frailty and cognitive impairment in older adults. Older adults living in nursing homes may be more prone to physical and cognitive frailties. This study aimed to investigate the association between sleep quality and nap duration, and cognitive frailty among older adults living in nursing homes. Methods: This study included 1,206 older adults aged ≥ 60 years from nursing homes in Hunan province, China. A simple frailty questionnaire (FRAIL scale) was used and Mini-Mental State Examination was conducted to assess physical frailty and cognitive impairment, respectively, to confirm cognitive frailty. The Pittsburgh Sleep Quality Index was used to assess the sleep quality. Nap duration was classified as follows: no, short (≤30 min), and long (>30 min) napping. Multinomial logistic regression was conducted to estimate the odds ratio (OR) and 95% confidence interval (CI). Results: The prevalence of cognitive frailty among the older adults in nursing homes was 17.5%. Approximately 60.9% of the older adults had a poor sleep quality. Among the 1,206 participants, 43.9% did not take naps, 29.1% had short naps, and 26.9% had long naps. After adjusting for all covariates, poor sleep quality (OR 2.53; 95% CI 1.78-3.59; P < 0.001) and long nap duration (OR 1.77; 95% CI 1.19-2.64; P = 0.003) were associated with higher odds of cognitive frailty, but short nap duration (OR 0.60; 95% CI 0.40-0.89; P = 0.012) was associated with low prevalence of cognitive frailty. 
Conclusion: Poor sleep quality and long nap duration are significantly associated with high risk of cognitive frailty among the older adults in nursing homes. Short nap duration was associated with low prevalence of cognitive frailty. However, these associations require further validation in older adults. Clinical Trial Registration: https://osf.io/57hv8. abstract_id: PUBMED:26689628 Subjective memory complaints in an elderly population with poor sleep quality. Objectives: The association between sleep disturbances and cognitive decline in the elderly has been putative and controversial. We evaluated the relation between subjective sleep quality and cognitive function in the Korean elderly. Method: Among 459 community-dwelling subjects, 352 subjects without depression or neurologic disorders (mean age 68.2 ± 6.1) were analyzed in this study. All the participants completed the Korean version of the consortium to establish a registry for Alzheimer's disease neuropsychological battery (CERAD-KN) as an objective cognitive measure and subjective memory complaints questionnaire (SMCQ). Based on the Pittsburgh sleep quality index, two types of sleepers were defined: 'good sleepers' and 'poor sleepers'. Results: There were 192 good sleepers (92 men) and 160 poor sleepers (51 men). Poor sleepers reported more depressive symptoms and more use of sleep medication, and showed higher SMCQ scores than good sleepers, but there was no difference in any assessments of CERAD-KN. In the regression analysis, depressive symptoms and subjective sleep quality were associated with subjective memory complaints (β = 0.312, p < 0.001; β = 0.163, p = 0.005). Conclusion: In the elderly without depression, poor sleep quality was associated with subjective memory complaints, but not with objective cognitive measures. As subjective memory complaints might develop into cognitive disorders, poor sleep quality in the elderly needs to be adequately controlled. abstract_id: PUBMED:36636291 Activities of Daily Living and Depression in Chinese Elderly of Nursing Homes: A Mediation Analysis. Purpose: This study aimed to explore the role of sleep quality as a mediator between activities of daily living (ADLs) and depression. Patients And Methods: Participants (N=645; age ≥ 60) were recruited from six nursing homes in Weifang, Shandong Province, using convenience sampling. Participants completed questionnaires to assess sleep quality, ADLs, and depression. Depression was assessed by the Patient Health Questionnaire (PHQ-9), ADLs were assessed by the Barthel Index (BI), and sleep quality was measured by the Pittsburgh Sleep Quality Index (PSQI). Mediation analysis was carried out by SPSS PROCESS. Results: ADLs (r=0.449, P<0.01) and sleep quality (r=0.450, P<0.01) were found to be positively associated with depression among the elderly. Sleep quality plays a significant mediating role in the influence of ADLs on depression in the elderly in nursing homes (Bootstrap 95% CI [0.076, 0.139]). The pathway from ADLs to sleep quality to depression yielded a medium effect size of 20.23%. Conclusion: ADLs help to explain how sleep quality partly mediates depression among the elderly in nursing homes. It is therefore recommended that timely detection and efficient interventions should focus on promoting physical function and improving sleep quality among the elderly in nursing homes. abstract_id: PUBMED:27410171 Depressive symptoms moderate the relationship between sleep quality and cognitive functions among the elderly.
Objective: The co-occurrence of sleep problems, cognitive impairment, and depression among the elderly suggests that these three conditions are likely to be interrelated. Recent findings suggest that depressive symptoms moderate the relationship between sleep problems and cognitive impairment in elderly people but methodological problems have led to inconsistent conclusions. The present study aims to better understand the relationship between sleep quality, depressive symptoms, and cognitive function. Method: We administered the Repeatable Battery for the Assessment of Neuropsychological Status and self-report measures of sleep quality and depression to 380 elderly participants (mean age = 68 years, SD = 5.7). Bootstrapped moderation analyses were conducted to examine the role of depressive symptoms in the relationship between sleep and various aspects of cognitive function. Results: This moderation effect was significant in the domains of delayed memory (ΔR² = .01, F = 4.5, p = .04), language (ΔR² = .01, F = 4.6, p = .035), and general cognitive status (ΔR² = .01, F = 5.3, p = .02). However, unlike previous studies, higher sleep quality corresponded to better outcomes in delayed memory, language abilities, and general cognitive status in participants with low levels of depressive symptoms. No significant relationship between sleep quality and any cognitive function was observed among participants with high levels of depressive symptoms. Conclusions: Among individuals who reported low levels of depressive symptoms, sleep quality was positively related to cognitive performance in the domains of delayed recall, language, and general cognitive status. However, sleep quality was not significantly associated with cognitive abilities in these domains among participants with elevated levels of depressive symptoms; participants had relatively poor outcomes in these cognitive domains regardless of their sleep quality. abstract_id: PUBMED:34980052 Poor sleep quality is negatively associated with low cognitive performance in general population independent of self-reported sleep disordered breathing. Background: Sleep disordered breathing (SDB) plays a significant role in both sleep quality and cognition, and whether it has an impact on the relationship between these two factors remains unclear. The study aimed to explore the association between sleep quality and cognitive performance in the general population by considering the influence of sleep disordered breathing (SDB). Methods: In this cross-sectional study, we enrolled subjects aged ≥ 18 years using a multi-stage random sampling method. Cognitive status was assessed using the Mini Mental State Examination (MMSE) questionnaire, sleep quality using the Pittsburgh Sleep Quality Index (PSQI), and SDB using the No-SAS scale. Multi-variable logistic regression was applied to examine the association of sleep quality and cognitive performance. Subgroup analyses were performed in different age groups, and in those with and without SDB. Results: Finally, 30,872 participants aged 47.5 ± 13.8 years with 53.5% women were enrolled, of whom 32.4% had poor sleep quality and 18.6% had low cognitive performance. Compared with good sleepers, subjects with poor sleep quality exhibited significantly higher presence of low cognitive performance (23.7% vs 16.2%, P < 0.001).
Poor sleepers revealed 1.26 (95%CI: 1.16,1.36), 1.26 (1.08,1.46) and 1.25 (1.14,1.37) fold odds for low cognitive performance in general population and in subjects with and without self-reported SDB respectively. Stratified by age and SDB, the association was observed in young and middle-aged group without SDB (OR = 1.44, 95%CI: 1.30,1.59) and in the elderly group with SDB (OR = 1.30, 95%CI: 1.07,1.58). Conclusions: Sleep quality is in a negative association with cognitive performance in general population independent of SDB, implying improvement of sleep disturbances is a potential objective of intervention strategies for cognitive protection at population level. abstract_id: PUBMED:35673685 Do Cognitively Impaired Elderly Patients with Cancer Respond Differently on Self-reported Symptom Scores? A 5-Year Retrospective Analysis. Objectives: An increasing number of elderly subjects with cancer were admitted to the palliative care unit and they have suffered both distressing symptoms and cognitive impairment. We aim to identify the prevalence of cognitive impairment among elderly cancer patients receiving in-patient palliative care and to examine any difference between patients with cognitive impairment on self-reported symptoms. Materials And Methods: Subjects' age ≥65 admitted to a palliative care unit from 01 September 2015 to 31 August 2020 was included in the study. Exclusion criteria were those with an impaired conscious state, severe cognitive impairment, or language problems that were non-communicable. Variables collected included baseline demographics, cancer diagnosis, cancer stage, mobility state using the modified Barthel index (mBI), and performance status as measured by the palliative performance scale. Cognitive impairment was defined by abbreviated mental test ≤6. Self-reported symptoms scales were measured by the Chinese version of MD Anderson Symptom Inventory and EORTC QLQ C-30 (European Organisation for Research and Treatment of Cancer, Quality of Life Core Questionnaire 30). Results: Nine hundred and ninety-one subjects with 1174 admissions were retrieved. Eight hundred and seventy-three admission episodes were included in this study. Three hundred and eight (35%) have cognitive impairment. Cognitively impaired subjects were older, showed worse physical function and performance status, and more often residing in old age homes. Independent predictors of cognitive impairment were age (OR 1.09), mBI (OR 0.96), chair/bed bound state (OR 1.79), and presence of brain metastasis (OR 2.63). They reported lower scores in pain (P < 0.001), distress (P < 0.001), sleep disturbance (P < 0.001) and nausea and vomiting (P = 0.012) in the self-reported symptoms scale. Conclusion: Elderly cancer patients with cognitive impairment were older with poorer performance status. They have reported a lower level of pain, distress, and sleep disturbance. Clinicians should be alerted to this phenomenon to tackle the unmet concomitant symptoms. abstract_id: PUBMED:30430592 Effects of Physical Activity Program on cognitive function and sleep quality in elderly with mild cognitive impairment: A randomized controlled trial. Aim: The aim of this study is to determine the effect of a 20-week Physical Activity Program for elderly individuals with mild cognitive impairment (MCI) on their cognitive functions and sleep quality. Methods: A randomized controlled trial research design was used in this experimental pretest-posttest study. The data were analyzed using SPSS 21.0. 
Results: This study showed that the Physical Activity Program improved the cognitive functions and sleep quality of elderly individuals (p < 0.05). Conclusion And Practice Implications: The findings of this study showed that the cognitive functions and sleep quality of elderly individuals improved thanks to a 20-week Physical Activity Program. It is recommended that physical activities should be included in nursing interventions for elderly people with MCI. abstract_id: PUBMED:29213496 Effects of a cognitive training program and sleep hygiene for executive functions and sleep quality in healthy elderly. Introduction: The aging process causes changes in the sleep-wake cycle and cognition, especially executive functions. Interventions are required to minimize the impact of the losses caused by the aging process. Objective: To evaluate the effects of a cognitive training program and psychoeducation on sleep hygiene techniques for executive functions and sleep quality in healthy elderly. Methods: The participants were 41 healthy elderly randomized into four groups ([CG] control group, cognitive training group [CTG], sleep hygiene group [SHG] and cognitive training and hygiene group [THG]). The study was conducted in three stages:1st - assessment of cognition and sleep;2nd - specific intervention for each group;3rd - post-intervention assessment. Results: The results showed that the CTG had significant improvements in cognitive flexibility tasks, planning, verbal fluency and episodic memory, gains in sleep quality and decreased excessive daytime sleepiness. The SHG also had improved sleep quality, excessive daytime sleepiness and significant improvements in insights, planning, attention and episodic memory. The THG had significant gains in cognitive flexibility, problem solving, verbal fluency, attention and episodic memory. Conclusion: Cognitive training and sleep hygiene interventions were useful strategies for improving cognitive performance and sleep quality of healthy elderly, but there was no evidence that sessions combining cognitive training and psychoeducation on sleep hygiene enhanced the gains provided by these interventions applied individually. abstract_id: PUBMED:36981847 Associations between Sleep Quality, Frailty, and Quality of Life among Older Adults in Community and Nursing Home Settings. Poor sleep quality is prevalent among older adults, but limited data document associations between frailty and quality of life comparing individuals living in the community with those in nursing homes. This cross-sectional study (conducted between August and November 2019) included 831 older adults (mean age 76.5 years) from Slovenia's community and nursing home settings. The results showed comorbidity in 38% of community-dwelling older adults and 31% of older adults in nursing homes. The prevalence of frailty among community-dwelling older adults was 36.5%, and among older adults in a nursing home was 58.5%. A total of 76% of community-dwelling older adults and 95.8% of nursing home residents reported poor sleep quality. Sleep quality and frailty predict 42.3% of the total variability of quality of life for older adults in nursing homes and 34.8% for community-dwelling older adults. The study's results indicate that the quality of life can be affected by factors (e.g., worse sleep quality and frailty) among older adults, regardless of being a resident or from the community. 
Understanding how sleep quality is affected by social, environmental, and biological factors can help improve sleep quality and potentially the quality of life of older adults. Answer: Yes, self-reported sleep quality does predict poor cognitive performance among elderly living in elderly homes. Studies have found that poor sleep quality is associated with cognitive impairment in this population. For instance, a study involving 100 elderly individuals living in an elderly home in El Mansoura, Egypt, found that 52% of those with subjective poor quality of sleep showed impaired cognitive performance on the Mini-Mental State Examination (MMSE), compared to only 24% of those with subjective good quality of sleep. The study also found a significant negative correlation between the Pittsburgh Sleep Quality Index (PSQI) scores and total MMSE scores, suggesting that poorer sleep quality is related to worse cognitive performance (PUBMED:23621835). Another study with 1,206 older adults from nursing homes in Hunan province, China, reported that poor sleep quality and long nap duration were associated with higher odds of cognitive frailty, while short nap duration was associated with a lower prevalence of cognitive frailty (PUBMED:36091504). This indicates that not only the quality of nighttime sleep but also the duration of daytime naps can influence cognitive health in elderly nursing home residents. Furthermore, research has shown that poor sleep quality is negatively associated with cognitive performance in the general population, independent of self-reported sleep-disordered breathing (PUBMED:34980052). This suggests that interventions aimed at improving sleep disturbances could be beneficial for cognitive protection at the population level, including among elderly individuals in nursing homes. In summary, self-reported sleep quality is a predictor of poor cognitive performance among the elderly living in elderly homes, and addressing sleep quality issues may be an important aspect of maintaining cognitive health in this population.
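As an illustration of the kind of analysis summarised in the answer above, the sketch below shows how a rank correlation between PSQI and MMSE scores and an unadjusted odds ratio for cognitive impairment in poor versus good sleepers could be computed. The data, variable names, and cut-offs (PSQI > 5 for a "poor sleeper", MMSE < 24 for impairment, both commonly used conventions) are illustrative assumptions, not values taken from the cited studies.

```python
# Hypothetical data: higher PSQI = worse sleep, higher MMSE = better cognition.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
psqi = rng.integers(0, 22, size=n)                               # global PSQI score (0-21)
mmse = np.clip(28 - 0.3 * psqi + rng.normal(0, 2, size=n), 0, 30)

df = pd.DataFrame({"psqi": psqi, "mmse": mmse})
df["poor_sleep"] = (df["psqi"] > 5).astype(int)     # assumed cut-off for "poor sleeper"
df["cog_impaired"] = (df["mmse"] < 24).astype(int)  # assumed cut-off for impairment

# (1) Rank correlation between sleep quality and cognition
rho, p = spearmanr(df["psqi"], df["mmse"])
print(f"Spearman rho(PSQI, MMSE) = {rho:.2f} (p = {p:.3g})")

# (2) Unadjusted odds of cognitive impairment for poor vs. good sleepers
X = sm.add_constant(df[["poor_sleep"]])
fit = sm.Logit(df["cog_impaired"], X).fit(disp=0)
print("Odds ratio (poor vs. good sleep):", round(float(np.exp(fit.params["poor_sleep"])), 2))
```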
Instruction: Does attendance at an immediate life support course influence nurses' skill deployment during cardiac arrest? Abstracts: abstract_id: PUBMED:15246583 Does attendance at an immediate life support course influence nurses' skill deployment during cardiac arrest? Objective: To determine if attendance at a Resuscitation Council (UK) immediate life support (ILS) course influenced the skill deployment of nurses at a subsequent cardiac arrests. Methods: Data from all cardiac arrests occurring in two 12-month periods (before and 12 months after ILS course implementation) were collected. Semi-structured interviews were conducted with a sample of nurses who had completed ILS training within the past 12 months and who had subsequently attended a cardiac arrest. Results: There were 103 patients defibrillated (after ILS implementation). Only one ward nurse defibrillated prior to the arrival of the crash team. There were 99 laryngeal mask airways (LMAs) inserted during the same period. Ward nurses performed two of these, one with the supervision of the resuscitation officer (RO). The interviews revealed that although many nurses felt confident after the course most felt that as time passed their confidence reduced to such a degree that they would not use their skills without supervision. Attendance at cardiac arrest soon after the course appeared to be a key element in maintaining confidence levels. Conclusion: ILS training alone may be insufficient to increase deployment of these skills by nurses who are not cardiac arrest team members. A more supportive approach, involving individual coaching of these individuals may need to be considered. abstract_id: PUBMED:35001108 Life Support Course for Nurses: beyond competency training. The Life Support Course for Nurses (LSCN) equips nurses with the resuscitation skills needed to be first responders in in-hospital cardiac arrests. Previous published articles on the LSCN were mainly focused on the development of the LSCN in Singapore, as well as nurses' confidence level, defibrillation experience and outcomes, the perceived barriers faced by nurses and the usefulness of the course. This paper highlights the importance of two key learning methodologies in the LSCN: deep learning and reflection. abstract_id: PUBMED:21879212 Life support course for nurses in Singapore. Nurses are usually the first caregivers for cardiac arrest patients in an in-hospital environment, and subsequently partner with doctors in the further resuscitation of patients. The skills of basic life support are crucial for their practice. The Advanced Cardiac Life Support programme is traditionally geared toward training of medical staff in advanced resuscitation skills. The need for a bridging course that focuses on the knowledge and skills required by nurses to become effective members of the resuscitation team has resulted in the creation of the Life Support Course for Nurses (LSCN) in Singapore. The components of the LSCN programme have evolved over the years, taking into consideration the modifications to resuscitation guidelines. The LSCN programme is gradually including a larger proportion of nurses in the emergency and critical care environments as well as those in the general ward. abstract_id: PUBMED:12668295 The immediate life support course. The immediate life support course (ILS) was launched by the Resuscitation Council (UK) in January 2002. 
This multi-professional 1-day resuscitation course teaches the essential knowledge and skills required to manage a patient in cardiac arrest for the short time before the arrival of a cardiac arrest team or other experienced medical assistance. The ILS course also introduces healthcare professionals to the role of a cardiac arrest team member. The course provides the candidate with the knowledge and skills to recognise and treat the acutely ill patient before cardiac arrest, to manage the airway with basic techniques, and to provide rapid, safe defibrillation using either manual or automated external defibrillators (AEDs). The course includes lectures, skill stations and cardiac arrest scenarios. The ILS course has standardised much of the life support training that already takes place in UK hospitals. In 2002, 16547 candidates attended ILS courses in 128 course centres. In this article, we discuss the rationale for, and the development and structure of the ILS course. We also present the first year's results and discuss possible future developments. It is hoped that this course may become established in counties in continental Europe through the European Resuscitation Council. abstract_id: PUBMED:21591423 Immediate Cardiac Life Support (ICLS) course developed by Japanese Association for Acute Medicine The Immediate Cardiac Life Support (ICLS) course was developed and launched by Japanese Association for Acute Medicine (JAAM) for resident training, in April 2002. The ICLS course is designed as multi-professional one-day (8 hours) resuscitation course and teaches the essential skills and team dynamics required to manage a patient in cardiac arrest for 10 minutes before the arrival of a cardiovascular specialist. The course consists of skill stations and scenario stations. The skill stations provide basic life support (BLS) with automated external defibrillator (AED), basic airway management and in-hospital management with electrocardiographic (ECG) monitoring with manual external defibrillator. In total, 117,246 candidates attended 6,971 ICLS courses until the end of December 2010. Furthermore, we developed additional course of ICLS to manage stroke, Immediate Stroke Life Support (ISLS). We also describe the development and structure of, and rationale for the ICLS course. abstract_id: PUBMED:12867310 Use of advanced life support skills. Background: The Advanced Life Support (ALS) Provider Course trains healthcare professionals in a standardised approach to the management of a cardiac arrest. In the setting of limited resources for healthcare training, it is important that courses are fit for purpose in addressing the needs of both the individual and healthcare system. This study investigated the use of ALS skills in clinical practice after training on an ALS course amongst members of the cardiac arrest team compared to first responders. Methods: Questionnaires measuring skill use after an ALS course were distributed to 130 doctors and nurses. Results: 91 replies were returned. Basic life support, basic airway management, manual defibrillation, rhythm recognition, drug administration, team leadership, peri- and post-arrest management and resuscitation in special circumstances were used significantly more often by cardiac arrest team members than first responders. There was no difference in skill use between medically and nursing qualified first responders or arrest team members. Conclusion: We believe that the ALS course is more appropriately targeted to members of a cardiac arrest team. 
In our opinion the recently launched Immediate Life Support course, in parallel with training in the recognition and intervention in the early stages of critical illness, are more appropriate for the occasional or first responder to a cardiac arrest. abstract_id: PUBMED:28741004 Seventeen years of life support courses for nurses: where are we now? The Life Support Course for Nurses (LSCN) equips nurses with the resuscitation skills to be first responders in in-hospital cardiac arrests. Seventeen years after the initiation of the LSCN, a confidential cross-sectional Qualtrics™ survey was conducted in May 2016 on LSCN graduands to assess the following: confidence in nurse-initiated resuscitation post-LSCN; defibrillation experience and outcomes; and perceived barriers and usefulness of the LSCN. The majority of respondents reported that the course was useful and enhanced their confidence in resuscitation. Skills retention can be enhanced by organising frequent team-based resuscitation training. Resuscitation successes should be publicised to help overcome perceived barriers. abstract_id: PUBMED:17161900 The immediate life support (ILS) course--the Italian experience. Aim Of The Study: The 1-day immediate life support course (ILS) was started in the United Kingdom and adopted by the ERC to train healthcare professionals who attend cardiac arrests only occasionally. Currently, there are no reports about the ILS course from outside the UK. In this paper we describe our initial Italian experience of teaching ILS to nurses. We have also measured the impact that ILS has on the resuscitation knowledge of nurses. Methods: The ILS course materials were translated by Italian ALS instructors who had observed the ILS course previously in the UK. From March to November 2005 nurses from a single hospital department attended the Italian ILS course. Candidate feedback was collected using an evaluation form. The change in knowledge of candidates was measured using a pre- and post-course test. Variables associated with candidate performance on course papers were investigated using multivariate linear regression analysis. Results: A total of 119 nurses attended nine ILS courses. All candidates completed the course successfully and gave high evaluation scores. ILS produced a significant increase from pre- to post-course score (10.15+/-2.75 to 13.19+/-2.53, p<0.001). The pre-course score was higher for nurses working in ICU compared with those coming from non-intensive wards, but this difference disappeared in the post-course evaluation (13.89+/-2.18 versus 12.79+/-2.65, p=ns). Conclusions: We have reproduced the ILS course in Italy successfully. ILS teaching resulted in an improvement in resuscitation knowledge of the first group of nurses trained. abstract_id: PUBMED:38461591 Exploring nurses' experiences of performing basic life support in hospital wards: An inductive thematic analysis. Aim: The aim of this study was to undertake an in-depth exploration of the lived experiences of in-hospital, non-intensive care, ward-based nurses' experiences of real-life CPR events. Background: There is growing evidence suggesting that may nurses not be able to successfully perform in a cardiac arrest situation. Reasons include a lack of clear leadership at the arrest, performance anxiety, role confusion and knowledge and skill degradation over time. Methods: In-depth semi-structured interviews were conducted with fifteen ward-based hospital nurses from three hospitals. 
Interviews were recorded, transcribed verbatim and inductive thematic analysis was completed using NVivo 12 software. Findings: Four main themes emerged from data. The main themes are: (1) Not Being able to Perform When it Matters, (2) Working Really Well as a Team, (3) Reflecting on the Experience: The Good, the Bad & the Ugly and (4) Learning to get it Right for Next Time CONCLUSION: Performing BLS is a stressful and anxiety-provoking experience for ward-based nurses. Anxiety levels appear to decrease slightly only when nurses have had at least one previous real-life experience with resuscitation. Current BLS education does not prepare nurses for the complexities of resuscitation. Future BLS education should focus on in-depth scenarios, including interdisciplinary team training and with greater frequency than the current yearly mandatory sessions. Listening to the lived experiences of nurses who have performed BLS has given much needed insight into approaches that educators can use to improve BLS education delivery. abstract_id: PUBMED:11114466 The European Resuscitation Council's paediatric life support course 'Advanced Paediatric Life Support'. The poor outcome for resuscitation from cardiopulmonary arrest in childhood is widely recognised. The European Resuscitation Council has adopted the Advanced Paediatric Life Support course (originating in the UK and now available in a number of countries) as its course for providers caring for children. This paper outlines the course content and explains its remit, which is to reduce avoidable deaths in childhood by not only resuscitation from cardiac arrest but, more effectively, by recognising and treating in a timely and effective fashion life-threatening illness and injury in infants and children. Two related courses Paediatric Life Support, a less intense course for less advanced providers, and Pre-Hospital Paediatric Life Support for immediate care providers are also described. Answer: Attendance at an immediate life support (ILS) course does influence nurses' skill deployment during cardiac arrest, but the impact may vary and is not always straightforward. According to the data collected from cardiac arrests occurring in two 12-month periods before and after ILS course implementation, there was a low deployment of skills such as defibrillation and laryngeal mask airway (LMA) insertion by ward nurses, with only one nurse defibrillating prior to the arrival of the crash team and two LMAs inserted by ward nurses (one with supervision) (PUBMED:15246583). Interviews with nurses who had completed ILS training revealed that while many felt confident immediately after the course, their confidence diminished over time, affecting their willingness to use their skills without supervision. The study concluded that ILS training alone might be insufficient to increase skill deployment by nurses who are not cardiac arrest team members, suggesting that a more supportive approach, such as individual coaching, may be necessary (PUBMED:15246583). Other studies have emphasized the importance of methodologies such as deep learning and reflection in life support courses for nurses, suggesting that these approaches could enhance the effectiveness of the training (PUBMED:35001108). The Life Support Course for Nurses (LSCN) in Singapore, for example, has evolved over the years and includes a larger proportion of nurses, indicating an ongoing effort to improve and adapt life support training for nurses (PUBMED:21879212). 
However, it has been noted that the ALS course may be more appropriately targeted to members of a cardiac arrest team, and that the ILS course, along with training in the recognition and intervention in the early stages of critical illness, may be more suitable for first responders to a cardiac arrest (PUBMED:12867310). Seventeen years after the initiation of the LSCN, a survey found that the majority of respondents reported that the course was useful and enhanced their confidence in resuscitation, although skills retention could be improved with more frequent team-based training (PUBMED:28741004). In summary, while immediate life support courses do influence nurses' skill deployment during cardiac arrest, the extent of this influence depends on various factors, including the retention of confidence and skills over time, the support provided post-training, and the specific roles of the nurses within the cardiac arrest response team.
Instruction: Does laterality of positive needle biopsy in clinical T2a patients with prostate cancer affect biochemical recurrence-free survival? Abstracts: abstract_id: PUBMED:18372026 Does laterality of positive needle biopsy in clinical T2a patients with prostate cancer affect biochemical recurrence-free survival? Objectives: To test whether patients with clinical Stage T2a prostate cancer with biopsy-proven disease only contralateral to the palpable abnormality experience outcomes similar to those of patients with clinical Stage T1c. Methods: We identified 1567 patients who had undergone radical prostatectomy at our institution from 1995 to 2007 with a prostate-specific antigen level of less than 10 ng/mL and complete information regarding the laterality of positive biopsy cores. Of these patients, 1157 had clinical Stage T1c and 410 Stage cT2a. The patients with clinical Stage T2a were divided into two groups according to the laterality of the positive biopsy cores: ipsilateral only (n = 241) and contralateral only (n = 53). Kaplan-Meier analyses were used to compare the biochemical recurrence-free survival (BRFS) probabilities. Results: The patients with clinical Stage T2a had significantly poorer 5-year BRFS than did the patients with clinical Stage T1c (83.5% versus 94.4%, P <0.001). The difference in BRFS between the contralateral and ipsilateral clinical Stage T2a groups was statistically insignificant. A significant difference was found in BRFS between patients with cT1c and cT2a ipsilateral disease. A statistically insignificant difference in BRFS was found between patients with cT1c and cT2a contralateral disease. Conclusions: The laterality of the needle biopsy in relation to the palpable abnormality in patients with clinical Stage T2a could affect BRFS. Our data have demonstrate an insignificant difference between patients with cT2a contralateral disease and those with contralateral cT1c disease. abstract_id: PUBMED:12771734 Improved clinical staging system combining biopsy laterality and TNM stage for men with T1c and T2 prostate cancer: results from the SEARCH database. Purpose: A number of studies have failed to show significant differences in outcome following radical prostatectomy between men with palpable, clinically localized prostate cancer (cT2) and those whose tumors are not palpable (cT1c). We determined whether we could improve the prognostic value of the TNM staging system in men with cT1c and cT2 cancers by including information on whether prostate needle biopsy was unilaterally or bilaterally positive. Materials And Methods: A retrospective survey of 992 patients from the SEARCH (Shared Equal Access Regional Cancer Hospital) Database treated with radical prostatectomy at 4 equal access medical centers between 1988 and 2002 was done. TNM 1992 clinical stage was T1c in 421 patients, T2a in 287, T2b in 202 and T2c in 82. Multivariate analysis was used to examine whether biopsy laterality and clinical stage were significant predictors of surgical margin status, nonorgan confined disease, seminal vesicle invasion, and time to prostate specific antigen (PSA) recurrence following radical prostatectomy. Results: Patients with clinical stages T2b and T2c cancers had similar rates of PSA recurrence, which were significantly higher than in patients with T1c and T2a disease, who also had similar rates of PSA recurrence. Bilateral positive biopsy further stratified patients with T1c and T2a disease (p = 0.01) but not those with T2b and T2c cancers (p = 0.207). 
Grouping these 1992 clinical stages with biopsy laterality resulted in a new clinical staging system, which was a significant predictor of PSA recurrence following radical prostatectomy (p <0.001). On multivariate analysis whether TNM clinical stage was evaluated as a categorical or continuous variable only PSA, biopsy Gleason score and the new clinical staging system (1992 TNM stage groupings combined with biopsy laterality) were significant independent predictors of time to biochemical recurrence following radical prostatectomy. Conclusions: Combining low (T1c and T2a) and high (T2b and T2c) risk 1992 clinical stages with biopsy laterality (unilateral versus bilateral positive) resulted in a new clinical staging system that was a stronger predictor of PSA recurrence following radical prostatectomy than the 1992 or 1997 TNM clinical staging system. If confirmed at other centers and in men who undergo with other treatment modalities, consideration should be given to revising the current TNM staging system to reflect these findings. abstract_id: PUBMED:30063011 Effects of perineural invasion in prostate needle biopsy on tumor grade and biochemical recurrence rates after radical prostatectomy. To predict local invasive disease before retropubic radical prostatectomy (RRP), the correlation of perineural invasion (PNI) on prostate needle biopsy (PNB) and RRP pathology data and the effect of PNI on biochemical recurrence (BR) were researched. For patients with RRP performed between 2005 and 2014, predictive and pathologic prognostic factors were assessed. Initially all and D'Amico intermediate-risk group patients were comparatively assessed in terms of being T2 or T3 stage on RRP pathology, positive or negative for PNI presence on PNB and positive or negative BR situation. Additionally the effect of PNI presence on recurrence-free survival (RFS) rate was investigated. When all patients are investigated, multivariate analysis observed that in T3 patients PSA, PNB Gleason score (GS) and tumor percentage were significantly higher; in PNI positive patients PNB GS, core number and tumor percentage were significantly higher and in BR positive patients PNB PNI positivity and core number were significantly higher compared to T2, PNI negative and BR negative patients, separately (p < 0.05). When D'Amico intermediate-risk patients are evaluated, for T3 patients PSA and PNB tumor percentage; for PNI positive patients PNB core number and tumor percentage; and for BR positive patients PNB PNI positivity were significantly higher compared to T2, PNI negative and BR negative patients, separately (p < 0.05). Mean RFS in the whole patient group was 56.4 ± 4.2 months for PNI positive and 96.1 ± 5.7 months for negative groups. In the intermediate-risk group, mean RFS was 53.7 ± 5.1 months for PNI positive and 100.3 ± 7.7 months for negative groups (p < 0.001). PNI positivity on PNB was shown to be an important predictive factor for increased T3 disease and BR rates and reduced RFS. abstract_id: PUBMED:12893339 Positive prostate biopsy laterality and implications for staging. Objectives: To examine the effect of including positive prostate biopsy information in palpation staging (2002 system) and the influence of this information on freedom from biochemical failure (bNED). Prostate biopsy laterality status (unilateral versus bilateral positive) is part of clinical staging using American Joint Commission on Cancer criteria, but is rarely used. 
Methods: From April 1, 1989 to September 30, 1999, 1038 patients with palpable T1-T3Nx-0M0 prostate cancer were treated with three-dimensional conformal radiotherapy alone. Kaplan-Meier bNED curves were compared using the log-rank test. The Cox proportional hazards regression model of bNED was used for multivariate analysis. Results: The median follow-up was 46 months. The proportion of patients with bilateral positive biopsies by palpation category T1c was 24%, by T2a was 17%, by T2b was 26%, by T2c was 65%, and by T3 was 53%. No statistically significant difference was noted in bNED on the basis of biopsy laterality status for the palpation T stages T1c, T2a, T2b, or T3. A statistically significant difference in the 5-year bNED in the T2c stage was found; those with unilateral positive biopsies fared worse (46% versus 74%, respectively, P = 0.04). Conclusions: Inclusion of positive biopsy laterality status into clinical staging causes stage migration without reflecting a change in outcome and should not be used. abstract_id: PUBMED:11796287 Influence of biopsy perineural invasion on long-term biochemical disease-free survival after radical prostatectomy. Objectives: To investigate the influence of biopsy perineural invasion (PNI) on long-term prostate-specific antigen recurrence rates, final pathologic stage, and surgical margin status of men treated with radical prostatectomy. Radical prostatectomy offers the best chance for surgical cure when performed for organ-confined disease. However, the histologic identification of PNI on prostate biopsy has been associated with a decreased likelihood of pathologically organ-confined disease. Methods: Seventy-eight men with histologic evidence of PNI on biopsy underwent radical prostatectomy by a single surgeon between April 1984 and February 1995 and were compared with 78 contemporary matched (biopsy Gleason score, prostate-specific antigen level, clinical stage, age) controls without PNI. Biochemical disease-free survival and pathologic findings were compared. Results: After a mean follow-up of 7.05 +/- 2.2 years and 7.88 +/- 2.7 years (P = 0.04) for patients with biopsy PNI and controls, respectively, no significant difference in the long-term prostate-specific antigen recurrence rates was observed (P = 0.13). The final Gleason score and pathologic staging were also similar in this matched cohort. Although the numbers of neurovascular bundles resected were comparable between the groups, no difference was found in the rate of positive surgical margins identified (13% versus 10%, P = 0.62). Conclusions: We were unable to show that PNI on needle biopsy influences long-term tumor-free survival. abstract_id: PUBMED:17868773 Stromogenic prostatic carcinoma pattern (carcinomas with reactive stromal grade 3) in needle biopsies predicts biochemical recurrence-free survival in patients after radical prostatectomy. We previously reported that reactive stromal grading in radical prostatectomies is a predictor of recurrence and that reactive stromal grading 0 and 3 are associated with lower biochemical recurrence-free survival rates than reactive stromal grading 1 and 2. We explored the prognostic significance of reactive stromal grading in preoperative needle biopsies. At Baylor College of Medicine, 224 cases of prostatic carcinoma were diagnosed by needle biopsy. 
Reactive stromal grading was evaluated on hematoxylin-eosin (H&E)-stained sections on the basis of previously described criteria: grade 0, with 0% to 5% reactive stroma; grade 1, 6% to 15%; grade 2, 16% to 50%; grade 3, 51% to 100%, or at least a 1:1 ratio between glands and stroma. Kaplan-Meier and Cox proportional hazard analyses were used. Reactive stromal grading distribution was as follows: reactive stromal grading 0, 1 case (0.5%); reactive stromal grading 1, 149 cases (66.5%); reactive stromal grading 2, 59 cases (26.3%); reactive stromal grading 3, 15 cases (6.7%). Reactive stromal grading in biopsies was correlated with adverse clinicopathologic parameters in the prostatectomy. Patients with reactive stromal grading 1 and 2 had better survival than those with 0 and 3 (P = .0034). Reactive stromal grading was an independent predictor of recurrence (hazard ratio = 1.953; P = .0174). Reactive stromal grading is independent of Gleason 4 + 3 and 3 + 4 in patients with a Gleason score of 7. Quantitation of reactive stroma and recognition of the stromogenic carcinoma in H&E-stained biopsies is useful to predict biochemical recurrence in prostate carcinoma patients independent of Gleason grade and prostate-specific antigen. abstract_id: PUBMED:25438682 Effect of positive surgical margins on biochemical failure, biochemical recurrence-free survival, and overall survival after radical prostatectomy: median long-term results. The aim of this study was to investigate the median long-term effects of positive surgical margin (PSM) and other prognostic factors on biochemical recurrence-free survival, overall survival, and biochemical failure in patients who underwent radical prostatectomy. Our study included 121 patients with pT2-3N0 disease treated between March 2006 and August 2012. The patients were divided into two groups: those with PSM and those with negative surgical margin (NSM). We analyzed the age, clinical and pathological stages, preoperative and postoperative Gleason scores, duration of the follow-up, adjuvant chemo-/radiotherapy, biochemical failure, biochemical recurrence-free survival, and overall survival in these patients. PSM was found in 25 (20%) patients, whereas 96 patients had NSM. The median follow-up time was 46.6 months (range 12-72 months) for the PSM group and 48.3 months (range 7-149 months) for the NSM group. The biochemical failure rate was 24% in the PSM group and 8.3% in the NSM group (p = 0.029). The biochemical recurrence-free survival was found as 76% in the PSM group and 91.7% in the NSM group. The difference between the groups was not statistically significant (p = 0.06). The overall survival was 100% in both groups. The surgical margins of the radical prostatectomy material is an important pathological indicator for biochemical failure at mid long-term follow-up. We did not find any effect of PSM on overall survival or biochemical recurrence-free survival. abstract_id: PUBMED:33712224 Opioids and premature biochemical recurrence of prostate cancer: a randomised prospective clinical trial. Background: Prostate cancer is one of the most prevalent neoplasms in male patients, and surgery is the main treatment. Opioids can have immune modulating effects, but their relation to cancer recurrence is unclear. We evaluated whether opioids used during prostatectomy can affect biochemical recurrence-free survival. Methods: We randomised 146 patients with prostate cancer scheduled for prostatectomy into opioid-free anaesthesia or opioid-based anaesthesia groups. 
Baseline characteristics, perioperative data, and level of prostate-specific antigen every 6 months for 2 yr after surgery were recorded. Prostate-specific antigen >0.2 ng ml-1 was considered biochemical recurrence. A survival analysis compared time with biochemical recurrence between the groups, and a Cox regression was modelled to evaluate which variables affect biochemical recurrence-free survival. Results: We observed 31 biochemical recurrence events: 17 in the opioid-free anaesthesia group and 14 in the opioid-based anaesthesia group. Biochemical recurrence-free survival was not statistically different between groups (P=0.54). Cox regression revealed that biochemical recurrence-free survival was shorter in cases of obesity (hazard ratio [HR] 1.63, confidence interval [CI] 0.16-3.10; p=0.03), high D'Amico risk (HR 1.58, CI 0.35-2.81; P=0.012), laparoscopic surgery (HR 1.6, CI 0.38-2.84; P=0.01), stage 3 tumour pathology (HR 1.60, CI 0.20-299) and N1 status (HR 1.34, CI 0.28-2.41), and positive surgical margins (HR 1.37, CI 0.50-2.24; P=0.002). The anaesthesia technique did not affect time to biochemical recurrence (HR -1.03, CI -2.65-0.49; P=0.18). Conclusions: Intraoperative opioid use did not modify biochemical recurrence rates and biochemical recurrence-free survival in patients with intermediate and high D'Amico risk prostate cancer undergoing radical prostatectomy. Clinical Trial Registration: NCT03212456. abstract_id: PUBMED:22964539 Preoperative predictors of pathologic stage T2a in low-risk prostate cancer: implications for focal therapy. Objective: To assess preoperative parameters that may be predictive of pathologic stage T2a disease in low-risk prostate cancer patients. Methods: Data from a cohort of 1,495 consecutive men with low-risk prostate cancer who underwent a radical prostatectomy between 1993 and 2009 were evaluated. Preoperative parameter assessment focused on age, race, clinical stage, diagnostic PSA level, biopsy tumor laterality and diagnostic Gleason score. Preoperative parameters were analyzed by univariate and multivariate methods. Kaplan-Meier method was used to evaluate the biochemical disease-free survival. Results: Among the 1,495 men, 236 (15.8%) had pT2a disease. In univariate analysis, biopsy tumor unilaterality (p < 0.001), diagnostic PSA ≤ 4 ng/ml (p < 0.001) and non-African-American race (p = 0.009) were significant variables. In multivariate analysis, biopsy tumor laterality (OR 0.377; p < 0.001), diagnostic PSA ≤ 4 ng/ml (OR 0.621; p = 0.002) and race (OR 0.583; p = 0.029) were independent predictors. Low-risk patients with pT2a disease showed a better PSA recurrence-free survival rate, compared with men with >pT2a diseases (p = 0.012). Conclusions: Biopsy tumor unilaterality, diagnostic PSA ≤ 4 ng/ml and race are independent predictors of pT2a in low-risk prostate cancer. These three preclinical variables may be a useful reference to begin the selection process for focal therapy in men with low-risk prostate cancer. abstract_id: PUBMED:10210388 Clinical and pathological characteristics, and recurrence rates of stage T1c versus T2a or T2b prostate cancer. Purpose: We compare clinicopathological features, and cancer recurrence and survival rates in men with stage T1c versus T2a or T2b prostate cancer. Materials And Methods: From 1988 through 1998, 1 surgeon (W. J. C.) performed radical retropubic prostatectomy in 1,620 men with a mean age plus or minus standard deviation of 62.3 +/- 7 years. 
Clinical stage was T1c in 39%, T2a in 22% and T2b in 39% of patients. Patients were followed with semiannual prostate specific antigen (PSA) measurement and annual digital rectal examination. Serum total PSA greater than 0.3 ng./ml., histologically confirmed local tumor recurrence or distant metastases were considered evidence of cancer recurrence. Simple univariate statistics were used to compare clinical and pathological features by clinical stage, and multivariate Cox models were used to compare 5-year recurrence-free probabilities . The 5-year all cause and disease specific survival rates were calculated using Kaplan-Meier product limit estimates. Results: Mean patient age was younger for the clinical stage T1c group (61 years) than for the T2a (62 years) or T2b (64 years) group. Mean preoperative PSA and the percentage of patients with biopsy Gleason score 8 to 10 were more favorable for the T1c (8 ng./ml., 3%) and T2a (7, 5%) groups than for the T2b group (11, 6%). Cancerous surgical margins, seminal vesicle invasion and lymph node metastases were also less frequent in the T1c (20, 5 and 0.8%, respectively) and T2a (23, 5 and 0.3%) groups than in the T2b group (29, 11 and 1.8%). The 5-year recurrence-free survival rate was 85% for T1c, 83% for T2a and 72% for T2b cases. Multivariate analysis indicated a decreased risk of recurrence for the T1c group compared to the T2a and T2b groups. The 5-year disease specific survival rate was 100% for the T1c and T2a groups, and 97% for the T2b group. Conclusions: Clinical and pathological features were similar for stages T1c and T2a, and different from stage T2b cancers. The 5-year recurrence-free survival was similar for T1c and T2a (log rank 0.89, p = 0.34), and higher than that for T2b (log rank 34.5, p <0.0001) cancers. However, controlling for all other prognostic factors on a Cox multivariate model, the risk of cancer recurrence was decreased for T1c compared to T2a and T2b disease. The detection of nonpalpable prostate cancer appears to be advantageous for intermediate-term cancer control. Answer: The laterality of positive needle biopsy in clinical T2a patients with prostate cancer does not appear to significantly affect biochemical recurrence-free survival (BRFS). A study identified 1567 patients who had undergone radical prostatectomy, with 410 of them having clinical Stage T2a. These T2a patients were divided into two groups based on the laterality of the positive biopsy cores: ipsilateral only and contralateral only. The Kaplan-Meier analyses comparing the BRFS probabilities between these two groups showed that the difference in BRFS was statistically insignificant. This suggests that the laterality of the needle biopsy in relation to the palpable abnormality in patients with clinical Stage T2a does not significantly impact BRFS (PUBMED:18372026). Additionally, another study from the SEARCH database aimed to improve the prognostic value of the TNM staging system by including information on whether prostate needle biopsy was unilaterally or bilaterally positive. The study found that bilateral positive biopsy further stratified patients with T1c and T2a disease, but not those with T2b and T2c cancers. However, when considering the laterality of biopsy, the new clinical staging system that was developed was a significant predictor of PSA recurrence following radical prostatectomy. This suggests that while laterality may not directly affect BRFS, it could be valuable when combined with other staging information (PUBMED:12771734). 
In summary, the direct comparison of ipsilateral-only versus contralateral-only positive biopsies in clinical T2a disease showed no statistically significant difference in BRFS; however, contralateral-only cT2a disease had outcomes statistically indistinguishable from cT1c disease, whereas ipsilateral cT2a disease had significantly poorer BRFS than cT1c, leading the study authors to conclude that biopsy laterality relative to the palpable nodule could affect BRFS (PUBMED:18372026). Its greatest value therefore appears to lie in refining clinical staging when combined with other information, such as unilateral versus bilateral core positivity (PUBMED:12771734).
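To make the survival comparison described above concrete, here is a minimal, self-contained sketch of a Kaplan-Meier / log-rank analysis of biochemical recurrence-free survival by biopsy laterality. It assumes the `lifelines` Python package; the toy follow-up times, event indicators, and column names are invented for illustration and are not data from the cited studies.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy cohort: months of follow-up, recurrence indicator (1 = PSA recurrence,
# 0 = censored), and biopsy laterality relative to the palpable nodule.
df = pd.DataFrame({
    "months":     [12, 30, 45, 60, 22, 64, 58, 61, 15, 40, 62, 59],
    "recurred":   [1,  0,  0,  0,  1,  0,  0,  0,  1,  0,  0,  0],
    "laterality": ["ipsi"] * 6 + ["contra"] * 6,
})

ipsi = df[df["laterality"] == "ipsi"]
contra = df[df["laterality"] == "contra"]

# Kaplan-Meier estimates of biochemical recurrence-free survival per group
kmf = KaplanMeierFitter()
for label, grp in [("ipsilateral cT2a", ipsi), ("contralateral cT2a", contra)]:
    kmf.fit(grp["months"], event_observed=grp["recurred"], label=label)
    print(label, "estimated 5-year BRFS:", round(float(kmf.predict(60)), 2))

# Log-rank test for a difference between the two BRFS curves
result = logrank_test(ipsi["months"], contra["months"],
                      event_observed_A=ipsi["recurred"],
                      event_observed_B=contra["recurred"])
print("log-rank p-value:", round(result.p_value, 3))
```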
Instruction: Xerostomia in long-term survivors of aggressive non-Hodgkin's lymphoma of Waldeyer's ring: a potential role for parotid-sparing techniques? Abstracts: abstract_id: PUBMED:19307951 Xerostomia in long-term survivors of aggressive non-Hodgkin's lymphoma of Waldeyer's ring: a potential role for parotid-sparing techniques? Background: The degree of xerostomia in patients treated for intermediate-and high-grade non-Hodgkin lymphoma (NHL) of Waldeyer's ring (WR) is unknown. Methods And Materials: Fifteen patients treated for stage I-IV NHL of WR with radiotherapy (RT) were administered a xerostomia questionnaire. Numerical responses (0 = no xerostomia; 100 = maximum xerostomia) were compared with responses from 5 sets of patients treated for head and neck squamous cell carcinoma who were grouped by amount of parotid in RT field: larynx-only, ipsilateral parotid, bilateral-partial parotid, bilateral-total parotid, parotid-sparing intensity-modulated radiotherapy. Results: Waldeyer's patients' median xerostomia questionnaire score was 31, which was significantly different from the larynx-only group, bilateral-partial parotid group, and bilateral-total parotid group, but not significantly different from the ipsilateral parotid group or parotid-sparing intensity-modulated radiotherapy group. Conclusions: Xerostomia in survivors WR NHL is a detectable toxicity with severity like that in head and neck squamous cell carcinoma patients who receive ipsilateral parotid irradiation, and warrants parotid-sparing RT techniques. abstract_id: PUBMED:16489463 Solitary extramedullary plasmacytoma and granulomatous sialadenitis of the parotid gland preceding a B-cell non-Hodgkin's lymphoma. A patient with swelling of the left parotid gland of four-months' duration, sicca syndrome (xerophthalmia and xerostomia) and a history of progressive systemic sclerosis with an incomplete form of the CREST syndrome was referred to our department. On ultrasound a parotid mass of reduced echogenicity without any enlarged cervical lymph nodes was found. Ultrasonographically guided fine-needle biopsy could not provide any definitive diagnosis. After partial parotidectomy with complete tumor removal the histologic exam showed an extramedullary plasmacytoma with concurrent non-necrotizing granulomatous sialadenitis of the parotid gland. Complete systemic work-up excluded multiple myeloma, leukemia, lymphoma and sarcoidosis. Post-operative radiotherapy of the left parotid region and left neck including the supraclavicular lymph node area was performed. Six months after surgery an aggressive B-cell non-Hodgkin's lymphoma was diagnosed. abstract_id: PUBMED:17703372 Parotid gland involvement, the presenting sign of high grade non-Hodgkin lymphoma in two patients with Gaucher disease and sicca syndrome. Increased risk of haematological malignancies has been described in Gaucher disease patients; however, high-grade lymphoma has been rarely observed. We report two patients with Gaucher disease and sicca syndrome diagnosed with aggressive lymphoma involving the parotid gland. A 29-year-old woman with Gaucher disease developed tumour of the left parotid gland. She reported chronic arthralgias, xerostomia and xerophthalmia. Parotid gland biopsy disclosed diffuse large B-cell lymphoma. No lymphadenopathy was found. Bone biopsy revealed focal lymphomatous infiltration consistent with stage IV disease. 
MACOP-B chemotherapy regimen (cyclophosphamide, adriamycin, methotrexate, bleomycin, vincristine, prednisone) resulted in complete remission for 15 years. A 76-year-old patient with Gaucher disease suffered from dry-mouth feeling. He developed a left parotid gland tumour. CT scan disclosed diffuse lymphadenopathy, pleural effusion and multiple lung nodules. A cervical lymph node biopsy revealed mantle cell lymphoma. Fine-needle aspiration of the parotid gland showed lymphoma cells. Immunochemotherapy with fludarabine, cyclophosphamide and rituximab resulted in complete remission. Accumulation of the glucocerebroside in Gaucher disease activates macrophages, inducing release of pro-inflammatory cytokines which may be involved in the pathogenesis of second malignancy. Patients with Gaucher disease bear an increased risk of haematological malignancies; however, aggressive lymphoma has been described only occasionally. In both our patients the presenting sign of lymphoma was tumour of the parotid gland. The patients suffered from sicca syndrome, which increases risk for developing lymphoma. The underlying Gaucher disease and sicca syndrome might be implicated as immunological triggers for lymphoma occurrence and its propensity for the parotid gland in these patients. abstract_id: PUBMED:10680876 Bilateral mucosa-associated lymphoid tissue lymphoma of the parotid gland. Mucosa-associated lymphoid tissue (MALT) tumors of the parotid gland are extranodal non-Hodgkin lymphomas. Stage I and II MALT tumors are usually treated with surgery or radiotherapy. Bilateral MALT-derived non-Hodgkin lymphoma of the parotid glands is rare, and optimal treatment is debatable. Two patients presented at the otorhinolaryngology department of the Friedrich-Alexander-University of Erlangen-Nuremberg, Erlangen, Germany. The treatment strategy that was used in case 1 was also successfully used in case 2. A precise diagnosis could not be made by either fine-needle biopsy or intraoperative frozen section biopsy; it was achieved with open biopsy. Surgery and/or radiotherapy proved to be effective. There was no recurrence of disease in either case. The advantages of surgery are complete resection of the tumor and absence of xerostomia and mucositis, which are caused by irradiation. Radiotherapy does not produce a scar or an indentation at the parotid region, however, and results in a better cosmetic appearance. Therefore, we recommend open biopsy with facial nerve monitoring and subsequent irradiation in cases in which bilateral prominence of the parotid glands and suspicion of a MALT lymphoma are both present. abstract_id: PUBMED:9006743 Alternating chemotherapy and radiotherapy for limited-stage intermediate and high-grade non-Hodgkin's lymphomas: long-term results for 96 patients with tumors > 5 cm. Background: The role and timing of radiotherapy for optimal treatment of localized aggressive non-Hodgkin's lymphoma (NHL) is controversial. We report the long-term results of a single-institution pilot study of alternating chemotherapy (CT) and radiotherapy (RT) in patients with clinical stages I or II tumors exceeding 5 cm. Patients And Methods: From 1981 to 1992, 96 patients with stages I-II aggressive NHL received an alternating regimen of CT and RT consisting of 8 cycles of CT with 3 courses of RT interjected after the 2nd, 3rd and 4th cycles of CT. The CT combined cyclophosphamide, doxorubicin, teniposide and prednisone every 28 days. 
Each RT course was started 8 to 10 days after CT (15 Gy in 6 fractions to initially involved and contiguous areas). Results: The median age was 54 years. The disease predominantly located in the head and neck area was stage II in 63% of patients. Bulky tumors (10 cm or larger) were found in 24% of patients. Six patients discontinued CT because of acute toxicity (mucositis). The mean relative dose intensity achieved for doxorubicin, cyclophosphamide and teniposide were 72%, 82%, and 78%, respectively. Late toxicity consisted mostly of severe xerostomia lasting more than 2 years in 7 patients irradiated in Waldeyer's ring. The complete response (CR) rate was 91%; 20 of the 86 patients in CR relapsed (3 locally only). The median follow-up was 61 months, and at 5 years, overall survival (OS) was 77%. Classification according to the International Prognostic Factor Index was possible for 54 patients, all but three of whom were in the 'low risk' group (0-1 factor). Bulky disease was the only unfavorable prognostic factor (P < 0.001) for CR, freedom from progression (FFP) and OS rates; the low relative dose intensity of CT achieved in this study did not affect outcome. Conclusion: Alternating chemo-radiotherapy for localized aggressive NHL was feasible and yielded long-term results comparable to those obtained with standard treatments, despite a reduction in dose intensity considerably below that of CHOP which suggested synergistic effects of CT and RT in this scheme. abstract_id: PUBMED:24287196 The role of parotidectomy in Sjögren's syndrome. Sjögren's syndrome, a chronic and progressive autoimmune disorder mainly characterized by xerophthalmia, xerostomia, and parotid enlargement, is primarily managed medically, but some patients will require surgical management. Patients with Sjögren's syndrome have an increased risk of non-Hodgkin lymphoma. Superficial parotidectomy is indicated for diagnostic purposes and can be therapeutic in limited circumstances. Surgical indications for parotidectomy in Sjögren's syndrome include recurrent parotitis refractory to medical management; salivary gland malignancy; and severe, refractory pain. Surgical complications include transient or permanent facial nerve injury, post-operative pain, persistent inflammation of remnant parotid tissue, Frey syndrome, and facial scarring. abstract_id: PUBMED:15976068 Importance of the initial volume of parotid glands in xerostomia for patients with head and neck cancers treated with IMRT. Objective: Our aim was to evaluate predictors of xerostomia in patients with head and neck cancers treated with intensity-modulated radiation therapy (IMRT). Methods: Thirty-three patients with pharyngeal cancer were evaluated for xerostomia after having been treated with IMRT. All patients were treated with whole-neck irradiation of 46-50 Gy by IMRT, followed by boost IMRT to the high-risk clinical target volume to a total dose of 56-70 Gy in 28-35 fractions (median, 68 Gy). For boost IMRT, a second computed tomography (CT-2) scan was done in the third to fourth week of IMRT. Xerostomia was scored 3-4 months after the start of IMRT. Results: The mean doses to the contralateral and ipsilateral parotid glands were 24.0 +/- 6.2 and 30.3 +/- 6.6 Gy, respectively. Among the 33 patients, xerostomia of grades 0, 1, 2 and 3 was noted in one, 18, 12 and two patients, respectively. 
Although the mean dose to the parotid glands was not correlated with the grade of xerostomia, the initial volume of the parotid glands was correlated with the grade of xerostomia (P = 0.04). Of 17 patients with small parotid glands (< or =38.8 ml) on initial CT (CT-1), 11 (65%) showed grade 2 or grade 3 xerostomia, whereas only three (19%) of 16 patients with larger parotid glands showed grade 2 xerostomia (P < 0.05). The mean volume of the parotid glands on CT-1 was 43.1 +/- 15.2 ml, but decreased significantly to 32.0 +/- 11.4 ml (74%) on CT-2 (P < 0.0001). Conclusions: Initial volumes of the parotid glands are significantly correlated with the grade of xerostomia in patients treated with IMRT. The volume of the parotid glands decreased significantly during the course of IMRT. abstract_id: PUBMED:17889351 Late effects in survivors of Hodgkin and non-Hodgkin lymphoma treated with autologous hematopoietic cell transplantation: a report from the bone marrow transplant survivor study. We determined the prevalence of self-reported late-effects in survivors of autologous hematopoietic cell transplantation (HCT) for Hodgkin lymphoma (HL, n = 92) and non-Hodgkin lymphoma (NHL, n = 184) using a 255-item questionnaire and compared them to 319 sibling controls in the Bone Marrow Transplant Survivor Study. Median age at HCT was 39 years (range: 13-69) and median posttransplant follow-up was 6 years (range: 2-17). Median age at survey was 46 years (range: 21-73) for survivors and 44 years (range: 19-79) for siblings. Compared to siblings, HCT survivors reported a significantly higher frequency of cataracts, dry mouth, hypothyroidism, bone impairments (osteoporosis and avascular necrosis), congestive heart failure, exercise-induced shortness of breath, neurosensory impairments, inability to attend work or school, and poor overall health. Compared to those receiving no total-body irradiation (TBI), patients treated with TBI-based conditioning had higher risks of cataracts (odds-ratio [OR] 4.9, 95% confidence interval [CI] 1.5-15.5) and dry mouth (OR 3.4, 95% CI 1.1-10.4). Females had a greater likelihood of reporting osteoporosis (OR 8.7, 95% CI: 1.8-41.7), congestive heart failure (OR 4.3, 95% CI 1.1-17.2), and abnormal balance, tremor, or weakness (OR 2.4, 95% CI 1.0-5.5). HL and NHL survivors of autologous HCT have a high prevalence of long-term health-related complications and require continued monitoring for late effects of transplantation. abstract_id: PUBMED:33453004 Late effects in survivors treated for lymphoma as adolescents and young adults: a population-based analysis. Purpose: The study objective is to describe and quantify the incidence of treatment-induced late effects in AYA lymphoma patients. Methods: Consecutive patients diagnosed with Hodgkin lymphoma (HL) or non-Hodgkin lymphoma (NHL) at 15-24 years of age were identified. All patients in British Columbia who received radiation therapy (RT) from 1974 to 2014 with ≥ 5-year survival post-RT were included. Late effects' analyses included only survivors who received RT to the relevant anatomical site(s) and/or relevant chemotherapy, and were reported as cumulative incidence (CI) ± standard error. Results: Three hundred and five patients were identified (74% HL). Median age of diagnosis was 21 years. Median follow-up was 19.1 years for secondary malignancy and 7.2 years for other endpoints. Hypothyroidism was the most prevalent late effect, with a CI of 22.4 ± 2.8% and 35.1 ± 4% at 5 and 10 years, respectively. 
CI of in-field secondary malignancy was 0.4 ± 0.4% at 10 years and 2.8 ± 1.2% at 20 years. CI of symptomatic pulmonary toxicity was 4.6 ± 1.5% and 6.8 ± 2.0% at 5 and 10 years, respectively, and was higher in patients receiving multiple RT courses (p = 0.009). Esophageal complications occurred at a CI of 1.4 ± 0.8% at 5 years and 2.2 ± 1.1% at 10 years. CI of xerostomia/dental decay was 2.6 ± 1.3% at 5 years and 4.9 ± 2.1% at 10 years. CI of cardiac disease was at 2.3 ± 0.9% at 5 years and 4.4 ± 1.5% at 10 years. CI of infertility was 6.5 ± 1.6% at 5 years and 9.4 ± 2.1% at 10 years. Conclusion: Survivors of AYA lymphoma have a high incidence and diverse presentation of late effects. Implications For Cancer Survivors: AYA lymphoma survivors should be educated about their risks of late effects and offered screening and follow-up when appropriate. abstract_id: PUBMED:8426199 Central lymphatic irradiation for stage III nodular malignant lymphoma: long-term results. Purpose: To report the long-term results of central lymphatic irradiation for stage III nodular malignant lymphoma. Patients And Methods: Between 1969 and 1985, 34 patients (26 with nodular poorly differentiated lymphoma, four with nodular mixed lymphocytic/histiocytic lymphoma, and four with nodular histiocytic lymphoma) were treated with central lymphatic irradiation. Median age of the group was 51 years (range, 30 to 73). There were 15 men and 19 women. Staging work-up included a physical examination and bone marrow biopsy in all patients. Seventy-four percent had a lymphangiogram (LAG) and 44% a laparotomy (LAP). Eighty-two percent had either a LAP or a LAG. Thirty-two patients were Ann Arbor stage IIIA and two were stage IIIB. All patients received lymphatic irradiation that encompassed cervical, supraclavicular, axillary, mediastinal, paraaortic, mesenteric, pelvic, and femoral lymphatics to total doses ranging from 20 to 30 Gy in 1.0- to 1.8-Gy fractions. Waldeyer's ring was initially treated in 17 patients. Follow-up information is available on all 34 patients. Median follow-up is 9 years, 8 months (range, 15 to 244 months). Results: Life-table actuarial overall, disease-free, and cause-specific survival rates at 15 years are 28%, 40%, and 46%, respectively. Only one relapse was observed after 9 years. Disease-free survival was significantly improved in patients with five or fewer sites of involvement (P = .02). Age, sex, B symptoms, histology, and technique of irradiation were not prognostically significant. Salvage therapy, including further irradiation and/or chemotherapy, was delivered to 20 patients. Ten percent of these patients remain alive without evidence of disease. Toxicity data were available for the patients treated at the Medical College of Wisconsin (MCW). Radiation Therapy Oncology Group (RTOG) acute hematologic, gastrointestinal, and salivary toxicity scores were < or = 2 in 83% of patients. Late toxicity scores were < or = 2 in 96%. Persistent xerostomia was noted in 23% of patients who received initial treatment to Waldeyer's ring. Conclusion: These results suggest that initial comprehensive central lymphatic irradiation may be the preferred approach to achieve a durable relapse-free interval for this group of patients. 
Answer: Xerostomia, or dry mouth, is a known toxicity in survivors of aggressive non-Hodgkin's lymphoma (NHL) of Waldeyer's ring, and its severity is comparable to that experienced by head and neck squamous cell carcinoma patients who receive ipsilateral parotid irradiation (PUBMED:19307951). This suggests that parotid-sparing radiotherapy (RT) techniques could be beneficial in reducing the incidence and severity of xerostomia in these patients. The importance of sparing the parotid glands during RT is further supported by studies showing that the initial volume of the parotid glands is significantly correlated with the grade of xerostomia in patients treated with intensity-modulated radiation therapy (IMRT) for head and neck cancers (PUBMED:15976068). Additionally, late effects in survivors treated for lymphoma as adolescents and young adults, including xerostomia, highlight the need for careful consideration of treatment-induced complications and the potential benefits of parotid-sparing approaches (PUBMED:33453004). Moreover, the occurrence of xerostomia as a late effect in survivors of Hodgkin and non-Hodgkin lymphoma treated with autologous hematopoietic cell transplantation (HCT) further emphasizes the need for ongoing monitoring and management strategies to mitigate such complications (PUBMED:17889351). In conclusion, the evidence suggests that parotid-sparing RT techniques could play a role in reducing the incidence and severity of xerostomia in long-term survivors of aggressive NHL of Waldeyer's ring. This approach may help improve the quality of life for these patients by minimizing one of the distressing side effects of their treatment.
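To make the parotid-volume finding concrete, the following minimal Python sketch (an illustration, not code from any of the cited studies) reruns the kind of 2x2 comparison reported in PUBMED:15976068, where 11 of 17 patients with small parotid glands versus 3 of 16 patients with larger glands developed grade 2-3 xerostomia. The counts come from that abstract; the choice of Fisher's exact test and the variable names are assumptions, since the abstract reports only "P < 0.05" without naming the test used.

# Hypothetical re-check of the 2x2 comparison from PUBMED:15976068.
# Counts are taken from the abstract; the test choice is an assumption.
from scipy.stats import fisher_exact

#        grade >=2 xerostomia, grade 0-1
table = [[11, 17 - 11],   # small parotid glands (<= 38.8 ml), n = 17
         [3, 16 - 3]]     # larger parotid glands, n = 16
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")

A p-value in the same range as the reported P < 0.05 would simply confirm that grouping by initial gland volume separates the xerostomia grades in these data.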
Instruction: Do socioeconomic characteristics of neighbourhood of residence independently influence incidence of coronary heart disease and all-cause mortality in older British men? Abstracts: abstract_id: PUBMED:18277181 Do socioeconomic characteristics of neighbourhood of residence independently influence incidence of coronary heart disease and all-cause mortality in older British men? Background: The relationship between coronary heart disease (CHD) incidence and death, and individual sociodemographic status is well established. Our aim was to examine whether neighbourhood deprivation scores predict CHD and death in older men, independently of individual sociodemographic status. Methods: Prospective study of 5049 men, born between 1918 and 1939, recruited from 24 British towns encompassing 969 electoral wards, without documented evidence of previous major CHD when responding to a questionnaire in 1992, and followed up for incidence of major CHD and death. Results: Four hundred and seventy-two new major CHD events (1.08% pa), and 1021 deaths (2.28% pa) occurred over an average of 9.75 years. When men were divided into fifths according to increasing neighbourhood deprivation score, CHD incidences (% pa) were 0.92, 0.89, 0.99, 1.33 and 1.29. When modelling continuous trends, the rate ratio for men in the top fifth compared with the bottom fifth was 1.55 (95% confidence interval 1.19-2.00) for CHD. This rate ratio was, however, no longer statistically significant [1.22 (95% confidence interval 0.92-1.61)] when effects of individual sociodemographic status measures (car ownership, housing, longest held occupation, marital status and social networks) were accounted for. Conclusion: Little evidence of an independent relationship of neighbourhood deprivation with CHD incidence was found once individual measures of sociodemographic status had been adjusted for. abstract_id: PUBMED:30253698 Heat wave-related mortality in Sweden: A case-crossover study investigating effect modification by neighbourhood deprivation. Aims: The present study aimed to investigate if set thresholds in the Swedish heat-wave warning system are valid for all parts of Sweden and if the heat-wave warning system captures a potential increase in all-cause mortality and coronary heart disease (CHD) mortality. An additional aim was to investigate whether neighbourhood deprivation modifies the relationship between heat waves and mortality. Methods: From 1990 until 2014, in 14 municipalities in Sweden, we collected data on daily maximum temperatures and mortality for the five warmest months. Heat waves were defined according to the categories used in the current Swedish heat-wave warning system. Using a case-crossover approach, we investigated the association between heat waves and mortality in Sweden, as well as a modifying effect of neighbourhood deprivation. Results: On a national as well as a regional level, heat waves significantly increased both all-cause mortality and CHD mortality by approximately 10% and 15%, respectively. While neighbourhood deprivation did not seem to modify heat wave-related all-cause mortality, it did seem to modify heat wave-related CHD mortality. Conclusions: It may not be appropriate to assume that heat waves in Sweden will have the same impact in a northern setting as in a southern, or that the impact of heat waves will be the same in affluent and deprived neighbourhoods. When designing and implementing heat-wave warning systems, neighbourhood, regional and national information should be incorporated.
abstract_id: PUBMED:23447572 Neighbourhood deprivation and hospitalization for atrial fibrillation in Sweden. Aims: Several cardiovascular disorders (CVDs) are strongly associated with socioeconomic disparities and neighbourhood deprivation. However, no study has determined whether neighbourhood deprivation is associated with atrial fibrillation (AF). We aimed to determine whether there is an association between neighbourhood deprivation and hospitalization for AF. Methods And Results: The entire Swedish population aged 25-74 years was followed from 1 January 2000 until hospitalization for AF, death, emigration, or the end of the study period (31 December 2008). Data were analysed by multilevel logistic regression, with individual-level characteristics (age, marital status, family income, educational attainment, migration status, urban/rural status, mobility, and comorbidity) at the first level and level of neighbourhood deprivation at the second level. Neighbourhood deprivation was significantly associated with AF hospitalization rate in women [odds ratio (OR) = 1.40, 95% confidence interval (CI) 1.35-1.47], but not men (OR = 1.01, 95% CI 0.97-1.04). The odds of AF in women living in the most deprived neighbourhoods remained significant after adjustment for age and individual-level socioeconomic characteristics (OR = 1.12, 95% CI 1.08-1.16). However, in the full model, which took account of age, individual-level socioeconomic characteristics, and comorbidities (chronic lower respiratory diseases, OR = 1.30; type 2 diabetes, OR = 1.32; alcoholism and alcohol-related liver disease, OR = 1.57; hypertension, OR = 2.84; obesity, OR = 1.80; heart failure, OR = 7.40; coronary heart disease, OR = 1.81; and hyperthyroidism, OR = 6.79), the odds of AF did not remain significant in women in the most deprived neighbourhoods (OR = 1.03, 95% CI 0.99-1.07). Conclusion: Neighbourhood deprivation and socioeconomic disparities are not independently associated with hospitalized AF in contrast to many other CVDs. abstract_id: PUBMED:8604778 Socioeconomic differentials in mortality risk among men screened for the Multiple Risk Factor Intervention Trial: I. White men. Objectives: This study examined socioeconomic differentials in risk of death from a number of specific causes in a large cohort of White men in the United States. Methods: For 300 685 White men screened for the Multiple Risk Factor Intervention Trial between 1973 and 1975, data were collected on median income of White households in the zip code of residence, age, cigarette smoking, blood pressure, serum cholesterol, previous myocardial infarction, and drug treatment for diabetes. The 31 737 deaths that occurred over the 16-year follow-up period were grouped into specific causes and related to median White family income. Results: There was an inverse association between age-adjusted all-cause mortality and median family income. There was no attenuation of this association over the follow-up period, and the association was similar for the 22 clinical centers carrying out the screening. The gradient was seen for many, but not all, of the specific causes of death. Other risk factors accounted for some of the association between income and coronary heart disease and smoking-related cancers. Conclusions: Socioeconomic position, as measured by median family income of area of residence, is an important determinant of mortality risk in White men. abstract_id: PUBMED:17395482 Skin color and mortality risk among men: the Puerto Rico Heart Health Program.
Purpose: To examine the association between skin color and all-cause and cardiovascular disease (CVD)-related mortality risk before and after adjusting for selected characteristics and risk factors, we used data on 5,304 men with information on skin color at Exam 3 of the Puerto Rico Heart Health program (PRHHP), a longitudinal study of the incidence of coronary heart disease in Puerto Rican men. Methods: Mortality was ascertained using hospital and physician records, postmortem records, death certificates, and information from the next of kin. Results: Dark-skinned men exhibited higher age-adjusted mortality rates than light skinned men (10.1 vs. 8.8/10,000 population). There was no association between skin color and all-cause and CVD-related mortality. However, the association between skin color and all-cause mortality varied with area of residence (p for interaction = 0.05). Among men living in urban areas, the risk of all-cause mortality was 28% (95% confidence interval, 1.02-1.61) greater among dark-skinned men than their light-skinned counterparts after adjusting for age, education, BMI, physical activity, and the presence of diabetes. There was no association between skin color and CVD mortality in urban men. Neither all-cause nor CVD mortality was associated with skin color among rural men. Conclusion: Our results suggest that skin color may be capturing environmental dynamics that may influence mortality risk among Puerto Rican men. abstract_id: PUBMED:14966232 The role of individual and contextual socioeconomic circumstances on mortality: analysis of time variations in a city of north west Italy. Study Objective: To evaluate the independent and mutual effects of neighbourhood deprivation and of individual socioeconomic conditions on mortality and to assess the trends over the past 30 years and the residual neighbourhood heterogeneity. Design: General and cause specific mortality was analysed as a function of time period, highest educational level achieved, housing conditions, and neighbourhood deprivation, using multilevel Poisson models stratified by gender and age class. Setting: The study was conducted in Turin, a city in north west Italy with nearly one million inhabitants and consisting of 23 neighbourhoods. Participants: The study population included three cohorts of persons aged 15 years or older, recorded in the censuses of 1971, 1981, and 1991 and followed up for 10 years after each census. Main Results: Individual and contextual socioeconomic conditions showed an independent and significant impact on mortality, both among men and women, with significantly higher risks for coronary heart and respiratory diseases among people, aged less than 65 years, residing in deprived neighbourhoods (9% and 15% excess for coronary heart diseases, 20% and 24% for respiratory diseases, respectively for men and women living in deprived neighbourhoods compared with rich). The decreasing time trend in general mortality was less pronounced among men with lower education and poorer housing conditions, compared with their more advantaged counterparts; the same was found in less educated women aged less than 65 years. Conclusions: These results and further developments in the evaluation of impact and mechanisms of other contextual effects can provide information for both health and non-health oriented urban policies. abstract_id: PUBMED:26864672 Neighbourhood socioeconomic status and coronary heart disease in individuals between 40 and 50 years. 
Objective: The incidence of myocardial infarction (MI) has decreased in general but not among younger middle-aged adults. We performed a cohort study of the association between neighbourhood socioeconomic status (SES) at the age of 40 and risk of MI before the age of 50 years. Methods: All individuals in Sweden were included in the year of their 40th birthday, if it occurred between 1998 and 2010. National registers were used to categorise neighbourhood SES into high, middle and low, and to retrieve information on incident MI and coronary heart disease (CHD). Cox regression models, adjusted for marital status, education level, immigrant status and region of residence, provided an estimate of the HRs and 95% CIs for MI or CHD. Results: Out of 587 933 men and 563 719 women, incident MI occurred in 2877 (0.48%) men and 932 (0.17%) women; and CHD occurred in 4400 (0.74%) men and 1756 (0.31%) women during a mean follow-up of 5.5 years. Using individuals living in middle-SES neighbourhoods as referents, living in high-SES neighbourhoods was associated with lower risk of MI in both sexes (HR (95% CI): men: 0.72 (0.64 to 0.82), women: 0.66 (0.53 to 0.81)); living in low-SES neighbourhoods was associated with a higher risk of MI (HR (95% CI): men: 1.31 (1.20 to 1.44), women: 1.28 (1.08 to 1.50)). Similar risk estimates for CHD were found. Conclusions: The results of our study suggest an increased risk of MI and CHD among residents from low-SES neighbourhoods and a lower risk in those from high-SES neighbourhoods compared with residents in middle-SES neighbourhoods. abstract_id: PUBMED:37463808 Workplace socioeconomic characteristics and coronary heart disease: a nationwide follow-up study. Objectives: Important gaps in previous research include a lack of studies on the association between socioeconomic characteristics of the workplace and coronary heart disease (CHD).We aimed to examine two contextual factors in association with individuals' risk of CHD: the mean educational level of all employees at each individual's workplace (educationwork) and the neighbourhood socioeconomic characteristics of each individual's workplace (neighbourhood SESwork). Design: Nationwide follow-up/cohort study. Setting: Nationwide data from Sweden. Participants: All individuals born in Sweden from 1943 to 1957 were included (n=1 547 818). We excluded individuals with a CHD diagnosis prior to 2008 (n=67 619), individuals without workplace information (n=576 663), individuals lacking residential address (n=4139) and individuals who had unknown parents (n=7076). A total of 892 321 individuals were thus included in the study (426 440 men and 465 881 women). Primary And Secondary Outcome Measures: The outcome variable was incident CHD during follow-up between 2008 and 2012. The association between educationwork and neighbourhood SESwork and the outcome was explored using multilevel and cross-classified logistic regression models to determine ORs and 95% CIs, with individuals nested within workplaces and neighbourhoods. All models were conducted in both men and women and were adjusted for age, income, marital status, educational attainment and neighbourhood SESresidence. Results: Low (vs high) educationwork was significantly associated with increased CHD incidence for both men (OR 1.29, 95% CI 1.23 to 1.34) and women (OR 1.38, 95% CI 1.29 to 1.47) and remained significant after adjusting for potential confounders. These findings were not replicable for the variable neighbourhood SESwork. 
Conclusions: Workplace socioeconomic characteristics, that is, the educational attainment of an individual's colleagues, may influence CHD risk, which represents new knowledge relevant to occupational health management at workplaces. abstract_id: PUBMED:9680230 Residential segregation and mortality in New York City. The objective of this research was to determine the effect of residential racial segregation on all-cause and cardiovascular disease mortality in New York City. A cross-sectional study of residents in New York City was conducted linking mortality records from 1988 through 1994, to the 1990 United States Census data stratified by zipcode. All-cause and cardiovascular disease mortality rates for non-Hispanic blacks and whites were estimated by zipcode. Zipcodes were aggregated according to the degree of residential segregation (predominantly (> or = 75%) white and black areas) and mortality rates were compared. Multiple regression analysis was used to associate population characteristics with mortality. In New York City, although overall mortality rates of blacks exceed whites, these rates varied substantially by locality according to the pattern of racial segregation. Whites living in the higher (mainly white) socioeconomic areas had lower mortality rates than whites living in predominantly black areas (1473.7 vs 1934.1 for males, and 909.9 vs 1414.7 for females for all-cause mortality). This was true for all age groups. By contrast, elderly blacks living in black areas, despite their less favorable socioeconomic status, had lower mortality rates for all-cause, total cardiovascular disease, and coronary heart disease, than did those living in white areas, even after adjusting for available socioeconomic variables. Racial segregation in residence is independently associated with mortality. Within racially segregated areas, members of the dominant group, for all age, among whites and elderly blacks, enjoy outcomes superior both to members of the minority racial group of their community, and to members of the same race residing in other areas, where they are in the minority, independent of socio-economic status. abstract_id: PUBMED:25863724 Living arrangements as determinants of myocardial infarction incidence and survival: A prospective register study of over 300,000 Finnish men and women. Living with a spouse is associated with a reduced risk of coronary heart disease mortality in middle age, but it remains unclear whether marriage and other living arrangements are important both for the development of the disease and the survival following incidence. Cohabitation and living alone have also become more common in many Western societies and thus warrant further study. We explored the association between living arrangements and myocardial infarction (MI) incidence and fatality. We used a population-based register sample of adults aged 40-60 in Finland in 1995 (n = 302,885) followed up until the end of 2007. MI incidence and mortality were identified from hospital discharge records and cause of death register (5917 incident cases in men and 1632 in women). Living with a marital partner was contrasted to three alternatives: cohabiting with non-marital partner, co-residence with persons other than a partner and living alone. MI incidence and long-term fatality were analysed with Cox proportional hazards regression with time-varying covariates and first-day fatality with logistic regression. 
Men who were married had a lower risk of MI incidence even after adjusting for socioeconomic factors - i.e. education, occupation, income, wealth and employment status - with small differences between the other living arrangement groups. For women the effects of living arrangements on incidence were fully explained by the same socioeconomic factors. However, our findings revealed that living arrangements were strong determinants for survival after MI independent of other socio-demographic factors. The results demonstrate greater fatality associated with living alone in men and suggest that cohabitation in midlife may be associated with a greater fatality risk in women. The social support and control offered by a marital relationship may protect from MI fatality in particular. Answer: The relationship between socioeconomic characteristics of the neighborhood of residence and the incidence of coronary heart disease (CHD) and all-cause mortality in older British men has been examined in a prospective study of 5049 men. The study found that when men were divided into fifths according to increasing neighborhood deprivation score, CHD incidences were higher in the more deprived neighborhoods. However, when individual sociodemographic status measures such as car ownership, housing, longest held occupation, marital status, and social networks were accounted for, the relationship between neighborhood deprivation and CHD incidence was no longer statistically significant. This suggests that individual measures of sociodemographic status may be more important predictors of CHD incidence than neighborhood deprivation alone in this population (PUBMED:18277181). In summary, the study provides little evidence of an independent relationship between neighborhood deprivation and CHD incidence once individual measures of sociodemographic status have been adjusted for in older British men.
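The attenuation described above (a rate ratio of 1.55 falling to a non-significant 1.22 after individual-level adjustment) corresponds to comparing an unadjusted with an adjusted time-to-event model. The sketch below is a hypothetical illustration in Python using the lifelines package; the file name, column names, and the coding of deprivation as a single numeric score are all assumptions, not details taken from PUBMED:18277181, and a Poisson rate model would be an equally reasonable choice for the reported rate ratios.

# Hypothetical sketch of an unadjusted vs adjusted Cox model for CHD incidence.
# Data file and column names are invented for illustration only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("chd_cohort.csv")  # one row per man: follow-up, event, covariates

base_cols = ["followup_years", "chd_event"]
unadjusted = CoxPHFitter().fit(
    df[base_cols + ["deprivation_score"]],
    duration_col="followup_years", event_col="chd_event")

adjusted = CoxPHFitter().fit(
    df[base_cols + ["deprivation_score", "car_ownership", "housing_tenure",
                    "manual_occupation", "married", "social_network_score"]],
    duration_col="followup_years", event_col="chd_event")

# The hazard ratio for deprivation_score shrinking toward 1 in the adjusted
# model mirrors the reported attenuation from 1.55 to a non-significant 1.22.
print(unadjusted.summary.loc["deprivation_score", "exp(coef)"])
print(adjusted.summary.loc["deprivation_score", "exp(coef)"])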
Instruction: The smooth muscle ratio at the renal pelvis in adults: does it predict surgical outcome? Abstracts: abstract_id: PUBMED:15539845 The smooth muscle ratio at the renal pelvis in adults: does it predict surgical outcome? Introduction: Patients with ureteropelvic junction obstruction occasionally remain undiagnosed until adulthood. There are no objective criteria to predict the results of pyeloplasty in adult patients. We have evaluated the results of pyeloplasty in adult patients and investigated whether these results are correlated with the histopathology of the surgical specimen. Materials And Methods: Histological sections from 26 patients with ureteropelvic junction obstruction were analyzed. Their mean age was 25.1 years. Diethylenetriamine-pentaacetic acid scans were used to determine the degree of renal obstruction and postoperative healing. Improvement in diuretic renography was defined as a greater than 20% decrease in the half-time of the preoperative value. To examine smooth muscle and collagen tissue, sections were stained using Masson's trichrome. Smooth muscle ratio was identified by color image analysis. Results: Eighteen patients (69.2%) fulfilled the criteria of healing. Patients with an improved scan had a mean smooth muscle percent (SMP) of 1.85 ± 0.87, while subjects with no significant change in their diuretic scans had a mean SMP of 0.36 ± 0.03 (p = 0.001). There was a strong correlation between the SMP and the improvement. Conclusions: Adult pyeloplasty was found successful in about 70% of the cases. The SMP of the renal pelvis seems to be helpful in predicting the surgical outcome. abstract_id: PUBMED:21831482 Thickness of the renal pelvis smooth muscle indicates the postoperative course of ureteropelvic junction obstruction treatment. Objective: To investigate the relationship between the histopathologic findings and the postoperative course of children surgically treated for ureteropelvic junction (UPJ) obstruction. Material And Methods: Twenty-eight patients operated for unilateral UPJ obstruction from 1998 to 2005 with adequate histopathologic specimens and postoperative follow up were retrospectively reviewed. Specimens were stained using elastic van Gieson to differentiate smooth muscle from collagen and elastin. Postoperative follow up included renal ultrasound (U/S) and diuretic renogram studies. Results: Twelve patients with mean renal pelvis smooth muscle thickness (mRPSMT) of 136.97 ± 34.17 improved on the 6(th) postoperative month. Nine patients that improved after 9 months postoperatively had mRPSMT = 173.61 ± 33.91. The remaining 7 patients that improved on the 12(th) postoperative month had mRPSMT = 258.78 ± 96.09. Correlation between renal pelvis smooth muscle and time of postoperative improvement was extremely significant (r = 0.7928, p < 0.0001). Conclusion: The thickness of the renal pelvis smooth muscle is significantly correlated to the postoperative course of patients with UPJ obstruction and can be used as a prognostic tool for the onset of their improvement.
PDGFRα+ cells present in adventitial and urothelial layers of murine renal pelvis do not express smooth muscle myosin heavy chain (smMHC) but are in close apposition to nerve fibres. Most c-Kit+ cells in the renal pelvis are mast cells. Mast cells (CD117+ /CD45+ ) are more abundant in the proximal renal pelvis and pelvis-kidney junction regions whereas c-Kit+ interstitial cells (CD117+ /CD45- ) are found predominantly in the distal renal pelvis and ureteropelvic junction. PDGFRα+ cells are distinct from c-Kit+ interstitial cells. A subset of PDGFRα+ cells express the Ca2+ -activated Cl- channel, anoctamin-1, across the entire renal pelvis. Spontaneous Ca2+ transients were observed in c-Kit+ interstitial cells, smMHC+ PDGFRα cells and smMHC- PDGFRα cells using mice expressing genetically encoded Ca2+ sensors. Abstract: Rhythmic contractions of the renal pelvis transport urine from the kidneys into the ureter. Specialized pacemaker cells, termed atypical smooth muscle cells (ASMCs), are thought to drive the peristaltic contractions of typical smooth muscle cells (TSMCs) in the renal pelvis. Interstitial cells (ICs) in close proximity to ASMCs and TSMCs have been described, but the role of these cells is poorly understood. The presence and distributions of platelet-derived growth factor receptor-α+ (PDGFRα+ ) ICs in the pelvis-kidney junction (PKJ) and distal renal pelvis were evaluated. We found PDGFRα+ ICs in the adventitial layers of the pelvis, the muscle layer of the PKJ and the adventitia of the distal pelvis. PDGFRα+ ICs were distinct from c-Kit+ ICs in the renal pelvis. c-Kit+ ICs are a minor population of ICs in murine renal pelvis. The majority of c-Kit+ cells were mast cells. PDGFRα+ cells in the PKJ co-expressed smooth muscle myosin heavy chain (smMHC) and several other smooth muscle gene transcripts, indicating these cells are ASMCs, and PDGFRα is a novel biomarker for ASMCs. PDGFRα+ cells also express Ano1, which encodes a Ca2+ -activated Cl- conductance that serves as a primary pacemaker conductance in ICs of the GI tract. Spontaneous Ca2+ transients were observed in c-Kit+ ICs, smMHC+ PDGFRα cells and smMHC- PDGFRα cells using genetically encoded Ca2+ sensors. A reporter strain of mice with enhanced green fluorescent protein driven by the endogenous promotor for Pdgfra was shown to be a powerful new tool for isolating and characterizing the phenotype and functions of these cells in the renal pelvis. abstract_id: PUBMED:2260681 Electrical properties of smooth muscle cell membrane in renal pelvis of rabbits. Intracellular recordings were made to study the electrical properties of smooth muscle cells in the rabbit renal pelvis. The muscle cells exhibited spontaneous oscillation in the membrane potential (slow wave). The slow waves were regular and were resistant to tetrodotoxin and sympathomimetic or parasympathomimetic antagonists, findings indicative of myogenic activity. The membrane was depolarized by an increase in extracellular concentration of K+ ([K+]o), decrease in [Na+]o, inhibition of the electrogenic Na(+)-K+ pump by ouabain or K(+)-free solution, and the application of norepinephrine (NE, greater than 10(-6) M). The maximum slope of the membrane depolarization produced by a 10-fold increase in [K+]o was approximately 48 mV. Reductions in [Ca2+]o inhibited the generation of slow waves with no marked change in the membrane potential. 
Depolarizations produced by any given method increased the frequency and decreased the amplitude of the slow wave, and NE had the most potent accelerating action on the frequency. Hyperpolarization of the membrane by 1-5 mV with extracellularly applied current stimuli reduced the frequency, and a strong hyperpolarization (greater than 5 mV) blocked the generation of slow waves. Electrophysiological properties of the slow waves obtained with tissues of the renal pelvis and intestinal smooth muscles were compared. abstract_id: PUBMED:3944918 Characteristics of spontaneous contraction and effects of isoproterenol on contractility in isolated rabbit renal pelvic smooth muscle strips. The contractile characteristics and the contractile responses to isoproterenol were examined by measurement of the isometric force in three types of smooth muscle strips obtained from the upper and lower parts of the renal pelvis and longitudinal strips encompassing the entire length of the renal pelvis. Regular rhythmic contraction was recorded from the upper renal pelvic strip with a frequency of 8.4 ± 0.9 times per minute and from the lower renal pelvic strip with a frequency of 2.3 ± 0.4 times per minute. The contractile frequency in the whole pelvic strip encompassing the entire longitudinal length of the renal pelvis was found to be dependent on that in the upper renal pelvic strip. Isoproterenol caused an increase of contractile force in the upper pelvic strip and a decrease in the lower pelvic and whole pelvic strips. The results seem to suggest that the smooth muscle of the upper part of the renal pelvis is distinct from other smooth muscles and resembles heart muscle in its contractile response to isoproterenol. abstract_id: PUBMED:24718010 Leiomyoma of renal pelvis. Leiomyoma is a benign tumour of smooth muscle origin that can affect many organs, especially the kidneys. In the kidney, it is mostly found in the renal capsule and only rarely involves the renal pelvis. It is mostly found in middle-aged women. A 25-year-old man who presented with hematuria secondary to a histologically proven leiomyoma of the left renal pelvis with unusual clinical features underwent minimally invasive surgical management. abstract_id: PUBMED:7202455 Ultrastructure of the urinary tract muscle coat in man. Calices, renal pelvis, pelvi-ureteric junction and ureter. Muscle coat specimens from human calices, renal pelvis, pelvi-ureteric junction, upper, middle and lower ureter segments were examined under an electron microscope. These specimens were taken from 8 patients who had undergone nephroureterectomy: 6 for localized renal carcinoma and 2 for papillary tumor of the pelvis. Two types of smooth muscle cells were observed, "typical" muscle cells and "special" muscle cells. The latter are rich in agranular endoplasmic reticulum, have few myofilaments and are interconnected in numerous, extended, peculiar contact areas. The ratio between these two types of muscle cells, as well as their innervation, differs between the various segments examined. On the basis of our findings we propose that the "special" muscle cells perform a "pacemaking" function. abstract_id: PUBMED:11173555 Properties of spontaneous electrical activity in smooth muscle of the guinea-pig renal pelvis. In the guinea-pig renal pelvis, most smooth muscle cells examined (>90%), using a conventional microelectrode, had a resting membrane potential of about -50 mV and produced spontaneous action potentials with initial fast spikes and following plateau potentials.
The remainder (<10%) had a resting membrane potential of about -40 mV and produced periodical depolarization with slow rising and falling phases. Experiments were carried out to investigate the properties of spontaneous action potentials. The potentials were abolished by nifedipine, suggesting a possible contribution of voltage-gated Ca(2+) channels to the generation of these potentials. Niflumic acid and 4,4'-diisothiocyanostilbene-2,2'-disulfonic acid (DIDS), inhibitors of Ca(2+)-activated Cl(-) channels, showed different effects on the spontaneous action potentials; the former but not the latter inhibited the activities, raising the question of an involvement of Cl(-) channels in the generation of these activities. Depleting internal Ca(2+) stores directly with caffeine or indirectly by inhibiting Ca(2+)-ATPase at the internal membrane with cyclopiazonic acid (CPA) prevented the generation of spontaneous activity. Chelating intracellular Ca(2+) by 1,2-bis(2-aminophenoxy)ethane-N,N,N',N'-tetraacetic acid (BAPTA) increased the amplitude of the spike component of spontaneous activity. Indomethacin inhibited the spontaneous activity, whereas prostaglandin F(2 alpha) enhanced it. The results indicate that in smooth muscle of the renal pelvis, the generation of spontaneous activity is causally related to the activation of voltage-gated Ca(2+) channels through which the influx of Ca(2+) may trigger the release of Ca(2+) from the internal stores to activate a set of ion channels at the membrane. Endogenous prostaglandins may be involved in the initiation of spontaneous activity. abstract_id: PUBMED:15501701 Vasopressin excitatory action on smooth muscle from human renal calyx and pelvis. The motor response to vasopressin, a neuropeptide promoting the reabsorption of water, was isometrically investigated in vitro in human renal calyces and pelvis in relation to possible modulation of urinary flow by these tubular structures. Kidneys were obtained from nine male patients who underwent nephrectomy for either renal or ureteral cancer. Minor calyces and pelvis were carefully removed. Strips (10 mm x 3 mm) were cut from the infundibular region of minor calyces and from the renal pelvis and placed in a 10 ml organ bath for isometric tension recordings. Calyceal and pelvic smooth muscle strips exhibited spontaneous phasic contractions which occurred with regular frequency and amplitude. Vasopressin induced a dose-dependent [10(-10) to 10(-6) M] enhancement of basal tone (P < 0.01) and a decrease of spontaneous contractions on isolated strips from minor calyces and pelvis. The effect of vasopressin was inhibited by prior administration of D(CH2)5Tyr(Me)2-Arg8-Vasopressin antagonist [10(-7) M]. The excitatory response to vasopressin was Tetrodotoxin [TTX]-resistant and was not affected by pre-treatment with phentolamine [10(-5) M], atropine [10(-5) M], and hexamethonium [10(-5) M]. After incubation of the specimens in Ca2+-free medium containing EGTA [0.5 mM] or after treatment with nifedipine [10(-5) M], both spontaneous and vasopressin-induced contractions [10(-10) to 10(-6) M] were completely inhibited in all specimens. Our results can be interpreted to imply that the tonic contractions induced by vasopressin facilitate the reabsorption of water by increasing the hydraulic resistance of the tubular structures below collecting ducts.
Under normal conditions, following the passage of urine from the collecting duct, the urine is stored briefly in the renal pelvis before being transported through the ureter to the bladder where the urine is stored for a longer time (hours) before being voided through the urethra. The transport of urine from the renal pelvis to the bladder occurs spontaneously due to contractions of the muscles in the wall of the pelvis and ureter. Spontaneous contractions also occur in the detrusor muscle and are responsible for maintaining the bladder shape during the filling phase. These muscle contractions occur as result of electrical impulses, which are generated and propagated through different parts of the urinary tract. The renal pelvis and the ureter differ from the bladder in relation to the origin, characteristics and propagation of these electrical impulses. In the ureter, the electrical impulses originate mainly at the proximal region of the renal pelvis and are transmitted antegradely down the length of the ureter. The electrical impulses in the bladder, on the other hand, originate at any location in the bladder wall and can be transmitted in different directions with the axial direction being the prominent one. In this manuscript, an overview of the current state of research on the origin and propagation characteristics of these electrical impulses in the normal and pathological conditions is provided. Answer: The smooth muscle ratio (SMP) at the renal pelvis in adults does appear to predict surgical outcomes, particularly in the context of ureteropelvic junction obstruction. A study analyzing histological sections from 26 adult patients with ureteropelvic junction obstruction found that those with an improved postoperative diuretic renogram had a significantly higher mean SMP (1.85±0.87) compared to those with no significant change in their scans (0.36±0.03, p=0.001). This suggests a strong correlation between the SMP and surgical improvement, indicating that a higher proportion of smooth muscle in the renal pelvis is associated with better outcomes following pyeloplasty. The study concluded that the SMP of the renal pelvis could be helpful in predicting the surgical outcome in adult patients undergoing this procedure (PUBMED:15539845). Furthermore, another study investigating the postoperative course of children surgically treated for ureteropelvic junction obstruction also found a significant correlation between the thickness of the renal pelvis smooth muscle and the time of postoperative improvement. Patients with a mean renal pelvis smooth muscle thickness (mRPSMT) of 136.97 ± 34.17 micrometers improved by the 6th postoperative month, while those with mRPSMT of 258.78 ± 96.09 micrometers improved by the 12th postoperative month. The correlation between renal pelvis smooth muscle thickness and the time of postoperative improvement was extremely significant (r = 0.7928, p < 0.0001), indicating that the thickness of the renal pelvis smooth muscle is a prognostic tool for the onset of improvement in patients with UPJ obstruction (PUBMED:21831482). These findings suggest that the smooth muscle content and thickness in the renal pelvis are important factors that can be used to predict the success of surgical intervention for ureteropelvic junction obstruction.
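As a worked illustration of the criterion used in PUBMED:15539845 (improvement defined as a greater than 20% decrease in diuretic renography half-time relative to the preoperative value), the short Python sketch below applies that definition and then compares SMP between improved and non-improved kidneys. All numbers and the choice of a Mann-Whitney test are assumptions for illustration; the abstract reports only the group means and p = 0.001.

# Hypothetical data: (preoperative t1/2 in minutes, postoperative t1/2, SMP).
from scipy.stats import mannwhitneyu

def improved(t_half_pre, t_half_post, threshold=0.20):
    # True if the half-time fell by more than 20% of its preoperative value.
    return (t_half_pre - t_half_post) / t_half_pre > threshold

patients = [(32.0, 14.0, 2.1), (28.0, 25.0, 0.4),
            (40.0, 18.0, 1.6), (35.0, 33.0, 0.3)]
smp_improved = [smp for pre, post, smp in patients if improved(pre, post)]
smp_unchanged = [smp for pre, post, smp in patients if not improved(pre, post)]
stat, p = mannwhitneyu(smp_improved, smp_unchanged, alternative="two-sided")
print(smp_improved, smp_unchanged, round(p, 3))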
Instruction: Management of extremity soft tissue sarcomas with limb-sparing surgery and postoperative irradiation: do total dose, overall treatment time, and the surgery-radiotherapy interval impact on local control? Abstracts: abstract_id: PUBMED:7607971 Management of extremity soft tissue sarcomas with limb-sparing surgery and postoperative irradiation: do total dose, overall treatment time, and the surgery-radiotherapy interval impact on local control? Purpose: To evaluate potential prognostic factors in the treatment of extremity soft tissue sarcomas that may influence local control, distant metastases, and overall survival. Methods And Materials: Sixty-seven patients with extremity soft tissue sarcomas were treated with curative intent by limb-sparing surgery and postoperative radiation therapy at the Fox Chase Cancer Center or the Hospital of the University of Pennsylvania, between October 1970 and March 1991. Follow-up ranged from 4-218 months. The median external beam dose was 60.4 Gy. In 13 patients, interstitial brachytherapy was used as a component of treatment. Results: The 5-year local control rate for all patients was 87%. The 5-year local control rate for patients who received < or = 62.5 Gy was 78% compared to 95% for patients who received > 62.5 Gy. Patients who received > 62.5 Gy had larger tumors (p = 0.008) and a higher percentage of Grade 3 tumors and positive margins than patients who received < or = 62.5 Gy. The 5-year local control rate for patients with negative or close margins was 100% vs. 56% in patients with positive margins (p = 0.002). Cox proportional hazards regression analysis was performed using the following variables as covariates: tumor dose, overall treatment time, interval from surgery to initiation of radiation therapy, margin status, grade, and tumor size. Total dose (p = 0.04) and margin status (p = 0.02) were found to significantly influence local control. Only tumor size significantly influenced distant metastasis (p = 0.01) or survival (p = 0.03). Conclusion: Postoperative radiation therapy doses > 62.5 Gy were noted to significantly improve local control in patients with extremity soft tissue sarcomas. This is the first analysis in the literature to demonstrate the independent influence of total dose on local control of extremity soft tissue sarcomas treated with adjuvant postoperative irradiation. abstract_id: PUBMED:28717856 Intraoperative radiotherapy for extremity soft-tissue sarcomas: can long-term local control be achieved? Background: Intraoperative electron-beam radiation therapy (IOERT) during limb-sparing surgery has the advantage of delivering a single high boost dose to sarcoma residues and surgical bed area near to radiosensitive structures with limited toxicity. Retrospective studies have suggested that IOERT may improve local control compared to standard radiotherapy, and we aimed to demonstrate this theory. Therefore, we performed an observational prospective study to determine (1) if it is possible to achieve high local control by adding IOERT to external-beam radiation therapy (EBRT) in extremity soft-tissue sarcomas (STS), (2) if it is possible to improve long-term survival rates, and (3) if toxicity could be reduced with IOERT. Materials And Methods: From 1995-2003, 39 patients with extremity STS were treated with IOERT and postoperative radiotherapy. The median follow-up time was 13.2 years (0.7-19). Complications, locoregional control and survival rates were collected. Results: Actuarial local control was attained in 32 of 39 patients (82%).
Control was achieved in 88% of patients with primary disease and in 50% of those with recurrent tumors (p = 0.01). Local control was shown in 93% of patients with negative margins and in 50% of those with positive margins (p = 0.002). Limb-sparing was achieved in 32 patients (82%). The overall survival rate was 64%. 13% of patients had grade ≥3 acute toxicity, and 12% developed grade ≥3 chronic toxicity. Conclusion: IOERT used as a boost to EBRT provides high local control and limb-sparing rates in patients with STS of the extremities, with less toxicity than EBRT alone. abstract_id: PUBMED:38225730 Radiotherapy versus limb-sparing surgery alone in low-grade soft-tissue sarcoma of the extremity and trunk wall: a systematic review and meta-analysis. Current guidelines recommend the use of radiotherapy in the management of intermediate and high-grade soft-tissue sarcoma of the extremity and trunk wall. Its use in low-grade sarcoma is less clear. To date there have been no pooled data analyses regarding its role in this context. Its use is not without complications and therefore must be justified. We aim to assess the oncological impact of radiotherapy versus limb-sparing surgery alone in this subset of sarcoma. Medline, EMBASE and Cochrane's databases were searched from 1982 to present. Studies on or having a subgroup analysis of low-grade soft tissue sarcoma, with a radiotherapy and a surgery only arm were included. Outcomes included local recurrence and overall survival. Patients were at least 16 years of age with primary de-novo sarcoma who had not undergone prior resection or treatment. Those undergoing concomitant therapy were excluded. Data extraction was performed independently by two reviewers. Results were pooled using a random-effects model and presented as a forest plot. Primary outcome measures included local recurrence and overall survival. Eleven unique studies were included, consisting of two RCTs and nine non-randomized studies. Overall, there were 12 799 patients. Four studies were included in meta-analysis and the overall pooled effect showed a limited role of radiotherapy in overall survival outcomes when compared to limb-sparing surgery alone HR 1.00 [0.83-1.20] P = 0.41. Descriptive analysis suggests there is limited role of radiotherapy in improving local recurrence outcomes. This study suggests there is limited role for radiotherapy versus limb-sparing surgery alone in low-grade soft-tissue sarcoma. These findings strongly suggest there is lack of high-quality data and that further research must be undertaken prior to forming any strong conclusions regarding the management of low-grade soft-tissue sarcoma. Demonstrating a role for radiotherapy may help improve the quality of excisional margins and thus potentiate limb-sparing surgery. abstract_id: PUBMED:22934557 Long-term clinical outcome of patients with soft tissue sarcomas treated with limb-sparing surgery and postoperative radiotherapy. Background: To evaluate long-term local control, survival, radiation side effects and functional outcome after limb-sparing surgery followed by postoperative radiotherapy (RT) for soft tissue sarcoma (STS). Material And Methods: Between 1995 and 2010, 118 patients with STS of an extremity were treated with limb-sparing surgery and postoperative RT. Follow-up was complete for all patients. Acute and late radiation related toxicities were scored using CTCAE v4.0. Results: Median follow-up was 93 months. RT dose was 60 Gy in 92.4% of the patients; 5.1% received 66 Gy; 2.5% 50-56 Gy. 
Actuarial local recurrence rates at five and 10 years were 9% and 12%. Five- and 10-year overall survival rates were 69% and 51%. Acute radiation toxicities occurred in 91% of the patients; 19% were grade 3, 2% grade 4. Late radiation toxicities were reported in 71% of the patients: 50% grade 1, 18% grade 2, and 3% grade 3. Limb and joint function after treatment were good, 19% having mild limitation of motion, 1.5% moderate, and 2.5% severe limitations. Conclusion: Limb-sparing surgery with 60 Gy postoperative radiotherapy for patients with STS provides excellent local control and high survival rates with acceptable toxicity and functional outcomes. abstract_id: PUBMED:22984684 Treatment outcome of conservative surgery plus postoperative radiotherapy for extremity soft tissue sarcoma. Purpose: To evaluate the treatment outcome and prognostic factor of postoperative radiotherapy for extremity soft tissue sarcoma (STS). Materials And Methods: Forty-three patients with extremity STS were treated with conservative surgery and postoperative radiotherapy from January 1981 to December 2010 at Korea University Medical Center. A median total dose of 60 Gy (range, 50 to 74.4 Gy) of radiation was delivered, and 7 patients were treated with chemotherapy. Results: The median follow-up period was 70 months (range, 5 to 302 months). Twelve patients (27.9%) sustained relapse of their disease. Local recurrence occurred in 3 patients (7.0%) and distant metastases developed in 10 patients (23.3%). The 5-year overall survival (OS) was 69.2% and disease-free survival was 67.9%. The 5-year local relapse-free survival was 90.7% and distant relapse-free survival was 73.3%. On univariate analysis, no significant prognostic factors were associated with development of local recurrence. Histologic grade (p = 0.005) and stage (p = 0.02) influenced the development of distant metastases. Histologic grade was the only significant prognostic factor for OS on univariate and multivariate analysis. Severe acute treatment-related complications, Common Terminology Criteria for Adverse Events (CTCAE) grade 3 or 4, developed in 6 patients (14.0%) and severe late complications in 2 patients (4.7%). Conclusion: Conservative surgery with postoperative radiotherapy achieved a satisfactory rate of local control with an acceptable complication rate in extremity STS. Most failures were distant metastases that correlate with tumor grade and stage. The majority of local recurrences developed within the field. Selective dose escalation of radiotherapy or development of effective systemic treatment might be considered. abstract_id: PUBMED:7833102 Limb-sparing therapy of extremity soft tissue sarcomas: treatment outcome and long-term functional results. The purpose of this study is to assess the long-term success rate and functional results of limb-sparing therapy in a group of 156 patients with soft tissue sarcomas of the extremities in the Netherlands Cancer Institute, treated according to a standard protocol of surgery and radiotherapy, if indicated. The patients (79 females and 77 males) were treated between 1977 and 1983 by an intended wide local excision with a margin of at least 2 cm. Postoperative radiotherapy was applied in 117 patients; 26 patients had surgery only, including 13 patients who had to be treated by amputation. The total dose was 60 Gy, with 40 Gy to a large volume and a boost of 20 Gy to the tumour bed at 2 Gy per fraction, five fractions per week. Most sarcomas were located in the proximal part of the lower extremity (51%).
The group comprised 50 liposarcomas, 47 malignant fibrous histiocytomas (MFH) and 59 other histologies; 69 (44%) had high-grade tumours. Three treatment groups with limb-sparing treatment were defined: group I (n = 26), patients who had a complete excision and received no further treatment; group II (n = 64), with narrow surgical margins and radiotherapy; and group III (n = 53), with incomplete resection and radiotherapy. The 10-year actuarial overall survival and local control rates for all patients were 63 and 81%, respectively. Multivariate analysis showed that histological grade (P < 0.0001), age (P = 0.0005) and location deep to the fascia (P = 0.0008) were independent prognostic factors for survival, while local control was predicted by grade (P = 0.0014) and treatment group (p = 0.028). Patients with surgery only (group I) had 81% 5-year local control as compared to 92% with radiotherapy after narrow surgery (group II) and 74% with incomplete surgery and radiotherapy (group III). Limb preservation, when attempted, was achieved in 90% of the patients. After limb-sparing treatment, 7% had severe impairment of mobility, 3% had lymph oedema and 16% marked fibrosis. Fractures in the irradiated bone occurred in 6% of the patients. The combination of limited surgery followed by radiotherapy resulted in a high local control rate with good functional results. Ultimately, limb-sparing treatment was successful in 83% of all patients with extremity sarcomas. abstract_id: PUBMED:35803098 Limb-sparing surgery with latissimus dorsi flap reconstruction in extremity soft tissue sarcoma: Case series. Introduction And Importance: Function-preserving and limb-sparing surgery is now the accepted gold standard of care for extremity soft tissue sarcoma (ESTS), the goal of surgery for STS of the extremities being local tumor control with minimal morbidity. Limb-sparing surgery with post-operative radiotherapy for STS results in high survival rates and local control. Adjuvant radiochemotherapy might reduce distant and local recurrence in high-risk patients. Hence, we aim to present how to achieve local tumor control and minimal morbidity for high-grade ESTS by conducting limb-sparing surgery combined with appropriate reconstruction and radiochemotherapy. Case Presentation: We present 2 cases of high-grade sarcoma that underwent limb-sparing surgery with latissimus dorsi (LD) flap reconstructions. Wide excisions were completed with limb-sparing surgeries for both cases, with free surgical margins and LD flap reconstructions. There were no post-operative complications. Follow-up examination revealed normal function of the arm. The first patient was still in remission after 2 years of follow-up. The second patient developed pulmonary metastasis after complete resection and adjuvant radiotherapy. Clinical Discussion: Limb-sparing surgery with LD flap reconstruction is able to remove the tumor completely with a negative margin, the primary objective. The secondary objectives of minimizing morbidity, maximizing postoperative function, and achieving the best cosmetic result were also achieved. The LD flap is generally easy to harvest and provides large tissue coverage for reconstruction after surgery. Conclusion: Limb-sparing surgery followed by soft-tissue reconstruction and radiochemotherapy is suitable for improving oncologic outcome, tumor control, and limb preservation. However, prevention of local recurrence and distant metastases is not guaranteed.
abstract_id: PUBMED:28229172 External-beam radiation therapy combined with limb-sparing surgery in elderly patients (>70 years) with primary soft tissue sarcomas of the extremities: A retrospective analysis. Purpose: To report our experience with EBRT combined with limb-sparing surgery in elderly patients (>70 years) with primary extremity soft tissue sarcomas (STS). Methods: Retrospectively analyzed were 35 patients (m:f 18:17, median 78 years) who all presented in primary situation without nodal/distant metastases (Charlson score 0/1 in 18 patients; ≥2 in 17 patients). Median tumor size was 10 cm, mainly located in lower limb (83%). Stage at presentation (UICC 7th) was Ib:3%, 2a:20%, 2b:20%, and 3:57%. Most lesions were high grade (97%), predominantly leiomyosarcoma (26%) and undifferentiated pleomorphic/malignant fibrous histiocytoma (23%). Limb-sparing surgery was preceded (median 50 Gy) or followed (median 66 Gy) by EBRT. Results: Median follow-up was 37 months (range 1-128 months). Margins were free in 26 patients (74%) and microscopically positive in 9 (26%). Actuarial 3- and 5-year local control rates were 88 and 81% (4 local recurrences). Corresponding rates for distant control, disease-specific survival, and overall survival were 57/52%, 76/60%, and 72/41%. The 30-day mortality was 0%. Severe postoperative complications were scored in 8 patients (23%). Severe acute radiation-related toxicity was observed in 2 patients (6%). Patients with Charlson score ≥2 had a significantly increased risk for severe postoperative complications and acute radiation-related side effects. Severe late toxicities were found in 7 patients (20%), including fractures in 3 (8.6%). Final limb preservation rate was 97%. Conclusion: Combination of EBRT and limb-sparing surgery is feasible in elderly patients with acceptable toxicities and encouraging but slightly inferior outcome compared to younger patients. Comorbidity correlated with postoperative complications and acute toxicities. Late fracture risk seems slightly increased. abstract_id: PUBMED:36039442 Clinical Outcomes of Limb-sparing Tumor Surgery With Vascular Reconstruction for Bone and Soft-tissue Tumors. Background/aim: This study aimed to retrospectively investigate clinical outcomes after tumor resection surgery and discuss reconstruction methods and postoperative complications. Patients And Methods: We analyzed the clinical outcomes, such as graft survival and prognosis, of nine patients with bone and soft-tissue tumors of the extremities with major vascular invasion who underwent limb-sparing surgery with vascular reconstruction between January 2006 and December 2020. Results: The primary tumor was malignant in eight cases and intermediate in one case, with a mean postoperative follow-up duration of 52.1 months. A total of 10 vascular reconstructions (arterial in eight patients and both arterial and venous in one) were performed with autologous vein grafts in four cases and synthetic grafts in five cases. Graft occlusion was observed in two cases reconstructed with the great saphenous vein measuring >200 mm in length, and the 5-year arterial patency rate was 8/9. Only one case showed local recurrence, and at 5 years, local control was achieved in eight out of nine patients. Limb-sparing was achieved in all cases and the 5-year overall and disease-free survival rates were 77.8%.
Postoperative complications occurred in six patients and wound-related complications were improved by re-surgery, while the others were controlled by conservative treatment. Conclusion: Limb-sparing tumor resection surgery with vascular reconstruction has favorable clinical and oncological outcomes. Most postoperative complications related to this surgery can be controlled by conservative treatment, except for wound-related complications. In reconstructions with autologous vein grafts of a length exceeding 200 mm, the graft occlusion rate may increase, and synthetic grafts may be recommended. abstract_id: PUBMED:18612160 Impact of intensity-modulated radiation therapy on local control in primary soft-tissue sarcoma of the extremity. Purpose: One of the concerns about intensity-modulated radiation therapy (IMRT) is that its tight dose distribution, an advantage in reducing RT morbidity to surrounding normal structures, might compromise tumor coverage. The purpose of this study is to determine if such concern is warranted in soft-tissue sarcoma (STS) of the extremity. Methods: Between 02/02 and 05/05, 41 adult patients with primary STS of the extremity were treated with limb-sparing surgery and adjuvant IMRT. The margins were positive/within 1 mm in 21. Tumor size was more than 10 cm in 68% of patients and grade was high in 83%. Preoperative IMRT was given to 7 patients (50 Gy) and postoperative IMRT (median dose, 63 Gy) was given to 34 patients. Complete gross resection including periosteal stripping/bone resection was required in 11, and neurolysis/nerve resection in 24. Results: With a median follow-up time of 35 months, two (4.8%) of 41 patients developed local recurrence. The 5-year actuarial local control rate was 94% (95% CI, 86% to 100%). The local control rate was also 94% for patients with negative or positive/close margin. Other prognostic factors such as age, size, and grade did not impact local control either. The 5-year distant control rate was 61% (95% CI, 45% to 76%) and the overall survival rate was 64% (95% CI, 45% to 84%). Conclusion: IMRT in STS of the extremity provides excellent local control in a group of patients with high-risk features. This suggests that the precision with which IMRT dose is distributed has a beneficial effect in sparing normal tissue and improving local control. Answer: The management of extremity soft tissue sarcomas with limb-sparing surgery and postoperative irradiation involves several factors that may impact local control. The total dose of radiation therapy is one such factor. A study found that postoperative radiation therapy doses greater than 62.5 Gy significantly improved local control in patients with extremity soft tissue sarcomas (PUBMED:7607971). This suggests that higher total doses of radiation can be beneficial for local control. The overall treatment time and the interval between surgery and the initiation of radiation therapy are also important considerations. However, the abstracts provided do not offer specific insights into the impact of overall treatment time or the surgery-radiotherapy interval on local control. Instead, they focus on the outcomes of different radiation therapy strategies, such as intraoperative electron-beam radiation therapy (IOERT) (PUBMED:28717856), and the use of radiotherapy in low-grade soft tissue sarcoma (PUBMED:38225730).
Other studies have reported on the outcomes of limb-sparing surgery followed by postoperative radiotherapy, indicating satisfactory rates of local control and acceptable complication rates (PUBMED:22934557, PUBMED:22984684). One study specifically mentioned that the majority of local recurrences developed within the field, suggesting that the radiation field and dose are critical factors in achieving local control (PUBMED:22984684). In summary, while the total dose of radiation therapy is shown to influence local control in the treatment of extremity soft tissue sarcomas, the provided abstracts do not directly address the impact of overall treatment time or the surgery-radiotherapy interval on local control. Further research may be needed to clarify the roles of these factors in the management of these sarcomas.
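A note on the "actuarial" survival and local control figures quoted in the sarcoma abstracts above: none of the abstracts states which estimator was used, but such time-to-event rates are conventionally obtained with the Kaplan-Meier product-limit method (or the closely related life-table approach). A minimal sketch of the product-limit estimator, with d_i events among n_i patients still at risk at event time t_i, is:
S(t) = \prod_{i:\, t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)
Read this way, a "5-year actuarial local control rate of 94%" means S(5 years) = 0.94 under a censoring-aware estimate, not that 94% of all enrolled patients were actually followed recurrence-free for five years.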
Instruction: Toothbrushing: A Link Between Noncommunicable and Communicable Diseases? Abstracts: abstract_id: PUBMED:29463270 The unfunded priorities: an evaluation of priority setting for noncommunicable disease control in Uganda. Background: The double burden of infectious diseases coupled with noncommunicable diseases poses unique challenges for priority setting and for achieving equitable action to address the major causes of disease burden in health systems already impacted by limited resources. Noncommunicable disease control is an important global health and development priority. However, there are challenges for translating this global priority into local priorities and action. The aim of this study was to evaluate the influence of national, sub-national and global factors on priority setting for noncommunicable disease control in Uganda and examine the extent to which priority setting was successful. Methods: A mixed methods design that used the Kapiriri & Martin framework for evaluating priority setting in low income countries. The evaluation period was 2005-2015. Data collection included a document review (policy documents (n = 19); meeting minutes (n = 28)), media analysis (n = 114) and stakeholder interviews (n = 9). Data were analysed according to the Kapiriri & Martin (2010) framework. Results: Priority setting for noncommunicable diseases was not entirely fair nor successful. While there were explicit processes that incorporated relevant criteria, evidence and wide stakeholder involvement, these criteria were not used systematically or consistently in the contemplation of noncommunicable diseases. There were insufficient resources for noncommunicable diseases, despite being a priority area. There were weaknesses in the priority setting institutions, and insufficient mechanisms to ensure accountability for decision-making. Priority setting was influenced by the priorities of major stakeholders (i.e. development assistance partners) which were not always aligned with national priorities. There were major delays in the implementation of noncommunicable disease-related priorities and in many cases, a failure to implement. Conclusions: This evaluation revealed the challenges that low income countries are grappling with in prioritizing noncommunicable diseases in the context of a double disease burden with limited resources. Strengthening local capacity for priority setting would help to support the development of sustainable and implementable noncommunicable disease-related priorities. Global support (i.e. aid) to low income countries for noncommunicable diseases must also catch up to align with NCDs as a global health priority. abstract_id: PUBMED:33961498 The Role of Noncommunicable Diseases in the Pursuit of Global Health Security. Noncommunicable diseases and their risk factors are important for all aspects of outbreak preparedness and response, affecting a range of factors including host susceptibility, pathogen virulence, and health system capacity. This conceptual analysis has 2 objectives. First, we use the Haddon matrix paradigm to formulate a framework for assessing the relevance of noncommunicable diseases to health security efforts throughout all phases of the disaster life cycle: before, during, and after an event.
Second, we build upon this framework to identify 6 technical action areas in global health security programs that are opportune integration points for global health security and noncommunicable disease objectives: surveillance, workforce development, laboratory systems, immunization, risk communication, and sustainable financing. We discuss approaches to integration with the goal of maximizing the reach of global health security where infectious disease threats and chronic disease burdens overlap. abstract_id: PUBMED:33750391 Diagnostics and monitoring tools for noncommunicable diseases: a missing component in the global response. A key component of any health system is the capacity to accurately diagnose individuals. One of the six building blocks of a health system as defined by the World Health Organization (WHO) includes diagnostic tools. The WHO's Noncommunicable Disease Global Action Plan includes addressing the lack of diagnostics for noncommunicable diseases, through multi-stakeholder collaborations to develop new technologies that are affordable, safe, effective and quality controlled, and improving laboratory and diagnostic capacity and human resources. Many challenges exist beyond price and availability for the current tools included in the Package of Essential Noncommunicable Disease Interventions (PEN) for cardiovascular disease, diabetes and chronic respiratory diseases. These include temperature stability, adaptability to various settings (e.g. at high altitude), need for training in order to perform and interpret the test, the need for maintenance and calibration, and for Blood Glucose Meters non-compatible meters and test strips. To date the issues surrounding access to diagnostic and monitoring tools for noncommunicable diseases have not been addressed in much detail. The aim of this Commentary is to present the current landscape and challenges with regards to guidance from the WHO on diagnostic tools using the WHO REASSURED criteria, which define a set of key characteristics for diagnostic tests and tools. These criteria have been used for communicable diseases, but so far have not been used for noncommunicable diseases. Diagnostic tools have played an important role in addressing many communicable diseases, such as HIV, TB and neglected tropical diseases. Clearly more attention with regards to diagnostics for noncommunicable diseases as a key component of the health system is needed. abstract_id: PUBMED:29155655 Synergies between Communicable and Noncommunicable Disease Programs to Enhance Global Health Security. Noncommunicable diseases are the leading cause of death and disability worldwide. Initiatives that advance the prevention and control of noncommunicable diseases support the goals of global health security in several ways. First, in addressing health needs that typically require long-term care, these programs can strengthen health delivery and health monitoring systems, which can serve as necessary platforms for emergency preparedness in low-resource environments. Second, by improving population health, the programs might help to reduce susceptibility to infectious outbreaks. Finally, in aiming to reduce the economic burden associated with premature illness and death from noncommunicable diseases, these initiatives contribute to the objectives of international development, thereby helping to improve overall country capacity for emergency response. 
abstract_id: PUBMED:37764823 Association of Vitamin D Genetic Risk Score with Noncommunicable Diseases: A Systematic Review. Background and Aims: The genetic risk score (GRS) is an important tool for estimating the total genetic contribution or susceptibility to a certain outcome of interest in an individual, taking into account their genetic risk alleles. This study aims to systematically review the association between the GRS of low vitamin D and different noncommunicable diseases/markers. Methods: The article was first registered in PROSPERO CRD42023406929. PubMed and Embase were searched from the time of inception until March 2023 to capture all the literature related to the vitamin D genetic risk score (vD-GRS) in association with noncommunicable diseases. This was performed using comprehensive search terms including "Genetic Risk Score" OR "Genetics risk assessment" OR "Genome-wide risk score" AND "Vitamin D" OR 25(OH)D OR "25-hydroxyvitamin D". Results: Eleven eligible studies were included in this study. Three studies reported a significant association between vD-GRS and metabolic parameters, including body fat percentage, body mass index, glycated hemoglobin, and fasting blood glucose. Moreover, colorectal cancer overall mortality and the risk of developing atrial fibrillation were also found to be associated with genetically deprived vitamin D levels. Conclusions: This systematic review highlights the genetic contribution of low-vitamin-D-risk single nucleotide polymorphisms (SNPs) as an accumulative factor associated with different non-communicable diseases/markers, including cancer mortality and the risk of developing obesity, type 2 diabetes, and cardiovascular diseases such as atrial fibrillation. abstract_id: PUBMED:27886846 Are we facing a noncommunicable disease pandemic? The global boom in premature mortality and morbidity from noncommunicable diseases (NCDs) shares many similarities with pandemics of infectious diseases, yet public health professionals have resisted the adoption of this label. It is increasingly apparent that NCDs are actually communicable conditions, and although the vectors of disease are nontraditional, the pandemic label is apt. Arguing for a change in terminology extends beyond pedantry as the move carries serious implications for the public health community and the general public. Additional resources are unlocked once a disease reaches pandemic proportions and, as a long-neglected and underfunded group of conditions, NCDs desperately require a renewed sense of focus and political attention. This paper provides objections, definitions, and advantages to approaching the leading cause of global death through an alternative lens. A novel framework for managing NCDs is presented with reference to the traditional influenza pandemic response. abstract_id: PUBMED:35124814 Noncommunicable diseases and social determinants of health in Buddhist monks: An integrative review. The prevalence of noncommunicable diseases (NCDs) is increasing worldwide. Buddhist monks in Thailand play a critical role in health as community leaders accounting for 0.3% of the population. However, some monks require treatment and hospitalization to alleviate the burden of NCDs due to religious beliefs and practices during ordainment. Risk factors for NCDs among Buddhist monks, and the relationship to social determinants of health (SDH) remain unclear.
This integrative review examined the prevalence of NCDs and explored the relationship between SDH and health outcomes among Buddhist monks. Cohort, descriptive, and correlational studies published in both English and Thai languages were identified from the PubMed, Science Direct, CINAHL, and Thai journal databases. Keywords included "Thai Buddhist monks," "non-communicable diseases," and "prevalence". Twenty-two studies were selected. Obesity and hypertension were the most prevalent NCDs. Religious beliefs and practices influence SDH domains and play an important role in the lifestyle and health behaviors among Buddhist monks. Further understanding of the impact of the religious lifestyle is needed, particularly given the role and influence of monks in society. abstract_id: PUBMED:30633713 Earth Observation: Investigating Noncommunicable Diseases from Space. The United Nations has called on all nations to take immediate actions to fight noncommunicable diseases (NCDs), which have become an increasingly significant burden to public health systems around the world. NCDs tend to be more common in developed countries but are also becoming of growing concern in low- and middle-income countries. Earth observation (EO) technologies have been used in many infectious disease studies but have been less commonly employed in NCD studies. This review discusses the roles that EO data and technologies can play in NCD research, including ( a) integrating natural and built environment factors into NCD research, ( b) explaining individual-environment interactions, ( c) scaling up local studies and interventions, ( d) providing repeated measurements for longitudinal studies including cohorts, and ( e) advancing methodologies in NCD research. Such extensions hold great potential for overcoming the challenges of inaccurate and infrequent measurements of environmental exposure at the level of both the individual and the population, which is of great importance to NCD research, practice, and policy. abstract_id: PUBMED:33342333 Exploring the Influences of Hegemonic and Complicit Masculinity on Lifestyle Risk Factors for Noncommunicable Diseases Among Adult Men in Maseru, Lesotho. Masculinity is an important health determinant and has been studied as a risk factor for communicable diseases in the African context. This paper explores how hegemonic and complicit masculinities influence the lifestyle risk factors for noncommunicable diseases among men. A qualitative research method was used, where eight focus group discussions were conducted among adult men in Maseru, Lesotho. The data were analyzed using a thematic analysis approach. Although the participants typically described taking responsibility as a key feature of what it meant to be a man in Lesotho, their reported behaviors and rationales indicated that men commonly abdicated responsibility for their health to women. Participants were aware of the negative effects of smoking on health and acknowledged the difficulty to stop smoking due to the addictive nature of the habit. The initiation of smoking was linked by participants to the need to be seen as a man, and then maintained as a way of distinguishing themselves from the feminine. Regarding harmful alcohol consumption, participants reported that stress, particularly in their relationships with women, were linked to the need to drink, as they reported limited outlets for emotional expression for men in Lesotho. 
On the subject of poor diet, the study found that most men were aware of the importance of vegetable consumption; the perceived lengthy preparation process meant they typically depended on women for such healthy food preparation. Almost all participants were aware of the increased susceptibility to diverse negative health effects from physical inactivity, but because of the physical nature of the work, those engaged in traditionally masculine occupations did not exercise. In the context of lifestyle risk factors for noncommunicable diseases, masculinity has positive and negative impacts. It is important to design health education programs targeting men to successfully mitigate the negative health impacts of masculinity. abstract_id: PUBMED:22628917 Research priorities for prevention and control of noncommunicable diseases in India. India is undergoing a demographic and epidemiological transition which is influencing its health. Noncommunicable diseases (NCDs) are posing major health and development threats, while we are grappling with communicable diseases and maternal and child health-related issues. The major NCDs include cardiovascular diseases (including stroke), diabetes, cancer, chronic obstructive pulmonary diseases, mental health, and injuries. Tobacco, alcohol, diet, physical inactivity, high blood pressure, and obesity are the major risk factors common to many chronic diseases. Research on NCDs under the ICMR and through other institutions has resulted in the initiation of some national health programs such as National Cancer Control Program and District Mental Health Program. Important epidemiological descriptions have informed us on the causes and distribution of NCDs and their risk factors, including the non-health determinants (poverty, education, employment, etc) and health systems assessments, have shown the inadequacies in tackling NCDs. Several global efforts and publications have provided guidance in shaping the research agenda. The special UN NCD Summit held on 19-20 September 2011 brought the world leaders to deliberate on ways to address NCDs in a concerted manner through partnerships. In this paper the authors review the present status of NCDs and their risk factors in the country and propose a strategic research agenda to provide adequate thrust to accelerate research towards a useful outcome. Answer: The provided abstracts do not directly address the specific question of whether toothbrushing serves as a link between noncommunicable and communicable diseases. However, they do discuss various aspects of noncommunicable diseases (NCDs) and their impact on global health, as well as the factors that influence their prevalence and management. For instance, the abstract from PUBMED:29463270 discusses the challenges of setting priorities for NCD control in Uganda, highlighting the influence of national, sub-national, and global factors on priority setting and the lack of resources dedicated to NCDs despite their recognition as a priority area. The abstract from PUBMED:33961498 emphasizes the role of NCDs in global health security, suggesting that they affect aspects of outbreak preparedness and response, including host susceptibility and health system capacity. PUBMED:33750391 points out the lack of diagnostic tools for NCDs and the need for multi-stakeholder collaborations to develop new technologies that are affordable and effective. 
PUBMED:29155655 discusses how initiatives that advance the prevention and control of NCDs can strengthen health delivery systems and reduce susceptibility to infectious outbreaks, thereby enhancing global health security. PUBMED:37764823 reviews the association between vitamin D genetic risk score and various NCDs, indicating a genetic contribution to these diseases. PUBMED:27886846 argues for the recognition of NCDs as a pandemic, suggesting that they share similarities with infectious disease pandemics and require a renewed focus and political attention. PUBMED:35124814 examines the prevalence of NCDs among Buddhist monks and the relationship between social determinants of health and health outcomes in this population. PUBMED:30633713 discusses the potential of Earth observation technologies to contribute to NCD research by integrating environmental factors and scaling up studies and interventions. PUBMED:33342333 explores the influence of masculinity on lifestyle risk factors for NCDs among men in Lesotho, highlighting the impact of gender norms on health behaviors. Lastly, PUBMED:22628917 reviews the status of NCDs in India and proposes a strategic research agenda to accelerate research towards effective outcomes. While these abstracts provide valuable insights into the broader context of NCDs, they do not specifically address the link between toothbrushing and the interplay between communicable and noncommunicable diseases. Toothbrushing is generally associated with oral health and the prevention of dental caries and periodontal diseases, which are communicable conditions. However, oral health has also been linked to various NCDs, such as cardiovascular disease and diabetes, through mechanisms like inflammation and the spread of harmful bacteria from the mouth to other parts of the body (PUBMED:29155655). While the abstracts do not explicitly discuss toothbrushing, the connection between oral health and systemic health is well-established in the broader medical literature, suggesting that maintaining good oral hygiene practices like toothbrushing could potentially influence the risk factors for both communicable and noncommunicable diseases.
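On the vitamin D genetic risk score mentioned above (PUBMED:37764823): the abstract does not give the formula used by the underlying studies, but a GRS is conventionally constructed as a weighted allele count, a sketch of which is:
GRS = \sum_{i=1}^{k} w_i \, x_i
where x_i ∈ {0, 1, 2} is the number of risk alleles an individual carries at SNP i and w_i is the per-allele effect size (for example the log odds ratio from a discovery genome-wide association study); an unweighted score simply sets every w_i = 1. Higher scores indicate a greater genetic predisposition to the trait in question, here low vitamin D status.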
Instruction: Markers of glycemic control and insulin resistance in non-diabetic patients with Obstructive Sleep Apnea Hypopnea Syndrome: does adherence to CPAP treatment improve glycemic control? Abstracts: abstract_id: PUBMED:19231280 Markers of glycemic control and insulin resistance in non-diabetic patients with Obstructive Sleep Apnea Hypopnea Syndrome: does adherence to CPAP treatment improve glycemic control? Background And Aim: Obstructive Sleep Apnea Hypopnea Syndrome (OSAHS) is associated with glucose dysmetabolism and insulin resistance; therefore, the amelioration of breathing disturbances during sleep can allegedly modify the levels of markers of glucose regulation and insulin resistance, such as glycated hemoglobin, fasting glucose, insulin and HOMA(IR). The aim of this study was to explore the association between these parameters and sleep characteristics in non-diabetic OSAHS patients, as well as the effect of 6 months of CPAP therapy on these markers, according to adherence to CPAP treatment. Methods: Euglycemic patients (n=56; mean age±SD: 46.07±10.67 years) with newly diagnosed OSAHS were included. Glycated hemoglobin, fasting glucose, insulin levels and HOMA(IR) were estimated at baseline and 6 months after CPAP application. According to CPAP adherence, patients were classified as follows: group 1 (mean CPAP use ≥ 4 h/night), group 2 (mean CPAP use < 4 h/night) and group 3 (refused CPAP treatment), and comparisons of levels of the examined parameters were performed. Results: At baseline, average SpO(2) during sleep was negatively correlated with insulin levels and HOMA(IR) while minimum SpO(2) during sleep was also negatively correlated with insulin levels. After 6 months, only group 1 patients demonstrated a significant decrease in glycated hemoglobin (p=0.004) accompanied by a decrease in hs-CRP levels (p=0.002). No other statistically significant change was observed. Conclusions: Nighttime hypoxia can affect fasting insulin levels in non-diabetic OSAHS patients. Good adherence to long-term CPAP treatment can significantly reduce HbA(1C) levels, but has no effect on markers of insulin resistance. abstract_id: PUBMED:19110882 CPAP therapy of obstructive sleep apnea in type 2 diabetics improves glycemic control during sleep. Background: Type 2 diabetes and obstructive sleep apnea (OSA) are frequently comorbid conditions. OSA is associated with increased insulin resistance, but studies of continuous positive airway pressure (CPAP) have shown inconsistent effects on glycemic control. However, endpoints such as hemoglobin A1c and insulin sensitivity might not reflect short-term changes in glycemic control during sleep. Methods: We used a continuous glucose-monitoring system to measure interstitial glucose every 5 minutes during polysomnography in 20 patients with type 2 diabetes and newly diagnosed OSA. The measurements were repeated after an average of 41 days of CPAP (range 26-96 days). All patients were on a stable diet and medications. Each 30-second epoch of the polysomnogram was matched with a continuous glucose-monitoring system reading, and the sleeping glucose level was calculated as the average for all epochs scored as sleeping. Results: The mean sleeping glucose decreased from untreated (122.0 ± 61.7 mg/dL) to treated (102.9 ± 39.4 mg/dL; p = 0.03 by Wilcoxon paired rank test).
The sleeping glucose was more stable after treatment, with the median SD decreasing from 20.0 to 13.0 mg/dL (p = 0.005) and the mean difference between maximum and minimum values decreasing from 88 to 57 mg/dL (p = 0.003). The change in the mean hemoglobin A1c from 7.1% to 7.2% was not significant. Conclusions: Our study is limited by the lack of a control group, but the results suggest that sleeping glucose levels decrease and are more stable after patients with type 2 diabetes and OSA are treated with CPAP. abstract_id: PUBMED:26910598 Effect of Continuous Positive Airway Pressure on Glycemic Control in Patients with Obstructive Sleep Apnea and Type 2 Diabetes. A Randomized Clinical Trial. Rationale: Obstructive sleep apnea (OSA) is a risk factor for type 2 diabetes that adversely impacts glycemic control. However, there is little evidence about the effect of continuous positive airway pressure (CPAP) on glycemic control in patients with diabetes. Objectives: To assess the effect of CPAP on glycated hemoglobin (HbA1c) levels in patients with suboptimally controlled type 2 diabetes and OSA, and to identify its determinants. Methods: In a 6-month, open-label, parallel, and randomized clinical trial, 50 patients with OSA and type 2 diabetes and two HbA1c levels equal to or exceeding 6.5% were randomized to CPAP (n = 26) or no CPAP (control; n = 24), while their usual medication for diabetes remained unchanged. Measurements And Main Results: HbA1c levels, Homeostasis Model Assessment and Quantitative Insulin Sensitivity Check Index scores, systemic biomarkers, and health-related quality of life were measured at 3 and 6 months. After 6 months, the CPAP group achieved a greater decrease in HbA1c levels compared with the control group. Insulin resistance and sensitivity measurements (in noninsulin users) and serum levels of IL-1β, IL-6, and adiponectin also improved in the CPAP group compared with the control group after 6 months. In patients treated with CPAP, mean nocturnal oxygen saturation and baseline IL-1β were independently related to the 6-month change in HbA1c levels (r² = 0.510, P = 0.002). Conclusions: Among patients with suboptimally controlled type 2 diabetes and OSA, CPAP treatment for 6 months resulted in improved glycemic control and insulin resistance compared with results for a control group. Clinical trial registered with www.clinicaltrials.gov (NCT01801150). abstract_id: PUBMED:26315076 Effect of Continuous Positive Airway Pressure Therapy on Glycemic Excursions and Insulin Sensitivity in Patients with Obstructive Sleep Apnea-hypopnea Syndrome and Type 2 Diabetes. Background: For patients with obstructive sleep apnea-hypopnea syndrome (OSAHS) and type 2 diabetes mellitus (T2DM), the night sleep interruption and intermittent hypoxia due to apnea or hypopnea may induce glycemic excursions and reduce insulin sensitivity. This study aimed to investigate the effect of continuous positive airway pressure (CPAP) therapy in patients with OSAHS and T2DM. Methods: Continuous glucose monitoring system (CGMS) was used in 40 patients with T2DM and newly diagnosed OSAHS. The measurements were repeated after 30 days of CPAP treatment. Subsequently, insulin sensitivity and glycohemoglobin (HbA1c) were measured and compared to the pretreatment data. Results: After CPAP therapy, the CGMS indicators showed that the 24-h mean blood glucose (MBG) and the night time MBG were significantly reduced (P < 0.05 and P = 0.03, respectively).
The mean ambulatory glucose excursions (MAGEs) and the mean of daily differences were also significantly reduced (P < 0.05 and P = 0.002, respectively) compared to pretreatment levels. During the night, MAGE also significantly decreased (P = 0.049). The differences between the highest and lowest levels of blood glucose over 24 h and during the night were significantly lower than prior to CPAP treatment (P < 0.05 and P = 0.024, respectively). The 24 h and night time durations of high blood glucose (>7.8 mmol/L and >11.1 mmol/L) decreased (P < 0.05 and P < 0.05, respectively) after the treatment. In addition, HbA1c levels were also lower than those before treatment (P < 0.05), and the homeostasis model assessment index of insulin resistance was also significantly lower than before CPAP treatment (P = 0.034). Conclusions: CPAP therapy may have a beneficial effect on improving not only blood glucose but also insulin sensitivity in T2DM patients with OSAHS. This suggests that CPAP may be an effective treatment for T2DM in addition to intensive diabetes management. abstract_id: PUBMED:27415404 Subjective sleep disturbances and glycemic control in adults with long-standing type 1 diabetes: The Pittsburgh's Epidemiology of Diabetes Complications study. Aims: To date, studies on sleep disturbances in type 1 diabetes (T1D) have been limited to youth and/or small samples. We therefore assessed the prevalence of subjective sleep disturbances and their associations with glycemia and estimated insulin sensitivity in individuals with long-standing T1D. Methods: We conducted a cross-sectional study including 222 participants of the Epidemiology of Diabetes Complications study of childhood-onset T1D attending the 25-year examination (mean age = 52 years, diabetes duration = 43 years). The Berlin Questionnaire (risk of obstructive sleep apnea, OSA), the Epworth Sleepiness Scale (daytime sleepiness), and the Pittsburgh Sleep Quality Index (sleep quality, bad dreams presence, and sleep duration) were completed. Associations between sleep disturbances and poor glycemic control (HbA1c ≥ 7.5%/58 mmol/mol), log-transformed HbA1c, and estimated insulin sensitivity (estimated glucose disposal rate, eGDR, squared) were assessed in multivariable regression. Results: The prevalences of high OSA risk, excessive daytime sleepiness, poor sleep quality, and bad dreams were 23%, 13%, 41%, and 26%, respectively, with more women (51%) reporting poor sleep quality than men (30%, p=0.004). Participants under poor glycemic control were twice as likely to report bad dreams (p=0.03), but not independently (p=0.07) of depressive symptomatology. Sleep duration was directly associated with HbA1c among individuals with poor glycemic control, but inversely in their counterparts (interaction p=0.002), and inversely associated with eGDR (p=0.002). Conclusions: These findings suggest important interrelationships between sleep, gender, depressive symptomatology, and glycemic control, which may have important clinical implications. Further research is warranted to examine the mechanism of the interaction between sleep duration and glycemic control. abstract_id: PUBMED:26847407 Glucose tolerance and cardiovascular risk biomarkers in non-diabetic non-obese obstructive sleep apnea patients: Effects of long-term continuous positive airway pressure. Background: Insulin resistance, glucose dyshomeostasis and oxidative stress are associated with the cardiovascular consequences of obstructive sleep apnea (OSA).
The effects of a long-term continuous positive airway pressure (LT-CPAP) treatment on such mechanisms still remain conflicting. Objective: To investigate the effect of LT-CPAP on glucose tolerance, insulin sensitivity, oxidative stress and cardiovascular biomarkers in non-obese non-diabetic OSA patients. Patients & Methods: Twenty-eight apneic, otherwise healthy, men suffering from OSA (mean age = 48.9 ± 9.4 years; apnea-hypopnea index = 41.1 ± 16.1 events/h; BMI = 26.6 ± 2.8 kg/m²; fasting glucose = 4.98 ± 0.37 mmol/L) were evaluated before and after LT-CPAP by an oral glucose tolerance test (OGTT), measuring plasma glucose, insulin and proinsulin. Glycated hemoglobin, homeostasis model assessment of insulin resistance, blood lipids, oxidative stress, homocysteine and NT-pro-brain natriuretic peptide (NT-proBNP) were also measured. Results: LT-CPAP treatment lasted 13.9 ± 6.5 months. At baseline, the time spent at SaO2 < 90%, minimal and mean SaO2 were associated with insulin area under the curve during OGTT (r = 0.448, P = 0.011; r = -0.382; P = 0.047 and r = -0.424; P = 0.028, respectively) and most other glucose/insulin homeostasis biomarkers, as well as with homocysteine (r = 0.531, P = 0.006; r = -0.487; P = 0.011 and r = -0.409; P = 0.034, respectively). LT-CPAP had no effect on any of the OGTT-related measurements, but increased plasma total antioxidant status (+7.74%; P = 0.035) in a duration-dependent manner (r = 0.607; P < 0.001), and decreased both homocysteine (-15.2%; P = 0.002) and NT-proBNP levels (-39.3%; P = 0.002). Conclusions: In non-obese non-diabetic OSA patients, nocturnal oxygen desaturation is strongly associated with insulin resistance. LT-CPAP does not improve glucose homeostasis or insulin sensitivity but has a favorable effect on antioxidant capacity and cardiovascular risk biomarkers. abstract_id: PUBMED:26146025 Effects of continuous positive airway pressure treatment on glucose metabolism in patients with obstructive sleep apnea. A possible association between obstructive sleep apnea (OSA) and type 2 diabetes (T2DM) has been suggested. OSA could alter glucose metabolism, generating insulin resistance and favoring the development of T2DM. In addition, our greater understanding of intermediate disorders produced by intermittent hypoxia and sleep fragmentation, such as sympathetic activation, oxidative stress, systemic inflammation and alterations in appetite-regulating hormones, provides biological plausibility to this possible association. Nevertheless, there are still few data available about the consequences of suppressing apnea. Therefore, the objective of this review was to analyze current knowledge about the effect of continuous positive airway pressure (CPAP) on glucose metabolism. A global interpretation of the studies evaluated shows that CPAP could improve insulin resistance, and perhaps also glycemic control, in OSA patients who still have not developed diabetes. In addition, it seems possible that the effect of CPAP is still greater in patients with OSA and T2DM, particularly in those patients with more severe and symptomatic OSA, in those with poorer baseline glycemic control and with greater compliance and duration of CPAP treatment. In conclusion, although the current information available is limited, it suggests that apnea reversion by means of CPAP could improve the control of glucose metabolism.
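The LT-CPAP study above (PUBMED:26847407) reports associations with the insulin area under the curve (AUC) during the OGTT but does not state how that AUC was computed. In practice, OGTT AUCs are almost always obtained with the trapezoidal rule over the sampling times (an assumption here, not a detail given in the abstract):
AUC \approx \sum_{i=1}^{n-1} \frac{y_i + y_{i+1}}{2} \,(t_{i+1} - t_i)
where y_i is the plasma insulin (or glucose) concentration measured at time t_i during the test.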
abstract_id: PUBMED:17608154 The clinical investigation of the relationship between obstructive sleep apnea-hypopnea syndrome and insulin resistance Objective: To study the relationship between obstructive sleep apnea-hypopnea syndrome (OSAHS) and insulin resistance (IR), then evaluate the effectiveness of the improved uvulopalatopharyngoplasty (MUPPP) and continuous positive airway pressure (CPAP) on IR. Method: Fourteen patients of OSAHS were treated by MUPPP, and sixteen patients of OSAHS were treated by CPAP. All indices of the nocturnal polysomnography, fasting plasma insulin, fasting plasma glucose, and 2-hour postprandial insulin and blood glucose were analyzed before and after therapy, and 10 patients of OSAHS were untreated by MUPPP or CPAP. The other 33 cases of non-OSAHS were selected as control group. According to the model of HOMA and the formula of Li Guangwei, insulin resistance index and insulin sensitivity index were calculated, respectively. Result: There were significant differences between OSAHS group before treatment and control group, before and after treatment, after treatment and untreated group (P < 0.01 or P < 0.05). There was a very significant correlation between IAI and LSaO2 (r = 0.633), and there was a significant negative correlation between IAI and AHI (r = -0.654). Conclusion: OSAHS is an important risk factor for the development of insulin resistance. It shows that OSAHS may lead to the development of IR in patients, and the treatment of MUPPP and CPAP can improve insulin sensitivity. abstract_id: PUBMED:25276145 Effects of continuous positive airway pressure treatment on glycaemic control and insulin sensitivity in patients with obstructive sleep apnoea and type 2 diabetes: a meta-analysis. Introduction: Obstructive sleep apnoea (OSA) is a prevalent disorder characterised by repetitive upper-airway obstruction during sleep, and it is associated with type 2 diabetes. Continuous positive airway pressure (CPAP) is the primary treatment for OSA. Prior studies investigating whether CPAP can improve insulin resistance or glucose control in OSA patients have resulted in conflicting findings. This meta-analysis investigated whether CPAP treatment could improve glucose metabolism and insulin resistance in patients with OSA and type 2 diabetes. Material And Methods: We performed a systematic literature search using Medline, Cochrane, EMBASE, and Google Scholar databases for randomised controlled prospective studies that investigated the effect of CPAP on glycaemic control or insulin sensitivity in subjects with type 2 diabetes. Results: The combined standard (STD) paired difference in mean change in the levels of glycated haemoglobin (HbA1c) was -0.073% (standard error (SE): 0.126), indicating that CPAP treatment did not alter HbA1c levels. The combined STD paired difference in mean change of insulin sensitivity was observed as 0.552 µmol/kg·min (SE = 0.196) and indicated that insulin sensitivity significantly increased with CPAP treatment (p = 0.005). Conclusions: We found that the CPAP treatment did not alter HbA1c levels but did significantly improve insulin resistance, indicating that treating OSA can positively impact the symptoms of type 2 diabetes. abstract_id: PUBMED:24033956 Effects of nasal continuous positive airway pressure treatment on insulin resistance and ghrelin levels in non-diabetic apnoeic patients with coronary heart disease. Background: Obesity is a common risk factor for several diseases.
Obesity related hormone and increased insulin resistance (IR) may contribute to the effects of obstructive sleep apnoea on cardiovascular consequences. We investigated ghrelin and IR in non-diabetic apnoeic patients with stable coronary heart disease and assessed the effects of continuous positive airway pressure (CPAP). Methods: Plasma ghrelin, glucose and insulin were measured in 22 patients with CPAP and 22 matched controls without CPAP at baseline and three months. Indexes including homeostasis model assessment IR (HOMA IR), HOMA S and HOMA β were calculated for the assessment of IR, insulin sensitivity and pancreatic β cell function. Results: At three months follow-up, plasma ghrelin levels and HOMA IR in CPAP group were significantly decreased (P=0.002 and 0.046, respectively) while those in control group increased significantly (P=0.012 and 0.009, respectively). Significant moderate correlations were found between ghrelin vs. HOMA IR and ghrelin vs. HOMA S after CPAP, however, for those without CPAP, no significant associations were observed. Conclusions: Short-term effective continuous positive airway pressure had a significant effect on lowering plasma ghrelin levels and IR, but not body fat. Further large scale and longer term studies are warranted to corroborate these findings. Answer: In non-diabetic patients with Obstructive Sleep Apnea Hypopnea Syndrome (OSAHS), markers of glycemic control and insulin resistance, such as glycated hemoglobin, fasting glucose, insulin levels, and HOMA(IR), are of interest to determine the impact of the condition and the effectiveness of treatment on metabolic parameters. The study by Steiropoulos et al. (PUBMED:19231280) aimed to explore the association between these parameters and sleep characteristics in non-diabetic OSAHS patients, as well as the effect of 6 months of CPAP therapy on these markers, according to adherence to CPAP treatment. The results indicated that good adherence to long-term CPAP treatment could significantly reduce HbA1C levels, but had no effect on markers of insulin resistance. In summary, adherence to CPAP treatment in non-diabetic patients with OSAHS can improve some markers of glycemic control, specifically HbA1C levels, suggesting a potential benefit for metabolic health. However, the impact on insulin resistance markers was not observed in this particular study. It is important to note that CPAP adherence is crucial for observing these benefits, as indicated by the significant improvements seen in patients who adhered well to the CPAP therapy regimen.
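Several of the abstracts and the answer above rely on HOMA-derived indices (HOMA-IR, HOMA-S, HOMA-β) and QUICKI without spelling them out. For reference, the widely used HOMA1 approximations and the QUICKI definition are given below; individual studies may have used the computer-based HOMA2 model or other variants, so treat these as the conventional formulas rather than the exact calculations of each paper:
HOMA-IR = (fasting glucose [mmol/L] × fasting insulin [µU/mL]) / 22.5
HOMA-%S = 100 / HOMA-IR
HOMA-β (%) = (20 × fasting insulin [µU/mL]) / (fasting glucose [mmol/L] − 3.5)
QUICKI = 1 / (log10 fasting insulin [µU/mL] + log10 fasting glucose [mg/dL])
Higher HOMA-IR values indicate greater insulin resistance, which is why the CPAP studies above report decreases in HOMA-IR as improvements.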
Instruction: Are healthcare quality "report cards" reaching consumers? Abstracts: abstract_id: PUBMED:29480085 The Evolution of Nursing Home Report Cards. Nursing home report cards can be potentially key tools for disseminating information to consumers. However, few accounts of state-based nursing home report cards are available. In the research presented here, the scale, scope, utility, and changes over time in these nursing home report cards are described. This article finds that the number of report cards has increased from 24 in 2003 to 29 in 2009. The quality information presented varies considerably; however, deficiency citations are still the most frequently reported quality indicators. The utility of report cards varies considerably. The authors present their opinions of features that seem most conducive for consumer use of these report cards. abstract_id: PUBMED:33417173 Patient selection in the presence of regulatory oversight based on healthcare report cards of providers: the case of organ transplantation. Many healthcare report cards provide information to consumers but do not represent a constraint on the behavior of healthcare providers. This is not the case with the report cards utilized in kidney transplantation. These report cards became more salient and binding, with additional oversight, in 2007 under the Centers for Medicare and Medicaid Services Conditions of Participation. This research investigates whether the additional oversight based on report card outcomes influences patient selection via waiting-list registrations at transplant centers that meet regulatory standards. Using data from a national registry of kidney transplant candidates from 2003 through 2010, we apply a before-and-after estimation strategy that isolates the impact of a binding report card. A sorting equilibrium model is employed to account for center-level heterogeneity and the presence of congestion/agglomeration effects and the results are compared to a conditional logit specification. Our results indicate that patient waiting-list registrations change in response to the quality information similarly on average if there is additional regulation or not. We also find evidence of congestion effects when spatial choice sets are smaller: new patient registrations are less likely to occur at a center with a long waiting list when fewer options are available. abstract_id: PUBMED:27468943 Is Anyone Paying Attention to Physician Report Cards? The Impact of Increased Availability on Consumers' Awareness and Use of Physician Quality Information. Objective: To determine if the release of health care report cards focused on physician practice quality measures leads to changes in consumers' awareness and use of this information. Primary Data Sources: Data from two rounds of a survey of the chronically ill adult population conducted in 14 regions across the United States, combined with longitudinal information from a public reporting tracking database. Both data were collected as part of the evaluation for Aligning Forces for Quality, a nationwide quality improvement initiative funded by the Robert Wood Johnson Foundation. Study Design: Using a longitudinal design and an individual-level fixed effects modeling approach, we estimated the impact of community public reporting efforts, measured by the availability and applicability of physician quality reports, on consumers' awareness and use of physician quality information (PQI). 
Principal Findings: The baseline level of awareness was 12.6 percent in our study sample, drawn from the general population of chronically ill adults. Among those who were not aware of PQI at the baseline, when PQI became available in their communities for the first time, along with quality measures that are applicable to their specific chronic conditions, the likelihood of PQI awareness increased by 3.8 percentage points. For the same group, we also find similar increases in the uses of PQI linked to newly available physician report cards, although the magnitudes are smaller, between 2 and 3 percentage points. Conclusions: Specific contents of physician report cards can be an important factor in consumers' awareness and use of PQI. Policies to improve awareness and use of PQI may consider how to customize quality report cards and target specific groups of consumers in dissemination. abstract_id: PUBMED:34218041 Hospital report cards: Quality competition and patient selection. Hospital 'report cards' policies involve governments publishing information about hospital quality. Such policies often aim to improve hospital quality by stimulating competition between hospitals. Previous empirical literature lacks a comprehensive theoretical framework for analysing the effects of report cards. We model a report card policy in a market where two hospitals compete for patients on quality under regulated prices. The report card policy improves the accuracy of the quality signal observed by patients. Hospitals may improve their published quality scores by costly quality improvement or by selecting healthier patients to treat. We show that increasing information through report cards always increases quality and only sometimes induces selection. Report cards are more likely to increase patient welfare when quality scores are well risk-adjusted, where the cost of selecting patients is high, and the cost of increasing quality is low. abstract_id: PUBMED:38096004 Web-Based Public Reporting as a Decision-Making Tool for Consumers of Long-Term Care in the United States and the United Kingdom: Systematic Analysis of Report Cards. Background: Report cards can help consumers make an informed decision when searching for a long-term care facility. Objective: This study aims to examine the current state of web-based public reporting on long-term care facilities in the United States and the United Kingdom. Methods: We conducted an internet search for report cards, which allowed for a nationwide search for long-term care facilities and provided freely accessible quality information. On the included report cards, we drew a sample of 1320 facility profiles by searching for long-term care facilities in 4 US and 2 UK cities. Based on those profiles, we analyzed the information provided by the included report cards descriptively. Results: We found 40 report cards (26 in the United States and 14 in the United Kingdom). In total, 11 of them did not state the source of information. Additionally, 7 report cards had an advanced search field, 24 provided simplification tools, and only 3 had a comparison function. Structural quality information was always provided, followed by consumer feedback on 27 websites, process quality on 15 websites, prices on 12 websites, and outcome quality on 8 websites. Inspection results were always displayed as composite measures. Conclusions: Apparently, the identified report cards have deficits. 
To make them more helpful for users and to bring public reporting a bit closer to its goal of improving the quality of health care services, both countries are advised to concentrate on optimizing the existing report cards. Those should become more transparent and improve the reporting of prices and consumer feedback. Advanced search, simplification tools, and comparison functions should be integrated more widely. abstract_id: PUBMED:27890391 Public reporting of hospital quality shows inconsistent ranking results. Background: Evidence from the US has demonstrated that hospital report cards might generate confusion for consumers who are searching for a hospital. So far, little is known regarding hospital ranking agreement on German report cards as well as underlying factors creating disagreement. Objective: This study examined the consistency of hospital recommendations on German hospital report cards and discussed underlying reasons for differences. Methods: We compared hospital recommendations for three procedures on four German hospital report cards. The agreement between two report cards was determined by Cohen's-Kappa. Fleiss' kappa was applied to evaluate the overlap across all four report cards. Results: Overall, 43.40% of all hospitals were labeled equally as low, middle, or top performers on two report cards (hip replacement: 43.2%; knee replacement: 42.8%; percutaneous coronary intervention: 44.3%). In contrast, 8.5% of all hospitals were rated a top performer on one report card and a low performer on another report card. The inter-report card agreement was slight at best between two report cards (κmax=0.148) and poor between all four report cards (κmax=0.111). Conclusions: To increase the benefit of public reporting, increasing the transparency about the concept of - medical - "quality" that is represented on each report card seems to be important. This would help patients and other consumers use the report cards that most represent one's individual preferences. abstract_id: PUBMED:9141339 Will quality report cards help consumers? This study assesses the relationship between the salience of quality information and how well it is understood by consumers. The analysis is based on survey data and content analysis from focus-group data (104 participants). The findings show that poorly understood indicators are viewed as not useful. Consumers often do not understand quality information because they do not understand the current health care context. All of this suggests that salience alone is not sufficient to determine which indicators should be included in report cards. abstract_id: PUBMED:10563281 The current quality of health plan report cards. In general, health plan report cards can provide valuable information to consumers, physicians, and health care purchasers regarding plan performance and quality to assist in the selection of a health plan. However, significant limitations of health plan report cards currently exist. It is only with further evolvement and refinement that health plan report cards can live up to their potential and become a distinctive and useful tool in evaluating and selecting among various health plans. abstract_id: PUBMED:24463142 Competition and the impact of online hospital report cards. Information on the quality of healthcare gives providers an incentive to improve care, and this incentive should be stronger in more competitive markets. 
We examine this hypothesis by studying Pennsylvanian hospitals during the years 1995-2004 to see whether those hospitals located in more competitive markets increased the quality of the care provided to Medicare patients after report cards rating the quality of their Coronary Artery Bypass Graft programs went online in 1998. We find that after the report cards went online, hospitals in more competitive markets used more resources per patient, and achieved lower mortality among more severely ill patients. abstract_id: PUBMED:19548518 Evaluation criteria for report cards of healthcare providers. Report cards, performance evaluations, and quality assessments continue to penetrate the lexicon of the healthcare sector. The value of report cards is typically couched as enhancing consumerism among patients, increasing accountability among healthcare providers, and more broadly increasing the transparency of healthcare information. This paper discusses the potential benefits and pitfalls of these performance assessments. This paper briefly reviews empirical evidence regarding the impact of report cards for healthcare providers and synthesizes the role and limitations of these performance measures into distinct evaluation criteria. The rapid proliferation of report cards for healthcare providers suggests a growing need to develop mechanisms and tools to evaluate their impact. The risks associated with utilizing report cards for provider oversight include the deleterious impact on vulnerable populations and a failure to accurately measure quality of care. The capacity to create report cards should not be the sole criterion to develop and utilize report cards to evaluate healthcare providers. Rather, careful consideration of the benefits and risks should accompany the implementation and utilization of report cards into regulatory processes. This report proposes an evaluation checklist by which to assess the role of report cards in a given healthcare context. Answer: Healthcare quality "report cards" are designed to disseminate information to consumers and can be a key tool for making informed decisions regarding healthcare services. The evolution of nursing home report cards has shown an increase in their number from 24 in 2003 to 29 in 2009, with deficiency citations being the most frequently reported quality indicators (PUBMED:29480085). However, the utility of these report cards varies considerably, and there are opinions on features that could make them more conducive for consumer use. In the context of physician quality information (PQI), the release of report cards focused on physician practice quality measures has been linked to a modest increase in consumers' awareness and use of this information. Specifically, when PQI became available in communities for the first time, along with quality measures applicable to specific chronic conditions, the likelihood of PQI awareness increased by 3.8 percentage points (PUBMED:27468943). Despite these positive trends, there are challenges in reaching consumers effectively. For instance, a study on German hospital report cards found inconsistent ranking results, with only 43.40% of hospitals being labeled equally across different report cards. This inconsistency could generate confusion for consumers searching for a hospital (PUBMED:38096004). 
Additionally, the salience of quality information does not guarantee that it is well understood by consumers, as indicated by a study showing that poorly understood indicators are viewed as not useful (PUBMED:9141339). Moreover, the current quality of health plan report cards has significant limitations, and further evolution and refinement are needed for them to become a distinctive and useful tool for evaluating and selecting health plans (PUBMED:10563281). In summary, while healthcare quality report cards are reaching consumers to some extent, there is evidence of variability in their utility, awareness, and understanding among consumers. Efforts to improve the transparency, customization, and dissemination of these report cards could enhance their effectiveness as a decision-making tool for consumers (PUBMED:27890391; PUBMED:24463142; PUBMED:19548518).
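Before moving on, it may help to make the agreement statistics in the report-card abstracts above concrete. The German study (PUBMED:27890391) reports Cohen's kappa for pairs of report cards and Fleiss' kappa across all four. The sketch below is illustrative only: the performance labels are invented, not the study's data, and the calls are standard scikit-learn and statsmodels routines.

```python
# Illustrative inter-rater agreement calculation with invented hospital ratings.
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = hospitals, columns = report cards; 0 = low, 1 = middle, 2 = top performer.
ratings = [
    [2, 1, 2, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [2, 2, 2, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
]

# Pairwise agreement between two report cards (Cohen's kappa).
card_a = [row[0] for row in ratings]
card_b = [row[1] for row in ratings]
print("Cohen's kappa, cards A vs B:", cohen_kappa_score(card_a, card_b))

# Agreement across all four report cards (Fleiss' kappa).
counts, _ = aggregate_raters(ratings)   # hospitals x rating-category count table
print("Fleiss' kappa, all cards:", fleiss_kappa(counts, method="fleiss"))
```

With a handful of invented ratings the exact numbers are meaningless; the point is the shape of the calculation that produces kappa values such as the 0.148 and 0.111 quoted above.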
Instruction: Mediolateral episiotomy: are trained midwives and doctors approaching it from a different angle? Abstracts: abstract_id: PUBMED:24388846 Mediolateral episiotomy: are trained midwives and doctors approaching it from a different angle? Objectives: The angle at which a mediolateral episiotomy is incised is critical to the risk of obstetric anal sphincter injuries (OASIS). When a mediolateral episiotomy is incised at least 60 degrees from the midline it is protective to the anal sphincter. The objective of our study was to investigate how accoucheurs described and depicted a mediolateral episiotomy. Study Design: One hundred doctors and midwives were invited to complete an interview-administered questionnaire in a district general hospital in the United Kingdom over a 10-month period commencing in August 2012. Accoucheurs were asked to describe the angle at which they would cut a mediolateral episiotomy, and to depict this on a pictorial representation of the perineum. The angle drawn was calculated by an investigator blinded to the participant's initial description of a mediolateral episiotomy. Results: Sixty-one midwives and 39 doctors participated. Doctors and midwives stated they would perform a mediolateral episiotomy at an angle of 45 degrees from the midline, but midwives depicted episiotomies 8 degrees closer to the midline (37.3 degrees vs. 44.9 degrees, p=0.013) than they described. Seventy-six percent of accoucheurs had undergone formal training in how to perform a mediolateral episiotomy, but this had no impact on their clinical practice. Accoucheurs who had been supervised for ten episiotomies before independent practice performed them in keeping with the angle they described. Conclusions: Doctors and midwives are unaware of the appropriate angle (60 degrees) at which a mediolateral episiotomy should be incised to minimise obstetric anal sphincter injury. The correct angle should be emphasised to accoucheurs to minimise the risk of anal sphincter damage. In addition midwives depict episiotomies that are significantly more acute than they describe. Accoucheurs should also perform at least 10 episiotomies under supervision prior to independent practice. Training programmes should be devised and validated to improve visual measurement of the episiotomy incision angle at crowning. Consideration should also be given to the development of novel surgical devices that help the accoucheur to perform a mediolateral episiotomy accurately. abstract_id: PUBMED:24570598 Cutting a mediolateral episiotomy at the correct angle: evaluation of a new device, the Episcissors-60. Background: Anal incontinence is nine times more prevalent in women than in men due to obstetric anal sphincter injury (OASI). OASI is linked to midline episiotomies and mediolateral episiotomies with post-delivery angles of <30 and >60 degrees. Studies show that doctors and midwives are unable to correctly "eyeball" the safe angle required due to perineal stretching by the fetal head at crowning. A new scissor instrument (Episcissors-60) was devised to allow cutting a mediolateral episiotomy at a fixed angle of 60 degrees from the perineal midline. Methods: Scissors with a marker guide limb pointing towards the anus were devised, ensuring an angle of 60 degrees between the scissor blades and the guide limb. This device was initially tested in models. Post-delivery angles were recorded on transparencies and analyzed by an author blinded to clinical details.
Accoucheurs were asked to rate the ease of use on a 5-point scale. Results: Of the 17 women, 14 delivered with ventouse, two with forceps, and one with sequential ventouse-forceps. Indications for instrumental delivery were suboptimal cardiotocogram and/or prolonged second stage of labor. Mean birth weight was 3.41 (2.92-4.12) kg. A mean post-delivery angle of 42.4±7 (range 30-60, median 43) degrees (95% confidence interval 38.8-46) was achieved with the Episcissors-60 instrument. Eighty-eight percent of clinicians agreed or strongly agreed that the scissors were easy to use. Conclusion: The Episcissors-60 delivered a consistent post-delivery angle of 43 degrees. They could replace "eyeballing" when performing mediolateral episiotomies and form part of a preventative strategy to reduce OASI. abstract_id: PUBMED:28477150 The optimal angle of the mediolateral episiotomy at crowning of the head during labor. Introduction And Hypothesis: The aim of the mediolateral episiotomy incision is to increase the diameter of the soft tissue of the vaginal outlet to facilitate birth and to prevent vaginal tears. Episiotomy angles that are too narrow and close to the midline increase the risk of obstetric anal sphincter injuries. In order to determine the optimal angle of the episiotomy, we assessed the changes in the angles of episiotomy lines marked during the first stage of labor and measured at the time of crowning of the head. Methods: Incision lines for mediolateral episiotomy were marked on the perineal skin at angles of 30°, 45°, and 60° from the midline during the first stage of labor in women with a singleton pregnancy. The angles of the marked lines were measured at crowning of the head. Mediolateral episiotomy was performed only for obstetric indications. Results: The study included 102 women with a singleton pregnancy. Of these women, 50 were primiparous and 52 were multiparous. All angles marked during the first stage of labor increased significantly (by more than 30°) at crowning of the head. Similar changes were observed in primiparous and multiparous women. Conclusions: The angle of the mediolateral episiotomy line was significantly greater at crowning of the head than when marked during the first stage of labor. To achieve the desired episiotomy angle, it is important to take into consideration the changes in mediolateral episiotomy angles that occur during labor. abstract_id: PUBMED:16045535 Are mediolateral episiotomies actually mediolateral? This study investigated potential differences in the cutting of mediolateral episiotomy between doctors and midwives. Depth, length, distance from midline and shortest distance from the midpoint of the anal canal to the episiotomy were measured in a sample of primigravid women. The angle subtended from the sagittal or parasagittal plane was calculated. Two hundred and forty-one women participated of whom 98 (41%) had a mediolateral episiotomy. Doctors performed episiotomies that were significantly deeper, longer and more obtuse than those by midwives. No midwife and only 13 (22%) doctors performed truly mediolateral episiotomies. It appears that the majority of episiotomies are not truly mediolateral but closer to the midline. More focused training in mediolateral episiotomy technique is required. abstract_id: PUBMED:26381595 Differences in characteristics of mediolateral episiotomy in professionals at the same hospital.
Objectives: The objective of our study was to compare the theoretical concept of the accoucheur in our institution with regard to the characteristics of the mediolateral episiotomy (MLE), with a crowning head and after a delivery. Methods: We devised two simple pictorial questionnaires (one with a crowning head and the other at rest after a delivery) in order to explore possible differences in clinical practice between the accoucheurs of our institution with respect to the MLE characteristics. Results: With a crowning head, we found more acute angles when the accoucheurs were older than 35 years and had more than 15 years of experience, but not with the perineum at rest. No difference was found between doctors and midwives, nor between males and females. 28.1% of accoucheurs indicated a more acute episiotomy angle with a crowning head. Conclusion: This study confirmed that the individual interpretation of MLE differed widely among professionals at the same hospital. These demonstrated differences could predispose women to a greater risk of anal sphincter injuries. For this reason, there is a need to standardize this practice, to make the technique more homogeneous, particularly in the context of future research into the risks and benefits of episiotomy with respect to major perineal trauma. abstract_id: PUBMED:28340588 Comparison of the COM-FCP inclination angle and other mediolateral stability indicators for turning. Background: Studies have shown that turning is associated with more instability than straight walking and instability increases with turning angles. However, the precise relationship of changes in stability with the curvature and step length of turning is not clear. The traditional center of mass (COM)-center of pressure (COP) inclination angle requires the use of force plates. A COM-foot contact point (FCP) inclination angle derived from kinematic data is proposed in this study as a measure of the stability of turning. Methods: In order to generate different degrees of stability, we designed an experiment of walking with different curvatures and step lengths. Simultaneously, a novel method was proposed to calculate the COM-FCP inclination angles of different walking trajectories with different step lengths for 10 healthy subjects. The COM-FCP inclination angle, the COM acceleration, the step width and the COM-ankle inclination angles were statistically analyzed. Results: The statistical results showed that the mediolateral (ML) COM-FCP inclination angles increased significantly as the curvature of the walking trajectories or the step length in circular walking increased. Changes in the ML COM acceleration, the step width and the ML COM-ankle inclination angle verified the feasibility and reliability of the proposed method. Additionally, the ML COM-FCP inclination angle was more sensitive to the ML stability than the ML COM-ankle inclination angle. Conclusions: The work suggests that it is more difficult to keep balance when walking in a circular trajectory with a larger curvature or in a larger step length. Essentially, turning with a larger angle in one step leads to a lower ML stability. A novel COM-FCP inclination angle was validated to indicate ML stability. This method can be applied to complicated walking tasks, where the force plate is not applicable, and it accounts for the variability of the base of support (BOS) compared to the COM-ankle inclination angle.
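The COM-FCP abstract above describes computing a mediolateral inclination angle from kinematic data but does not give the formula. As a hypothetical sketch only, the function below assumes the mediolateral angle is the angle between the vertical and the vector from the foot contact point to the centre of mass, projected onto the frontal plane; the coordinate convention and the numbers are my own assumptions, not the paper's definition.

```python
# Hypothetical sketch of a mediolateral COM-FCP inclination angle.
# Assumes lab coordinates x = mediolateral, y = anteroposterior, z = vertical.
import numpy as np

def ml_com_fcp_angle(com_xyz, fcp_xyz):
    """Angle in degrees between vertical and the FCP-to-COM vector, frontal-plane projection."""
    v = np.asarray(com_xyz, dtype=float) - np.asarray(fcp_xyz, dtype=float)
    ml, vertical = v[0], v[2]
    return np.degrees(np.arctan2(abs(ml), vertical))

# Example frame: COM 5 cm lateral of the foot contact point and 90 cm above it.
print(ml_com_fcp_angle((0.05, 0.10, 0.95), (0.00, 0.12, 0.05)))  # about 3.2 degrees
```

Larger lateral offsets of the COM relative to the supporting foot give larger angles, which is consistent with the abstract's finding that sharper turns and longer steps raise the mediolateral inclination angle.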
abstract_id: PUBMED:25085451 Midwives' and doctors' perceptions of their preparation for and practice in managing the perineum in the second stage of labour: a cross-sectional survey. Objective: to identify the perceptions of midwives and doctors at Monash Women's regarding their educational preparation and practices used for perineal management during the second stage of labour. Design: anonymous cross-sectional semi-structured questionnaire ('The survey'). Setting: the three maternity hospitals that form Monash Women's Maternity Services, Monash Health, Victoria, Australia. Participants: midwives and doctors attending births at one or more of the three Monash Women's maternity hospitals. Methods: a semi-structured questionnaire was developed, drawing on key concepts from experts and peer-reviewed literature. Findings: surveys were returned by 17 doctors and 69 midwives (37% response rate, from the 230 surveys sent). Midwives and doctors described a number of techniques they would use to reduce the risk of perineal trauma, for example, hands on the fetal head/perineum (11.8% of doctors, 61% of midwives), the use of warm compresses (45% of midwives) and maternal education and guidance with pushing (49.3% of midwives). When presented with a series of specific obstetric situations, respondents indicated that they would variably practice hands on the perineum during second stage labour, hands off and episiotomy. The majority of respondents indicated that they agreed or strongly agreed that an episiotomy should sometimes be performed (midwives 97%, doctors 100%). All the doctors had training in diagnosing severe perineal trauma involving anal sphincter injury (ASI), with 77% noting that they felt very confident with this. By contrast, 71% of the midwives reported that they had received training in diagnosing ASI and only 16% of these reported that they were very confident in this diagnosis. All doctors were trained in perineal repair, compared with 65% of midwives. Doctors were more likely to indicate that they were very confident in perineal repair (88%) than the midwives (44%). Most respondents were not familiar with the rates of perineal trauma either within their workplace or across Australia. Key Conclusions: Midwives and doctors indicated that they would use the hands on or hands off approach or episiotomy depending on the specific clinical scenario and described a range of techniques that they would use in their overall approach to minimising perineal trauma during birth. Midwives were more likely than doctors to indicate their lack of training and/or confidence in conducting perineal repair and diagnosing ASI. Implications For Practice: many midwives indicated that they had not received training in diagnosing ASI, perineal repair and midwives' and doctors' knowledge of the prevalence of perineal outcomes was poor. Given the importance of these skills to women cared for by midwives and doctors, the findings may be used to inform the development of quality improvement activities, including training programs and opportunities for gaining experience and expertise with perineal management. The use of episiotomy and hands on/hands off the perineum in the survey scenarios provides reassurance that doctors and midwives take a number of factors into account in their clinical practice, rather than a preference for one or more interventions over others. abstract_id: PUBMED:33851079 Is mediolateral episiotomy angle associated with postpartum perineal pain in primiparous women? 
Objective: Our aim is to elucidate the relationship between mediolateral episiotomy (MLE) angle and postpartum perineal pain. Methods: This study was designed prospectively. Primiparous women with MLE in the postpartum period were included in the study and divided into three groups according to episiotomy angle ranges (Group 1: <40°, Group 2: 40°-60°, and Group 3: >60°). Postpartum perineal pain was quantified with the short-form McGill Pain questionnaire (SF-MPQ) consisting of the following three parts: sensory-affective-verbal descriptions, visual pain scale (VPS), and present pain intensity scale (PPI). Postpartum perineal pain scores on days 1 and 7 were compared among the angle groups. Results: Overall, 86 eligible women were enrolled in this study. Seventy-three women (85%) scored the perineal pain between 0 and 3 on the VPS and 13 women (15%) rated the pain from 4 to 6 on the 1st postpartum day. No significant differences were noted among the three groups regarding the total pain scores on SF-MPQ and on each part of the form on the 1st postpartum day. At 7 days postpartum, the total pain score was significantly higher in Group 1 [Med; IQR (min-max)=0; 4 (0-5)] compared with Group 2 [Med; IQR (min-max)=0; 0(0-5)]. The pain scores obtained from the sensory, affective, VPS, and PPI parts of the questionnaire were [Med; IQR (min-max)=0; 1 (0-2)], [Med; IQR (min-max)=0; 1 (0-1)], [Med; IQR (min-max)=0; 2 (0-2)], and [Med; IQR (min-max)=0; 0.25 (0-1)], respectively, in Group 1. For Group 2, pain scores obtained from the sensory, affective, and PPI were [Med; IQR (min-max)=0; 0(0-1)]; and VPS was [Med; IQR (min-max)=0; 0(0-2)]. No significant differences were observed between Groups 1 and 2 for each part of the questionnaire on day 7. The percentage of women requiring analgesics on day 7 was significantly higher in Group 1 (42.9%) than Group 2 (31.2%). Conclusion: MLE at an angle <40° to the midline is associated with a higher score of perineal pain and an increased need for analgesics during the early postpartum days. abstract_id: PUBMED:28931997 The effects of ratio in the multifidus muscle thickness according to various angle of mediolateral ramp during double stance. [Purpose] This study was conducted to predict the risks of standing on mediolateral ramps by identifying the ratios of the multifidus muscles on the two sides of the spine when standing postures are maintained on mediolateral ramps of diverse angles. [Subjects and Methods] The study was conducted with 15 healthy adult males. All subjects participated voluntarily. Mediolateral ramps at five angles (0°, 5°, 10°, 15°, and 20°) were used. Ultrasonography was used to determine the thicknesses of the multifidus muscles under individual conditions. [Results] The ratio of the left/right multifidus muscles showed statistically significant differences as the angle of the mediolateral ramp increased. Post-hoc test results showed that the use of the two multifidus muscles was asymmetric on mediolateral ramps at angles equal to or larger than 10°. [Conclusion] The asymmetric use of the multifidus muscles began on a mediolateral ramp at an angle of 10°, suggesting that subjects with instability in body control might have risk factors such as falls on mediolateral ramps with angles equal to or larger than 10°. Therefore, caregivers must be attentive and/or assist patients and older adults when they are walking on ramps at angles of approximately 10° and up in their daily lives.
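The postpartum-pain abstract above (PUBMED:33851079) compares median (IQR) SF-MPQ scores across the three angle groups but does not name its statistical test. Purely as an illustration, and with invented day-7 scores, a nonparametric comparison could be run as follows; the choice of Kruskal-Wallis with a Mann-Whitney follow-up is my assumption, not necessarily the study's method.

```python
# Invented day-7 perineal pain scores for three episiotomy-angle groups.
from scipy.stats import kruskal, mannwhitneyu

group1 = [4, 3, 0, 5, 2, 0, 1, 3]   # angle < 40 degrees
group2 = [0, 1, 0, 0, 2, 0, 1, 0]   # 40-60 degrees
group3 = [0, 0, 1, 0, 0, 2, 0, 0]   # angle > 60 degrees

h_stat, p_overall = kruskal(group1, group2, group3)
u_stat, p_1_vs_2 = mannwhitneyu(group1, group2, alternative="two-sided")
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_overall:.3f}")
print(f"Mann-Whitney U (group 1 vs group 2) = {u_stat:.1f}, p = {p_1_vs_2:.3f}")
```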
abstract_id: PUBMED:18236892 How exact is the performance of mediolateral episiotomy Objective: Evaluation of correctly performed mediolateral episiotomies in vaginal deliveries. Methods: A cohort study of 253 primigravid women after their first vaginal delivery. The main parameter followed was the angle of episiotomy. This is defined as the angle formed by midline and the line of epidermal suturing of the episiotomy. Results: 137 women had deliveries performed by a doctor and 116 by a midwife. There was no significant difference between groups. The number of adequately performed mediolateral episiotomies was 100 (40%) altogether, from them 48 (41%) among midwives and 52 (40%) among doctors. Other parameters were: the length of perineum, the length of episiotomy, and the shortest distance between episiotomy and the centre of the anus. Conclusion: This study shows that majority of episiotomies are cut in the wrong direction. The difference in technique between doctors and midwives was not observed. Answer: The angle at which a mediolateral episiotomy is incised is critical to the risk of obstetric anal sphincter injuries (OASIS). Studies have shown that when a mediolateral episiotomy is incised at least 60 degrees from the midline, it is protective to the anal sphincter (PUBMED:24388846). However, there appears to be a discrepancy between the theoretical knowledge of the appropriate angle and the actual practice among trained midwives and doctors. In a study involving doctors and midwives, both groups stated they would perform a mediolateral episiotomy at an angle of 45 degrees from the midline. However, midwives depicted episiotomies 8 degrees closer to the midline (37.3 degrees) than they described, which is significantly more acute than the recommended angle. The study also found that formal training in how to perform a mediolateral episiotomy had no impact on clinical practice, although those who had been supervised for ten episiotomies before independent practice performed them in keeping with the angle they described (PUBMED:24388846). Another study found that doctors and midwives are unable to correctly "eyeball" the safe angle required due to perineal stretching by the fetal head at crowning. A new scissor instrument, Episcissors-60, was devised to allow cutting a mediolateral episiotomy at a fixed angle of 60 degrees from the perineal midline, which could replace "eyeballing" and form part of a preventative strategy to reduce OASI (PUBMED:24570598). Furthermore, research has shown that the angle of the mediolateral episiotomy line is significantly greater at crowning of the head than when marked during the first stage of labor, indicating that to achieve the desired episiotomy angle, it is important to take into consideration the changes in mediolateral episiotomy angles that occur during labor (PUBMED:28477150). In summary, there is evidence that both trained midwives and doctors may not be approaching mediolateral episiotomy at the optimal angle to minimize the risk of OASIS, despite their training. The use of devices like the Episcissors-60 may help in achieving a more consistent and safer angle for episiotomy incisions.
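Several of the episiotomy abstracts above measure the suture-line angle relative to the perineal midline (for example PUBMED:16045535, which calculates the angle subtended from the sagittal plane, and PUBMED:28477150, which re-measures marked lines at crowning). The snippet below is a purely geometric illustration with made-up coordinates, not the measurement protocol of any cited study: given two points on the incision line, the angle from the midline follows from atan2.

```python
# Hypothetical geometry sketch: angle of an episiotomy line relative to the midline.
# x is the lateral distance from the posterior fourchette, y the distance along
# the midline towards the anus (both in millimetres); coordinates are invented.
import math

def episiotomy_angle_deg(origin, end):
    """Angle in degrees between the incision line and the perineal midline."""
    dx = end[0] - origin[0]   # lateral component
    dy = end[1] - origin[1]   # component along the midline
    return math.degrees(math.atan2(abs(dx), abs(dy)))

eyeballed = episiotomy_angle_deg((0, 0), (20, 27))   # about 36.5 degrees
target    = episiotomy_angle_deg((0, 0), (26, 15))   # about 60 degrees
print(round(eyeballed, 1), round(target, 1))
```

A 60-degree cut corresponds to a lateral excursion roughly 1.7 times the posterior one (tan 60 degrees is about 1.73), noticeably flatter than the 45-degree line most accoucheurs describe, which may help explain why eyeballed incisions drift towards the midline.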
Instruction: Fall-related injuries in a nursing home setting: is polypharmacy a risk factor? Abstracts: abstract_id: PUBMED:20003327 Fall-related injuries in a nursing home setting: is polypharmacy a risk factor? Background: Polypharmacy is regarded as an important risk factor for falling, and several studies and meta-analyses have shown an increased fall risk in users of diuretics, type 1a antiarrhythmics, digoxin and psychotropic agents. In particular, recent evidence has shown that fall risk is associated with the use of polypharmacy regimens that include at least one established fall risk-increasing drug, rather than with polypharmacy per se. We studied the role of polypharmacy and the role of well-known fall risk-increasing drugs on the incidence of injurious falls. Methods: A retrospective observational study was carried out in a population of elderly nursing home residents. An unmatched, post-stratification design for age class, gender and length of stay was adopted. In all, 695 falls were recorded in 293 residents. Results: 221 residents (75.4%) were female and 72 (24.6%) male, and 133 (45.4%) were recurrent fallers. 152 residents sustained no injuries when they fell, whereas injuries were sustained by 141: minor in 95 (67.4%) and major in 46 (32.6%). Only fall dynamics (p = 0.013) and drug interaction between antiarrhythmic or antiparkinson class and polypharmacy regimen (≥7 medications) seem to represent a risk association for injuries (p = 0.024; OR = 4.4; CI 95% 1.21 - 15.36). Conclusion: This work reinforces the importance of routine medication reviews, especially in residents exposed to polypharmacy regimens that include antiarrhythmics or antiparkinson drugs, in order to reduce the risk of fall-related injuries during nursing home stays. abstract_id: PUBMED:28188510 Medication use and risk of falls among nursing home residents: a retrospective cohort study. Background Geriatric falls are leading causes of hospital trauma admissions and injury-related deaths. Medication use is a crucial element among extrinsic risk factors for falls. To reduce fall risk and the prevalence of adverse drug reactions, potentially inappropriate medication (PIM) lists are widely used. Objective Our aim was to investigate the possible predictors of geriatric falls annualized over a 5-year-long period, as well as to evaluate the medication use of nursing home residents. Setting Nursing home residents were recruited from the same institution between 2010 and 2015 in Szeged, Hungary. Method A retrospective epidemiological study was performed. Patient data were analysed for the first 12 months of residency. Chi-squared test and Fisher's test were applied to compare the categorical variables, Student's t test to compare the continuous variables between groups. Binary logistic regression analysis was carried out to determine the association of falls with other variables found significant in univariate analysis. Microsoft Excel, IBM SPSS Statistics (version 23) and R (3.2.2) programs were used for data analysis. Main outcome measure Falls affected by age, gender, number of chronic medications, polypharmacy, PIM meds. Results A total of 197 nursing home residents were included, 150 (76.2%) women and 47 (23.8%) men, 55 fallers (annual fall prevalence rate was 27.9%) and 142 non-fallers. Gender was not a predisposing factor for falls (prevalence in males: 23.4 vs 29.3% in females, p > 0.05). Fallers were older (mean years ± SD; 84.0 ± 7.0) than non-fallers (80.1 ± 9.3, p < 0.01).
The age ≥80 years was a significant risk factor for falls (p < 0.001). The number of chronic medications was higher in male fallers (12.4 ± 4.0) than in non-fallers (6.9 ± 4.2, p < 0.001). Polypharmacy (taking four or more chronic medications) was a significant risk factor for falls (p < 0.01). Those PIMs carrying fall risk were taken by 70.9% of fallers and 75.3% of non-fallers (p > 0.05). Taking pantoprazole, vinpocetine or trimetazidine was a significant risk factor for falls. Conclusion Older age, polypharmacy and the independent use of pantoprazole, vinpocetine, and trimetazidine were found to be major risk factors for falls. Further real-life epidemiological studies are necessary to confirm the role of particular active agents, and to help professionals prescribe, evaluate and review geriatric medication use. abstract_id: PUBMED:33460072 Fall-Related Hospitalizations in Nursing Home Residents Co-Prescribed a Cholinesterase Inhibitor and Beta-Blocker. Background/objectives: To examine the association between hospitalization for a fall-related injury and the co-prescription of a cholinesterase inhibitor (ChEI) among persons with dementia receiving a beta-blocker, and whether this potential drug-drug interaction is modified by frailty. Design: Nested case-control study using population-based administrative databases. Setting: All nursing homes in Ontario, Canada. Participants: Persons with dementia aged 66 and older who received at least one beta-blocker between April 2013 and March 2018 following nursing home admission (n = 19,060). Measurements: Cases were persons with dementia with a hospitalization (emergency department visit or acute care admission) for a fall-related injury with concurrent beta-blocker use. Each case (n = 3,038) was matched 1:1 to a control by age (±1 year), sex, cohort entry year, frailty, and history of fall-related injuries. The association between fall-related injury and exposure to a ChEI in the 90 days prior was examined using multivariable conditional logistic regression. Secondary exposures included ChEI type, daily dose, incident versus prevalent use, and use in the prior 30 days. Subgroup analyses considered frailty, age group, sex, and history of hospitalization for fall-related injuries. Results: Exposure to a ChEI in the prior 90 days occurred among 947 (31.2%) cases and 940 (30.9%) controls. In multivariable models, no association was found between hospitalization for a fall-related injury and prior exposure to a ChEI in persons with dementia dispensed beta-blockers (adjusted odds ratio = .96, 95% confidence interval = .85-1.08). Findings were consistent across secondary exposures and subgroup analyses. Conclusion: Among nursing home residents with dementia receiving beta-blockers, co-prescription of a ChEI was not associated with an increased risk of hospitalization for a fall-related injury. However, we did not assess for its association with falls not leading to hospitalization. This finding could inform clinical guidelines and shared decision making between persons with dementia, caregivers, and clinicians concerning ChEI initiation and/or discontinuation. abstract_id: PUBMED:12919232 Dementia as a risk factor for falls and fall injuries among nursing home residents. Objectives: To compare rates of falling between nursing home residents with and without dementia and to examine dementia as an independent risk factor for falls and fall injuries. Design: Prospective cohort study with 2 years of follow-up.
Setting: Fifty-nine randomly selected nursing homes in Maryland, stratified by geographic region and facility size. Participants: Two thousand fifteen newly admitted residents aged 65 and older. Measurements: During 2 years after nursing home admission, fall data were collected from nursing home charts and hospital discharge summaries. Results: The unadjusted fall rate for residents in the nursing home with dementia was 4.05 per year, compared with 2.33 falls per year for residents without dementia (P<.0001). The effect of dementia on the rate of falling persisted when known risk factors were taken into account. Among fall events, those occurring to residents with dementia were no more likely to result in injury than falls of residents without dementia, but, given the markedly higher rates of falling by residents with dementia, their rate of injurious falls was higher than for residents without dementia. Conclusion: Dementia is an independent risk factor for falling. Although most falls do not result in injury, the fact that residents with dementia fall more often than their counterparts without dementia leaves them with a higher overall risk of sustaining injurious falls over time. Nursing home residents with dementia should be considered important candidates for fall-prevention and fall-injury-prevention strategies. abstract_id: PUBMED:30247773 Low-Dose Trazodone, Benzodiazepines, and Fall-Related Injuries in Nursing Homes: A Matched-Cohort Study. Objectives: To evaluate whether risk of fall-related injuries differs between nursing home (NH) residents newly dispensed low-dose trazodone and those newly dispensed benzodiazepines. Design: Retrospective, matched cohort study in linked, population-based administrative data. Matching was based on propensity score (±0.2 standard deviations of the score as a caliper), age (±1 year), sex, frailty status, and history of dementia. The derived propensity score included demographic characteristics, clinical comorbidities, cognitive and functional status, and risk factors for falls. Setting: All NHs in Ontario, Canada. Participants: Propensity score-matched pairs of residents aged 66 and older who received a full clinical assessment between April 1, 2010, and March 31, 2015 (N=7,791). Measurements: Hospitalization (emergency department visit or acute care admission) for a fall-related injury within 90 days of exposure. Subdistribution hazard functions accounted for competing risk of death. Sensitivity analyses were used to examine falls resulting in hip or wrist fracture only, as well as different lengths of follow-up at 30, 60, and 180 days. Results: Cumulative incidence of a fall-related injury in the 90 days after index was 5.7% for low-dose trazodone users and 6.0% for benzodiazepine users (between-group change=-0.29, 95% confidence interval (CI)=-1.02 to 0.44; hazard ratio=0.94, 95% CI=0.83-1.08). Findings were consistent across sensitivity analyses. Conclusion: New use of low-dose trazodone was no safer with respect to a risk of a fall-related injury than new use of benzodiazepines. Additional studies to compare the effectiveness and risks of low-dose trazodone with those of a variety of psychotropic drug therapies are required in light of increasing trends in the use of trazodone in NHs. abstract_id: PUBMED:30907365 Screening risk and protective factors of nursing home admission. Many aged adults want to stay as long as possible in their own homes.
Hence, it is important to identify factors that can predict nursing home admission, in order to prevent this admission and keep people at home. Several studies have investigated the risk factors of nursing home admission but syntheses are still rare. The present study aimed to identify risk and protective factors for nursing home admission for aged adults. A literature review was conducted using the PubMed search engine. Of 177 relevant reports, 27 were analyzed. We included studies, literature reviews and meta-analyses that together highlighted 59 potential factors. Falls (especially falls causing serious injuries), cognitive impairment, dependency in activities of daily living and stroke were identified as the strongest risk factors. In contrast, living with a spouse, having adult children, receiving a home care program based on case management or being a homeowner were identified as protective factors. This knowledge of risk and protective factors can help our prevention strategies to delay or find alternatives to nursing home admission. abstract_id: PUBMED:15530179 Falls in the nursing home: are they preventable? Introduction: Falls are prevalent in elderly patients residing in nursing homes, with approximately 1.5 falls occurring per nursing home bed-year. Although most are benign and injury-free, 10% to 25% result in hospital admission and/or fractures. Primary care providers for nursing home residents must therefore aim to reduce both the fall rate as well as the rate of fall-related morbidity in the long-term care setting. Interventions have been demonstrated to be successful in reducing falls in community-dwelling elderly patients. However, less evidence supports the efficacy of fall prevention in nursing home residents. Methods: The authors conducted a Medline search using the key words Falls and Nursing Homes. Results: Several studies examined the efficacy of multifaceted intervention programs on reducing falls in nursing homes with varied results. Components of these intervention programs include: environmental assessment, assistive device evaluation and modification, medication changes, gait assessment and training, staff education, exercise programs, hip protector use, and blood pressure evaluation. Current literature supports the use of environmental assessment and intervention in reducing falls in nursing homes, and demonstrates an association between certain medications and falls. However, there are no studies that examine the effect of medication adjustments on fall rates. Also, the literature does not strongly suggest that exercise programs are effective in fall reduction. Although not effective in reducing fall rates, the use of hip protectors appears to result in less fall-related morbidity. Conclusion: More studies must be done to clarify the effects of high-risk medication reduction, the optimal nature and intensity of exercise programs, and patient targeting criteria to maximize the effectiveness of nursing home fall prevention programs. Based on the current literature, an effective multifaceted fall prevention program for nursing home residents should include risk factor assessment and modification, staff education, gait assessment and intervention, assistive device assessment and optimization, as well as environmental assessment and modification. Although there is no association between the use of hip protectors and fall rates, their use should be encouraged because the ultimate goal of any fall prevention program is to prevent fall-related morbidity.
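The retrospective cohort above (PUBMED:28188510) screens candidate predictors with chi-squared, Fisher's and t tests and then fits a binary logistic regression for faller status. A minimal sketch of that final modelling step is shown below; the data frame, its column names and the simulated values are placeholders of my own, not the study's dataset, so the fitted estimates are meaningless by construction.

```python
# Sketch of a binary logistic regression for faller (1) versus non-faller (0).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "faller": rng.integers(0, 2, n),
    "age80plus": rng.integers(0, 2, n),      # 1 if aged 80 years or older
    "polypharmacy": rng.integers(0, 2, n),   # 1 if taking 4 or more chronic medications
})

model = smf.logit("faller ~ age80plus + polypharmacy", data=df).fit(disp=0)
print(model.summary())
print(np.exp(model.params))   # odds ratios for the intercept and each predictor
```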
abstract_id: PUBMED:37841940 Chronic kidney disease and polypharmacy as risk factors for recurrent falls in a nursing home population. Background: It is known that nursing home patients who have sustained a previous fall are at a higher average risk for recurrent falls. Therefore, these patients require closer attention and monitoring for fall prevention. Methods: We conducted a retrospective review in our Level 1 Trauma Center of patients who sustained a ground-level fall in a nursing home from January 2017 to December 2018. Inclusion criteria involved patients aged 65 or older, admitted from nursing homes. Logistic regression analysis was performed to identify factors associated with recurrent falls. Results: A total of 445 patients were identified. Among them, 47 (10.6%) patients sustained recurrent falls. The median age was 83.3 years. The recurrent fall group was more likely to have chronic kidney disease (CKD) (27.1% vs. 13.1%, p = 0.02) and diabetes (47.9% vs. 31%, p = 0.02). The median number of medications taken by a patient was 8.78. Overall, 176 (39.5%) patients sustained any injury, and 25 (5.6%) patients died within the study period. The presence of CKD (odds ratio [OR], 2.34; 95% confidence interval [CI], 1.15-4.76, p = 0.02) and polypharmacy (number of medications of 9 or above) (OR, 2.07; 95% CI, 1.12-3.82, p = 0.02) were independent risk factors for recurrent falls. Conclusions: CKD and polypharmacy were associated with a risk of recurrent falls among nursing home patients. The incidence of falls has a multifactorial etiology, and it is important to identify such risk factors to better prevent the morbidities and mortalities associated with fall-related injuries. abstract_id: PUBMED:16551348 The development of a multidisciplinary fall risk evaluation tool for demented nursing home patients in the Netherlands. Background: Demented nursing home patients are at high risk for falls. Falls and associated injuries can have a considerable influence on the autonomy and quality of life of patients. The prevention of falls among demented patients is therefore an important issue. In order to intervene in an efficient way in this group of patients, it is important to systematically evaluate the fall risk profile of each individual patient so that for each patient tailor-made preventive measures can be taken. Therefore, the objective of the present study is to develop a feasible and evidence based multidisciplinary fall risk evaluation tool to be used for tailoring preventive interventions to the needs of individual demented patients. Methods: To develop this multidisciplinary fall risk evaluation tool we have chosen to combine scientific evidence on the one hand and experts' opinions on the other hand. Firstly, relevant risk factors for falling in elderly persons were gathered from the literature. Secondly, a group of Dutch experts in the field of falls and fall prevention in the elderly were consulted to judge the suitability of these risk factors for use in a multidisciplinary fall risk evaluation tool for demented nursing home patients. Thirdly, in order to generate a compact list of the most relevant risk factors for falling in demented elderly, all risk factors had to fulfill a set of criteria indicating their relevance for this specific target population. Lastly, the final list of risk factors resulting from the above mentioned procedure was presented to the expert group. The members were also asked to give their opinion about the practical use of the tool.
Results: The multidisciplinary fall risk evaluation tool we developed includes the following items: previous falls, use of medication, locomotor functions, and (correct) choice and use of assistive and protective devices. The tool was developed for the multidisciplinary teams of the nursing homes. Conclusion: This evidence- and practice-based multidisciplinary fall risk evaluation tool targets preventive interventions aimed at preventing falls and their negative consequences in demented nursing home patients. abstract_id: PUBMED:7594157 Increased fall rates in nursing home residents after relocation to a new facility. Objective: To examine the change in fall rates after relocation of nursing home residents from one facility to another and to identify resident risk factors for changes in falls following relocation. Design: Retrospective review of incident reports to identify falls, followed by chart review of a longitudinal cohort. Setting: An academic nursing home whose residents and programs moved from a 125-year-old, 233-bed facility to a newly constructed 255-bed facility. Patients: A total of 210 nursing home residents were moved from one facility to the other. Of these, 133 individuals who lived in the old facility for 9 months before the move and in the new facility for 6 months after the move formed the longitudinal cohort. Results: In the 3 months after the move, the fall rate increased from 0.34 to 0.70 falls per resident per quarter in the entire nursing home population (P < .001) and subsequently returned to baseline. In the longitudinal subgroup the fall rate went from 0.26 to 0.60 (P < .005). Fall-related injuries in the longitudinal subgroup went from 0.058 injuries per resident per quarter at baseline to 0.15 (P < .001). However, the injury rate per fall did not change. There were no characteristics associated with being a faller in the quarter before the move. Dementia and not being bedbound were associated with being a faller after the move. Individuals who were ambulatory or wheelchair mobile had a significant risk of increasing the number of falls after the move, and individuals with dementia had a strong but insignificant trend in this direction. Conclusions: The incidence of falling doubled after relocation of nursing home residents to a new facility. An increase in falls was seen in individuals who were not bedbound. Although nursing home relocation may be a relatively uncommon occurrence, it is reasonable to infer that older individuals who change their living environments are at increased risk for falls and fall-related injuries. Answer: Yes, polypharmacy is considered a risk factor for fall-related injuries in a nursing home setting. Studies have shown that the use of multiple medications, particularly when they include drugs known to increase fall risk, is associated with a higher incidence of falls among elderly nursing home residents. For instance, a retrospective observational study found that drug interactions between antiarrhythmic or antiparkinson class drugs and polypharmacy regimens (defined as 7 or more medications) were associated with a risk of injuries from falls (PUBMED:20003327). Another study highlighted that polypharmacy, defined as taking four or more chronic medications, was a significant risk factor for falls (PUBMED:28188510). Additionally, chronic kidney disease and polypharmacy were identified as independent risk factors for recurrent falls among nursing home patients (PUBMED:37841940).
However, it is important to note that not all studies have found a direct association between certain medications and increased fall risk. For example, a study examining the co-prescription of a cholinesterase inhibitor and beta-blocker found no association with an increased risk of hospitalization for a fall-related injury among nursing home residents with dementia (PUBMED:33460072). Similarly, a study comparing the risk of fall-related injuries between nursing home residents newly dispensed low-dose trazodone and those newly dispensed benzodiazepines did not find a significant difference in risk between the two groups (PUBMED:30247773). Overall, the evidence suggests that while polypharmacy is a risk factor for falls, the relationship between specific medications and fall risk can vary. Therefore, routine medication reviews and careful management of drug regimens, especially in residents exposed to polypharmacy, are recommended to reduce the risk of fall-related injuries in nursing home settings (PUBMED:20003327).
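Two of the abstracts above work with fall rates rather than simple proportions: PUBMED:20003327 analyses injurious falls with Poisson regression and PUBMED:12919232 compares falls per resident-year. The sketch below shows the usual way such a rate model is set up, with person-time entered as an exposure term; the simulated data and covariate names are placeholders, not the published models.

```python
# Sketch of a fall-rate model: Poisson regression with person-time as exposure.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "dementia": rng.integers(0, 2, n),
    "polypharmacy": rng.integers(0, 2, n),   # e.g. a regimen of 7 or more medications
    "years": rng.uniform(0.5, 2.0, n),       # person-years of observation
})
df["falls"] = rng.poisson(1.5 * df["years"])  # simulated fall counts

X = sm.add_constant(df[["dementia", "polypharmacy"]])
model = sm.GLM(df["falls"], X, family=sm.families.Poisson(), exposure=df["years"]).fit()
print(np.exp(model.params))   # exponentiated intercept = baseline rate; others = rate ratios
```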
Instruction: Does socioeconomic position moderate the effects of race on cardiovascular disease mortality? Abstracts: abstract_id: PUBMED:15724767 Does socioeconomic position moderate the effects of race on cardiovascular disease mortality? Objective: Cardiovascular disease (CVD) rates differ markedly by minority status, with younger Blacks having some of the highest CVD mortality rates in the United States. A major objective of this study was to assess whether socioeconomic position moderates the effects of race or minority status on CVD mortality. Design: The sample included 443 Black and 21,182 White men, and 415 Black and 24,929 White women, 45 years and older, who died of CVD from 1992-1998, and who had lived in the Twin Cities 5-county area. Using individual and neighborhood level measures of socioeconomic position, we hypothesized that socioeconomic position would moderate the effects of race on CVD mortality. Test hypotheses were analyzed using Poisson regression analysis. Results: Socioeconomic position moderated the effects of race on CVD mortality among older men, but not in older women. Older Black men who lived in more impoverished neighborhoods had significantly and disproportionately higher CVD mortality rates than did older White men living in more impoverished neighborhoods; this was not the case among older Black and White men living in less impoverished neighborhoods. Race was independently related to CVD mortality among younger men and women, with younger Black men and women having significantly higher CVD mortality rates than younger White men and women. The Black-White rate for Black women was twice that of White women. Conclusion: Socioeconomic position as measured by neighborhood poverty can moderate the effects of race on CVD mortality in older Black and White men. This may not have been as apparent had socioeconomic position not been treated as a major variable of interest, and measured at multiple levels. abstract_id: PUBMED:11459398 The socioeconomic position of employed women, risk factors and mortality. Many studies have demonstrated the graded association between socioeconomic position and health. Few of these studies have examined the cumulative effect of socioeconomic position throughout the lifecourse, and even fewer have included women. Those that have explored gender differences affirm the importance of studying the factors that predict women and men's health separately. This study addresses the associations between cross-sectional and longitudinal socioeconomic position, risk factors for cardiovascular disease and mortality from various causes. Analyses are based on data from a cohort of working Scottish women recruited between 1970 and 1973. Five socioeconomic measures were explored in relation to diastolic blood pressure, plasma cholesterol concentration, body mass index, forced expiratory volume in 1 s (FEV1). amount of recreational exercise taken, cigarette smoking and alcohol consumption. In general, for each of the five measures of socioeconomic position, there were significant differences in at least one of the age-adjusted physiological risk factors for cardiovascular disease (diastolic blood pressure, plasma cholesterol concentration, body mass index, FEV1). 
There were also significant differences in the percentage of current cigarette smokers according to different measures of socioeconomic position, although this was not the case for the other behavioural risk factors for cardiovascular disease (amount of recreational exercise taken, and alcohol consumption). Measures of socioeconomic position were also examined in relation to cause of death for the women who died before 1 January 1999. After adjusting for age and risk factors, a composite measure of lifetime socioeconomic experience was a more potent predictor of all-cause mortality and mortality from cardiovascular disease than other measures of socioeconomic position. It therefore seems that conventional measures of socioeconomic position, estimated at one point in time, do not adequately capture the effects of socioeconomic circumstances on the risk of mortality among employed women. Thus, a broader range of explanatory factors for mortality differentials than currently exists must be considered, and must include consideration of factors operating throughout the lifecourse. abstract_id: PUBMED:37035105 Individual-Level Socioeconomic Position and Long-Term Prognosis in Danish Heart-Transplant Recipients. Socioeconomic deprivation can limit access to healthcare. Important gaps persist in the understanding of how individual indicators of socioeconomic disadvantage may affect clinical outcomes after heart transplantation. We sought to examine the impact of individual-level socioeconomic position (SEP) on prognosis of heart-transplant recipients. A population-based study including all Danish first-time heart-transplant recipients (n = 649) was conducted. Data were linked across complete national health registers. Associations were evaluated between SEP and all-cause mortality and first-time major adverse cardiovascular event (MACE) during follow-up periods. The half-time survival was 15.6 years (20-year period). In total, 330 (51%) of recipients experienced a first-time cardiovascular event and the most frequent was graft failure (42%). Both acute myocardial infarction and cardiac arrest occurred in ≤5 of recipients. Low educational level was associated with increased all-cause mortality 10-20 years post-transplant (adjusted hazard ratio [HR] 1.95, 95% confidence interval [CI] 1.19-3.19). During 1-10 years post-transplant, low educational level (adjusted HR 1.66, 95% CI 1.14-2.43) and low income (adjusted HR 1.81, 95% CI 1.02-3.22) were associated with a first-time MACE. In a country with free access to multidisciplinary team management, low levels of education and income were associated with a poorer prognosis after heart transplantation. abstract_id: PUBMED:33034339 Associations of Depressive Symptoms With All-Cause and Cause-Specific Mortality by Race in a Population of Low Socioeconomic Status: A Report From the Southern Community Cohort Study. Depression is a leading cause of disability in the United States, but its impact on mortality rates among racially diverse populations of low socioeconomic status is largely unknown. Using data from the Southern Community Cohort Study, 2002-2015, we prospectively evaluated the associations of depressive symptoms with all-cause and cause-specific mortality in 67,781 Black (72.3%) and White (27.7%) adults, a population predominantly with a low socioeconomic status. Baseline depressive symptoms were assessed using the 10-item Center for Epidemiological Studies Depression Scale. The median follow-up time was 10.0 years.
Multivariate Cox regression was used to estimate hazard ratios and 95% confidence intervals for death in association with depressive symptoms. Mild, moderate, and severe depressive symptoms were associated with increased all-cause (hazard ratio (HR) = 1.12, 95% confidence interval (CI): 1.03, 1.22; HR = 1.17, 95% CI: 1.06, 1.29; HR = 1.15, 95% CI: 1.03, 1.28, respectively) and cardiovascular disease-associated death (HR = 1.23, 95% CI: 1.05, 1.44; HR = 1.18, 95% CI: 0.98, 1.42; HR = 1.43, 95% CI: 1.17, 1.75, respectively) in Whites but not in Blacks (P for interaction < 0.001 for both). Mild, moderate, or severe depressive symptoms were associated with increased rates of external-cause mortality in both races (HR = 1.24, 95% CI: 1.05, 1.46; HR = 1.31, 95% CI: 1.06, 1.61; HR = 1.42, 95% CI: 1.11, 1.81, respectively; for all study subjects, P for interaction = 0.48). No association was observed for cancer-associated deaths. Our study showed that the association between depression and death differed by race and cause of death in individuals with a low socioeconomic status. abstract_id: PUBMED:24524505 Mortality differentials by immigrant groups in Sweden: the contribution of socioeconomic position. Objectives: We studied mortality differentials between specific groups of foreign-born immigrants in Sweden and whether socioeconomic position (SEP) could account for such differences. Methods: We conducted a follow-up study of 1 997 666 men and 1 964 965 women ages 30 to 65 years based on data from national Swedish total population registers. We examined mortality risks in the 12 largest immigrant groups in Sweden between 1998 and 2006 using Cox regression. We also investigated deaths from all causes, circulatory disease, neoplasms, and external causes. Results: We found higher all-cause mortality among many immigrant categories, although some groups had lower mortality. When studying cause-specific mortality, we found the largest differentials in deaths from circulatory disease, whereas disparities in mortality from neoplasms were smaller. SEP, especially income and occupational class, accounted for most of the mortality differentials by country of birth. Conclusions: Our findings stressed that different aspects of SEP were not interchangeable in relation to immigrant health. Although policies aimed at improving immigrants' socioeconomic conditions might be beneficial for health and longevity, our findings indicated that such policies might have varying effects depending on the specific country of origin and cause of death. abstract_id: PUBMED:27621991 Widening Socioeconomic and Racial Disparities in Cardiovascular Disease Mortality in the United States, 1969-2013. Objectives: This study examined trends and socioeconomic and racial/ethnic disparities in cardiovascular disease (CVD) mortality in the United States between 1969 and 2013. Methods: National vital statistics data and the National Longitudinal Mortality Study were used to estimate racial/ethnic and area- and individual-level socioeconomic disparities in CVD mortality over time. Rate ratios and log-linear regression were used to model mortality trends and differentials. Results: Between 1969 and 2013, CVD mortality rates decreased by 2.66% per year for whites and 2.12% for blacks. Racial disparities and socioeconomic gradients in CVD mortality increased substantially during the study period. In 2013, blacks had 30% higher CVD mortality than whites and 113% higher mortality than Asians/Pacific Islanders.
Compared to those in the most affluent group, individuals in the most deprived area group had 11% higher CVD mortality in 1969 but 40% higher mortality in 2007-2011. Education, income, and occupation were inversely associated with CVD mortality in both men and women. Men and women with low education and incomes had 46-76% higher CVD mortality risks than their counterparts with high education and income levels. Men in clerical, service, farming, craft, repair, construction, and transport occupations, and manual laborers had 30-58% higher CVD mortality risks than those employed in executive and managerial occupations. Conclusions And Global Health Implications: Socioeconomic and racial disparities in CVD mortality are marked and have increased over time because of faster declines in mortality among the affluent and majority populations. Disparities in CVD mortality may reflect inequalities in the social environment, behavioral risk factors such as smoking, obesity, physical inactivity, disease prevalence, and healthcare access and treatment. With rising prevalence of many chronic disease risk factors, the global burden of cardiovascular diseases is expected to increase further, particularly in low- and middle-income countries where over 80% of all CVD deaths occur. abstract_id: PUBMED:29685862 Usage of a Digital Health Workplace Intervention Based on Socioeconomic Environment and Race: Retrospective Secondary Cross-Sectional Study. Background: Digital health tools have been associated with improvement of cardiovascular disease (CVD) risk factors and outcomes; however, the differential use of these technologies among various ethnic and economic classes is not well known. Objective: To identify the effect of socioeconomic environment on usage of a digital health intervention. Methods: A retrospective secondary cross-sectional analysis of a workplace digital health tool use, in association with a change in intermediate markers of CVD, was undertaken over the course of one year in 26,188 participants in a work health program across 81 organizations in 42 American states between 2011 and 2014. Baseline demographic data for participants included age, sex, race, home zip code, weight, height, blood pressure, glucose, lipids, and hemoglobin A1c. Follow-up data was then obtained in 90-day increments for up to one year. Using publicly available data from the American Community Survey, we obtained the median income for each zip code as a marker for socioeconomic status via median household income. Digital health intervention usage was analyzed based on socioeconomic status as well as age, gender, and race. Results: The cohort was found to represent a wide sample of socioeconomic environments from a median income of US $11,000 to $171,000. As a whole, doubling of income was associated with 7.6% increase in log-in frequency. However, there were marked differences between races. Black participants showed a 40.5% increase and Hispanic participants showed a 57.8% increase in use with a doubling of income, compared to 3% for Caucasian participants. Conclusions: The current study demonstrated that socioeconomic data confirms no relevant relationship between socioeconomic environment and digital health intervention usage for Caucasian users. However, a strong relationship is present for black and Hispanic users. Thus, socioeconomic environment plays a prominent role only in minority groups that represent a high-risk group for CVD. 
This finding identifies a need for digital health apps that are effective in these high-risk groups. abstract_id: PUBMED:33653083 County-Level Factors Associated With Cardiovascular Mortality by Race/Ethnicity. Background Persistent racial/ethnic disparities in cardiovascular disease (CVD) mortality are partially explained by healthcare access and socioeconomic, demographic, and behavioral factors. Little is known about the association between race/ethnicity-specific CVD mortality and county-level factors. Methods and Results Using 2017 county-level data, we studied the association between race/ethnicity-specific CVD age-adjusted mortality rate (AAMR) and county-level factors (demographics, census region, socioeconomics, CVD risk factors, and healthcare access). Univariate and multivariable linear regressions were used to estimate the association between these factors; R2 values were used to assess the factors that accounted for the greatest variation in CVD AAMR by race/ethnicity (non-Hispanic White, non-Hispanic Black, and Hispanic/Latinx individuals). There were 659 740 CVD deaths among non-Hispanic White individuals in 2698 counties; 100 475 deaths among non-Hispanic Black individuals in 717 counties; and 49 493 deaths among Hispanic/Latinx individuals across 267 counties. Non-Hispanic Black individuals had the highest mean CVD AAMR (320.04 deaths per 100 000 individuals), whereas Hispanic/Latinx individuals had the lowest (168.42 deaths per 100 000 individuals). The highest CVD AAMRs across all racial/ethnic groups were observed in the South. In unadjusted analyses, the greatest variation (R2) in CVD AAMR was explained by physical inactivity for non-Hispanic White individuals (32.3%), median household income for non-Hispanic Black individuals (24.7%), and population size for Hispanic/Latinx individuals (28.4%). In multivariable regressions using county-level factor categories, the greatest variation in CVD AAMR was explained by CVD risk factors for non-Hispanic White individuals (35.3%), socioeconomic factors for non-Hispanic Black (25.8%), and demographic factors for Hispanic/Latinx individuals (34.9%). Conclusions The associations between race/ethnicity-specific age-adjusted CVD mortality and county-level factors differ significantly. Interventions to reduce disparities may benefit from being designed accordingly. abstract_id: PUBMED:26794164 Socioeconomic Position and Premature Mortality in the AusDiab Cohort of Australian Adults. Objectives: To determine the association of socioeconomic position indicators with mortality, without and with adjustment for modifiable risk factors. Methods: We examined the relationships of 2 area-based indices and educational level with mortality among 9338 people (including 8094 younger than 70 years at baseline) of the Australian Diabetes Obesity and Lifestyle (AusDiab) from 1999-2000 until November 30, 2012. Results: Age- and gender-adjusted premature mortality (death before age 70 years) was more likely among those living in the most disadvantaged areas versus least disadvantaged (hazard ratio [HR] = 1.48; 95% confidence interval [CI] = 1.08, 2.01), living in inner regional versus major urban areas (HR = 1.36; 95% CI = 1.07, 1.73), or having the lowest educational level versus the highest (HR = 1.64; 95% CI = 1.17, 2.30). 
The contribution of modifiable risk factors (smoking status, diet quality, physical activity, stress, cardiovascular risk factors) in the relationship between 1 area-based index or educational level and mortality was more apparent as age of death decreased. Conclusions: The relation of area-based socioeconomic position to premature mortality is partly mediated by behavioral and cardiovascular risk factors. Such results could influence public health policies. abstract_id: PUBMED:14977644 Childhood and adult socioeconomic conditions and 31-year mortality risk in women. Links between low socioeconomic position and poor health are well established. Most previous research, however, has focused on middle-aged males and has relied on limited socioeconomic data, usually measured at one point over the life course. This paper examines all-cause, cardiovascular, and noncardiovascular mortality in women in relation to socioeconomic position at different stages of the life course. Information was collected in 1965, 1974, 1983, and 1994 and included recalled father's occupation and education as a measure of childhood socioeconomic position and the respondent's household income, education and occupation, and spouse's occupation from a sample of 3,087 women participating in the Alameda County Study. Cox regression models were used to estimate hazard ratios for risk of death. Lower childhood socioeconomic position was associated with an increased mortality due to cardiovascular disease (hazard ratio (HR) = 1.29, 95% confidence interval (CI): 1.09, 1.54) but was unrelated to death due to other causes (HR = 0.97, 95% CI: 0.82, 1.15). Overall mortality was higher among women reporting the lowest level of education (HR = 1.17, 95% CI: 0.99, 1.39), but education was most strongly related to noncardiovascular disease-related deaths (HR = 1.41, 95% CI: 1.10, 1.81). Low household income was also associated with higher mortality, for both cardiovascular disease-related (HR = 1.47, 95% CI: 1.14, 1.91) and noncardiovascular disease-related (HR = 1.30, 95% CI: 1.03, 1.63) deaths. Both early and later life indicators of socioeconomic position contribute to increased mortality risk among socioeconomically disadvantaged women, but these effects appear stronger for cardiovascular mortality. Answer: Yes, socioeconomic position does moderate the effects of race on cardiovascular disease (CVD) mortality. Studies have shown that socioeconomic position, as measured by neighborhood poverty, can moderate the effects of race on CVD mortality in older Black and White men. Specifically, older Black men who lived in more impoverished neighborhoods had significantly and disproportionately higher CVD mortality rates than did older White men living in more impoverished neighborhoods; this was not the case among older Black and White men living in less impoverished neighborhoods (PUBMED:15724767). Additionally, individual-level socioeconomic position (SEP) has been found to impact the prognosis of heart-transplant recipients, with low educational level and low income being associated with increased all-cause mortality and major adverse cardiovascular events post-transplant (PUBMED:37035105). Furthermore, socioeconomic and racial disparities in CVD mortality have been observed to increase over time, with faster declines in mortality among the affluent and majority populations (PUBMED:27621991). 
Socioeconomic factors, especially income and occupational class, accounted for most of the mortality differentials by country of birth among immigrant groups in Sweden (PUBMED:24524505). In the United States, racial disparities and socioeconomic gradients in CVD mortality have increased substantially, with blacks having higher CVD mortality than whites and Asians/Pacific Islanders, and individuals in the most deprived area group having higher mortality than those in the most affluent group (PUBMED:27621991). Moreover, the usage of digital health interventions, which can improve CVD risk factors and outcomes, has been found to be associated with socioeconomic status, particularly among minority groups. Black and Hispanic participants showed a significant increase in the use of digital health tools with a doubling of income, compared to Caucasian participants (PUBMED:29685862). In summary, socioeconomic position is a significant moderator of the effects of race on CVD mortality, with lower socioeconomic status being associated with higher mortality rates, particularly among racial minorities. This suggests that interventions to reduce CVD mortality disparities should consider both socioeconomic and racial factors.
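The hazard ratios, confidence intervals, and "P for interaction" values quoted throughout these abstracts all come from the same arithmetic on Cox model coefficients. The minimal Python sketch below shows that arithmetic; the coefficients and standard errors are hypothetical (chosen only so the first call roughly reproduces the HR of 1.12, 95% CI 1.03-1.22 quoted above) and are not taken from any of the cited studies.

```python
# Illustrative sketch, not from any cited study: how hazard ratios, 95% CIs,
# and interaction P values are typically derived from a fitted Cox model.
import math

def hr_with_ci(beta, se, z=1.96):
    """Convert a log-hazard coefficient and its standard error into
    a hazard ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

def wald_p(beta, se):
    """Two-sided Wald P value for a single coefficient (normal approximation)."""
    z = abs(beta / se)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Hypothetical main-effect coefficient: exp(0.113) ~ 1.12 (CI ~ 1.03-1.22).
beta_main, se_main = 0.113, 0.043
print(hr_with_ci(beta_main, se_main))

# Effect moderation ("P for interaction") is tested on the coefficient of a
# race-by-exposure product term in the same model; values here are hypothetical.
beta_int, se_int = 0.35, 0.10
print(wald_p(beta_int, se_int))
```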
Instruction: Is higher ASA class associated with an increased incidence of adverse events during procedural sedation in a pediatric emergency department? Abstracts: abstract_id: PUBMED:21465695 Is higher ASA class associated with an increased incidence of adverse events during procedural sedation in a pediatric emergency department? Objective: To prospectively investigate whether American Society of Anesthesiologists (ASA) class, as assigned by nonanesthesiologists, is associated with adverse events during procedural sedation in a pediatric emergency department. Methods: A prospectively collected database of children aged 0 to 21 years undergoing procedural sedation in the emergency department of an urban, tertiary care, children's hospital was retrospectively reviewed. This database included clinical and demographic characteristics, including assigned ASA class. It also included information relative to the procedure, the sedation, and any complications related to the sedation. Complications were defined a priori as persistent oxygen desaturation to less than 93% on pulse oximetry requiring supplemental oxygen, bronchospasm, dizziness, apnea, seizure, hiccoughs, laryngospasm, stridor, arrhythmia, hypotension, rash, vomiting, aspiration, or a disinhibition/agitation/dysphoria emergence reaction. Main outcome measure was the incidence of complications relative to ASA class. Results: Procedural sedation was performed in the emergency department 1232 times during the study period; 30 sedations did not have either ASA class or occurrence of a complication recorded. Thus, 1202 sedations were included in the study. Nine hundred eighty-eight patients were classified as ASA class 1, whereas 214 were classified as ASA class 2 or greater. There were a total of 215 adverse events in the study population. Most of these were hypoxia (185 total) and were more likely to occur in patients with an ASA class 2 or greater (P = 0.021). Conclusions: Adverse events during procedural sedation are more common in patients with higher ASA class. abstract_id: PUBMED:24421451 Adverse events associated with procedural sedation in pediatric patients in the emergency department. Purpose: To determine the agents used by emergency medicine (EM) physicians in pediatric procedural sedation and the associated adverse events (AEs) and to provide recommendations for optimizing drug therapy in pediatric patients. Methods: We conducted a prospective study at Stanford Hospital's pediatric emergency department (ED) from April 2007 to April 2008 to determine the medications most frequently used in pediatric procedural sedation as well as their effectiveness and AEs. Patients, 18 years old or younger, who required procedural sedation in the pediatric ED were eligible for the study. The data collected included medical record number, sex, age, height, weight, procedure type and length, physician, and agents used. For each agent, the dose, route, time from administration to onset of sedation, duration of sedation, AEs, and sedation score were recorded. Use of supplemental oxygen and interventions during procedural sedation were also recorded. Results: We found that in a convenience sample of 196 children (202 procedures) receiving procedural sedation in a university-based ED, 8 different medications were used (ketamine, etomidate, fentanyl, hydromorphone, methohexital, midazolam, pentobarbital, and thiopental). Ketamine was the most frequently used medication (88%), regardless of the procedure. 
Only twice in the study was the medication that was initially used for procedural sedation changed completely. Fracture reduction was the most frequently performed procedure (41%), followed by laceration/suture repair (32%). There were no serious AEs reported. Conclusion: EM-trained physicians can safely perform pediatric procedural sedation in the ED. In the pediatric ED, the most common procedure requiring conscious sedation is fracture reduction, with ketamine as the preferred agent. abstract_id: PUBMED:32588587 Incidence and predictors of respiratory adverse events in children undergoing procedural sedation with intramuscular ketamine in a paediatric emergency department. Introduction: Although ketamine is one of the commonest medications used in procedural sedation of children, to our knowledge, there is currently no published report on predictors of respiratory adverse events during ketamine sedation in Asian children. We aimed to determine the incidence of and factors associated with respiratory adverse events in children undergoing procedural sedation with intramuscular (IM) ketamine in a paediatric emergency department (ED) in Singapore. Methods: A retrospective analysis was conducted of all children who underwent procedural sedation with IM ketamine in the paediatric ED between 1 April 2013 and 31 October 2017. Demographics and epidemiological data, including any adverse events and interventions, were extracted electronically from the prospective paediatric sedation database. The site of procedure was determined through reviewing medical records. Descriptive statistics were used for incidence and baseline characteristics. Univariate and multivariate logistic regression analyses were performed to determine significant predictors. Results: Among 5,476 children, 102 (1.9%) developed respiratory adverse events. None required intubation or cardiopulmonary resuscitation. Only one required bag-valve-mask ventilation. The incidence rate was higher in children aged less than three years, at 3.6% compared to 1.0% in older children (odds ratio [OR] 3.524, 95% confidence interval [CI] 2.354-5.276; p < 0.001). Higher initial ketamine dose (adjusted OR 2.061, 95% CI 1.371-3.100; p = 0.001) and the type of procedure (adjusted OR 0.190; 95% CI 0.038-0.953; p = 0.044) were significant independent predictors. Conclusion: The overall incidence of respiratory adverse events was 1.9%. Age, initial dose of IM ketamine and type of procedure were significant predictors. abstract_id: PUBMED:28433211 Adverse Events During a Randomized Trial of Ketamine Versus Co-Administration of Ketamine and Propofol for Procedural Sedation in a Pediatric Emergency Department. Background: The co-administration of ketamine and propofol (CoKP) is thought to maximize the beneficial profile of each medication, while minimizing the respective adverse effects of each medication. Objective: Our objective was to compare adverse events between ketamine monotherapy (KM) and CoKP for procedural sedation and analgesia (PSA) in a pediatric emergency department (ED). Methods: This was a prospective, randomized, single-blinded, controlled trial of KM vs. CoKP in patients between 3 and 21 years of age. The attending physician administered either ketamine 1 mg/kg i.v. or ketamine 0.5 mg/kg and propofol 0.5 mg/kg i.v. The physician could administer up to three additional doses of ketamine (0.5 mg/kg/dose) or ketamine/propofol (0.25 mg/kg/dose of each).
Adverse events (e.g., respiratory events, cardiovascular events, unpleasant emergence reactions) were recorded. Secondary outcomes included efficacy, recovery time, and satisfaction scores. Results: Ninety-six patients were randomized to KM and 87 patients were randomized to CoKP. There was no difference in adverse events or type of adverse event, except nausea was more common in the KM group. Efficacy of PSA was higher in the KM group (99%) compared to the CoKP group (90%). Median recovery time was the same. Satisfaction scores by providers, including nurses, were higher for KM, although parents were equally satisfied with both sedation regimens. Conclusions: We found no significant differences in adverse events between the KM and CoKP groups. While CoKP is a reasonable choice for pediatric PSA, our study did not demonstrate an advantage of this combination over KM. abstract_id: PUBMED:33323292 Hunger Games: Impact of Fasting Guidelines for Orthopedic Procedural Sedation in the Pediatric Emergency Department. Background: Fasting guidelines for pediatric procedural sedation have historically been controversial. Recent literature suggests that there is no difference in adverse events regardless of fasting status. Objectives: The goal of this study was to examine adverse outcomes and departmental efficiency when fasting guidelines are not considered during pediatric emergency department (PED) sedation for orthopedic interventions. Methods: Retrospective chart review identified 2674 patients who presented to a level I PED and required procedural sedation for orthopedic injuries between February 2011 and July 2018. This was a level III, retrospective cohort study. Patients were categorized into the following groups: already within American Society of Anesthesiologists (ASA) fasting guidelines on presentation to the PED (n = 671 [25%]), had procedural sedation not within the ASA guidelines (n = 555 [21%]), and had procedural sedation after fasting in the PED to meet ASA guidelines (n = 1448 [54%]). Primary outcomes were length of stay, time from admission to start of sedation, length of sedation, time from end of sedation to discharge, and adverse events. Discussion: There was a significant difference in the length of stay and time from admission to sedation, both approximately 80 min longer in those with procedural sedation after fasting in the PED to meet ASA guidelines (p < 0.001). There was no significant difference among groups in length of sedation or time to discharge after sedation. Adverse events were uncommon, with only 55 total adverse events (0.02%). Vomiting during the recovery phase was the most common (n = 17 [0.006%]). Other notable adverse events included nine hypoxic events (0.003%) and five seizures (0.002%). There was no significant difference in adverse events among the groups. Conclusions: Length of stay in the PED was significantly longer if ASA fasting guidelines were followed for children requiring sedation for orthopedic procedures. This is a substantial delay in a busy PED where beds and resources are at a premium. Although providing similar care with equivalent outcomes, the value of spending less time in the PED is evident. Overall, adverse events related to sedation are rare and not related to fasting guidelines. abstract_id: PUBMED:15520704 Preprocedural fasting and adverse events in procedural sedation and analgesia in a pediatric emergency department: are they related?
Study Objective: Fasting time before procedural sedation and analgesia in a pediatric emergency department (ED) was recently reported to have no association with the incidence of adverse events. This study further investigates preprocedural fasting and adverse events. Methods: Data were analyzed from a prospectively generated database comprising consecutive sedation events from June 1996 to March 2003. Comparisons were made on the incidence of adverse events according to length of preprocedural fasting time. Results: Two thousand four hundred ninety-seven patients received procedural sedation and analgesia. Four hundred twelve patients were excluded for receiving oral or intranasal drugs (n=95) or for receiving sedation for bronchoscopy by nonemergency physicians (n=317). A total of 2,085 patients received parenteral sedation by emergency physicians. Age range was 19 days to 32.1 years (median age 6.7 years); 59.9% were male patients. Adverse events observed included desaturations (169 [8.1%]), vomiting (156 [7.5%]), apnea (16 [0.8%]), and laryngospasm (3 [0.1%]). Fasting time was documented in 1,555 (74.6%) patients. Median fasting time before sedation was 5.1 hours (range 5 minutes to 32.5 hours). When the incidence of adverse events was compared among patients according to fasting time in hours (0 to 2, 2 to 4, 4 to 6, 6 to 8, >8, and not documented), no significant difference was found. No patients experienced clinically apparent aspiration. Conclusion: No association was found between preprocedural fasting and the incidence of adverse events occurring with procedural sedation and analgesia. abstract_id: PUBMED:28828486 Risk Factors for Adverse Events in Emergency Department Procedural Sedation for Children. Importance: Procedural sedation for children undergoing painful procedures is standard practice in emergency departments worldwide. Previous studies of emergency department sedation are limited by their single-center design and are underpowered to identify risk factors for serious adverse events (SAEs), thereby limiting their influence on sedation practice and patient outcomes. Objective: To examine the incidence and risk factors associated with sedation-related SAEs. Design, Setting, And Participants: This prospective, multicenter, observational cohort study was conducted in 6 pediatric emergency departments in Canada between July 10, 2010, and February 28, 2015. Children 18 years or younger who received sedation for a painful emergency department procedure were enrolled in the study. Of the 9657 patients eligible for inclusion, 6760 (70.0%) were enrolled and 6295 (65.1%) were included in the final analysis. Exposures: The primary risk factor was receipt of sedation medication. The secondary risk factors were demographic characteristics, preprocedural medications and fasting status, current or underlying health risks, and procedure type. Main Outcomes And Measures: Four outcomes were examined: SAEs, significant interventions performed in response to an adverse event, oxygen desaturation, and vomiting. Results: Of the 6295 children included in this study, 4190 (66.6%) were male and the mean (SD) age was 8.0 (4.6) years. Adverse events occurred in 736 patients (11.7%; 95% CI, 6.4%-16.9%). Oxygen desaturation (353 patients [5.6%]) and vomiting (328 [5.2%]) were the most common of these adverse events. There were 69 SAEs (1.1%; 95% CI, 0.5%-1.7%), and 86 patients (1.4%; 95% CI, 0.7%-2.1%) had a significant intervention.
Use of ketamine hydrochloride alone resulted in the lowest incidence of SAEs (17 [0.4%]) and significant interventions (37 [0.9%]). The incidence of adverse sedation outcomes varied significantly with the type of sedation medication. Compared with ketamine alone, propofol alone (3.7%; odds ratio [OR], 5.6; 95% CI, 2.3-13.1) and the combinations of ketamine and fentanyl citrate (3.2%; OR, 6.5; 95% CI, 2.5-15.2) and ketamine and propofol (2.1%; OR, 4.4; 95% CI, 2.3-8.7) had the highest incidence of SAEs. The combinations of ketamine and fentanyl (4.1%; OR, 4.0; 95% CI, 1.8-8.1) and ketamine and propofol (2.5%; OR, 2.2; 95% CI, 1.2-3.8) had the highest incidence of significant interventions. Conclusions And Relevance: The incidence of adverse sedation outcomes varied significantly with type of sedation medication. Use of ketamine only was associated with the best outcomes, resulting in significantly fewer SAEs and interventions than ketamine combined with propofol or fentanyl. abstract_id: PUBMED:27311910 Incidence of adverse events in paediatric procedural sedation in the emergency department: a systematic review and meta-analysis. Objective And Design: We conducted a systematic review and meta-analysis to evaluate the incidence of adverse events in the emergency department (ED) during procedural sedation in the paediatric population. Randomised controlled trials and observational studies from the past 10 years were included. We adhere to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Setting: ED. Participants: Children. Interventions: Procedural sedation. Outcomes: Adverse events like vomiting, agitation, hypoxia and apnoea. Meta-analysis was performed with random-effects model and reported as incidence rates with 95% CIs. Results: A total of 1177 studies were retrieved for screening and 258 were selected for full-text review. 41 studies reporting on 13 883 procedural sedations in 13 876 children (≤18 years) were included. The most common adverse events (all reported per 1000 sedations) were: vomiting 55.5 (CI 45.2 to 65.8), agitation 17.9 (CI 12.2 to 23.7), hypoxia 14.8 (CI 10.2 to 19.3) and apnoea 7.1 (CI 3.2 to 11.0). The need to intervene with either bag valve mask, oral airway or positive pressure ventilation occurred in 5.0 per 1000 sedations (CI 2.3 to 7.6). The incidences of severe respiratory events were: 34 cases of laryngospasm among 8687 sedations (2.9 per 1000 sedations, CI 1.1 to 4.7; absolute rate 3.9 per 1000 sedations), 4 intubations among 9136 sedations and 0 cases of aspiration among 3326 sedations. 33 of the 34 cases of laryngospasm occurred in patients who received ketamine. Conclusions: Serious adverse respiratory events are very rare in paediatric procedural sedation in the ED. Emesis and agitation are the most frequent adverse events. Hypoxia, a late indicator of respiratory depression, occurs in 1.5% of sedations. Laryngospasm, though rare, happens most frequently with ketamine. The results of this study provide quantitative risk estimates to facilitate shared decision-making, risk communication, informed consent and resource allocation in children undergoing procedural sedation in the ED. abstract_id: PUBMED:10499949 Adverse events of procedural sedation and analgesia in a pediatric emergency department. Study Objective: To determine the adverse event and complication rate for the use of procedural sedation and analgesia for painful procedures and diagnostic imaging studies performed in a pediatric emergency department. 
Methods: This prospective case series was conducted in the ED of a large, urban pediatric teaching hospital. Subjects were patients younger than 21 years seen between August 1997 and July 1998, who required intravenous, intramuscular, oral, rectal, intranasal, or inhalational agents for painful procedures or diagnostic imaging. All patients who underwent procedural sedation and analgesia were continually monitored. Adverse events and complications were recorded. The ED controlled substance log was checked weekly and all sedations were reviewed. Adverse events were defined as follows: oxygen desaturation less than 90%, apnea, stridor, laryngospasm, bronchospasm, cardiovascular instability, paradoxical reactions, emergence reactions, emesis, and aspiration. Complications were defined as adverse events that negatively affected outcome or delayed recovery. Results: Of 1,180 patients who underwent procedural sedation and analgesia in the ED, 27 (2.3%) experienced adverse events, which included oxygen desaturation less than 90% requiring intervention (10 patients) [supplemental oxygen (9), bag-mask ventilation (1)], paradoxical reactions (7), emesis (3), paradoxical reaction and oxygen desaturation requiring supplemental oxygen (2), apnea requiring bag-mask ventilation (1), laryngospasm requiring bag-mask ventilation (1), bradycardia (1), stridor and emesis (1) and oxygen desaturation requiring bag-mask ventilation with subsequent emesis (1). There was no statistically significant difference in mean doses for all procedural sedation and analgesia medication regimens between those children who experienced adverse events and those who did not. No single drug or drug regimen was associated with a higher adverse event rate. In addition, there was no significant difference in the adverse event rate between males and females, among the different ages, or among the different indications for procedural sedation and analgesia. No patient required reversal of sedation with naloxone or flumazenil, endotracheal intubation, or hospital admission because of complications from procedural sedation and analgesia. Conclusion: The adverse event rate for procedural sedation and analgesia performed by pediatric emergency physicians was 2.3% with no serious complications noted. abstract_id: PUBMED:29306262 Considerations for physicians using ketamine for sedation of children in emergency departments. Objective: Ketamine use in emergency departments (EDs) for procedural sedation and analgesia is becoming increasingly common. However, few studies have examined patient factors related to adverse events associated with ketamine. This study investigated factors for consideration when using ketamine to sedate pediatric ED patients. Methods: The study included pediatric patients receiving ketamine for laceration repair in the ED. Before sedation, information was collected about upper respiratory tract infection symptoms, allergy history, and fasting time. Patients received 2 mg/kg ketamine intravenously or 4 mg/kg ketamine intramuscularly. The primary outcomes were adverse events due to ketamine. Results: We studied 116 patients aged 8 months to 7 years (2.8±1.5 years). The group with adverse events was significantly younger on average than the group without adverse events (2.5±1.5 vs. 3.1±1.5, P=0.028). Upper respiratory tract infection symptoms were not significant variables affecting ketamine sedation (48.9% vs. 43.7%, P=0.719). 
There was no significant association between duration of fasting and adverse events (P=0.073 and P=0.897, respectively), or between food type and adverse events (P=0.734). However, the number of attempts to sedate and ketamine dose correlated with adverse events (P<0.001 and P=0.022, respectively). In multiple logistic regression analysis, intravenous injection and ketamine dose were significant factors (odds ratio, 16.77; 95% confidence interval, 1.78 to 498.54; odds ratio, 4.37; 95% confidence interval, 1.59 to 22.9, respectively). Conclusion: Emergency medicine physicians should consider injection type and ketamine dose when using ketamine sedation while suturing lacerations. Answer: Yes, higher ASA class is associated with an increased incidence of adverse events during procedural sedation in a pediatric emergency department. A study found that adverse events during procedural sedation were more common in patients with higher ASA class. Specifically, hypoxia was more likely to occur in patients with an ASA class 2 or greater (P = 0.021) (PUBMED:21465695).
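The comparison behind the answer above (hypoxia more frequent in ASA class 2 or greater, P = 0.021) is a standard 2x2 chi-square test of proportions. The sketch below shows that calculation in plain Python; the per-group event counts are hypothetical, since the abstract reports only the group sizes (988 ASA class 1 vs 214 ASA class 2 or greater) and the overall number of events, not the split by ASA class.

```python
# Minimal sketch of a 2x2 chi-square comparison of adverse-event rates by ASA
# class. Cell counts are hypothetical; only the group totals match the abstract.
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]] and its two-sided P value (1 degree of freedom)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical split of events vs no-events in ASA 1 and ASA >= 2 patients.
events_asa1, no_events_asa1 = 140, 848   # totals 988, as reported
events_asa2, no_events_asa2 = 45, 169    # totals 214, as reported
chi2, p = chi2_2x2(events_asa1, no_events_asa1, events_asa2, no_events_asa2)
print(f"risk ASA 1 = {events_asa1 / 988:.1%}, risk ASA >= 2 = {events_asa2 / 214:.1%}")
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```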
Instruction: Radiographic changes of implant failure after plating for pubic symphysis diastasis: an underappreciated reality? Abstracts: abstract_id: PUBMED:22552765 Radiographic changes of implant failure after plating for pubic symphysis diastasis: an underappreciated reality? Background: Implant failure after symphyseal disruption and plating reportedly occurs in 0% to 21% of patients but the actual occurrence may be much more frequent and the characteristics of this failure have not been well described. Questions/purposes: We therefore determined the incidence and characterized radiographic implant failures in patients undergoing symphyseal plating after disruption of the pubic symphysis. Methods: We retrospectively reviewed 165 adult patients with Orthopaedic Trauma Association (OTA) 61-B (Tile B) or OTA 61-C (Tile C) pelvic injuries treated with symphyseal plating at two regional Level I and one Level II trauma centers. Immediate postoperative and latest followup anteroposterior radiographs were reviewed for implant loosening or breakage and for recurrent diastasis of the pubic symphysis. The minimum followup was 6 months (average, 12.2 months; range, 6-65 months). Results: Failure of fixation, including screw loosening or breakage of the symphyseal fixation, occurred in 95 of the 127 patients (75%), which resulted in widening of the pubic symphyseal space in 84 of those cases (88%) when compared with the immediate postoperative radiograph. The mean width of the pubic space measured 4.9 mm (range, 2-10 mm) on immediate postoperative radiographs; however, on the last radiographs, the mean was 8.4 mm (range, 3-21 mm), representing a 71% increase. In seven patients (6%), the symphysis widened 10 mm or more; however, only one of these patients required revision surgery. Conclusions: Failure of fixation with recurrent widening of the pubic space can be expected after plating of the pubic symphysis for traumatic diastasis. Although widening may represent a benign condition as motion is restored to the pubic symphysis, patients should be counseled regarding a high risk of radiographic failure but a small likelihood of revision surgery. Level Of Evidence: Level IV, case series. See Guidelines for Authors for a complete description of levels of evidence. abstract_id: PUBMED:35278092 The radiographic outcome after plating for pubic symphysis diastasis: does it matter clinically? Introduction: Open reduction and internal fixation with plates is the most widespread surgery in traumatic pubic symphysis diastasis. However, implant failure or recurrent diastasis was commonly observed during follow-up. The aim of our study was to evaluate the radiologic findings and clinical outcomes. Materials And Methods: Sixty-five patients with traumatic pubic symphysis diastasis treated with plating between 2008 and 2019 were retrospectively reviewed. The exclusion criteria were a history of malignancy and age under 20 years. Radiographic outcomes were determined by radiograph findings, including pubic symphysis distance (PSD) and implant failure. Clinical outcomes were assessed according to the Majeed score at the final follow-up. Results: Twenty-eight patients were finally included. Nine patients (32%) experienced implant failure, including four (14%) with screw loosening and five (18%) with plate breakage. Only one patient underwent revision surgery. Postoperatively, a significant increase in PSD was observed at 3 months and 6 months. 
Postoperative PSD was not significantly different between patients with single plating and double plating, but it was significantly greater in the implant-failure group than in the non-failure group. The Majeed score was similar between patients with single plating and double plating or between the implant-failure group and the non-failure group. Body mass index, number of plates, age, and initial injured PSD were not significantly different between the implant-failure group and the non-failure group. Only a significant male predominance was observed in the implant-failure group. Conclusion: A gradual increase in the symphysis distance and a high possibility of implant failure may be the distinguishing features of traumatic pubic symphysis diastasis fixation. The postoperative symphyseal distance achieved stability after 6 months, even after implant failure. Radiographic outcomes, such as increased symphysis distance, screw loosening, and plate breakage, did not affect clinical functional outcomes. abstract_id: PUBMED:18594300 Comparative radiographic and clinical outcome of two-hole and multi-hole symphyseal plating. Objectives: To report on the radiographic and clinical outcome of symphyseal plating techniques, with specific attention to the incidence of implant failure, reoperation secondary to implant complication, and ability to maintain reduction of the pelvic ring. Design: Retrospective chart and radiographic review. Setting: Level 1 trauma center. Patients: A total of 229 skeletally mature patients with traumatic pelvic disruptions associated with pubic symphysis diastasis requiring open reduction internal fixation. Intervention: Symphyseal plating: (1) group THP, a two-hole plate; (2) group MHP, a multi-hole plate (minimum 2 holes/screws on either side of the symphysis). Patients were analyzed with respect to technique of anterior ring fixation and posterior ring injury pattern and fixation. Main Outcome Measurement: Retrospective review of charts and radiographs immediately after the index procedure to latest follow-up was performed. Analysis included pelvic ring injury, type of anterior and/or posterior fixation, maintenance of postoperative reduction, rate of implant failure, and need for reoperation secondary to implant complication. Additionally, logistic regression analysis was performed to detect correlation between any other variable (posterior injury pattern, presence or absence of posterior fixation, time to surgery) and failure or malunion. Statistical analyses were performed using SPSS software. Results: A total of 92 complete data sets were available for review. There were 51 patients in group THP and 41 patients in group MHP. When comparing the results of the 2 different methods of anterior fixation (THP versus MHP), the rate of fixation failure was greater in group THP (17 of 51; 33%) than group MHP (5 of 41; 12%). This was statistically significant (P = 0.018). When evaluating the presence of a malunion as a result of these 2 treatment methods, there were more present in the THP group (29 of 51; 57%) than in the MHP group (6 of 41; 15%). Again, this was highly statistically significant (P = 0.001). Although the reoperation rate was slightly higher in the THP group (16%) as compared to the MHP group (12%), this was not statistically significant (P = 0.67). Logistic regression analysis did not reveal any other variables to correlate as a risk factor for failure or malunion in this group of patients. 
Conclusions: In this group of patients, the two-hole symphyseal plating technique group had a higher implant failure rate and, more importantly, a significantly higher rate of pelvic malunion. On the basis of these findings, we recommend multi-hole plating of unstable pubic symphyseal disruptions. abstract_id: PUBMED:22183198 Failure of locked design-specific plate fixation of the pubic symphysis: a report of six cases. Objectives: Physiological pelvic motion has been known to lead to eventual loosening of screws, screw breakage, and plate breakage in conventional plate fixation of the disrupted pubic symphysis. Locked plating has been shown to have advantages for fracture fixation, especially in osteoporotic bone. Although design-specific locked symphyseal plates are now available, to our knowledge, their clinical use has not been evaluated and there exists a general concern that common modes of failure of the locked plate construct (such as pullout of the entire plate and screws) could result in complete and abrupt loss of fixation. The purpose of this study was to describe fixation failure of this implant in the acute clinical setting. Design: Retrospective analysis of multicenter case series. Setting: Multiple trauma centers. Patients: Six cases with failed fixation, all stainless steel locked symphyseal plates and screws manufactured by Synthes (Paoli, PA) and specifically designed for the pubic symphysis, were obtained from requests for information sent to orthopaedic surgeons at 10 trauma centers. A four-hole plate with all screws locked was used in 5 cases. A six-hole plate with 4 screws locked (two in each pubic body) was used in one. Intervention: Fixation for disruption of the pubic symphysis using an implant specifically designed for this purpose. Main Outcome Measurements: Radiographic appearance of implant failure. Results: Magnitude of failure ranged from implant loosening (3 cases), resulting in 10-mm to 12-mm gapping of the symphyseal reduction, to early failure (range, 1-12 weeks), resulting in complete loss of reduction (3 cases). Failure mechanism included construct pullout, breakage of screws at the screw/plate interface, and loosening of the locked screws from the plate and/or bone. Backing out of the locking screws resulting from inaccurate insertion technique was also observed. Conclusions: Failure mechanisms of locked design-specific plate fixation of the pubic symphysis include those seen with conventional uniplanar fixation as well as those common to locked plate technology. Specific indications for the use of these implants remain to be determined. Level Of Evidence: Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence. abstract_id: PUBMED:22707071 Is fixation failure after plate fixation of the symphysis pubis clinically important? Background: Plate fixation is a recognized treatment for pelvic ring injuries involving disruption of the pubic symphysis. Although fixation failure is well known, it is unclear whether early or late fixation failure is clinically important. Questions/purposes: We therefore determined (1) the incidence and mode of failure of anterior plate fixation for traumatic pubic symphysis disruption; (2) whether failure of fixation was associated with the types of pelvic ring injury or pelvic fixation used; (3) the complications, including the requirement for reoperation or hardware removal; and (4) whether radiographic followup of greater than 1 year alters subsequent management. 
Methods: We retrospectively reviewed 148 of 178 (83%) patients with traumatic symphysis pubis diastasis treated by plate fixation between 1994 and 2008. Routine radiographic review, pelvic fracture classification, method of fixation, incidence of fixation failure, timing and mode of failure, and the complications were recorded after a minimum followup of 12 months (mean, 45 months; range, 1-14 years). Results: Hardware breakage occurred in 63 patients (43%), of which 61 were asymptomatic. Breakage was not related to type of plate, fracture classification, or posterior pelvic fixation. Five patients (3%) required revision surgery for failure of fixation or symptomatic instability of the symphysis pubis, and seven patients (5%) had removal of hardware for other reasons, including late deep infection in three (2%). Routine radiographic screening as part of annual followup after 1 year did not alter management. Conclusions: Our observations suggest the high rate of late fixation failure after plate fixation of the symphysis pubis is not clinically important. abstract_id: PUBMED:21322359 Advanced trauma life support radiographic trauma series: Part 3--The pelvis radiograph. Pelvic fractures are often high energy injuries and are associated with a high morbidity and mortality. The plain antero-posterior pelvis radiograph is part of the advanced trauma life support radiographic trauma series and is used as a screening test. The main limitation of plain anteroposterior pelvic radiographs is the difficulty in identification of some fractures, in particular posterior fractures, therefore radiographic findings should be considered in conjunction with clinical assessment. abstract_id: PUBMED:26391358 Comparison of reconstruction plate screw fixation and percutaneous cannulated screw fixation in treatment of Tile B1 type pubic symphysis diastasis: a finite element analysis and 10-year clinical experience. Objective: The objective of this study is to compare the biomechanical properties and clinical outcomes of Tile B1 type pubic symphysis diastasis (PSD) treated by percutaneous cannulated screw fixation (PCSF) and reconstruction plate screw fixation (RPSF). Materials And Methods: Finite element analysis (FEA) was used to compare the biomechanical properties between PCSF and RPSF. CT scan data of one PSD patient were used for three-dimensional reconstructions. After a validated pelvic finite element model was established, both PCSF and RPSF were simulated, and a vertical downward load of 600 N was loaded. The distance of pubic symphysis and stress were tested. Then, 51 Tile type B1 PSD patients (24 in the PCSF group; 27 in the RPSF group) were reviewed. Intra-operative blood loss, operative time, and the length of the skin scar were recorded. The distance of pubic symphysis was measured, and complications of infection, implant failure, and revision surgery were recorded. The Majeed scoring system was also evaluated. Results: The maximum displacement of the pubic symphysis was 0.408 and 0.643 mm in the RPSF and PCSF models, respectively. The maximum stress of the plate in RPSF was 1846 MPa and that of the cannulated screw in PCSF was 30.92 MPa. All 51 patients received follow-up at least 18 months post-surgery (range 18-54 months). Intra-operative blood loss, operative time, and the length of the skin scar in the PCSF group were significantly different than those in the RPSF group. 
No significant differences were found in wound infection, implant failure, rate of revision surgery, distance of pubic symphysis, and Majeed score. Conclusion: PCSF can provide comparable biomechanical properties to RPSF in the treatment of Tile B1 type PSD. Meanwhile, PCSF and RPSF have similar clinical and radiographic outcomes. Furthermore, PCSF also has the advantages of being minimally invasive, with less blood loss and a shorter operative time and skin scar. abstract_id: PUBMED:10628460 CT cystography: radiographic and clinical predictors of bladder rupture. Objective: Our goal was to identify radiographic and clinical variables that correlate with bladder rupture that may then be used as selection criteria for CT cystography in trauma patients. Subjects And Methods: Hemodynamically stable trauma patients with hematuria were examined under standardized protocol with dynamic oral and i.v. contrast-enhanced CT of the abdomen and pelvis, followed immediately by CT cystography. CT cystography consisted of contiguous 5-mm axial scans of the pelvis after retrograde distention of bladder with 300-400 ml of 4% iodinated contrast material. Radiographic and clinical variables (pelvic fracture, pelvic fluid, intraabdominal visceral injury, degree of hematuria, hematocrit, units of blood transfused, base deficit, injury mechanism, seat belt use, sex, age) were assessed and statistically analyzed using the two-tailed Fisher's exact test and Wilcoxon's rank sum test. Positive and negative individual and multivariate predictors were analyzed. Results: Of the 157 patients entered in our study, 12 (eight males and four females) had bladder rupture. One or more pelvic fractures were present in nine (75%) of the 12 patients (p < 0.001). Pubic symphysis diastasis, sacroiliac diastasis, and sacral, iliac, and pubic rami fractures were statistically associated with bladder rupture. Isolated acetabular fractures did not correlate with rupture. Eight (67%) of the 12 patients with bladder rupture revealed on CT cystography had gross hematuria (p < 0.001). No ruptures were seen in patients with <25 RBC/HPF (red blood cells per high-power field). All patients with rupture had pelvic fluid revealed on standard contrast-enhanced CT (p < 0.001). Conclusion: Gross hematuria, pelvic fluid, and specific pelvic fractures were highly correlated with bladder rupture; identification of these findings may help in selection of trauma patients for CT cystography. abstract_id: PUBMED:34742331 Fixation failure in patients with traumatic diastasis of pubic symphysis: impact of loss of reduction on early functional outcomes. Background: Failure of fixation (FF) in pubic symphysis diastasis (SD) ranges between 12 and 75%, though whether it influences functional outcomes is still debated. The objective of this study is to evaluate the impact of anterior pelvic plate failure and loss of reduction on Majeed's functional scores. Methods: Single center retrospective review of consecutive patients with acute SD treated by means of anterior pubic plating. Thirty-seven patients with a mean age 45.7 ± 14.4 years were included. Demographics, AO classification, pelvic fixation and secondary procedures were recorded. Majeed's functional scores at minimum 6 months follow-up were compared according to the presence of FF and loss of reduction. Results: Fifteen patients presented FF. Eight presented an additional loss of symphyseal reduction.
Mean Majeed's score (MMS) in patients with and without FF was 64.4 ± 13.04 and 81.8 ± 15.65, respectively (p = 0.0012). Differences in MMS in patients without FF and those with FF and maintained or loss of anterior reduction were 11.3 [70.5 vs 81.8] (p = 0.092) and 22.7 [59.1 vs 81.8] (p = 0.001), respectively. Significant association of FF with AO classification was noted (OR 12.6; p = 0.002). Conclusions: Differences in MMS in the analyzed groups suggest that loss of reduction might be more relevant than failure of the anterior osteosynthesis in functional outcomes. abstract_id: PUBMED:17713770 Internal fixation of traumatic diastasis of pubic symphysis: is plate removal essential? Introduction: Internal fixation of the traumatic diastasis of symphysis pubis is an integral part of the definitive management of pelvic ring injuries. Both functional outcome and long term physiological effects of plate retention have not been clearly addressed and no specific indications regarding implant removal have been proposed in the literature. Materials And Methods: We reviewed 74 patients (18 females and 56 males), with an average age of 40.6 (16-75) years, who underwent open reduction and internal fixation (ORIF) of the pubic symphysis. Apart from the demographic data, many other parameters like suprapubic pain, sexual disturbance, impotence and dyspareunia were taken under consideration. Furthermore, any correlation between implant failure and functional impairment was recorded and the subject of implant removal in pregnant women was examined. Health outcome was assessed according to EuroQol 5-D (EQ-5D) questionnaire. Mean follow up: 41.7 (28-89) months. Results: Suprapubic pain was present in ten patients. Three men developed neurogenic impotence and one woman had deep dyspareunia. None of these symptoms were related to implant status. Three of the four females who had uncomplicated pregnancy in the post-stabilization period had the plate in situ. Implant failure was seen in four patients but they remained asymptomatic. EQ-5D questionnaire revealed high satisfaction scores among young women and men of all age groups. Conclusion: This study supports the view that routine removal of the plate is not essential. The issue of whether the implant needs to be removed in women of childbearing age requires further investigation. Answer: Radiographic changes indicating implant failure after plating for pubic symphysis diastasis appear to be more common than previously reported and may indeed be an underappreciated reality. A study found that failure of fixation, including screw loosening or breakage of the symphyseal fixation, occurred in 75% of the patients (95 out of 127), which resulted in widening of the pubic symphyseal space in 88% of those cases when compared with the immediate postoperative radiograph (PUBMED:22552765). Another study reported that 32% of patients experienced implant failure, including 14% with screw loosening and 18% with plate breakage, although only one patient required revision surgery (PUBMED:35278092). Comparative studies have shown that the type of plating can influence the rate of implant failure, with two-hole symphyseal plating techniques having a higher implant failure rate and a significantly higher rate of pelvic malunion compared to multi-hole plating (PUBMED:18594300).
Additionally, failures of locked design-specific plate fixation of the pubic symphysis have been reported, including construct pullout, breakage of screws at the screw/plate interface, and loosening of the locked screws from the plate and/or bone (PUBMED:22183198). Despite the high rate of radiographic implant failure, it is not always clinically significant. One study found that hardware breakage occurred in 43% of patients (63 of 148), yet 61 of these 63 patients were asymptomatic, and breakage was not related to the type of plate, fracture classification, or posterior pelvic fixation (PUBMED:22707071). Furthermore, radiographic outcomes such as increased symphysis distance, screw loosening, and plate breakage did not affect clinical functional outcomes (PUBMED:35278092). In conclusion, while radiographic changes indicative of implant failure after plating for pubic symphysis diastasis are common, they do not always necessitate clinical intervention or correlate with poor functional outcomes, suggesting that the clinical significance of these radiographic changes may be less than what might be expected from their high incidence.
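To make the spread of failure rates in the answer above concrete, the short Python sketch below recomputes the quoted proportions from the raw counts in the cited abstracts and adds Wilson 95% confidence intervals for scale. The counts come directly from the abstracts; the confidence intervals are an illustrative addition, not figures reported by the studies.

```python
# Worked sketch: failure proportions quoted above, with Wilson 95% CIs added.
import math

def wilson_ci(events, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

cohorts = {
    "PUBMED:22552765 (any fixation failure)": (95, 127),
    "PUBMED:35278092 (implant failure)": (9, 28),
    "PUBMED:22707071 (hardware breakage)": (63, 148),
}
for label, (events, n) in cohorts.items():
    p, lo, hi = wilson_ci(events, n)
    print(f"{label}: {p:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```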
Instruction: Translumbar hemodialysis catheters in patients with limited central venous access: does patient size matter? Abstracts: abstract_id: PUBMED:23664808 Translumbar hemodialysis catheters in patients with limited central venous access: does patient size matter? Purpose: To describe a single institutional experience with translumbar tunneled dialysis catheters (TDC) and compare outcomes between patients with normal and abnormal body mass index (BMI). Materials And Methods: Translumbar TDCs placed between January 2002 and July 2011 were reviewed retrospectively. There were 33 patients; 18 had a normal BMI <25, and 15 had an abnormal BMI >25. Technical outcome, complications, indications for exchange or removal, and BMI were recorded. Catheter dwell time, catheter occlusion rate, frequency of malposition, and infection rates were collected. Results: There were 92 procedures (33 initial placements) with 7,825 catheter days. The technical success rate was 100%. Two minor (2.2%) and three major (3.3%) complications occurred. The complication rate did not differ significantly between patients with a normal BMI and patients with an abnormal BMI. Median catheter time in situ (interquartile range) for all patients was 61 (113) days, for patients with normal BMI was 66 (114) days, and for patients with abnormal BMI was 56 (105) days (P = .9). Primary device service intervals for all patients, patients with normal BMI, and patients with abnormal BMI were 47 (96) days, 63 (98) days, and 39 (55) days (P = .1). Secondary device service intervals for all patients, patients with normal BMI, and patients with abnormal BMI were 147 (386) days, 109 (124) days, and 409 (503) days (P = .23). Catheter-related central venous thrombosis rate was 0.01 per 100 catheter days (n = 1). Conclusions: Translumbar TDC placement can provide effective hemodialysis in patients with limited venous reserve regardless of the patient's BMI. An abnormal BMI (>25) does not significantly affect complication rate, median catheter time in situ, or primary or secondary device service interval of translumbar TDCs. abstract_id: PUBMED:36987768 Using a Catheter for Hemodialysis Placed in the Inferior Vena Cava for the First Time in N. Macedonia - Translumbar Approach. Maintenance of vascular access for hemodialysis remains a challenge for every doctor. Exhausted conventional vascular access is the cause for the placement of the central venous catheter in unconventional sites such as enlarged collateral vessels, hepatic veins, hemiazygos, azygos, renal veins, and the inferior vena cava. The percutaneous translumbar catheter for hemodialysis in the inferior vena cava was described over 20 years ago. In this article, we report on the procedure and complications arising from the percutaneous translumbar approach of a hemodialysis catheter. This was done for the first time in N. Macedonia. This approach is a potential option in adults and children when conventional approaches are limited. abstract_id: PUBMED:32892538 Placement of hemodialysis catheters with the help of the micropuncture technique in patients with central venous occlusion and limited access. Background/aim: This study aims to describe the technical success of the micropuncture technique, which is performed in placement of tunneled hemodialysis catheters in patients with central venous occlusion and limited access.
Materials And Methods: A total of 25 patients with central venous occlusion and in need of catheter placement for hemodialysis between 2012 and 2018 were included in this study and analyzed retrospectively. Technical success was defined as the placement of tunneled dialysis catheters with optimal position and function. Results: Internal jugular vein access in 16 patients (14 right and 2 left) and right subclavian vein access in 3 patients were successfully performed in placement of the tunneled dialysis catheter. Although internal jugular and subclavian vein access was attempted bilaterally, the procedure failed in 6 patients. The overall technical success of recanalization of the occluded central veins was 76% (19/25). No minor or major complications were encountered. Conclusion: Tunneled dialysis catheter placement through the occluded internal jugular and subclavian veins with the micropuncture technique is effective and safe in patients with limited vascular access. The recanalization of the occluded conventional access routes should always be kept in mind to allow for the preservation of vascular accesses for future requirements. abstract_id: PUBMED:27011425 Update on Insertion and Complications of Central Venous Catheters for Hemodialysis. Central venous catheters are a popular choice for the initiation of hemodialysis or for bridging between different types of access. Despite this, they have many drawbacks including a high morbidity from thrombosis and infection. Advances in technology have allowed placement of these lines relatively safely, and national guidelines have been established to help prevent complications. There is an established algorithm for location and technique for placement that minimizes harm to the patient; however, there are significant short- and long-term complications that proceduralists who place catheters should be able to recognize and manage. This review covers insertion and complications of central venous catheters for hemodialysis, and the social and economic impact of the use of catheters for initiating dialysis is reviewed. abstract_id: PUBMED:30309840 Clinical and Regulatory Considerations for Central Venous Catheters for Hemodialysis. Central venous catheters remain a vital option for access for patients receiving maintenance hemodialysis. There are many important and evolving clinical and regulatory considerations for all stakeholders for these devices. Innovation and transparent and comprehensive regulatory review of these devices is essential to stimulate innovation to help promote better outcomes for patients receiving maintenance hemodialysis. A workgroup that included representatives from academia, industry, and the US Food and Drug Administration was convened to identify the major design considerations and clinical and regulatory challenges of central venous catheters for hemodialysis. Our intent is to foster improved understanding of these devices and provide the foundation for strategies to foster innovation of these devices. abstract_id: PUBMED:9747613 Translumbar placement of inferior vena caval catheters: a solution for challenging hemodialysis access. Access to the central venous circulation for hemodialysis has traditionally been achieved via the subclavian or jugular venous routes. With ongoing improvements in medical management, many hemodialysis recipients develop exhaustion of these routes and require alternative means of central venous access.
Inferior vena caval (IVC) catheters have been placed with a percutaneous translumbar approach to allow central venous access for chemotherapy, harvesting of stem cells, and total parenteral nutrition. Translumbar placement of IVC catheters has become accepted by some as a useful and reliable alternative in patients who require long-term hemodialysis but have exhausted traditional access sites. IVC catheters have been placed in patients with IVC filters, and IVC filters have been placed in patients with IVC catheters. Complications include those associated with central venous catheters, for example, sepsis, fibrin sheaths, and thrombosis. A complication specific to placement of IVC hemodialysis catheters is migration of the catheter into the subcutaneous soft tissues, retroperitoneum, or iliac veins. Translumbar placement of IVC catheters is performed only in patients considered to have few or no other medical options and is not intended as a primary means of central venous access. abstract_id: PUBMED:24817471 Complex central venous catheter insertion for hemodialysis. Despite the introduction of payment by results in the UK, there has been no decrease in central venous catheter (CVC) use. In part, this may relate to a requirement to dialyse through a CVC while autogenous access matures. Mortality data have improved in parallel and patients on hemodialysis live longer, which may lead to an increased exposure to CVCs. Exposure to CVCs carries a significant risk of infection and occlusion requiring their repositioning or exchange. The mid to long-term sequelae of CVC use is central venous occlusion leaving clinical teams with an ever increasing challenge to find adequate venous access. In this article, we will discuss the challenges faced by operators inserting CVCs into the hemodialysis-dependent patient who has exhausted more traditional insertion sites. These include translumbar caval catheters, transocclusion and transcollateral catheters, transjugular Inferior Vena Cava catheter positioning, and transhepatic catheters. We will demonstrate the techniques employed, complications, and anticipated longevity of function. abstract_id: PUBMED:29675773 CT-Guided Translumbar Placement of Permanent Catheters in the Inferior Vena Cava: Description of the Technique with Technical Success and Complications Data. Purpose: To evaluate indications, technical success rate and complications of CT-guided translumbar catheter placement in the inferior vena cava for long-term central venous access (Port and Hickman catheters) as a bail-out approach in patients with no alternative options for permanent central venous access. Materials And Methods: This retrospective study included 12 patients with a total of 17 interventions. All patients suffered from bilaterally chronically occluded venous vessels of their upper extremities, without patent internal jugular and/or subclavian veins. Catheter implantation was performed as a hybrid procedure with CT-guided translumbar access into the inferior vena cava with subsequent angiography-guided catheter placement of a Hickman-type catheter (7×) or a Port catheter (10×). Results: All interventions were technically successful. The total 30-day complication rate was 11.8% (n = 2). The two detected complications were bleeding at the subcutaneous port hub and subcutaneous kinking of the venous tube. Mean follow-up time was 68.4 ± 41.4 months (range 3.4-160 months).
Six patients (50%) died during follow-up from non-procedure-related complications associated with the underlying disease. Late complications occurred in 8/17 (47.1%) cases and were infections of the catheter system in 35.3% (n = 6), mechanical defect of the catheter system in 5.8% (n = 1) and dislocation of the catheter system in 5.8% (n = 1). The overall infection rate was 0.77 per 1000 catheter days. Conclusions: CT-guided translumbar placement of permanent catheters is a technically feasible and safe method for permanent central venous access as last resort in chronically occluded veins of the upper extremities. abstract_id: PUBMED:37197051 Fluoroscopy and CT Guided Translumbar Tunneled Dialysis Catheter for Hemodialysis Access Failure in a Case of Autosomal Dominant Polycystic Kidney Disease. Vascular access in hemodialysis is essential to end-stage renal disease (ESRD) patients' survival. Unfortunately, even after years of recent advances, a significant number of patients may develop multi-access failure for many reasons. In this situation, arterial-venous fistula (AVF) or catheter placement in traditional vascular sites (jugular, femoral, or subclavian) are not feasible. In this scenario, translumbar tunneled dialysis catheters (TLDCs) may be a salvage option. The use of central venous catheters (CVC) is associated with an increased incidence of venous stenosis that can progressively limit future vascular access routes. The common femoral vein can be used for temporary access in patients in whom traditional approaches for permanent central venous access may not be feasible because of either chronically occluded or not accessible vasculature; however, this location is not preferred for long-term venous access because of the high rate of catheter related blood stream infections (CRBSI). In these patients, a direct translumbar approach to the inferior vena cava is a lifesaving alternative. This approach has been described by several authors as a bail-out option. Fluoroscopy-guided access via a translumbar approach into the inferior vena cava bears the risk of hollow-organ perforation or severe bleeding from the inferior vena cava or even the aorta. To minimize the risk of complications caused by a translumbar central venous access, we hereby present a hybrid approach with CT-guided translumbar access of the inferior vena cava followed by a conventional implantation of the permanent central venous catheter. CT scan-guided access of the IVC further helps in our case, as the patient has large bulky kidneys secondary to autosomal dominant polycystic kidney disease. abstract_id: PUBMED:37790890 Predictive factors of infection in patients with chronic kidney disease using hemodialysis catheters. Background: Infection is the most frequent complication of central venous catheters used for hemodialysis. Objectives: The purpose of this study was to determine the central venous catheter-related infection rate at a dialysis center in the Brazilian state of Amazonas and to identify risk factors and the microbiological profile of the infections. Methods: This was an observational study with prospective data collection over a 12-month period by chart analysis and face-to-face interviews with patients undergoing hemodialysis using central venous catheters at a dialysis center. Results: 96 central venous catheters were analyzed in 48 patients.
78 of these were non-tunneled central venous catheters (81.3%) and 18 were tunneled central venous catheters (18.7%). 53.1% of the catheters were exchanged because of infection, and blood cultures were obtained from 35.2% of the patients who had catheter-related infections. Gram-negative bacteria were isolated from five of the nine blood cultures in which there was bacterial growth and Gram-positive bacteria were isolated from the other four. The most commonly isolated bacterium was Staphylococcus hominis, found in 22.2% of positive blood cultures. Conclusion: The overall hemodialysis venous catheter infection rate was 10.1 episodes/1000 catheter days, 15.1 episodes/1000 catheter days in non-tunneled catheters and 3.3 episodes/1000 catheter days in tunneled catheters. The infection predisposing factors identified were use of non-tunneled catheters and having 2 hemodialysis sessions per week. Regarding the microbiological profile, over half of the bacteria isolated were Gram-negative. Answer: Based on the provided abstracts, patient size, specifically body mass index (BMI), does not significantly affect the outcomes of translumbar hemodialysis catheter (TDC) placement in patients with limited central venous access. A study described in abstract PUBMED:23664808 found that translumbar TDC placement can provide effective hemodialysis regardless of the patient's BMI. The study compared outcomes between patients with a normal BMI (<25) and those with an abnormal BMI (>25) and found no significant difference in complication rate, median catheter time in situ, or primary or secondary device service interval of translumbar TDCs. This suggests that patient size, as measured by BMI, does not have a significant impact on the effectiveness or complication rate of translumbar TDCs in patients with limited venous reserve.
Instruction: Is it possible to compare PSA recurrence-free survival after surgery and radiotherapy using revised ASTRO criterion--"nadir + 2"? Abstracts: abstract_id: PUBMED:18279937 Is it possible to compare PSA recurrence-free survival after surgery and radiotherapy using revised ASTRO criterion--"nadir + 2"? Objectives: The new American Society for Therapeutic Radiology and Oncology/Radiation Therapy Oncology Group consensus definition of biochemical failure after radiotherapy for prostate cancer is defined as a prostate-specific antigen level at or greater than the absolute nadir PSA level plus 2 ng/mL. Because this definition inevitably will be used to compare cancer control rates after radiotherapy to those after surgery, this study examined the effect of this comparison. Methods: We reviewed the data from 2570 men who had undergone radical prostatectomy from 1985 to 2004. Biochemical failure was defined as any measurable PSA level of 0.2 ng/mL or greater. We evaluated how the nadir+2 definition affected the failure rate when applied to this series. Results: The actuarial 5, 10, and 15-year biochemical recurrence-free survival probability with failure defined as a PSA level of 0.2 ng/mL or more and a PSA level of 2 ng/mL or more was 88.6%, 81.2%, and 78.1% and 94.6%, 89.4%, and 84.3%, respectively (P <0.0001). The median time to biochemical progression was 2.8 years for the greater than 0.2 ng/mL definition and 7.9 years for the 2 ng/mL or more definition. The nadir+2 definition systematically overestimated the biochemical recurrence-free survival, even after stratifying patients into standard prognostic risk groups, especially in men who developed local recurrence. Conclusions: When applied to a mature series of surgically treated patients with localized prostate cancer, the American Society for Therapeutic Radiology and Oncology "nadir+2" definition resulted in a systematic delay in the determination of biochemical failure. Because patients in this series who experienced a detectable PSA level took more than 5 years to progress to a PSA level of 2 ng/mL or greater, the 5-year biochemical control rates with the definition of 0.2 ng/mL or more should be compared with the 10-year biochemical control rates using the nadir+2 definition. abstract_id: PUBMED:35291420 PSA nadir predicts biochemical recurrence after external beam radiation therapy combined to high dose rate brachytherapy in the treatment of prostate cancer. Introduction: Prostate cancer (PCa) is the second most prevalent neoplasm among men in the world. Its treatment has a wide spectrum of alternatives and variables, ranging from active surveillance through radio and/or brachytherapy, to surgery. Objective: The present work aimed to identify the predictive factors for biochemical recurrence and to evaluate the toxicity of the treatment using the association of external beam radiation therapy (EBRT) with high dose rate brachytherapy (HDR-BT) applied in the treatment of patients with prostate cancer. Methods: Longitudinal retrospective study, using a prospectively collected database between 2005 and 2014 of 186 consecutive patients records with a diagnosis of low, intermediate, or high-risk prostate cancer treated with EBRT combined with HDR-BT, in a single medical institution located in the city of Campinas, SP, Brazil (Radium Institute). PSA increase over 2 ng/ml above the nadir PSA was considered as biochemical recurrence, following the definition of the Phoenix Consensus.
Continuous and clinically relevant categorical variables (age, initial PSA, delivered dose in EBRT, number of implants, number of positive cores in transrectal biopsy, use of hormone blockade, Gleason score, TNM staging, post treatment PSA and PSA Nadir) were evaluated with absolute (n) and percentage (%) values using multiple logistic regression and validated our previously described optimal PSA nadir as predictor of biochemical recurrence. Results: Post treatment PSA was the only independent predictor of biochemical recurrence, P<0.0001. The lower the PSA nadir the lower the biochemical recurrence risk (P=0.0009). PSA nadir >1 was the best cutoff (P=0.018) determinant of biochemical recurrence. The incidence of grade 3 late toxicity to the genitourinary tract was 0.6%, and there were no cases of severe complications to the gastrointestinal tract. Conclusion: External Beam Radiation Therapy conjugated to Brachytherapy in the treatment of Prostate Cancer has demonstrated low biochemical recurrence rates, mainly when PSA nadir <1, with low toxicity into both GU and GI tracts. abstract_id: PUBMED:24385470 Adjuvant radiotherapy after prostatectomy for prostate cancer in Japan: a multi-institutional survey study of the JROSG. In Japan, the use of adjuvant radiotherapy after prostatectomy for prostate cancer has not increased compared with the use of salvage radiotherapy. We retrospectively evaluated the outcome of adjuvant radiotherapy together with prognostic factors of outcome in Japan. Between 2005 and 2007, a total of 87 patients were referred for adjuvant radiotherapy in 23 institutions [median age: 64 years (54-77 years), median initial prostate-specific antigen: 11.0 ng/ml (2.9-284 ng/ml), Gleason score (GS): 6, 7, 8, 9, 10 = 13.8, 35.6, 23.0, 27.6, 0%, respectively]. Rates of positive marginal status, seminal vesicle invasion (SVI) and extra-prostatic extension (EPE) were 74%, 26% and 64%, respectively. Median post-operative PSA nadir: 0.167 ng/ml (0-2.51 ng/ml). Median time from surgery to radiotherapy was 3 months (1-6 months). A total dose of ≥ 60 Gy and <65 Gy was administered to 69% of patients. The median follow-up time was 62 months. The 3- and 5-year biochemical relapse-free survival (bRFS) rates for all patients were 66.5% and 57.1%, respectively. The GS and marginal status (P = 0.019), GS and SVI (P = 0.001), marginal status and EPE (P = 0.017), type of hormonal therapy and total dose (P = 0.026) were significantly related. The 5-year bRFS rate was significantly higher in SVI-negative patients than SVI-positive patients (P = 0.001), and significantly higher in patients with post-operative PSA nadir ≤ 0.2 than in patients with post-operative PSA nadir >0.2 (P = 0.02), and tended to be more favorable after radiotherapy ≤ 3 months from surgery than >3 months from surgery (P = 0.069). Multivariate analysis identified SVI and post-operative PSA nadir as independent prognostic factors for bRFS (P = 0.001 and 0.018, respectively). abstract_id: PUBMED:17006700 PSA recurrence following radical prostatectomy and radiotherapy Relapses after curative therapy for localised prostate cancer using radiotherapy or radical prostatectomy occur in a significant percentage of cases, even in times of continually improving patient selection. The definition of a biochemical relapse after surgery is a PSA value of ≥0.4 ng/ml. After radiotherapy with maintenance of the organ and residual PSA production the definition is more complicated.
The current algorithm is based on the ASTRO consensus of 1996 and defines a relapse as three consecutive increases in PSA above the post-therapeutic low. A biochemical relapse can indicate a local relapse, systemic metastasising of the disease or a combination of both. The differentiation of these two possibilities can be made, apart from imaging modalities, primarily on the basis of variation in PSA kinetics, whereby a short PSA doubling time and early PSA increase after primary therapy indicate a systemic problem. abstract_id: PUBMED:18472065 PSA and follow-up after treatment of prostate cancer A first serum total PSA assay is recommended during the first three months after treatment. When PSA is detectable, PSA assay should be repeated three months later to confirm this elevation and to estimate the PSA doubling time (PSADT). In the absence of residual cancer, PSA becomes undetectable by the first month after total prostatectomy: less than 0.1 ng/ml (or less than 0.07 ng/ml) for the ultrasensitive assay method and less than 0.2 ng/ml for the other methods. In the presence of residual cancer, PSA either does not become undetectable or increases after an initial undetectable period. A consensus has been reached to define recurrence as PSA greater than 0.2 ng/ml confirmed on two successive assays. After external beam radiotherapy, PSA can decrease after a mean interval of one to two years to a value less than 1 ng/ml (predictive of recurrence-free survival). Biochemical recurrence after radiotherapy is defined by an increase of PSA by 2 ng or more above the PSA nadir, whether or not it is associated with endocrine therapy. After endocrine therapy, the PSA nadir is correlated with recurrence-free survival. PSA is decreased for a mean of 18 to 24 months followed by a rise in PSA, corresponding to hormone-independence. The time to recurrence or the time to reach the nadir and the PSA doubling time after local therapy with surgery or radiotherapy have a diagnostic value in terms of the site of recurrence, local or metastatic and a prognostic value for survival and response to complementary radiotherapy or endocrine therapy. A PSADT less than eight to 12 months is correlated with a high risk of metastatic recurrence and 10-year mortality. The histological and biochemical characteristics in favour of local recurrence are Gleason score less or equal to seven (3+4), elevation of PSA after a period greater than 12 months and PSADT greater than 10 months. In other cases, recurrence is predominantly metastatic. The risk of demonstrating metastasis in the case of biochemical recurrence after total prostatectomy and before endocrine therapy depends on the PSA level and the PSADT. No consensus has been reached concerning the indication for complementary investigations by bone scan and abdominopelvic CT in patients with biochemical recurrence after treatment of localized cancer without endocrine therapy. However, when PSADT greater than six months, the risk of metastasis is less than 3% even for PSA greater than 30 ng/ml. When PSADT less than six months and PSA greater than 10 ng/ml, the risk of metastasis is close to 50%. 
abstract_id: PUBMED:19670814 Results of surgical and radiotherapy of prostatic cancer T1-4N0-1M0 To compare the results of radical prostatectomy and conformal radiotherapy in prostatic cancer T1-4N0-1M0, we made a retrospective study of 306 patients with prostatic cancer T1-4N0-1M0 of whom 144 (47.1%) were treated surgically (radical prostatectomy) while 162 (52.9%) were exposed to extracorporeal conformal radiotherapy. Follow-up median was 30.7 +/- 29.8 months. Five and 10-year overall, specific and PSA recurrence free survival in 306 patients was 94.0% and 90.1% (median was not achieved), 96.6% and 94.3% (median was not achieved), 66.1 and 49.2% (median was 84.0 +/- 4.4 months). In multifactorial analysis significant prognostic factors of PSA recurrence free survival were T category (p = 0.021) and Gleason's sum (p = 0.002). In the subgroup of patients with local prostatic cancer there was a significant superiority of the operated patients by PSA recurrence free survival over irradiated group in baseline PSA < 10 ng/ml (p = 0.015), Gleason's index < 7 (p = 0.071) and combination of these factors (p = 0.018). A favourable prognosis factor of PSA recurrence free survival in operated patients was operative Gleason's index < 7 (p = 0.001), among operated patients--nadir PSA < 1 ng/ml (p = 0.003). Surgical and radiation treatment of local and locally advanced prostatic cancer provided satisfactory results. In the group of good prognosis (cT1-2N0, PSA < 10 ng/ml, Gleason's sum < 7) radical prostatectomy gives advantage of PSA recurrence free survival. In patients with prostatic cancer cT > T2, N+, Gleason's index > 7 and PSA > 10 ng/ml surgical treatment and remote radiotherapy are equally effective in respect to survival free of biochemical recurrence. abstract_id: PUBMED:31235444 Recurrence rates for patients with early-stage breast cancer treated with IOERT at a community hospital per the ASTRO consensus statement for APBI. Purpose: To report the recurrence rates after single-fraction intraoperative electron radiotherapy (IOERT) in patients with early-stage breast cancer treated on a single institution prospective Phase I/II protocol at a community hospital. Results were retrospectively analyzed according to suitability criteria from the updated American Society for Radiation Oncology (ASTRO) consensus statement for accelerated partial breast irradiation (APBI). Methods And Materials: Patients over 40 years with early-stage invasive or in situ breast cancer (<2.5 cm and node negative) were enrolled. IOERT 2100 cGy was delivered during breast conservation surgery, and patients were followed up for a median of 3 years (0.8-6.5 years) to determine toxicity and recurrence rates. Results: Single-fraction IOERT was performed in 215 cases (6 bilateral treatments, 196 patients) with 13 patients receiving whole-breast radiation (WBR) after IOERT for adverse pathologic features. Of 202 cases of IOERT without WBR, 8 patients experienced an ipsilateral breast tumor recurrence (IBTR) giving a cumulative incidence of 3.96%. When the ASTRO APBI suitability criteria were applied, the IBTR rate was significantly lower for suitable patients vs. cautionary or unsuitable patients (1.6% vs. 3.4% vs. 21.0%, p = 0.0002). 3-year progression-free survival after IOERT alone was 93.4%. For patients who received standard WBR (4500-5040 cGy) after IOERT, no Grade 3 or 4 toxicities (acute or late) occurred and all patients are disease-free.
Conclusions: Single-fraction IOERT results in a low rate of IBTR when strictly adhering to ASTRO criteria for APBI suitability. Standard dose WBR for unfavorable pathologic results after 2100 cGy IOERT is well tolerated. abstract_id: PUBMED:31629640 Comparison of outcome endpoints in intermediate- and high-risk prostate cancer after combined-modality radiotherapy. Purpose: To compare a standard radio-oncological and a surgical biochemical failure definition after combined-modality radiation therapy (CRT) in men with intermediate- and high-risk prostate cancer. Methods: 425 men were treated with external beam radiotherapy (59.4 Gy, 33 fractions) and 125J seed-brachytherapy (S-BT, 100 Gy). Biochemical recurrence (BR) was defined either as radio-oncologic (rBR), using a +2 ng/mL prostate-specific antigen (PSA) increase above a nadir value, or as surgical (sBR), using a 2-year posttreatment PSA of ≥0.2 ng/mL. Biochemical recurrence-free, metastasis-free, cancer-specific, and overall survival were calculated at 5 and 10 years using the Kaplan-Meier method. Standard validation tests were used to compare both thresholds. Results: After a median of 7 years, overall recurrence rates were 10.4% and 31.5% for rBR and sBR definitions, respectively. Both failure definitions proved sensitive for the prediction of metastases and cancer-specific death, whereas the rBR definition was significantly more specific. The accuracies of a correct prediction of metastases and death of prostate cancer were 73.1% vs. 96.2% and 72.2% vs. 92.9% for sBR vs. rBR, respectively. The inferior validity results of the sBR definition were attributable to a PSA-bounce phenomenon occurring in 56% of patients with sBR. Still, using the less suitable sBR definition, the results of CRT compared favorably to BRFS rates of surgical interventions. Conclusion: After CRT, the radio-oncological (aka Phoenix) failure definition is more reliable than a fixed surgical endpoint. Exclusively in high-risk patients, sBR offers a direct comparison across surgical and nonsurgical treatment options at 5 and 10 years. abstract_id: PUBMED:11832720 A standard definition of disease freedom is needed for prostate cancer: undetectable prostate specific antigen compared with the American Society of Therapeutic Radiology and Oncology consensus definition. Purpose: Freedom from prostate cancer is defined by undetectable prostate specific antigen (PSA) after surgery and the American Society of Therapeutic Radiology and Oncology (ASTRO) criteria are recommended for irradiation. Whether these definitions of disease freedom are comparable was evaluated in this study. Materials And Methods: From August 1992 to August 1996 simultaneous irradiation with prostate 125iodine implantation followed by external beam irradiation was performed in 591 consecutive men with stage T1T2NX prostate cancer. All patients had a transperineal implant and none received neoadjuvant hormones. Disease freedom was defined by a PSA cutoff of 0.2 ng./ml. and the ASTRO consensus definition. Median followup was 6 years (range 5 to 8). Results: Of the 591 men in this study 65 had recurrence by ASTRO criteria and 93 had recurrence by a PSA cutoff of 0.2 ng./ml., which was a significant difference (p = 0.001). On multivariate analysis of the factors related to disease-free status the definition of disease freedom, pretreatment PSA and Gleason score were highly significant. 
Of the 528 men with a minimum 5-year PSA followup the 8-year disease-free survival rate by ASTRO criteria was 99% in those who achieved a PSA nadir of 0.2 ng./ml. and 16% in those with a nadir of 0.3 to 1 ng./ml. Of the 469 disease-free patients by ASTRO criteria with a minimum 5-year followup 455 (97%) achieved a PSA nadir of 0.2 ng./ml. or less. Conclusions: The definition of freedom from prostate cancer significantly affects treatment results. A standard definition is needed and a PSA cutoff of 0.2 ng./ml. is suggested as the standard for all curative treatments for localized prostate cancer. abstract_id: PUBMED:16838606 The meaning of PSA progression after radical prostatectomy. Preliminary results Objectives: To report our findings regarding the natural history of prostate cancer (PCa) that shows recurrence after radical prostatectomy (RP), in terms of time to development of metastatic disease and death from PCa. To identify independent predictors of PSA recurrence. Material And Methods: Retrospective analysis of 227 patients with clinically localized PCa who underwent RP. The event PSA recurrence was defined as the presence of a postoperative PSA level of 0.2 ng/ml or higher at least 3 months after surgery. Hence, cases with shorter follow-up time were excluded from analysis. No adjuvant therapy (radiotherapy or hormonal therapy) was performed in the included population. Recurrence free survival was calculated during the follow-up period (Kaplan-Meier analysis). Uni and multivariate study was performed in order to assess the ability of factors such as preoperative PSA level, Gleason score in surgical specimen, capsular penetration, positive surgical margins (excluding urethral), extracapsular extension, positive pelvic lymph nodes, and seminal vesicle invasion, to predict PSA recurrence. Finally, we selected the group of patients with PSA recurrence and calculated the probability of being free from distant metastatic disease during the follow-up period. Also, function of disease-specific survival was calculated. Results: A total of 208 records were finally included in the study. Median age was 61 years. A total of 47 (22.6%) presented with extracapsular extension. Median follow-up time was 35.8 months, and 49 (23.6%) developed PSA recurrence. Recurrence free survival was 79.9% and 67.4% at 2 and 5 years, respectively. Only three factors were identified with the aid of multivariate analysis as independent predictors of recurrence: preoperative PSA ≥ 10 ng/ml (hazard ratio--HR--3.03), Gleason score in surgical specimen 8 or higher (HR 3.42), and the finding of capsular penetration (HR 2.17). When only patients with PSA recurrence were considered, 16.3% developed distant metastasis. Probabilities of being free from distant disease after PSA recurrence were 97.7% and 86.9% at 2 and 5 years respectively (actuarial median time 110.8 months). Only 2 patients died from PCa, therefore disease-specific mortality analysis was not performed. Conclusions: Although an important proportion of patients present with PSA recurrence after RP in our setting, the prognosis in terms of development of metastatic disease is acceptable in the short-medium term. Anyway, further analysis will be needed to ascertain the evolution of these patients in the long term.
Answer: The revised ASTRO criterion, known as the "nadir + 2" definition, is used to define biochemical failure after radiotherapy for prostate cancer as a prostate-specific antigen (PSA) level at or greater than the absolute nadir PSA level plus 2 ng/mL. This definition has been examined for its applicability in comparing cancer control rates after radiotherapy to those after surgery (PUBMED:18279937). When applied to a series of patients treated with radical prostatectomy, the "nadir + 2" definition resulted in a systematic delay in the determination of biochemical failure. This is because patients who experienced a detectable PSA level took more than 5 years to progress to a PSA level of 2 ng/mL or greater. Therefore, the 5-year biochemical control rates using the definition of 0.2 ng/mL or more should be compared with the 10-year biochemical control rates using the "nadir + 2" definition to have a fair comparison (PUBMED:18279937). Additionally, the "nadir + 2" definition has been shown to be sensitive for predicting metastases and cancer-specific death after combined-modality radiation therapy, but it is more specific than a fixed surgical endpoint such as a 2-year posttreatment PSA of ≥0.2 ng/mL. The latter definition was found to be less suitable due to the PSA bounce phenomenon, which occurs in a significant number of patients (PUBMED:31629640). In conclusion, while it is possible to compare PSA recurrence-free survival after surgery and radiotherapy using the revised ASTRO criterion "nadir + 2," it requires careful consideration of the timing of biochemical failure determination. The systematic delay in failure detection with the "nadir + 2" definition must be accounted for to ensure an appropriate comparison between the two treatment modalities.
Instruction: Alcohol and cirrhosis: dose--response or threshold effect? Abstracts: abstract_id: PUBMED:15246203 Alcohol and cirrhosis: dose--response or threshold effect? Background/aims: General population studies have shown a strong association between alcohol intake and death from alcoholic cirrhosis, but whether this is a dose-response or a threshold effect remains unknown, and the relation among alcohol misusers has not been studied. Methods: A cohort of 6152 alcohol misusing men and women aged 15-83 were interviewed about drinking pattern and social issues and followed for 84,257 person-years. Outcome was alcoholic cirrhosis mortality. Data was analyzed by means of Cox-regression models. Results: In this large prospective cohort study of alcohol misusers there was a 27 fold increased mortality from alcoholic cirrhosis in men and a 35 fold increased mortality from alcoholic cirrhosis in women compared to the Danish population. Number of drinks per day was not significantly associated with death from alcoholic cirrhosis, since there was no additional risk of death from alcoholic cirrhosis when exceeding an average daily number of five drinks (>60 g/alcohol) in neither men nor women. Conclusions: The results indicate that alcohol has a threshold effect rather than a dose-response effect on mortality from alcoholic cirrhosis in alcohol misusers. abstract_id: PUBMED:2347553 The onset of sodium retention in experimental cirrhosis in rats is related to a critical threshold of liver function. Although sodium retention is a common complication in advanced liver disease, the relationship between liver and kidney function in cirrhosis has not been well established. The objective of this study was to investigate this relationship in an experimental model of cirrhosis induced in phenobarbital-treated rats by weekly intragastric administration of carbon tetrachloride. Liver function, measured by the aminopyrine breath test, and urinary sodium excretion on a constant salt diet, were measured weekly. Administration of carbon tetrachloride led to cirrhosis, sodium retention, ascites and a reduction in liver function as measured by the aminopyrine breath test in all 15 rats surviving the first 8 wk. The time to develop sodium retention (defined as a decrease in urinary sodium excretion rate to less than 0.3 mmol/24 hr) varied from 9 to 19 wk. The aminopyrine breath test rate constant of elimination was reduced from 24 x 10(-3) min-1 +/- 2 x 10(-3) min-1 at the start of carbon tetrachloride administration by 61% +/- 10% at the time sodium retention occurred. A linear decrease was seen in aminopyrine breath test rate constant of elimination in the weeks preceding the onset of sodium retention. Sodium retention occurred when aminopyrine breath test rate constant of elimination was reduced to a critical threshold of 10 x 10(-3) +/- 1 x 10(-3) min-1, and then permitted to recover above this level by withdrawal of carbon tetrachloride. Sodium retention occurred when the aminopyrine breath test rate constant of elimination fell below the threshold; this was followed by spontaneous diuresis when aminopyrine breath test rate constant of elimination improved above 10 x 10(-3) +/- 1 x 10(-3) min-1. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:28275747 Effect of HLA-DPA1 alleles on chronic hepatitis B prognosis and treatment response. Objective: Chronic hepatitis B (CHB) is a major health problem. The outcome of hepatitis B virus (HBV) infection is associated with variations in HLA-DPA1 alleles.
The aim of this study was to investigate possible associations of HLA-DPA1 alleles with treatment response and with hepatitis B virus e antigen (HBeAg) seroconversion. Methods: Eight different HLA-DPA1 alleles from 246 CHB patients were genotyped by polymerase chain reaction with sequence-specific primers at high resolution to investigate the association of HLA-DPA1 alleles with treatment response, development of cirrhosis, HBeAg seroconversion, and disease reoccurrence upon HBeAg loss. Results: There was no significant association between HLA-DPA1 alleles and treatment response, development of cirrhosis, or HBeAg seroconversion. However, HLA-DPA1*04:01 allele was significantly more frequently found in patients who redeveloped disease upon HBeAg seroconversion (100% vs 36.8%: p=0.037; Fisher's exact test). Conclusion: HLA-DPA1*04:01 allele may be a risk factor for reoccurrence of CHB after HBeAg seroconversion. abstract_id: PUBMED:30389550 Non-invasive response prediction in prophylactic carvedilol therapy for cirrhotic patients with esophageal varices. Background & Aims: Non-selective beta-blockers (NSBBs) are the mainstay of primary prophylaxis of esophageal variceal bleeding in patients with liver cirrhosis. We investigated whether non-invasive markers of portal hypertension correlate with hemodynamic responses to NSBBs in cirrhotic patients with esophageal varices. Methods: In this prospective cohort study, 106 cirrhotic patients with high-risk esophageal varices in the derivation cohort received carvedilol prophylaxis, and completed paired measurements of hepatic venous pressure gradient, liver stiffness (LS), and spleen stiffness (SS) at the beginning and end of dose titration. LS and SS were measured using acoustic radiation force impulse imaging. A prediction model for hemodynamic response was derived, and subject to an external validation in the validation cohort (63 patients). Results: Hemodynamic response occurred in 59 patients (55.7%) in the derivation cohort, and in 33 patients (52.4%) in the validation cohort, respectively. Multivariate logistic regression analysis identified that ΔSS was the only significant predictor of hemodynamic response (odds ratio 0.039; 95% confidence interval 0.008-0.135; p <0.0001). The response prediction model (ModelΔSS = 0.0490-2.8345 × ΔSS; score = (exp[ModelΔSS])/(1 + exp[ModelΔSS])) showed good predictive performance (area under the receiver-operating characteristic curve [AUC] = 0.803) using 0.530 as the threshold value. The predictive performance of the ModelΔSS in the validation set improved using the same threshold value (AUC = 0.848). Conclusion: A new model based on dynamic changes in SS exhibited good performance in predicting hemodynamic response to NSBB prophylaxis in patients with high-risk esophageal varices. Lay Summary: Non-selective beta-blockers are the mainstay of primary prophylaxis to prevent variceal bleeding in patients with cirrhosis and high-risk esophageal varices. This prospective study showed that a prediction model based on changes in spleen stiffness before vs. after dose titration might be a non-invasive marker for response to prophylactic non-selective beta-blocker (carvedilol) therapy in patients with cirrhosis and high-risk esophageal varices. ClinicalTrials.gov Identifier: NCT01943318. abstract_id: PUBMED:32829576 Clinical Utility of Mac-2 Binding Protein Glycosylation Isomer in Chronic Liver Diseases. An accurate evaluation of liver fibrosis is clinically important in chronic liver diseases.
Mac-2 binding protein glycosylation isomer (M2BPGi) is a novel serum marker for liver fibrosis. In this review, we discuss the role of M2BPGi in diagnosing liver fibrosis in chronic hepatitis B and C, chronic hepatitis C after sustained virologic response (SVR), and nonalcoholic fatty liver disease (NAFLD). M2BPGi predicts not only liver fibrosis but also the hepatocellular carcinoma (HCC) development and prognosis in patients with chronic hepatitis B and C, chronic hepatitis C after SVR, NAFLD, and other chronic liver diseases. M2BPGi can also be used to evaluate liver function and prognosis in patients with cirrhosis. M2BPGi levels vary depending on the etiology and the presence or absence of treatment. Therefore, the threshold of M2BPGi for diagnosing liver fibrosis and predicting HCC development has to be adjusted according to the background and treatment status. abstract_id: PUBMED:22490523 Virological response to entecavir is associated with a better clinical outcome in chronic hepatitis B patients with cirrhosis. Objective: Entecavir (ETV) is a potent inhibitor of viral replication in chronic hepatitis B and prolonged treatment may result in regression of fibrosis. The aim of this study was to investigate the effect of ETV on disease progression. Design: In a multicentre cohort study, 372 ETV-treated patients were investigated. Clinical events were defined as development of hepatocellular carcinoma (HCC), hepatic decompensation or death. Virological response (VR) was defined as HBV DNA <80 IU/ml. Results: Patients were classified as having chronic hepatitis B without cirrhosis (n=274), compensated cirrhosis (n=89) and decompensated cirrhosis (n=9). The probability of VR was not influenced by severity of liver disease (p=0.62). During a median follow-up of 20 months (IQR 11-32), the probability of developing clinical events was higher for patients with cirrhosis (HR 15.41 (95% CI 3.42 to 69.54), p<0.001). VR was associated with a lower probability of disease progression (HR 0.29 (95% CI 0.08 to 1.00), p=0.05) which remained after correction for established risk factors such as age. The benefit of VR was only significant in patients with cirrhosis (HR 0.22 (95% CI 0.05 to 0.99), p=0.04) and remained after excluding decompensated patients (HR 0.15 (95% CI 0.03 to 0.81), p=0.03). A higher HBV DNA threshold of 2000 IU/ml was not associated with the probability of disease progression (HR 0.20 (95% CI 0.03 to 1.10), p=0.10). Conclusion: VR to ETV is associated with a lower probability of disease progression in patients with cirrhosis, even after correction for possible baseline confounders. When using a threshold of 2000 IU/ml, the association between viral replication and disease progression was reduced, suggesting that complete viral suppression is essential for nucleoside/nucleotide analogue treatment, especially in patients with cirrhosis. abstract_id: PUBMED:36152765 Effect of variants in LGP2 on MDA5-mediated activation of interferon response and suppression of hepatitis D virus replication. Background & Aims: Retinoic acid inducible gene I (RIG-I)-like receptors (RLRs), including RIG-I, melanoma differentiation-associated protein 5 (MDA5), and laboratory of genetics and physiology 2 (LGP2), sense viral RNA to induce the antiviral interferon (IFN) response. LGP2, unable to activate the IFN response itself, modulates RIG-I and MDA5 signalling. HDV, a small RNA virus causing the most severe form of viral hepatitis, is sensed by MDA5.
The mechanism underlying IFN induction and its effect on HDV replication is unclear. Here, we aimed to unveil the role of LGP2 and clinically relevant variants thereof in these processes. Methods: RLRs were depleted in HDV susceptible HepaRGNTCP cells and primary human hepatocytes. Cells were reconstituted to express different LGP2 versions. HDV and IFN markers were quantified in a time-resolved manner. Interaction studies among LGP2, MDA5, and RNA were performed by pull-down assays. Results: LGP2 is essential for the MDA5-mediated IFN response induced upon HDV infection. This induction requires both RNA binding and ATPase activities of LGP2. The IFN response only moderately reduced HDV replication in resting cells but profoundly suppressed cell division-mediated HDV spread. An LGP2 variant (Q425R), predominating in Africans who develop less severe chronic hepatitis D, mediated detectably higher basal and faster HDV-induced IFN response as well as stronger HDV suppression. Mechanistically, LGP2 RNA binding was a prerequisite for the formation of stable MDA5-RNA complexes. MDA5 binding to RNA was enhanced by the Q425R LGP2 variant. Conclusions: LGP2 is essential to mount an antiviral IFN response induced by HDV and stabilises MDA5-RNA interaction required for downstream signalling. The natural Q425R LGP2 is a gain-of-function variant and might contribute to an attenuated course of hepatitis D. Impact And Implications: HDV is the causative pathogen of chronic hepatitis D, a severe form of viral hepatitis that can lead to cirrhosis and hepatocellular carcinoma. Upon infection, the human immune system senses HDV and mounts an antiviral interferon (IFN) response. Here, we demonstrate that the immune sensor LGP2 cooperates with MDA5 to mount an IFN response that represses HDV replication. We mapped LGP2 determinants required for IFN system activation and characterised several natural genetic variants of LGP2. One of them reported to predominate in sub-Saharan Africans can accelerate HDV-induced IFN responses, arguing that genetic determinants, possibly including LGP2, might contribute to slower disease progression in this population. Our results will hopefully prompt further studies on genetic variations in LGP2 and other components of the innate immune sensing system, including assessments of their possible impact on the course of viral infection. abstract_id: PUBMED:31933304 Response And Tolerability Of Sofosbuvir Plus Daclatasvir In Elderly Patients With Chronic Hepatitis-C. Background: The approval of direct acting anti-viral drugs has expanded the treatment access to all patient populations including elderly patients, who were previously neglected. We evaluated the response and tolerability of sofosbuvir plus daclatasvir in old age patients >60 year infected with HCV. Methods: In this prospective observational study, 100 patients were enrolled and were divided into two groups: aged 60-69 (group A) and aged 70 and older (group B). All the patients were given sofosbuvir plus daclatasvir. Sustained virologic response at 12 weeks was the primary endpoint. Response and tolerability of treatment were analysed and compared between these patient groups. Results: Hundred patients aged ≥60 years were treated with sofosbuvir plus daclatasvir. Sustained virologic response rate was 91% in group A (aged 60-69 year) and 87.8% in group B (aged 70 year and older). No significant adverse effect was noted in both groups. No treatment discontinuation was encountered.
Conclusions: Direct acting antiviral drug therapy is highly efficacious and safe for the treatment of HCV in older adults. abstract_id: PUBMED:31210717 Management of betablocked patients after sustained virological response in hepatitis C cirrhosis. Background: Current guidelines do not address the post-sustained virological response management of patients with baseline hepatitis C virus (HCV) cirrhosis and oesophageal varices taking betablockers as primary or secondary prophylaxis of variceal bleeding. We hypothesized that in some of these patients portal hypertension drops below the bleeding threshold after sustained virological response, making definitive discontinuation of the betablockers a safe option. Aim: To assess the evolution of portal hypertension, associated factors, non-invasive assessment, and risk of stopping betablockers in this population. Methods: Inclusion criteria were age > 18 years, HCV cirrhosis (diagnosed by liver biopsy or transient elastography > 14 kPa), sustained virological response after direct-acting antivirals, and baseline oesophageal varices under stable, long-term treatment with betablockers as primary or secondary bleeding prophylaxis. Main exclusion criteria were prehepatic portal hypertension, isolated gastric varices, and concomitant liver disease. Blood tests, transient elastography, and upper gastrointestinal endoscopy were performed. Hepatic venous pressure gradient (HVPG) was measured five days after stopping betablockers. Betablockers could be stopped permanently if gradient was < 12 mmHg, at the discretion of the attending physician. Results: Sample comprised 33 patients under treatment with propranolol or carvedilol: median age 64 years, men 54.5%, median Model for End-Stage Liver Disease (MELD) score 9, Child-Pugh score A 77%, median platelets 77.000 × 10³/µL, median albumin 3.9 g/dL, median baseline transient elastography 24.8 kPa, 88% of patients received primary prophylaxis. Median time from end of antivirals to gradient was 67 wk. Venous pressure gradient was < 12 mmHg in 13 patients (39.4%). In univariate analysis the only associated factor was a MELD score decrease from baseline. On endoscopy, variceal size regressed in 19/27 patients (70%), although gradient was ≥ 12 mmHg in 12/19 patients. The elastography area under receiver operating characteristic for HVPG ≥ 12 mmHg was 0.62. Betablockers were stopped permanently in 10/13 patients with gradient < 12 mmHg, with no bleeding episodes after a median follow-up of 68 wk. Conclusion: Portal hypertension dropped below the bleeding threshold in 39% of patients more than one year after antiviral treatment. Endoscopy and transient elastography are inaccurate for reliable detection of this change. Stopping betablockers permanently seems uneventful in patients with a gradient < 12 mmHg.
Methods: A retrospective chart review was conducted from January 2014 to December 2018 on 1256 HCV patients of all genotypes (1-6) who had received DAA therapy at two large hepatology referral centers. Baseline demographic data, along with VL at week four, EOT, and SVR12/24 time points were collected for patients that had positive EOT VL. Treatment outcome for any patient with positive EOT VL was noted. Results: Eight out of 1256 patients treated with varying DAA therapies were observed to have low positive EOT VLs ranging from <15 to 235 IU/mL. One patient had a negative EOT VL, but 23 IU/mL at week four after EOT. All eight patients who had low positive EOT VLs and one patient who had a low positive VL at four weeks after EOT achieved SVR at weeks 12 and 24. One of the eight patients had cirrhosis. The majority of patients were genotype 1a. Conclusion: In the DAA treatment era, low levels of detectable HCV RNA at EOT do not predict treatment failure. Answer: The relationship between alcohol intake and mortality from alcoholic cirrhosis among alcohol misusers appears to be a threshold effect rather than a dose-response effect. In a large prospective cohort study of 6152 alcohol misusers, it was found that there was no additional risk of death from alcoholic cirrhosis when exceeding an average daily number of five drinks (>60 g/alcohol) in both men and women. This indicates that once a certain threshold of alcohol consumption is reached, the risk of mortality from alcoholic cirrhosis does not increase with additional alcohol intake (PUBMED:15246203).
Instruction: Neurologic risk in growing rod spine surgery in early onset scoliosis: is neuromonitoring necessary for all cases? Abstracts: abstract_id: PUBMED:19680103 Neurologic risk in growing rod spine surgery in early onset scoliosis: is neuromonitoring necessary for all cases? Study Design: Retrospective case series from a multicenter database. Objective: To evaluate the risk of neurologic injury during growing rod surgeries and to determine whether intraoperative neuromonitoring is necessary for all growing rod procedures. Summary Of Background Data: Although the use of growing rod constructs for early-onset spinal deformity has become a commonly accepted treatment, the incidence of neurologic events during growing rod surgeries remains unknown. Methods: We reviewed data from a multicenter database on 782 growing rod surgeries performed in 252 patients. VEPTR devices and any constructs with rib attachments were excluded. A questionnaire was sent to all surgeons contributing cases requesting detailed information about all neurologic events associated with any growing rod surgery. Results: There were 782 growing rod surgeries performed on 252 patients including 252 primary growing rod implantations, 168 implant exchanges, and 362 lengthenings. Five hundred sixty-nine of 782 (73%) cases were performed with neuromonitoring. Only one clinical injury occurred in the series, resulting in an injury rate of 0.1% (1/782). This deficit occurred during an implant exchange while attempting pedicle screw placement, and resolved within 3 months. There were 2 cases with neuromonitoring changes during primary implant surgeries (0.9%, 2/231), 1 change during implant exchanges (0.9%, 1/116), and 1 neuromonitoring change during lengthenings (0.5%, 1/222). The single monitoring change that occurred during a lengthening was in a child with an intracanal tumor who also had a monitoring change during the primary surgery. There are anecdotal cases (outside this study group series) of neuromonitoring changes during simple lengthenings in children with uneventful primary implantations. Conclusion: Based on our study, the largest reported series of growing rod surgeries, the rate of neuromonitoring changes during primary growing rod implantation (0.9%) and exchange (0.9%) justifies the use of intraoperative neuromonitoring during these surgeries. As there were no neurologic events in 361 lengthenings in patients with no previous neurologic events, the question may be raised as to whether intraoperative neuromonitoring is necessary for simple lengthenings in these patients. However, caution should be maintained when interpreting our results as anecdotal cases of neurologic changes from simple lengthenings do exist outside of this series. abstract_id: PUBMED:27163968 Growing rod erosion through the lamina causing spinal cord compression in an 8-year-old girl with early-onset scoliosis. Background Context: Early-onset scoliosis often occurs by the age of 5 years and is attributed to many structural abnormalities. Syndromic early-onset scoliosis is considered one of the most aggressive types of early-onset scoliosis. Treatment starts with serial casting and bracing, but eventually most of these patients undergo growth-sparing procedures, such as a single growing rod, dual growing rods, or a vertical expandable titanium prosthetic rib. 
Purpose: This case report aimed to describe an unusual complication of erosion of a growing rod through the lamina that caused spinal cord compression in an 8-year-old girl with early-onset scoliosis. Study Design: This is a case report. Methods: A retrospective chart review was used to describe the clinical course and radiographic findings of this case after rod erosion into the spinal canal. Results: The patient underwent successful revision surgery removing the rod without neurologic complications. Conclusions: Patients with syndromic early-onset scoliosis are more prone to progressive curves and severe rotational deformity. We believe that the severe kyphotic deformity in addition to the dysplastic nature of the deformity in this population may predispose them to this unusual complication. abstract_id: PUBMED:27927389 Comparison of Growing Rod Instrumentation Versus Serial Cast Treatment for Early-Onset Scoliosis. Study Design: A comparison of 2 methods of early-onset scoliosis treatment using radiographic measures and complication rates. Objectives: To determine whether a delaying tactic (serial casting) has comparable efficacy to a surgical method (insertion of growing rod instrumentation [GRI]) in the initial phase of early-onset deformity management. Summary Of Background Data: Serial casts are used in experienced centers to delay operative management of curves of surgical magnitude (greater than 50°) in children up to age 6 years. Methods: A total of 27 casted patients from 3 institutions were matched with 27 patients from a multicenter database according to age (within 6 months of each other), curve magnitude (within 10° of each other), and diagnosis. Outcomes were compared according to major curve magnitude, spine length (T1-S1), duration and number of treatment encounters, and complications. Results: There was no difference in age (5.5 years) or initial curve magnitude (65°) between groups, which reflects the accuracy of the matching process. Six pairs of patients had neuromuscular diagnoses, 11 had idiopathic deformities, and 10 had syndromic scoliosis. Growing rod instrumentation patients had smaller curves (45.9° vs. 64.9°; p = .002) at follow-up, but there was no difference in absolute spine length (GRI = 32.0 cm; cast = 30.6 cm; p = .26), even though GRI patients had been under treatment for a longer duration (4.5 vs. 2.4 years; p < .0001) and had undergone a mean of 5.5 lengthenings compared with 4.0 casts. Growing rod instrumentation patients had a 44% complication rate, compared with 1 cast complication. Of 27 casted patients, 15 eventually had operative treatment after a mean delay of 1.7 years after casting. Conclusions: Cast treatment is a valuable delaying tactic for younger children with early-onset scoliosis. Spine deformity is adequately controlled, spine length is not compromised, and surgical complications associated with early GRI treatment are avoided. abstract_id: PUBMED:37521397 Nuances in Growing Rod Surgery: Our Initial Experience and Literature Review. Introduction: Growing rod construct is one of the most widely acknowledged treatment modalities for early-onset scoliosis around the world, but it is not without complications. Throughout the course of treatment, numerous planned and inadvertent surgical interventions are required, which increase the complexity of the treatment. We share our experience with case examples along with extensive literature search and review to get an insight and document the complications with growing rod treatment.
Case Report: These cases underwent surgery with dual growing rods for thoracolumbar idiopathic scoliosis in view of failed conservative treatment and progressive deformity. Superficial infection occurred in one case, and recurrence of deformity was a common finding, although correction of the deformity and final fusion were achieved in these cases. Breakage of screws, autofusion of the spanned segments, and profuse bony growth over the implants were common findings. Fibrosis and scar tissue from the previous surgeries made exposure and corrective osteotomy difficult. Conclusion: Growing rod surgery has high complication rates. Repeated surgical and anesthesia exposure poses a great risk to the body and immature skeleton of the young patient. Previous studies have put forth many possible courses of action to lower the complication rates but have met with variable results. A better implant design and improved surgical efficacy are needed to cut down the number of complications and surgical interventions in growing rod surgeries. abstract_id: PUBMED:32590351 Growing rod technique with prior foundation surgery and sublaminar taping for early-onset scoliosis. Objective: The aim of this study was to show the surgical results of growing rod (GR) surgery with prior foundation surgery (PFS) and sublaminar taping at an apex vertebra. Methods: Twenty-two early-onset scoliosis (EOS) patients underwent dual GR surgery with PFS and sublaminar taping. PFS was performed prior to rod placement, including exposure of distal and proximal anchor areas and anchor instrumentation filled with a local bone graft. After a period of 3-5 months for the anchors to become solid, dual rods were placed for distraction. The apex vertebra was exposed and fastened to the concave side of the rods using sublaminar tape. Preoperative, post-GR placement, and final follow-up radiographic parameters were measured. Complications during the treatment period were evaluated using the patients' clinical records. Results: The median age at the initial surgery was 55.5 months (range 28-99 months), and the median follow-up duration was 69.5 months (range 25-98 months). The median scoliotic curves were 81.5° (range 39°-126°) preoperatively, 30.5° (range 11°-71°) after GR placement, and 33.5° (range 12°-87°) at the final follow-up. The median thoracic kyphotic curves were 45.5° (range 7°-136°) preoperatively, 32.5° (range 15°-99°) after GR placement, and 42° (range 11°-93°) at the final follow-up. The median T1-S1 lengths were 240.5 mm (range 188-305 mm) preoperatively, 286.5 mm (range 232-340 mm) after GR placement, and 337.5 mm (range 206-423 mm) at the final follow-up. Complications occurred in 6 patients (27%). Three patients had implant-related complications, 2 patients had alignment-related complications, and 1 patient had a wound-related complication. Conclusions: A dual GR technique with PFS and sublaminar taping showed effective correction of scoliotic curves and a lower complication rate than previous reports when a conventional dual GR technique was used. abstract_id: PUBMED:29200491 Metallosis: A Complication in the Guided Growing Rod System Used in Treatment of Scoliosis. Soft tissue reaction following metallic debris formation with the use of a guided growing rod system has not been previously reported in humans. The purpose of this study is to report complications caused by metallosis in a guided growing rod system.
A 9-year-old female patient underwent treatment for progressive idiopathic scoliosis (with a Cobb's angle of 71°) with the guided growing rod system. Her Cobb's angle was corrected to 13° with the index surgery. During the 5-year postoperative period, she manifested recurrent episodes of skin irritation and progressive worsening of lateral curvature of the spine to an angle of 57°. Furthermore, at her final follow-up, Risser stage 4 with a gain in height of 26.4 cm was achieved. Considering adequate growth attainment and deterioration in the curvature, revision surgery with fusion was performed. A postoperative Cobb's angle of 23° was achieved with the final correction. During the revision surgery, signs of implant wear and metallosis were observed at the location of the unconstrained screws. On histological evaluation, chronic inflammation with foreign body granules was seen. However, the titanium level in the body was within normal range. She was discharged without any complications. More research on implant wear as a complication in the guided growing rod system is necessary before its widespread use. The occurrence of metallosis with the use of the guided growing rod system in growing young children should be considered when designing the implants. abstract_id: PUBMED:34547389 Single distraction-rod constructs in severe early-onset scoliosis: Indications and outcomes. Background Context: Since the study of Thompson et al. in 2005, use of dual-growing rod constructs has become the gold standard for operative treatment in early-onset scoliosis. However, use of dual-growing rod constructs may not be possible, due to patient size and the type, location and severity of the spinal deformity. Purpose: The purpose of this study is to: (1) describe the deformities treated with single-growing rod constructs, and (2) report the outcomes of single-growing rods since 2005. Study Design: Observational, descriptive case series. Methods: A prospective, multi-center, international database of early-onset scoliosis patients was queried to identify all patients with single traditional growing rods (sTGR) or magnetically-controlled growing rods (sMCGR) since 2005. Patients were excluded if there was more than 1 rod or if there were less than 2 years of follow-up postoperatively. Twenty-five patients (13 female, 12 male) who satisfied the inclusion and exclusion criteria were identified from the database query. Results: Mean age at index surgery was 4.7 years (1.3 to 9.3 years) and mean follow-up was 4.3 years (2.0 to 10.6 years). Eleven patients were classified as congenital (all mixed-type), six neuromuscular, five idiopathic and three syndromic. Proximal foundations were ribs in 23 patients and pedicle screws in two patients. The distal foundations were the spine in 25 patients and three pelvic S-hooks. All single rods were on the concave side of the deformity. Interpretation of preoperative radiographs determined that in 72% (18/25) of cases dual growing rods would be difficult and/or suboptimal due to patient size (longitudinal and/or weight) and/or kyphosis/kyphoscoliosis with severe rotation. Maximal coronal deformity improved 30% (83.9 degrees to 58.6 degrees) at latest follow-up. Maximal kyphosis increased 17% (45.6 degrees to 57.4 degrees). Postoperative length increase: T1-T12, 17.0 mm (4.6 mm/year); T1-S1, 34 mm (9.4 mm/year). Total secondary surgeries for TGRs were 100: 66 lengthenings, 32 revisions, two unknown.
10 MCGR secondary surgeries occurred in nine patients (seven for maximized actuators and three for foundation migration). At latest follow-up 20 continued with lengthenings (five TGR & 15 MCGR), four underwent definitive fusions, and one completed lengthening (implants retained). Conclusions: Treatment of severe EOS with single rods demonstrated a 30% coronal correction. T1-S1 length increased at 9.4 mm/year and T1-T12 length at 4.6 mm/year, which are comparable to published reports on dual MCGRs. Single TGRs and MCGRs in EOS can provide acceptable short-term outcomes when dual rods are not deemed appropriate. Clinical Significance: The use of single growing rod constructs, in the 4-8 year old patient with EOS, can achieve reasonable short-term radiographic outcomes. abstract_id: PUBMED:25843064 Ultrasound control of magnet growing rod distraction in early onset scoliosis. The growing rod technique is currently one of the most common procedures used in the management of early onset scoliosis. However, in order to preserve spine growth and control the deformity it requires frequent surgeries to distract the rods. Magnetically driven growing rods have recently been introduced with the same treatment goal, but without the inconvenience of repeated surgical distractions. One of the limitations of this technical advance is an increase in radiation exposure due to the increase in distraction frequency compared to conventional growing rods. An improvement of the original technique is presented, proposing a solution to the inconvenience of multiple radiation exposures using ultrasound technology to control the distraction process of magnetically driven growing rods. abstract_id: PUBMED:36275065 Efficacy of the growing rod technique on kyphotic early-onset scoliosis. Objective: To explore the application of the growing rod (GR) technique in the treatment of kyphotic early-onset scoliosis (KEOS) and analyze its surgical efficacy and safety. Methods: The clinical data of 30 children with KEOS who received GR treatment at our department between January 2016 and December 2019 were analyzed retrospectively. There were 18 cases with normal kyphosis (normal kyphosis group) and 12 cases with excessive kyphosis (excessive kyphosis group). Both groups received GR treatment, and all patients received anteroposterior and lateral spine X-ray examinations before, after the initial surgery, and at the final follow-up. The surgical conditions and imaging parameters of the two groups were compared, and the complications were recorded. Results: There was no statistical difference in the Cobb angle of the major curve, apical vertebral translation (AVT), and trunk shift (TS) between the two groups before, after the first surgery, and at the final follow-up (P > 0.05). The Cobb angle of the major curve, the AVT, and the TS in both groups after the first surgery were lower than before the first surgery (P < 0.05), but there was no statistical difference between the two groups (P > 0.05). At the final follow-up, there were increases in both the Cobb angle and the AVT (P < 0.05), while the TS decreased in comparison with findings after the first surgery (P < 0.05).
Before and after the first surgery and at the final follow-up appointment, there was a statistical difference in the degree of thoracic kyphosis (TK) between the two groups (P < 0.05), while there was no statistical difference in terms of lumbar lordosis (LL), the proximal junctional angle (PJA), and the distal junctional angle (DJA) (P > 0.05). After the first surgery, TK and LL showed a significant moderate response in both groups (P < 0.05), while there was no significant difference in TK, LL, PJA, and DJA compared with the results at the final follow-up (P > 0.05). Conclusions: The use of the GR technique can improve kyphosis in KEOS treatment. abstract_id: PUBMED:30831304 Preliminary Study of a New Growing Rod System in Immature Swine Model. Background: Growing rod techniques have been demonstrated to be a valuable treatment in early-onset scoliosis; however, repeated surgeries and general anesthesia cannot be avoided. Methods: This study included 12 immature swine that were randomly assigned to receive either a novel growing rod system (experimental group) or a traditional growing rod system (control group). Lengthening was undertaken at 4-week intervals, for a total observation period of 12 weeks. Radiography, computed tomography, and motion analysis of the spine were conducted to evaluate rod extension, growth and nonfusion of the spine, and fixation. Results: One swine (control group) was excluded from the analysis because of a deep wound infection at 1 week after the initial operation. No complications were observed in the remaining 11 swine. In the experimental group, the average lengthening operation lasted 12.1 ± 3.1 minutes, and average incision length was 1.1 ± 0.2 cm; both values were significantly less compared with the control group (P < 0.001). No significant between-group differences in mean trunk length, body mass, or thickness of cephalad-instrumented or caudad-instrumented vertebrae and intervertebral disks were present before the initial operation or at the final assessment (12 weeks after operation; P > 0.05). Spinal growth and motion of instrumented spinal segments were conserved. Conclusions: The novel growing rod system is safe and effective in immature swine, preserving spine growth potential and involving less surgical trauma. Answer: The question of whether neuromonitoring is necessary for all cases of growing rod spine surgery in early onset scoliosis is addressed in the study by PUBMED:19680103. The study evaluated the risk of neurologic injury during growing rod surgeries and the necessity of intraoperative neuromonitoring. The retrospective case series from a multicenter database included 782 growing rod surgeries performed on 252 patients. The surgeries consisted of primary growing rod implantations, implant exchanges, and lengthenings. Neuromonitoring was used in 73% of the cases. The study found only one clinical injury, resulting in an injury rate of 0.1%. Neuromonitoring changes were observed in 0.9% of primary implant surgeries and implant exchanges, and 0.5% during lengthenings. The study concluded that the rate of neuromonitoring changes during primary growing rod implantation and exchange justifies the use of intraoperative neuromonitoring during these surgeries. However, there were no neurologic events in 361 lengthenings in patients with no previous neurologic events, raising the question of whether neuromonitoring is necessary for simple lengthenings in these patients.
The authors caution that anecdotal cases of neurologic changes from simple lengthenings do exist outside of this series, suggesting that while the risk may be low, there is still a potential for neurologic events that could be detected by neuromonitoring. In summary, the study suggests that intraoperative neuromonitoring is justified during primary growing rod implantation and exchange due to the observed rate of neuromonitoring changes. However, the necessity of neuromonitoring for simple lengthenings in patients with no previous neurologic events is less clear, and while the risk appears to be low, the potential for neurologic changes still exists.
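As an illustrative aside, the sparse-event rates quoted above (1 injury in 782 surgeries, 2 monitoring changes in 231 primary implantations, 1 in 116 exchanges, 1 in 222 lengthenings) can be recomputed together with exact binomial confidence intervals, which show how wide the uncertainty around a single event is. The sketch below is a minimal Python example; the counts are taken from PUBMED:19680103, but the choice of the Clopper-Pearson interval is an assumption made here for illustration, not a method the study reports.

from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    # Exact (Clopper-Pearson) confidence interval for a binomial proportion.
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Counts as reported in the abstract (PUBMED:19680103).
events = {
    "clinical injury, all surgeries": (1, 782),
    "monitoring change, primary implantation": (2, 231),
    "monitoring change, implant exchange": (1, 116),
    "monitoring change, lengthening": (1, 222),
}

for label, (k, n) in events.items():
    rate = k / n
    lo, hi = clopper_pearson(k, n)
    print(f"{label}: {rate:.2%} (95% CI {lo:.3%} to {hi:.2%})")

On these counts the overall injury rate of 1/782 is about 0.13%, with an exact 95% interval of roughly 0.003% to 0.7%, which is consistent with the answer's reading of the risk as low but not zero.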
Instruction: Does smoking influence the type of age related macular degeneration causing visual impairment? Abstracts: abstract_id: PUBMED:16597668 Does smoking influence the type of age related macular degeneration causing visual impairment? Aims: To assess the influence of smoking on the type of age related macular degeneration (AMD) lesion causing visual impairment in a large cohort of patients with AMD at a tertiary referral UK centre. Methods: Prospective, observational, cross sectional study to analyse smoking data on 711 subjects, of western European origin, in relation to the type of AMD lesion present. Colour fundus photographs were graded according to a modified version of the international classification. Multiple logistic regression analysis was performed, adjusting for age and sex using the statistical package SPSS ver 9.0 for Windows. chi(2) tests were also used to assess pack year and ex-smoker data. Results: 578 subjects were graded with neovascular AMD and 133 with non-neovascular AMD. There was no statistically significant association found between smoking status or increasing number of pack years and type of AMD lesion. The odds of "current smokers" compared to "non-smokers" developing neovascular rather than non-neovascular AMD when adjusted for age and sex was 1.88 (95% CI: 0.91 to 3.89; p = 0.09). Conclusions: Smoking is known to be a risk factor for AMD and this study suggests that smokers are at no more risk of developing neovascular than atrophic lesions. abstract_id: PUBMED:15834082 28,000 Cases of age related macular degeneration causing visual loss in people aged 75 years and above in the United Kingdom may be attributable to smoking. Background: Age related macular degeneration (AMD) causing visual impairment is common in older people. Previous studies have identified smoking as a risk factor for AMD. However, there is limited information for the older population in Britain. Methods: Population based cross sectional analytical study based in 49 practices selected to be representative of the population of Britain. Cases were people aged 75 years and above who were visually impaired (binocular acuity <6/18) as a result of AMD. Controls were people with normal vision (6/6 or better). Smoking history was ascertained using an interviewer administered questionnaire. Results: After controlling for potentially confounding factors, current smokers were twice as likely to have AMD compared to non-smokers (odds ratio 2.15, 95% CI 1.42 to 3.26). Ex-smokers were at intermediate risk (odds ratio 1.13, 0.86 to 1.47). People who stopped smoking more than 20 years previously were not at increased risk of AMD causing visual loss. Approximately 28,000 cases of AMD in older people in the United Kingdom may be attributable to smoking. Conclusion: This is the largest study of the association of smoking and AMD in the British population. Smoking is associated with a twofold increased risk of developing AMD. An increased risk of AMD, which is the most commonly occurring cause of blindness in the United Kingdom, is yet another reason for people to stop smoking and governments to develop public health campaigns against this hazard. abstract_id: PUBMED:9635902 The association between cigarette smoking and ocular diseases. Tobacco smoke is composed of as many as 4,000 active compounds, most of them toxic on either acute or long-term exposure. Many of them are also poisonous to ocular tissues, affecting the eye mainly through ischemic or oxidative mechanisms.
The list of ophthalmologic disorders associated with cigarette smoking continues to grow. Most chronic ocular diseases, with the possible exception of diabetic retinopathy and primary open-angle glaucoma, appear to be associated with smoking. Both cataract development and age-related macular degeneration, the leading causes of severe visual impairment and blindness, are directly accelerated by smoking. Other common ocular disorders, such as retinal ischemia, anterior ischemic optic neuropathy, and Graves ophthalmopathy, are also significantly linked to this harmful habit. Tobacco smoking is the direct cause of tobacco-alcohol amblyopia, a once common but now rare disease characterized by severe visual loss, which is probably a result of toxic optic nerve damage. Cigarette smoking is highly irritating to the conjunctival mucosa, also affecting the eyes of nonsmokers by passive exposure (secondhand smoking). The dangerous effects of smoking are transmitted through the placenta, and offspring of smoking mothers are prone to develop strabismus. Efforts should be directed toward augmenting the campaign against tobacco smoking by adding the increased risk of blindness to the better-known arguments against smoking. We should urge our patients to quit smoking, and we must make them keenly aware of the afflictions that can develop when smoke gets in our eyes. abstract_id: PUBMED:21897240 Effects of smoking on ocular health. Purpose Of Review: To review recent data on the effects of smoking on ocular health. Recent Findings: Smoking has been associated with a myriad of negative ocular health effects including age-related macular degeneration (ARMD) and cataract. Most recently, several papers have demonstrated a connection between smoking and ocular inflammation. Smokers are both more likely to develop ocular inflammation and to have more severe disease as manifested by poorer presenting vision and a higher risk of recurrent disease compared to nonsmokers. Smoking has also been shown to enhance the effect of genetic susceptibility with regards to the presence and development of ARMD. Finally, the negative effects of smoking on ocular disease have been increasingly documented in nonwhite populations outside of the USA. However, despite the abundance of data, public awareness on the adverse consequences of smoking on vision is lacking in the USA. In contrast, Australia improved public knowledge by launching a successful antitobacco health campaign highlighting the effects of smoking on ocular health. Summary: These findings suggest that eye care professionals should discuss and offer options for smoking cessation as part of the management of patients with ocular diseases, especially in those with ocular inflammation, ARMD, lens opacities/cataract, and thyroid-associated orbitopathy. Health campaigns using existing medical data can improve public awareness on the connection between tobacco and visual impairment. abstract_id: PUBMED:20238045 The pathophysiology of cigarette smoking and age-related macular degeneration. Age-related macular degeneration (AMD) is the most common form of visual impairment, in people over 65, in the Western world. AMD is a multifactorial disease with genetic and environmental factors influencing disease progression. Cigarette smoking is the most significant environmental influence with an estimated increase in risk of 2- to 4-fold. 
Smoke-induced damage in AMD is mediated through direct oxidation, depletion of antioxidant protection, immune system activation and atherosclerotic vascular changes. Moreover, cigarette smoke induces angiogenesis promoting choroidal neovascularisation and progression to neovascular AMD. Further investigation into the effects of cigarette smoke through in vitro and in vivo experimentation will provide a greater insight into the pathogenesis of age-related macular degeneration. abstract_id: PUBMED:9703035 Cigarette smoking and age-related macular degeneration. Background: Age-related macular degeneration (ARMD) is one of the leading causes of severe visual impairment among older Americans. Several hypotheses have been proposed regarding the pathogenesis of ARMD. The possible association of cigarette smoking and ARMD remains controversial. Methods: Studies concerning the relationship between cigarette smoking and ARMD are identified through the use of Vision Articles Online and PubMed. Articles published since 1970 are reviewed. Results: The literature reviewed strongly supports a link between smoking and ARMD. Conclusions: The identification of smoking as a risk factor can lead to early intervention. Such intervention may lessen visual loss from this disease, which has limited medical treatment options. abstract_id: PUBMED:11760242 Smoke gets in your eyes: smoking and visual impairment in New Zealand. Aim: To estimate the burden of visual impairment attributable to smoking in New Zealand. Methods: Review of Medline-indexed literature on the relationship between smoking and eye disease and use of relevant New Zealand morbidity and smoking prevalence data. Results: The international literature indicates there is strong evidence that smoking is a major cause of eye disease and blindness--particularly for cataracts and age-related macular degeneration (AMD). Using the most relevant international risk estimates, we estimated that 1335 people who are registered blind in New Zealand have AMD attributable to current and past smoking (26.8% of all AMD cases in the 55 years plus age-group). It was also estimated that 31 of the registered cases of visual impairment due to cataract and 396 hospitalisations for cataract surgery per year, are attributable to smoking. While subject to various methodological limitations, these estimates are probably under-estimates of the true burden of eye disease attributable to smoking. Conclusions: Smoking is a major cause of untreatable visual impairment and also a significant reason for cataract surgery in New Zealand. There is a need for more intensive tobacco control activities in New Zealand. abstract_id: PUBMED:16288198 Joint effects of smoking history and APOE genotypes in age-related macular degeneration. Purpose: Age-related macular degeneration (AMD) is a leading cause of severe visual impairment in older adults worldwide. Cigarette smoking is one of the most consistently identified environmental risk factors for the disease. Several studies have implicated the apolipoprotein E (APOE) gene as modulating AMD risk. The purpose of this study was to investigate whether APOE genotypes modify the smoking-associated risk of AMD. Methods: Patients with early- and late-stage AMD (n=377) and a group of unrelated ethnically matched controls of similar age (n=198) were ascertained at two sites in the southeastern United States. Smoking history and APOE genotype distribution in cases and controls were compared by multivariable logistic regression. 
Results: All measures of smoking history showed a highly significant association with AMD, and odds ratio estimates were consistently higher when only patients with exudative AMD were compared to controls. Main effects of APOE genotypes in the overall analysis did not reach statistical significance. The analysis of exudative AMD patients suggested that the risk increase due to smoking was greatest in carriers of the APOE-2 allele, with genotype-specific odds ratios increasing from 1.9 for APOE-4 carriers (p=0.11) to 2.2 for APOE-3/3 homozygotes (p=0.007) to 4.6 (p=0.001) for APOE-2 carriers, compared to nonsmoking APOE-3/3 individuals. Measures of statistical interaction indicated more than additive, and possibly more than multiplicative, joint effects of APOE and smoking history, however, the interaction was not statistically significant on either scale. Conclusions: We hypothesize that a history of smoking is a stronger risk factor for exudative AMD in carriers of the APOE-2 allele, compared to carriers of APOE-4 and the most common APOE-3/3 genotype. To further clarify the association of AMD with APOE and smoking history, future studies should consider both factors simultaneously. abstract_id: PUBMED:21672408 Smoking and visual impairment among older adults with age-related eye diseases. Introduction: Tobacco use is the leading preventable cause of death in the United States. Visual impairment, a common cause of disability in the United States, is associated with shorter life expectancy and lower quality of life. The relationship between smoking and visual impairment is not clearly understood. We assessed the association between smoking and visual impairment among older adults with age-related eye diseases. Methods: We analyzed Behavioral Risk Factor Surveillance System data from 2005 through 2008 on older adults with age-related eye diseases (cataract, glaucoma, age-related macular degeneration, and diabetic retinopathy; age ≥50 y, N = 36,522). Visual impairment was defined by self-reported difficulty in recognizing a friend across the street or difficulty in reading print or numbers. Current smokers were respondents who reported having smoked at least 100 cigarettes ever and still smoked at the time of interview. Former smokers were respondents who reported having ever smoked at least 100 cigarettes but currently did not smoke. We used multivariate logistic regressions to examine the association and to adjust for potential confounders. Results: Among respondents with age-related eye diseases, the estimated prevalence of visual impairment was higher among current smokers (48%) than among former smokers (41%, P < .05) and respondents who had never smoked (42%, P < .05). After adjustment for age, sex, race/ethnicity, education, and general health status, current smokers with age-related eye diseases were more likely to have visual impairment than respondents with age-related eye diseases who had never smoked (odds ratio, 1.16, P < .05). Furthermore, respondents with cataract who were current smokers were more likely to have visual impairment than respondents with cataract who had never smoked (predictive margin, 44% vs 40%, P = .03), and the same was true for respondents with age-related macular degeneration (65% of current smokers vs 57% of never smokers, P = .02). This association did not hold true among respondents with glaucoma or diabetic retinopathy.
Conclusion: Smoking is linked to self-reported visual impairment among older adults with age-related eye diseases, particularly cataract and age-related macular degeneration. Longitudinal evaluation is needed to assess smoking cessation's effect on vision preservation. abstract_id: PUBMED:8827966 A prospective study of cigarette smoking and age-related macular degeneration in women. Objective: To evaluate the relationship between cigarette smoking and incidence of age-related macular degeneration (AMD) among women. Design: Prospective cohort study with 12 years of follow-up (1980 to 1992), in which information on smoking habits was updated every 2 years. Setting: Eleven states throughout the United States. Participants: A total of 31 843 registered nurses enrolled in the Nurses' Health Study who were aged 50 to 59 years in 1980 and did not report a diagnosis of cancer or AMD at the beginning of the study. Additional women entered the analytic cohort as they reached 50 years of age. Main Outcome Measure: Incidence of AMD with visual loss. Results: During 556 338 person-years of follow-up, 215 women were newly diagnosed as having AMD. After adjusting for other risk factors for AMD, women who currently smoked 25 or more cigarettes per day had a relative risk (RR) of AMD of 2.4 (95% confidence interval [CI], 1.4-4.0) compared with women who never smoked. Past smokers of this amount also had a 2-fold increased risk (RR=2.0; 95% CI, 1.2-3.4) relative to never smokers. Compared with current smokers, little reduction in risk was suggested even after quitting smoking for 15 or more years. Risk of AMD also increased with an increasing number of pack-years smoked (P for trend <.001); among women who smoked for 65 or more pack-years, the risk was 2.4 times the risk of never smokers (95% CI, 1.5-3.8). Analyses of dry and exudative types of AMD and other alternative definitions of AMD revealed similar results. Conclusions: Cigarette smoking is an independent and avoidable risk factor for AMD among women. Because AMD is the most common cause of severe visual impairment among the elderly and treatment is not available or is ineffective for most patients, reducing the risk of this disease is another important reason to avoid smoking. Answer: Based on the provided abstracts, smoking does not appear to significantly influence the type of age-related macular degeneration (AMD) lesion causing visual impairment. A study analyzing data from 711 subjects did not find a statistically significant association between smoking status or increasing number of pack years and the type of AMD lesion (PUBMED:16597668). The odds of current smokers developing neovascular rather than non-neovascular AMD, when adjusted for age and sex, was 1.88, but this was not statistically significant (p = 0.09). Therefore, while smoking is known to be a risk factor for AMD, this study suggests that smokers are at no more risk of developing neovascular than atrophic lesions.
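As an illustrative aside, the key quantities in these abstracts, an odds ratio with its 95% confidence interval and a count of smoking-attributable cases, follow from standard case-control formulas. The sketch below is a minimal Python example: the 2x2 cell counts, the assumed smoking prevalence, and the assumed total number of AMD cases are hypothetical placeholders chosen only to illustrate the arithmetic, while the published estimates (odds ratio 2.15 and about 28,000 attributable cases in PUBMED:15834082) came from adjusted analyses whose inputs are not given in the abstract.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # Crude odds ratio and Wald 95% CI from a 2x2 table:
    # a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls.
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts, not the study's data.
or_, lower, upper = odds_ratio_ci(a=60, b=140, c=90, d=450)
print(f"crude OR = {or_:.2f} (95% CI {lower:.2f} to {upper:.2f})")

# Levin's population attributable fraction, treating the reported odds ratio
# as an approximation of the relative risk in an older population.
p_smoking = 0.15       # assumed prevalence of current smoking (illustrative)
rr = 2.15              # odds ratio reported in PUBMED:15834082
paf = p_smoking * (rr - 1) / (1 + p_smoking * (rr - 1))
total_cases = 190_000  # assumed number of AMD cases with visual loss (illustrative)
print(f"PAF = {paf:.1%}, attributable cases = {paf * total_cases:,.0f}")

With these placeholder inputs the attributable fraction is about 15%; applied to the study's real prevalence and case estimates, this is the kind of calculation that yields a figure on the order of the 28,000 cases cited above.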
Instruction: Do wine polyphenols modulate p53 gene expression in human cancer cell lines? Abstracts: abstract_id: PUBMED:11522280 Do wine polyphenols modulate p53 gene expression in human cancer cell lines? Background: The p53 gene is an established tumor suppressor and an inducer of apoptosis. We here attempt to determine whether the putative anticarcinogenic properties attributed to red wine and its polyphenolic constituents depend, at least in part, upon their ability to modulate p53 expression in cancer cells. Methods: Three human breast cancer cell lines (MCF-7, T47D; MDA-MB-486) and one human colon cancer cell line [Colo 320 HSR (+)] were treated for 24-h with each of four polyphenols [quercetin; (+)-catechin, trans-resveratrol; caffeic acid] at concentrations ranging from 10(-7) M to 10(-4) M, after which, p53 concentrations were measured in cell lysates by a time-resolved fluorescence immunoassay. Results: None of the polyphenols tested affected p53 expression in the breast cancer cell lines T-47D and MDA-MB-486. p53 content of MCF-7 breast cancer cells (wild-type) was increased by caffeic acid, decreased by resveratrol, and showed a twofold increase with catechin, that reached borderline statistical significance; however, none of these effects were dose-responsive. Colo 320 HSR (+) cells (with a mutant p53 gene) had lower p53 content upon stimulation, reaching borderline statistical significance, but without being dose-responsive, in the presence of caffeic acid and resveratrol. Apart from toxicity at 10(-4) M, quercetin had no effect upon these four cell lines. Conclusions: The observed p53 concentration changes upon stimulation by polyphenols are relatively small, do not follow a uniform pattern in the four cell lines tested, and do not exhibit a dose-response effect. For these reasons, we speculate that the putative anticarcinogenic properties of wine polyphenols are unlikely to be mediated by modulation of p53 gene expression. abstract_id: PUBMED:19943082 Selective proapoptotic activity of polyphenols from red wine on teratocarcinoma cell, a model of cancer stem-like cell. Cancer stem cells are expected to be responsible for tumor initiation and metastasis. These cells are therefore potential targets for innovative anticancer therapies. However, the absence of bona fide cancer stem cell lines is a real problem for the development of such approaches. Since teratocarcinoma cells are totipotent stem cells with a high degree of malignancy, we used them as a model of cancer stem cells in order to evaluate the anticancer chemopreventive activity of red wine polyphenols (RWPs) and to determine the underlying cellular and molecular mechanisms. We therefore investigated the effects of RWPs on the embryonal carcinoma (EC) cell line P19 which was grown in the same culture conditions as the most appropriate normal cell line counterpart, the pluripotent embryonic fibroblast cell line NIH/3T3. The present study indicates that RWPs selectively inhibited the proliferation of P19 EC cells and induced G1 cell cycle arrest in a dose-dependent manner. Moreover, RWPs treatment specifically triggered apoptosis of P19 EC cells in association with a dramatic upregulation of the tumor suppressor gene p53 and caspase-3 activation. Our findings suggest that the chemopreventive activity of RWPs on tumor initiation and development is related to a growth inhibition and a p53-dependent induction of apoptosis in teratocarcinoma cells. 
In addition, this study also shows that the EC cell line is a convenient source for studying the responses of cancer stem cells to new potential anticancer agents. abstract_id: PUBMED:31004736 Anti-cancer effects of polyphenols via targeting p53 signaling pathway: updates and future directions. The anticancer effects of polyphenols are ascribed to several signaling pathways including the tumor suppressor gene tumor protein 53 (p53). Expression of endogenous p53 is silent in various types of cancers. A number of polyphenols from a wide variety of dietary sources could upregulate p53 expression in several cancer cell lines through distinct mechanisms of action. The aim of this review is to focus the significance of p53 signaling pathways and to provide molecular intuitions of dietary polyphenols in chemoprevention by monitoring p53 expression that have a prominent role in tumor suppression. abstract_id: PUBMED:32934816 Cellular expression profiles of Epstein-Barr virus-transformed B-lymphoblastoid cell lines. Epstein-Barr virus (EBV) can infect human B cells and is associated with various types of B cell lymphomas. Studies on the global alterations of the cellular pathways mediated by EBV-induced B cell transformation are limited. In the present study, microarray analysis was performed following generation of two EBV-infected B-lymphoblastoid cell lines (BLCL), in which normal B cells obtained from two healthy Thai individuals and transcriptomic profiles were compared with their respective normal B cells. The two EBV-transformed BLCL datasets exhibited a high degree of similarity between their RNA expression profiles, whereas the two normal B-cell datasets did not exhibit the same degree of similarity in their RNA expression profiles. Differential gene expression analysis was performed, and the results showed that EBV infection was able to dysregulate several cellular pathways in the human B-cell genes involved in cancer and cell activation, such as the MAPK, WNT and PI3K-Akt signaling pathways, which were upregulated in the BLCL and were associated with increased cellular proliferation and immortalization of EBV-infected B cells. Expression of proteins located in the plasma membrane, which initiate a biological response to ligand binding, were also notably upregulated. Expression of genes involved in cell cycle control, the p53 signaling pathway and cellular senescence were downregulated. In conclusion, genes that were markedly upregulated by EBV included those involved in the acquisition of a tumorigenic phenotype of BLCL, which was positively correlated with several hallmarks of cancer. abstract_id: PUBMED:25572695 Anti-proliferative effects of polyphenols from pomegranate rind (Punica granatum L.) on EJ bladder cancer cells via regulation of p53/miR-34a axis. miRNAs and their validated miRNA targets appear as novel effectors in biological activities of plant polyphenols; however, limited information is available on miR-34a mediated cytotoxicity of pomegranate rind polyphenols in cancer cell lines. For this purpose, cell viability assay, Realtime quantitative PCR for mRNA quantification, western blot for essential protein expression, p53 silencing by shRNA and miR-34a knockdown were performed in the present study. EJ cell treatment with 100 µg (GAE)/mL PRE for 48 h evoked poor cell viability and caspase-dependent pro-apoptosis appearance. PRE also elevated p53 protein and triggered miR-34a expression. 
The c-Myc and CD44 were confirmed as direct targets of miR-34a in EJ cell apoptosis induced by PRE. Our results provide sufficient evidence that polyphenols in PRE can be potential molecular clusters to suppress bladder cancer cell EJ proliferation via p53/miR-34a axis. abstract_id: PUBMED:23145928 Tea polyphenols modulate antioxidant redox system on cisplatin-induced reactive oxygen species generation in a human breast cancer cell. Tea polyphenols (TPP) have potent antioxidant and anticancer properties, particularly in patients undergoing radiation or chemotherapy. However, few studies have been conducted on treatments using a combination of TPP and the conventional chemical anticancer drug cisplatin (CP). This study was designed to investigate the mechanism of the cytotoxicity of total TPP and CP, which may synergistically induce cell death in cancer cells. Here, breast cancer cells (MCF-7) were treated with various concentrations of TPP alone or in combination with the chemotherapeutic drug CP. The effect of TPP on cell growth, intracellular reactive oxygen species (ROS) level, apoptosis and gene expression of caspase-3, caspase-8 and caspase-9 and p53 was investigated. The MTT (3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay revealed that the MCF-7 cells were less sensitive to growth inhibition by TPP treatment than either CP or the combination therapy. Propidium iodide nuclear staining indicated that exposure to this combination increased the proportion of apoptotic nuclei compared with a single-agent treatment. Flow cytometry analysis was used to quantify changes in intracellular ROS. Detection of activated caspases by fluorescently labelled inhibitors of caspases (FLICA) combined with the plasma membrane permeability assay demonstrated that the percentage of early and late apoptotic/secondary necrotic cells was higher in the cells treated with the combination than in those treated with either TPP or CP alone. The combined TPP and CP treatment synergistically induced apoptosis through both caspase-8 and caspase-9 activation and p53 over-expression. This suggests that TPP plus CP may be used as an efficient antioxidant-based combination therapy for estrogen receptor (ER)-positive and p53-positive breast cancer. abstract_id: PUBMED:20369042 Primary cell lines: false representation or model system? a comparison of four human colorectal tumors and their coordinately established cell lines. Cultured cell lines have played an integral role in the study of tumor biology since the early 1900's. The purpose of this study is to evaluate colorectal cancer (CRC) cell lines with respect to progenitor tumors and assess whether these cells accurately and reliably represent the cancers from which they are derived. Primary cancer cell lines were derived from fresh CRC tissue. Tumorigenicity of cell lines was assessed by subcutaneous injection of cells into athymic mice and calculation of tumor volume after 3 weeks. DNA ploidy was evaluated by flow cytometry. Invasive ability of the lines was tested with the MATRIGEL invasion assay at 24 or 48 hours. Cells were assessed for the presence of Kirsten-Ras (K-Ras), p-53, deleted in colon cancer (DCC), and Src. Protein profiling of cells and tissue was performed by surface enhanced laser desorption/ionization-time of flight/mass spectroscopy. microRNA expression in cell and tumor tissue samples was evaluated by FlexmiR MicroRNA Assays. 
Four cell lines were generated from tumor tissue from patients with CRC and confirmed to be tumorigenic (mean tumor volume 158.46 mm(3)). Two cell lines were noted to be diploid; two were aneuploid. All cell lines invaded the MATRIGEL starting as early as 24 hours. K-Ras, p53, DCC, and Src expression were markedly different between cell lines and corresponding tissue. Protein profiling yielded weak-to-moderate correlations between cell samples and respective tissues of origin. Weak-to-moderate tau correlations for levels of expression of human microRNAs were found between cells and respective tissue samples for each of the four pairings. Although our primary CRC cell lines vary in their expression of several tumor markers and known microRNAs from their respective progenitor tumor tissue, they do not statistically differ in overall profiles. Characteristics are retained that deem these cell lines appropriate models of disease; however, data acquired through the use of cell culture may not always be a completely reliable representation of tumor activity in vivo. abstract_id: PUBMED:30179144 Determination of Vulpinic Acid Effect on Apoptosis and mRNA Expression Levels in Breast Cancer Cell Lines. Objective: Breast cancer is one of the most common diseases among women worldwide and it is characterized by a high ratio of malignancy and metastasis and low rate of survival of patients. Due to limited treatment options, the discovery of alternative therapeutic agents and clarifying the molecular mechanism of breast cancer development may offer new hope for its treatment. Lichen secondary metabolites may be one of these therapeutic agents. Methods: In this study, the effects of Vulpinic Acid (VA) lichen secondary metabolite on the cell viability and apoptosis of breast cancer cells and non-cancerous cell line were investigated. Quantitative polymerase chain reaction was also performed to determine changes in the expression of apoptosis-related genes at a molecular level. Results: The results demonstrated that VA significantly inhibited the cell viability and induced apoptosis of human breast cancer cells. The highest rates of decreased growth were determined using the IC50 value of VA for 48h on MCF-7 breast cancer cell. Interestingly, VA treatment significantly reduced cell viability in all examined breast cancer cell lines compared to their non-cancerous human breast epithelial cell line. This is the first study on the investigation of the effects of VA on the molecular mechanisms associated with the expression of apoptosis-related genes in breast cancer cell lines. Results demonstrated that the gene expression of P53 genes was altered up to fourteen-fold levels in SK-BR-3 cell lines whereas it reached 2.5-fold in the MCF-12A cell line after treatment with VA. These observations support that VA induces apoptosis on the breast cancer cells compared with the non-cancerous human breast epithelial cell line. Conclusion: It is implicated that VA may be a promising novel molecule for the induction of apoptosis on breast cancer cells. abstract_id: PUBMED:23209617 Expression of microRNAs in the NCI-60 cancer cell-lines. The NCI-60 panel of 60 human cancer cell-lines of nine different tissues of origin has been extensively characterized in biological, molecular and pharmacological studies. Analyses of data from such studies have provided valuable information for understanding cellular processes and developing strategies for the diagnosis and treatment of cancer. 
Here, Affymetrix® GeneChip™ miRNA version 1 oligonucleotide microarrays were used to quantify 847 microRNAs to generate an expression dataset of 495 (58.4%) microRNAs that were identified as expressed in at least one cell-line of the NCI-60 panel. Accuracy of the microRNA measurements was partly confirmed by reverse transcription and polymerase chain reaction assays. Similar to that seen among the four existing NCI-60 microRNA datasets, the concordance of the new expression dataset with the other four was modest, with mean Pearson correlation coefficients of 0.37-0.54. In spite of this, comparable results with different datasets were noted in clustering of the cell-lines by their microRNA expression, differential expression of microRNAs by the lines' tissue of origin, and correlation of specific microRNAs with the doubling-time of cells or their radiation sensitivity. Mutation status of the cell-lines for the TP53, PTEN and BRAF but not CDKN2A or KRAS cancer-related genes was found to be associated with changes in expression of specific microRNAs. The microRNA dataset generated here should be valuable to those working in the field of microRNAs as well as in integromic studies of the NCI-60 panel. abstract_id: PUBMED:32878786 CDKN1A Gene Expression in Two Multiple Myeloma Cell Lines With Different P53 Functionality. Background/aim: Multiple myeloma is a highly heterogeneous disease of clonal plasma cells. Histone deacetylase (HDAC) inhibitors are promising anticancer drugs but their precise mechanisms of actions are not well understood. Materials And Methods: Cell-cycle regulation and pro-apoptotic effects of two histone deacetylase inhibitors, suberohydroxamic acid (SAHA) and suberoylanilide hydroxamic acid (SBHA), were analyzed in multiple myeloma cell lines RPMI8226 and U266 with differing TP53 status using gene-expression analysis. Results: Enhanced expression of cyclin-dependent kinase inhibitor 1A (CDKN1A/p21WAF/CIP1) detected in the TP53-deleted U266 cell line after SAHA treatment indicates the P53-independent mode of transcriptional activation of CDKN1A gene. In contrast, CDKN1A gene expression was significantly increased by both SBHA and SAHA treatment of TP53-mutated RPMI8226 cells. Conclusion: SAHA appears to be a potentially effective pro-apoptotic and anticancer drug with universal application in the treatment of heterogeneous populations of multiple myeloma cells. Answer: The effect of wine polyphenols on p53 gene expression in human cancer cell lines has been investigated, but the results are not entirely conclusive. One study found that different polyphenols from red wine had varying effects on p53 expression in different cancer cell lines. For instance, caffeic acid increased p53 content in MCF-7 breast cancer cells, resveratrol decreased it, and catechin showed a twofold increase that was borderline statistically significant. However, these effects were not dose-responsive and did not follow a uniform pattern across the four cell lines tested, leading to speculation that the anticarcinogenic properties of wine polyphenols are unlikely to be mediated by modulation of p53 gene expression (PUBMED:11522280). In contrast, another study indicated that red wine polyphenols (RWPs) selectively inhibited the proliferation of P19 embryonal carcinoma cells, a model of cancer stem-like cells, and induced apoptosis in association with a dramatic upregulation of the tumor suppressor gene p53 (PUBMED:19943082). 
This suggests that RWPs may have a chemopreventive activity related to growth inhibition and p53-dependent induction of apoptosis in certain cancer cell types. Furthermore, a review highlighted that various polyphenols could upregulate p53 expression in several cancer cell lines through different mechanisms, emphasizing the significance of the p53 signaling pathway in the chemopreventive action of dietary polyphenols (PUBMED:31004736). In summary, while some studies suggest that wine polyphenols can modulate p53 gene expression in certain human cancer cell lines, the effects are not consistent across all cell lines or polyphenols, and the mechanisms of action may vary. Therefore, the relationship between wine polyphenols and p53 modulation in cancer cells remains an area for further research.
Instruction: The impact of health care reform on gastroenterology fellows: are training programs preparing them for the future? Abstracts: abstract_id: PUBMED:9517633 The impact of health care reform on gastroenterology fellows: are training programs preparing them for the future? American College of Gastroenterology Educational Affairs Subcommittee on Training. Objective: Health care reform is dramatically changing the practice and delivery of medical care. The goal of this investigation was to examine gastroenterology trainees' outlook on the impact of health care reform on training programs. Methods: A 24-question survey was mailed in February 1996 to 780 GI fellows obtained from the comprehensive American College of Gastroenterology (ACG) database. Results: A total of 362 fellows responded (46%): 85% were male, 57% Caucasian, 75% married, and 86% were university-based. Ninety-six percent of fellows believed that health care reform is adversely affecting the quality of health care and 94.1% felt that it was adversely affecting fellowship training. Eighty-eight percent expressed concern over the impact of health care reform on practice opportunities. Only 9% of fellows reported that their training program had established a specific educational program addressing health care reform, whereas 83% of fellows felt that their program should do so. Conclusion: Gastroenterology fellows are concerned about the impact of health care reform on the quality of care and the quality of their fellowship training. Trainees believe that programs are not providing sufficient education to help them respond to the changes in health care. abstract_id: PUBMED:22706991 Supreme Court review of the Affordable Care Act: the future of health care reform and practice of gastroenterology. After decades of failed attempts to enact comprehensive health care reform, President Obama signed the Patient Protection and Affordable Care Act into law on March 23, 2010. The Affordable Care Act (ACA) has been regarded as the most significant piece of domestic policy legislation since the establishment of Medicare in 1965. The ACA would cover an estimated 32 of the 50 million uninsured Americans by expanding Medicaid, providing subsidies to lower income individuals, establishing health insurance exchanges, and restricting insurance companies from excluding patients from coverage. The ACA also includes many payment and health care delivery system reforms intended to improve quality of care and control health care spending. Soon after passage of the ACA, numerous states and interest groups filed suits challenging its legality. Supreme Court consideration was requested in five cases and the Supreme Court selected one case, brought by 26 states, for review. Oral arguments were heard this spring, March 26-28. The decision will have far reaching consequences for health care in America and the practice of gastroenterology for decades to come. This article reviews the four major issues before the Supreme Court and implications for health care reform and future practice of gastroenterology. Payment reforms, increased accountability, significant pressures for cost control, and new care delivery models will significantly change the future practice of gastroenterology. With these challenges however is a historic opportunity to improve access to care and help realize a more equitable, sustainable, and innovative health care system. abstract_id: PUBMED:8027482 Master's degree nursing education and health care reform: preparing for the future. 
Current and anticipated changes in health care delivery indicate a need for reform within nursing education. Master's degree education can be a valuable component in the preparation of future nurses, but assessment and revision of existing programs are necessary. The American Association of Colleges of Nursing's position paper, Nursing Education's Agenda for the 21st Century, provides recommendations for educational reform in general and for master's degree education specifically. Overall recommendations include greater focus on the development of unique aspects in each school's mission, emphasis on nursing as a practice discipline, and the inclusion of all aspects--content, processes, and outcomes--in curricular revisions. Master's degree education is reaffirmed as preparation for those who will advance practice. In keeping with health care delivery trends, advanced practice nurses will require substantial expertise in health promotion, primary health care, case management, health care economics, and change strategies. Many questions remain unanswered regarding appropriate future directions for master's degree education. There is little consensus on core knowledge or a single appropriate title for advanced practice nurses. The amount and type of research preparation, and the need for role preparation are other controversial issues. The future holds exciting potential, but there will be significant challenges in program revisioning, faculty redevelopment and clarification of goals and methods for master's degree education. abstract_id: PUBMED:22099716 The impact of health reform on gastroenterology reimbursement. The budgetary impact of the cost of health care on the United States economy is far-reaching. An understanding of the provisions in the Affordable Care Act is essential to preparing one's practice to proactively deal with a rapidly changing and evolving system whereby local, regional, and national actions are affecting the ability of clinicians to maintain success on a daily basis. abstract_id: PUBMED:22099708 Health care reform: 2012 update. The recent landmark health care reform legislation seeks to expand health insurance coverage, change incentives, and improve the quality and flow of information. This article reviews the elements of health care reform most relevant to clinical gastroenterology, discusses the ongoing challenges that health care reform legislation faces, and considers the potential implications for clinical practice. abstract_id: PUBMED:7760962 Advanced practice nursing after health care reform. The purpose of this article is to review the literature on the role of advanced practice nurses in the post health care reform environment. The effect of the failed health care reform initiative, the impact of managed care, and the composition of patient populations and the work force in the future are described. abstract_id: PUBMED:10138702 Health care reform and long-term care pharmacy. "Waiting and preparing" describes the health care reform activity of most of the larger long-term care pharmacy providers. The complexity of this service sector, in combination with the potentially enormous cost and actuarial uncertainty, seems to have averted major federal action at this time. The frenzy caused by the reform debate, however, has precipitated several changes at the state level that may not be as prudent. abstract_id: PUBMED:8610187 Working on reform. How workers' compensation medical care is affected by health care reform. 
The medical component of workers' compensation programs-now costing over $24 billion annually-and the rest of the nation's medical care system are linked. They share the same patients and providers. They provide similar benefits and services. And they struggle over who should pay for what. Clearly, health care reform and restructuring will have a major impact on the operation and expenditures of the workers' compensation system. For a brief period, during the 1994 national health care reform debate, these two systems were part of the same federal policy development and legislative process. With comprehensive health care reform no longer on the horizon, states now are tackling both workers' compensation and medical system reforms on their own. This paper reviews the major issues federal and state policy makers face as they consider reforms affecting the relationship between workers' compensation and traditional health insurance. What is the relationship of the workers' compensation cost crisis to that in general health care? What strategies are being considered by states involved in reforming the medical component of workers compensation? What are the major policy implications of these strategies? abstract_id: PUBMED:22331478 Health care reform: preparing the psychology workforce. This article is based on the opening presentation by the author to the Association of Psychologists in Academic Health Centers' 5th National Conference, "Preparing Psychologists for a Rapidly Changing Healthcare Environment" held in March, 2011. Reviewing the patient protection and affordable care act (ACA), that presentation was designed to set the stage for several days of symposia and discussions anticipating upcoming changes to the healthcare system. This article reviews the ACA; general trends that have impacted healthcare reform; the implications of the Act for psychology's workforce including the growing focus on interprofessional education, training, and practice, challenges to address in order to prepare for psychology's future; and recommendations for advocating for psychology's future as a healthcare profession. abstract_id: PUBMED:8244247 Health care reform and rural health networks. Health care reform is likely to raise unique issues for rural communities and providers. This paper identifies and discusses several of these issues, with a particular focus on the potential relationship between health care reform and rural health networks. Topics addressed include the likely impact of health reform on the organization and development of rural health networks, the reimbursement of rural providers, rural medical practice, and state roles in the organization, delivery, and oversight of rural health care. Answer: The impact of health care reform on gastroenterology fellows has raised concerns regarding the quality of health care and fellowship training. A survey conducted in February 1996, which included 362 gastroenterology fellows, revealed that a vast majority (96%) believed that health care reform is adversely affecting the quality of health care, and 94.1% felt that it was adversely affecting fellowship training. Additionally, 88% of the fellows expressed concern over the impact of health care reform on practice opportunities. Despite these concerns, only 9% of fellows reported that their training program had established a specific educational program addressing health care reform, while 83% felt that their program should do so (PUBMED:9517633). 
The Affordable Care Act (ACA), signed into law on March 23, 2010, is considered the most significant piece of domestic policy legislation since the establishment of Medicare in 1965. The ACA aims to cover an estimated 32 of the 50 million uninsured Americans and includes payment and health care delivery system reforms intended to improve the quality of care and control health care spending. These reforms will significantly change the future practice of gastroenterology, presenting both challenges and opportunities to improve access to care and create a more equitable, sustainable, and innovative health care system (PUBMED:22706991). In light of these changes, it is essential for gastroenterology training programs to prepare fellows for the evolving health care landscape. This includes understanding the implications of health care reform for clinical practice (PUBMED:22099708), the impact on reimbursement (PUBMED:22099716), and the broader context of health care delivery and policy (PUBMED:7760962, PUBMED:10138702, PUBMED:8610187, PUBMED:22331478, PUBMED:8244247). Training programs need to incorporate education on health care economics, change strategies, and new care delivery models to ensure that gastroenterology fellows are equipped to navigate the future of health care effectively.
Instruction: Whether Post-Tonsillectomy Medication Should be Liquid Based or Can be Solid? Abstracts: abstract_id: PUBMED:26742388 Whether Post-Tonsillectomy Medication Should be Liquid Based or Can be Solid? A Randomised, Single-Blinded, Controlled Trial. Background: Tonsillectomy is a common operation for the otolaryngologist. There has been a discrepancy in recommending prescribing a liquid based or solid medication for post-tonsillectomy patients. Objective: To compare the pain scores, adverse effects and complications between the post-tonsillectomy patients who were given liquid medication in comparison with patients who were given non-restricted medication. Material And Method: Patients with chronic hypertrophic tonsillitis who underwent tonsillectomy were recruited. In the control group, patients were given liquid medication. The experimental group was given a non-restricted form of medication. Pain scores, adverse effects and complications and patient satisfaction data were collected. Results: Twenty-six patients were enrolled. The pain score difference between the 2 groups at 4 hours was -0.23 (95% CI -1.57 to 1.11, p = 0.73) and 0.15 (95% CI -0.77 to 1.08, p = 0.73) at 72 hours. There was no statistically significant difference between the early and late complications between the control group and the experimental group (p > 0.05). Conclusion: There was no statistical difference in the pain scores, adverse effects and complications between groups. There is no necessity to restrict patients to liquid medication. abstract_id: PUBMED:36742880 Role of Pre-incisional Bupivacaine Infiltration in Post-tonsillectomy Analgesia. Tonsillectomy is one of the common surgeries performed by Otorhinolaryngologists and is associated with several morbidities, with pain being the commonest, which can cause considerable delay in oral intake and discharge from the hospital. As a commonly performed day care procedure nowadays, pain control is much better than previously observed. Therefore newer drugs are being constantly studied in order to give better analgesia and post operative comfort to the patient with minimal side effects. The main obstacle is finding the best medical method to control pain with minimum side effects, while at the same time making sure that the patient is adequately hydrated and resumes regular eating as soon as possible. Our aim is to study the role of pre-incisional 0.5% bupivacaine versus normal saline infiltration in post-tonsillectomy analgesia. Over a period of 1 year, 30 patients in two groups of 15 were compared for the efficacy of 0.5% bupivacaine and 0.9% normal saline in post-operative tonsillectomy pain management. After thorough clinical examination and investigations, all patients underwent tonsillectomy by the dissection and snare method. After intubation, 0.5% bupivacaine or normal saline was infiltrated into the tonsillar fossa and pain scores were obtained using the Visual Analogue Scale (V.A.S) at 6, 12 and 24 h post operatively. Using the Mann-Whitney non-parametric statistical test, inter-group analysis was done, which showed a highly significant p-value (<0.0001), indicating that pre-incisional bupivacaine infiltration is highly effective in reducing post-tonsillectomy pain. Hence, we recommend the routine use of pre-incisional peritonsillar infiltration of 0.5% bupivacaine in all tonsillectomy/adenotonsillectomy cases, irrespective of the age of the patient, to reduce post-tonsillectomy pain and the other discomfort associated with it.
abstract_id: PUBMED:29805060 Effect of paracetamol/prednisolone versus paracetamol/ibuprofen on post-operative recovery after adult tonsillectomy. Objective: To compare the effect of Paracetamol/Prednisolone versus Paracetamol/Ibuprofen on post-operative recovery after adult tonsillectomy. Background: Various analgesic protocols have been proposed for the control of post-tonsillectomy morbidity, with a need for better control in the adult population, which has a higher severity of post-operative pain and risk of secondary post-tonsillectomy bleeding. Methods: This is a prospective cohort study conducted on 248 patients aged 12 years or older, distributed into two equal groups; the first one receiving Paracetamol/Prednisolone and the second one receiving Paracetamol/Ibuprofen. Both groups were compared at 7 days post-operative regarding pain at rest, tiredness of speech, dietary intake, and decrease in sleep duration. Both groups were compared regarding incidence of nausea and vomiting at 2 days post-operative. The incidence and severity of secondary post-tonsillectomy hemorrhage were compared between the two groups. Results: Pain at rest (no swallowing - no talking) was less in group I but did not reach statistical significance (p = 0.36). In addition, dietary intake was better in group I but did not reach statistical significance (P = 0.17). However, talking ability was better with a statistically significant difference (P = 0.03) in group I. Impairment of sleep was less with group II but did not reach statistical significance (p = 0.31). The incidence of vomiting at the second post-operative day was less in group I with statistical significance (p = 0.049). The incidence of secondary post-tonsillectomy bleeding was significantly higher in group II with statistical significance (p = 0.046). The severity of bleeding episodes was also significantly higher in group II (p = 0.045). Conclusion: Both ibuprofen and prednisolone were effective as part of the post-operative medication regimen after adult tonsillectomy. However, prednisolone was superior to ibuprofen regarding improvement of pain at rest, dietary intake, tiredness of speech and post-operative nausea and vomiting. However, ibuprofen had a better impact on sleep. The incidence and severity of secondary post-tonsillectomy hemorrhage were significantly higher with ibuprofen, favoring the selection of prednisolone to be combined with paracetamol in the post-operative medication protocol following tonsillectomy. abstract_id: PUBMED:24303442 The effect of local injection of epinephrine and bupivacaine on post-tonsillectomy pain and bleeding. Introduction: Tonsillectomy is one of the most common surgeries in the world and the most common problem is post-tonsillectomy pain and bleeding. The relief of postoperative pain helps increase early food intake and prevent secondary dehydration. One method for relieving pain is peritonsillar injection of epinephrine along with an anesthetic, which has been shown to produce variable results in previous studies. Study Design: Prospective case-control study. Setting: A tertiary referral center with accredited otorhinolaryngology-head & neck surgery and anesthesiology departments. Materials And Methods: Patients under 15 years old, who were tonsillectomy candidates, were assigned to one of three groups: placebo injection, drug injection before tonsillectomy, and drug injection after tonsillectomy.
The amount of bleeding, intensity of pain, and time of first post-operative food intake were evaluated during the first 18 hours post operation. Results: The intensity of pain in the first 30 minutes after the operation was lower in the patients who received injections, but the difference was not significant during the first 18 hours. The intensity of pain on swallowing during the first 6 hours was also lower in the intervention groups as compared with the placebo group. The amount of bleeding during the first 30 minutes post operation was lower in the two groups who received injections, but after 30 minutes there was no difference. Conclusion: Injection of epinephrine and bupivacaine pre- or post- tonsillectomy is effective in reducing pain and bleeding. The treatment also decreases swallowing pain in the hours immediately after surgery. abstract_id: PUBMED:30611028 Does suturing tonsil pillars post-tonsillectomy reduce postoperative hemorrhage?: A literature review. Objective: Literature review comparing post-tonsillectomy hemorrhage in pediatric and adult patients with and without suturing tonsil pillars to investigate whether suturing tonsil pillars reduces the risk of post-tonsillectomy hemorrhage. Review Methods: Online journal databases were searched using the key phrases "post tonsillectomy hemorrhage", "post tonsillectomy bleed", and "tonsil pillar suture". 10 published studies were found regarding tonsil pillar suturing, four directly related to postoperative bleeding and five focusing on postoperative pain reduction. There was one study that evaluated both pain and bleeding. The pain reduction studies were comprised of 225 patients while the postoperative bleeding studies included 3987 patients. Conclusions: Suturing tonsil pillars after tonsillectomy may be beneficial after cold tonsillectomy. Implications For Practice: Post-operative bleeding is one of the most common complications that can result in increased patient distress and hospitalization. In this article, we provide a literature review of tonsil pillar suturing and post-tonsillectomy hemorrhage. Our study suggests suturing the tonsil pillars immediately post-tonsillectomy may reduce the risk of severe post-operative bleeding requiring return to the operating room for certain patients. abstract_id: PUBMED:37974868 Effect of Body Mass Index on Post Tonsillectomy Hemorrhages. Aims: Obesity affects adverse outcomes in patients undergoing various surgeries. The study was carried out to assess the clinical association between body mass index and post tonsillectomy hemorrhages. Materials And Methods: This prospective study was carried out on 60 patients, age between 5 and 40 years, admitted in Department of ENT with chronic tonsillitis. Body mass index and post tonsillectomy hemorrhage were evaluated in all patients who underwent surgery. Bleeding episode were categorized according to the Austrian tonsil study. Results: This prospective study was carried out on 60 patients (adults and children), between December 2021 and November 2022. All patients underwent tonsillectomy under general anaesthesia. It was seen that most of the patients did not have any significant bleeding i.e., Grade A1 (Dry, no clot), and A2 (Clot, but no active bleeding after clot removal) whereas 4 patients (6.7%) had Grade B1 post tonsillectomy hemorrhage (Minimal bleeding requiring minimal intervention by vasoconstriction using adrenaline swab). Post tonsillectomy hemorrhage was seen more in adults. 
Post tonsillectomy bleeding of Grade B1 was recorded in 28.6% of underweight patients and 8% of normal weight patients, while no significant bleeding occurred in any of the overweight and obese patients (p-value 0.256). Conclusion: Overweight and obesity (higher BMI) did not increase the risk of post tonsillectomy hemorrhage in either children or adults. abstract_id: PUBMED:29314032 Reducing rates of operative intervention for pediatric post-tonsillectomy hemorrhage. Objectives/hypothesis: The aims of this study were to determine the frequency of rebleeding in patients admitted for observation after presentation for nonactive hemorrhage in the post-tonsillectomy period, compare rebleeding rates between patients managed with observation versus initial operative control, and describe the complication profile associated with observation as a management strategy for post-tonsillectomy bleeding. Study Design: Case series with retrospective review of patients. Methods: Patients presenting from September 1, 2013 to August 31, 2015 for post-tonsillectomy hemorrhage to a tertiary pediatric care center were evaluated for inclusion in the study. Inclusion criteria included patients ≤18 years of age without active bleeding at the time of the initial examination. Proportions were compared using χ² and Fisher exact tests, whereas continuous data were compared using the Wilcoxon rank sum test. Results: Of 3,866 tonsillectomy patients, 285 (7.4%) presented with concern for oropharyngeal bleeding in the postoperative period, of whom 224 were admitted for nonactive bleeding. Of patients with nonactive bleeding, 203 (90.6%) were managed with observation and 21 (9.4%) with operative intervention. Rate of rebleeding was 26/203 (12.8%) after inpatient observation and 3/21 (14.3%) after operative intervention (P = 1.000). Frequency of rebleeding requiring operative control in patients undergoing initial observation was 14/203 (6.9%). Conclusions: In our pilot study, rates of rebleeding in patients observed for nonactive post-tonsillectomy hemorrhage were not statistically different from those managed with initial operative exploration. Although preliminary in nature, our data suggest observation may have comparable safety and efficacy when compared to operative management for pediatric patients presenting with nonactive post-tonsillectomy bleeding. Further data collection to establish an optimal management algorithm is ongoing. Level Of Evidence: 4 Laryngoscope, 1958-1962, 2018. abstract_id: PUBMED:37842537 New parameters measured via preoperative tonsil photos to evaluate the post-tonsillectomy pain: an analysis assisted by machine learning. Background: Postoperative pain is the most common complication after tonsillectomy. We aimed to explore new parameters related to post-tonsillectomy pain, as well as to construct and validate a model for the preoperative evaluation of patients' risk for postoperative pain. Methods: Data collected from patients who underwent tonsillectomy by the same surgeon at Beijing Chaoyang Hospital from January 2019 to May 2022 were analyzed. Preoperative tonsil images from all patients were taken, and the ratios of the distance between the upper pole of the tonsil and the base of the uvula (L1 for the left side and R1 for the right side) to the width of the uvula (U1) or the length of the uvula (U2) were measured. The following six ratios were calculated: L1/U1, R1/U1, LR1/U1 (the sum of L1 and R1 divided by U1), L1/U2, R1/U2, LR1/U2 (the sum of L1 and R1 divided by U2).
The post-tonsillectomy pain was recorded. In addition, a machine learning (ML) algorithm and feature importance analysis were used to evaluate the value of the parameters. Results: A total of 100 patients were involved and divided into the training set (60%) and the validation set (40%). All six parameters are negatively correlated with post-tonsillectomy pain. The accuracy, sensitivity, and specificity of the model were 75.0%, 72.7%, and 77.8%, respectively. LR1/U1 and LR1/U2 are the most valuable parameters to evaluate post-tonsillectomy pain. Conclusions: We have discovered new parameters that can be measured using preoperative tonsil images to evaluate post-tonsillectomy pain. ML models based on these parameters could predict whether these patients will have intolerable pain after tonsillectomy and manage it promptly. abstract_id: PUBMED:28583496 The effect of perioperative dexamethasone dosing on post-tonsillectomy hemorrhage risk. Objectives: Dexamethasone is currently recommended for routine prophylaxis against postoperative nausea and vomiting after tonsillectomy procedures. However, some studies have raised concern that dexamethasone use may lead to higher rates of post-tonsillectomy hemorrhage. Our objective was to determine whether higher doses of dexamethasone administered perioperatively during tonsillectomy procedures are associated with an increased risk of secondary post-tonsillectomy hemorrhage. Methods: We conducted a retrospective review of 9843 patients who underwent tonsillectomy and received dexamethasone at our institution from January 2010 to October 2014. We compared the dose of dexamethasone administered to patients who did and did not develop secondary post-tonsillectomy hemorrhage using Mann-Whitney U tests. Multivariable logistic regression models were used to evaluate the association between dexamethasone dose and post-tonsillectomy hemorrhage after adjustment for demographic and clinical characteristics. Results: A total of 280 (2.8%) patients developed secondary post-tonsillectomy hemorrhage. Patients who developed hemorrhage tended to be older (median (interquartile range) 7 (4-11) vs. 5 (3-8) years, p < 0.001) and had undergone tonsillectomy more often for chronic tonsillitis but less often for tonsillar or adenotonsillar hypertrophy or sleep disturbances. Dexamethasone dose was significantly lower on average in patients who experienced secondary post-tonsillectomy hemorrhage (median (interquartile range) 0.19 (0.14, 0.23) mg/kg vs. 0.21 (0.17, 0.30), p < 0.001). Multivariable modeling demonstrated that the dose of dexamethasone was not significantly associated with post-tonsillectomy hemorrhage after adjustment for age. Conclusions: There does not appear to be a dose-related increase in the risk of post-tonsillectomy hemorrhage for patients receiving dexamethasone during tonsillectomy procedures. abstract_id: PUBMED:30641307 Role of antibiotics in post-tonsillectomy morbidities; A systematic review. Objective: To evaluate the role of postoperative antibiotics on post-tonsillectomy morbidities. Study Design: Systematic Review. Methods: Published papers and electronic databases (Medline, Web of Science, Embase) were searched from January 1985 up to March 2016 using the following key words in different combinations: Tonsil; Tonsillectomy; Post-tonsillectomy; Adenotonsillectomy; Antibiotics; Post-tonsillectomy morbidity; Bleeding; Secondary Hemorrhage.
Twelve randomized control clinical trials fit the inclusion criteria and were included in the meta-analysis. We evaluated 5 outcomes, hemorrhage, return to normal diet, return to normal activities, fever and pain. Results: As regards secondary hemorrhage pooled analysis of 1397 patients revealed a relative risk (risk ratio, RR) of 1.052 with a 95% confidence interval (95% CI) of 0.739-1.497 (P-value, 0.779). As for return to normal diet pooled analysis of 527 patients showed a standardized mean difference (SMD) of -0.058 day with 95% CI of -0.233 to 0.118 (P-value, 0.518). As for return to normal activities pooled analysis of 257 patients showed a SMD of -0.014 day with a 95% CI of -0.258 to 0.230 (P-value, 0.908). As for Fever pooled analysis of 656 patients revealed a relative risk of 1.265 with 95% CI of 0.982-1.629 (P-value, 0.068). Finally for the postoperative pain due to the variability in the parameters used to assess the pain following tonsillectomy, we could not perform meta-analysis for this outcome. Conclusion: The results of this study fail to support clear evidence to use routinely post-operative antibiotics to reduce post-tonsillectomy morbidities. Answer: Based on the findings from a randomised, single-blinded, controlled trial, there is no statistical difference in pain scores, adverse effects, and complications between post-tonsillectomy patients who were given liquid medication and those who were given non-restricted (solid) medication. Therefore, it is not necessary to restrict patients to liquid medication after a tonsillectomy (PUBMED:26742388).
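A note on the pooled estimates quoted in the antibiotics meta-analysis above: a relative risk of 1.052 with a 95% CI of 0.739-1.497 is computed on the log scale, and an interval that spans 1.0 is what the non-significant P-value reflects. The sketch below illustrates that arithmetic only; it is not the authors' analysis code, and the standard error it uses is a back-calculated assumption chosen to approximately reproduce the quoted interval.

```python
import math

def rr_confidence_interval(rr: float, se_log_rr: float, z: float = 1.96):
    """Confidence interval for a relative risk (95% with the default z).

    Pooled relative risks are combined on the log scale, so the interval
    is built around log(RR) and exponentiated back to the ratio scale.
    """
    log_rr = math.log(rr)
    return math.exp(log_rr - z * se_log_rr), math.exp(log_rr + z * se_log_rr)

# Assumed standard error of about 0.18 on the log scale (back-calculated,
# not reported in the abstract); it roughly reproduces the quoted
# 0.739-1.497 interval around the pooled RR of 1.052 for secondary hemorrhage.
print(rr_confidence_interval(1.052, 0.18))
```

Because the interval contains 1.0, the pooled data are compatible with no effect of routine postoperative antibiotics on secondary hemorrhage, which is the basis of the review's conclusion.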
Instruction: Does a prediction model for pregnancy of unknown location developed in the UK validate on a US population? Abstracts: abstract_id: PUBMED:20716562 Does a prediction model for pregnancy of unknown location developed in the UK validate on a US population? Background: A logistic regression model (M4) was developed in the UK to predict the outcome for women with a pregnancy of unknown location (PUL) based on the initial two human chorionic gonadotrophin (hCG) values, 48 h apart. The purpose of this paper was to assess the utility of this model to predict the outcome for a woman (PUL) in a US population. Methods: Diagnostic variables included log-transformed serum hCG average of two measurements, and linear and quadratic hCG ratios. Outcomes modeled were failing PUL, intrauterine pregnancy (IUP) and ectopic pregnancy (EP). This model was applied to a US cohort of 604 women presenting with symptomatic first-trimester pregnancies, who were followed until a definitive diagnosis was made. The model was applied before and after correcting for differences in terminology and diagnostic criteria. Results: When retrospectively applied to the adjusted US population, the M4 model demonstrated lower areas under the curve compared with the UK population, 0.898 versus 0.988 for failing PUL/spontaneous miscarriage, 0.915 versus 0.981 for IUP and 0.831 versus 0.904 for EP. Whereas the model had 80% sensitivity for EP using UK data, this decreased to 49% for the US data, with similar specificities. Performance only improved slightly (55% sensitivity) when the US population was adjusted to better match the UK diagnostic criteria. Conclusions: A logistic regression model based on two hCG values performed with modest decreases in predictive ability in a US cohort for women at risk for EP compared with the original UK population. However, the sensitivity for EP was too low for the model to be used in clinical practice in its present form. Our data illustrate the difficulties of applying algorithms from one center to another, where the definitions of pathology may differ. abstract_id: PUBMED:32538482 External validation of risk prediction model M4 in an Australian population: Rationalising the management of pregnancies of unknown location. Background: The prediction model M4 can successfully classify pregnancy of unknown location (PUL) into a low- or high-risk group in developing ectopic pregnancy. M4 was validated in UK centres but in very few other countries outside UK. Aim: To validate the M4 model's ability to correctly classify PULs in a cohort of Australian women. Materials And Methods: A retrospective analysis of women classified with PUL, attending a Sydney-based teaching hospital between 2006 and 2018. The reference standard was the final characterisation of PUL: failed PUL (FPUL) or intrauterine pregnancy (IUP; low risk) vs ectopic pregnancy (EP) or persistent PUL (PPUL; high risk). Each patient was entered into the M4 model calculator and an estimated risk of FPUL/IUP or EP/PPUL was recorded. Diagnostic accuracy of the M4 model was evaluated. Results: Of 9077 consecutive women who underwent transvaginal sonography, 713 (7.9%) classified with a PUL. Six hundred and seventy-seven (95.0%) had complete study data and were included. Final outcomes were: 422 (62.3%) FPULs, 150 (22.2%) IUPs, 105 (15.5%) EPs and PPULs. The M4 model classified 455 (67.2%) as low-risk PULs of which 434 (95.4%) were FPULs/IUPs and 21 (4.6%) were EPs or PPULs. 
EPs/PPULs were correctly classified with sensitivity of 80.0% (95% CI 71.1-86.5%), specificity of 75.9% (95% CI 72.2-79.3%), positive predictive value of 37.8% (95% CI 33.8-42.1%) and negative predictive value of 95.3% (95% CI 93.1-96.9%). Conclusions: We have externally validated the prediction model M4. It classified 67.2% of PULs as low risk, of which 95.4% were later characterised as FPULs or IUPs while still classifying 80.0% of EPs as high risk. abstract_id: PUBMED:36043136 Triaging Women with Pregnancy of Unknown Location: Evaluation of Protocols Based on Single Serum Progesterone, Serum hCG Ratios, and Model M4. Background: The purpose of the current study was to evaluate the ability of three protocols to triage women presenting with pregnancy of unknown location (PUL). Methods: Women with pregnancy of unknown location were recruited from Aziz Medical Centre from 1st August, 2018 to 31st July, 2020. The criterion of progesterone, human chorionic gonadotrophin (hCG) ratio, and M4 algorithm were used to predict risk of adverse pregnancy outcomes and classify women. Finally, 3 groups were established including ectopic pregnancy, failed pregnancy of unknown location, and intrauterine pregnancy (IUP). The primary outcome was to assign women to the ectopic pregnancy group using these protocols. The secondary outcome was to compare the sensitivity and specificity of the three protocols relative to the final outcome. Results: Of the 288 women, 66 (22.9%) had ectopic pregnancy, 144 (50.0%) had intrauterine pregnancy, and 78 (27.1%) had failed pregnancy of unknown location. The criterion of progesterone had a sensitivity of 81.8%, specificity of 27%, negative predictive value (NPV) of 83.3%, and positive predictive value (PPV) of 25% for a high risk result (ectopic pregnancy). The hCG ratio had sensitivity of 72%, specificity of 73%, NPV of 90%, and PPV of 44% for a high risk result (ectopic pregnancy). However, model M4 had sensitivity of 86.4%, specificity of 91.9%, NPV of 95.8%, and PPV of 76% for a high risk result. Conclusion: Based on the findings of the study, it was revealed that the M4 prediction model had the highest sensitivity, specificity, negative predictive value and positive predictive value for a high risk result (ectopic pregnancy). abstract_id: PUBMED:34800739 Retrospective validation of a model to predict the outcome of pregnancies of unknown location. Objective: The prediction model M6 classifies pregnancy of unknown location (PUL) into a low-risk or a high-risk group for developing ectopic pregnancy (EP). The aim of this study was to validate the two-step M6 model's ability to classify PUL in French women. Material And Methods: All women with a diagnosis of PUL over a year were included in this single center retrospective study. Patients with a diagnosis of EP at the first consultation or with incomplete data were excluded. For each patient, the M6 model calculator was used to classify them into the "high risk of EP" and "low risk of EP" groups. The reference standard was the final diagnosis: failed PUL (FPUL), intrauterine pregnancy (IUP) or EP. The statistical measures of the test's performance were calculated. Results: Over the period, 255 women consulted for a PUL and 197 were included in the study. Final diagnoses were: 94 FPUL (94/197; 47.7%), 74 IUP (74/197; 37.6%) and 29 EP (29/197; 14.7%). The first step of the M6 model classified 16 women in the FPUL group, of which 15 (15/16; 93.7%) were correctly classified.
The second step of the M6 model classified 181 women: 90 (90/181; 49.7%) in the "high risk of EP" group, of which 63 (63/90; 70%) were FPUL/IUP and 27 (27/90; 30%) were EP. 91 (91/181; 50.3%) were classified in the "low risk of EP" group, of which 90 (90/91; 98.9%) were FPUL/IUP and 1 (1/91; 1.1%) was an EP. EPs were correctly classified with sensitivity of 96.4%, negative predictive value of 98.9%, specificity of 58.8% and positive predictive value of 30.0%. Conclusions: The M6 prediction model classified EPs into the "high risk of EP" group with a sensitivity of 96.4%. It classified 50.3% of PUL into a "low risk of EP" group with a negative predictive value of 98.9%. abstract_id: PUBMED:32985693 How do the M4 and M6 models perform in an Australian pregnancy of unknown location population? Background: The diagnosis of a pregnancy of unknown location (PUL) is made when there is an elevated serum β human chorionic gonadotropin (βhCG) and no pregnancy on transabdominal and transvaginal ultrasound. Most of these pregnancies end as intra-uterine pregnancies or unsuccessful pregnancies and can be safely managed expectantly. However, up to 20% of these women will have an ectopic pregnancy. Several mathematical models, including the M4 and M6 protocols, have been developed using biochemical markers to triage PUL presentations. This rationalises the number of tests and visits made without compromising safety and allowing timely intervention. Aims: We aimed to externally validate the M4 and M6 models in an Australian tertiary early pregnancy assessment service (EPAS). Materials And Methods: We performed a retrospective single-centre cohort study across five years. Our study population included all women attending our EPAS with a PUL who had at least two serum βhCG levels and one progesterone level measured. The M4 and M6 models were retrospectively applied. Results: Of the 360 women in the study population, there were 26 confirmed ectopic pregnancies (7.2%) and six persisting PULs (2%). The M4 model had a sensitivity and specificity of 72%. The M6P model had a sensitivity of 91% and specificity of 63%. The M6P misclassified two ectopic pregnancies into the low-risk group, compared with seven in the M4 model. Conclusions: The M6P model has the highest sensitivity of the three models and a negative predictive value of 99%. These numbers are comparable to the original United Kingdom population. Further prospective validation is planned. abstract_id: PUBMED:24592084 Pregnancy of unknown location. Pregnancy of unknown location (PUL) is defined as the situation when the pregnancy test is positive but there are no signs of intrauterine pregnancy or an extrauterine pregnancy via transvaginal ultrasonography. It is not always possible to determine the location of the pregnancy in cases of PUL. The reported rate of PUL among women attending early pregnancy units varies between 5 and 42% in the literature and the frequency of PUL incidents has increased with the increase in the number of early pregnancy units. The management of PUL seems to be highly crucial in obstetrics clinics. Therefore, in the current review, issues identified from the literature related to pregnancy of unknown location, potential tools for prediction and algorithms will be discussed. abstract_id: PUBMED:32931087 External validation of models to predict the outcome of pregnancies of unknown location: a multicentre cohort study.
Objective: To validate externally five approaches to predict ectopic pregnancy (EP) in pregnancies of unknown location (PUL): the M6P and M6NP risk models, the two-step triage strategy (2ST, which incorporates M6P), the M4 risk model, and beta human chorionic gonadotropin ratio cut-offs (BhCG-RC). Design: Secondary analysis of a prospective cohort study. Setting: Eight UK early pregnancy assessment units. Population: Women presenting with a PUL and BhCG >25 IU/l. Methods: Women were managed using the 2ST protocol: PUL were classified as low risk of EP if presenting progesterone ≤2 nmol/l; the remaining cases returned 2 days later for triage based on M6P. EP risk ≥5% was used to classify PUL as high risk. Missing values were imputed, and predictions for the five approaches were calculated post hoc. We meta-analysed centre-specific results. Main Outcome Measures: Discrimination, calibration and clinical utility (decision curve analysis) for predicting EP. Results: Of 2899 eligible women, the primary analysis excluded 297 (10%) women who were lost to follow up. The area under the ROC curve for EP was 0.89 (95% CI 0.86-0.91) for M6P, 0.88 (0.86-0.90) for 2ST, 0.86 (0.83-0.88) for M6NP and 0.82 (0.78-0.85) for M4. Sensitivities for EP were 96% (M6P), 94% (2ST), 92% (M6NP), 80% (M4) and 58% (BhCG-RC); false-positive rates were 35%, 33%, 39%, 24% and 13%. M6P and 2ST had the best clinical utility and good overall calibration, with modest variability between centres. Conclusions: 2ST and M6P performed best for prediction and triage in PUL. Tweetable Abstract: The M6 model, as part of a two-step triage strategy, is the best approach to characterise and triage PULs. abstract_id: PUBMED:28660799 Factors to consider in pregnancy of unknown location. The management of women with a pregnancy of unknown location (PUL) can vary significantly and often lacks a clear evidence base. Intensive follow-up is usually required for women with a final outcome of an ectopic pregnancy.
This, however, only accounts for a small proportion of women with a PUL. There remains a clear clinical need to rationalize the follow-up of PUL so women at high risk of having a final outcome of an ectopic pregnancy are followed up more intensively and those PUL at low risk of having an ectopic pregnancy have their follow-up streamlined. This review covers the main management strategies published in the current literature and aims to give clinicians an overview of the most up-to-date evidence that they can take away into their everyday clinical practice when caring for women with a PUL. abstract_id: PUBMED:37334250 Diagnostic value of a urine test in pregnancy of unknown location. Background: Pregnancy of unknown location (PUL) is a term used when there is a positive pregnancy test but no sonographic evidence for an intrauterine pregnancy (IUP) or ectopic pregnancy (EP). This term is a classification and not a final diagnosis. Objective: This study aimed to evaluate the diagnostic value of the Inexscreen test on the outcome of patients with pregnancies of unknown location. Study Design: In this prospective study, a total of 251 patients with a diagnosis of pregnancy of unknown location at the gynecologic emergency department of the La Conception Hospital, Marseille, France, between June 2015 and February 2019 were included. The Inexscreen (semiquantitative determination of intact human urinary chorionic gonadotropin) test was performed on patients with a diagnosis of pregnancy of unknown location. They participated in the study after information and consent collection. The main outcome measures (sensitivity, specificity, predictive values, and the Youden index) of Inexscreen were calculated for the diagnosis of abnormal pregnancy (nonprogressive pregnancy) and ectopic pregnancy. Results: The sensitivity and specificity of Inexscreen for the diagnosis of abnormal pregnancy in patients with pregnancy of unknown location were 56.3% (95% confidence interval, 47.0%-65.1%) and 62.8% (95% confidence interval, 53.1%-71.5%), respectively. The sensitivity and specificity of Inexscreen for the diagnosis of ectopic pregnancy in patients with pregnancy of unknown location were 81.3% (95% confidence interval, 57.0%-93.4%) and 55.6% (95% confidence interval, 48.6%-62.3%), respectively. The positive predictive value and negative predictive value of Inexscreen for ectopic pregnancy were 12.9% (95% confidence interval, 7.7%-20.8%) and 97.4% (95% confidence interval, 92.5%-99.1%), respectively. Conclusion: Inexscreen is a rapid, non-operator-dependent, noninvasive, and inexpensive test that allows the selection of patients at high risk of ectopic pregnancy in case of pregnancy of unknown location. This test allows an adapted follow-up according to the technical platform available in a gynecologic emergency service. Answer: The prediction model for pregnancy of unknown location (PUL) developed in the UK, known as the M4 model, did not validate as effectively on a US population. When applied to a US cohort, the M4 model demonstrated lower areas under the curve compared with the UK population, with 0.898 versus 0.988 for failing PUL/spontaneous miscarriage, 0.915 versus 0.981 for intrauterine pregnancy (IUP), and 0.831 versus 0.904 for ectopic pregnancy (EP). Specifically, the sensitivity for EP using UK data was 80%, but this decreased to 49% for the US data, with similar specificities.
Even after adjusting the US population to better match UK diagnostic criteria, the performance only improved slightly to 55% sensitivity for EP. The study concluded that the sensitivity for EP was too low for the model to be used in clinical practice in its present form in the US, illustrating the difficulties of applying algorithms from one center to another where definitions of pathology may differ (PUBMED:20716562).
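The sensitivity, specificity, PPV and NPV figures cited throughout the PUL abstracts above all follow from a simple 2x2 cross-tabulation of model classification against final outcome. As a worked illustration only (not code from any of the cited studies), the sketch below uses the counts reported for the Australian M4 validation (677 PULs, 105 EP/PPUL overall, and 455 classified as low risk, of which 21 were EP/PPUL) and reproduces the reported metrics to within rounding.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 table.

    "Positive" here means ectopic pregnancy or persistent PUL, and a
    positive test means the model classifies the case as high risk.
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts derived from the figures quoted for the Australian M4 validation.
total_pul = 677
total_ep_ppul = 105
low_risk_calls = 455
ep_ppul_in_low_risk = 21

tp = total_ep_ppul - ep_ppul_in_low_risk   # EP/PPUL flagged as high risk
fn = ep_ppul_in_low_risk                   # EP/PPUL missed in the low-risk group
tn = low_risk_calls - ep_ppul_in_low_risk  # FPUL/IUP kept in the low-risk group
fp = (total_pul - total_ep_ppul) - tn      # FPUL/IUP flagged as high risk

print(diagnostic_metrics(tp, fp, tn, fn))
# Approximately: sensitivity 0.80, specificity 0.76, PPV 0.38, NPV 0.95
```

The same arithmetic underlies the figures quoted for the M6 model and the Inexscreen test; only the counts in the 2x2 table change.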
Instruction: Repair of Paraesophageal Hiatal Hernias—Is a Fundoplication Needed? Abstracts: abstract_id: PUBMED:36825913 Laparoscopic Hiatal Hernia Repair With Concomitant Transoral Incisionless Fundoplication. Patients with gastroesophageal reflux disease and a large hiatal hernia can have life-disrupting symptoms, such as heartburn, regurgitation, cough, and hoarseness. Gastroesophageal reflux disease symptoms are often treated with proton pump inhibitors and occasionally treated with surgery. The last decade has seen the development of a new procedure: laparoscopic hiatal hernia repair with concomitant transoral incisionless fundoplication. When transoral incisionless fundoplication is performed immediately after a laparoscopic hiatal hernia repair, it may enable the discontinuation of proton pump inhibitors and improve a patient's quality of life. This article explores the development of the transoral incisionless fundoplication procedure as well as its concomitant use after hiatal hernia repair at all stages of perioperative care. Also included is a hypothetical case study that illustrates the perioperative nursing care of a patient undergoing this procedure. abstract_id: PUBMED:35024937 Is fundoplication necessary after paraesophageal hernia repair? A meta-analysis and systematic review. Introduction: Paraesophageal hernias are often asymptomatic, but when symptomatic they should be fixed laparoscopically. A cruroplasty of the diaphragmatic pillars is performed and a fundoplication is usually performed at the same time. However, there are times, especially in emergency cases, where it is not always possible to perform a fundoplication. We hypothesized there would be no difference in outcomes whether or not a fundoplication is performed as part of a paraesophageal hernia repair. Methods: A literature review of available clinical databases was performed using PubMed, Clinical Key and Google Scholar. Our search terms were: "paraesophageal hernia", "paraesophageal hernia repair", "fundoplication", "emergency surgery" and "no fundoplication". We excluded studies that were in languages other than English, abstracts and small case series. Results: Our search criteria yielded a total of 22 studies published between 1997 and 2020. There were a total of 8600 subjects enrolled into this study. The overall pooled prevalence of fundoplication was estimated as 69% (95% CI: 59%-78%). In patients who underwent fundoplication, the risk of gastroesophageal reflux disease (GERD) was reduced as compared to patients who did not undergo fundoplication (RR: 0.64, 95% CI: 0.40-1.04, p = 0.069, I² = 47.2%). A similar trend was also observed in recurrence (RR: 0.53, 95% CI: 0.27-1.03, p = 0.061, I² = 0.0%) and reoperations (RR: 0.25, 95% CI: 0.02-2.69, p = 0.25, I² = 56.7%). However, patients who underwent fundoplication had an increased risk of dysphagia (RR: 1.68, 95% CI: 0.59-4.81, p = 0.83, I² = 42%). Conclusions: There is a higher rate of recurrence of gastroesophageal reflux disease, recurrence of hernia and reoperation when no fundoplication is performed during a paraesophageal hernia repair but a lower risk of dysphagia, although none of these differences reached statistical significance. Paraesophageal hernia repair with fundoplication should be performed, but it is acceptable to not do it in certain situations. abstract_id: PUBMED:30478699 Investigating rates of reoperation or postsurgical gastroparesis following fundoplication or paraesophageal hernia repair in New York State.
Background: Little is known of the natural history of fundoplication or paraesophageal hernia (PEH) repair in terms of reoperation or the incidence and treatment of postsurgical gastroparesis (PSG) in large series. Repeat fundoplications or PEH repairs, as well as pyloroplasty/pyloromyotomy operations, have proven to be effective in the context of PSG or recurrence. In this study, we analyzed the incidences of PSG and risk factors for these revisional surgeries following fundoplication and PEH repair procedures in the state of New York. Methods: The New York State Planning and Research Cooperative System (NY SPARCS) database was utilized to examine all adult patients who underwent fundoplication or PEH repair for the treatment of GERD between 2005 and 2010. The primary outcome was the incidence of each type of reoperation and the timing of the follow-up procedure/diagnosis of gastroparesis. Generalized linear mixed models were used to examine the risk factors for follow-up procedures/diagnosis. Results: A total of 5656 patients were analyzed, as 3512 (62.1%) patients underwent a primary fundoplication procedure and 2144 (37.9%) patients underwent a primary PEH repair. The majority of subsequent procedures (n = 254, 65.5%) were revisional procedures (revisional fundoplication or PEH repair) following a primary fundoplication. A total of 134 (3.8%) patients who underwent a primary fundoplication later had a diagnosis of gastroparesis or a follow-up procedure to treat gastroparesis, while 95 (4.4%) patients who underwent a primary PEH repair were later diagnosed with gastroparesis or underwent surgical treatment of gastroparesis. Conclusion: The results revealed low reoperation rates following both fundoplication and PEH repairs, with no significant difference between the two groups. Additionally, PEH repair patients tended to be older and were more likely to have a comorbidity compared to fundoplication patients, particularly in the setting of hypertension, obesity, and fluid and electrolyte disorders. Further research is warranted to better understand these findings. abstract_id: PUBMED:30675094 pH Scores in Hiatal Repair with Transoral Incisionless Fundoplication. Background And Objectives: Transoral incisionless fundoplication is an alternative to traditional laparoscopic fundoplications. Recently, hiatal hernia repair combined with transoral incisionless fundoplication has become an accepted modification of the original procedure; however, outcomes information, particularly objective pH monitoring, has been sparse. We retrospectively review the subjective and objective outcomes of transoral incisionless fundoplication combined with hiatal hernia repair. Methods: Ninety-seven consecutive patients presenting for reflux evaluation were reviewed for outcomes after evaluation and treatment. Fifty-five patients proceeded to hiatal hernia repair with transoral incisionless fundoplication. Twenty-nine patients (53%) were found to have matched preoperative and postoperative validated surveys and pH evaluations. Results: There were no serious complications. The mean follow-up was 296 days (SD, 117 days). The mean Gastroesophageal Reflux Disease Health Related Quality of Life score improved from 33.7 (SD, 22.0) to 9.07 (SD, 13.95), P < .001. The mean Reflux Symptom Index score improved from 20.32 (SD, 13) to 8.07 (SD, 9.77), P < .001. The mean pH score improved from 35.3 (SD, 2.27) to 10.9 (SD, 11.5), P < .001.
Twenty-two of the 29 patients were judged to have an intact hiatal repair with transoral incisionless fundoplication (76%). Of the 22 patients with an intact hiatal repair and intact fundoplication, 21 (95%) had normalized their pH exposure. Conclusions: In this retrospective review, hiatal hernia repair combined with transoral incisionless fundoplication significantly improved outcomes in patients with gastroesophageal reflux disease in both subjective Gastroesophageal Reflux Disease Health Related Quality of Life and Reflux Symptom Index measurements as well as in objective pH scores. abstract_id: PUBMED:23711265 The need for fundoplication at the time of laparoscopic paraesophageal hernia repair. Most authors recommend an antireflux operation at the time of laparoscopic paraesophageal hernia (PEH) repair. A fundoplication combats the potential postoperative reflux resulting from disruption of the hiatal anatomy and may minimize recurrence. The purpose of this study is to evaluate the differences in postoperative dysphagia, reflux symptoms, and hiatal hernia recurrence in patients with and without a fundoplication at the time of laparoscopic paraesophageal hernia repair. Patients undergoing laparoscopic PEH repair from July 2006 to June 2012 were identified. Open repairs and reoperative cases were excluded. Patient characteristics, operative details, complications, and postoperative outcomes were recorded. Over the six-year period, 152 laparoscopic PEH repairs were performed. Mean age was 65.8 years (range, 31 to 92) and average body mass index was 29.9 kg/m² (range, 18 to 52 kg/m²). Concomitant fundoplication was performed in 130 patients (86%), which was determined based on preoperative symptoms and esophageal motility. Mean operative times were similar with fundoplication (188 minutes) and without (184.5 minutes). At a mean follow-up of 13.9 months, there were 19 recurrences: 12.3 per cent (16 of 130) in the fundoplication group and 13.6 per cent (three of 22) in those without. Dysphagia lasting greater than six weeks was present in eight patients in the fundoplication group (6.2%) and in none in those without (P = 0.603). Eighteen percent of patients without a fundoplication reported postoperative reflux compared with 5.4 per cent of patients with a fundoplication (P = 0.055). In the laparoscopic repair of PEH, the addition of a fundoplication minimizes postoperative reflux symptoms without additional operative time. Neither dysphagia nor hiatal hernia recurrence is affected by the presence of a fundoplication. abstract_id: PUBMED:24948540 Fever after redo Nissen fundoplication with hiatal hernia repair. Background: Fevers often arise after redo fundoplication with hiatal hernia repair. We reviewed our experience to evaluate the yield of a fever work-up in this population. Methods: We performed a retrospective review of children undergoing redo Nissen fundoplication with hiatal hernia repair between December 2001 and September 2012. Temperatures and fever evaluations of those children receiving a mesh repair were compared with those without mesh. A fever was defined as a temperature ≥38.4°C. Results: Fifty-one children received 46 laparoscopic, 4 open, and 1 laparoscopic converted to open procedures. Biosynthetic mesh was used in 25 children whereas 26 underwent repair without mesh. A fever occurred in 56% of those repaired with mesh compared with 23.1% without mesh (P = 0.02). A fever evaluation was conducted in 32% of those with mesh compared with 11.5% without mesh (P = 0.52).
A urinary tract infection was identified in one child after mesh use and an infection was identified in two children without mesh, one pneumonia and one wound infection (P = 1). In those repaired with mesh, there was no significant difference in maximum temperature. Conclusions: Fever is common after redo Nissen fundoplication with hiatal hernia repair and occurs more frequently, and with higher temperatures, in those with mesh. Fever work-up in these patients is unlikely to yield an infectious source and is attributed to the extensive dissection during the redo procedure. abstract_id: PUBMED:31314184 Necessity of fundoplication and mesh in the repair of the different types of paraesophageal hernia. Background: The management of paraesophageal hernia (PEH) has changed significantly since the introduction of laparoscopic surgery in the 1990s. This study aims to explore the need for a Nissen fundoplication or a posterior gastropexy and the use of mesh reinforcement in the surgical repair of PEH. Patients And Methods: Seventy-three patients with a symptomatic and documented PEH type II, III or IV were included in this retrospective study. The following data were collected: type of PEH, surgical procedure, complications, length of hospital stay, recurrences, time to recurrence, type of PEH recurrence and treatment of recurrent PEH. Results: All 73 patients underwent laparoscopic surgery without any conversion to open surgery. In 80% a posterior gastropexy was performed, while the remaining 20% suffered from GERD symptoms and were treated with a Nissen fundoplication. In 18% of the patients a mesh was used as reinforcement of the repair. The surgical repair differed significantly according to the type of PEH. Fourteen percent of the patients suffered from a postoperative complication, pneumothorax and dysphagia being the most frequent. There were no perioperative deaths. The recurrence rate was 22% with a median time to recurrence of 12 months. Conclusion: Laparoscopic PEH repair is a safe and efficacious procedure with no mortality and minimal early morbidity. The surgical repair of PEH should be adjusted to the type of PEH. However, up until now the literature fails to produce clear guidelines on when to perform a gastropexy or Nissen fundoplication and which patients might benefit from a mesh reinforcement. abstract_id: PUBMED:28840522 Endoscopic Evaluation of Post-Fundoplication Anatomy. Purpose Of Review: We aim to review the endoscopic evaluation of post-fundoplication anatomy and its role in assessment of fundoplication outcomes and in pre-operative planning for reoperation in failed procedures. Recent Findings: There is no universally accepted system for evaluating post-fundoplication anatomy endoscopically. However, multiple reports described the usefulness of post-operative endoscopy as a quality control measure and in the evaluation of complex cases such as repeat procedures and paraesophageal hernias (PEH). Endoscopic evaluation of post-fundoplication anatomy has an important role in assessing the outcomes of operative repair and pre-operative planning for failed fundoplications. Attempts have been made to characterize the appearance of the newly formed gastroesophageal valve after successful repairs and to standardize endoscopic reporting and classification of anatomic descriptions of failed fundoplications. However, there is no consensus.
More studies are needed to evaluate the applicability and reproducibility of proposed endoscopic evaluation systems in order for such tools to become widely accepted. abstract_id: PUBMED:36754871 Transthoracic fundoplication using the Belsey Mark IV technique versus Nissen fundoplication: A systematic review and meta-analysis. Background: Nissen fundoplication is considered the cornerstone surgical treatment for hiatal hernia repair. Belsey Mark IV (BMIV) transthoracic fundoplication is an alternative approach that is rarely utilized in today's minimally invasive era. This study aims to summarize the safety and efficacy of BMIV and to compare it with Nissen fundoplication. Methods: We searched MEDLINE, Scopus, and Cochrane Library databases for single arm and comparative studies published by March 31st, 2022, according to PRISMA statement. Inverse-variance weights were used to estimate the proportion of patients experiencing the studied outcome and random-effects meta-analyses were performed. Results: 17 studies were identified, incorporating 2136 and 638 patients that underwent Belsey Mark IV or Nissen fundoplication, respectively. A total of 13.8% (95% CI: 9.6-18.6) of the patients that underwent fundoplication with the BMIV technique had non-resolution of their symptoms and 3.5% (95% CI: 2.0-5.4) required a reoperation. Overall, 14.8% (95% CI: 9.5-20.1) of the BMIV arm patients experienced post-operative complications, 5.0% (95% CI: 2.0-9.0) experienced chronic postoperative pain and 6.9% (95% CI: 3.1-11.9) had a hernia recurrence. No statistically significant difference was observed between Belsey Mark IV and Nissen fundoplication in terms of post-interventional non-resolution of symptoms (odds ratio [OR]: 1.49 [95% Confidence Interval (95%CI):0.6-4.0]; p = 0.42), post-operative complications (OR:0.83, 95%CI: 0.5-1.5, p = 0.54) and in-hospital mortality (OR:0.69, 95%CI: 0.13-3.80, p = 0.67). Belsey Mark IV arm had significantly lower reoperation rates compared to Nissen arm (OR:0.28, 95%CI: 0.1-0.7, p = 0.01). Conclusions: BMIV fundoplication is a safe and effective but technically challenging. The BMIV technique may offer benefits to patients compared to the laparoscopic Nissen fundoplication. These benefits, however, are challenged by the increased morbidity of a thoracotomy. abstract_id: PUBMED:36270209 Laparoscopic mesh repair and Toupet fundoplication for parahiatal hernia complicated by sliding hiatal hernia: A case report. Introduction: The parahiatal hernia is a rare type of diaphragmatic hernia in adults. Although there have been occasional reports of parahiatal hernias, few have reported simultaneous laparoscopic mesh repair of a parahiatal hernia with a hiatal hernia. This report describes laparoscopic mesh repair and fundoplication for a parahiatal hernia combined with an esophageal hiatal hernia. Presentation Of Case: A 39-year-old woman presented with left-side postprandial abdominal pain. Esophagogastroduodenoscopy revealed a parahiatal hernia and sliding hiatal hernia. Computed tomography (CT) showed that the stomach had prolapsed into the thorax from the outside of the left diaphragm. The preoperative diagnosis was parahiatal hernia with a hernial sac complicated by sliding hiatal hernia. Laparoscopic mesh repair was planned. The stomach had prolapsed on the left side of the esophagus and was extruded. The diagnosis of a parahiatal hernia with a hernial sac complicated by a sliding hiatal hernia was confirmed. 
The esophageal hiatal hernia was repaired using two non-absorbable sutures. The congenital defect was further reinforced with mesh fixed to the orifice of the adjacent parahiatal hernia. We performed Toupet fundoplication to treat gastroesophageal reflux disease and sutured the right diaphragmatic crus and stomach to prevent migration. The patient was discharged home on postoperative day 5. Conclusion: We encountered a patient with a parahiatal hernia complicated by a sliding hiatal hernia. The parahiatal hernia can be diagnosed by CT imaging. Preoperative diagnostic imaging can lead to appropriate treatment. Answer: The question of whether a fundoplication is necessary during the repair of paraesophageal hiatal hernias (PEH) has been explored in various studies, and the consensus is not absolute. Some studies suggest that fundoplication should be performed during PEH repair to minimize postoperative reflux symptoms and potentially reduce the risk of hernia recurrence and reoperation, although these benefits may not always reach statistical significance (PUBMED:35024937; PUBMED:23711265). A meta-analysis and systematic review indicated that patients who underwent fundoplication had a reduced risk of gastroesophageal reflux disease (GERD), recurrence, and reoperations compared to those who did not undergo fundoplication. However, there was an increased risk of dysphagia in patients who underwent fundoplication (PUBMED:35024937). Another study found that the addition of a fundoplication minimized postoperative reflux symptoms without increasing operative time, and it did not affect dysphagia or hiatal hernia recurrence rates (PUBMED:23711265). On the other hand, some studies suggest that it is acceptable not to perform a fundoplication in certain situations, such as emergency cases where it may not be possible (PUBMED:35024937). Additionally, a study on laparoscopic PEH repair indicated that the surgical repair should be adjusted to the type of PEH, and clear guidelines on when to perform a gastropexy or Nissen fundoplication are still lacking (PUBMED:31314184). Furthermore, the development of laparoscopic hiatal hernia repair with concomitant transoral incisionless fundoplication (TIF) has been shown to improve patient outcomes, including quality of life and objective pH monitoring results, suggesting that this combined approach may be beneficial for patients with GERD and large hiatal hernias (PUBMED:36825913; PUBMED:30675094). In conclusion, while fundoplication is generally recommended during PEH repair to combat potential postoperative reflux and minimize recurrence, there are situations where it may not be performed, and the decision should be tailored to the individual patient's condition and the type of hernia. The use of concomitant TIF after hiatal hernia repair is also an emerging option that may improve patient outcomes.
Instruction: Is there really a clinical benefit of using minimized extracorporeal circulation for coronary artery bypass grafting? Abstracts: abstract_id: PUBMED:18278679 Is there really a clinical benefit of using minimized extracorporeal circulation for coronary artery bypass grafting? Background: Minimized extracorporeal circulation is intended to reduce the negative effects associated with cardiopulmonary bypass. This prospective study was performed to evaluate whether minimized extracorporeal circulation has a clinical benefit for coronary artery surgery patients compared to standard extracorporeal circulation. Methods: Sixty patients were randomized into two study groups: 30 patients underwent coronary artery bypass grafting using minimized extracorporeal circulation and 30 patients were operated using standard extracorporeal circulation. Baseline characteristics, intraoperative details, postoperative data, perioperative blood chemistry determinations of hematocrit, platelets, muscle-brain fraction of the creatine kinase, cardiac troponin T and colloid osmotic pressure as measurements of intrathoracic blood volume index and extravascular lung water index were compared. Results: Baseline characteristics and intraoperative details of both groups were similar. Patients who underwent minimized extracorporeal circulation showed more short-term dependency on norepinephrine (P < 0.01). Their maximal postoperative muscle-brain fraction of the creatine kinase was lower (P < 0.05) and their hematocrit on arrival in the intensive care unit was higher (P < 0.01). No other significant differences were found. In both collectives, values for hematocrit (P < 0.001), platelets (P < 0.001), colloid osmotic pressure (P < 0.001) and intrathoracic blood volume index (P < 0.05) decreased, while the extravascular lung water index did not change significantly during cardiopulmonary bypass. Conclusions: A clinical advantage of minimized over standard extracorporeal circulation was not found. Furthermore, a higher number of patients in the minimized extracorporeal circulation group required postoperative norepinephrine infusions for hemodynamic stabilization. In summary, the presumed superiority of minimized extracorporeal circulation for coronary artery bypass grafting in standard patients could not be confirmed. abstract_id: PUBMED:26034198 Minimized extracorporeal circulation is improving outcome of coronary artery bypass surgery in the elderly. Advanced age is a known risk factor for morbidity and mortality after coronary artery bypass grafting (CABG). Minimized extracorporeal circulation (MECC) has been shown to reduce the negative effects associated with conventional extracorporeal circulation (CECC). This trial assesses the impact of MECC on the outcome of elderly patients undergoing CABG. Eight hundred and seventy-five patients (mean age 78.35 years) underwent isolated CABG using CECC (n=345) or MECC (n=530). The MECC group had a significantly shorter extracorporeal circulation time (ECCT), cross-clamp time and reperfusion time and lower transfusion needs. Postoperatively, these patients required significantly less inotropic support, fewer blood transfusions, less postoperative hemodialysis and developed less delirium compared to CECC patients. In the MECC group, intensive care unit (ICU) stay was significantly shorter and 30-day mortality was significantly reduced [2.6% versus 7.8%; p<0.001].
In conclusion, MECC improves outcome in elderly patients undergoing CABG surgery. abstract_id: PUBMED:20847981 Minimized extracorporeal circulation for the robotic totally endoscopic coronary artery bypass grafting hybrid procedure. Robotically assisted totally endoscopic coronary artery bypass grafting (TECAB) can be performed on the beating heart with cardiopulmonary bypass support in high-risk patients or patients for whom technical difficulties are expected with a complete off-pump approach. To minimize the inflammatory response and reduce the requirement for transfusion, minimized extracorporeal circulation is an attractive option for robotic TECAB procedures. The present report describes a case for which minimized extracorporeal circulation was used for the first time in TECAB performed using the da Vinci telemanipulation system. abstract_id: PUBMED:20515982 Haematological effects of minimized compared to conventional extracorporeal circulation after coronary revascularization procedures. During the last decade, minimized extracorporeal circulation (MECC) systems have shown beneficial effects to the patients over the conventional cardiopulmonary bypass (CECC) circuits. This is a prospective randomized study of 99 patients who underwent coronary artery bypass grafting (CABG) surgery, evaluating the postoperative haematological effects of these systems. Less haemodilution (p=0.001) and markedly less haemolysis (p<0.001), as well as better preservation of the coagulation system integrity (p=0.01), favouring the MECC group, was found. As a clinical result, less bank blood requirements were noted and a quicker recovery, as far as mechanical ventilation support and ICU stay are concerned, was evident with the use of MECC systems. As a conclusion, minimized extracorporeal circulation systems may attenuate the adverse effects of conventional circuits on the haematological profile of patients undergoing CABG surgery. abstract_id: PUBMED:31293800 Minimized extracorporeal circulation in non-coronary surgery. Minimally invasive extracorporeal circulation (MiECC) technology is characterized by improved biocompatibility due to closed-loop design, minimized priming, and markedly reduced artificial surface. Despite well-evidenced clinical advantages in coronary surgery, MiECC penetration in complex open-heart surgery is low. Concerns have been raised by surgeons and perfusionist regarding safety of perfusion in situations when the heart is opened and air is entering the closed system. Moreover, issues of blood and volume management are deemed impractical without having a reservoir. In the evolution of MiECC safety aspects as well as means of air and volume management have been addressed. The integration of active air removal devices, and the possibility of venting and volume buffering made MiECC suitable for valvular or even more complex surgery. However, typical clinical benefits found with MiECC in coronary artery bypass grafting (CABG) surgery, in particular blood sparing effects, were not reproducible. Air handling and blood management remain the main issues of MiECC in non-coronary surgery. With the introduction of modular (type IV) MiECC systems containing a second, accessory circuit for immediate conversion to open cardiopulmonary bypass (CPB), the last obstacles seem to be cleared away. The first reports using this latest development in MiECC technology sound promising.
It is now up to the cardiac surgical community to adopt this technology and produce data helping to answer the question whether MiECC is the best perfusion strategy for all comer's cardiac surgery. abstract_id: PUBMED:21801930 Successful use of hirudin during cardiac surgery using minimized extracorporeal circulation in patients with heparin-induced thrombocytopenia. In this case series, we describe our successful use of a reduced hirudin dosage as an anticoagulant during cardiac surgery using minimized extracorporeal circulation in patients with heparin-induced thrombocytopenia. abstract_id: PUBMED:21245800 Myocardial protection in patients undergoing coronary artery bypass grafting surgery using minimized extracorporeal circulation in combination with volatile anesthetic. The minimized extracorporeal circulation (ECC) is a safe alternative for coronary artery bypass grafting (CABG) and allows a reduction of the negative effects associated with conventional extracorporeal circulation. Experimental and clinical data indicate that the anesthetic regime might influence the ischemia-reperfusion injury in CABG surgery. The aim of our retrospective study was to investigate the cardioprotective effects of two different minimized ECC systems in combination with two different anesthetic concepts and to determine the impact on oxygen consumption during aortic cross-clamping (ACC). Data of 1,182 patients who underwent elective isolated CABG with minimized ECC from January 1, 2003, to December 31, 2008, were enrolled in a retrospective manner. Patients were allocated either to sevoflurane-based volatile anesthesia using PRECiSe system (SEVO group) or to propofol-based intravenous anesthesia using MECC system (PROP group). Postoperatively, the SEVO group showed lower concentrations of myocardial fraction of creatine kinase compared with the PROP group (p < 0.001). During the period of ACC, the values of systemic vascular resistance (SVR) were higher in SEVO group (p < 0.005). Also, the SEVO group showed lower oxygen consumption at each time point ACC (p < 0.0001). In conclusion, PRECiSe system using a microporous capillary oxygenator in combination with sevoflurane-based volatile anesthetic regimen seem to provide lower postoperative myocardial cell damage and to allow improved perfusion with higher SVRs and lower oxygen consumption during ACC. abstract_id: PUBMED:32522075 Minimally invasive extracorporeal circulation is a cost-effective alternative to conventional extracorporeal circulation for coronary artery bypass grafting: propensity matched analysis. Introduction: Minimally invasive extracorporeal circulation has developed with the aim of reducing the impact of the adverse effects associated with conventional extracorporeal circulation. The aim of this study was to compare outcomes for patients undergoing coronary artery bypass grafting using minimally invasive extracorporeal circulation with those performed using conventional extracorporeal circulation. Methods: A retrospective analysis was performed of patients undergoing minimally invasive extracorporeal circulation coronary artery bypass grafting at a single centre. 2:1 propensity matching was performed to identify control patients undergoing conventional extracorporeal circulation coronary artery bypass grafting. Outcomes were compared using univariate analysis. Results: A total of 354 patients were included in the study, with 118 patients undergoing minimally invasive extracorporeal circulation coronary artery bypass grafting.
Patients were well matched on baseline characteristics. The mean logistic EuroSCORE was 3.95 ± 4.20. Operative times (3.31 ± 1.52 vs. 3.56 ± 0.73, p = 0.03) were significantly shorter in minimally invasive extracorporeal circulation cases. Patients who underwent surgery with minimally invasive extracorporeal circulation had significantly less 12-hour blood loss (322.3 ± 13.2 mL vs. 380.8 ± 15.2 mL, p < 0.01). Correspondingly, a significantly lower proportion of patients were transfused (25.8% vs. 36%, p = 0.04), and the mean number of red blood cells transfused was lower (0.45 ± 0.95 vs. 0.97 ± 2.13, p = 0.01). Similarly, the number of coagulation products administered was lower (0.161 ± 0.05 vs. 0.40 ± 0.09, p = 0.05). There was a significantly lower incidence of acute kidney injury (11.0% vs. 19.9%, p = 0.03). Minimally invasive extracorporeal circulation was associated with a £679.50 cost saving per patient. Discussion: Minimally invasive extracorporeal circulation for coronary artery bypass grafting is associated with a reduced requirement for blood transfusion, reduced incidence of acute kidney injury and a significant cost saving. Minimally invasive extracorporeal circulation should be considered as an adjunct for all patients undergoing coronary artery bypass grafting. abstract_id: PUBMED:25323401 Minimized extracorporeal circulation does not impair cognitive brain function after coronary artery bypass grafting. Objectives: Objective evaluation of the impact of minimized extracorporeal circulation (MECC) on perioperative cognitive brain function in coronary artery bypass grafting (CABG) by electroencephalogram P300 wave event-related potentials and number connection test (NCT) as metrics of cognitive function. Methods: Cognitive brain function was assessed in 31 patients in 2013 with a mean age of 65 years [standard deviation (SD) 10] undergoing CABG by the use of MECC with P300 auditory evoked potentials (peak latencies in milliseconds) directly prior to intervention, 7 days after and 3 months later. Number connection test, serving as method of control, was performed simultaneously in all patients. Results: Seven days following CABG, cognitive P300 evoked potentials were comparable with preoperative baseline values [vertex (Cz) 376 (SD 11) ms vs 378 (18) ms, P = 0.39; frontal (Fz) 377 (11) vs 379 (21) ms, P = 0.53]. Cognitive brain function at 3 months was compared with baseline values [(Cz) 376 (11) ms vs 371 (14 ms) P = 0.09; (Fz) 377 (11) ms vs 371 (15) ms, P = 0.04]. Between the first postoperative measurement and 3 months later, significant improvement was observed [(Cz) 378 (18) ms vs 371 (14) ms, P = 0.03; (Fz) 379 (21) vs 371 (15) ms, P = 0.02]. Similar clearly corresponding patterns could be obtained via the number connection test. Results could be confirmed in repeated measures analysis of variance for Cz (P = 0.05) and (Fz) results (P = 0.04). Conclusions: MECC does not adversely affect cognitive brain function after CABG. Additionally, these patients experience a substantial significant cognitive improvement after 3 months, evidentiary proving that the concept of MECC ensures safety and outcome in terms of brain function. abstract_id: PUBMED:20514574 Minimized extracorporeal circulation in coronary artery bypass surgery is equivalent to standard extracorporeal circulation in patients with reduced left ventricular function.
Background: Minimized extracorporeal circulation (MECC) is a promising alternative to standard extracorporeal circulation (ECC) and its use is increasing in routine coronary bypass surgery. We analyzed the clinical outcome of patients with reduced left ventricular function who underwent coronary artery bypass surgery with MECC or with standard ECC. Methods: From January 2003 to September 2008, 238 patients with a left ventricular function < 30% underwent bypass surgery with ECC or MECC. The primary end point of our retrospective observational study was 30-day mortality. Secondary endpoints were the transfusion requirements, as well as intensive care and the in-hospital course. Results: Demographic data, comorbidities and left ventricular function were similar in the study groups. MECC patients had a tendency towards a lower 30-day mortality rate, a better postoperative renal function and reduced ventilation times. Extracorporeal circulation time and postoperative high-dose inotropic support were significantly lower in the MECC group, while the stays in the intensive care unit and in hospital were comparable between the two groups. In our study, age in the ECC group, and previous infarction and New York Heart Association grade IV in the MECC group were preoperative risk factors associated with a higher mortality. Conclusion: Coronary bypass surgery using MECC is feasible and safe for patients with severely impaired left ventricular function. It is a promising alternative to ECC with a low mortality rate and a more favorable postoperative course. Answer: The clinical benefit of using minimized extracorporeal circulation (MECC) for coronary artery bypass grafting (CABG) is supported by several studies, although the evidence is not entirely consistent across all research. One study found that MECC did not show a significant clinical advantage over standard extracorporeal circulation (ECC) in terms of postoperative outcomes, and a higher number of patients in the MECC group required postoperative norepinephrine infusions for hemodynamic stabilization (PUBMED:18278679). However, other studies have reported positive outcomes associated with MECC. For instance, MECC has been shown to improve outcomes in elderly patients undergoing CABG, with benefits such as shorter extracorporeal circulation time, lower transfusion needs, less inotropic support, less need for postoperative hemodialysis, less delirium, shorter ICU stay, and significantly reduced 30-day mortality (PUBMED:26034198). Additionally, MECC has been associated with less hemodilution, markedly less hemolysis, better preservation of the coagulation system, less bank blood requirements, and quicker recovery in terms of mechanical ventilation support and ICU stay (PUBMED:20515982). In non-coronary surgery, MECC has been characterized by improved biocompatibility and reduced artificial surface, although its penetration in complex open-heart surgery is low due to concerns about safety and blood management (PUBMED:31293800). Furthermore, MECC has been used successfully with a reduced hirudin dosage in patients with heparin-induced thrombocytopenia (PUBMED:21801930), and it has been shown to not impair cognitive brain function after CABG (PUBMED:25323401). In patients with reduced left ventricular function, MECC was found to be a feasible and safe alternative to standard ECC, with a tendency towards lower 30-day mortality rate, better postoperative renal function, and reduced ventilation times (PUBMED:20514574).
A cost-effectiveness analysis also indicated that MECC is associated with reduced requirements for blood transfusion, a lower incidence of acute kidney injury, and significant cost savings (PUBMED:32522075). In summary, while one study did not find a clear clinical advantage of MECC over standard ECC (PUBMED:18278679), the majority of the studies suggest that MECC can offer clinical benefits in terms of reduced morbidity, mortality, and resource utilization for patients undergoing CABG (PUBMED:26034198, PUBMED:20515982, PUBMED:31293800, PUBMED:21801930, PUBMED:25323401, PUBMED:20514574, PUBMED:32522075). These benefits include improved outcomes, particularly in elderly patients, better preservation of hematological parameters, successful use in patients with contraindications to heparin, and no adverse effects on cognitive brain function postoperatively. Additionally, MECC has been associated with a more favorable postoperative course in patients with severely impaired left ventricular function. Overall, the evidence suggests that MECC can be a valuable alternative to standard ECC in CABG, offering potential advantages that may translate into clinical benefits for certain patient populations.
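One of the abstracts above (PUBMED:32522075) compares MiECC with conventional perfusion after 2:1 propensity matching. As a rough illustration of what that entails, the sketch below fits a propensity model on simulated baseline covariates and greedily matches each treated patient to the two nearest unmatched controls; the covariates, caliper and matching algorithm are assumptions made for the example, not details reported by the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_2_to_1(ps_treated, ps_control, caliper=0.05):
    """Greedy 2:1 nearest-neighbour matching on the propensity score.
    Returns a list of (treated_index, [control_index, control_index]) pairs."""
    available = set(range(len(ps_control)))
    matches = []
    for i, p in enumerate(ps_treated):
        # pick the two closest unused controls within the caliper
        candidates = sorted(available, key=lambda j: abs(ps_control[j] - p))
        chosen = [j for j in candidates if abs(ps_control[j] - p) <= caliper][:2]
        if len(chosen) == 2:
            matches.append((i, chosen))
            available -= set(chosen)
    return matches

rng = np.random.default_rng(0)
# Hypothetical baseline covariates (e.g. age, EuroSCORE, LVEF) and group labels
X = rng.normal(size=(300, 3))
treated = rng.integers(0, 2, size=300).astype(bool)

# Propensity score = modelled probability of receiving the minimally invasive circuit
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
pairs = match_2_to_1(ps[treated], ps[~treated])
print(f"{len(pairs)} treated patients matched to {2 * len(pairs)} controls")
```

Outcomes such as blood loss, transfusion and acute kidney injury would then be compared between the matched groups, as in the univariate analysis described in the abstract.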
Instruction: Is there a role for ADORA2A polymorphisms in levodopa-induced dyskinesia in Parkinson's disease patients? Abstracts: abstract_id: PUBMED:25872644 Is there a role for ADORA2A polymorphisms in levodopa-induced dyskinesia in Parkinson's disease patients? Aim: Levodopa is first line treatment of Parkinson's disease (PD). However, its use is associated with the presence of motor fluctuations and dyskinesias. In recent years, adenosine A2A receptor (A2AR) is rising as a therapeutic target for PD. The aim of the present study was to investigate whether ADORA2A is associated with levodopa adverse effects. Patients & Methods: Two hundred and eight PD patients on levodopa therapy were investigated. rs2298383 and rs3761422 at the ADORA2A gene were genotyped by allelic discrimination assays. Results: A trend for association was observed for both polymorphism and diplotypes with dyskinesia. Conclusion: The present results should be considered as positive preliminary evidence. Further studies are needed to determine the association between ADORA2A and dyskinesia. Original submitted 3 December 2014; Revision submitted 13 February 2015. abstract_id: PUBMED:25175963 Adenosine receptors and dyskinesia in pathophysiology. First, the recent progress in the pathogenesis of levodopa-induced dyskinesia was described. Serotonin neurons play an important role in conversion from levodopa to dopamine and in the release of converted dopamine into the striatum in the Parkinsonian state. Since serotonin neurons lack buffering effects on synaptic dopamine concentration, the synaptic dopamine markedly fluctuates depending on the fluctuating levodopa concentration in the serum after taking levodopa. The resultant pulsatile stimulation makes the striatal direct-pathway neurons get potential that releases excessive GABA into the output nuclei of the basal ganglia. When levodopa is administered, the stored GABA is released, the output nuclei become hypoactive, and then dyskinesias emerge. Second, effects of adenosine A2A receptor antagonists on dyskinesia were described. It has been demonstrated that the expression of adenosine A2A receptors is increased in Parkinson's disease (PD) patients with dyskinesias, suggesting that blockade of A2A receptors is beneficial for dyskinesias. Preclinical studies have shown that A2A receptor antagonists reduce liability of dyskinesias in PD models. Clinical trials have demonstrated that A2A antagonists increase functional ON-time (ON without troublesome dyskinesia) in PD patients suffering from wearing-off phenomenon, although they may increase dyskinesia in patients with advanced PD. abstract_id: PUBMED:32294749 Diagnostic prediction model for levodopa-induced dyskinesia in Parkinson's disease. Background: There are currently no methods to predict the development of levodopa-induced dyskinesia (LID), a frequent complication of Parkinson's disease (PD) treatment. Clinical predictors and single nucleotide polymorphisms (SNP) have been associated to LID in PD. Objective: To investigate the association of clinical and genetic variables with LID and to develop a diagnostic prediction model for LID in PD. Methods: We studied 430 PD patients using levodopa. The presence of LID was defined as an MDS-UPDRS Part IV score ≥1 on item 4.1. We tested the association between specific clinical variables and seven SNPs and the development of LID, using logistic regression models. 
Results: Regarding clinical variables, age of PD onset, disease duration, initial motor symptom and use of dopaminergic agonists were associated to LID. Only CC genotype of ADORA2A rs2298383 SNP was associated to LID after adjustment. We developed two diagnostic prediction models with reasonable accuracy, but we suggest that the clinical prediction model be used. This prediction model has an area under the curve of 0.817 (95% confidence interval [95%CI] 0.77‒0.85) and no significant lack of fit (Hosmer-Lemeshow goodness-of-fit test p=0.61). Conclusion: Predicted probability of LID can be estimated with reasonable accuracy using a diagnostic clinical prediction model which combines age of PD onset, disease duration, initial motor symptom and use of dopaminergic agonists. abstract_id: PUBMED:35532631 The Influence of ADORA2A on Levodopa-Induced Dyskinesia. Background: Dopamine deficiency causes Parkinson's disease (PD), and on treatment, levodopa is the gold standard. Various drug-metabolizing enzymes and drug receptors are believed to be involved in prompting dyskinesias due to the extended usage of levodopa. Shreds of evidence in genomic studies have presented that ADORA2A receptor antagonism has beneficial outcomes to avoid these drug-induced side effects. Objective: The aim of this study was to study the polymorphisms of rs2298383, rs35060421, and rs5751876 in the ADORA2A in patients diagnosed as PD and describe their possible relationships with levodopa-induced dyskinesias (LID). Methods: One-hundred and seventy-two patients were recruited and separated as the study and the control group. DNA was achieved from peripheral venous blood, high resolution melting analysis, and reverse-transcriptase PCR was performed. Results: The allele differences among the groups were not statistically significant. Although it was not statistically significant, the rs35060421 allele was observed to repeat more frequently. However, we did not find an association between such polymorphisms of ADORA2A and LID. Conclusions: Although this result showed that a higher sample number might produce different results as possible, current results in the Turkish sample indicated that these alleles of ADORA2A might not be related to LID in patients. abstract_id: PUBMED:15899244 Pharmacological validation of a mouse model of l-DOPA-induced dyskinesia. Dyskinesia (abnormal involuntary movements) is a common complication of l-DOPA pharmacotherapy in Parkinson's disease, and is thought to depend on abnormal cell signaling in the basal ganglia. Dopamine (DA) denervated mice can exhibit behavioral and cellular signs of dyskinesia when they are treated with l-DOPA, but the clinical relevance of this animal model remains to be established. In this study, we have examined the pharmacological profile of l-DOPA-induced abnormal involuntary movements (AIMs) in the mouse. C57BL/6 mice sustained unilateral injections of 6-hydroxydopamine (6-OHDA) in the striatum. The animals were treated chronically with daily doses of l-DOPA that were sufficient to ameliorate akinetic features without inducing overt signs of dyskinesia upon their first administration. In parallel, other groups of mice were treated with antiparkinsonian agents that do not induce dyskinesia when administered de novo, that is, the D2/D3 agonist ropinirole, and the adenosine A2a antagonist KW-6002. During 3 weeks of treatment, l-DOPA-treated mice developed AIMs affecting the head, trunk and forelimb on the side contralateral to the lesion. 
These movements were not expressed by animals treated with ropinirole or KW-6002 at doses that improved forelimb akinesia. The severity of l-DOPA-induced rodent AIMs was significantly reduced by the acute administration of compounds that have been shown to alleviate l-DOPA-induced dyskinesia both in parkinsonian patients and in rat and monkey models of Parkinson's disease (amantadine, -47%; buspirone, -46%; riluzole, -33%). The present data indicate that the mouse AIMs are indeed a functional equivalent of l-DOPA-induced dyskinesia. abstract_id: PUBMED:28577977 Zonisamide ameliorates levodopa-induced dyskinesia and reduces expression of striatal genes in Parkinson model rats. To investigate the difference in results according to the mode of levodopa administration and the effect of zonisamide (ZNS), we analyzed the mRNA expression of dopaminergic and non-dopaminergic receptors in the striatum of Parkinson model rats in relation to the development of levodopa-induced dyskinesia (LID). Unilateral Parkinson model rats were subdivided into 4 groups and treated as follows: no medication (group N), continuous levodopa infusion (group C), intermittent levodopa injection (group I), and intermittent levodopa and ZNS injection (group Z). Two weeks after the treatment, LID was observed in group I and Z, but less severe in group Z. The level of both D1 and D2 receptor mRNAs was elevated in groups I and Z, but only D2 receptor mRNA expression was elevated in group C. Adenosine A2A receptor mRNA showed increased expression only in group I. The level of endocannabinoid CB1 receptor mRNA was elevated in groups N, C, and I, but not in group Z. Intermittent injection of levodopa caused LID, in association with elevated expression of D1 and A2A receptors. ZNS ameliorated the development of LID and inhibited up-regulation of A2A and CB1 receptors. Modulation of these receptors may lead to therapeutic approaches for dyskinesia. abstract_id: PUBMED:29396609 Role of adenosine A2A receptors in motor control: relevance to Parkinson's disease and dyskinesia. Adenosine is an endogenous purine nucleoside that regulates several physiological functions, at the central and peripheral levels. Besides, adenosine has emerged as a major player in the regulation of motor behavior. In fact, adenosine receptors of the A2A subtype are highly enriched in the caudate-putamen, which is richly innervated by dopamine. Moreover, several studies in experimental animals have consistently demonstrated that the pharmacological antagonism of A2A receptors has a facilitatory influence on motor behavior. Taken together, these findings have envisaged A2A receptors as a promising target for symptomatic therapies aimed at ameliorating motor deficits. Accordingly, A2A receptor antagonists have been extensively studied as new agents for the treatment of Parkinson's disease (PD), the epitome of motor disorders. In this review, we provide an overview of the effects that adenosine A2A receptor antagonists elicit in rodent and primate experimental models of PD, with regard to the counteraction of motor deficits as well as to manifestation of dyskinesia and motor fluctuations. Moreover, we briefly present the results of clinical trials of A2A receptor antagonists in PD patients experiencing motor fluctuations, with particular regard to dyskinesia. 
Finally, we discuss the interaction between A2A receptor antagonists and serotonin receptor agonists, since combined administration of these drugs has recently emerged as a new potential therapeutic strategy in the treatment of dyskinesia. abstract_id: PUBMED:36608814 Serotonin 5-HT1A receptors and their interactions with adenosine A2A receptors in Parkinson's disease and dyskinesia. The dopamine neuronal loss that characterizes Parkinson's Disease (PD) is associated to changes in neurotransmitters, such as serotonin and adenosine, which contribute to the symptomatology of PD and to the onset of dyskinetic movements associated to levodopa treatment. The present review describes the role played by serotonin 5-HT1A receptors and the adenosine A2A receptors on dyskinetic movements induced by chronic levodopa in PD. The focus is on preclinical and clinical results showing the interaction between serotonin 5-HT1A receptors and other receptors such as 5-HT1B receptors and adenosine A2A receptors. 5-HT1A/1B receptor agonists and A2A receptor antagonists, administered in combination, contrast dyskinetic movements induced by chronic levodopa without impairing motor behaviour, suggesting that this drug combination might be a useful therapeutic approach for counteracting the PD motor deficits and dyskinesia associated with chronic levodopa treatment. This article is part of the Special Issue on "The receptor-receptor interaction as a new target for therapy". abstract_id: PUBMED:23339054 Caffeine consumption and risk of dyskinesia in CALM-PD. Background: Adenosine A2A receptor antagonists reduce or prevent the development of dyskinesia in animal models of levodopa-induced dyskinesia. Methods: We examined the association between self-reported intake of the A2A receptor antagonist caffeine and time to dyskinesia in the Comparison of the Agonist Pramipexole with Levodopa on Motor Complications of Parkinson's Disease (CALM-PD) and CALM Cohort extension studies, using a Cox proportional hazards model adjusting for age, baseline Parkinson's severity, site, and initial treatment with pramipexole or levodopa. Results: For subjects who consumed >12 ounces of coffee/day, the adjusted hazard ratio for the development of dyskinesia was 0.61 (95% CI, 0.37-1.01) compared with subjects who consumed <4 ounces/day. For subjects who consumed between 4 and 12 ounces/day, the adjusted hazard ratio was 0.73 (95% CI, 0.46-1.15; test for trend, P = .05). Conclusions: These results support the possibility that caffeine may reduce the likelihood of developing dyskinesia. abstract_id: PUBMED:17192438 Forebrain adenosine A2A receptors contribute to L-3,4-dihydroxyphenylalanine-induced dyskinesia in hemiparkinsonian mice. Adenosine A2A receptor antagonists provide a promising nondopaminergic approach to the treatment of Parkinson's disease (PD). Initial clinical trials of A2A antagonists targeted PD patients who had already developed treatment complications known as L-3,4-dihydroxyphenylalanine (L-DOPA)-induced dyskinesia (LID) in an effort to improve symptoms while reducing existing LID. The goal of this study is to explore the effect of A2A antagonists and targeted A2A receptor depletion on the actual development of sensitized responses to L-DOPA in mouse models of LID in PD.
Hemiparkinsonian mice (unilaterally lesioned with 6-OHDA) were treated daily for 3 weeks with a low dose of L-DOPA (2 mg/kg) preceded by a low dose of selective A2A antagonist (KW-6002 [(E)-1,3-diethyl-8-(3,4-dimethoxystyryl)-7-methyl-3,7-dihydro-1H-purine-2,6-dione] at 0.03 or 0.3 mg/kg, or SCH58261 [5-amino-7-(2-phenylethyl)-2-(2-furyl)-pyrazolo[4,3-e]-1,2,4-triazolo[1,5-c]pyrimidine] at 0.03 mg/kg) or vehicle intraperitoneally. In control mice, contralateral rotational responses to daily L-DOPA gradually increased over the initial week before reaching a persistent maximum. Both A2A antagonists inhibited the development of sensitized contralateral turning, with KW-6002 pretreatment reducing the sensitized rotational responses by up to threefold. The development of abnormal involuntary movements (a measure of LID) as well as rotational responses was attenuated by the postnatal depletion of forebrain A2A receptors in conditional (Cre/loxP system) knock-out mice. These pharmacological and genetic data provide evidence that striatal A2A receptors play an important role in the neuroplasticity underlying behavioral sensitization to L-DOPA, supporting consideration of early adjunctive therapy with an A2A antagonist to reduce the risk of LID in PD. Answer: Yes, there is evidence suggesting a role for ADORA2A polymorphisms in levodopa-induced dyskinesia (LID) in Parkinson's disease (PD) patients. Studies have investigated the association between ADORA2A polymorphisms and the adverse effects of levodopa therapy, which is the first-line treatment for PD. Levodopa use is associated with motor fluctuations and dyskinesias, and the adenosine A2A receptor (A2AR) has been identified as a therapeutic target for PD (PUBMED:25872644). Research has shown that the expression of adenosine A2A receptors is increased in PD patients with dyskinesias, and A2A receptor antagonists have been found to reduce the liability of dyskinesias in PD models (PUBMED:25175963). Furthermore, a study that included 430 PD patients using levodopa found that the CC genotype of the ADORA2A rs2298383 single nucleotide polymorphism (SNP) was associated with LID after adjustment (PUBMED:32294749). This suggests that genetic variations in ADORA2A may influence the development of LID. However, another study with 172 patients did not find a statistically significant association between ADORA2A polymorphisms (rs2298383, rs35060421, and rs5751876) and LID, although the authors noted that a larger sample size might yield different results (PUBMED:35532631). Despite this, the collective evidence from multiple studies supports the notion that ADORA2A polymorphisms could play a role in the development of LID in PD patients, and further research with larger sample sizes and more diverse populations may help to clarify the extent of this association.
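The diagnostic prediction model described above (PUBMED:32294749) is a logistic regression whose discrimination is summarised by an area under the ROC curve. The snippet below reproduces that kind of workflow on simulated data; the predictor set mirrors the variables named in the abstract (age of PD onset, disease duration, initial motor symptom, dopamine-agonist use), but the data, coefficients and resulting AUC are entirely invented and are not the published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 430  # cohort size taken from the abstract; the data below are simulated

# Hypothetical predictors: age at PD onset, disease duration (years),
# tremor-dominant onset (0/1), dopamine-agonist use (0/1)
age_onset = rng.normal(60, 10, n)
duration = rng.gamma(4, 2, n)
tremor_onset = rng.integers(0, 2, n)
agonist = rng.integers(0, 2, n)
X = np.column_stack([age_onset, duration, tremor_onset, agonist])

# Simulate dyskinesia so that younger onset and longer duration raise the risk
logit = -2.0 - 0.05 * (age_onset - 60) + 0.25 * duration + 0.5 * agonist
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC on held-out data: {auc:.2f}")
print("Odds ratios per unit change:", np.round(np.exp(model.coef_[0]), 2))
```

An AUC around 0.8, as reported in the study, indicates that the model separates patients who develop dyskinesia from those who do not considerably better than chance.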
Instruction: Is screening for fetal anomalies reliable in HIV-infected pregnant women? Abstracts: abstract_id: PUBMED:18784463 Is screening for fetal anomalies reliable in HIV-infected pregnant women? A multicentre study. Objective: To assess the impact of HIV infection on the reliability of the first-trimester screening for Down syndrome, using free beta-human chorionic gonadotrophin, pregnancy-associated plasma protein-A and fetal nuchal translucency, and of the second-trimester screening for neural tube defects, using alpha-fetoprotein. Patients And Methods: Multicentre study comparing the multiples of the median of markers for Down syndrome and neural tube defect screening among 214 HIV-infected pregnant women and 856 HIV-negative controls undergoing a first-trimester Down syndrome screening test, and 209 HIV-positive women and 836 HIV-negative controls with a risk evaluation for neural tube defect. The influence of treatment, chronic hepatitis and HIV disease characteristics were also evaluated. Results: Multiples of the median medians for pregnancy-associated plasma protein-A and beta-human chorionic gonadotrophin were lower in HIV-positive women than controls (0.88 vs. 1.05 and 0.84 vs. 1.09, respectively; P < 0.005), but these differences had no impact on risk estimation; no differences were observed for the other markers. No association was found between HIV disease characteristics, antiretroviral treatment use at the time of screening or chronic hepatitis and marker levels. Conclusion: Screening for Down syndrome during the first trimester and for neural tube defect during the second trimester is accurate for HIV-infected women and should be offered, similar to HIV-negative women. abstract_id: PUBMED:38036918 Prenatal ultrasound screening and pregnancy outcomes in HIV-positive women in Germany: results from a retrospective single-center study at the Charité-Universitätsmedizin Berlin. Objectives: The aim of this study was to investigate the rate of Mother-to-child-transmission (MTCT) in women living with HIV (WLWH) in a tertiary care institution. Furthermore, we aimed to assess prenatal ultrasound screening for fetal anomalies and outcomes in high-risk pregnancies due to maternal HIV infection. Methods: In this single-center study, retrospective data related to pregnancy and childbirth were collected from 420 WLWH. All data were evaluated descriptively. Results: From January 2014 to December 2020, a total number of 420 pregnant WLWH delivered 428 newborns. 415 (98.8%) were receiving antiretroviral therapy (ART) and 88.8% had a viral load of < 50 cop/ml prior delivery. 46 (11%) of the newborns were born prematurely. Low birth weight < 2500 g occurred in 38 (9.1%) of the children. 219 (52.1%) caesarean sections (CS) were performed. The most frequent indication for an elective CS was a previous CS (70.2%). 8 severe malformations were detected using first and second trimester ultrasound. In one child, MTCT was detected postpartum, resulting in an HIV transmission rate of 0.2% in the presented cohort. Conclusions: The low rate of vertical HIV-transmission in our cohort of 0.2% is the result of interdisciplinary prenatal care and high experience of healthcare providers in treatment of WLWH. Despite high ART coverage and adherence, good maternal immune system and very low vertical HIV transmission rate, maternal HIV infection remains a challenge in obstetric care.
First and second ultrasound screening should be a part of prenatal care for HIV-infected women and should also be offered to HIV-negative women. A reduction of the rate of unnecessary elective caesarean deliveries in WLWH is necessary to reduce complications in subsequent pregnancies. abstract_id: PUBMED:24194633 Prenatal ultrasound screening for fetal anomalies and outcomes in high-risk pregnancies due to maternal HIV infection: a retrospective study. Objective: To assess the prevalence of prenatal screening and of adverse outcome in high-risk pregnancies due to maternal HIV infection. Study Design: The prevalence of prenatal screening in 330 pregnancies of HIV-positive women attending the department for prenatal screening and/or during labour between January 1, 2002 and December 31, 2012, was recorded. Screening results were compared with the postnatal outcome and maternal morbidity, and mother-to-child transmission (MTCT) was evaluated. Results: One hundred of 330 women (30.5%) had an early anomaly scan, 252 (74.5%) had a detailed scan at 20-22 weeks, 18 (5.5%) had a detailed scan prior to birth, and three (0.9%) had an amniocentesis. In seven cases (2.12%), a fetal anomaly was detected prenatally and confirmed postnatally, while in eight (2.42%) an anomaly was only detected postnatally, even though a prenatal scan was performed. There were no anomalies in the unscreened group. MTCT occurred in three cases (0.9%) and seven fetal and neonatal deaths (2.1%) were reported. Conclusion: The overall prevalence of prenatal ultrasound screening in our cohort is 74.5%, but often the opportunity for prenatal ultrasonography in the first trimester is missed. In general, the aim should be to offer prenatal ultrasonography in the first trimester in all pregnancies. This allows early reassurance or if fetal disease is suspected, further steps can be taken. abstract_id: PUBMED:23721372 Birth defects in a national cohort of pregnant women with HIV infection in Italy, 2001-2011. Objective: We used data from a national study of pregnant women with HIV to evaluate the prevalence of congenital abnormalities in newborns from women with HIV infection. Design: Observational study. Setting: University and hospital clinics. Population: Pregnant women with HIV exposed to antiretroviral treatment at any time during pregnancy. Methods: The total prevalence of birth defects was assessed on live births, stillbirths, and elective terminations for fetal anomaly. The associations between potentially predictive variables and the occurrence of birth defects were expressed as odds ratios (ORs) with 95% confidence intervals (95% CIs) for exposed versus unexposed cases, calculated in univariate and multivariate logistic regression analyses. Main Outcome Measures: Birth defects, defined according to the Antiretroviral Pregnancy Registry criteria. Results: A total of 1257 pregnancies with exposure at any time to antiretroviral therapy were evaluated. Forty-two cases with major defects were observed. The total prevalence was 3.2% (95% CI 1.9-4.5) for exposure to any antiretroviral drug during the first trimester (23 cases with defects) and 3.4% (95% CI 1.9-4.9) for no antiretroviral exposure during the first trimester (19 cases). 
No associations were found between major birth defects and first-trimester exposure to any antiretroviral treatment (OR 0.94, 95% CI 0.51-1.75), main drug classes (nucleoside reverse transcriptase inhibitors, OR 0.95, 95% CI 0.51-1.76; non-nucleoside reverse transcriptase inhibitors, OR 1.20, 95% CI 0.56-2.55; protease inhibitors, OR 0.92, 95% CI 0.43-1.95), and individual drugs, including efavirenz (prevalence for efavirenz, 2.5%). Conclusions: This study adds further support to the assumption that first-trimester exposure to antiretroviral treatment does not increase the risk of congenital abnormalities. abstract_id: PUBMED:34509197 Weekly 17 alpha-hydroxyprogesterone caproate to prevent preterm birth among women living with HIV: a randomised, double-blind, placebo-controlled trial. Background: Women with HIV face an increased risk of preterm birth. 17 alpha-hydroxyprogesterone caproate (17P) has been shown in some trials to reduce early delivery among women with a history of spontaneous preterm birth. We investigated whether 17P would reduce this risk among women with HIV. Methods: We did a randomised, double-blind, placebo-controlled trial in pregnant women with HIV at the University Teaching Hospital and Kamwala District Health Centre in Lusaka, Zambia. Eligible patients were women aged 18 years or older with confirmed HIV-1 infection, viable intrauterine singleton pregnancy at less than 24 weeks of gestation, and were receiving or intending to commence antiretroviral therapy during pregnancy. Exclusion criteria were major uterine or fetal anomaly; planned or in situ cervical cerclage; evidence of threatened miscarriage, preterm labour, or ruptured membranes at screening; medical contraindication to 17P; previous participation in the trial; or history of spontaneous preterm birth. Eligible participants provided written informed consent and were randomly assigned (1:1) to receive 250 mg intramuscular 17P or placebo once per week, starting between 16 and 24 weeks of gestation until delivery, stillbirth, or reaching term (37 weeks). Participants and study staff were masked to assignment, except for pharmacy staff who did random assignment and prepared injections but did not interact with participants. The primary outcome was a composite of delivery before 37 weeks or stillbirth at any gestational age. Patients attended weekly visits for study drug injections and antenatal care. We estimated the absolute and relative difference in risk of the primary outcome and safety events between treatment groups by intention to treat. This trial is registered with ClinicalTrials.gov, NCT03297216, and is complete. Findings: Between Feb 7, 2018 and Jan 13, 2020, we assessed 1042 women for inclusion into the study. 242 women were excluded after additional assessments, and 800 eligible patients were enrolled and randomly assigned to receive intramuscular 17P (n=399) or placebo (n=401). Baseline characteristics were similar between groups. Adherence to study drug injections was 98% in both groups, no patients were lost to follow-up, and the final post-partum visit was on Aug 6, 2020. 36 (9%) of 399 participants assigned to 17P had preterm birth or stillbirth, compared with 36 (9%) of 401 patients assigned to placebo (risk difference 0·1, 95% CI -3·9 to 4·0; relative risk 1·0, 95% CI 0·6 to 1·6; p=0·98). Intervention-related adverse events were reported by 140 (18%) of 800 participants and occurred in similar proportions in both randomisation groups. No serious adverse events were reported. 
Interpretation: Although 17P seems to be safe and acceptable to participants, available data do not support the use of the drug to prevent preterm birth among women whose risk derives solely from HIV infection. The low risk of preterm birth in both randomisation groups warrants further investigation. Funding: US National Institutes of Health and the Bill and Melinda Gates Foundation. abstract_id: PUBMED:27319948 Amniocentesis and chorionic villus sampling in HIV-infected pregnant women: a multicentre case series. Objectives: To assess in pregnant women with HIV the rates of amniocentesis and chorionic villus sampling (CVS), and the outcomes associated with such procedures. Design: Observational study. Data from the Italian National Program on Surveillance on Antiretroviral Treatment in Pregnancy were used. Setting: University and hospital clinics. Population: Pregnant women with HIV. Methods: Temporal trends were analysed by analysis of variance and by the Chi-square test for trend. Quantitative variables were compared by Student's t-test and categorical data by the Chi-square test, with odds ratios and 95% confidence intervals calculated. Main Outcome Measures: Rate of invasive testing, intrauterine death, HIV transmission. Results: Between 2001 and 2015, among 2065 pregnancies in women with HIV, 113 (5.5%) had invasive tests performed. The procedures were conducted under antiretroviral treatment in 99 cases (87.6%), with a significant increase over time in the proportion of tests performed under highly active antiretroviral therapy (HAART) (100% in 2011-2015). Three intrauterine deaths were observed (2.6%), and 14 pregnancies were terminated because of fetal anomalies. Among 96 live newborns, eight had no information available on HIV status. Among the remaining 88 cases with either amniocentesis (n = 75), CVS (n = 12), or both (n = 1), two HIV transmissions occurred (2.3%). No HIV transmission occurred among the women who were on HAART at the time of invasive testing, and none after 2005. Conclusions: The findings reinforce the assumption that invasive prenatal testing does not increase the risk of HIV vertical transmission among pregnant women under suppressive antiretroviral treatment. Tweetable Abstract: No HIV transmission occurred among women who underwent amniocentesis or CVS under effective anti-HIV regimens. abstract_id: PUBMED:19165088 Antiretroviral therapy and congenital abnormalities in infants born to HIV-infected women in the UK and Ireland, 1990-2007. Objective: To explore the rate of reported congenital abnormalities in infants exposed to antiretroviral therapy in utero. Design: Comprehensive national surveillance study in the UK and Ireland. Methods: Births to diagnosed HIV-infected women are reported to the National Study of HIV in Pregnancy and Childhood. Infants born between 1990 and 2007 were included. Results: The rate of reported major and minor congenital abnormality was 2.8% (232/8242) overall, and there was no significant difference by timing of ART exposure: 2.8% (14/498) in unexposed infants, 2.7% (147/5427) following second or third trimester exposure, and 3.1% (53/1708) following first trimester exposure (P = 0.690). There was no difference in abnormality rates by class of ART exposure in the first trimester (P = 0.363), and no category of abnormality was significantly associated with timing of ART, although numbers in these groups were small. 
There was no increased risk of abnormalities in infants exposed to efavirenz (P = 0.672) or didanosine (P = 0.816) in the first trimester. Conclusion: These findings, based on a large, national, unselected population provide further reassurance that ART in utero does not pose a major risk of fetal anomaly. abstract_id: PUBMED:9003948 Maternal and fetal outcomes in hyperemesis gravidarum. Objective: This study sought to evaluate maternal characteristics and pregnancy outcomes among women with hyperemesis gravidarum. Methods: We performed a retrospective analysis of pregnancy records of obstetric admissions during a 6-year period. Women treated as out-patients for hyperemesis were also identified. Hyperemesis was defined as excessive nausea and vomiting resulting in dehydration, extensive medical therapy, and/or hospital admission. Statistical analysis was by t-test and chi square. Results: We identified 193 women (1.5%) who developed hyperemesis among 13,053 women. Racial status, marital status, age, and gravidity were similar between the hyperemesis patients and the general population. However, there were less women with hyperemesis who were para 3 or greater. Forty-six women (24%) required hospitalization for hyperemesis, mean hospital stay 1.8 days, range 1-10 days. One patient required parenteral nutrition, two had yeast esophagitis, none had HIV infection, psychiatric pathology or thyroid disease. Pregnancy outcomes between hyperemesis patients and the general population were similar for mean birth weight, mean gestational age, deliveries less than 37 weeks, Apgar scores, perinatal mortality or incidence of fetal anomalies. Our incidence of hyperemesis (1.5%) is similar to that of other published reports. Conclusion: Women with hyperemesis have similar demographic characteristics to the general obstetric population, and have similar obstetric outcomes. abstract_id: PUBMED:37167996 Efficacy and safety of three antiretroviral therapy regimens started in pregnancy up to 50 weeks post partum: a multicentre, open-label, randomised, controlled, phase 3 trial. Background: Drugs taken during pregnancy can affect maternal and child health outcomes, but few studies have compared the safety and virological efficacy of different antiretroviral therapy (ART) regimens. We report the primary safety outcomes from enrolment up to 50 weeks post partum and a secondary virological efficacy outcome at 50 weeks post partum of three commonly used ART regimens for HIV-1. Methods: In this multicentre, open-label, randomised, controlled, phase 3 trial, we enrolled pregnant women aged 18 years or older with confirmed HIV-1 infection at 14-28 weeks of gestation. Women were enrolled at 22 clinical research sites in nine countries (Botswana, Brazil, India, South Africa, Tanzania, Thailand, Uganda, the USA, and Zimbabwe). Participants were randomly assigned (1:1:1) to one of three oral regimens: dolutegravir, emtricitabine, and tenofovir alafenamide; dolutegravir, emtricitabine, and tenofovir disoproxil fumarate; or efavirenz, emtricitabine, and tenofovir disoproxil fumarate. Up to 14 days of antepartum ART before enrolment was permitted. Women with known multiple gestation, fetal anomalies, acute significant illness, transaminases more than 2·5 times the upper limit of normal, or estimated creatinine clearance of less than 60 mL/min were excluded. 
Primary safety analyses were pairwise comparisons between ART regimens of the proportion of maternal and infant adverse events of grade 3 or higher up to 50 weeks post partum. Secondary efficacy analyses at 50 weeks post partum included a comparison of the proportion of women with plasma HIV-1 RNA of less than 200 copies per mL in the combined dolutegravir-containing groups versus the efavirenz-containing group. Analyses were done in the intention-to-treat population, which included all randomly assigned participants with available data. This trial was registered with ClinicalTrials.gov, NCT03048422. Findings: Between Jan 19, 2018, and Feb 8, 2019, we randomly assigned 643 pregnant women to the dolutegravir, emtricitabine, and tenofovir alafenamide group (n=217), the dolutegravir, emtricitabine, and tenofovir disoproxil fumarate group (n=215), and the efavirenz, emtricitabine, and tenofovir disoproxil fumarate group (n=211). At enrolment, median gestational age was 21·9 weeks (IQR 18·3-25·3), median CD4 count was 466 cells per μL (308-624), and median HIV-1 RNA was 903 copies per mL (152-5183). 607 (94%) women and 566 (92%) of 617 liveborn infants completed the study. Up to the week 50 post-partum visit, the estimated probability of experiencing an adverse event of grade 3 or higher was 25% in the dolutegravir, emtricitabine, and tenofovir alafenamide group; 31% in the dolutegravir, emtricitabine, and tenofovir disoproxil fumarate group; and 28% in the efavirenz, emtricitabine, and tenofovir disoproxil fumarate group (no significant difference between groups). Among infants, the estimated probability of experiencing at least one adverse event of grade 3 or higher by postnatal week 50 was 28% overall, with small and non-statistically significant differences between groups. By postnatal week 50, 14 infants whose mothers were in the efavirenz-containing group (7%) died, compared with six in the combined dolutegravir groups (1%). 573 (89%) women had HIV-1 RNA data available at 50 weeks post partum: 366 (96%) in the dolutegravir-containing groups and 186 (96%) in the efavirenz-containing group had HIV-1 RNA less than 200 copies per mL, with no significant difference between groups. Interpretation: Safety and efficacy data during pregnancy and up to 50 weeks post partum support the current recommendation of dolutegravir-based ART (particularly in combination with emtricitabine and tenofovir alafenamide) rather than efavirenz, emtricitabine, and tenofovir disoproxil fumarate, when started in pregnancy. Funding: National Institute of Allergy and Infectious Diseases, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, and the National Institute of Mental Health. abstract_id: PUBMED:33341441 Dolutegravir in pregnant mice is associated with increased rates of fetal defects at therapeutic but not at supratherapeutic levels. Background: Dolutegravir (DTG) is a preferred regimen for all people with HIV including pregnant women, but its effects on the fetus are not fully understood. Periconceptional exposure to DTG has been associated with increased rates of neural tube defects (NTDs), although it is unknown whether this is a causal relationship. This has led to uncertainty around the use of DTG in women of reproductive potential. 
Methods: Pregnant C57BL/6J mice were randomly allocated to control (water), 1x-DTG (2.5 mg/kg-peak plasma concentration ~3000 ng/ml - therapeutic level), or 5x-DTG (12.5 mg/kg-peak plasma concentration ~12,000 ng/ml - supratherapeutic level), once daily from gestational day 0.5 until sacrifice. DTG was administered with 50 mg/kg tenofovir+33.3 mg/kg emtricitabine. Fetal phenotypes were determined, and maternal and fetal folate levels were quantified by mass-spectrometry. Findings: 352 litters (91 control, 150 1x-DTG, 111 5x-DTG) yielding 2776 fetuses (747 control, 1174 1x-DTG, 855 5x-DTG) were assessed. Litter size and viability rates were similar between groups. Fetal and placenta weights were lower in the 1x-DTG vs. control. Placental weight was higher in the 5x-DTG vs. control. Five NTDs were observed, all in the 1x-DTG group. Fetal defects, including microphthalmia, severe edema, and vascular/bleeding defects were more frequent in the 1x-DTG group. In contrast, defect rates in the 5x-DTG were similar to control. Fetal folate levels were similar between control and 1x-DTG, but were significantly higher in the 5x-DTG group. Interpretation: Our findings support a causal relationship of DTG at therapeutic doses with increased risk for fetal defects, including NTDs at a rate that is similar that reported in the Tsepamo study for women exposed to DTG-based ART from conception. The non-monotonic dose-response relationship between DTG and fetal anomalies could explain the previous lack of fetal toxicity findings from pre-clinical DTG studies. The fetal folate levels suggest that DTG is unlikely to be an inhibitor of folate uptake. Funding: This project has been funded with Federal funds from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Department of Health and Human Services, under Contract No. HHSN275201800001I. Answer: Yes, screening for fetal anomalies is considered reliable in HIV-infected pregnant women. A multicentre study found that the screening for Down syndrome during the first trimester and for neural tube defects during the second trimester is accurate for HIV-infected women and should be offered, similar to HIV-negative women (PUBMED:18784463). Additionally, a retrospective single-center study at the Charité-Universitätsmedizin Berlin indicated that prenatal ultrasound screening for fetal anomalies should be a part of prenatal care for HIV-infected women (PUBMED:38036918). Another retrospective study reinforced the importance of offering prenatal ultrasonography in the first trimester in all pregnancies, including those of HIV-positive women, to allow early reassurance or further steps if fetal disease is suspected (PUBMED:24194633). Furthermore, data from a national study of pregnant women with HIV in Italy suggested that first-trimester exposure to antiretroviral treatment does not increase the risk of congenital abnormalities, adding support to the safety of screening in this population (PUBMED:23721372). The findings from an observational study on amniocentesis and chorionic villus sampling in HIV-infected pregnant women also support the assumption that invasive prenatal testing does not increase the risk of HIV vertical transmission among pregnant women under suppressive antiretroviral treatment (PUBMED:27319948). In summary, the evidence suggests that screening for fetal anomalies is reliable and should be offered to HIV-infected pregnant women, similar to their HIV-negative counterparts.
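Several of the screening abstracts above report marker levels as multiples of the median (MoM), i.e. each measured value divided by the gestational-age-specific median observed in unaffected pregnancies, which is how figures such as 0.88 versus 1.05 MoM for PAPP-A arise. The short sketch below shows that conversion; the reference medians and measured values are made-up numbers used only to illustrate the arithmetic.

```python
import numpy as np

def multiples_of_median(values, gest_weeks, reference_medians):
    """Convert raw marker values to multiples of the median (MoM) using
    gestational-age-specific reference medians from unaffected pregnancies."""
    med = np.array([reference_medians[w] for w in gest_weeks], dtype=float)
    return np.asarray(values, dtype=float) / med

# Hypothetical reference medians for PAPP-A (mIU/L) by completed week of gestation
papp_a_reference = {11: 1.2, 12: 1.9, 13: 2.8}

papp_a = [1.6, 2.1, 1.1]          # measured values (hypothetical)
weeks = [11, 12, 13]
mom = multiples_of_median(papp_a, weeks, papp_a_reference)
print(np.round(mom, 2))           # e.g. an MoM of 0.88 means 12% below the reference median

# Group comparison as in the screening study: median MoM per group (hypothetical data)
hiv_pos_mom = np.array([0.7, 0.9, 1.0, 0.8])
controls_mom = np.array([1.1, 1.0, 0.9, 1.2])
print("median MoM, HIV-positive:", np.median(hiv_pos_mom))
print("median MoM, controls:", np.median(controls_mom))
```

A lower median MoM in one group, as found for PAPP-A and beta-hCG in HIV-positive women, does not by itself change individual risk estimates unless it shifts the combined risk algorithm, which is what the study tested.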
Instruction: Is it useful to increase dialysate flow rate to improve the delivered Kt? Abstracts: abstract_id: PUBMED:25884763 Is it useful to increase dialysate flow rate to improve the delivered Kt? Background: Increasing dialysate flow rates (Qd) from 500 to 800 ml/min has been recommended to increase dialysis efficiency. A few publications show that increasing Qd no longer led to an increase in mass transfer area coefficient (KoA) or Kt/V measurement. Our objectives were: 1) Studying the effect in Kt of using a Qd of 400, 500, 700 ml/min and autoflow (AF) with different modern dialysers. 2) Comparing the effect on Kt of water consumption vs. dialysis time to obtain an individual objective of Kt (Ktobj) adjusted to body surface. Methods: This is a prospective single-centre study with crossover design. Thirty-one patients were studied and six sessions with each Qd were performed. HD parameters were acquired directly from the monitor display: effective blood flow rate (Qbe), Qd, effective dialysis time (Te) and measured by conductivity monitoring, final Kt. Results: We studied a total of 637 sessions: 178 with 500 ml/min, 173 with 700 ml/min, 160 with AF and 126 with 400 ml/min. Kt rose a 4% comparing 400 with 500 ml/min, and 3% comparing 500 with 700 ml/min. Ktobj was reached in 82.4, 88.2, 88.2 and 94.1% of patients with 400, AF, 500 and 700 ml/min, respectively. We did not find statistical differences between dialysers. The difference between programmed time and Te was 8' when Qd was 400 and 500 ml/min and 8.8' with Qd = 700 ml/min. Calculating an average time loss of eight minutes/session, we can say that a patient loses 24' weekly, 312' monthly and 62.4 hours yearly. Identical Kt could be obtained with Qd of 400 and 500 ml/min, increasing dialysis time 9.1' and saving 20% of dialysate. Conclusions: Our data suggest that increasing Qd over 400 ml/min for these dialysers offers a limited benefit. Increasing time is a better alternative with demonstrated benefits to the patient and also less water consumption. abstract_id: PUBMED:21799145 Dialysate flow rate and delivered Kt/Vurea for dialyzers with enhanced dialysate flow distribution. Background And Objectives: Previous in vitro and clinical studies showed that the urea mass transfer-area coefficient (K(o)A) increased with increasing dialysate flow rate. This observation led to increased dialysate flow rates in an attempt to maximize the delivered dose of dialysis (Kt/V(urea)). Recently, we showed that urea K(o)A was independent of dialysate flow rate in the range 500 to 800 ml/min for dialyzers incorporating features to enhance dialysate flow distribution, suggesting that increasing the dialysate flow rate with such dialyzers would not significantly increase delivered Kt/V(urea). Design, Setting, Participants, & Measurements: We performed a multi-center randomized clinical trial to compare delivered Kt/V(urea) at dialysate flow rates of 600 and 800 ml/min in 42 patients. All other aspects of the dialysis prescription, including treatment time, blood flow rate, and dialyzer, were kept constant for a given patient. Delivered single-pool and equilibrated Kt/V(urea) were calculated from pre- and postdialysis plasma urea concentrations, and ionic Kt/V was determined from serial measurements of ionic dialysance made throughout each treatment. 
Results: Delivered Kt/V(urea) differed between centers; however, the difference in Kt/V(urea) between dialysate flow rates of 800 and 600 ml/min was not significant by any measure (95% confidence intervals of -0.064 to 0.024 for single-pool Kt/V(urea), -0.051 to 0.023 for equilibrated Kt/V(urea), and -0.029 to 0.099 for ionic Kt/V). Conclusions: These data suggest that increasing the dialysate flow rate beyond 600 ml/min for these dialyzers offers no benefit in terms of delivered Kt/V(urea). abstract_id: PUBMED:10620551 In vivo effects of dialysate flow rate on Kt/V in maintenance hemodialysis patients. It is generally assumed that hemodialysis adequacy is only minimally affected by increasing the dialysate flow rate (Qd). Recent in vitro studies showed that dialyzer urea clearance (Kd(urea)) may increase substantially more than expected in response to an increase in Qd. Because these studies implied that dialysis efficacy may benefit from greater Qds, we studied in vivo the effects of various Qds on the delivered dose of dialysis in 23 maintenance hemodialysis (MHD) patients. Hemodialysis was performed at Qds of 300, 500, and 800 mL/min for at least 3 weeks each, whereas specific dialysis prescriptions (treatment time, blood flow rate [Qb], ultrafiltration volume, and type and size of dialyzer) were kept constant. Delivered dose of dialysis, assessed by single-pool Kt/V (Kt/V(sp)) and double-pool Kt/V (Kt/V(dp)), was measured at least three times for each Qd (218 measurements). Mean ± SEM Kt/V(sp) was 1.19 ± 0.03 at Qd of 300 mL/min, 1.32 ± 0.04 at 500 mL/min, and 1.45 ± 0.04 at 800 mL/min. The relative gains in Kt/V(sp) for increasing Qd from 300 to 500 mL/min and 500 to 800 mL/min were 11.7% ± 8.7% and 9.9% ± 5.1%, respectively. Kt/V(dp) increased at a similar percentage (11.2% ± 8.9% and 10.3% ± 5.1%, respectively). The observed gain in urea clearance by increasing Qd from 500 to 800 mL/min was significantly greater than the increase in Kd(urea) predicted from mathematical modeling (5.7% ± 0.4%; P = 0.0008). Removal ratios for creatinine and the high-molecular-weight marker, beta(2)-microglobulin, were not affected by increasing Qd from 500 to 800 mL/min. The proportion of patients not achieving adequacy (Kt/V(sp) ≥ 1.2) was reduced from 56% at Qd of 300 mL/min to 30% at 500 mL/min and further to 13% at 800 mL/min. It is concluded that increasing Qd from 500 to 800 mL/min is associated with a significant increase in Kt/V. Hemodialysis with Qd of 800 mL/min should be considered in selected patients not achieving adequacy despite extended treatment times and optimized Qbs. abstract_id: PUBMED:26565938 What is the optimum dialysate flow in post-dilution online haemodiafiltration? Introduction: In post-dilution online hemodiafiltration (OL-HDF), the only recommendation concerning the dialysate, or dialysis fluid, refers to its purity. No study has yet determined whether using a high dialysate flow (Qd) is useful for increasing Kt or ultrafiltration-infusion volume. Objective: Study the influence of Qd on Kt and on infusion volume in OL-HDF. Material And Methods: This was a prospective crossover study. There were 37 patients to whom 6 sessions of OL-HDF were administered at 3 different Qds: 500, 600 and 700 ml/min. A 5008® monitor was used for the dialysis in 21 patients, while an AK-200® was used in 17. The dialysers used were: 20 with FX 800® and 17 with Polyflux-210®. The rest of the parameters were kept constant.
Monitor data collected were effective blood flow, effective dialysis time, final Kt and infused volume. Results: We found that using a Qd of 600 or 700 ml/min increased Kt by 1.7% compared to using a Qd of 500 ml/min. Differences in infusion volume were not significant. Increasing Qd from 500 ml/min to 600 and 700 ml/min increased dialysate consumption by 20% and 40%, respectively. Conclusions: With the monitors and dialysers currently used in OL-HDF, a Qd higher than 500 ml/min is unhelpful for increasing the efficacy of Kt or infusion volume. Consequently, using a high Qd wastes water, a truly important resource both from the ecological and economic points of view. abstract_id: PUBMED:30048973 Does the Blood Pump Flow Rate have an Impact on the Dialysis Dose During Low Dialysate Flow Rate Hemodialysis? We conducted a prospective study to assess the impact of the blood pump flow rate (BFR) on the dialysis dose with a low dialysate flow rate. Seventeen patients were observed for 3 short hemodialysis sessions in which only the BFR was altered (300, 350 and 450 mL/min). Kt/V urea increased from 0.54 ± 0.10 to 0.58 ± 0.08 and 0.61 ± 0.09 for BFR of 300, 400 and 450 mL/min. For the same BFR variations, the reduction ratio (RR) of β2-microglobulin increased from 0.40 ± 0.07 to 0.45 ± 0.06 and 0.48 ± 0.06 and the RR phosphorus increased from 0.46 ± 0.1 to 0.48 ± 0.08 and 0.49 ± 0.07. In bivariate analysis accounting for repeated observations, an increasing BFR resulted in an increase in spKt/V (0.048 per 100 mL/min increment in BFR [p < 0.05, 95% CI (0.03-0.06)]) and an increase in the RR β2m (5% per 100 mL/min increment in BFR [p < 0.05, 95% CI (0.03-0.07)]). An increasing BFR with low dialysate improves the removal of urea and β2m but with a potentially limited clinical impact. abstract_id: PUBMED:22458394 A model to predict optimal dialysate flow. Diffusive clearance depends on blood (Qb) and dialysate flow (Qd) rates and the overall mass transfer area coefficient (KoA) of the dialyzer. In this article we describe a model to predict an appropriate AutoFlow (AF) factor (AF factor = Ratio Qd/Qb), that is able to provide adequate Kt/V for hemodialysis patients (HDP), while consuming lower amounts of dialysate, water and energy during the treatment. We studied in vivo the effects of three various Qd on the delivered dose of dialysis in 33 stable HDP. Hemodialysis was performed at Qd of 700 mL/min, 500 mL/min, and with AF, whereas specific dialysis prescriptions (treatment time, blood flow rate [Qb], and type and size of dialyzer) were kept constant. The results showed that increasing the dialysate flow rate more than the model of AF predicted had a small effect on the delivered dose of dialysis. The Kt/V (mean ± SD) was 1.52 ± 0.16 at Qd 700, 1.50 ± 0.16 at Qd 500, and 1.49 ± 0.15 with AF. The use of the AF function leads to a significant saving of dialysate fluid. The model predicts the appropriate AF factor that automatically adjusts the dialysate flow rate according to the effective blood flow rate of the patient to achieve an appreciable increase in dialysis dose at the lowest additional cost. abstract_id: PUBMED:28455862 Effect of Low Dialysate Flow Rate on Hemodialyzer Mass Transfer Area Coefficients for Urea and Creatinine. Recent work has shown that the dialyzer mass transfer area coefficient (KoA) for urea increases when the dialysate flow rate is increased from 500 to 800 mL/min.
In this study we determined urea and creatinine clearances for two commercial dialyzers containing polysulfone hollow fibers in vitro at 37°C, a nominal blood flow rate of 300 mL/min, and dialysate flow rates (Qd) ranging from 100 to 800 mL/min. A standard bicarbonate dialysis solution was used in both the blood and dialysate flow pathways, and clearances were calculated from solute concentrations in the input and output flows on both the blood and dialysate sides. Urea and creatinine KoA values, calculated from the mean of the blood and dialysate side clearances, increased (p < 0.01) with increasing Qd over the entire range studied. The increase in both urea and creatinine KoA with increasing Qd was proportional to the KoA value. These data show that changes in Qd alter small solute clearances greater than predicted assuming a constant KoA. abstract_id: PUBMED:9925153 Blood flow rate: an important determinant of urea clearance and delivered Kt/V. Implementation of the Dialysis Outcomes Quality Initiative (DOQI) Guidelines for hemodialysis adequacy will necessitate an increase in delivered Kt/V for many patients. Before increasing Kt/V by prolonging the patient's treatment time, it is important to verify that the prescribed dialyzer urea clearance is being achieved. The principal determinant of dialyzer urea clearance is blood flow rate. Actual blood flow rates are frequently less than the nominal blood flow rate displayed by the dialysis machine, particularly at higher flow rates, leading to lower than expected urea clearances. The major reason for the reduction in blood flow rate is a low pressure in the arterial blood line proximal to the blood pump. This effect can be mitigated by the use of large bore access needles. For quality assurance purposes, actual blood flow rates should be determined by correcting nominal blood flow rates for pressure effects using empirical relationships or by using an ultrasonic flow meter. Because a poorly functioning blood access may further reduce the effective blood flow rate, blood access performance should also be monitored regularly. abstract_id: PUBMED:27857008 The Effect of Dialysate Flow Rate on Dialysis Adequacy and Fatigue in Hemodialysis Patients. Purpose: In this single repeated measures study, an examination was done on the effects of dialysate flow rate on dialysis adequacy and fatigue in patients receiving hemodialysis. Methods: This study was a prospective single center study in which repeated measures analysis of variance was used to compare Kt/V urea (Kt/V) and urea reduction ratio (URR) as dialysis adequacy measures and level of fatigue at different dialysate flow rates: twice as fast as the participant's own blood flow, 500 mL/min, and 700 mL/min. Thirty-seven hemodialysis patients received all three dialysate flow rates using counterbalancing. Results: The Kt/V (M±SD) was 1.40±0.25 at twice the blood flow rate, 1.41±0.23 at 500 mL/min, and 1.46±0.24 at 700 mL/min. The URR (M±SD) was 68.20±5.90 at twice the blood flow rate, 68.67±5.22 at 500 mL/min, and 70.11±5.13 at 700 mL/min. When dialysate flow rate was increased from twice the blood flow rate to 700 mL/min and from 500 mL/min to 700 mL/min, Kt/V and URR showed relative gains. There was no difference in fatigue according to dialysate flow rate. Conclusion: Increasing the dialysate flow rate to 700 mL/min is associated with a significant increase in dialysis adequacy.
Hemodialysis with a dialysate flow rate of 700 mL/min should be considered in selected patients not achieving adequacy despite extended treatment times and optimized blood flow rate. abstract_id: PUBMED:12608556 Increasing blood flow increases kt/V(urea) and potassium removal but fails to improve phosphate removal. Background: Hyperphosphatemia and hyperkalemia are major determinants of morbidity and mortality in hemodialysis patients. Half of the dialysis population suffers from hyperphosphatemia which is now recognized as an important cardiovascular disease risk factor. It is, therefore, necessary to improve the removal of these molecules. In this study, we investigated the effect of enhancing blood flow on Kt/V for urea (Kt/Vu), potassium and phosphate removal. Methods: Thirteen patients were investigated in a randomized, cross-over, prospective study using 3 blood flows (Qb) of 200, 250 and 300 ml/min which gave 39 standardized high-flux hemodialysis treatments. Effective blood flows were measured by ultrasonic flow meter. Quantification of delivered dialysis dose was performed by partial dialysate and ultrafiltrate collection for the determination of potassium and phosphate removal and by blood urea concentrations for determination of Kt/Vu. Results: Kt/Vu rose significantly from 1.10 ± 0.14 to 1.22 ± 0.14 and finally to 1.39 ± 0.16 (p = 0.0001) with increasing Qb similar to the increase in potassium removal from 53.0 ± 2.4 to 63.4 ± 2.6 and to 74.2 ± 3.8 mMol (p = 0.01). Phosphate removal only improved from 28.1 ± 1.3 to 31.4 ± 1.5 (p = 0.050) when Qb was increased from 200 to 250 ml/min but remained unchanged at 31.2 ± 1.5 mMol (not significant compared to phosphate removal at Qb = 250 ml/min) when Qb was increased to 300 ml/min. Conclusions: Increasing delivered Kt/Vu and potassium removal with higher Qb fails to produce the same desired effect with phosphate removal during high-flux hemodialysis. Answer: Increasing the dialysate flow rate (Qd) has been a common strategy to improve the delivered Kt (the product of dialyzer clearance of urea and treatment time) in hemodialysis. However, recent studies have shown that the benefits of increasing Qd may be limited, especially with modern dialyzers that have enhanced dialysate flow distribution. A study by Pérez-García et al. (PUBMED:25884763) found that increasing Qd from 400 to 500 ml/min increased Kt by 4%, and a further increase to 700 ml/min only increased Kt by an additional 3%. They concluded that increasing Qd over 400 ml/min offers limited benefit and that increasing dialysis time is a better alternative, with demonstrated benefits to the patient and less water consumption. Similarly, a multi-center randomized clinical trial (PUBMED:21799145) compared delivered Kt/Vurea (a measure of dialysis dose) at dialysate flow rates of 600 and 800 ml/min and found no significant difference in delivered Kt/Vurea between the two flow rates, suggesting that increasing Qd beyond 600 ml/min offers no benefit for these dialyzers. Another study (PUBMED:10620551) did find that increasing Qd from 500 to 800 mL/min was associated with a significant increase in Kt/V, suggesting that higher Qd should be considered in selected patients not achieving adequacy despite extended treatment times and optimized blood flow rates.
In the context of post-dilution online hemodiafiltration (OL-HDF), a study (PUBMED:26565938) found that increasing Qd from 500 to 600 or 700 ml/min increased Kt by only 1.7% and did not significantly affect infusion volume, leading to the conclusion that a Qd higher than 500 ml/min is unhelpful for increasing the efficacy of Kt or infusion volume in OL-HDF with the monitors and dialysers used in the study. Overall, the evidence suggests that while there may be some benefit to increasing Qd, the magnitude of this benefit is limited, especially with modern dialyzers designed to optimize dialysate flow distribution. Increasing dialysis time and optimizing other parameters such as blood flow rate may be more effective strategies for improving dialysis adequacy (PUBMED:25884763, PUBMED:21799145). In addition, a study by Pérez-García et al. (PUBMED:22458394) introduced a model to predict an appropriate AutoFlow (AF) factor, which adjusts the dialysate flow rate according to the effective blood flow rate of the patient. This model aims to achieve an appreciable increase in dialysis dose while consuming lower amounts of dialysate, water, and energy. The study found that increasing the dialysate flow rate more than the model predicted had a small effect on the delivered dose of dialysis, suggesting that the use of the AF function can lead to significant savings of dialysate fluid without compromising the dialysis dose. Moreover, research by Schneditz et al. (PUBMED:28455862) showed that the dialyzer mass transfer area coefficient (KoA) for urea increases with increasing Qd from 500 to 800 mL/min. However, this increase in KoA with increasing Qd was proportional to the KoA value, indicating that changes in Qd alter small solute clearances more than predicted assuming a constant KoA. Lastly, a study by Canaud et al. (PUBMED:27857008) found that increasing the dialysate flow rate to 700 mL/min is associated with a significant increase in dialysis adequacy, suggesting that a higher dialysate flow rate should be considered in selected patients not achieving adequacy with standard treatment parameters. In conclusion, while there may be some benefit to increasing the dialysate flow rate, the evidence suggests that the improvements in delivered Kt may be modest and patient-specific. Modern dialyzers with enhanced flow distribution may not require high Qd to achieve optimal Kt/Vurea, and other strategies such as increasing treatment time or optimizing blood flow rate may be more effective in improving dialysis adequacy (PUBMED:25884763, PUBMED:21799145, PUBMED:22458394, PUBMED:28455862, PUBMED:27857008).
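As a brief illustrative aside on the arithmetic behind the adequacy figures quoted above (the numerical values in the example are assumed for illustration only and are not taken from any of the cited studies), Kt/V is the dialyzer urea clearance K multiplied by the effective treatment time t and divided by the patient's urea distribution volume V:

$$
\mathrm{Kt/V} \;=\; \frac{K\ [\text{mL/min}] \times t\ [\text{min}]}{V\ [\text{mL}]}\,, \qquad \text{e.g.}\ \ \frac{240 \times 240}{40\,000} \approx 1.44 .
$$

Seen this way, the studies' conclusions follow directly: once dialysate flow is already well matched to blood flow, a higher Qd raises K only slightly, whereas lengthening t (or raising effective blood flow, which acts on K directly) shifts Kt/V in proportion.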
Instruction: Can medical students identify a potentially serious acetaminophen dosing error in a simulated encounter? Abstracts: abstract_id: PUBMED:25889566 Can medical students identify a potentially serious acetaminophen dosing error in a simulated encounter? a case control study. Background: In an effort to assess medical students' abilities to identify a medication administration error in an outpatient setting, we designed and implemented a standardized patient simulation exercise which included a medication overdose. Methods: Fourth year medical students completed a standardized patient (SP) simulation of a parent bringing a toddler to an outpatient setting. In this case-control study, the majority of students had completed a patient safety curriculum about pediatric medication errors prior to their SP encounter. If asked about medications, the SP portraying a parent was trained to disclose that she was administering acetaminophen and to produce a package with dosing instructions on the label. The administered dose represented an overdose. Upon completion, students were asked to complete an encounter note. Results: Three hundred forty students completed this simulation. Two hundred ninety-one students previously completed a formal patient safety curriculum while 49 had not. A total of two hundred thirty-four students (69%) ascertained that the parent had been administering acetaminophen to their child. Thirty-seven students (11%) determined that the dosage exceeded recommended dosages. There was no significant difference in the error detection rates of students who completed the patient safety curriculum and those who had not. Conclusions: Despite a formal patient safety curriculum concerning medication errors, 89% of medical students did not identify an overdose of a commonly used over the counter medication during a standardized patient simulation. Further educational interventions are needed for students to detect medication errors. Additionally, 31% of students did not ask about the administration of over the counter medications suggesting that students may not view such medications as equally important to prescription medications. Simulation may serve as a useful tool to assess students' competency in identifying medication administration errors. abstract_id: PUBMED:11576205 Prevalence and clinical characteristics of headache in medical students in oman. Objectives: To perform a descriptive epidemiological study of headache in medical students at Sultan Qaboos University, analyzing prevalence, symptom profile, and pattern of health care utilization. Background: Headache is one of the most common complaints in medical practice. To our knowledge, headache has not been the subject of investigation in medical students in the Arabian Gulf. Methods: Lifetime and last-year prevalence of headache was based on a detailed structured headache assessment questionnaire. Besides demographic data, headache characteristics and pattern of health care utilization were evaluated. In addition, questions were included referring to the use of traditional remedies. Interviewers included three previously trained final-year medical students. The evaluation was done per cohort, and the students were guided through the assessment questionnaire by the interviewers. Migraine and tension-type headache were diagnosed according to the criteria of the International Headache Society. Results: Four hundred three students (95.3%) completed the questionnaire: 151 men (37.5%) and 252 women (62.5%). 
The lifetime and last-year prevalence of headache was 98.3% and 96.8%, respectively. A positive family history of headache was found in 57.6% of students. The prevalence rate of migraine and tension-type headache was found to be the same (12.2%), with a difference in distribution across sexes: 6.6% of the men and 15.5% of the women had migraine, while 13.9% of the men and 11.1% of the women suffered from tension-type headache. Only 23.3% of students sought medical assistance during headache episodes, and 80.3% took medication: 24.6% took prescribed medication, 72.9% took nonprescription medication, and only 2.5% took traditional remedies. Acetaminophen (83.1%) followed by mefenamic acid (24.6%) were the most commonly used drugs. Conclusions: The results of this prospective epidemiological study show that headache is highly prevalent among medical students at this university. The high prevalence rate of migraine sufferers in this student population might be due to the high female-to-male ratio (1.7:1). It is likely that analgesic use/overuse also coexists with headache in medical students at Sultan Qaboos University, since a large majority of them rely on nonprescription medications. abstract_id: PUBMED:33426113 Comparison of self-medication practices with analgesics among undergraduate medical and paramedical students of a tertiary care teaching institute in Central India - A questionnaire-based study. Context: Inappropriate self-medication can increase chances of adverse drug reactions, disease aggravation, or drug interactions. Analgesics are most commonly used as self-medication. Aims: The aim of this study was to evaluate and compare analgesic self-medication practices among medical and paramedical undergraduate students of a tertiary care teaching institute in Central India. Materials And Methods: A cross-sectional, observational study was conducted in 216 undergraduate medical (MBBS and BDS) and paramedical (occupational therapy/physiotherapy and BSc nursing) students. A predesigned, self-developed, semi-structured questionnaire was used. Statistical Analysis: The Chi-square test was used for testing statistical significance. Results: The overall prevalence of self-medication with analgesics was 83.33%. Self-medication was significantly high among medical students as compared to paramedical students (P = 0.003). Significantly more medical students were aware about adverse drug reactions of analgesics as compared to paramedical students (P = 0.019). The most common source of information about drugs was previous prescription (58.33%), followed by media including the Internet (53.70%). The most dominant symptom compelling self-medication was found to be muscular pain (42.12%), followed by headache (36.57%). 54.16% of the students revealed that self-medication provides quick relief from pain. The most commonly used analgesic was paracetamol (82.40%), followed by diclofenac (22.68%). A significant number of paramedical students do not know exactly what precautions should be taken while taking analgesics (P = 0.002). Conclusions: Medical students are more indulged in self-medication practices with analgesics. Paramedical students need to be educated regarding safe use of analgesics. abstract_id: PUBMED:22761527 Pharmacy student knowledge retention after completing either a simulated or written patient case. Objective: To determine pharmacy students' knowledge retention from and comfort level with a patient-case simulation compared with a written patient case. 
Design: Pharmacy students were randomly assigned to participate in either a written patient case or a simulated patient case in which a high-fidelity mannequin was used to portray a patient experiencing a narcotic and acetaminophen overdose. Assessment: Participants' responses on a multiple-choice test and a survey instrument administered before the case, immediately after the case, and 25 days later indicated that participation in the simulated patient case did not result in greater knowledge retention or comfort level than participation in the written patient case. Students' knowledge improved post-intervention regardless of which teaching method was used. Conclusions: Although further research is needed to determine whether the use of simulation in the PharmD curriculum is equivalent or superior to other teaching methods, students' enthusiasm for learning in a simulated environment where they can safely apply patient care skills make this technology worth exploring. abstract_id: PUBMED:32983946 Knowledge, Attitude, and Practice on Over-the-Counter Drugs Among Pharmacy and Medical Students: A Facility-Based Cross-Sectional Study. Background: Self-medication with over-the-counter (OTC) medications is common among medicine and health science students. For safe use of OTC medications, students are expected to have proper knowledge, attitude, and practice (KAP) towards OTC medications and subsequent adverse drug reactions (ADRs). Objective: The aim of this study was to assess KAP of OTC medications use and related factors among medical and pharmacy students at the University of Gondar, Gondar, Northwest Ethiopia. Methods: A cross-sectional study was conducted. Data were collected using a self-administered questionnaire and analyzed using Statistical Package for Social Sciences (SPSS) version 24. Chi-square analysis was conducted and multivariable logistic regression analysis was used to determine the association between KAP and OTC use and its related adverse effects. A P value of less than 0.05 was used to declare statistical significance. Results: A total of 380 students (229 medical students and 151 pharmacy students) participated in the study. The majority of the respondents 303 (79.7%) reported that they have the practice of self-medication. Fever 69 (80.2%), headache 21 (24.4%), and abdominal cramp 20 (23.3%) were the most common conditions for which the students go for self-medication while paracetamol 51 (59.3%) followed by non-steroidal anti-inflammatory drugs (NSAIDs) 44 (51.2%) were the most commonly used classes of drugs. An intention for time-saving caused by the waiting time due to crowds in medical consultation rooms 212 (77.4%) and a desire for quick relief 171 (62.4%) were the main reasons for the self-medication practice with OTC medications. Conclusion: Self-medication is widely practiced among medical and pharmacy students. Significant problems and malpractices were identified, such as sharing of OTC medications, the use of expired medicines, doubling the dose of medications when they were ineffective, storage of OTC medications, and not reading labels and expiry dates. abstract_id: PUBMED:35634137 Influence of Medical Education on Medicine Use and Self-Medication Among Medical Students: A Cross-Sectional Study from Kabul. Objective: To compare the prevalence of self-medication among first- and fifth-year medical students at Kabul University of Medical Sciences. 
Methods: A cross-sectional study was conducted with the participation of all first- and fifth-year medical students by using a short, self-administered questionnaire. The prevalence of self-medication was estimated in the entire study population and also in those who had used medicines in the preceding one week. Results: Of the total 302 students, the prevalence of medicine use was 38%. The prevalence of self-medication in all study population was 25.16%, whereas in those who had used medicines was 64.9%. Prescription-only medicines consisted of 59.2% of self-medication. The practice of self-medication and the use of prescription-only medicines were more prevalent among students in their fifth year and among males. While the prevalence of medicine use was the same among males and females, it differed between students in the fifth and first year. Paracetamol, anti-infectives, and non-steroidal anti-inflammatory drugs (NSAIDs) were more frequently used medicines. Conclusion: The use of medicines, self-medication and the use of prescription-only medicines were more prevalent among fifth-year students compared to those in the first-year. This apparently reflects the effect of medical education and training. More specific studies are required to address the issue in more detail and to facilitate interventions. The estimation of the prevalence of self-medication by using a short acceptable recall period, confined in those who had used medicines, seems to be more reasonable and accurate than by using a longer recall period in the entire study population. The prevalence of prescription-only medicines in self-medication could also be a useful indicator. abstract_id: PUBMED:27547561 Perception of the risk of adverse reactions to analgesics: differences between medical students and residents. Background. Medications are not exempt from adverse drug reactions (ADR) and how the physician perceives the risk of prescription drugs could influence their availability to report ADR and their prescription behavior. Methods. We assess the perception of risk and the perception of ADR associated with COX2-Inbitors, paracetamol, NSAIDs, and morphine in medical students and residents of northeast of Mexico. Results. The analgesic with the highest risk perception in both group of students was morphine, while the drug with the least risk perceived was paracetamol. Addiction and gastrointestinal bleeding were the ADR with the highest score for morphine and NSAIDs respectively. Discussion. Our findings show that medical students give higher risk scores than residents toward risk due to analgesics. Continuing training and informing physicians about ADRs is necessary since the lack of training is known to induce inadequate use of drugs. abstract_id: PUBMED:35356012 Self-Medication Practices in Medical Students During the COVID-19 Pandemic: A Cross-Sectional Analysis. Background And Objectives: During the pandemic, the growing influence of social media, accessibility of over-the-counter medications, and fear of contracting the virus may have led to self-medication practices among the general public. Medical students are prone to such practices due to relevant background knowledge, and access to drugs. This study was carried out to determine and analyze the prevalence of self-medication practices among medical students in Pakistan. 
Materials And Methods: This descriptive, cross-sectional study was conducted online in which the participants were asked about the general demographics, their self-medication practices and the reasons to use. All participants were currently enrolled in a medical college pursuing medical or pharmacy degree. Non-probability sampling technique was used to recruit participants. Results: A total of 489 respondents were included in the final analysis. The response rate was 61%. Majority of the respondents were females and 18-20 years of age. Self-medication was quite prevalent in our study population with 406 out of 489 individuals (83.0%) were using any of the drugs since the start of pandemic. The most commonly utilized medications were Paracetamol (65.2%) and multivitamins (56.0%). The reasons reported for usage of these medications included cold/flu, or preventive measures for COVID-19. The common symptoms reported for self-medication included fever (67.9%), muscle pain (54.0%), fatigue (51.7%), sore throat (46.6%), and cough (44.4%). Paracetamol was the most commonly used drug for all symptoms. Female gender, being in 3rd year of medical studies, and individuals with good self-reported health were found more frequent users of self-medication practices. Conclusion: Our study revealed common self-medication practices among medical and pharmacy students. It is a significant health issue especially during the pandemic times, with high consumption reported as a prevention or treating symptoms of COVID-19. abstract_id: PUBMED:37309455 A Cross-Sectional Study to Investigate the Prevalence of Self-Medication of Non-Opioid Analgesics Among Medical Students at Qassim University, Saudi Arabia. Purpose: Self-medication (SM) using non-opioid analgesics (NOA) is contentious and increasingly recognized as a major public health concern with severe consequences, including masking of malignant and fatal diseases, risk of misdiagnosis, problems relating to over- and under-dosing, drug interactions, incorrect dosage, and choice of therapy. Herein, we aim to determine the prevalence of SM with NOA among pharmacy and medical students at Unaizah College, Qassim University, Saudi Arabia. Patients And Methods: A cross-sectional study using a validated self-administered questionnaire was conducted on 709 pharmacy and medicine students belonging to an age group of 21-24 years from Unaizah Colleges. Data were statistically analyzed using SPSS version 21. Results: Of 709 participants, 635 responded to the questionnaire. Our results showed a prevalence percentage of 89.6% using self-medicated NOA for pain management. The most common factor leading to SM in NOA was the mild nature of the illness (50.6%), and headache/migraine (66.8%) was the dominant health problem. Paracetamol (acetaminophen, 73.7%) was the most commonly used analgesic, followed by ibuprofen (16.5%). The most common and reliable sources of drug information were pharmacists (51.5%). Conclusion: We observed a high rate of SM for NOA among undergraduate students. We believe that the adverse consequences of SM could be controlled through educational, regulatory, and administrative strategies by providing appropriate awareness sessions, and the role of pharmacists should be highlighted in preventing SM from NOA. abstract_id: PUBMED:28691871 Non-health care facility medication errors resulting in serious medical outcomes. 
Objective: The objective of this study is to provide an epidemiologic analysis of medication errors occurring outside of health care facilities that result in serious medical outcomes (defined by the National Poison Database System as "moderate effect," "major effect," "death," or "death, indirect report"). Methods: National Poison Database System data from 2000 through 2012 were used for this retrospective analysis of non-health care facility medication errors. Results: From 2000 through 2012, Poison Control Centers in the United States received data on 67,603 exposures related to unintentional therapeutic pharmaceutical errors that occurred outside of health care facilities that resulted in serious medical outcomes. The overall average rate of these medication errors was 1.73 per 100,000 population, and there was a 100.0% rate increase during the 13-year study period. Medication error frequency and rates increased for all age groups except children younger than 6 years of age. Medical outcome was most commonly reported as moderate effect (93.5%), followed by major effect (5.8%) and death (0.6%). Common types of medication errors included incorrect dose, taking or administering the wrong medication, and inadvertently taking the medication twice. The medication categories most frequently associated with serious outcomes were cardiovascular drugs (20.6%) (primarily beta blockers, calcium antagonists, and clonidine), analgesics (12.0%) (most often opioids and acetaminophen, alone and combination products), and hormones/hormone antagonists (11.0%) (in particular, insulin, and sulfonylurea). Conclusions: This study analyzed non-health care facility medication errors resulting in serious medical outcomes. The rate of non-health care facility medication errors resulting in serious medical outcomes is increasing, and additional efforts are needed to prevent these errors. Answer: No, the majority of medical students were unable to identify a potentially serious acetaminophen dosing error in a simulated encounter. In a case-control study involving fourth-year medical students who completed a standardized patient simulation, only 11% of the students determined that the dosage of acetaminophen administered to a child by a parent exceeded recommended dosages. This was despite the fact that the majority of the students had completed a formal patient safety curriculum concerning pediatric medication errors prior to their standardized patient encounter. The study concluded that further educational interventions are needed for students to detect medication errors, as 89% did not identify an overdose of a commonly used over-the-counter medication during the simulation (PUBMED:25889566).
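A hedged, purely illustrative sketch of the weight-based arithmetic such an error involves (the abstracts do not report the actual dose used in the simulation; the 10–15 mg/kg-per-dose range is the commonly cited paediatric guidance, and the 12 kg body weight and 325 mg tablet strength are assumptions introduced here for illustration):

$$
10\text{–}15\ \text{mg/kg} \times 12\ \text{kg} \;=\; 120\text{–}180\ \text{mg per dose},
$$

so repeatedly giving such a toddler an adult-strength 325 mg dose would exceed the labelled per-dose range — the kind of discrepancy between the package label and the reported administration that the students were expected to notice.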
Instruction: Faciocraniosynostosis: monobloc frontofacial osteotomy replacing the two-stage strategy? Abstracts: abstract_id: PUBMED:35786530 Syndromic Synostosis: Frontofacial Surgery. Frontofacial surgery, encompassing the monobloc with or without facial bipartition and the box osteotomy, can treat the frontal bone and midface simultaneously, providing comprehensive improvement in facial balance. Complex pediatric patients with genetic syndromes and craniosynostosis are most optimized by an interdisciplinary team of surgeons, pediatricians, geneticists, speech pathologists, audiologists, dietitians, pediatric dentists, orthodontists, and psychosocial support staff to manage the myriad of challenges and complications throughout early childhood and beyond. Despite early treatment of the anterior and posterior cranial vault, these patients frequently have resultant frontal and/or midface hypoplasia and orbital abnormalities that are best managed with simultaneous surgical treatment. abstract_id: PUBMED:32102742 Surgical-Orthodontic Considerations in Subcranial and Frontofacial Distraction. Subcranial and frontofacial distraction osteogenesis have emerged as powerful tools for management of hypoplasia involving the upper two-thirds of the face. The primary goal of subcranial or frontofacial distraction is to improve the orientation of the upper face and midface structures (frontal bone, orbitozygomatic complex, maxilla, nasal complex) relative to the cranial base, globes, and mandible. The various techniques used are tailored for management of specific phenotypic differences in facial position and may include segmental osteotomies, differential vectors, or synchronous maxillomandibular rotation. abstract_id: PUBMED:22872273 Faciocraniosynostosis: monobloc frontofacial osteotomy replacing the two-stage strategy? Background: Frontofacial monobloc advancement (FFMBA) is a powerful but high-risk procedure to correct both exorbitism and impaired airways of faciocraniosynostosis. Patients And Methods: One hundred and five children with faciocraniosynostosis (mean 4.9 years, 7 months-14 years) were evaluated prospectively after FFMBA and quadruple internal distraction. The advancement was started at day 5 (0.5 mm/day). Mean follow-up was 61 months (maximum 10.5 years). Relapse was evaluated by the comparison between the evaluation at the time of removal of distractors and 6 months later. Results: Seventy-six patients (72%) completed their distraction uneventfully in the initial period. Complications: - One death at D1 from acute tonsillar herniation before beginning of distraction. - Cerebrospinal fluid leaks managed conservatively (11 patients) and with transient lumbar drainage (eight patients). - Revision surgery (dysfunction/infection) of distraction devices (nine patients, subsequently four completed the distraction). Ninety-nine out of 104 patients finally completed their distraction, resulting in exorbitism correction. Respiratory impairment, when present, was corrected and class I occlusal relationship was obtained in 77% of the cases. Reossification was limited at the orbital level but relapse could be prevented by a retention phase of 6 months. Pfeiffer syndrome, previous surgeries, and surgery before 18 months of age were risk factors. Conclusions: Internal distraction allows early correction of respiratory impairment and exorbitism of faciocraniosynostosis. 
In order to limit the risks, we advise: - Preliminary craniovertebral junction decompression if needed - Four devices to customize the distraction - Double pericranial flap to seal the anterior cranial fossa - Systematical external transient drainage if CSF leak - Slow rate of distraction (0.5 mm/day) - Long consolidation phase (6 months). abstract_id: PUBMED:37079110 Sequential repeated tibial tubercle osteotomy in a two-stage exchange strategy: a superior approach to treating a chronically infected knee arthroplasty? Purpose: Surgical approach can impact the reliability of the debridement after a chronic total knee periprosthetic joint infection (PJI), a factor of utmost importance to eradicate the infection. The most adequate knee surgical approach in cases of PJI is a matter of debate. The purpose of this study was to determine the influence of performing a tibial tubercle osteotomy (TTO) in a two-stage exchange protocol for knee PJI treatment. Methods: Retrospective cohort study examining patients managed with two-stage arthroplasty due to chronic knee PJI (2010-2019). Performance and timing of the TTO were collected. Primary end-point was infection control with a minimum FU of 12 months and according to internationally accepted criteria. Correlation between TTO timing and reinfection rate was reviewed. Results: Fifty-two cases were finally included. Overall success (average follow-up: 46.2 months) was 90.4%. Treatment success was significantly higher among cases addressed using TTO during the second stage (97.1% vs. 76.5%, p value 0.03). Only 4.8% of the patients relapsed after performing a sequential repeated TTO, that is, during both first and second stages, compared to 23.1% cases in which TTO was not done (p value 0.28). No complications were observed among patients in the TTO group with a significant decrease in soft tissue necrosis (p: 0.052). Conclusion: Sequential repeated tibial tubercle osteotomy during a two-stage strategy is a reasonable option and offers high rates of infection control in complex cases of knee PJI with a low rate of complications. abstract_id: PUBMED:30013895 Cerclages after Femoral Osteotomy Are at Risk for Bacterial Colonization during Two-Stage Septic Total Hip Arthroplasty Revision. Aims: In cases of a two-stage septic total hip arthroplasty (THA) exchange a femoral osteotomy with subsequent cerclage stabilization may be necessary to remove a well-fixed stem. This study aims to investigate the rate of bacterial colonization and risk of infection persistence associated with in situ cerclage hardware in two-stage septic THA exchange. Patients and Methods: Twenty-three patients undergoing two-stage THA exchange between 2011 and 2016 were included in this retrospective cohort study. During the re-implantation procedure synovial fluid, periprosthetic tissue samples and sonicate fluid cultures (SFC) of the cerclage hardware were acquired. Results: Seven of 23 (30%) cerclage-SFC produced a positive bacterial isolation. Six of the seven positive cerclage-SFC were acquired during THA re-implantation. Two of the seven patients (29%) with a positive bacterial isolation from the cerclage hardware underwent a THA-revision for septic complications. The other five patients had their THA in situ at last follow-up. Conclusions: Despite surgical debridement and antimicrobial therapy, a bacterial colonization of cerclage hardware occurs and poses a risk for infection persistence. All cerclage hardware should be removed or exchanged during THA reimplantation. 
abstract_id: PUBMED:26539670 Comparison of two surgeries in treatment of severe kyphotic deformity caused by ankylosing spondylitis: Transpedicular bivertebrae wedge osteotomy versus one-stage interrupted two-level transpedicular wedge osteotomy. Objective: To explore a simple and effective surgery for correcting severe kyphotic deformity caused by ankylosing spondylitis (AS). Materials And Methods: From January 2003 to December 2009, we retrospectively reviewed 32 patients with severe spinal kyphosis caused by AS with at least 2-year follow-up. Patients were divided into two groups, according to surgical methods: transpedicular bivertebrae wedge osteotomy (Group A) or one-stage interrupted two-level transpedicular wedge osteotomy (Group B). We recorded operating time and blood loss. Variation between pre- and post-operative sagittal imbalance, global spinal alignments (Cobb angle of T1 and L5, TLKA), lumbar lordosis, chin-brow vertical angle, thoracolumbar kyphosis angle in both groups were analyzed. Results: The average operating time was 236 ± 39 min and the average blood loss was 2200 ± 712 ml in Group A, and 252 ± 43 min, 2202 ± 737 ml respectively in Group B. There were no significant differences in operating time and blood loss. Variation between pre- and post-operative sagittal imbalance, global spinal alignments, lumbar lordosis and chin-brow vertical angle (CBVA) were comparable between the two groups. The variation of thoracolumbar kyphosis angle was significantly greater in Group B compared with Group A. SRS-22 scores were similar in the two groups at the 2-year follow-up and significantly improved compared with preoperative. Conclusions: For correcting severe kyphosis in patients with AS, the one-stage interrupted two-level transpedicular wedge osteotomy is a safe and effective technique which can significantly improve the thoracolumbar kyphosis angle. abstract_id: PUBMED:35011776 Extended Trochanteric Osteotomy with Intermediate Resection Arthroplasty Is Safe for Use in Two-Stage Revision Total Hip Arthroplasty for Infection. Background: This study sought to compare the results of two-stage revision total hip arthroplasty (THA) for periprosthetic infection (PJI) in patients with and without the use of an extended trochanteric osteotomy (ETO) for removal of a well-fixed femoral stem or cement. Methods: Thirty-two patients who had undergone an ETO as part of a two-stage revision without spacer placement were matched 1:2 with a cohort of sixty-four patients of the same sex and age who had stem removal without any osteotomy. Clinical outcomes including interim revision, reinfection and aseptic failure rates were evaluated. Modified Harris hip scores (mHHS) were calculated. Minimum follow-up was two years. Results: Patients undergoing ETO had a significantly lower rate of interim re-debridement compared to non-ETO patients (0% vs. 14.1%, p = 0.026). Reinfection following reimplantation was similar in both groups (12.5% in ETO patients vs. 9.4% in non-ETO patients, p = 0.365). Revision for aseptic reason was necessary in 12.5% in the ETO group and 14.1% in the non-ETO group (p = 0.833). Periprosthetic femoral fractures were seen in three patients (3.1%), of which all occurred in non-ETO patients. Dislocation was the most common complication, which was equally distributed in both groups (12.5%). The mean mHHS was 37.7 in the ETO group and 37.3 in the non-ETO group, and these scores improved significantly in both groups following reimplantation (p < 0.01).
Conclusion: ETO without the use of spacer is a safe and effective method to manage patients with well-fixed femoral stems and for thorough cement removal in two-stage revision THA for PJI. While it might reduce the rate of repeated debridement in the interim period, the use of ETO appears to lead to similar reinfection rates following reimplantation. abstract_id: PUBMED:24530078 Orbitofrontal monobloc advancement for Crouzon syndrome. Introduction: Usually, patients suffering from Crouzon syndrome have synostosis of coronal sutures, exophthalmia, hypertelorism, and hypoplasia of the middle third of face. Sometimes maxillary retrusion is absent, so these patients have class I or II relationship. In these cases, frontofacial monobloc advancement, which is the gold standard, increases the maxillo-mandibular dysmorphia. Therefore we propose orbitofrontal monobloc advancement minus dental arch, without splits of the pterygoid plates. Case Report: A 12-year-old girl with Crouzon syndrome had intracranial hypertension, exophthalmia, a middle third retrusion and a class II occlusion. We achieved orbitofrontal monobloc advancement which is frontofacial monobloc advancement minus maxillary dental arch. Four distractors KLS Martin were used. After 20 days of distraction, the final advancement was 10.2 mm for cranial distractors and 10.5 mm at fronto-zygomatic. Distractors were removed after 8 months. Discussion: We offer patients suffering from Crouzon syndrome with class I or II relationship a change from the classic frontofacial monobloc advancement leaving the maxillary dental arch in place, thus avoiding the worsening of the maxillo-mandibular dysmorphia related to surgery. The idea of associating Le Fort I osteotomy with a frontofacial monobloc advancement or Le Fort III osteotomy has already been described, mainly by Tessier and Obwegeser, however they probably achieved a complete Le Fort I osteotomy while we don't split the pterygoid plates. The patient's morphology and his surgical history determine the choice between Le Fort III and monobloc advancement. Dental occlusion needs to be taken into account for surgical indication. abstract_id: PUBMED:35176524 Computer-Assisted Frontofacial Monobloc Advancement and Facial Bipartition for Pfeiffer Syndrome: Surgical Technique. Background: In patients with Pfeiffer syndrome, several corrections are required to correct facial retrusion, maxillary deficiency, or even hypertelorism. The frontofacial monobloc advancement (FFMA) and the facial bipartition (FB) are the gold standard surgeries. We present the correction of this deformity using a simultaneous computer-assisted FFMA and FB. Methods: The 3-dimensional surgical planning defined the virtual correction and bone-cutting guide in view of the FFMA and FB. Coronal and intraoral approaches were combined to perform the osteotomies. Four internal distractors were also placed for the postoperative distraction osteogenesis. Results: We reported 2 cases of computer-assisted surgery with satisfying outcomes. The sagittal deficiency (fronto-facial retrusion) was corrected by FFMA and the transversal abnormality (i.e., hypertelorism and maxillary deficiency) by the FB, then followed by an internal distraction osteogenesis. Conclusions: Computer-assisted surgery is helpful and a reliable option for the management of complex faciocraniosynostosis such as hypertelorism and frontofacial retrusion. 
abstract_id: PUBMED:36849679 Extended trochanteric osteotomy is a safe procedure in two-stage hip revision: a systematic review of the literature. Background: Extended trochanteric osteotomy (ETO) has proved to be an effective technique in complicated stem removal in femoral aseptic loosening or periprosthetic fracture. Debate remains about its safety in periprosthetic joint infection (PJI). The primary aim of this study is to analyze the ETO reinfection and union rate in two-stage hip revision. Material And Methods: A systematic literature review was performed regarding all studies reporting ETO outcomes in the two-stage revision for hip PJI up to October 2022, according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses criteria. A literature search was conducted in the following databases: MEDLINE/EMBASE, Scopus, Web of Science, and Cochrane. Quality assessment of the articles was performed using the Methodological Index for Non-Randomized Studies. This systematic review was registered in the International Prospective Registry of Systematic Reviews. Patient demographic, clinical, and surgical data were collected. Results: This systematic review included and analyzed nine clinical studies with a total of 382 ETO PJI hips in two-stage revision. The overall ETO reinfection rate was 8.9% (34 hips), consistent with the reinfection rate after two-stage revision in patients without ETO. The overall ETO union rate was 94.8% (347 hips), comparable to the ETO union rate in non-septic patients. Compared between a group of patients with ETO PJI and a group of patients with non-PJI ETO, there were no significant differences in postoperative complications, both septic and aseptic, and for postoperative HHS. Conclusion: ETO proved to be a safe and effective procedure in PJI revisions. It may be a viable option in challenging femoral stem removal during the two-stage hip revision in PJI. Level Of Evidence: IV. Answer: The abstract from PUBMED:22872273 discusses the use of frontofacial monobloc advancement (FFMBA) as a procedure to correct exorbitism and impaired airways in patients with faciocraniosynostosis. The study evaluated 105 children who underwent FFMBA with quadruple internal distraction, starting the advancement at day 5 with a rate of 0.5 mm/day. The mean follow-up was 61 months. The results showed that 76 patients (72%) completed their distraction without issues in the initial period, and 99 out of 104 patients ultimately completed their distraction, resulting in the correction of exorbitism. Respiratory impairment, when present, was corrected, and a class I occlusal relationship was obtained in 77% of the cases. The study concluded that internal distraction allows early correction of respiratory impairment and exorbitism of faciocraniosynostosis. To limit risks, the study advises a preliminary craniovertebral junction decompression if needed, the use of four devices to customize the distraction, a double pericranial flap to seal the anterior cranial fossa, systematic external transient drainage if CSF leak occurs, a slow rate of distraction, and a long consolidation phase of 6 months. This abstract suggests that FFMBA can be a powerful single-stage procedure that may replace the traditional two-stage strategy for treating faciocraniosynostosis, as it allows for early correction of associated impairments with a high success rate and manageable complications. 
However, it also emphasizes the importance of careful planning and management of potential risks to ensure successful outcomes.
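A quick arithmetic check ties the distraction figures quoted above together; the short Python sketch below is illustrative only and simply restates the rate (0.5 mm/day) and the roughly 10 mm advancements reported in the cited abstracts, where about 20 days of active distraction were described.
    # Illustrative only: estimate days of active distraction for a target
    # advancement at a fixed daily rate, using figures quoted in the abstracts.
    def distraction_days(target_mm: float, rate_mm_per_day: float = 0.5) -> float:
        return target_mm / rate_mm_per_day

    for advancement_mm in (10.2, 10.5):  # cranial and fronto-zygomatic advancements
        print(f"{advancement_mm} mm at 0.5 mm/day -> about {distraction_days(advancement_mm):.0f} days")
    # Both come out near 20 days, matching the 20-day distraction period
    # reported for the Crouzon case (PUBMED:24530078).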
Instruction: Evaluation of patients with type 2 diabetes mellitus receiving treatment during the pre-diabetes period: Is early treatment associated with improved outcomes? Abstracts: abstract_id: PUBMED:27865163 Evaluation of patients with type 2 diabetes mellitus receiving treatment during the pre-diabetes period: Is early treatment associated with improved outcomes? Aim: This study evaluates the effect of pretreatment with oral antidiabetics (OADs) on clinical outcomes and health resource utilization among commercially insured type II diabetes mellitus (T2DM) patients in the United States. Methods: Using administrative data (Truven MarketScan® Research Databases), patients diagnosed with T2DM between 2007 and 2014 with ≥6 months continuous enrolment pre- and post-diagnosis were evaluated. Pretreatment was defined as OAD use at least 3 months prior to T2DM diagnosis. Time-to-insulin initiation and healthcare costs were compared by OAD pretreatment status. Results: Of the 866,605 patients studied, 241,856 (27.9%) were pretreated prior to T2DM diagnosis. Mean follow-up was 2.9 years for pretreatment and 3.1 years for those without pretreatment. Monthly diabetes-related pharmacy costs were significantly higher among pretreated patients ($66 versus $36, p<0.0001), as were overall monthly pharmacy costs ($255 versus $198, p<0.0001). Pretreated patients had lower mean monthly costs, both total ($625 versus $671, p<0.0001) and diabetes-related ($207 versus $214, p=0.0012). After multivariable adjustment, mean monthly diabetes-related total healthcare costs were higher among pretreated patients (+$60) but total all-cause monthly healthcare costs were significantly lower (-$354) (both p<0.05). Pretreatment was associated with a lower insulin initiation probability for 2 years, after which the probability was similar; the adjusted hazard ratio for pretreatment in a time-to-insulin model was 0.96 (95% CI, 0.94-0.97). Conclusions: Pretreatment with OADs is associated with a modest delay in initiating insulin therapy and lower total healthcare costs. The clinical and pharmacoeconomic benefits of pretreatment should be elucidated in a prospective study. abstract_id: PUBMED:29317841 Obesity in Mexico: prevalence, comorbidities, associations with patient outcomes, and treatment experiences. Objective: The goal of this study is to investigate obesity and its concomitant effects, including the prevalence of comorbidities, its association with patient-reported outcomes and costs, and weight loss strategies, in a sample of Mexican adults. Methods: Mexican adults (N=2,511) were recruited from a combination of Internet panels and street intercepts using a random-stratified sampling framework, with strata defined by age and sex, so that they represent the population. Participants responded to a survey consisting of a range of topics including sociodemographics, health history, health-related quality of life (HRQoL), work productivity, health care resource use, and weight loss. Results: The sample consisted of 50.6% male with a mean age of 40.7 years (SD=14.5); 38.3% were overweight, and 24.4% were obese. Increasing body mass index (BMI) was associated with increased rates of type 2 diabetes, prediabetes, and hypertension, poorer HRQoL, and decreased work productivity. Of the total number of respondents, 62.2% reported taking steps to lose weight, with 27.6% and 17.1% having used an over-the-counter/herbal product and a prescription medication, respectively. Treatment discontinuation rates were high.
Conclusion: Findings indicated that 62% of participants reported being at least overweight and that they were experiencing the deleterious effects associated with higher BMI despite the desire to lose weight. Given the rates of obesity, and its impact on humanistic and societal outcomes, improved education, prevention, and management could provide significant benefits. abstract_id: PUBMED:25024596 Non-alcoholic fatty liver disease and diabetes: from physiopathological interplay to diagnosis and treatment. Non-alcoholic fatty liver disease (NAFLD) is highly prevalent in patients with diabetes mellitus, and increasing evidence suggests that patients with type 2 diabetes are at a particularly high risk for developing the progressive forms of NAFLD, non-alcoholic steatohepatitis and associated advanced liver fibrosis. Moreover, diabetes is an independent risk factor for NAFLD progression, and for hepatocellular carcinoma development and liver-related mortality in prospective studies. Notwithstanding, patients with NAFLD have an elevated prevalence of prediabetes. Recent studies have shown that NAFLD presence predicts the development of type 2 diabetes. Diabetes and NAFLD have mutual pathogenetic mechanisms, and it is possible that genetic and environmental factors interact with metabolic derangements to accelerate NAFLD progression in diabetic patients. The diagnosis of the more advanced stages of NAFLD in diabetic patients shares the same challenges as in non-diabetic patients, and it includes imaging and serological methods, although histopathological evaluation is still considered the gold standard diagnostic method. An effective established treatment is not yet available for patients with steatohepatitis and fibrosis, and randomized clinical trials including only diabetic patients are lacking. We sought to outline the published data, including epidemiology, pathogenesis, diagnosis and treatment of NAFLD in diabetic patients, in order to better understand the interplay between these two prevalent diseases and identify the gaps that still need to be filled in the management of NAFLD in patients with diabetes mellitus. abstract_id: PUBMED:37364260 Sustained weight loss with semaglutide once weekly in patients without type 2 diabetes and post-bariatric treatment failure. About 20%-25% of patients experience weight regain (WR) or insufficient weight loss (IWL) following bariatric surgery (BS). Therefore, we aimed to retrospectively assess the effectiveness of adjunct treatment with semaglutide in patients without type 2 diabetes (T2D) with post-bariatric treatment failure over a 12-month period. Post-bariatric patients without T2D with WR or IWL (n = 29) were included in the analysis. The primary endpoint was weight loss 12 months after initiation of adjunct treatment. Secondary endpoints included change in body mass index, HbA1c, lipid profile, high-sensitivity C-reactive protein and liver enzymes. Total weight loss during semaglutide treatment added up to 14.7% ± 8.9% (mean ± SD, p < .001) after 12 months. Categorical weight loss was >5% in 89.7% of patients, >10% in 62.1% of patients, >15% in 34.5% of patients, >20% in 24.1% of patients and >25% in 17.2% of patients. Adjunct treatment with semaglutide resulted in sustained weight loss regardless of sex, WR or IWL, and type of surgery. Among patients with prediabetes (n = 6), 12 months of treatment led to normoglycemia in all patients (p < .05). Treatment options to manage post-bariatric treatment failure are scarce.
Our results imply a clear benefit of adjunct treatment with semaglutide in post-bariatric patients over a 12 months follow-up period. abstract_id: PUBMED:27101131 Treatment of patients with type 2 diabetes and non-alcoholic fatty liver disease: current approaches and future directions. Non-alcoholic fatty liver disease (NAFLD) is reaching epidemic proportions in patients with type 2 diabetes. Patients with NAFLD are at increased risk of more aggressive liver disease (non-alcoholic steatohepatitis [NASH]) and at a higher risk of death from cirrhosis, hepatocellular carcinoma and cardiovascular disease. Dysfunctional adipose tissue and insulin resistance play an important role in the pathogenesis of NASH, creating the conditions for hepatocyte lipotoxicity. Mitochondrial defects are at the core of the paradigm linking chronic excess substrate supply, insulin resistance and NASH. Recent work indicates that patients with NASH have more severe insulin resistance and lipotoxicity compared with matched obese controls with only isolated steatosis. This review focuses on available agents and future drugs under development for the treatment of NAFLD/NASH in type 2 diabetes. Reversal of lipotoxicity with pioglitazone is associated with significant histological improvement, which occurs within 6 months and persists with continued treatment (or for at least 3 years) in patients with prediabetes or type 2 diabetes, holding potential to modify the natural history of the disease. These results also suggest that pioglitazone may become the standard of care for this population. Benefit has also been reported in non-diabetic patients. Recent promising results with glucagon-like peptide 1 receptor agonists have opened another new treatment avenue for NASH. Many agents in Phase 2-3 of development are being tested, aiming to restore glucose/lipid metabolism, ameliorate adipose tissue and liver inflammation, or to inhibit liver fibrosis. By targeting a diversity of relevant pathways, combination therapy in NASH will likely provide greater success in the future. In summary, increased clinical awareness and improved screening strategies (as currently done for diabetic retinopathy and nephropathy) are needed, to translate recent treatment progress into early treatment and improved quality of life for patients with type 2 diabetes and NASH. This review summarises a presentation given at the symposium 'The liver in focus' at the 2015 annual meeting of the EASD. It is accompanied by two other reviews on topics from this symposium (by John Jones, DOI: 10.1007/s00125-016-3940-5 , and by Hannele Yki-Järvinen, DOI: 10.1007/s00125-016-3944-1 ) and a commentary by the Session Chair, Michael Roden (DOI: 10.1007/s00125-016-3911-x ). abstract_id: PUBMED:22356575 The outcomes of glucose abnormalities in pre-diabetic chronic hepatitis C patients receiving peginterferon plus ribavirin therapy. Background/aims: Pre-diabetes is a risk factor for type 2 diabetes mellitus (DM) development. This study aimed to elucidate the impact of treatment response on sequential changes in glucose abnormalities in pre-diabetic chronic hepatitis C (CHC) patients. Methods: Chronic Hepatitis C patients with a baseline haemoglobin A1C (A1C) range 5.7-6.4% who achieved 80/80/80 adherence were prospectively recruited. All patients received current peginterferon-based recommendations. The primary outcome measurement was their A1C level at the end of follow-up (EOF). 
The interaction between variants of the IL28B gene and outcomes of glucose metabolism was also measured. Results: A total of 181 consecutive CHC patients were enrolled. The mean A1C at EOF was 5.82 ± 0.41%, which was significantly lower than the baseline level (5.93 ± 0.21%, P < 0.001). At EOF, 63 (34.8%) patients became normoglycaemic, whereas 10 (5.5%) patients developed DM. The sustained virological response (SVR) rates of the 63 normoglycaemic, 108 pre-diabetic and 10 diabetic patients at the EOF were 92.1%, 84.3% and 50%, respectively (normoglycaemics vs. diabetics P = 0.003; pre-diabetics vs. diabetics P = 0.02). Achievement of an SVR was the only predictive factor associated with normoglycaemia development at EOF by multivariate logistic regression analysis (odds ratio = 2.6, P = 0.04). The prevalence of the interleukin 28B rs8099917 TT variant in patients who developed DM (70.0%) at EOF tended to be lower than that in pre-diabetic (87.0%) or normoglycaemic (92.1%) patients. Conclusion: Successful eradication of HCV improves glucose abnormalities in pre-diabetic CHC patients. abstract_id: PUBMED:37850280 Efficacy of Sildenafil oral spray for the treatment of erectile dysfunction in patients with type 2 diabetes mellitus and prediabetes. Aim: To evaluate the results of using Sildenafil in the form of an oral spray (Gent) for the treatment of erectile dysfunction (ED) in men with type 2 diabetes mellitus (DM) and prediabetes. Material And Methods: A total of 60 patients were divided into two groups of 30 people. Group 1 included patients with prediabetes, while group 2 consisted of patients with type 2 DM. All men had proven ED. The severity of ED was assessed using the International Index of Erectile Function (IIEF-5). To assess the state of penile blood flow, all patients underwent Doppler ultrasound before and after treatment. Patients with prediabetes used Sildenafil in the form of an oral spray (Gent) 25 mg (2 doses) once per day for 1 month, while patients with type 2 diabetes received 50 mg (4 doses) every other day for 1 month. In addition, most of the subjects took metformin and followed diet therapy. Results: In patients of both groups, the administration of Sildenafil oral spray led to a decrease in body weight, waist circumference, and insulin and hemoglobin A1C levels without any change in hypoglycemic therapy in those with type 2 DM. In men with prediabetes, a decrease in fasting insulin levels was found. During treatment, half of the subjects with impaired glucose metabolism had an increase in testosterone level. According to the IIEF-5, a decrease in the severity of ED was seen in both groups of patients. In men with prediabetes, the average IIEF-5 score increased from 15.98 to 21.57 points (p<0.05), while in patients with type 2 DM it improved from 12.18 to 18.44 points (p<0.05). Doppler ultrasound indicated a significant increase in the maximum systolic blood flow velocity and arterial resistivity index after treatment with Sildenafil oral spray in patients with both prediabetes and type 2 diabetes. Conclusion: Sildenafil oral spray can be effectively used for the treatment of ED in men with type 2 DM and prediabetes. abstract_id: PUBMED:30326764 Effect of Periodontal Treatment on HbA1c among Patients with Prediabetes. Evidence is limited regarding whether periodontal treatment improves hemoglobin A1c (HbA1c) among people with prediabetes and periodontal disease, and it is unknown whether improvement of metabolic status persists >3 mo.
In an exploratory post hoc analysis of the multicenter randomized controlled trial "Antibiotika und Parodontitis" (Antibiotics and Periodontitis), a prospective, stratified, double-blind study, we assessed whether nonsurgical periodontal treatment with or without an adjunctive systemic antibiotic treatment affects HbA1c and high-sensitivity C-reactive protein (hsCRP) levels among periodontitis patients with normal HbA1c (≤5.7%, n = 218), prediabetes (5.7% < HbA1c < 6.5%, n = 101), or unknown diabetes (HbA1c ≥ 6.5%, n = 8) over a period of 27.5 mo. Nonsurgical periodontal treatment reduced mean pocket probing depth by >1 mm in both groups. In the normal HbA1c group, HbA1c values remained unchanged at 5.0% (95% CI, 4.9% to 6.1%) during the observation period. Among periodontitis patients with prediabetes, HbA1c decreased from 5.9% (95% CI, 5.9% to 6.0%) to 5.4% (95% CI, 5.3% to 5.5%) at 15.5 mo and increased to 5.6% (95% CI, 5.4% to 5.7%) after 27.5 mo. At 27.5 mo, 46% of periodontitis patients with prediabetes had normal HbA1c levels, whereas 47.9% remained unchanged and 6.3% progressed to diabetes. Median hsCRP values were reduced in the normal HbA1c and prediabetes groups from 1.2 and 1.4 mg/L to 0.7 and 0.7 mg/L, respectively. Nonsurgical periodontal treatment may improve blood glucose values among periodontitis patients with prediabetes (ClinicalTrials.gov NCT00707369). abstract_id: PUBMED:32690918 Diet and exercise in the prevention and treatment of type 2 diabetes mellitus. Evidence from observational studies and randomized trials suggests that prediabetes and type 2 diabetes mellitus (T2DM) can develop in genetically susceptible individuals in parallel with weight (that is, fat) gain. Accordingly, studies show that weight loss can produce remission of T2DM in a dose-dependent manner. A weight loss of ~15 kg, achieved by calorie restriction as part of an intensive management programme, can lead to remission of T2DM in ~80% of patients with obesity and T2DM. However, long-term weight loss maintenance is challenging. Obesity and T2DM are associated with diminished glucose uptake in the brain that impairs the satiating effect of dietary carbohydrate; therefore, carbohydrate restriction might help maintain weight loss and maximize metabolic benefits. Likewise, increases in physical activity and fitness are important contributors to T2DM remission when combined with calorie restriction and weight loss. Preliminary studies suggest that a precision dietary management approach that uses pretreatment glycaemic status to stratify patients can help optimize dietary recommendations with respect to carbohydrate, fat and dietary fibre. This approach might lead to improved weight loss maintenance and glycaemic control. Future research should focus on better understanding the individual response to dietary treatment and translating these findings into clinical practice. abstract_id: PUBMED:38164169 Treatment for type 2 diabetes and diabetic nephropathy by targeting Smad3 signaling. TGF-β/Smad3 signaling plays a critical role in type 2 diabetes (T2D) and type 2 diabetic nephropathy (T2DN), but treatment by specifically targeting Smad3 remains unexplored. To develop a new Smad3-targeted therapy for T2D and T2DN, we treated db/db mice at the pre-diabetic or established diabetic stage with the pharmacological Smad3 inhibitor SIS3. The therapeutic effect and mechanisms of anti-Smad3 treatment on T2D and T2DN were investigated.
We found that anti-Smad3 treatment on pre-diabetic db/db mice largely attenuated both T2D and T2DN by markedly reducing blood glucose levels, and inhibiting the elevated serum creatinine, microalbuminuria, and renal fibrosis and inflammation. Unexpectedly, although SIS3 treatment on the established diabetic db/db mice inhibited T2DN but did not significantly improve T2D. Mechanistically, we uncovered that inhibition of T2DN in SIS3-treated db/db mice was associated with effectively restoring the balance of TGF-β/Smad signaling by inhibiting Smad3 while increasing Smad7, thereby suppressing Smad3-mediated renal fibrosis and NF-κB-driven renal inflammation via lncRNA Erbb4-IR and LRN9884-dependent mechanisms. We also revealed that inhibition of islet β cell injury by preventing the loss of islet Pax 6 could be the mechanism through which the pre-diabetic treatment, rather than the late SIS3 treatment on db/db mice significantly improved the T2D phenotype. Answer: The evaluation of patients with type 2 diabetes mellitus (T2DM) who received treatment during the pre-diabetes period suggests that early treatment is associated with improved outcomes. A study that assessed the impact of pretreatment with oral antidiabetics (OADs) on clinical outcomes and health resource utilization among T2DM patients in the United States found that pretreatment was associated with a modest delay in initiating insulin therapy and lower total healthcare costs (PUBMED:27865163). Pretreated patients had lower mean monthly costs, both total and diabetes-related, and after multivariable adjustment, mean monthly diabetes-related total healthcare costs were higher among pretreated patients, but total all-cause monthly healthcare costs were significantly lower. This indicates that early treatment with OADs during the pre-diabetes period can lead to a delay in the progression to insulin therapy and can be cost-effective in the management of T2DM. Furthermore, the study suggests that the clinical and pharmacoeconomic benefits of pretreatment should be further investigated in a prospective study to better understand the long-term implications of early intervention in pre-diabetes (PUBMED:27865163). This aligns with the broader need for improved education, prevention, and management strategies to address the increasing rates of obesity and its impact on T2DM, as highlighted in a study on obesity in Mexico (PUBMED:29317841). In summary, early treatment during the pre-diabetes period appears to be associated with delayed progression to more advanced diabetes treatments and lower overall healthcare costs, suggesting potential benefits in both clinical outcomes and healthcare resource utilization.
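To make the adjusted cost figures in this answer easier to compare, the sketch below (illustrative only; the numbers are copied from PUBMED:27865163) annualizes the reported monthly differences and restates the time-to-insulin hazard ratio.
    # Illustrative only: annualize the adjusted monthly cost differences and
    # restate the hazard ratio reported in PUBMED:27865163.
    adjusted_monthly_diff_usd = {
        "diabetes-related total": +60,   # pretreated minus non-pretreated, USD/month
        "all-cause total": -354,
    }
    for label, monthly in adjusted_monthly_diff_usd.items():
        print(f"{label}: {monthly:+d} USD/month -> {monthly * 12:+d} USD/year")

    hazard_ratio = 0.96  # 95% CI 0.94-0.97 for time to insulin initiation
    print(f"approximate reduction in the rate of insulin initiation: {1 - hazard_ratio:.0%}")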
Instruction: Malnutrition after vascular surgery: are patients with chronic renal failure at increased risk? Abstracts: abstract_id: PUBMED:17071176 Malnutrition after vascular surgery: are patients with chronic renal failure at increased risk? Background: The deleterious effects of perioperative malnutrition on recovery after general surgery are established. Since the effects of perioperative malnutrition on recovery after vascular surgery are not known, we examined the effects of nutritional status, and risk factors predictive of malnutrition, on outcome after vascular surgery. Methods: The records of all open index vascular cases (abdominal aortic aneurysm [AAA] repair, carotid endarterectomy [CEA], lower extremity bypass) performed at the Veterans Affairs (VA) Connecticut between July 2004 and June 2005 were reviewed. The primary outcome was mortality; secondary outcomes included infection and nutritional risk index (NRI) scores. Results: Sixty-eight open vascular cases were performed during the study period. Nutritional depletion developed in 55% of patients and was more likely in patients undergoing AAA (85%) or bypass (77%) than CEA (30%; P = .0005). Patients who developed malnutrition had similar mortality as patients who did not develop postoperative malnutrition (6.1% vs. 3.7%; P = .68); however, malnourished patients had higher rates of postoperative infection (24.2% vs. 3.7%; P = .03). Chronic renal failure was the only patient-associated risk factor predictive of postoperative nutritional depletion (odds ratio 5.9, confidence interval 1.0 to 33.6; P = .04). Conclusions: Patients undergoing major open vascular surgery have high rates of postoperative malnutrition, with patients undergoing AAA repair having the highest rates of postoperative malnutrition and infection. Patients with chronic renal failure undergoing vascular surgery are associated with increased risk for postoperative malnutrition and may be a group to target for perioperative risk factor modification and nutritional supplementation. abstract_id: PUBMED:24199911 Risk factors for hospital-acquired pneumonia outside the intensive care unit: a case-control study. Background: Hospital-acquired pneumonia (HAP) is one of the leading nosocomial infections and is associated with high morbidity and mortality. Numerous studies on HAP have been performed in intensive care units (ICUs), whereas very few have focused on patients in general wards. This study examined the incidence of, risk factors for, and outcomes of HAP outside the ICU. Methods: An incident case-control study was conducted in a 600-bed hospital between January 2006 and April 2008. Each case of HAP was randomly matched with 2 paired controls. Data on risk factors, patient characteristics, and outcomes were collected. Results: The study group comprised 119 patients with HAP and 238 controls. The incidence of HAP outside the ICU was 2.45 cases per 1,000 discharges. Multivariate analysis identified malnutrition, chronic renal failure, anemia, depression of consciousness, Charlson comorbidity index ≥3, previous hospitalization, and thoracic surgery as significant risk factors for HAP. Complications occurred in 57.1% patients. The mortality attributed to HAP was 27.7%. Conclusions: HAP outside the ICU prevailed in patients with malnutrition, chronic renal failure, anemia, depression of consciousness, comorbidity, recent hospitalization, and thoracic surgery. 
HAP in general wards carries elevated morbidity and mortality and is associated with increased length of hospital stay and an increased rate of discharge to a skilled nursing facility. abstract_id: PUBMED:26539450 Patient-related medical risk factors for periprosthetic joint infection of the hip and knee. Despite advancements and improvements in methods for preventing infection, periprosthetic joint infection (PJI) is a significant complication following total joint arthroplasty (TJA). Prevention is the most important strategy to deal with this disabling complication, and prevention should begin with identifying patient-related risk factors. Medical risk factors, such as morbid obesity, malnutrition, hyperglycemia, uncontrolled diabetes mellitus, rheumatoid arthritis (RA), preoperative anemia, cardiovascular disorders, chronic renal failure, smoking, alcohol abuse and depression, should be evaluated and optimized prior to surgery. Treating patients to get laboratory values under a specified threshold or cessation of certain modifiable risk factors can decrease the risk of PJI. Although significant advances have been made in past decades to identify these risk factors, there remains some uncertainty regarding the risk factors predisposing TJA patients to PJI. Through a review of the current literature, this paper aims to comprehensively evaluate and provide a better understanding of known medical risk factors for PJI after TJA. abstract_id: PUBMED:33348716 Association between Geriatric Nutritional Risk Index and Mortality in Older Trauma Patients in the Intensive Care Unit. The geriatric nutritional risk index (GNRI) is a simple and efficient tool to assess the nutritional status of patients with malignancies or after surgery. Because trauma patients constitute a specific population that generally acquires accidental and acute injury, this study aimed to identify the association between the GNRI at admission and mortality outcomes of older trauma patients in the intensive care unit (ICU). Methods: The study population included 700 older trauma patients admitted to the ICU between 1 January 2009 and 31 December 2019. The collected data included age, sex, body mass index (BMI), albumin level at admission, preexisting comorbidities, injury severity score (ISS), and in-hospital mortality. Multivariate logistic regression analysis was conducted to identify the independent effects of univariate predictive variables resulting in mortality in our study population. The study population was categorized into four nutritional risk groups: a major-risk group (GNRI < 82; n = 128), moderate-risk group (GNRI 82 to <92; n = 191), low-risk group (GNRI 92-98; n = 136), and no-risk group (GNRI > 98; n = 245). Results: There was no significant difference in sex predominance, age, and BMI between the mortality (n = 125) and survival (n = 575) groups. The GNRI was significantly lower in the mortality group than in the survival group (89.8 ± 12.9 vs. 94.2 ± 12.0, p < 0.001). Multivariate logistic regression analysis showed that the GNRI (odds ratio [OR], 0.97; 95% confidence interval [CI], 0.95-0.99; p = 0.001), preexisting end-stage renal disease (OR, 3.6; 95% CI, 1.70-7.67; p = 0.001), and ISS (OR, 1.1; 95% CI, 1.05-1.10; p < 0.001) were significant independent risk factors for mortality. Compared with patients in the GNRI > 98 group, those in the GNRI < 82 group presented a significantly higher mortality rate (26.6% vs.
13.1%; p < 0.001) and a longer hospital stay (26.5 days vs. 20.9 days; p = 0.016). Conclusions: This study demonstrated that the GNRI is a significant independent risk factor for mortality and a promising, simple screening tool to identify malnourished subjects at higher risk of death among elderly trauma patients in the ICU. abstract_id: PUBMED:30661387 Hypoalbuminemia Is Associated With Increased Postoperative Mortality and Complications in Hand Surgery. Background: Malnutrition has been associated with increased perioperative morbidity and mortality in orthopedic surgery. This study was designed with the hypothesis that preoperative hypoalbuminemia, a marker for malnutrition, is associated with increased complications after hand surgery. Methods: A retrospective cohort study of 208 hand-specific Current Procedural Terminology codes was conducted with the American College of Surgeons National Surgical Quality Improvement Program database from 2005 to 2013. In all, 629 patients with low serum albumin were compared with 4079 patients with normal serum albumin. The effect of hypoalbuminemia was tested for association with 30-day postoperative mortality, and major and minor complications. Results: Hypoalbuminemia was independently associated with emergency surgery, diabetes mellitus, dependent functional status, hypertension, end-stage renal disease, current smoking status, and anemia. Patients with hypoalbuminemia had a higher rate of mortality, minor complications, and major complications. Conclusions: Hypoalbuminemia is associated with an increased risk of postoperative morbidity and mortality in patients undergoing hand surgery. As such, increased focus on perioperative nutrition optimization may lead to improved outcomes for patients undergoing hand surgery. abstract_id: PUBMED:33551075 Impact of preoperative nutritional scores on 1-year postoperative mortality in patients undergoing valvular heart surgery. Objective: Malnutrition is a well-recognized risk factor for poor prognosis and mortality. We investigated whether preoperative malnutrition diagnosed with objective nutritional scores affects 1-year mortality in patients undergoing valvular heart surgery. Methods: In this retrospective cohort observational study, we evaluated the association of the Controlling Nutritional Status score, Prognostic Nutritional Index, and Geriatric Nutritional Risk Index with 1-year mortality in 1927 patients undergoing valvular heart surgery. We identified factors for mortality using multivariable Cox proportional hazard analysis and investigated the utility of nutritional scores for risk stratification. Results: Malnutrition, as identified by a high Controlling Nutritional Status score and low Prognostic Nutritional Index and Geriatric Nutritional Risk Index, was significantly associated with higher 1-year mortality. The Kaplan-Meier survival curve showed that mortality significantly increased as the severity of malnutrition increased (log-rank test, P < .001). The predicted discrimination (C-index) was 0.79 with the Controlling Nutritional Status score, 0.77 with the Prognostic Nutritional Index, and 0.73 with the Geriatric Nutritional Risk Index.
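The four GNRI risk bands used in the trauma study above (PUBMED:33348716) are easy to encode; the sketch below is a minimal illustration that only reproduces the cut-offs quoted in that abstract and assumes the GNRI value itself has already been calculated elsewhere.
    # Minimal illustration: map a GNRI value to the risk bands defined in
    # PUBMED:33348716 (major <82, moderate 82 to <92, low 92-98, none >98).
    def gnri_risk_band(gnri: float) -> str:
        if gnri < 82:
            return "major risk"
        if gnri < 92:
            return "moderate risk"
        if gnri <= 98:
            return "low risk"
        return "no risk"

    assert gnri_risk_band(80.0) == "major risk"
    assert gnri_risk_band(94.2) == "low risk"   # mean GNRI of the survival group
    assert gnri_risk_band(99.0) == "no risk"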
Each nutritional index (Controlling Nutritional Status: hazard ratio, 1.31, 95% confidence interval, 1.21-1.42, P < .001), the European System for Cardiac Operative Risk Evaluation II (hazard ratio, 1.07, 95% confidence interval, 1.04-1.09, P < .001), and chronic kidney disease (hazard ratio, 2.26, 95% confidence interval, 1.31-3.90, P = .003) were independent risk factors for mortality. The Controlling Nutritional Status score added to the European System for Cardiac Operative Risk Evaluation II significantly increased the predictive discrimination ability for mortality (C-index 0.82, 95% confidence interval, 0.78-0.87, P = .014) compared with the Controlling Nutritional Status or European System for Cardiac Operative Risk Evaluation II alone. Conclusions: Preoperative malnutrition as assessed by objective nutritional scores was associated with 1-year mortality after valvular heart surgery. The Controlling Nutritional Status score had the highest predictive ability and, when added to the European System for Cardiac Operative Risk Evaluation II, provided more accurate risk stratification. abstract_id: PUBMED:29766740 Factors Predictive of Postoperative Acute Respiratory Failure Following Inpatient Sinus Surgery. Objective: The impact of perioperative risk factors on outcomes following outpatient sinus surgery is well defined; however, risk factors and outcomes following inpatient surgery remain poorly understood. We aimed to define risk factors for postoperative acute respiratory failure following inpatient sinus surgery. Methods: Utilizing data from the Nationwide Inpatient Sample Database from the years 2010 to 2014, we identified patients (≥18 years of age) with an International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) procedure code of sinus surgery. We used multivariable logistic regression to identify risk factors for postoperative acute respiratory failure. Results: We identified 4919 patients with a median age of 53 years. The rate of inpatient postoperative acute respiratory failure was 3.35%. Chronic sinusitis (57.7%) was the most common discharge diagnosis. The final multivariable logistic regression analysis suggested that pneumonia, bleeding disorder, alcohol dependence, nutritional deficiency, heart failure, paranasal fungal infections, and chronic kidney disease were associated with increased odds of acute respiratory failure (all P < .05). Conclusion: To our knowledge, this represents the first study to evaluate potential risk factors for acute respiratory failure following inpatient sinus surgery. Knowledge of these risk factors may be used for risk stratification. abstract_id: PUBMED:31226386 Refining risk adjustment for bundled payment models in cervical fusions: an analysis of Medicare beneficiaries. Background Context: The current Bundled Payment for Care Improvement model relies on the use of "Diagnosis Related Groups" (DRGs) to risk-adjust reimbursements associated with a 90-day episode of care. Three distinct DRG groups exist for defining payments associated with cervical fusions: (1) DRG-471 (cervical fusions with major comorbidity/complications), (2) DRG-472 (with comorbidity/complications), and (3) DRG-473 (without major comorbidity/complications). However, this DRG system may not be entirely suitable for controlling the large amount of cost variation seen among cervical fusions. For instance, these DRGs do not account for area/location of surgery (upper cervical vs. lower cervical), type of surgery (primary vs.
revision), surgical approach (anterior vs. posterior), extent of fusion (1-3 level vs. >3 level), and cause/indication of surgery (fracture vs. degenerative pathology). Purpose: To understand factors responsible for cost variation in a 90-day episode of care following cervical fusions. Study Design: Retrospective study of a 5% national sample of Medicare claims from the 2008 to 2014 5% Standard Analytical Files (SAF5). Outcome Measures: To calculate the independent marginal cost impact of various patient-level, geographic-level, and procedure-level characteristics on 90-day reimbursements for patients undergoing cervical fusions under DRG-471, DRG-472, and DRG-473. Methods: The 2008 to 2014 Medicare SAF5 was queried using DRG codes 471, 472, and 473 to identify patients receiving a cervical fusion. Patients undergoing noncervical fusions (thoracolumbar), surgery for deformity/malignancy, and/or combined anterior-posterior fusions were excluded. Patients with missing data and/or those who died within 90 days of the postoperative follow-up period were excluded. Multivariate linear regression modeling was performed to assess the independent marginal cost impact of DRG, gender, age, state, procedure-level factors (including cause/indication of surgery), and comorbidities on total 90-day reimbursement. Results: Following application of inclusion/exclusion criteria, a total of 12,419 cervical fusions were included. The average 90-day reimbursement for each DRG group was as follows: (1) DRG-471=$54,314±$32,643, (2) DRG-472=$28,535±$17,271, and (3) DRG-473=$18,492±$10,706. The risk-adjusted 90-day reimbursement of a nongeriatric (age <65) female, with no major comorbidities, undergoing a primary 1- to 3-level anterior cervical fusion for degenerative cervical spine disease was $14,924±$753. Male gender (+$922) and age 70 to 84 (+$1,007 to +$2,431) were associated with significant marginal increases in 90-day reimbursements. Undergoing upper cervical surgery (-$1,678) had a negative marginal cost impact. Among other procedure-level factors, posterior approach (+$3,164), >3-level fusion (+$2,561), interbody (+$667), use of intra-operative neuromonitoring (+$1,018), concurrent decompression/laminectomy (+$1,657), and undergoing fusion for cervical fracture (+$3,530) were associated with higher 90-day reimbursements. Severe individual comorbidities were associated with higher 90-day reimbursements, with malnutrition (+$15,536), CVA/stroke (+$6,982), drug abuse/dependence (+$5,059), hypercoagulopathy (+$5,436), and chronic kidney disease (+$4,925) having the highest marginal cost impacts. Significant state-level variation was noted, with Maryland (+$8,790), Alaska (+$6,410), Massachusetts (+$6,389), California (+$5,603), and New Mexico (+$5,530) having the highest reimbursements and Puerto Rico (-$7,492) and Iowa (-$3,393) having the lowest reimbursements, as compared with Michigan. Conclusions: The current cervical fusion bundled payment model fails to employ a robust risk adjustment of prices, resulting in the large amount of cost variation seen within 90-day reimbursements. Under the proposed DRG-based risk adjustment model, providers would be reimbursed the same amount for cervical fusions regardless of the surgical approach (posterior vs. anterior), the extent of fusion, use of adjunct procedures (decompressions), and cause/indication of surgery (fracture vs. degenerative pathology), despite each of these factors having different resource utilization and associated reimbursements.
Our findings suggest that defining payments based on DRG codes only is an imperfect way of employing bundled payments for spinal fusions and will only end up creating major financial disincentives and barriers to access of care in the healthcare system. abstract_id: PUBMED:26984667 Outcome Predictors in Prosthetic Joint Infections: Validation of a risk stratification score for Prosthetic Joint Infections in 120 cases. Prosthetic joint infections are a major challenge in total joint arthroplasty, especially in times of accumulating drug resistance. Even though predictive risk classifications are a widely accepted tool for defining a suitable treatment protocol, a classification that considers the difficulty of treating the causative pathogen antibiotically is still missing. In this study, we present and evaluate a new predictive risk stratification for prosthetic joint infections in 120 cases treated with a two-stage exchange. Treatment outcomes in 120 patients with proven prosthetic joint infections in hip and knee prostheses were regressed on time of infection, systemic risk factors, local risk factors and the difficulty of treating the causative pathogen. The main outcome variable was "definitely free of infection" after two years, as published. Age, gender, and BMI were included as covariables and analyzed in a logistic regression model. A total of 66 male and 54 female patients, with a mean age at surgery of 68.3 ± 12.0 years and a mean BMI of 26.05 ± 6.21, were included in our survey and followed for 29.0 ± 11.3 months. We found a significant association (p<0.001) between our score and the outcome parameters evaluated. Age, gender and BMI did not show a significant association with the outcome. These results show that our score is an independent and reliable predictor of the cure rate in prosthetic joint infections in hip and knee prostheses treated within a two-stage exchange protocol. Our score illustrates that there is a statistically significant, sizable decrease in cure rate as the score increases. In patients with prosthetic joint infections, the validation of a risk score may help to identify patients with local and systemic risk factors or with infectious organisms identified as "difficult to treat" prior to the treatment or the decision about the treatment concept. Thus, appropriate extra care should be considered and provided. abstract_id: PUBMED:31882148 Refining Risk-Adjustment of 90-Day Costs Following Surgical Fixation of Ankle Fractures: An Analysis of Medicare Beneficiaries. As the current healthcare model transitions from fee-for-service to value-based payments, identifying cost-drivers of 90-day payments following surgical procedures will be a key factor in risk-adjusting prospective bundled payments and ensuring success of these alternative payment models. The 5% Medicare Standard Analytical Files data set for 2005-2014 was used to identify patients undergoing open reduction and internal fixation (ORIF) for isolated unimalleolar, bimalleolar, and trimalleolar ankle fractures. All acute care and post-acute care payments starting from day 0 of surgery to day 90 postoperatively were used to calculate 90-day costs. Patients with missing data were excluded. Multivariate linear regression modeling was used to derive the marginal cost impact of patient-level (age, sex, and comorbidities), procedure-level (fracture type, morphology, location of surgery, concurrent ankle arthroscopy, and syndesmotic fixation), and state-level factors on 90-day costs after surgery.
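Both cost studies cited here (PUBMED:31226386, PUBMED:31882148) derive "marginal cost impacts" from multivariate linear regression; the sketch below shows, with entirely hypothetical column names and toy data rather than the studies' actual variables, how such coefficients are typically obtained.
    # Hypothetical sketch of a marginal-cost regression; the toy data and
    # column names are not from the cited studies.
    import pandas as pd
    import statsmodels.formula.api as smf

    episodes = pd.DataFrame({
        "payment_90d":  [6900, 12400, 9800, 15800, 7400, 18900, 8100, 14300],
        "inpatient":    [0, 1, 0, 1, 0, 1, 0, 1],
        "ckd":          [0, 1, 0, 1, 0, 0, 1, 0],
        "malnutrition": [0, 0, 1, 1, 0, 1, 0, 1],
        "state":        ["MI", "NV", "MI", "MA", "MI", "NV", "MA", "MI"],
    })

    model = smf.ols("payment_90d ~ inpatient + ckd + malnutrition + C(state)", data=episodes).fit()
    # Each fitted coefficient is read as the additive (marginal) cost impact of
    # that factor on the 90-day payment, holding the other covariates constant.
    print(model.params)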
A total of 6499 patients were included in the study. The risk-adjusted 90-day cost for a female patient, aged 65 to 69 years, undergoing outpatient ORIF for a closed unimalleolar ankle fracture in Michigan was $6949 ± $1060. Individuals aged <65 or ≥70 years had significantly higher costs. Procedure-level factors associated with significant marginal cost increases were inpatient surgery (+$5577), trimalleolar fracture (+$1082), and syndesmotic fixation (+$2822). The top 5 comorbidities with the largest marginal cost increases were chronic kidney disease (+$8897), malnutrition (+$7908), obesity (+$5362), cerebrovascular disease/stroke (+$4159), and anemia (+$3087). Higher costs were seen in Nevada (+$6371), Massachusetts (+$4497), Oklahoma (+$4002), New Jersey (+$3802), and Maryland (+$3043) compared with Michigan. With the use of a national administrative claims database, the study identifies numerous patient-level, procedure-level, and state-level factors that significantly contribute to the cost variation seen in 90-day payments after ORIF for ankle fracture. Risk adjustment of 90-day costs will become a necessity as bundled-payment models begin to take over the current fee-for-service model in patients with fractures. Answer: Yes, patients with chronic renal failure are at increased risk of malnutrition after vascular surgery. The study by PUBMED:17071176 found that chronic renal failure was the only patient-associated risk factor predictive of postoperative nutritional depletion, with an odds ratio of 5.9 (95% confidence interval 1.0 to 33.6). This indicates that patients with chronic renal failure undergoing vascular surgery face an increased risk of postoperative malnutrition and may benefit from perioperative risk factor modification and nutritional supplementation.
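The odds ratio quoted in this answer (5.9, 95% confidence interval 1.0 to 33.6) comes from the study's multivariable model; as a purely illustrative aside, the sketch below shows how a crude odds ratio and Wald confidence interval are computed from a 2x2 table, using made-up counts rather than the study's data.
    # Illustrative only: crude odds ratio and Wald 95% CI from a 2x2 table.
    # The counts are invented; the study's adjusted OR (5.9, CI 1.0-33.6) came
    # from a multivariable model and will not match a crude calculation.
    import math

    def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
        """a/b: exposed with/without outcome; c/d: unexposed with/without outcome."""
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lower = math.exp(math.log(or_) - z * se_log_or)
        upper = math.exp(math.log(or_) + z * se_log_or)
        return or_, (lower, upper)

    print(odds_ratio_ci(8, 4, 30, 88))  # hypothetical counts; OR comes out near 5.9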
Instruction: Gyrus plasmasect: is it better than monopolar transurethral resection of prostate? Abstracts: abstract_id: PUBMED:32223095 Monopolar transurethral enucleation and resection of the prostate: Status quo of its application and studies Transurethral enucleation of the prostate allows more complete excision of the proliferative glands at the anatomical level, and has its unique advantages over the traditional surgical procedures, such as better results of surgery, lower recurrence rate, and higher satisfaction of the patients. At present, transurethral laser enucleation of the prostate has a limited application in many grass-root hospitals for the high price of laser and plasma equipment and a high incidence rate of postoperative urinary incontinence. In this context, monopolar transurethral enucleation and resection of the prostate (mTUERP) has come into the attention of clinicians, which can be performed with the equipment for transurethral resection of the prostate (TURP) and may become a real alternative of TURP. This paper presents an overview on the development and present status of mTUERP. abstract_id: PUBMED:15539847 Gyrus plasmasect: is it better than monopolar transurethral resection of prostate? Introduction: This randomized prospective study was conducted to compare the efficacy and safety of the Gyrus Plasmasect loop bipolar transurethral resection of prostate (TURP) and conventional monopolar TURP in the treatment of benign prostatic hyperplasia (BPH). Materials And Methods: A total of 117 men were enrolled in this study. Fifty-eight patients underwent Gyrus Plasmasect TURP and 59 patients underwent monopolar TURP. They were followed up for 3 months after surgery. Results: Significant improvements were seen postoperatively in both the Gyrus and monopolar groups in terms of prostatic volume, International Prostate Symptom Score, quality of life score, peak flow rate, and post-void residual urine volume. However, the degree of improvement was not statistically different between the 2 groups. Significantly less blood loss, shorter postoperative catheterization time and length of hospital stay were seen in the Gyrus group. Conclusions: Gyrus Plasmasect TURP yielded comparable results to monopolar TURP; however, this is only a preliminary study and follow-up is necessary to assess its long-term efficacy. abstract_id: PUBMED:24485082 Bipolar versus monopolar transurethral resection of the prostate: a prospective randomized study Purpose: To compare bipolar with standard monopolar transurethral resection of the prostate (TURP). Material And Methods: A prospectively randomized study was conducted between January 2010 and September 2011. Primary end points studied were efficacy (maximum flow rate [Qmax], International Prostate Symptom Score) and safety (adverse events, decline in postoperative serum sodium [Na+] and haemoglobin [Hb] levels). Secondary end points were operation time and duration of irrigation, catheterization, and hospitalization. Results: Sixty consecutive patients were randomized and completed the study, with 29 patients in the monopolar TURP group and 31 in the TURIS group. At baseline, the two groups were comparable in age, prostate volume, mean prostate-specific antigen value, International Prostate Symptom Score, and they had at least 12 months of follow-up. Declines in the mean postoperative serum Na+ for bipolar and monopolar TURP groups were 1.2 and 8.7 mmol/L, respectively. 
However, there was no statistical difference in the decline in postoperative Hb between the two groups. The mean catheterization time was 26.6 and 52 hours in the bipolar and standard groups, respectively. This difference was statistically significant as was the difference in the time to hospital discharge. The IPSS and Qmax improvements were comparable between the two groups at 12 months of follow-up. Conclusion: No clinically relevant differences in short-term efficacy are existed between the two techniques, but bipolar TURP is preferable due to a more favorable safety profile and shorter catheterization duration. abstract_id: PUBMED:25371612 Day care monopolar transurethral resection of prostate: Is it feasible? Introduction: Benign prostatic hyperplasia is a common disease accounting for 30% of our OPD cases and about 25% of our surgery cases. Various treatment options are now available for more efficient care and early return to work. We wanted to determine the safety and feasibility of day care monopolar transurethral resection of prostate (m-TURP), by admitting the patients on the day of surgery and discharging the patient without catheter on the same day. We also compared the morbidity associated with conventional TURP where in the catheter is removed after 24-48 h of surgery and day care TURP where in the catheter is removed on the day of surgery. Materials And Methods: A total of 120 patients who fulfilled the criteria were included in the study which was conducted between November 2008 and December 2010. A total of 60 patients were assigned for day care and 60 for conventional monopolar TURP. There was no significant difference in age, prostatic volume or IPSS score. Day care patients were admitted on day of surgery and discharged the same day after the removal of catheter. Results: Both the groups were comparable in outcome. Stricture rate was less with day care TURP. Mean catheterization time was similar to laser TURP. Conclusion: Monopolar TURP is still the gold standard of care for BPH. If cases are selected properly and surgery performed diligently it remains the option of choice for small and medium sized glands and patients can be back to routine work early. abstract_id: PUBMED:22990062 Bipolar transurethral resection of the prostate causes deeper coagulation depth and less bleeding than monopolar transurethral prostatectomy. Objective: To investigate the hemostatic capability of mono- and bipolar transurethral resection of the prostate by comparing the perioperative blood loss with the coagulation depth achieved with mono- and bipolar transurethral resection of the prostate. Methods: A total of 136 patients with lower urinary tract symptoms associated with benign prostatic hyperplasia were randomized to undergo transurethral resection of the prostate using either a monopolar system (Karl Storz, Co., Tuttlingen, Germany) or a gyrus PlasmaKinetic bipolar system (Gyrus-ACMI Corporation, Maple Grove, MN). The operative time, resected tissue weight, decline in serum sodium and hemoglobin, postoperative bleeding, and the coagulation depth were compared. Results: There were no statistically significant differences in operative time, resected tissue weight, and capsular perforation. 
The declines in hemoglobin and serum sodium were 1.15 ± 0.53 g/dL and 4.57 ± 0.71 mmol/L, respectively, in the monopolar transurethral resection of the prostate group, whereas they were only 0.71 ± 0.42 g/dL and 2.02 ± 0.53 mmol/L, respectively, in the bipolar transurethral resection of the prostate group (P < .001). The rate of postoperative bleeding was significantly higher in the monopolar transurethral resection of the prostate group (P = .027). The coagulation depths with mono- and bipolar transurethral resection of the prostate were 127.56 ± 27.76 and 148.48 ± 31.64 μm, respectively (P < .001). Conclusion: Our results demonstrate that bipolar transurethral resection of the prostate causes less intraoperative hemoglobin drop and postoperative bleeding than monopolar transurethral resection of the prostate, which may be associated with the deeper coagulation depth of bipolar transurethral resection of the prostate. abstract_id: PUBMED:33198500 Safety and Efficacy of Bipolar Transurethral Resection of the Prostate vs Monopolar Transurethral Resection of Prostate in the Treatment of Moderate-Large Volume Prostatic Hyperplasia: A Systematic Review and Meta-Analysis. Aims: To compare outcomes of monopolar vs bipolar transurethral resection of the prostate (TURP) in the management of exclusively moderate-large volume prostatic hyperplasia in terms of maximum flow rate as a surrogate for clinical efficacy, duration of catheterization, hospital stay, operative time, resection weight, transurethral resection (TUR) syndrome, acute urinary retention (AUR), clot retention, and blood transfusion. Methods: We conducted a search of electronic databases (PubMed, MEDLINE, EMBASE, CINAHL, and CENTRAL), identifying studies comparing the outcomes of monopolar and bipolar TURP in the management of large-volume prostatic hyperplasia. The Cochrane risk-of-bias tool for randomized controlled trials (RCTs) and the Newcastle-Ottawa scale for observational studies were used to assess included studies. Random effects modeling was used to calculate pooled outcome data. Results: Three RCTs and four observational studies were identified, enrolling 496 patients. No difference was observed in clinical efficacy between the two procedures at 3 months postoperatively (p = 0.99), 6 months (p = 0.46), and 12 months (p = 0.29). The use of bipolar TURP was associated with a significantly shorter inpatient stay (p = 0.01) and a shorter duration of catheterization (p = 0.05). Monopolar TURP was associated with an increased risk of TUR syndrome (p = 0.03). Operative time (p = 0.58), resection weight (p = 0.16), AUR (p = 0.96), clot retention (p = 0.79), and blood transfusion (p = 0.39) were similar in both groups. Conclusion: Our meta-analysis demonstrated that bipolar TURP in the treatment of moderate-large volume prostatic disease may be associated with a significantly lower rate of TUR syndrome and a shortened length of hospital stay, with similar efficacy when compared with monopolar TURP. Further high-quality RCTs with adequate sample sizes are required to compare both monopolar and bipolar TURP to open prostatectomy or laser enucleation in the treatment of exclusively large-volume prostates with a stricter definition of size. abstract_id: PUBMED:34466330 Monopolar Transurethral Resection of Prostate for Benign Prostatic Hyperplasia in Patients With and Without Preoperative Urinary Catheterization: A Prospective Comparative Study.
Background: A significant proportion of patients undergo surgery for benign prostatic hyperplasia following acute urinary retention. Studies have reported conflicting results of improvement following transurethral surgery in these patients. Objective: To compare perioperative complications and postoperative voiding parameters in patients undergoing monopolar transurethral resection of prostate with and without preoperative Foley catheterization. Methods: A prospective non-randomized study was conducted in patients undergoing monopolar transurethral resection of prostate for symptomatic benign prostatic hyperplasia. Patients were divided into those with Foley catheterization preoperatively (n=52) and those without catheters (n=90). Change in hemoglobin level, the resected volume of prostate, complications and the need for postoperative catheterization were compared. Postoperative symptom score (using the International Prostate Symptom Score), maximum flow rate and post-void residual volume were assessed at three months of follow-up. Results: The mean operative duration, length of stay and resected volume were higher in patients with catheters; however, no significant differences were noted for mean hemoglobin level change and the need for postoperative recatheterization. Three patients in each group required recatheterization, and all were catheter-free at one week postoperatively. Complications developed in 16.1% (n=23), most of them Clavien grade I. Patients with catheters had a lower postoperative maximum flow rate than those without (16.90 vs 19.75 mL/sec). Patients with catheters had a significantly better postoperative quality of life and symptom score. Conclusion: Monopolar transurethral resection of prostate in patients with a preoperative per-urethral Foley catheter for acute urinary retention yielded similar postoperative voiding parameters and comparable complication rates to those without a catheter. abstract_id: PUBMED:16413335 Gyrus bipolar versus standard monopolar transurethral resection of the prostate: a randomized prospective trial. Objectives: To compare bipolar plasmakinetic (PK) with standard monopolar transurethral resection of the prostate (TURP). Methods: A total of 70 patients were prospectively randomized into two groups: 35 patients underwent PK TURP with the Gyrus device, and 35 patients underwent standard monopolar TURP. We evaluated the time to catheter removal and hospital discharge, operating time, blood loss, postoperative irrigation, complications, urinary flow rates, symptom relief, and postvoid residual volumes. Results: At baseline, the study groups were comparable in age, prostate volume, mean prostate-specific antigen value, International Prostate Symptom Score, quality-of-life score, flow rate, and postvoid residual volume. The mean catheterization time was 72 and 100 hours in the PK and standard groups, respectively. This difference was statistically significant (P < 0.05), as was the difference in the time to hospital discharge. No difference was found in the mean resection time, amount of resected tissue, or variations in hemoglobin and sodium levels. The improvement in flow rate, postvoid residual volume, International Prostate Symptom Score, and quality-of-life score was comparable between the two groups at 12 months of follow-up. Conclusions: In our experience, PK TURP showed comparable perioperative results to those obtained with standard TURP, but with more favorable postoperative outcomes.
The resection time and blood loss were similar between the two groups, but the need for continuous bladder irrigation after surgery and time to catheter removal and hospital discharge were significantly shorter in the PK group. abstract_id: PUBMED:18721041 Technological advances in transurethral resection of the prostate: bipolar versus monopolar TURP. One of the most significant recent advancements in transurethral resection of the prostate (TURP) is the incorporation of bipolar technology. Bipolar circuitry allows TURP to be performed in a normal saline environment, which addresses a fundamental concern of conventional monopolar TURP (i.e., the use of hypo-osmolar irrigation). As a result, the risks of dilutional hyponatremia and transurethral resection (TUR) syndrome are eliminated, allowing for longer and safer resection. This review discusses the principles and applications of electrosurgery in conventional monopolar as well as new bipolar saline-based TURP systems. This review also addresses the positive impact on patient safety and resident training. abstract_id: PUBMED:18090490 Bipolar transurethral resection of prostate: a new reference standard? Purpose Of Review: Transurethral resection of prostate is the gold standard for the surgical management of benign prostate hyperplasia. Bipolar devices allow transurethral resection of prostate with saline irrigation, which lessens water intoxication and negates the need for diathermy pad and unwanted stimulation of the obturator nerves and cardiac devices. Several randomized clinical trials compare the various bipolar devices with conventional monopolar ones. For this review, we search all peer-reviewed published literature databases and present the evidence from them to substantiate its advantages and disadvantages. Recent Findings: Of the various types of bipolar devices, Gyrus has the longest clinical experience. Bipolar transurethral resection of prostate overcomes the shortcomings of bipolar transurethral prostate vaporization, which includes the absence of histology, postop irritative urinary symptoms and nondurable clinical outcomes. With bipolar transurethral resection of prostate, there is lesser bleeding which leads to shorter resection time and lower fluid absorption. This also enables shorter cathterization time and hospital stay. Transurethral resection syndrome has not been observed. Summary: Bipolar transurethral resection of prostate has demonstrated similar clinical efficacy as monopolar transurethral resection of prostate with shorter catheterization and hospital stay. It eliminates the occurrence of transurethral resection syndrome and minimizes bleeding risk. Long term outcomes from these randomized clinical trials will determine the durability of its clinical efficacy and incidence of urethral strictures. Answer: The Gyrus Plasmasect, which is a bipolar transurethral resection of the prostate (TURP) system, has been compared to the conventional monopolar TURP in several studies. According to the available abstracts, the Gyrus Plasmasect TURP has shown comparable results to monopolar TURP in terms of improvements in prostatic volume, International Prostate Symptom Score, quality of life score, peak flow rate, and post-void residual urine volume. However, the degree of improvement was not statistically different between the two groups (PUBMED:15539847). 
Additionally, the Gyrus Plasmasect TURP was associated with significantly less blood loss, shorter postoperative catheterization time, and a shorter length of hospital stay compared to monopolar TURP (PUBMED:15539847). Another study comparing Gyrus bipolar TURP with standard monopolar TURP found that the bipolar technique had more favorable postoperative outcomes, with significantly shorter catheterization time and time to hospital discharge, although the resection time and blood loss were similar between the two groups (PUBMED:16413335). Bipolar TURP, in general, has been shown to have a more favorable safety profile, with less intraoperative hemoglobin drop and postoperative bleeding than monopolar TURP, which may be associated with the deeper coagulation depth of bipolar TURP (PUBMED:22990062). Moreover, bipolar TURP eliminates the risks of dilutional hyponatremia and transurethral resection (TUR) syndrome, allowing for longer and safer resection (PUBMED:18721041). In summary, while the clinical efficacy of Gyrus Plasmasect TURP is comparable to that of monopolar TURP, it offers advantages in terms of safety, reduced blood loss, and shorter hospital stays. These benefits suggest that Gyrus Plasmasect TURP may be considered a better option than monopolar TURP in certain aspects, particularly related to perioperative outcomes and patient recovery (PUBMED:15539847; PUBMED:16413335; PUBMED:22990062; PUBMED:18721041).
Instruction: Does treating obesity stabilize chronic kidney disease? Abstracts: abstract_id: PUBMED:15955257 Does treating obesity stabilize chronic kidney disease? Background: Obesity is a growing health issue in the Western world. Obesity, as part of the metabolic syndrome adds to the morbidity and mortality. The incidence of diabetes and hypertension, two primary etiological factors for chronic renal failure, is significantly higher with obesity. We report a case with morbid obesity whose renal function was stabilized with aggressive management of his obesity. Case Report: A 43-year old morbidly obese Caucasian male was referred for evaluation of his chronic renal failure. He had been hypertensive with well controlled blood pressure with a body mass index of 46 and a baseline serum creatinine of 4.3 mg/dl (estimated glomerular filtration rate of 16 ml/min). He had failed all conservative attempts at weight reduction and hence was referred for a gastric by-pass surgery. Following the bariatric surgery he had approximately 90 lbs. weight loss over 8-months and his serum creatinine stabilized to 4.0 mg/dl. Conclusion: Obesity appears to be an independent risk factor for renal failure. Targeting obesity is beneficial not only for better control of hypertension and diabetes, but also possibly helps stabilization of chronic kidney failure. abstract_id: PUBMED:26594197 Fibroblast Growth Factor 21 Analogs for Treating Metabolic Disorders. Fibroblast growth factor (FGF) 21 is a member of the endocrine FGF subfamily. FGF21 expression is induced under different disease conditions, such as type 2 diabetes, obesity, chronic kidney diseases, and cardiovascular diseases, and it has a broad spectrum of functions in regulating various metabolic parameters. Many different approaches have been pursued targeting FGF21 and its receptors to develop therapeutics for treating type 2 diabetes and other aspects of metabolic conditions. In this article, we summarize some of these key approaches and highlight the potential challenges in the development of these agents. abstract_id: PUBMED:25633118 Treating COPD in Older and Oldest Old Patients. The treatment of older and oldest old patients with COPD poses several problems and should be tailored to specific outcomes, such as physical functioning. Indeed, impaired homeostatic mechanisms, deteriorated physiological systems, and limited functional reserve mainly contribute to this complex scenario. Therefore, we reviewed the main difficulties in managing therapy for these patients and possible remedies. Inhaled long acting betaagonists (LABA) and anticholinergics (LAMA) are the mainstay of therapy in stable COPD, but it should be considered that pharmacological response and safety profile may vary significantly in older patients with multimorbidity. Their association with inhaled corticosteroids is recommended only for patients with severe or very severe airflow limitation or with frequent exacerbations despite bronchodilator treatment. In hypoxemic patients, long-term oxygen therapy (LTOT) may improve not only general comfort and exercise tolerance, but also cognitive functions and sleep. Nonpharmacological interventions, including education, physical exercise, nutritional support, pulmonary rehabilitation and telemonitoring can importantly contribute to improve outcomes. Older patients with COPD should be systematically evaluated for the presence of risk factors for non-adherence, and the inhaler device should be chosen very carefully. 
Comorbidities, such as cardiovascular diseases, chronic kidney disease, osteoporosis, obesity, cognitive, visual and auditory impairment, may significantly affect treatment choices and should be scrutinized. Palliative care is of paramount importance in end-stage COPD. Finally, treatment of COPD exacerbations has also been reviewed. Therapeutic decisions should be founded on a careful assessment of cognitive and functional status, comorbidity, polypharmacy, and age-related changes in pharmacokinetics and pharmacodynamics in order to minimize adverse drug events, drug-drug or drug-disease interactions, and non-adherence to treatment. abstract_id: PUBMED:22239110 Treating the obese dialysis patient: challenges and paradoxes. Obesity is a major epidemic in the general population and has added unique challenges to renal replacement therapy as choice of access, dialysis adequacy, and preparation for kidney transplantation may all be affected. There are few clinical studies on managing obese patients with end-stage renal disease (ESRD) and no accepted strategies for the variety of problems encountered in this population. Attempts at weight loss are generally advisable to prevent obesity-related surgical complications and improve patient and graft survival after kidney transplantation. This article reviews the unique aspects of managing obese patients with ESRD. abstract_id: PUBMED:37543535 A Systematic Approach to Treating Early Metabolic Disease and Prediabetes. At least 70% of US adults have metabolic disease. However, less is done to address early disease (e.g., overweight, obesity, prediabetes) versus advanced disease (e.g., type 2 diabetes mellitus, coronary artery disease). Given the burden of advanced metabolic disease and the burgeoning pandemics of obesity and prediabetes a systematic response is required. To accomplish this, we offer several recommendations: (A) Patients with overweight, obesity, and/or prediabetes must be consistently diagnosed with these conditions in medical records to enable population health initiatives. (B) Patients with early metabolic disease should be offered in-person or virtual lifestyle interventions commensurate with the findings of the Diabetes Prevention Program. (C) Patients unable to participate in or otherwise failing lifestyle intervention must be screened to assess if they require pharmacotherapy. (D) Patients not indicated for, refusing, or failing pharmacotherapy must be screened to assess if they need bariatric surgery. (E) Regardless of treatment approach or lack of treatment, patients must be consistently screened for the progression of early metabolic disease to advanced disease to enable early control. Progression of metabolic disease from an overweight yet otherwise healthy person includes the development of prediabetes, obesity ± prediabetes, dyslipidemia, hypertension, type 2 diabetes, chronic kidney disease, coronary artery disease, and heart failure. Systematic approaches in health systems must be deployed with clear protocols and supported by streamlined technologies to manage their population's metabolic health from early through advanced metabolic disease. Additional research is needed to identify and validate optimal system-level interventions. Future research needs to identify strategies to roll out systematic interventions for the treatment of early metabolic disease and to improve the metabolic health among the progressively younger patients being impacted by obesity and diabetes.
abstract_id: PUBMED:25439537 The primary care physician/nephrologist partnership in treating chronic kidney disease. Chronic kidney disease (CKD) continues to be an ever-increasing health problem in the United States and elsewhere. Diabetes mellitus and hypertension remain the primary causes, and much of this is related to increased rates of obesity. Studies have demonstrated that early referral to a nephrologist can be life-saving and can also markedly improve quality of life. Besides recommending treatments for CKD, early referral can assist in medication management and in minimizing exposure to potential nephrotoxins. In patients who progress to end-stage renal disease, having an established patient-PCP-nephrologist relationship can ease the transition to renal replacement therapy or transplantation. abstract_id: PUBMED:30294952 Treating diabetic complications; from large randomized clinical trials to precision medicine. In the last decades, many large randomized controlled trials have been conducted to assess the efficacy and safety of new interventions for the treatment of diabetic kidney disease (DKD). Unfortunately, these trials failed to demonstrate additional kidney or cardiovascular protection. One of the explanations for the failure of these trials appears to be the large variation in drug response between individual patients. All trials to date tested a drug which was targeted to a large heterogeneous population assuming that every individual will show a similar beneficial respond to the drug. Post hoc analyses from the past clinical trials, however, suggest that individual patients show a marked variation in drug response. This highlights the need to personalize treatment taking proper account of the characteristics and preferences of individual patients. Transitioning to a personalized therapy approach will have implications for clinical trial designs, drug registration and its use in clinical practice. Successful implementation of personalized medicine thus requires engagement of multiple stakeholders including academic community, pharmaceutical industry, regulatory agencies, health policy makers, physicians and patients. This supplement of Diabetes Obesity and Metabolism provides a summary on the state-of-the-art of personalized medicine in diabetic kidney disease from the views of various stakeholders. abstract_id: PUBMED:29571623 Real-world evidence of superiority of endovascular repair in treating ruptured abdominal aortic aneurysm. Objective: The majority of previous studies, including randomized controlled trials, have failed to provide sufficient evidence of superiority of endovascular aneurysm repair (EVAR) over open aortic repair (OAR) of ruptured abdominal aortic aneurysm (rAAA) while comparing mortality and complications. This is in part due to small study size, patient selection bias, scarce adjustment for essential variables, single insurance type, or selection of only older patients. This study aimed to provide real-world, contemporary, comprehensive, and robust evidence on mortality of EVAR vs OAR of rAAA. Methods: A retrospective observational cohort study was performed of rAAA patients registered in the Premier Healthcare Database between July 2009 and March 2015. A multivariate logistic regression model was operated to estimate the association between procedure types (OAR vs EVAR) and in-hospital mortality. 
The final model was adjusted for demographics (age, sex, race, marital status, and geographic region), hospital characteristics (urban or rural, teaching or not), and potential confounders (hypertension, diabetes, hypercholesterolemia, obesity, ischemic heart disease, chronic kidney disease, symptoms of critical limb ischemia, chronic obstructive pulmonary disease, smoking, and alcoholism). Furthermore, coarsened exact matching was applied to substantiate the result in the matched cohort. Results: There were a total of 3164 patients with rAAA (1550 [49.0%] OAR and 1614 [51.0%] EVAR). Mortality was 23.79% in the EVAR group compared with 36.26% in the OAR group (P < .001). The adjusted odds ratios of mortality (1.91; 95% confidence interval [CI], 1.62-2.25; P < .001), cardiac complication (1.54; 95% CI, 1.22-1.96; P < .001), pulmonary failure (1.90; 95% CI, 1.60-2.24; P < .001), renal failure (1.90; 95% CI, 1.61-2.23; P < .001), and bowel ischemia (2.40; 95% CI, 1.70-3.35; P < .001) were significantly higher after OAR compared with EVAR. We further applied coarsened exact matching, which followed the same pattern of mortality (odds ratio, 1.68; 95% CI 1.41-1.99; P < .001) and all major complications. Conclusions: Although the choice of repair of rAAA is highly dependent on the experience of the operating team and the anatomic suitability of the patient, this contemporary analysis of a large cohort of rAAA showed significantly higher adjusted risk of mortality in OAR compared with EVAR and substantially higher complications. abstract_id: PUBMED:3615318 Gynecomastia. A worrisome problem for the patient. Gynecomastia is common in obese or elderly men. Drug-induced breast enlargement is also frequent, whereas other causes of gynecomastia are relatively uncommon. Standardized measurement of breast tissue should be routinely performed in male patients. Diagnostic evaluation should be individualized on the basis of clinical suspicion. Therapy most often involves treating an underlying condition or discontinuing use of an offending drug. abstract_id: PUBMED:22776307 Laparoscopic Roux-en-Y gastric bypass in a morbidly obese patient with renal transplant: a case report. Renal transplant is the only curative treatment for end-stage renal disease. As diabetes and obesity are the major causes of graft failure and post-transplant complication, it is important to manage obesity in patients with renal transplant. Herein, we report a case of a morbidly obese renal-transplant patient with poorly controlled diabetes who received bariatric surgery. A 34-year-old obese Taiwanese man with type 2 diabetes had end-stage renal disease that had progressed since 2008, when he had commenced hemodialysis (January 2008) and had a renal transplant (July 2008). Because of persistent obesity and poorly controlled diabetes, he received LRYGB at Chiayi Christian hospital on 18 August 2010. In the month that followed, he lost 10 kg. His serum creatinine decreased to 1.11 mg/dL (1.4 mg/dL, preoperative) and his hemoglobin A1c decreased to 8.5% (10.4%, preoperative). These results indicate that, in obese renal transplant patients, LRYGB may be employed to treat obesity, control diabetes and stabilize or improve the renal function. Answer: Yes, treating obesity can stabilize chronic kidney disease (CKD).
A case report of a 43-year-old morbidly obese man with chronic renal failure showed that, after aggressive management of his obesity through gastric bypass surgery, he lost substantial weight and his serum creatinine stabilized, suggesting stabilization of his renal function (PUBMED:15955257). This supports the notion that obesity is an independent risk factor for renal failure and that targeting obesity can help stabilize chronic kidney disease. Additionally, in obese patients with end-stage renal disease (ESRD), attempts at weight loss are advisable to prevent obesity-related surgical complications and to improve patient and graft survival after kidney transplantation (PUBMED:22239110). Furthermore, a systematic approach to treating early metabolic disease and prediabetes, which includes addressing obesity, can help manage the progression of metabolic disease to advanced disease, such as type 2 diabetes and chronic kidney disease (PUBMED:37543535).
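As a side note on the numbers in PUBMED:15955257: the reported serum creatinine of 4.3 mg/dl and estimated glomerular filtration rate of roughly 16 ml/min are consistent with the four-variable MDRD study equation. A minimal sketch, assuming that equation, is given below; the function name and rounding are illustrative only, and validated clinical calculators should be used in practice.

```python
def egfr_mdrd(creatinine_mg_dl: float, age_years: float,
              female: bool = False, black: bool = False) -> float:
    """Estimate GFR (ml/min/1.73 m^2) with the 4-variable MDRD study equation."""
    egfr = 175.0 * (creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742  # correction factor for women
    if black:
        egfr *= 1.212  # correction factor for Black patients
    return egfr

# The 43-year-old man described in PUBMED:15955257, creatinine 4.3 mg/dl:
print(round(egfr_mdrd(4.3, 43), 1))  # ~15, consistent with the reported ~16 ml/min
```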
Instruction: Can performance indicators be used for pedagogic purposes in disaster medicine training? Abstracts: abstract_id: PUBMED:19292895 Can performance indicators be used for pedagogic purposes in disaster medicine training? Background: Although disaster simulation trainings were widely used to test hospital disaster plans and train medical staff, the teaching performance of the instructors in disaster medicine training has never been evaluated. The aim of this study was to determine whether the performance indicators for measuring educational skill in disaster medicine training could indicate issues that needed improvement. Methods: The educational skills of 15 groups attending disaster medicine instructor courses were evaluated using 13 measurable performance indicators. The results of each indicator were scored at 0, 1 or 2 according to the teaching performance. Results: The total summed scores ranged from 17 to 26 with a mean of 22.67. Three indicators: 'Design', 'Goal' and 'Target group' received the maximum scores. Indicators concerning running exercises had significantly lower scores as compared to others. Conclusion: Performance indicators could point out the weakness area of instructors' educational skills. Performance indicators can be used effectively for pedagogic purposes. abstract_id: PUBMED:32167446 Creating a Novel Disaster Medicine Virtual Reality Training Environment. Introduction: Disasters are high-acuity, low-frequency events which require medical providers to respond in often chaotic settings. Due to this infrequency, skills can atrophy, so providers must train and drill to maintain them. Historically, drilling for disaster response has been costly, and thus infrequent. Virtual Reality Environments (VREs) have been demonstrated to be acceptable to trainees, and useful for training Disaster Medicine skills. The improved cost of virtual reality training can allow for increased frequency of simulation and training. Problem: The problem addressed was to create a novel Disaster Medicine VRE for training and drilling. Methods: A VRE was created using SecondLife (Linden Lab; San Francisco, California USA) and adapted for use in Disaster Medicine training and drilling. It is easily accessible for the end-users (trainees), and is adaptable for multiple scenario types due to the presence of varying architecture and objects. Victim models were created which can be role played by educators, or can be virtual dummies, and can be adapted for wide ranging scenarios. Finally, a unique physiologic simulator was created which allows for dummies to mimic disease processes, wounds, and treatment outcomes. Results: The VRE was created and has been used extensively in an academic setting to train medical students, as well as to train and drill disaster responders. Conclusions: This manuscript presents a new VRE for the training and drilling of Disaster Medicine scenarios in an immersive, interactive experience for trainees. abstract_id: PUBMED:20078915 The effectiveness of training with an emergency department simulator on medical student performance in a simulated disaster. Objective: Training in practical aspects of disaster medicine is often impossible, and simulation may offer an educational opportunity superior to traditional didactic methods. We sought to determine whether exposure to an electronic simulation tool would improve the ability of medical students to manage a simulated disaster. 
Methods: We stratified 22 students by year of education and randomly assigned 50% from each category to form the intervention group, with the remaining 50% forming the control group. Both groups received the same didactic training sessions. The intervention group received additional disaster medicine training on a patient simulator (disastermed.ca), and the control group spent equal time on the simulator in a nondisaster setting. We compared markers of patient flow during a simulated disaster, including mean differences in time and number of patients to reach triage, bed assignment, patient assessment and disposition. In addition, we compared triage accuracy and scores on a structured command-and-control instrument. We collected data on the students' evaluations of the course for secondary purposes. Results: Participants in the intervention group triaged their patients more quickly than participants in the control group (mean difference 43 s, 99.5% confidence interval [CI] 12 to 75 s). The score of performance indicators on a standardized scale was also significantly higher in the intervention group (18/18) when compared with the control group (8/18) (p < 0.001). All students indicated that they preferred the simulation-based curriculum to a lecture-based curriculum. When asked to rate the exercise overall, both groups gave a median score of 8 on a 10-point modified Likert scale. Conclusion: Participation in an electronic disaster simulation using the disastermed.ca software package appears to increase the speed at which medical students triage simulated patients and increase their score on a structured command-and-control performance indicator instrument. Participants indicated that the simulation-based curriculum in disaster medicine is preferable to a lecture-based curriculum. Overall student satisfaction with the simulation-based curriculum was high. abstract_id: PUBMED:31928571 Use of Simulated Patients in Disaster Medicine Training: A Systematic Review. Simulation is an effective teaching tool in disaster medicine education, and the use of simulated patients (SPs) is a frequently adopted technique. Throughout this article, we critically analyzed the use and the preparation of SPs in the context of simulation in disaster medicine. A systematic review of English, French, and Italian language articles was performed on PubMed and Google Scholar. Studies were included if reporting the use of SPs in disaster medicine training. Exclusion criteria included abstracts, citations, theses, articles not dealing with disaster medicine, and articles not using human actors in simulation. Eighteen papers were examined. All the studies were conducted in Western countries. Case reports represent 50% of references. Only in 44.4% of articles, the beneficiaries of simulations were students, while in most cases they were professionals. In 61.1% of studies SPs were moulaged, and in 72.2%, a method to simulate victim symptoms was adopted. Ten papers included a previous training for SPs and their involvement in the participants' assessment at the end of the simulation. Finally, this systematic review revealed that there is still a lack of uniformity about the use of SPs in the disaster medicine simulations. abstract_id: PUBMED:21213132 Disaster medicine training in family medicine: a review of the evidence. When disasters strike, local physicians are at the front lines of the response in their community.
Curriculum guidelines have been developed to aid in preparation of family medicine residents to fulfill this role. Disaster responsiveness has recently been added to the Residency Review Committee Program Requirements in Community Medicine with little family medicine literature support. In this article, the evidence in support of disaster training in a variety of settings is reviewed. Published evidence of improved educational or patient-oriented outcomes as a result of disaster training in general, or of specific educational modalities, is weak. As disaster preparedness and disaster training continue to be implemented, the authors call for increased outcome-based research in disaster response training. abstract_id: PUBMED:30802013 Education and Training in Disaster Medicine for Clinical Laboratory Staff The clinical laboratory is essential to medicine in general. Clinical laboratory staff with a core of medical technologists are asked to take part in disaster medicine, and their education and training are urgently necessary to markedly contribute to disaster medicine. Clinical laboratory staff who participate in disaster medicine should have the following: 1) understanding of disaster medicine and the skills to implement it, 2) warm spirits of togetherness with disaster victims and enthusiasm to help them, 3) physical strength and self-control, 4) the ability to communicate and connect with other staff, 5) the ability to devise clinical laboratory systems according to the situation. It is desirable for associations of clinical laboratory and those of disaster medicine to work together to develop education programs and certification systems, and for education programs to be developed into medical technologist-training schools. [Review]. abstract_id: PUBMED:31270005 Disaster Training Needs and Expectations Among Turkish Emergency Medicine Physicians - A National Survey. Objectives: Earthquakes, landslides, and floods are the most frequent natural disasters in Turkey. The country has also recently experienced an increased number of terrorist attacks. The purpose of this study is to understand the expectations and training of Turkish emergency medicine attending physicians in disaster medicine. Methods: An online questionnaire was administered to the 937 members of the Emergency Medicine Association of Turkey, of which 191 completed the survey (20%). Results: Most participants (68%) worked at a Training and Research Hospital (TRH) or a University Hospital (UH), and 69% had practiced as an attending for 5 years or less. Mass immigration, refugee problems, and war/terror attacks were considered to be the highest perceived risk topics. Most (95%) agreed that disaster medicine trainings should occur during residency training. Regular disaster drills and exercises and weekly or monthly trainings were the most preferred educational modalities. Most respondents (85%) were interested in advanced training in disaster medicine, and this was highest for those working less than 5 years as an attending. UH and TRH residency training programs were not considered in themselves to be sufficient for learning disaster medicine. Conclusions: Turkish emergency medicine residency training should include more disaster medicine education and training. abstract_id: PUBMED:27040319 Disaster Medicine: A Multi-Modality Curriculum Designed and Implemented for Emergency Medicine Residents. Objective: Few established curricula are available for teaching disaster medicine.
We describe a comprehensive, multi-modality approach focused on simulation to teach disaster medicine to emergency medicine residents in a 3-year curriculum. Methods: Residents underwent a 3-year disaster medicine curriculum incorporating a variety of venues, personnel, and roles. The curriculum included classroom lectures, tabletop exercises, virtual reality simulation, high-fidelity simulation, hospital disaster drills, and journal club discussion. All aspects were supervised by specialty emergency medicine faculty and followed a structured debriefing. Residents rated the high-fidelity simulations by using a 10-point Likert scale. Results: Three classes of emergency medicine residents participated in the 3-year training program. Residents found the exercise to be realistic, educational, and relevant to their practice. After participating in the program, residents felt better prepared for future disasters. Conclusions: Given the large scope of impact that disasters potentiate, it is understandably difficult to teach these skills effectively. Training programs can utilize this simulation-based curriculum to better prepare the nation's emergency medicine physicians for future disasters. (Disaster Med Public Health Preparedness. 2016;10:611-614). abstract_id: PUBMED:24001650 Training in disaster medicine in Africa: where we are in 2013 This retrospective study, conducted in 26 African countries where French is the first or second language, identified the postsecondary educational institutions teaching disaster medicine. This subject is taught in various institutions in 7 of the 26 countries (27%), including 3 of 47 colleges, 1 of 6 institutes, 1 military health and medical sciences school, and in civil defense agencies in 2 countries. Teaching disaster medicine is often confined to military physicians. This subject must be recognized as a subject in its own right and must be integrated into the medical school curriculum for doctors in French-speaking Africa. abstract_id: PUBMED:28318478 Disaster Education: A Survey Study to Analyze Disaster Medicine Training in Emergency Medicine Residency Programs in the United States. Background: The increase in natural and man-made disasters occurring worldwide places Emergency Medicine (EM) physicians at the forefront of responding to these crises. Despite the growing interest in Disaster Medicine, it is unclear if resident training has been able to include these educational goals. Hypothesis This study surveys EM residencies in the United States to assess the level of education in Disaster Medicine, to identify competencies least and most addressed, and to highlight effective educational models already in place. Methods: The authors distributed an online survey of multiple-choice and free-response questions to EM residency Program Directors in the United States between February 7 and September 24, 2014. Questions assessed residency background and details on specific Disaster Medicine competencies addressed during training. Results: Out of 183 programs, 75 (41%) responded to the survey and completed all required questions. Almost all programs reported having some level of Disaster Medicine training in their residency. The most common Disaster Medicine educational competencies taught were patient triage and decontamination. The least commonly taught competencies were volunteer management, working with response teams, and special needs populations. The most commonly identified methods to teach Disaster Medicine were drills and lectures/seminars. 
Conclusion: There are a variety of educational tools used to teach Disaster Medicine in EM residencies today, with a larger focus on the use of lectures and hospital drills. There is no indication of a uniform educational approach across all residencies. The results of this survey demonstrate an opportunity for the creation of a standardized model for resident education in Disaster Medicine. Sarin RR, Cattamanchi S, Alqahtani A, Aljohani M, Keim M, Ciottone GR. Disaster education: a survey study to analyze disaster medicine training in emergency medicine residency programs in the United States. Prehosp Disaster Med. 2017;32(4):368-373. Answer: Yes, performance indicators can be used for pedagogic purposes in disaster medicine training. A study aimed at evaluating the educational skills of groups attending disaster medicine instructor courses used 13 measurable performance indicators to assess teaching performance. The results showed that performance indicators could highlight areas where instructors' educational skills were weak, suggesting that these indicators can be effectively used for pedagogic purposes in disaster medicine training (PUBMED:19292895).
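To make the scoring scheme behind this answer concrete: PUBMED:19292895 rates 13 indicators at 0, 1, or 2 and sums them to a total out of 26. The short sketch below shows one way such ratings could be aggregated and low-scoring areas flagged; the indicator names and the weakness threshold are illustrative assumptions, not taken from the study.

```python
from typing import Dict, List

def summarize_indicators(ratings: Dict[str, int], weak_threshold: int = 1) -> dict:
    """Sum 0/1/2 ratings across indicators and flag low-scoring ones."""
    if any(score not in (0, 1, 2) for score in ratings.values()):
        raise ValueError("each indicator must be rated 0, 1 or 2")
    weak: List[str] = [name for name, score in ratings.items() if score <= weak_threshold]
    return {
        "total": sum(ratings.values()),
        "max_possible": 2 * len(ratings),
        "weak_areas": weak,
    }

# Hypothetical ratings for one instructor group (indicator names are examples only).
print(summarize_indicators({"Design": 2, "Goal": 2, "Target group": 2, "Running the exercise": 1}))
```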
Instruction: Do people become more apathetic as they grow older? Abstracts: abstract_id: PUBMED:25432651 Apathetic thyrotoxicosis presenting with diabetes mellitus. Apathetic form of thyrotoxicosis occurs in the elderly who can present with features of hyperglycemia, hypothyroidism, depression, or an internal malignancy. A clinical suspicion and timely diagnosis of hyperthyroidism is needed to define the correct etiology of existing problems, and to prevent grave complications. We discuss an 84-year-old woman who presented with fatigue and uncontrolled diabetes due to apathetic thyrotoxicosis. abstract_id: PUBMED:34659997 Atrial Fibrillation as an Initial Presentation of Apathetic Thyroid Storm. Atrial fibrillation as an initial presenting symptom of an apathetic thyroid storm is under-reported, especially in the setting of undiagnosed hyperthyroidism. Very rarely, thyroid storm can present with apathetic symptoms. The author presents a case of apathetic thyrotoxicosis with atrial fibrillation. The patient had a generalized weakness, lethargy, and weight loss as initial symptoms and was found to have atrial fibrillation, which was initially thought to be the inciting event. However, further evaluation revealed a new diagnosis of apathetic thyroid storm secondary to uncontrolled Graves' disease. She was managed medically for thyroid storm with hopes to control the tachyarrhythmia by controlling the underlying etiology. Subsequently, her symptoms resolved, and she came back to baseline except for continued atrial fibrillation, which was rate controlled. Early recognition of an apathetic thyroid storm can prevent mortality and morbidity as it can often be missed due to atypical symptoms. abstract_id: PUBMED:36347138 Factors associated with nurses' willingness to handle abuse of older people. Aim: The aim of this study was to explore predictors of nurses' willingness to handle abuse of older people. Background: Abuse of older people is a long-discussed healthcare issue worldwide. Although nurses are considered capable of identifying and reporting cases of abuse of older people, no study has been conducted in Taiwan on nurses' willingness to handle abuse of older people. Design: A cross-sectional design was used. Methods: The study was conducted from May to June 2019. A convenience sampling was adopted to survey 555 nurses from a medical center in Taiwan. Data were collected using the Knowledge of Abuse of Older People Scale, Attitudes Towards Older People Scale, Attitudes Towards Handling Abuse of Older People Scale, Willingness to Handle Abuse of Older People Scale, and personal characteristics. Pearson correlation coefficient analysis, independent sample t-test, one-way analysis of variance, and multiple linear regression were performed. Results: Participants scored an average of 2.98 out of 4 on the Willingness to Handle Abuse of Older People Scale, indicating that they were inclined to do so. Attitudes towards older people, knowledge, attitudes towards handling abuse of older people, awareness of the hospital's reporting procedure and dissemination of information related to abuse of older people, sex, age, and clinical work experience explained 41.4% of the variance of willingness. Participants' attitudes toward handling abuse of older people was the most important predictor of their willingness to do so. 
Conclusions: To improve nurses' willingness to handle cases of abuse of older people, particularly that of male nurses, hospital authorities should provide in-service training and education and disseminate information on the subject matter. Nursing schools should prioritize offering gerontological nursing courses to foster nursing students' positive attitudes toward older adults and handling abuse of older people. Tweetable Abstract: Nurses' attitudes toward handling abuse of older people were the most important predictor of their willingness to handle abuse of older people. abstract_id: PUBMED:33140630 Exploring Older Swiss People's Preferred Dental Services for When They Become Dependent. The objective of this study was to explore the preferred dental services of older people for when they become dependent. It aimed to assess their preferred type of health care professional and location of dental service, and relate their preferences to their willingness to pay (WTP) and willingness to travel (WTT). Older people aged 65 years or older were invited to participate in a questionnaire-based discrete choice experiment (DCE), to measure preferences for dental examinations and treatment, defined by two attributes: type of professional and location of the activity. Hypothetical scenarios based attributes were displayed in a projected visual presentation and participants noted their personal preference using a response sheet. Data was analyzed using a random-effects logit model. Eighty-nine participants (mean age 73.7 ± 6.6 years) attended focus group sessions. Respondents preferred that the family dentist (β: 0.2596) or an auxiliary (β: 0.2098) undertake the examination and wanted to avoid a medical doctor (β: –0.469). The preferred location for dental examination was at a dental practice (β: 0.2204). Respondents preferred to avoid treatments at home (β: –0.3875); they had a significant preference for treatment at the dental office (β: 0.2255) or in a specialist setting (β: 0.1620, ns). However, the type of professional did not have a significant influence on overall preference. Participants with a low WTP preferred examination at home (β: 0.2151) and wanted to avoid the dental practice (β: –0.0235), whereas those with a high WTP preferred the dental office (β: 0.4535) rather than home (β: –0.3029). WTT did not have a significant influence on preference. The study showed that older people generally preferred receiving dental services in a dental practice or specialist setting, and would prefer not to be treated at home. Continuity of dental services provided by the family dentist should therefore be prioritized where possible and further studies should examine the role of domiciliary care at home. abstract_id: PUBMED:29104702 Review of Public Transport Needs of Older People in European Context. People's life expectancy is increasing throughout the world as a result of improved living standards and medical advances. The natural ageing process is accompanied by physiological changes which can have significant consequences for mobility. As a consequence, older people tend to make fewer journeys than other adults and may change their transport mode. Access to public transport can help older people to avail themselves of goods, services, employment and other activities. With the current generation of older people being more active than previous generations of equivalent age, public transport will play a crucial role in maintaining their active life style even when they are unable to drive. 
Hence, public transport is important to older people's quality of life, their sense of freedom and independence. Within the European Commission funded GOAL (Growing Older and staying mobile) project, the requirements of older people using public transport were studied in terms of four main issues: Affordability, availability, accessibility and acceptability. These requirements were then analysed in terms of five different profiles of older people defined within the GOAL project - 'Fit as a Fiddle', 'Hole in the Heart', 'Happily Connected', An 'Oldie but a Goodie' and 'Care-Full'. On the basis of the analysis the paper brings out some areas of knowledge gaps and research needed to make public transport much more attractive and used by older people in the 21st century. abstract_id: PUBMED:35526845 Clinical trials in older people. Randomised controlled trials (RCTs) usually provide the best evidence for treatments and management. Historically, older people have often been excluded from clinical medication trials due to age, multimorbidity and disabilities. The situation is improving, but still the external validity of many trials may be questioned. Individuals participating in trials are generally less complex than many patients seen in geriatric clinics. Recruitment and retention of older participants are particular challenges in clinical trials. Multiple channels are needed for successful recruitment, and especially individuals experiencing frailty, multimorbidity and disabilities require support to participate. Cognitive decline is common, and often proxies are needed to sign informed consent forms. Older people may fall ill or become tired during the trial, and therefore, special support and empathic study personnel are necessary for the successful retention of participants. Besides the risk of participants dropping out, several other pitfalls may result in underestimating or overestimating the intervention effects. In nonpharmacological trials, imperfect blinding is often unavoidable. Interventions must be designed intensively and be long enough to reveal differences between the intervention and control groups, as control participants must still receive the best normal care available. Outcome measures should be relevant to older people, sensitive to change and targeted to the specific population in the trial. Missing values in measurements are common and should be accounted for when designing the trial. Despite the obstacles, RCTs in geriatrics must be promoted. Reliable evidence is needed for the successful treatment, management and care of older people. abstract_id: PUBMED:35310771 Comparing older people's drinking habits in four Nordic countries: Summary of the thematic issue. Aim: The present article summarises status and trends in the 21st century in older people's (60-79 years) drinking behaviour in Denmark, Finland, Norway and Sweden and concludes this thematic issue. Each country provided a detailed report analysing four indicators of alcohol use: the prevalence of alcohol consumers, the prevalence of frequent use, typical amounts of use, and the prevalence of heavy episodic drinking (HED). The specific aim of this article is to compare the results of the country reports. Findings: Older people's drinking became more common first in Denmark in the 1970s and then in the other countries by the 1980s. Since 2000 the picture is mixed. 
Denmark showed decreases in drinking frequency, typically consumed amounts and HED, while in Sweden upward trends were dominant regarding prevalence of consumers and frequency of drinking as well as HED. Finland and Norway both displayed stable indicators except for drinking frequency and proportion of women consumers where trends increased. In all four countries, the gender gap diminished with regard to prevalence and frequency of drinking, but remained stable in regard to consuming large amounts. In Norway the share of alcohol consumers among women aged 60-69 years exceeded the share among men. During the late 2010s, Denmark had the highest prevalence of alcohol consumers as well as the highest proportion drinking at a higher frequency. Next in ranking was Finland, followed by Sweden and Norway. This overall rank ordering was observed for both men and women. Conclusion: As the populations aged 60 years and older in the Nordic countries continue to grow, explanations for the drivers and consequences of changes in older people's drinking will become an increasingly relevant topic for future research. Importantly, people aged 80 years and older should also be included as an integral part of that research. abstract_id: PUBMED:19742272 Subjective health in older people. The Nottingham Health Profile (NHP [1]) is a widely used instrument to measure subjective health in clinical research. However, there are no age-specific norms for older adults in Germany. The present study was conducted to analyze the psychometric properties of the German version of the Nottingham Health Profile (NHP) in older people. Age-specific reference values for the elderly are presented. Subjects were drawn from the general population of older German people aged 61 to 95 years (n=630; mean age 69.5 years; 55.7% female). Five of the six NHP scales revealed a good internal consistency (.70≤α≤.92). The subscales (with the exception of the subscale social isolation) showed moderate relations with instruments measuring somatization disorders and subjective body complaints (Screening for Somatoform Disorders SOMS [2]; Giessen Subjective Complaints List GBB-24 [3]). Compared to younger subjects, subjective health decreases with age. Therefore age-specific reference data for older people are necessary. Differential reference data are reported for four age groups above 60 years of age and for male and female older people. The outcome confirms that the NHP is a reliable, valid instrument to measure subjective health in older people. abstract_id: PUBMED:37125863 Factors Affecting Health Policies for Older People in Iran. Globally, the number and proportion of people aged 60 years and older is growing fast. As people age, health needs become more complex, and the health system's responsiveness to older people's needs requires evidence-informed policies. Hence, this study explores the factors affecting the health policy development process for older people in Iran. We conducted 32 interviews with people aged 60 years and older and 21 interviews with key informants involved in policy making related to older people. Qualitative data were analyzed using thematic analysis. Actors and stakeholders, policy structure, selected health policy processes, the health care service delivery system, government financial support, and community and culture building are the most influential factors in health policy making for older people.
Government policies and health priority interventions are needed to address these influential factors for older people to ensure healthy aging over the life course. abstract_id: PUBMED:31576729 Understanding and managing depression in older people. Clinicians do not always recognise depression in older people as they attribute symptoms to the ageing process and the effects of failing health. Similarly, older people do not always appreciate that their symptoms relate to their mood. Understanding how depression affects older people can improve access to support, thereby improving overall health and quality of life. To ensure these outcomes we need a workforce with excellent communication skills that supports therapeutic relationships, promotes recognition of symptoms, and enhances assessment, diagnosis, treatment and management. Answer: The provided abstracts do not directly address whether people become more apathetic as they grow older in a general sense. However, they do discuss specific conditions and contexts in which older individuals may exhibit apathetic symptoms. For instance, apathetic thyrotoxicosis is a condition that can occur in the elderly and may present with symptoms such as fatigue, uncontrolled diabetes, generalized weakness, lethargy, and weight loss, which could be interpreted as apathy (PUBMED:25432651; PUBMED:34659997). It is important to note that the term "apathy" in the context of these abstracts refers to a lack of interest or concern, particularly in situations that others find moving or exciting, and is not necessarily a general characteristic of aging. The abstracts also discuss various aspects of older people's lives, such as their willingness to handle abuse (PUBMED:36347138), preferred dental services (PUBMED:33140630), public transport needs (PUBMED:29104702), participation in clinical trials (PUBMED:35526845), drinking habits (PUBMED:35310771), subjective health (PUBMED:19742272), health policies (PUBMED:37125863), and managing depression (PUBMED:31576729). These topics suggest that older individuals have diverse needs and concerns, and their engagement with these issues does not necessarily indicate a general trend towards increased apathy with age. In summary, while certain medical conditions in older adults may present with apathetic symptoms, the abstracts do not provide evidence to support a broad claim that people become more apathetic as they grow older.
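One quantitative method in this record deserves a brief illustration: the discrete choice experiment in PUBMED:33140630 reports part-worth utilities (β) from a random-effects logit. Under a standard logit formulation, such coefficients translate into choice shares as sketched below; this is generic logit arithmetic applied to the reported βs for the 'type of professional' attribute, ignoring the location attribute and any constants, and is not the authors' code.

```python
import math
from typing import Dict

def logit_shares(utilities: Dict[str, float]) -> Dict[str, float]:
    """Convert alternative utilities into choice probabilities via the logit (softmax) rule."""
    exp_u = {name: math.exp(u) for name, u in utilities.items()}
    total = sum(exp_u.values())
    return {name: value / total for name, value in exp_u.items()}

# Part-worths reported in the abstract for who performs the dental examination.
betas = {"family dentist": 0.2596, "auxiliary": 0.2098, "medical doctor": -0.469}
for option, share in logit_shares(betas).items():
    print(f"{option}: {share:.2f}")  # roughly 0.41, 0.39, 0.20
```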
Instruction: Selection in air traffic control: is nonradar training a predictor of radar performance? Abstracts: abstract_id: PUBMED:19634306 Selection in air traffic control: is nonradar training a predictor of radar performance? Objective: The purpose of the current research was to investigate whether performance in nonradar training would predict performance in radar training. Background: There is a discussion in the Federal Aviation Administration about the necessity of keeping nonradar training as part of the required selection criteria for radar controllers. In nonradar training, controllers separate traffic by relying on the estimated time over navigational fixes printed on flight progress strips, rather than monitoring the perceptually available positional information on a radar screen. The two ways of controlling traffic-nonradar and radar-are different along a number of dimensions. Method: Sixteen participants were taught to control simulated air traffic using nonradar and radar procedures. Performance on final radar scenarios was predicted from cognitive variables; performance on earlier, simpler radar scenarios; and performance on nonradar scenarios. Results: Performance during nonradar trials predicted final radar performance (i.e., collisions and landed aircraft count) independent of the predictive power of cognitive variables and above and beyond earlier radar training. Conclusion: Performance in nonradar training enhanced users' ability to predict radar performance, even in addition to the predictive power of simpler, earlier radar performance variables. Good nonradar performers had higher situation awareness in the radar environment. Application: Performance in a nonradar environment may serve as an important selection tool in assessing the performance of student controllers in radar environments. The results indicate the need for future research with field controllers. abstract_id: PUBMED:30913459 A new approach for inferring traffic-related air pollution: Use of radar-calibrated crowd-sourced traffic data. Background: Crowd-sourced traffic data potentially allow prediction of traffic-related air pollution at high temporal and spatial resolution. Objectives: To examine associations (1) of radar-based traffic measurements with congestion colors displayed on crowd-sourced traffic data maps and (2) of black carbon (BC) levels with radar and crowd-sourced traffic data. Methods: At an off-ramp of an interstate and a small one-way street in a mixed-use area in New York City, we used radar devices to obtain vehicle speeds and flows (hourly counts) for cars and trucks. At these radar sites and at an additional non-radar equipped site at a 2-way street, we monitored BC levels using aethalometers in the summer and early fall of 2017. At all three sites, free-flow traffic conditions typically did not occur due to the nearby presence of traffic lights and forced turns. We also downloaded real-time traffic maps from a crowd-sourced traffic data provider and assigned an ordinal integer congestion color code CCC to the congestion colors, ranging from 1 (dark red) to 5 (gray). Results: CCC increased with vehicle speed. Traffic flow was highest for intermediate speeds and intermediate CCC. Regression analyses showed that BC levels increased with either segregated or total vehicle flows. At the off-ramp, time-dependent BC levels can be inferred from time-dependent CCC and radar-derived mean vehicle flow data. 
A unit decrease in CCC for a mean traffic flow of 100 vehicles/h was associated with a mean (95% CI) increase in BC levels of 0.023 (0.028, 0.018) μg/m3. At the small 1-way and the 2-way street, BC levels were also negatively associated with CCC, though at a >0.05 significance level. Conclusions: Use of inexpensive crowd-sourced traffic data holds great promise in air pollution modeling and health studies. Time-dependent traffic-related primary air pollution levels may be inferred from radar-calibrated crowd-sourced traffic data, in our case radar-derived mean traffic flow and widely available CCC data. However, at some locations mean traffic flow data may already be available. abstract_id: PUBMED:24218904 Relative position vectors: an alternative approach to conflict detection in air traffic control. Objective: We explore whether the visual presentation of relative position vectors (RPVs) improves conflict detection in conditions representing some aspects of future airspace concepts. Background: To help air traffic controllers manage increasing traffic, new tools and systems can automate more cognitively demanding processes, such as conflict detection. However, some studies reveal adverse effects of such tools, such as reduced situation awareness and increased workload. New displays are needed that help air traffic controllers handle increasing traffic loads. Method: A new display tool based on the display of RPVs, the Multi-Conflict Display (MCD), is evaluated in a series of simulated conflict detection tasks.
The conflict detection performance of air traffic controllers with the MCD plus a conventional plan-view radar display is compared with their performance with a conventional plan-view radar display alone. Results: Performance with the MCD plus radar was better than with radar alone in complex scenarios requiring controllers to find all actual or potential conflicts, especially when the number of aircraft on the screen was large. However performance with radar alone was better for static scenarios in which conflicts for a target aircraft, or target pair of aircraft, were the focus. Conclusion: Complementing the conventional plan-view display with an RPV display may help controllers detect conflicts more accurately with extremely high aircraft counts. Applications: We provide an initial proof of concept that RPVs may be useful for supporting conflict detection in situations that are partially representative of conditions in which controllers will be working in the future. abstract_id: PUBMED:35270899 Pedestrian Traffic Light Control with Crosswalk FMCW Radar and Group Tracking Algorithm. The increased mobility requirements of modern lifestyles put more stress on existing traffic infrastructure, which causes reduced traffic flow, especially in peak traffic hours. This calls for new and advanced solutions in traffic flow regulation and management. One approach towards optimisation is a transition from static to dynamic traffic light intervals, especially in spots where pedestrian crossing cause stops in road traffic flow. In this paper, we propose a smart pedestrian traffic light triggering mechanism that uses a Frequency-modulated continuous-wave (FMCW) radar for pedestrian detection. Compared to, for example, camera-surveillance systems, radars have advantages in the ability to reliably detect pedestrians in low-visibility conditions and in maintaining privacy. Objects within a radar's detection range are represented in a point cloud structure, in which pedestrians form clusters where they lose all identifiable features. Pedestrian detection and tracking are completed with a group tracking (GTRACK) algorithm that we modified to run on an external processor and not integrated into the used FMCW radar itself. The proposed prototype has been tested in multiple scenarios, where we focused on removing the call button from a conventional pedestrian traffic light. The prototype responded correctly in practically all cases by triggering the change in traffic signalization only when pedestrians were standing in the pavement area directly in front of the zebra crossing. abstract_id: PUBMED:35212002 Psychophysiological coherence training to moderate air traffic controllers' fatigue on rotating roster. The nature of the current rotating roster, providing 24-h air traffic services over five irregular shifts, leads to accumulated fatigue which impairs air traffic controllers' cognitive function and task performance. It is imperative to develop an effective fatigue risk management system to improve aviation safety based upon scientific approaches. Two empirical studies were conducted to address this issue. Study 1 investigated the mixed effect of circadian rhythm disorders and resource depletion on controllers' accumulated fatigue. Then, study 2 proposed a potential biofeedback solution of quick coherence technique which can mitigate air traffic controllers' (ATCOs') fatigue while on controller working position and improve ATCOs' mental/physical health. 
The current two studies demonstrated a scientific approach to fatigue analysis and fatigue risk mitigation in the air traffic services domain. This research offers insights into the fluctuation of ATCO fatigue levels and the influence of a number of factors related to circadian rhythm and resource depletion on fatigue levels in study 1, and provides psychophysiological coherence training to increase ATCOs' fatigue resilience and mitigate the negative impacts of fatigue in study 2. Based on these two studies, the authors recommended that an extra short break for air traffic controllers to permit practicing the quick coherence breathing technique for 5 min at the sixth working hour could substantially recharge cognitive resources and increase fatigue resilience. Application: Present studies highlight an effective fatigue intervention based on objective biofeedback to moderate controllers' accumulated fatigue as a result of rotating shift work. Accordingly, air navigation services providers and regulators can develop fatigue risk management systems based on scientific approaches to improve aviation safety and air traffic controllers' wellbeing. abstract_id: PUBMED:36613210 Analysis of Perception Accuracy of Roadside Millimeter-Wave Radar for Traffic Risk Assessment and Early Warning Systems. Millimeter-wave (MMW) radar is essential in roadside traffic perception scenarios and traffic safety control. For traffic risk assessment and early warning systems, MMW radar provides real-time position and velocity measurements as a crucial source of dynamic risk information. However, due to MMW radar's measuring principle and hardware limitations, vehicle positioning errors are unavoidable, potentially causing misperception of the vehicle motion and interaction behavior. This paper analyzes the factors influencing the MMW radar positioning accuracy that are of major concern in the application of transportation systems. An analysis of the radar measuring principle and the distributions of the radar point cloud on the vehicle body under different scenarios are provided to determine the causes of the positioning error. Qualitative analyses of the radar positioning accuracy regarding radar installation height, radar sampling frequency, vehicle location, posture, and size are performed. The analyses are verified through simulated experiments. Based on the results, a general guideline for radar data processing in traffic risk assessment and early warning systems is proposed. abstract_id: PUBMED:37896668 A Specific Emitter Identification System Design for Crossing Signal Modes in the Air Traffic Control Radar Beacon System and Wireless Devices. To improve communication stability, more wireless devices transmit multi-modal signals while operating. The term 'modal' refers to signal waveforms or signal types. This poses challenges to traditional specific emitter identification (SEI) systems, e.g., unknown modal signals require extra open-set mode identification; different modes require different radio frequency fingerprint (RFF) extractors and SEI classifiers; and it is hard to collect and label all signals. To address these issues, we propose an enhanced SEI system consisting of a universal RFF extractor, denoted as multiple synchrosqueezed wavelet transformation of energy unified (MSWTEu), and a new generative adversarial network for feature transferring (FTGAN).
MSWTEu extracts uniform RFF features for different modal signals, FTGAN transfers different modal features to a recognized distribution in an unsupervised manner, and a novel training strategy is proposed to achieve emitter identification across multi-modal signals using a single clustering method. To evaluate the system, we built a hybrid dataset, which consists of multi-modal signals transmitted by various emitters, and built a complete civil air traffic control radar beacon system (ATCRBS) dataset for airplanes. The experiments show that our enhanced SEI system can resolve the SEI problems associated with crossing signal modes. It directly achieves 86% accuracy in cross-modal emitter identification using an unsupervised classifier, and simultaneously obtains 99% accuracy in open-set recognition of signal mode. abstract_id: PUBMED:37430801 Sensor Fusion-Based Vehicle Detection and Tracking Using a Single Camera and Radar at a Traffic Intersection. Recent advancements in sensor technologies, in conjunction with signal processing and machine learning, have enabled real-time traffic control systems to adapt to varying traffic conditions. This paper introduces a new sensor fusion approach that combines data from a single camera and radar to achieve cost-effective and efficient vehicle detection and tracking. Initially, vehicles are independently detected and classified using the camera and radar. Then, the constant-velocity model within a Kalman filter is employed to predict vehicle locations, while the Hungarian algorithm is used to associate these predictions with sensor measurements. Finally, vehicle tracking is accomplished by merging kinematic information from predictions and measurements through the Kalman filter. A case study conducted at an intersection demonstrates the effectiveness of the proposed sensor fusion method for traffic detection and tracking, including performance comparisons with individual sensors. abstract_id: PUBMED:36236408 "Conical" Frustum Multi-Beam Phased Arrays for Air Traffic Control Radars. The design of conical frustum phased array antennas for air traffic control (ATC) radar systems is addressed. The array architecture, which is controlled by a fully digital beam-forming (DBF) network, is composed by a set of equal vertical modules. Each module consists of a linear sparse array that generates on receive multiple instantaneous beams pointing along different directions in elevation. To reach the best trade-off between the antenna complexity (i.e., minimum number of array elements and/or radio frequency components) and radiation performance (i.e., matching a set of reference patterns), the synthesis problem is formulated in the Compressive Sampling (CS) framework. Then, the positions of the array elements and the complex excitations for generating each single beam are jointly determined through a customized version of the Bayesian CS (BCS) tool. Representative numerical results, concerned with ideal as well as real antenna models, are reported both to validate the proposed design strategy and to assess the effectiveness of the synthesized modular sparse array architecture also in comparison with conventional arrays with uniformly-spaced elements. Answer: Yes, nonradar training is a predictor of radar performance in air traffic control. A study investigated whether performance in nonradar training could predict performance in radar training. 
The research found that performance during nonradar trials predicted final radar performance, including metrics such as collisions and landed aircraft count. This prediction was independent of cognitive variables and earlier radar training performance. The study concluded that nonradar training enhances the ability to predict radar performance and that good nonradar performers demonstrated higher situation awareness in the radar environment. Therefore, performance in a nonradar environment may serve as an important selection tool for assessing the performance of student controllers in radar environments (PUBMED:19634306).
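As an illustrative aside to the sensor-fusion abstract above (PUBMED:37430801): the described pipeline is a generic predict-associate-update tracking loop, with a constant-velocity Kalman filter for prediction and the Hungarian algorithm for matching predictions to camera/radar detections. The paper's actual implementation and noise settings are not given in the abstract, so the following is only a minimal sketch with assumed parameters (dt, Q, R) built on NumPy/SciPy primitives.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

dt = 0.1  # assumed time step between frames (s)
# Constant-velocity state [x, y, vx, vy]: F propagates the state, H extracts the measured position.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.05  # assumed process noise
R = np.eye(2) * 0.5   # assumed measurement noise

def predict(x, P):
    """Kalman prediction with the constant-velocity model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Kalman update with a fused camera/radar position measurement z = [x, y]."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def associate(predicted_states, detections):
    """Match predicted track positions to detections by minimising total Euclidean distance."""
    cost = np.array([[np.linalg.norm(s[:2] - np.asarray(d)) for d in detections]
                     for s in predicted_states])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))  # (track index, detection index) pairs
```

A full tracker would also gate implausible matches, spawn tracks for unmatched detections and retire stale ones; none of those details are specified in the abstract.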
Instruction: Is there a correlation between sudden deafness and smoking? Abstracts: abstract_id: PUBMED:9522313 Is there a correlation between sudden deafness and smoking? Background: The etiology of sudden hearing loss is not yet known. The most common mechanism of sudden hearing loss would appear to be impaired cochlear blood circulation. Tobacco smoking causes changes in hemostasis and raises the body's need for oxygen because of carbon monoxide, one component of the smoke which blocks a part of the hemoglobin. Patients And Methods: 297 patients (76 smokers, 99 former smokers, and 122 non smokers) who were treated because of sudden hearing loss in the hospital in the last 5 years were queried about their smoking habits. We asked the patients to complete a questionnaire in order to get more reliable answers. We explored the kind of tobacco, the number of cigarettes or cigars per day, the age at onset of smoking, the number and rate of recurrence of sudden hearing loss, the result of the treatment of a former sudden hearing loss (if there was one), the characteristics of tinnitus, the possibility of stopping smoking, and the significance of tobacco smoking as reflected in health policy. Results: Tobacco smoking does not increase the overall risk of sudden hearing loss. The incidence of smokers in the population of the region and the incidence of smokers among patients with sudden hearing loss is equal. But the average age of the smoking patients is significantly lower than the average age of non smokers and former smokers. Smokers have a higher rate of recurrence of a sudden hearing loss. The result of treatment of former sudden hearing loss is worse in smoking patients. Conclusions: There is no obvious relation between the risk of sudden hearing loss and tobacco smoking. abstract_id: PUBMED:2087981 Smoking habits in patients with sudden hearing loss. Preliminary results. Tobacco, especially its content of nicotine, is a drug that has a powerful seductive effect and a dangerous risk of dependence. It is well known that smoking of cigarettes in the industrialized countries of Europe neutralizes an essential part of medical success in that tobacco-induced shortening of life is estimated to equal the possible medical extension of life. Nicotine is the best explored component of tobacco and its effects on heart circulation have been investigated by many research groups. Cigarette smoking raises the body's need for oxygen, because the carbon monoxide that is always inhaled with the smoke partially blocks the haemoglobin. Though acute and combined effects of nicotine and carbon monoxide have been established chronic diseases of the smoker cannot fully be declared. An attempt was made to define some correlation between smoking habits and the risk of a sudden hearing loss. All the patients who were treated in our clinic because of sudden hearing loss were asked to complete a questionnaire anonymously in order to get more reliable answers. However, the outcome was not very encouraging. Only three quarters of our patients responded to the questions, of these only 29% were smokers. However, half of the non-smokers were former smokers. The data are discussed in relation to other investigations. abstract_id: PUBMED:11388497 Smoking, alcohol, sleep and risk of idiopathic sudden deafness: a case-control study using pooled controls. Sudden deafness sometimes has an identifiable cause, but in most cases the cause is unknown (idiopathic sudden deafness). 
Vascular impairment has been proposed as an aetiological mechanism for this condition, but it is unclear whether traditional cardiovascular risk factors, such as smoking or alcohol intake, are associated with this condition. We accordingly investigated associations of idiopathic sudden deafness with smoking, alcohol intake and sleep duration in a case-control study. Cases were consecutive patients diagnosed with idiopathic sudden deafness between October 1996 and August 1998 at collaborating hospitals in Japan. Controls were obtained from a nationwide database of pooled controls, with matching for age, gender and residential district. Exposure variables were assessed from a self-administered questionnaire. Subgroup analyses were performed using audiometric subtypes of sudden deafness. Data were obtained for 164 cases and 20,313 controls. Increased risks of idiopathic sudden deafness were observed among participants who consumed two or more units of alcohol per day (OR=1.90, 95% CI=1.12-3.21), and among participants who slept less than seven hours per night (OR=1.61, 95% CI=1.09-2.37). The direct association with alcohol intake was particularly strong for the participants with profound hearing loss. There was little evidence of an association with smoking. This study suggests that alcohol intake and short sleep duration might be risk factors for idiopathic sudden deafness. abstract_id: PUBMED:2086568 Vascular risk factors of sudden deafness and its incidence in the normal population. A retrospective study. We report the frequency of the vascular risk factors (overweight, hypertension, hypercholesterinaemia, hypertriglyceridaemia, hyperuricaemia, hyperglycaemia and smoking) in patients with sudden hearing loss. Analysis of 264 cases shows that only hyperuricaemia and hyperglycaemia are found more often in patients suffering from sudden deafness than in the normal population. There was a negative correlation between hearing improvement and the number of risk factors. Also the number of risk factors increased proportionally to the age of the patients. The patient's age and late treatment were the only unfavourable prognostic factors for hearing improvement. abstract_id: PUBMED:31446719 Correlation analysis of incidence, season and temperature parameters of different types of sudden deafness. Objective: The objective of this study was to investigate the correlation between the onset of different types of sudden sensorineural hearing loss (SSNHL) with temperature parameters and seasons. Method: We retrospectively reviewed the medical charts of 175 patients who were diagnosed as SSNHL, precisely collected the exact date and city of onset, confirmed the season, and obtained the meteorological data including maximum temperature (Tmax), minimum temperature (Tmin), mean temperature (T), day-to-day change of mean temperature (ΔT), and diurnal temperature range (Trange) at the same day, then analyzed the relation between season and temperature with the onset of different types of SSNHL. Result: There was a significant difference of Trange between different types of SSNHL (P=0.001). Trange on the onset date of all-frequency SSNHL (including flat and profound type) was significantly higher than low and high frequency descending type (P=0.001, P<0.05, respectively). Types of SSNHL had weak association with Trange groups (P=0.03, Cramer's V=0.220).
An increase of 1℃ in Trange increased the risk of flat type SSNHL by 23.9% and 16.5% compared with low and high frequency descending type, respectively, and for profound type, the risk was increased by 22.4% and 15.1%. No significant differences were observed between seasons and SSNHL types (P=0.666). Conclusion: The incidence of different types of sudden deafness may be related to a larger diurnal temperature range on the day of onset, and appears unrelated to the season. abstract_id: PUBMED:28366076 Relationships among drinking and smoking habits, history of diseases, body mass index and idiopathic sudden sensorineural hearing loss in Japanese patients. Objectives: To present the cardiovascular risk factors in idiopathic sudden sensorineural hearing loss (SSNHL) patients enrolled in a nationwide epidemiological survey of hearing disorders in Japan. Materials And Methods: We compiled the cardiovascular risk factors in 3073 idiopathic SSNHL subjects (1621 men and 1452 women) and compared their proportions with controls as part of the National Health and Nutrition Survey in Japan, 2014. The cardiovascular risk factors consisted of drinking and smoking habits, a history of five conditions related to cardiovascular disease and body mass index. Results: The proportion of current smokers was significantly higher among men aged 50-59, 60-69 and 70+ and among women aged 30-39, 40-49 and 60-69. The proportion of patients with a history of diabetes mellitus was significantly higher among men aged 50-59, 60-69 and 70+, but not in women. In addition, male and female SSNHL subjects aged 60-69 showed lower proportions of current drinking; and female SSNHL subjects aged 60-69 showed higher proportions of overweight (BMI ≥25 kg/m2). Conclusions: The present cross-sectional study revealed significantly higher proportions of current smokers among both men and women as well as those with a history of diabetes mellitus among men across many age groups in patients with idiopathic SSNHL compared with the controls. abstract_id: PUBMED:31446718 Correlation study of peripheral blood inflammatory factors in patients with sudden deafness. Objective: The aim of this study is to compare the difference of inflammatory factors in peripheral blood between sudden deafness patients and normal people, and to evaluate the predictive value of inflammatory factors in hearing recovery of sudden deafness patients. Method: Seventy-two inpatients with sudden deafness and 19 healthy persons were included. At the beginning of treatment in our hospital, audiometry was performed and peripheral blood was collected. The levels of IL-1β, IL-6, IL-17α, TGF-β1 and TNF-α in peripheral blood were detected by ELISA. The treatment was intravenous steroid (not applied in patients with contraindications to systemic steroid use) + intratympanic steroid injection + microcirculation improvement or neurotrophic therapy + hyperbaric oxygen. At the end of the treatment, audiometry was performed again. In a total of 26 patients, the levels of inflammatory factors in peripheral blood were tested again at the end of the treatment. Result: The mean levels of inflammatory factors IL-1β, IL-6, IL-17α, TGF-β1 and TNF-α in peripheral blood of patients were (2.66±9.57) pg/ml, (4.71±6.91) pg/ml, (19.33±32.27) pg/ml, (50 018.37±14 660.72) pg/ml, (1.52±2.40) pg/ml, respectively.
The levels of these five inflammatory factors in normal persons were (3.61±9.82) pg/ml, (3.58±4.49) pg/ml, (11.64±13.29) pg/ml, (45 199.98±11 956.09) pg/ml, (1.09±1.08) pg/ml, respectively. Statistical analysis showed no significant difference between these two groups. A total of 45 cases were effective (hearing threshold increased ≥15 dB) and 27 cases were ineffective (hearing threshold increased <15 dB). There was no significant difference in the levels of inflammatory factors between the two groups. Among 26 patients with blood samples before and after treatment, the level of TGF-β1 after treatment was significantly lower than that before treatment. Conclusion: The levels of these five inflammatory factors including IL-1β, IL-6, IL-17α, TGF-β1 and TNF-α in peripheral blood could not predict the recovery of sudden hearing loss. The role of inflammation in the development of sudden deafness needs further confirmation. TGF-β1 may be involved in the development of sudden deafness. abstract_id: PUBMED:3982177 Etiology and pathogenesis of sudden deafness. 163 patients suffering from sudden hearing loss were examined with regard to their cardiovascular risk factors (hypertension, hyperlipaemia, cigarette smoking, hyperglycaemia, hyperuricaemia and obesity). Patients with sudden hearing loss had significantly more vascular risk factors than a healthy control group. Significantly more risk factors were also seen in patients where the hearing loss could not be influenced by therapy. We consider this an indication for the importance of vascular risk factors in the genesis of sudden hearing loss, especially in cases with infaust prognosis. abstract_id: PUBMED:9251855 Risk factors for sudden deafness: a case-control study. In order to investigate risk factors for idiopathic sudden sensorineural hearing loss (sudden deafness), a case-control study was done in 109 patients with sudden deafness who visited our hospital between 1992 and 1994, with 109 controls matched to each patient by gender and age. Odds ratio (OR) and 95% confidence interval (CI) for smoking habits, drinking habits, dietary habits, environmental noise, past history of disease, sleeping hours, appetite, fatigue, incidence of common cold were obtained. Fatigue (OR: 3.28; 95% CI: 1.36-7.90) and loss of appetite (OR: 8.00; 95% CI: 1.00-64.0) elevated the risk for sudden deafness. Those who ate many fresh vegetables were at a decreased risk (OR: 0.48; 95% CI: 0.24-0.96 for light-colored vegetables, OR: 0.55; 95% CI: 0.30-1.02 for green-yellow vegetables). Personal histories of hypertension and thyroid disease, and susceptibility to colds appeared to be positively associated with the risk (0.05 < P < 0.10). Smoking habits, drinking habits and environmental noise had no significant association with sudden deafness. These results suggested that environmental factors, including diet, may be importantly involved in the genesis of sudden deafness. abstract_id: PUBMED:20551629 Cardiovascular and thromboembolic risk factors in idiopathic sudden sensorineural hearing loss: a case-control study. Objective: The pathogenesis of idiopathic sudden sensorineural hearing loss (ISSHL) remains unknown, but vascular involvement is one of the main hypotheses. The main objective of this study was to investigate the association between ISSHL and cardiovascular and thromboembolic risk factors. Study Design: Multicentric case-control study. Methods: Ninety-six Caucasian patients with ISSHL and 179 sex- and age-matched controls were included.
Patients were evaluated on the day of the inclusion and 1 week, 3 weeks and 3 months later. Clinical information concerning personal and familial cardiovascular and thromboembolic risk factors and concerning the ISSHL was collected. Blood samples were collected for genetic analysis of factor V Leiden and G20210A polymorphism in the factor II gene. The severity of the hearing loss was classified as mild (21-40 dB), moderate (41-70 dB), severe (71-90 dB) and profound or total (>90 dB). Hearing improvement was calculated as a relative improvement of hearing thresholds using the contralateral ear as baseline. Results: Systolic blood pressure was higher in patients (130 ± 1.7 mm Hg) than in controls (124 ± 1.1 mm Hg, p = 0.003). The personal/familial history of cardiovascular events was also more prevalent in patients (p = 0.023 and p = 0.014, respectively), whereas no difference was found in the prevalence of personal cardiovascular risk factors (hypertension, diabetes mellitus, hyperlipidemia, smoking habits). There was no correlation between the audiogram type, the hearing outcome and the presence of cardiovascular risk factors. No significant difference was observed in the personal/familial history or in the presence of thromboembolic risk factors. The prothrombin and factor V mutations were uncommon in both patients and controls. The final hearing threshold was only correlated with the severity of the initial hearing loss (p < 0.001), but not influenced by the presence of vertigo, audiogram type, time elapsed from onset of ISSHL to hospitalization or failure of a previous oral therapy. Hearing stabilization was obtained at 21 days in 92% of patients. Conclusion: These results support the theory of vascular involvement as the etiology of some cases of ISSHL. The sole predictive factor of poor final hearing is the severity of the initial hearing loss. Answer: The relationship between sudden deafness (also known as sudden sensorineural hearing loss or SSNHL) and smoking has been explored in various studies, but the findings are not entirely consistent. Some studies have found no significant correlation between smoking and the overall risk of sudden deafness. For instance, a study with 297 patients did not find that tobacco smoking increased the overall risk of sudden hearing loss, although smokers did have a higher rate of recurrence of sudden hearing loss and worse outcomes from treatment compared to non-smokers and former smokers (PUBMED:9522313). Another study also reported little evidence of an association between smoking and idiopathic sudden deafness (PUBMED:11388497). Conversely, other research has indicated that there may be a connection. A study involving 163 patients found that those with sudden hearing loss had significantly more vascular risk factors, including cigarette smoking, than a healthy control group (PUBMED:3982177). Additionally, a case-control study revealed that the proportion of current smokers was significantly higher among both men and women with idiopathic SSNHL compared to controls (PUBMED:28366076). Another case-control study found that personal histories of hypertension and thyroid disease, which could be related to smoking, appeared to be positively associated with the risk of sudden deafness (PUBMED:9251855).
Furthermore, a multicentric case-control study supported the theory of vascular involvement in the etiology of some cases of ISSHL, with higher systolic blood pressure and a more prevalent personal/familial history of cardiovascular events in patients with ISSHL, although it did not find a direct correlation between smoking habits and ISSHL (PUBMED:20551629). In summary, while some studies suggest that smoking may not increase the overall risk of sudden deafness, others indicate that smoking could be one of several vascular risk factors associated with the condition. The evidence is mixed, and further research may be needed to clarify the relationship between smoking and sudden deafness.
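As a side note on the case-control figures cited in this answer (e.g. PUBMED:9251855), the odds ratios and 95% confidence intervals are the standard 2x2-table statistics. The sketch below shows the usual Woolf (log-OR) calculation with purely illustrative counts; the abstracts do not report the underlying tables.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed/unexposed cases, c/d: exposed/unexposed controls (Woolf log method)."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts only (not taken from any of the studies above):
print(odds_ratio_ci(40, 69, 18, 91))  # roughly OR 2.9 with its 95% CI
```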
Instruction: Are newly diagnosed columnar-lined oesophagus patients getting younger? Abstracts: abstract_id: PUBMED:19295439 Are newly diagnosed columnar-lined oesophagus patients getting younger? Objectives: The prevalence of columnar-lined oesophagus seems to have increased steadily in the past three decades in Europe and North America. Although the vast majority of columnar-lined oesophagus will not progress to malignancy, it is nevertheless important to identify the risk factors associated with this condition. This study investigates whether there has been a change, at diagnosis, in age of columnar-lined oesophagus patients between 1990 and 2005, or an increase in the number of patients aged less than 50 years. Methods: Data on age of diagnosis were abstracted from medical records of 7220 patients from 19 centres registered with UK National Barrett's Oesophagus Registry, between the years 1990 and 2005. Linear regression analysis was carried out to assess any trends in the mean age of diagnosis. Results: Overall there was a mean decrease in age at diagnosis for each 1-year increase in time. This equated to a mean decrease of 3 years over the study period, 1990-2005, with the greatest difference being seen in female patients. About 18% of patients in the study were aged less than 50 years at the time of diagnosis. With this group also, the trend was similar, with an increase in the number of patients aged less than 50 years, at the time of diagnosis, with increasing years. Conclusion: The mean age of diagnosis of columnar-lined oesophagus has decreased between the years 1990 and 2005 in both men and women, more so in women. This is also reflected in an increase in newly diagnosed columnar-lined oesophagus patients below the age of 50 years. abstract_id: PUBMED:32052051 Is the age of diagnosis of esophageal adenocarcinoma getting younger? Analysis at a tertiary care center. There are emerging data that patients <50 years are diagnosed with esophageal adenocarcinoma (EAC) more frequently, suggesting that the age threshold for screening should be revisited. This study aimed to determine the age distribution, outcomes, and clinical features of EAC over time. The pathology database at the Hospital of the University of Pennsylvania was reviewed from 1991 to 2018. The electronic health records and pathology were reviewed for age of diagnosis, pathology grade, race, and gender for a cohort of 630 patients with biopsy proven EAC. For the patients diagnosed from 2009 to 2018, the Penn Abramson Cancer Center Registry was reviewed for survival and TNM stage. Of the 630 patients, 10.3% (65 patients) were <50 years old [median 43 years, range 16-49]. There was no increase in the number of patients <50 years diagnosed with EAC (R = 0.133, P = 0.05). Characteristics of those <50 years versus >50 years showed no difference in tumor grade. Among the 179 eligible patients in the cancer registry, there was no significant difference in clinical or pathological stage for patients <50 years (P value = 0.18). There was no association between diagnosis age and survival (P = 0.24). A substantial subset of patients with EAC is diagnosed at <50 years. There was no increasing trend of EAC in younger cohorts from 1991 to 2018. We could not identify more advanced stage tumors in the younger cohort. There was no significant association between diagnosis age and survival. abstract_id: PUBMED:24708395 Brief report: the length of newly diagnosed Barrett's esophagus may be decreasing.
Few studies have examined the temporal trends of length in newly diagnosed Barrett's esophagus (BE) and arrived at conflicting results. The aim of this study was to identify whether there has been a change over time in the length of BE at the time of diagnosis. This is a retrospective, single-center, observational study from Houston, Texas on newly diagnosed BE between 2008 and 2013. All cases were defined by the presence of endoscopically visible BE and histologic confirmation of intestinalized columnar epithelium with goblet cells. The length of BE was measured using the Prague classification. We examined temporal changes in 1-year intervals in the length of BE at the time of diagnosis. Both the frequency and mean length of BE at diagnosis seemed to decrease over time from February 2008 to July 2013. The proportion of patients diagnosed with BE ≥3 cm per year declined during the study period, while the proportion of patients with BE ≥1 and <3 cm increased, and those with <1 cm remained stable. The mean age and the gender of patients diagnosed with BE ≥3 cm did not differ significantly by BE length or year of diagnosis. The mean length of newly diagnosed BE may be decreasing as a result of a decline in BE ≥3 cm. These observations cannot be explained by changes in age and gender. abstract_id: PUBMED:36686114 Correlation of Anxiety and Depression to the Development of Gastroesophageal Disease in the Younger Population. Gastroesophageal reflux disease (GERD) is a condition characterized by the reflux of stomach contents into the esophagus, which leads to heartburn and regurgitation. GERD has been categorized into types according to severity. The categories that have been discussed in this study are reflux esophagitis (RE), non-erosive reflux disease (NERD), and Barrett's esophagus. Our study compared various studies and showed that the subjects with GERD had a high level of anxiety and depression. Gastroesophageal reflux disease has a significant negative impact on the quality of life (QoL) by perturbing daily activities. The majority of GERD patients use antacid drugs to control their acid symptoms. However, these symptoms are sometimes difficult to control, even with the most potent proton-pump inhibitors (PPIs), and these patients tend to have a lower response rate. According to the clinical data, anxiety and depression are linked to the development of GERD. A major focus of this study is to explore psychological influences such as anxiety and depression and how they relate to GERD. This study also reviews the effect of these conditions on the younger population. It is concluded that the quality of life (QoL) of subjects with GERD is reduced by depression and anxiety. abstract_id: PUBMED:19798573 Secular trends in patients diagnosed with Barrett's esophagus. Background: It is not known whether there have been recent changes in demographic or clinical characteristics among patients newly diagnosed with Barrett's esophagus (BE), which could be a result of changes in disease epidemiology or of screening or surveillance effects, and could have clinical implications. Aims: The aim of this study was to determine whether there has been a shift in age at diagnosis of BE over the past decade. Secondary aims were to determine whether there has been a shift in patient body mass index (BMI) or BE segment length. Methods: An endoscopic database at a tertiary medical center was used to identify all esophagogastroduodenoscopies (EGDs) performed between 1997 and 2007.
The cohort was restricted to patients newly diagnosed with BE. Pathology records were reviewed to confirm biopsy findings of intestinal metaplasia (IM). Results: BE was diagnosed in 378 subjects between 1997 and 2007. Mean age at diagnosis of BE was 60.7 +/- 14.1 years, with mean BMI of 27.4 +/- 5.2 kg/m(2) and mean BE segment length of 4.7 +/- 3.7 cm. Between 1997 and 2007 there was no significant change in mean age at diagnosis, BMI, BE segment length or in proportion of men versus women newly diagnosed. Conclusions: Despite an increase in volume of EGDs performed in an open-access endoscopy unit between 1997 and 2007, there was no appreciable shift in age at diagnosis of BE. BMI and BE segment length among newly diagnosed patients also remained stable over this time period. abstract_id: PUBMED:15191506 The length of newly diagnosed Barrett's oesophagus and prior use of acid suppressive therapy. Background: The length of Barrett's oesophagus seems to correlate well with indicators of severe gastro-oesophageal reflux disease. However, it remains unknown whether prior acid suppressive therapy affects the length of newly diagnosed Barrett's oesophagus. Methods: A retrospective analysis of a well-characterized large cohort of patients with Barrett's oesophagus diagnosed between 1981 and 2000. Aim: To compare the length of Barrett's oesophagus between patients who received acid suppressive therapy prior to their diagnosis to those who did not receive such therapy. Pharmacy records were obtained from Department of the Veterans Affairs computerized records and prospectively collected research records. We further examined the association between prior use of acid suppressive therapy and the length of Barrett's oesophagus in correlation analyses, as well as multivariate linear regression analyses while adjusting for differences in year of diagnosis, age, gender, ethnicity, and the presence of intestinal metaplasia of the gastric cardia. Results : There were 340 patients with Barrett's oesophagus first diagnosed between 1981 and 2000. The average length of Barrett's oesophagus at the time of first diagnosis was 4.4 cm (range: 0.5-16). Of all patients, 139 (41%) had prior use of histamine-2 receptor antagonists, or proton-pump inhibitors (41 used both), and 201 (59%) used neither prior to the diagnosis of Barrett's oesophagus. The mean length of Barrett's oesophagus was significantly shorter in patients with prior use of proton-pump inhibitors (3.4 cm) or proton-pump inhibitors and histamine-2 receptor antagonists (3.1 cm) when compared to those with none of these medications (4.8 cm). In the multivariate linear regression model, the prior use of proton-pump inhibitors or either proton-pump inhibitors or histamine-2 receptor antagonists was an independent predictor of shorter length of Barrett's oesophagus (P = 0.0396). Conclusions: The use of acid suppressive therapy among patients is associated with a reduction in the eventual length of newly diagnosed Barrett's oesophagus with gastro-oesophageal reflux disease. This finding is independent of the year of diagnosis or demographic features of patients. Further studies are required to confirm this finding. abstract_id: PUBMED:15067623 Is the length of newly diagnosed Barrett's esophagus decreasing? The experience of a VA Health Care System. Background & Aims: Secular trends in the length of newly diagnosed Barrett's esophagus (BE) are unknown. We have anecdotally noticed less frequent new diagnoses of long segments of BE. 
Methods: This is a retrospective analysis of prospectively collected information on a well-characterized large cohort of patients with documented BE that was diagnosed between 1981 and 2000 at Southern Arizona Department of Veterans Affairs Health Care System. We examined temporal changes in the length of BE at the time of diagnosis (frequency and proportions). We conducted correlation analyses, as well as multivariate linear regression analyses, to examine the association between year of diagnosis and BE length while adjusting for temporal differences in age, sex, ethnicity, previous use of antisecretory therapy, and the presence of intestinal metaplasia (IM) of the gastric cardia. Results: There were 340 patients with BE first diagnosed between 1981 and 2000. All cases were defined by the presence of areas of salmon-colored mucosa in the lower end of the tubular esophagus and IM in biopsy specimens obtained from these areas on at least 2 endoscopic examinations. There were no significant changes over time in mean age of patients with BE (61 yr) or proportion of white patients (84%). The mean length of BE at the time of first diagnosis declined progressively over time. In the earliest period (1981-1985), mean BE length was 6 +/- 3.8 cm, whereas mean BE length in 1996-2000 was 3.6 +/- 2.9 cm. This observation was explained not only by more frequent diagnoses of short BE, but also by less frequent diagnoses of long BE (≥3 cm). There was a strong inverse correlation between BE length at the time of diagnosis and year of diagnosis (Pearson's correlation coefficient, -0.29; P < 0.0001). In the multivariate linear regression model, a more recent year of BE diagnosis was an independent predictor of shorter BE length (P < 0.0001). Similar results were obtained in analyses restricted to veteran patients or those with BE ≥3 cm. Conclusions: There has been a progressive decline in the length of newly diagnosed BE as a result of an increase in short-segment BE, but, curiously, also because of a decline in long-segment BE (≥3 cm). These changes cannot be explained fully by changes in demographic features of patients, previous therapy, or the increasing emphasis on IM of the gastric cardia. The role of referral bias and/or temporal changes in the definitions cannot be excluded. abstract_id: PUBMED:12086900 Esophagectomy for adenocarcinoma in patients 45 years of age and younger. Esophageal adenocarcinoma in patients 45 years of age or younger is uncommon. We reviewed our experience with the surgical management of these patients to determine their clinical characteristics, pathologic findings, and treatment results. Thirty-two patients were identified through our surgical pathology database, and their medical records were reviewed to determine clinical characteristics, treatment, treatment-associated mortality, tumor staging, presence of Barrett's mucosa, and survival. In our series, patients were white (100%) males (96.9%) with a history of reflux (56.3%), cigarette smoking (40.6%), and alcohol consumption (59.4%), who presented with progressive solid food dysphagia (78.1%). A prior diagnosis of Barrett's mucosa or use of antireflux medications was noted in five patients each (15.6%). There were no operative deaths. Actuarial survival was 81.1% (95% confidence interval [CI] 66.1 to 96.2) at 12 months, 68.5% (95% CI 49.5 to 87.5) at 24 months, and 56.9% (95% CI 34.6 to 79.1) at 60 months.
Our findings show that patients with esophageal adenocarcinoma 45 years of age or younger have similar clinical findings to those reported in other large series where the median age is in the sixth or seventh decade of life, supporting a uniform theory of tumor pathogenesis. Esophagectomy may be performed with low mortality, and survival is reasonable for early-stage disease. Young patients with Barrett's esophagus are not immune from the development of adenocarcinoma and need to be screened accordingly. abstract_id: PUBMED:31126651 Higher clinical suspicion is needed for prompt diagnosis of esophageal adenocarcinoma in young patients. Background: Esophageal cancer is considered a disease of the elderly. Although the incidence of esophageal adenocarcinoma in young patients is increasing, current guidelines for endoscopic evaluation of gastroesophageal reflux disease and Barrett's esophagus include age as a cutoff. There is a paucity of data on the presentation and treatment of esophageal cancer in young patients. Most studies are limited by small sample sizes, and conflicting findings are reported regarding delayed diagnosis and survival compared with older patients. Methods: A retrospective cohort study was performed using the National Cancer Database between 2004 and 2015. Patients with esophageal adenocarcinoma were divided into quartiles by age (18-57, 58-65, 66-74, 75+ years) for comparison. Clinicopathologic and treatment factors were compared between groups. Results: A total of 101,596 patients were identified with esophageal cancer. The youngest patient group (18-57 years) had the highest rate of metastatic disease (34%). No difference in tumor differentiation was observed between age groups. Younger patient groups were more likely to undergo treatment despite advanced stage at diagnosis. Overall 5-year survival was better for younger patients with local disease, but the difference was less pronounced in locoregional and metastatic cases. Conclusions: In this study, young patients were more likely to have metastatic disease at diagnosis. Advanced stage in young patients may reflect the need for more aggressive clinical evaluation in high-risk young patients. abstract_id: PUBMED:31894428 Barrett's esophagus patients are becoming younger: analysis of a large United States dataset. Background: Barrett's esophagus (BE), a complication of long-term gastroesophageal reflux disease (GERD), has been reported to affect 6-8% of those with heartburn. Most patients are males, Caucasians and middle aged. However, there are no recent demographic studies that evaluated the proportion trends of BE. We aimed to assess proportion trends of BE over an 11-year period, using a very large national dataset. Methods: This was a population-based analysis of the national Explorys dataset. Explorys is an aggregate of electronic medical record database representing over 54 million patients. Proportions of BE's variables such as age, gender, race, BMI, and treatment with PPI were recorded during an 11-year period. BE patients were classified into seven age groups (15-19, 20-29, 30-39, 40-49, 50-59, 60-69, ≥ 70 years old). Secular trends of the proportion of BE were assessed over time for each age group. Results: The majority of patients diagnosed with BE were ≥ 70 years old across all calendar years. However, the proportion of BE patients who were ≥ 70 years old has significantly decreased between 2006 and 2016 (- 19.9%, p < 0.001).
The proportion of patients with BE increased in all age groups but most prominently in the age groups, 30-39: 2.07%, 40-49: 3.64%, 50-59: 6.89%, 60-69: 6.18%, p < 0.001. BE was significantly more common in those who were Caucasian and male. PPI usage fell significantly in those who were ≥ 70 years old (- 20.8%, p < 0.001), but increased in the other remaining age groups. Conclusions: The proportion of BE patients who are 70 years and older has significantly dropped. Younger patients' groups have demonstrated the highest increase in the proportion of BE patients, especially those in the age group of 30-39 years old. Answer: Yes, newly diagnosed columnar-lined oesophagus patients are getting younger. A study investigating the change in age at diagnosis of columnar-lined oesophagus patients between 1990 and 2005 found a mean decrease in age at diagnosis over the study period, with the greatest difference seen in female patients. Approximately 18% of patients in the study were aged less than 50 years at the time of diagnosis, and there was an increase in the number of patients aged less than 50 years at the time of diagnosis with increasing years (PUBMED:19295439). Additionally, another study analyzing a large United States dataset reported that the proportion of Barrett's esophagus (BE) patients who were 70 years and older significantly dropped, while younger patient groups, especially those in the age group of 30-39 years old, demonstrated the highest increase in the proportion of BE patients (PUBMED:31894428). These findings suggest that there is a trend of younger individuals being diagnosed with columnar-lined oesophagus.
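The UK registry study cited in this answer (PUBMED:19295439) tested the secular trend with a linear regression of mean age at diagnosis on calendar year. The sketch below reproduces that kind of trend test on synthetic numbers only (the registry data are not available here); a slope of about -0.2 years per calendar year would correspond to the reported 3-year decrease over 1990-2005.

```python
from scipy.stats import linregress

# Synthetic yearly means with a built-in downward trend plus a small year-to-year wobble.
years = list(range(1990, 2006))
mean_age_at_diagnosis = [62.0 - 0.2 * (y - 1990) + 0.3 * ((-1) ** y) for y in years]

fit = linregress(years, mean_age_at_diagnosis)
print(f"slope = {fit.slope:.2f} years per calendar year (p = {fit.pvalue:.3g})")
```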
Instruction: Is allergen skin test reactivity a predictor of mortality? Abstracts: abstract_id: PUBMED:11122209 Is allergen skin test reactivity a predictor of mortality? Findings from a national cohort. Background: The importance of atopy on subsequent mortality is controversial. A clearer understanding is important as atopy is increasing worldwide. Objective: To determine the influence of allergen skin test reactivity on observed mortality of a national cohort. Methods: Baseline health status and atopic status (allergen skin testing) was measured as part of the second National Health and Nutrition Examination Survey (NHANES II), a representative sample of the US population, during the years 1976-80. Vital status and cause of death were assessed through December 31, 1992 for all examinees 30 years of age or older at baseline (n = 9252) as part of the NHANES II Mortality Study (NH2MS). The analytic sample contained 8179 men and women after excluding missing data. Allergen skin test reactivity was defined as weal ≥ 3 mm to one of eight 1 : 20 (w/v), 50% glycerinated ('No US Standard of Potency') allergens licensed by the FDA: house dust, cat, dog, Alternaria, mixed giant/short ragweed, oak, perennial rye grass, and Bermuda grass. Survival analyses were conducted using multivariate adjusted Cox regression models to evaluate the association between atopy and all-cause, cardiovascular, and cancer mortality. Results: There was no association between allergen skin test reactivity and all cause mortality: 30-44 years RR = 1.07 (95% CI 0.63-1.84); 45-59 years RR = 1.10 (0.78-1.55); 60-75 years RR = 1.07 (0.91-1.25). Results were unchanged when cancer or heart disease mortality were examined separately. The presence or absence of allergic symptoms, using the flare to define skin test reactivity, eliminating deaths in the first 5 years of follow-up, or eliminating individuals with pre-existing conditions did not alter the findings. Conclusions: Atopy, defined by allergen skin test reactivity, with or without symptoms, is not a predictor of subsequent mortality. abstract_id: PUBMED:28487839 Changes in skin reactivity and associated factors in patients sensitized to house dust mites after 1 year of allergen-specific immunotherapy. Background: Allergen-specific immunotherapy (SIT) can significantly improve symptoms and reduce the need for symptomatic medication. Objective: The aim of this study was to investigate changes in skin reactivity to house dust mites (HDMs) as an immunologic response and associated factors after 1 year of immunotherapy. Methods: A total of 80 patients with allergic airway diseases who received subcutaneous SIT with HDMs from 2009 to 2014 were evaluated. The investigated parameters were basic demographic characteristics, skin reactivity and specific IgE for HDM, serum total IgE level, blood eosinophil counts, and medication score. Results: The mean levels of skin reactivity to HDMs, blood eosinophil counts, and medication scores after 1 year were significantly reduced from baseline. In univariate comparison of the changes in skin reactivity to HDMs, age ≤30 years, HDMs only as target of immunotherapy, and high initial skin reactivity (≥2) to HDMs were significantly associated with the reduction in skin test reactivity. In multivariate analysis, high initial skin reactivity and HDMs only as target allergens were significantly associated with changes in skin reactivity to HDMs.
In the receiver operating characteristic curve of the initial mean skin reactivity to HDMs for more than 50% reduction, the optimal cutoff value was 2.14. Conclusion: This study showed significant reductions in allergen skin reactivity to HDMs after 1 year of immunotherapy in patients sensitized to HDMs. The extent of initial allergen skin reactivity and only HDMs as target allergen were important predictive factors for changes in skin reactivity. abstract_id: PUBMED:6699318 Skin test reactivity and clinical allergen sensitivity in infancy. We examined the development of skin test reactivity and clinical allergen sensitivity in infancy. Seventy-eight infants of atopic parents were skin prick tested every 4 mo from 4 to 16 mo and an additional 57 of these infants were tested at 20 mo. Wheal diameters were recorded for histamine (1 mg/ml) and specific allergen reactions by use of cow's milk, egg albumen, wheat, and Dermatophagoides pteronyssinus. The histamine mean wheal diameter was significantly lower at 4 and 8 mo compared to the older infants. Infants at 20 mo also had significantly smaller wheals than adult controls. Histamine reactivity was greater in atopic infants at 4 mo compared to nonatopic infants. Reactions to ingested allergens occurred early in infancy but were usually transient. There was a good correlation between skin sensitivity and clinical immediate-food hypersensitivity to the food concerned. In contrast, reactions to the inhaled allergen, D. pteronyssinus, occurred later in infancy, were persistent, and increased in size with age. Although we found no relationship between the acquisition of skin reactivity to D. pteronyssinus and development of the respiratory symptoms of atopic disease during the period of the study, it is possible that inhaled allergen reactivity may be related to respiratory symptoms at later ages. Despite the decreased histamine reactivity in early infancy, skin tests proved reliable markers of clinical disease in ingested but not inhalant allergen sensitivity. abstract_id: PUBMED:33512036 Allergen skin test reactivity and asthma are inversely associated with ratios of IgG4/IgE and total IgE/allergen-specific IgE in Ugandan communities. Background: Serum inhibition of allergen-specific IgE has been associated with competing IgG4 and non-specific polyclonal IgE. In allergen immunotherapy, beneficial responses have been associated with high IgG4/IgE ratios. Helminths potentiate antibody class switching to IgG4 and stimulate polyclonal IgE synthesis; therefore, we hypothesized a role for helminth-associated IgG4 and total IgE in protection against atopic sensitization and clinical allergy (asthma) in tropical low-income countries. Methods: Among community residents of Ugandan rural Schistosoma mansoni (Sm)-endemic islands and a mainland urban setting with lower helminth exposure, and among urban asthmatic schoolchildren and non-asthmatic controls, we measured total, Schistosoma adult worm antigen (SWA)-specific, Schistosoma egg antigen (SEA)-specific and allergen (house dust mite [HDM] and German cockroach)-specific IgE and IgG4 by ImmunoCAP® and/or ELISA. We assessed associations between these antibody profiles and current Sm infection, the rural-urban environment, HDM and cockroach skin prick test (SPT) reactivity, and asthma. Results: Total IgE, total IgG4 and SWA-, SEA- and allergen-specific IgE and IgG4 levels were significantly higher in the rural, compared to the urban setting. 
In both community settings, both Sm infection and SPT reactivity were positively associated with allergen-specific and total IgE responses. SPT reactivity was inversely associated with Schistosoma-specific IgG4, allergen-specific IgG4/IgE ratios and total IgE/allergen-specific IgE ratios. Asthmatic schoolchildren, compared with non-asthmatic controls, had significantly higher levels of total and allergen-specific IgE, but lower ratios of allergen-specific IgG4/IgE and total IgE/allergen-specific IgE. Conclusions And Clinical Relevance: Our immuno-epidemiological data support the hypothesis that the IgG4-IgE balance and the total IgE-allergen-specific IgE balance are more important than absolute total, helminth- or allergen-specific antibody levels in inhibition of allergies in the tropics. abstract_id: PUBMED:8172364 Allergen skin test reactivity in an unselected Danish population. The Glostrup Allergy Study, Denmark. The aim of this study was to assess the distribution of allergen skin test reactivity in an unselected Danish population. A total of 793 subjects, aged 15-69 years, were invited, and 599 (75.5%) attended. The skin prick test was performed with standardized allergen extracts of high potency. Skin reactivity occurred in 28.4% of the subjects. The frequency of skin reactivity to the specific allergens ranged from 1.5% (Cladosporium) to 12.5% (Dermatophagoides pteronyssinus), and the frequencies of skin reactivity to the allergen groups (pollen, animal dander, house-dust mites, and molds) were 17.6%, 8.7%, 14.0%, and 3.2%, respectively. Young women appeared to reflect the average skin reactivity. When compared with young women, skin reactivity to animal dander was more probable in young men (odds ratio (OR) value = 2.6; 95% confidence interval (CI) of odds ratio value = 1.1-6.1). Current smokers were less likely than nonsmokers to be skin-reactive to pollen (OR = 0.4; 95% CI = 0.3-0.7). In conclusion, allergen skin test reactivity was common, and was related to sex, age, smoking history, and probably genetic predisposition. abstract_id: PUBMED:36909903 Reactivity of nasal cavity mucosa in the nasal cow's milk allergen provocation test. Introduction: The nasal allergen provocation test plays an important role in differential diagnostics of rhinitis. Due to its informative potential, the test is also becoming increasingly used in other areas of diagnostics, including the diagnostics of food allergies. Aim: To assess the reactivity of nasal mucosa to the cow's milk protein allergens (as being widely used in powdered form in the food industry). Material And Methods: The study material consisted of a group of 31 healthy subjects not sensitized to environmental allergens including cow's milk protein allergens. The study method involved an incremental nasal provocation test with cow's milk protein evaluated using the visual analog scale and acoustic rhinometry. Results: A total of 29 out of 31 volunteers presented with a significant decrease in nasal patency (control solution: 1.112 ±0.161 vs. local allergen application 1.005 ±0.157; p < 0.004) as measured by acoustic rhinometry following the allergen dose of 12.5 μg. Slight changes in complaints were observed using the visual analog scale. Exposure to the widespread food allergens (including powdered cow's milk allergens) presents a potential risk of positive response in non-sensitized individuals. Conclusions: Further studies on dose standardization are necessary in the study area.
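Earlier in this set, the immunotherapy study (PUBMED:28487839) reports an 'optimal cutoff' of 2.14 for initial skin reactivity derived from a receiver operating characteristic (ROC) curve, without stating the criterion used. A common choice is the Youden index (sensitivity + specificity - 1); the sketch below applies it to hypothetical data with scikit-learn, purely to illustrate how such a cutoff is typically obtained.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical patients: outcome 1 = >50% reduction in skin reactivity after 1 year, 0 = not.
outcome = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0])
initial_reactivity = np.array([3.1, 2.6, 2.2, 1.4, 2.9, 1.8, 1.1, 2.4, 2.0, 1.6, 3.4, 1.2])

fpr, tpr, thresholds = roc_curve(outcome, initial_reactivity)
youden_j = tpr - fpr
best_cutoff = thresholds[np.argmax(youden_j)]
print(f"optimal cutoff by Youden index: {best_cutoff:.2f}")
```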
abstract_id: PUBMED:18547322 Early infection with Trichuris trichiura and allergen skin test reactivity in later childhood. Background: Allergic diseases cause a large and increasing burden in developed countries and in urban centres in middle-income countries. The causes of this increase are unknown and, currently, there are no interventions to prevent the development of allergic diseases. The 'hygiene hypothesis' has tried to explain the increase through a reduction in the frequency of childhood infections causing a failure to program the immune system for adequate immune regulation. Intestinal helminth parasites are prevalent in childhood in developing countries and are associated with a lower prevalence of allergen skin test reactivity and asthma. Objectives: To investigate whether children who had intestinal helminth infections during early childhood have a lower prevalence of allergen skin test reactivity later in childhood. Methods: We re-visited a population of 1055 children from whom stool samples had been collected for detection of intestinal helminth infections for another study, and collected new stool samples and performed allergen skin prick testing. Information on potential confounding variables was collected. Results: Children with heavy infections with Trichuris trichiura in early childhood had a significantly reduced prevalence of allergen skin test reactivity in later childhood, even in the absence of T. trichiura infection at the time of skin testing in later childhood. Conclusion: Early heavy infections with T. trichiura may protect against the development of allergen skin test reactivity in later childhood. Novel treatments to program immune-regulation in early childhood in a way that mimics the effects of early infections with T. trichiura may offer new strategies for the prevention of allergic disease. abstract_id: PUBMED:33609533 Array-based measurements of aero-allergen-specific IgE correlate with skin-prick test reactivity in asthma regardless of specific IgG4 or total IgE measurements. Skin prick testing (SPT) and measurement of serum allergen-specific IgE (sIgE) are used to investigate asthma and other allergic conditions. Measurement of serum total IgE (tIgE) and allergen-specific IgG4 (sIgG4) may also be useful. The aim was to ascertain the correlation between these serological parameters and SPT. Sera from 60 suspected asthmatic patients and 18 healthy controls were assayed for sIgE and sIgG4 reactivity against a panel of 70 SPT allergen preparations, and for tIgE. The patients were also assessed by skin prick tests for reactivity to cat, dog, house dust mite and grass allergens. Over 50% of the patients had tIgE levels above the 75th percentile of the controls. 58% of patients and 39% of controls showed sIgE reactivity to ≥1 allergen. The mean number of allergens detected by sIgE was 3.1 in suspected asthma patients and 0.9 in controls. 58% of patients and 50% of controls showed sIgG4 reactivity to ≥1 allergen. The mean number of allergens detected by sIgG4 was 2.5 in patients and 1.7 in controls. For the patients, a strong correlation was observed between clinical SPT reactivity and serum sIgE levels to cat, dog, house dust mite (HDM) and grass allergens. SPT correlations using sIgE/sIgG4 or sIgE/tIgE ratios were not markedly higher. The measurement of serum sIgE by microarray using SPT allergen preparations showed good correlation with clinical SPT reactivity to cat, dog, HDM and grass allergens. 
This concordance was not improved by measuring tIgE or sIgG4. abstract_id: PUBMED:25765942 Allergen skin prick test should be adjusted by the histamine reactivity. Background: Skin prick test results are mostly reported as mean wheal diameter obtained with one concentration of allergen. Differences in technique between personnel cause variation in wheal size. The research question was whether the influence of differences in skin prick test technique among assistants and centers can be reduced by relating the allergen wheal response to that of histamine. Methods: Two methods for estimating skin reactivity, the method of Nordic Guidelines using histamine as a reference and the method of Brighton et al. [Clin Allergy 1979;9:591-596] not using histamine as a reference, were applied to data from two biological standardization trials, using the same batch of freeze-dried timothy pollen preparation. Results: The concentration defining the Nordic biological unit, defined as a concentration of allergen eliciting a wheal of the same size as that of histamine dihydrochloride 10 mg/ml, did not differ between the centers. When not using histamine as a reference, applying the method of Brighton et al., there was a 15-fold difference in the estimate of the biological activity between the trials that was eliminated by adjusting the allergen response to that of the histamine reference. Conclusions: To reduce the influence of differences in test technique among assistants and centers, responses to allergen-induced skin prick tests should be compared to those of histamine. abstract_id: PUBMED:23920475 Recombinant allergen-based provocation testing. Over the last 25 years, recombinant allergens from all important allergen sources have been cloned and are now available as recombinant proteins. These molecules can be produced in practically unlimited amounts without biological or batch-to-batch variability. It has been shown in provocation tests that recombinant allergens have similar clinical effects to their natural counterparts. With the help of these tools it is possible to reveal the precise reactivity profiles of patients and to uncover and differentiate cross-reactivity from genuine sensitization to an allergen source. Although it has been shown some time ago that it would be possible to replace crude allergen extracts with recombinant allergens for skin prick testing, and even though the use of allergen components can improve routine diagnosis, these tools are still not available for clinical routine applications. The use of provocation tests is a crucial step in the development of new, hypoallergenic vaccines for therapy of allergic disease. Here we describe important provocation methods (skin prick test, intradermal test, atopy patch test, nasal provocation, colonoscopic provocation test) and give an overview of the clinical provocation studies which have been performed with recombinant allergens so far. Answer: No, allergen skin test reactivity is not a predictor of mortality. A study conducted as part of the second National Health and Nutrition Examination Survey (NHANES II) and its Mortality Study (NH2MS) found no association between allergen skin test reactivity and all-cause mortality across different age groups. The study included a national cohort of 8179 men and women aged 30 years or older and assessed their vital status and cause of death through December 31, 1992. The allergen skin test reactivity was defined as a weal >= 3 mm to one of eight common allergens.
Survival analyses using multivariate adjusted Cox regression models showed that atopy, defined by allergen skin test reactivity, with or without symptoms, did not predict subsequent mortality, including when examining cancer or heart disease mortality separately (PUBMED:11122209).
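The mortality analysis summarised in this answer rests on Cox proportional-hazards regression. A minimal sketch of that kind of model is shown below using the lifelines package and a small fabricated data set; the column names and values are assumptions for illustration, not NHANES II variables, and the point is only the structure of the call (follow-up time, event indicator, and covariates such as atopy and age).

    # Minimal Cox proportional-hazards sketch: does baseline atopy predict
    # all-cause mortality after adjustment for age? Data are fabricated.
    import pandas as pd
    from lifelines import CoxPHFitter  # pip install lifelines

    df = pd.DataFrame({
        "follow_up_years": [14.2, 9.8, 15.0, 6.4, 12.1, 15.0, 11.3, 15.0],
        "died":            [1,    1,   0,    1,   0,    0,    1,    0],
        "atopy":           [1,    0,   1,    0,   1,    0,    0,    1],
        "age":             [62,   71,  66,   48,  50,   39,   74,   58],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="follow_up_years", event_col="died")
    cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs for atopy and age

A hazard ratio for atopy close to 1 with a confidence interval spanning 1 would correspond to the null finding reported above.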
Instruction: Can high central nervous system penetrating antiretroviral regimens protect against the onset of HIV-associated neurocognitive disorders? Abstracts: abstract_id: PUBMED:35850626 The effect of antiretroviral therapy with high central nervous system penetration on HIV-related cognitive impairment: a systematic review and meta-analysis. Chronic complications are a significant concern for people living with HIV/AIDS (PLWHA). HIV-associated neurocognitive disorders (HAND) are prevalent in PLWHA. Yet, the efficacy of medications that penetrate the central nervous system (CNS) at preventing or slowing the progression of HAND remains largely unknown. The objective of this study was to determine whether high CNS penetration effectiveness (CPE) regimens improve neurocognitive test scores in PLWHA on combined antiretroviral therapy (cART). Primary literature evaluating cognitive outcomes based on CPE score of cART regimens in PLWHA was assembled from PubMed/Medline and EMBASE. Both randomized controlled trials and observational studies with at least 12 weeks of follow-up were included. A meta-analysis was conducted to calculate the standardized mean difference. Eight trials including a total of 3,303 patients with 13,103 person-years of follow-up were included in the systematic review. Four trials (n = 366 patients) met our inclusion criteria and were included in the meta-analysis. In the meta-analysis, HIV regimens with a high CPE score did not affect NPZ-4 or GDS scores (standardized mean difference (SMD) 0.10, 95% CI -0.19, 0.38; I2 = 26%). Future studies with larger sample sizes are warranted to prospectively evaluate the relationship between CPE and progression of HAND. abstract_id: PUBMED:31790377 The impact of HIV central nervous system persistence on pathogenesis. The persistence of HIV in the central nervous system is somewhat controversial, particularly in the context of HIV viral suppression from combined antiretroviral therapy. Further, its significance in relation to HIV pathogenesis in the context of HIV-associated neurocognitive disorders, systemic HIV pathogenesis, and eradication in general, but especially from the brain, is even more contentious. This review will discuss each of these aspects in detail, highlighting new data, particularly from recent conference presentations. abstract_id: PUBMED:21191673 Pathogenesis of HIV in the central nervous system. HIV can infect the brain and impair central nervous system (CNS) function. Combination antiretroviral therapy (cART) has not eradicated CNS complications. HIV-associated neurocognitive disorders (HAND) remain common despite cART, although attenuated in severity. This may result from a combination of factors including inadequate treatment of HIV reservoirs such as circulating monocytes and glia, decreased effectiveness of cART in CNS, concurrent illnesses, stimulant use, and factors associated with prescribed drugs, including antiretrovirals. This review highlights recent investigations of HIV-related CNS injury with emphasis on cART-era neuropathological mechanisms in the context of both US and international settings. abstract_id: PUBMED:34291558 Interrogating the impact of combination antiretroviral therapies on HIV-associated neurocognitive disorders.
Objectives: Although the advent of Combination Antiretroviral Therapy (cART) has greatly reduced the prevalence of HIV-Associated Dementia, the most severe form of HIV-Associated Neurocognitive Disorder (HAND), the incidence of the milder forms of HAND has risen. The explanations proposed include persistent central nervous system (CNS) viraemia and the neurotoxicity of chronic cART regimens. Nonetheless, controversies in HAND prevalence estimates, alongside a lack of consensus on the significance of CNS Penetration Effectiveness (CPE), have added to the complexity of elucidating the role of cART in HAND. The present review will evaluate the evidence underlying these explanations, as well as highlighting the need for improved trial designs and the incorporation of emerging biomarkers and neuroimaging tools. Methods: A review of the current literature investigating cART neurotoxicity, controversies in HAND prevalence estimates, CNS Penetration Effectiveness, and neuroprotective adjuvant therapies. Conclusions: Ultimately, the inadequacy of cART in achieving complete preservation of the CNS underscores the imminent need for neuroprotective adjuvant therapies, where the efficacy of combining multiple adjuvant classes presents a potential therapeutic frontier which must be interrogated. abstract_id: PUBMED:19415500 HIV infection and the central nervous system: a primer. The purpose of this brief review is to prepare readers who may be unfamiliar with Human Immunodeficiency Virus/Acquired Immune Deficiency Syndrome (HIV/AIDS) and the rapidly accumulating changes in the epidemic by providing an introduction to HIV disease and its treatment. The general concepts presented here will facilitate understanding of the papers in this issue on HIV-associated neurocognitive disorders (HAND). Toward that end, we briefly review the biology of HIV and how it causes disease in its human host, its epidemiology, and how antiretroviral treatments are targeted to interfere with the molecular biology that allows the virus to reproduce. Finally, we describe what is known about how HIV injures the nervous system, leading to HAND, and discuss potential strategies for preventing or treating the effects of HIV on the nervous system. abstract_id: PUBMED:22156215 Central nervous system complications in HIV disease: HIV-associated neurocognitive disorder. HIV-associated neurocognitive disorder (HAND) is the result of neural damage caused by HIV replication and immune activation. Potent antiretroviral therapy has reduced the prevalence of severe HAND but not mild to moderate HAND. Brief symptom questionnaires, screening tests, and neuropsychological tests can be used with relative ease in the clinic to identify cognitive and neurologic deficits and to track patient status. Increasing data on pharmacokinetics of antiretrovirals in cerebrospinal fluid (CSF) have permitted formulation of central nervous system (CNS) penetration-effectiveness (CPE) rankings for single drugs and combinations. Available data indicate that regimens with higher CPE scores are associated with lower HIV RNA levels in CSF and improvement in neurocognitive functioning. This article summarizes a presentation by Scott Letendre, MD, at the IAS-USA live continuing medical education course held in San Francisco in May 2011. abstract_id: PUBMED:25781980 HIV-associated neurocognitive disorders and central nervous system drug penetration: what next?
The current prevalence of cognitive impairment in HIV-infected individuals is surprisingly high, even in those with undetectable plasma HIV RNA. The aetiology is unknown but one possibility is inadequate control of persistent central nervous system (CNS) HIV infection. The CNS Penetration Effectiveness (CPE) rank has been proposed to predict how well an antiretroviral regimen treats CNS infection. Fabbiani et al. report that 'correcting' the CPE rank of each drug in an individual's regimen for the results of genotypic susceptibility (the CPE-GSS score) results in better ability to predict whether the regimen will improve cognition. The CPE-GSS score may help us better understand the aetiology of HIV-associated cognitive impairment. Whether it will be useful in the management of individual patients requires further study. abstract_id: PUBMED:33604875 Clinical Treatment Options and Randomized Clinical Trials for Neurocognitive Complications of HIV Infection: Combination Antiretroviral Therapy, Central Nervous System Penetration Effectiveness, and Adjuvants. The etiology and pathogenesis of human immunodeficiency virus type-I (HIV)-associated neurocognitive disorders (HAND) remain undetermined and are likely the product of multiple mechanisms. This can mainly include neuronal injury from HIV, inflammatory processes, and mental health issues. As a result, a variety of treatment options have been tested including NeuroHIV-targeted regimens based on the central nervous system (CNS) penetration effectiveness (CPE) of antiretroviral therapy (ART) and adjuvant therapies for HAND. NeuroHIV-targeted ART regimens have produced consistent and statistically significant HIV suppression in the CNS, but this is not the case for cognitive and functional domains. Most adjuvant therapies such as minocycline, memantine, and selegiline have negligible benefit in the improvement of cognitive function of people living with HIV (PLWH) with mild to moderate neurocognitive impairment. Newer experimental treatments have been proposed to target cognitive and functional symptoms of HAND as well as potential underlying pathogenesis. This review aims to provide an analytical overview of the clinical treatment options and clinical trials for HAND by focusing on NeuroHIV-targeted ART regimen development, CPE, and adjuvant therapies. abstract_id: PUBMED:24472743 Can high central nervous system penetrating antiretroviral regimens protect against the onset of HIV-associated neurocognitive disorders? Objective: To assess changes over time in neuropsychological test results (NPr) and risk factors among a regularly followed HIV-infected patient population. Methods: Prospective cohort of HIV-infected patients randomly selected to undergo neuropsychological follow-up. Test score was adjusted for age, sex and education. Patients were divided into five groups: normal tests, neuropsychological deficit (one impaired cognitive domain), asymptomatic neurocognitive disorders (ANIs), mild neurocognitive disorders (MNDs) and HIV-associated dementia (HAD). Demographic and background parameters including CSF drug concentration penetration effectiveness (CPE) score 2010 were recorded. Changes in NPr and associated risk factors were analyzed. Results: Two hundred and fifty-six patients underwent neuropsychological tests and 96 accepted follow-up approximately 2 years later. The groups were comparable. Upon neuropsychological retesting, six patients improved, 31 worsened and 59 were stable.
The proportion of patients with HIV-associated neurocognitive disorders (HANDs) rose from 26 to 45%, with ANIs and MNDs still mostly represented. Most patients initially diagnosed with HANDs remained stable, five of 25 showed clinical improvement and three of 25 deteriorated. Of 33 patients with normal tests, four deteriorated, whereas 24 of 38 with initial neuropsychological deficit had poorer NPr, and contributed most of the new HAND cases. Patients with clinical deterioration had a lower CPE score both at inclusion (6.9 vs. 8.1; P = 0.005) and at the end of follow-up (7.2 vs. 7.8; P = 0.08) than those with improved or stable performance. This was confirmed by multivariate analysis. Conclusion: Patients with higher CPE scores upon inclusion and at the end of follow-up were at lower risk of clinical worsening, suggesting that combination antiretroviral therapy with better CSF penetration could protect against cognitive deterioration. abstract_id: PUBMED:34980928 Central nervous system disorders in HIV-infected individuals using distinct antiretroviral drugs. Neuropsychiatric disorders and central nervous system-related symptoms are very common in people with HIV and can have a very negative impact on their quality of life and worsen the prognosis of the disease. These disorders are multifactorial in origin, but may be triggered or worsened by the use of certain antiretroviral treatments. This paper reviews the epidemiology of neuropsychiatric disorders and symptoms in people with HIV, the recommendations and tools available for their early assessment, as well as the neurotoxicity of the main families of antiretroviral (ARV) drugs. It is important to focus on improvement towards the detection of these disorders during the first evaluation or patient follow-up, aimed at improving quality of life. Because of the central nervous system neurotoxicity profile of different antiretroviral drugs, proactive assessment of neuropsychiatric disorders and symptoms prior to treatment start and during follow-up is necessary. Answer: The relationship between high central nervous system (CNS) penetrating antiretroviral regimens and protection against the onset of HIV-associated neurocognitive disorders (HAND) is complex and not fully understood. Some studies suggest that antiretroviral therapy (ART) regimens with higher CNS penetration effectiveness (CPE) scores may be associated with lower HIV RNA levels in cerebrospinal fluid (CSF) and improvement in neurocognitive functioning (PUBMED:22156215). Moreover, patients with higher CPE scores upon inclusion and at the end of follow-up were at lower risk of clinical worsening, suggesting that combination antiretroviral therapy with better CSF penetration could protect against cognitive deterioration (PUBMED:24472743). However, the evidence is not entirely consistent. A systematic review and meta-analysis found that HIV regimens with a high CPE score did not significantly affect neurocognitive test scores in people living with HIV/AIDS (PLWHA) on combined antiretroviral therapy (cART), indicating that high CNS penetration alone may not be sufficient to improve neurocognitive outcomes (PUBMED:35850626). Additionally, the persistence of HIV in the CNS and its significance in relation to HAND, even in the context of viral suppression from cART, is still somewhat controversial (PUBMED:31790377). The current prevalence of cognitive impairment in HIV-infected individuals remains high, even in those with undetectable plasma HIV RNA, and the aetiology is unknown. 
It is possible that inadequate control of persistent CNS HIV infection contributes to this issue (PUBMED:25781980). Despite consistent and statistically significant HIV suppression in the CNS by NeuroHIV-targeted ART regimens based on CPE, this has not translated into clear cognitive and functional improvements (PUBMED:33604875). In conclusion, while there is some evidence to suggest that high CNS penetrating antiretroviral regimens may offer some protection against the onset of HAND, the data is not conclusive, and further research is needed to fully understand the relationship between CPE and the progression of HAND (PUBMED:35850626). The complexity of HAND pathogenesis and the potential neurotoxicity of chronic cART regimens also contribute to the ongoing debate and highlight the need for improved trial designs and the incorporation of emerging biomarkers and neuroimaging tools (PUBMED:34291558).
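For readers unfamiliar with how a regimen-level CPE score is assembled, the sketch below sums per-drug CNS penetration ranks (scored 1-4 in the 2010 revision) across the drugs in a regimen, which is consistent with regimen-level means such as the 6.9 versus 8.1 reported above (PUBMED:24472743). The per-drug ranks in the dictionary are illustrative placeholders rather than the published table, which should be consulted for actual assignments.

    # Sketch of a regimen-level CPE score: each antiretroviral carries a CNS
    # penetration rank (1 = lowest, 4 = highest in the 2010 revision) and the
    # regimen score is the sum of its components' ranks.
    # The values below are illustrative placeholders, not the published table.
    ILLUSTRATIVE_CPE_RANK = {
        "zidovudine": 4,
        "abacavir": 3,
        "emtricitabine": 3,
        "tenofovir": 1,
        "efavirenz": 3,
        "darunavir/r": 3,
        "raltegravir": 3,
    }

    def regimen_cpe(drugs):
        """Sum per-drug CPE ranks; raises KeyError for drugs not in the table."""
        return sum(ILLUSTRATIVE_CPE_RANK[d] for d in drugs)

    print(regimen_cpe(["tenofovir", "emtricitabine", "efavirenz"]))  # 7
    print(regimen_cpe(["zidovudine", "abacavir", "darunavir/r"]))    # 10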
Instruction: Atrial fibrillation in young stroke patients: do we underestimate its prevalence? Abstracts: abstract_id: PUBMED:23678962 Atrial fibrillation in young stroke patients: do we underestimate its prevalence? Background And Purpose: The prevalence of atrial fibrillation (AF) in young stroke patients has rarely been reported and is considered an uncommon ischaemic stroke (IS) aetiology. Our objective was to analyse the prevalence of AF in IS patients up to 50 years of age and its relationship with stroke severity and outcomes. Methods: This was an observational study of consecutive IS patients up to 50 years of age admitted to a stroke centre during a 5-year period (2007-2011). A complete cardiology study was performed with a daily electrocardiogram and cardiac monitoring for 72 h as well as echocardiography. In cases of stroke of unknown aetiology a 24-h Holter monitoring was performed. Baseline data, previously or newly diagnosed AF, structural heart disease (SHD) (valvulopathy/cardiomyopathy), stroke severity on admission as measured by the National Institutes of Health Stroke Scale (NIHSS) (moderate-severe stroke if NIHSS ≥ 8) and 3-month outcomes according to the modified Rankin Scale (mRS) (good outcome if mRS ≤ 2) were analysed. AF was classified as AF associated with SHD (AF-SHD) and AF not associated with SHD (AF-NSHD). Results: One hundred and fifty-seven patients were included (mean age 43 years, 58.6% male). Fourteen subjects (8.9%) presented with AF, four with AF-NSHD and 10 with AF-SHD. AF was previously known in 10 patients (6.3%), two with AF-NSHD and eight with AF-SHD. A multivariate analysis showed an independent association between AF and moderate-severe IS (odds ratio 3.771, 95% CI 1.182-12.028), but AF was not an independent prognostic factor. Conclusion: AF may be more common than expected in young patients with IS and is associated with increased NIHSS scores. abstract_id: PUBMED:32689612 Prevalence of ischemic stroke and atrial fibrillation in young patients with migraine national inpatient sample analysis. Objective: To estimate the prevalence of ischemic stroke (IS) and atrial fibrillation (AF) in young patients with migraine and to identify the independent predictors of IS in a large cohort of hospitalized patients. Methods: A cohort of patients with migraine with aura (MA) and migraine without aura (MO) was identified from the National Inpatient Sample database for the years 2012 to 2015. Ischemic stroke was identified by the International Classification of Diseases-9-CM codes. Binary logistic regression and Chi-square tests were utilized. Results: A total number of 834,875 young patients (18-44 years) were included in this study with a mean age of 33 years. The prevalence of IS was 1.3% and was significantly higher in patients with MA (3.7% versus 1.2%, P < 0.001). The prevalence of AF was 0.9% and it was significantly higher in patients with MA (1.2% versus 0.8%, P < 0.001). Migraine with aura was an independent predictor of IS (OR 3.23, 95% CI 3.05-3.42, P < 0.001) and AF (OR 1.63, 95% CI 1.42-1.88, P < 0.001). Other predictors of IS were hypertension (OR 2.2, 95% CI 2.12-2.3, P < 0.001), diabetes mellitus (DM) (OR 1.37, 95% CI 1.31-1.42, P < 0.001), peripheral vascular disease (PVD) (OR 12.08, 95% CI 11.23-12.98, P < 0.001) and smoking (OR 1.37, 95% CI 1.31-1.42, P < 0.001). Conclusion: In this relatively large study, the overall prevalence of IS in young migraine patients was low at 1.3%.
The prevalence of IS and AF was significantly higher in patients with MA. Presence of PVD confers a high risk of IS in young patients with migraine. Migraine aura was observed to be an independent predictor of IS and AF in patients with history of migraine. Optimal control of vascular risk factors in migraine patients appears to be indicated despite the overall low risk. abstract_id: PUBMED:30565987 Heart Failure With Preserved Ejection Fraction in the Young. Background: Heart failure with preserved ejection fraction (HFpEF), traditionally considered a disease of the elderly, may also affect younger patients. However, little is known about HFpEF in the young. Methods: We prospectively enrolled 1203 patients with HFpEF (left ventricular ejection fraction ≥50%) from 11 Asian regions. We grouped HFpEF patients into very young (<55 years of age; n=157), young (55-64 years of age; n=284), older (65-74 years of age; n=355), and elderly (≥75 years of age; n=407) and compared clinical and echocardiographic characteristics, quality of life, and outcomes across age groups and between very young individuals with HFpEF and age- and sex-matched control subjects without heart failure. Results: Thirty-seven percent of our HFpEF population was <65 years of age. Younger age was associated with male preponderance and a higher prevalence of obesity (body mass index ≥30 kg/m2; 36% in very young HFpEF versus 16% in elderly) together with less renal impairment, atrial fibrillation, and hypertension (all P < 0.001). Left ventricular filling pressures and prevalence of left ventricular hypertrophy were similar in very young and elderly HFpEF. Quality of life was better and death and heart failure hospitalization at 1 year occurred less frequently (P < 0.001) in the very young (7%) compared with elderly (21%) HFpEF. Compared with control subjects, very young HFpEF had a 3-fold higher death rate and twice the prevalence of hypertrophy. Conclusions: Young and very young patients with HFpEF display similar adverse cardiac remodeling compared with their older counterparts and very poor outcomes compared with control subjects without heart failure. Obesity may be a major driver of HFpEF in a high proportion of HFpEF in the young and very young. abstract_id: PUBMED:22779430 High prevalence of silent brain infarction in patients presenting with mechanical heart valve thrombosis. Background: Symptomatic thromboembolic events including stroke occur frequently in patients with mechanical heart valves, particularly among those who are poorly anticoagulated. Objective: This study set out to determine the prevalence of silent brain infarction (SBI) in this population. Methods: This was a post hoc analysis of data from a randomized controlled trial carried out in a tertiary-care academic medical center. The trial included participants from a randomized controlled trial of fibrinolytic therapy (FT) in patients with left-sided prosthetic valve thrombosis (PVT), who had undergone pre-treatment computed tomography (CT) scans of the brain. The prevalence of SBI in this population was investigated. Main Outcome Measure: Prevalence of silent brain infarction. Results: Silent brain infarction was present in 27 of 72 patients (37.5%; 95% confidence interval [CI] 27.2, 49.1). Most patients with SBI (57; 82.6%) had sub-therapeutic anticoagulation at presentation. We identified baseline characteristics that were associated with the presence of SBI using a logistic regression model.
Atrial fibrillation (AF) was strongly associated with the presence of SBI (odds ratio [OR] 5.60; 95% CI 1.32, 23.87; p = 0.02). Conclusion: The high prevalence of SBI among this cohort of young patients with mechanical heart valves is alarming and calls for urgent efforts to improve the quality of anticoagulation. Clinical Trial Registration: Registered in the US National Institutes of Health registry at http://clinicaltrials.gov as NCT00232622. abstract_id: PUBMED:32430233 Temporal Trends in the Risk Factors and Clinical Characteristics of Ischemic Stroke in Young Adults. Objectives: This study aimed to analyze the risk factors of ischemic stroke in young adults of different ages; explore the changes in these risk factors with time; analyze the clinical characteristics of ischemic stroke in young adults; and assess how to better prevent ischemic stroke in young adults. Methods: All patients with ischemic stroke who presented to the Department of Emergency Medicine, Nanjing Drum Tower Hospital, Affiliated Hospital of Nanjing University Medical School were enrolled. The data of patients aged 18-50 years were retrospectively evaluated for two periods, January-December 2008 and January-December 2018. Additionally, we collected the data of patients aged 51-90 years with ischemic stroke in the same ward in 2018. The subjects were divided into three groups: ischemic stroke in young people in 2008 ("Youth 2008"), ischemic stroke in young people in 2018 ("Youth 2018"), and ischemic stroke in elderly people in 2018 ("Senior 2018"). Risk factors, clinical characteristics and test indices were recorded and analyzed statistically. Results: The "Youth 2008" group included 28 patients: 19 males (67.9%) and 9 females (31.2%), with a male-to-female ratio of 2.11:1. The "Youth 2018" group included 23 patients: 20 males (87.0%) and 3 females (13.0%), with a male-to-female ratio of 6.67:1. The "Senior 2018" group included 210 patients: 150 males (71.4%) and 60 females (28.6%), with a male-to-female ratio of 2.50:1. The risk factors in "Youth 2018" were higher than those in "Youth 2008" in terms of hypertension, hyperglycemia, and hypercholesterolemia without significant difference. Smoking and hypertrophic cardiomyopathy were significantly increased (P < 0.05) in this population. Smoking, hypercholesterolemia, and hypertrophic cardiomyopathy were more prevalent among the "Youth 2018" group than among the "Senior 2018" group, whereas carotid plaques, hypertension, and atrial fibrillation were less common in the younger group (P < 0.05). There was no significant difference between the younger and older groups in terms of thrombolysis rate, cerebral infarction type, and complications, except pulmonary infections (P < 0.05). Conclusions: There was no significant change in the main risk factors of ischemic stroke in young adults during the 10-year period. Traditional risk factors (smoking and hypertrophic cardiomyopathy) were still common but with a significantly greater prevalence, whereas carotid plaques, hypertension, and atrial fibrillation had become less common. The clinical characteristics, other than pulmonary infection, were not significantly different between the younger and older patients with ischemic stroke. abstract_id: PUBMED:24625564 Prevalence of stroke and the need for thromboprophylaxis in young patients with atrial fibrillation: a cohort study. Atrial fibrillation is the most common cardiac arrhythmia, and age is a well-established independent risk factor for stroke in these patients.
Whereas high-risk patients clearly benefit from anticoagulation to prevent stroke, less is known about how to treat low-risk patients. Despite the recent guidelines and studies demonstrating no benefit and excess bleeding risk with aspirin, many low-risk patients still receive this medication. Our objective was to determine the stroke rate in young patients with atrial fibrillation, a group of previously unstudied and predominantly low-risk patients. We hypothesized that the event rate would be so low as to preclude benefit from antithrombotic medications. A retrospective chart review identified patients with atrial fibrillation between the ages of 18 and 35. Exclusion criteria included no ECG documentation of atrial fibrillation, anticoagulation, except around the time of cardioversion, and surgical valve disease. The primary outcome was stroke during the period of observation. The final cohort included 99 patients, mean age 27.6 years, followed for a mean of 4.3 years. Mean CHADS2 and CHA2DS2-VASc scores were 0.26 and 0.4, respectively. A total of 42.4% were taking aspirin for over 50% of the time. There was one event identified, a transient ischemic attack in a man not on aspirin with CHADS2 and CHA2DS2-VASc scores of 1, resulting in event rates of 0.234 per 100 patient-years overall or 0.392 among those not on aspirin. Patients with nonvalvular atrial fibrillation under age 35 have an exceedingly low stroke risk. We assert that aspirin may be unnecessary for most patients in this population, especially those with a CHA2DS2-VASc score of 0. abstract_id: PUBMED:25745305 Pattern and risk factors of stroke in the young among stroke patients admitted in medical college hospital, Thiruvananthapuram. Background: Stroke in the young is particularly tragic because of its potential to create a long-term burden on the victims, their families, and the community. There have been relatively few studies of young stroke in Kerala's socio-economic setting, and fewer still that cover these dimensions of stroke in the young. Objective: To study the prevalence, patterns and risk factors of young stroke. Settings And Design: A cross-sectional study with case control comparison at Government Medical College Hospital, Thiruvananthapuram, Kerala, India. Materials And Methods: A total of 100 stroke patients were identified over a period of 2 months, and data were collected using a questionnaire developed for the purpose. Results: Of 100 stroke patients, 15 had stroke in the young, among which 9 (60%) had ischaemic stroke. Hypertension was the most common risk factor. Smoking, alcohol, atrial fibrillation, and hyperlipidemia were found to be more common in cases (young stroke) when compared with controls. Alcohol use and atrial fibrillation were significantly higher among young stroke patients. Physical inactivity was significantly less common in those with stroke in the young than in the elderly. Atrial fibrillation emerged as an independent risk factor of stroke in the young with adjusted odds ratio of 6.18 (1.31-29.21). Conclusion: In all, 15% of total stroke occurred in young adults <50 years. The proportion of hemorrhagic stroke in young adults is higher than in the elderly. Atrial fibrillation is identified as an independent risk factor of stroke in the young. Compared with stroke in the elderly, alcohol use, smoking, hyperlipidemia, and cardiac diseases, which are known risk factors, are more common in young stroke.
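The event rate quoted in the young-AF cohort above (PUBMED:24625564) can be checked with simple person-time arithmetic, approximating total follow-up as the number of patients multiplied by the mean follow-up; the study's exact person-time accounting may differ slightly.

    # Back-of-the-envelope check of the event rate from PUBMED:24625564:
    # events per 100 patient-years, with person-time approximated as
    # (number of patients) x (mean follow-up in years).
    events = 1
    n_patients = 99
    mean_follow_up_years = 4.3

    patient_years = n_patients * mean_follow_up_years            # ~425.7 patient-years
    rate_per_100_py = 100 * events / patient_years
    print(f"{rate_per_100_py:.3f} events per 100 patient-years")  # ~0.235, matching ~0.234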
abstract_id: PUBMED:37337492 Postoperative Atrial Fibrillation Following Off-Pump Coronary Artery Bypass Graft Surgery: Elderly Versus Young Patients. Background: Atrial fibrillation (AF) is one of the common rhythm disturbances that occur after coronary artery bypass graft (CABG) surgery. Postoperative atrial fibrillation (POAF) can lead to thromboembolic events, hemodynamic instability, and prolonged hospital stay, affecting morbidity and influencing short and long-term outcomes after CABG. Methodology: This prospective comparative study was conducted between May 2018 and April 2020. This study aimed to compare the prevalence of POAF following off-pump coronary artery bypass graft surgery (OPCAB) between elderly and young patients. Additionally, we aimed to determine the risk factors associated with POAF following OPCAB in the elderly compared to young patients. Patients aged ≥65 years were considered elderly, and those aged <65 years were considered young. A total of 120 patients (60 in the elderly group and 60 in the young group) were included in this study and evaluated to correlate the preoperative and intraoperative risk factors with postoperative outcomes during the hospital stay. Results: The prevalence of POAF following OPCAB in the elderly was significantly higher compared to young patients (48.3% vs. 20%, p = 0.002). The elderly group also had a significantly longer intensive care unit stay (p = 0.001) and hospital stay (p = 0.001). In an unadjusted logistic regression model, age (odds ratio (OR) = 3.74, 95% confidence interval (CI) = 1.66-8.41, p = 0.001), preoperative plasma B-type natriuretic peptide (OR = 1.01, 95% CI = 1.00-1.01, p = 0.001), and left atrial diameter (OR = 1.10, 95% CI = 1.03-1.17, p = 0.001) were significantly associated with POAF. However, in an adjusted logistic regression model, age was found to be an independent predictor (OR = 1.31, 95% CI = 1.14-1.52, p < 0.0001) of POAF following OPCAB. Although stroke developed in the elderly (p > 0.05), no mortality was observed postoperatively. Conclusions: The prevalence of POAF following OPCAB in the elderly is higher than in young patients. Advancing age is an independent predictor of POAF following OPCAB. abstract_id: PUBMED:27733570 Increasing atrial fibrillation prevalence in acute ischemic stroke and TIA. Objective: To evaluate trends in atrial fibrillation (AF) prevalence in acute ischemic stroke (AIS) and TIA in the United States. Methods: We used the Nationwide Inpatient Sample to retrospectively compute weighted prevalence of AF in AIS (n = 4,355,140) and TIA (n = 1,816,459) patients admitted to US hospitals from 2004 to 2013. Multivariate-adjusted models were used to evaluate the association of AF with clinical factors, mortality, length of stay, and cost. Results: From 2004 to 2013, AF prevalence increased by 22% in AIS (20%-24%) and by 38% in TIA (12%-17%). AF prevalence varied by age (AIS: 6% in 50-59 vs 37% in ≥80 years; TIA: 4% in 50-59 vs 24% in ≥80 years), sex (AIS: male 19% vs female 25%; TIA: male 15% vs female 14%), race (AIS: white 26% vs black 12%), and region (AIS: Northeast 25% vs South 20%). Advancing age, female sex, white race, high income, and large hospital size were associated with increased odds of AF in AIS. AF in AIS was a risk factor for in-hospital death (odds ratio 1.93, 95% confidence interval 1.89-1.98) but mortality in AIS with AF decreased from 11.6% to 8.3% (p < 0.001).
Compared to no AF, AF was associated with increased cost of $2,310 and length of stay 1.1 days in AIS. Conclusions: AF prevalence in AIS and TIA has continued to increase. Disparity in AF prevalence in AIS and TIA exists by patient and hospital factors. AF is associated with increased mortality in AIS. Innovative AIS preventive strategies are needed in patients with AF, especially in the elderly. abstract_id: PUBMED:29649277 Estimated prevalence of undiagnosed atrial fibrillation in the United States. Introduction: As atrial fibrillation (AF) is often asymptomatic, it may remain undiagnosed until or even after development of complications, such as stroke. Consequently the observed prevalence of AF may underestimate total disease burden. Methods: To estimate the prevalence of undiagnosed AF in the United States, we performed a retrospective cohort modeling study in working age (18-64) and elderly (≥65) people using commercial and Medicare administrative claims databases. We identified patients in years 2004-2010 with incident AF following an ischemic stroke. Using a back-calculation methodology, we estimated the prevalence of undiagnosed AF as the ratio of the number of post-stroke AF patients and the CHADS2-specific stroke probability for each patient, adjusting for age and gender composition based on United States census data. Results: The estimated prevalence of AF (diagnosed and undiagnosed) was 3,873,900 (95%CI: 3,675,200-4,702,600) elderly and 1,457,100 (95%CI: 1,218,500-1,695,800) working age adults, representing 10.0% and 0.92% of the respective populations. Of these, 698,900 were undiagnosed: 535,400 (95%CI: 331,900-804,400) elderly and 163,500 (95%CI: 17,700-400,000) working age adults, representing 1.3% and 0.09% of the respective populations. Among all undiagnosed cases, 77% had a CHADS2 score ≥1, and 56% had CHADS2 score ≥2. Conclusions: Using a back-calculation approach, we estimate that the total AF prevalence in 2009 was 5.3 million of which 0.7 million (13.1% of AF cases) were undiagnosed. Over half of the modeled population with undiagnosed AF was at moderate to high risk of stroke. Answer: Yes, it appears that the prevalence of atrial fibrillation (AF) in young stroke patients may be underestimated. A study analyzing the prevalence of AF in ischemic stroke (IS) patients up to 50 years of age found that 8.9% presented with AF, which was previously known in 6.3% of patients. This study suggests that AF may be more common than expected in young patients with IS and is associated with increased stroke severity as measured by the National Institutes of Health Stroke Scale (NIHSS) scores (PUBMED:23678962). Another study focusing on young patients with migraine found that the prevalence of IS was 1.3% and significantly higher in patients with migraine with aura (MA) compared to those without. The prevalence of AF was also higher in patients with MA. Migraine with aura was an independent predictor of IS and AF, indicating that young patients with MA are at a higher risk for both conditions (PUBMED:32689612). Furthermore, a study on heart failure with preserved ejection fraction (HFpEF) in the young found that younger patients with HFpEF displayed similar adverse cardiac remodeling compared with their older counterparts and had very poor outcomes compared with control subjects without heart failure. Obesity was highlighted as a potential major driver of HFpEF in the young (PUBMED:30565987). 
Additionally, a study on patients with mechanical heart valve thrombosis found a high prevalence (37.5%) of silent brain infarction (SBI), with atrial fibrillation being strongly associated with the presence of SBI (PUBMED:22779430). A retrospective evaluation of ischemic stroke in young adults over a 10-year period showed that traditional risk factors such as smoking and hypertrophic cardiomyopathy were still common and had significantly increased prevalence, while carotid plaques, hypertension, and atrial fibrillation had become less common (PUBMED:32430233). In a cohort study of young patients with atrial fibrillation, the stroke rate was found to be exceedingly low, suggesting that aspirin may be unnecessary for most patients in this population, especially those with a CHA2DS2-VASc score of 0 (PUBMED:24625564).
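One of the abstracts above (PUBMED:29649277) estimates undiagnosed AF by back-calculation: each patient whose AF is first recognised only after an ischemic stroke stands in for roughly 1/P(stroke | CHADS2 score) undiagnosed patients. A minimal sketch of that idea follows; the case counts and annual stroke probabilities are hypothetical illustrations, not the study's inputs.

    # Minimal sketch of the back-calculation approach described in PUBMED:29649277.
    # Each post-stroke incident-AF case is weighted by the inverse of the annual
    # stroke probability for its CHADS2 score; the sum approximates the size of
    # the undiagnosed AF pool. All numbers below are hypothetical.
    post_stroke_af_cases = {0: 120, 1: 310, 2: 420, 3: 260}            # new AF found after stroke, by CHADS2
    annual_stroke_prob = {0: 0.019, 1: 0.028, 2: 0.040, 3: 0.059}      # illustrative per-score risks

    undiagnosed = sum(
        n / annual_stroke_prob[score] for score, n in post_stroke_af_cases.items()
    )
    print(f"estimated undiagnosed AF patients: {undiagnosed:,.0f}")

In the published analysis this weighting was further adjusted for age and sex composition using census data before scaling to national prevalence.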
Instruction: Is reduced dermatoglyphic a-b ridge count a reliable marker of developmental impairment in schizophrenia? Abstracts: abstract_id: PUBMED:11439235 Is reduced dermatoglyphic a-b ridge count a reliable marker of developmental impairment in schizophrenia? Background: Finger and hand prints are formed during the late first and second trimester of foetal development, after which they remain unchanged. Their expression may be influenced by both genetic and environmental factors. Some studies have suggested that a reduced total finger ridge count (TFRC) and, in particular, a reduced total a-b ridge count (TABRC), may be associated with schizophrenia. Aim: To study these two variables in a large, ethnically homogeneous sample and to compare our findings with those of other recent studies. Method: Finger and hand prints of 150 people with DSM-III-R schizophrenia were compared with those of 92 healthy controls. Results: Patients had a reduced mean TABRC (P = 0.03) compared with controls. There was a significant (P=0.02) linear trend for lower TABRC and increasing incidence of schizophrenia (OR linear trend = 1.3; 95% CI 1.1-1.7), implying a continuous increase in the risk for schizophrenia with reduction in TABRC. No significant difference between groups was observed for TFRC. Conclusion: These results provide further evidence that dermatoglyphic abnormalities exist in at least some patients with schizophrenia and that the a-b ridge count may be a marker of disruption, probably environmental, that occurs when the developing brain may also be particularly vulnerable to such insult. These findings support the concept that some cases of schizophrenia may be due to adverse intrauterine events. abstract_id: PUBMED:8827858 Dermatoglyphic a-b ridge count as a possible marker for developmental disturbance in schizophrenia: replication in two samples. The aim of this study was to conduct an epidemiological analysis of quantitative dermatoglyphic traits as a marker of prenatal disturbance during the second trimester of life in schizophrenic patients. TFRC (Total Finger Ridge Count) and TABRC (Total a-b Ridge Count) were studied in a sample of 38 schizophrenic patients and 69 healthy individuals. A significant decrease of the a-b ridge count was found in patients compared to controls, with a significant linear trend across the population distribution (OR linear trend = 1.6; 95% CI = 1.0-2.4), indicating that the effect was not confined to a subgroup of cases with values in the lowest range. This finding was replicated in a second, larger sample (OR linear trend = 1.3; 95% CI = 1.0-1.8). The suggestion that a-b ridge count is associated with genetic risk for schizophrenia needs to be investigated further. TFRC did not distinguish between patients and controls. The a-b ridge count may be a continuous risk factor for later schizophrenia, pointing towards a disturbance occurring during the second trimester of prenatal life, a period of critical CNS growth. abstract_id: PUBMED:23116885 The presentation of dermatoglyphic abnormalities in schizophrenia: a meta-analytic review. Within a neurodevelopmental model of schizophrenia, prenatal developmental deviations are implicated as early signs of increased risk for future illness. External markers of central nervous system maldevelopment may provide information regarding the nature and timing of prenatal disruptions among individuals with schizophrenia. One such marker is dermatoglyphic abnormalities (DAs) or unusual epidermal ridge patterns.
Studies targeting DAs as a potential sign of early developmental disruption have yielded mixed results with regard to the strength of the association between DAs and schizophrenia. The current study aimed to resolve these inconsistencies by conducting a meta-analysis examining the six most commonly cited dermatoglyphic features among individuals with diagnoses of schizophrenia. Twenty-two studies published between 1968 and 2012 were included. Results indicated significant but small effects for total finger ridge count and total A-B ridge count, with lower counts among individuals with schizophrenia relative to controls. Other DAs examined in the current meta-analysis did not yield significant effects. Total finger ridge count and total A-B ridge count appear to yield the most reliable dermatoglyphic differences between individuals with and without schizophrenia. abstract_id: PUBMED:9657417 Congenital dermatoglyphic malformations in severe bipolar disorder. Dermatoglyphic alterations may be the result of early prenatal disturbances thought to be implicated in the aetiology of psychiatric illness. In order to test this hypothesis in the particular case of bipolar disorder, we assessed two congenital dermatoglyphic malformations (ridge dissociation (RD) and abnormal features (AF)) and two metric dermatoglyphic traits (total finger ridge count (TFRC) and total a-b ridge count (TABRC)) in a sample of 118 patients with chronic DSM-III-R bipolar illness, and 216 healthy controls. Bipolar cases showed a significant excess of RD and AF (OR = 2.80; 95% CI: 2.31-3.38) when compared with controls. In the cases, the presence of anomalies was associated with earlier age of onset. No differences were found for TFRC and TABRC. No associations were found with sex or familial morbid risk of psychiatric disorders. Our findings add further weight to the suggestion that early developmental disruption is a risk factor for later bipolar disorder. abstract_id: PUBMED:11011835 Association between cerebral structural abnormalities and dermatoglyphic ridge counts in schizophrenia. Dermatoglyphic ridge counts (1) reflect ontogenic processes during the second trimester of pregnancy and (2) can be influenced by some of the factors that also affect cerebral development. Therefore, the demonstration of an association between dermatoglyphic and cerebral structural measures in patients with schizophrenia would give credence to the view that the structural brain abnormalities associated with this disorder have their origin early in development. Twenty-eight male subjects with schizophrenia and 19 male controls underwent magnetic resonance imaging (MRI) and dermatoglyphic analysis. The pattern of association between the ab-ridge count and nine MRI features was dissimilar in cases and controls for two measures. Associations between dermatoglyphic features, on the one hand, and the frontal CSF (r = .54, P = .004) and fourth ventricular volume (r = .38, P = .05), on the other, were larger in the cases versus the controls (test for interaction, P = .08 and P = .06, respectively). These findings, while in need of replication, support the view that the cerebral structural abnormalities found in patients with schizophrenia are the result of an early pathologic process affecting the development of fetal ectodermal structures. abstract_id: PUBMED:14610723 Nonreplication of the association between ab-ridge count and cerebral structural measures in schizophrenia. 
The origins of cerebral abnormalities in psychotic patients remain unknown. Dermatoglyphics are suitable markers of prenatal injury due to their fetal ontogenesis and their susceptibility to some of the factors that also affect cerebral development. In a previous study, positive associations between brain volumetric measures and a dermatoglyphic marker, the ab-ridge count, were reported. The present study is an attempt to replicate that finding in an independent sample. Magnetic resonance imaging (MRI) scans and dermatoglyphic measures were available for 29 schizophrenia patients (Research Diagnostic Criteria [RDC] criteria) and 26 unrelated healthy controls. The images were processed using an automated procedure, yielding volumes of total grey matter, white matter, cerebrospinal fluid (CSF), and total brain volume. The ab-ridge count was not positively associated with brain volumes in either patients or controls. The present findings do not support the hypothesis that the changes in brain volume seen in patients with schizophrenia are of prenatal origin. abstract_id: PUBMED:3809329 Fluctuating dermatoglyphic asymmetry and the genetics of liability to schizophrenia. Schizophrenic subjects were compared to normal and psychiatric control subjects for degree of fluctuating asymmetry in two dermatoglyphic traits, a-b ridge count and fingertip pattern. The schizophrenic group exhibited significantly greater fluctuating asymmetry than either control group. Furthermore, indicators of disease severity such as early onset and declining course of illness correlated with degree of asymmetry. Both of these observations are expected if a disorder has a polygenic basis, since fluctuating asymmetry is a marker of polygenic inheritance. abstract_id: PUBMED:26385539 Dermatoglyphic correlates of hippocampus volume: Evaluation of aberrant neurodevelopmental markers in antipsychotic-naïve schizophrenia. Schizophrenia, a disorder of aberrant neurodevelopment, is marked by abnormalities in brain structure and dermatoglyphic traits. However, the link between these two (i.e., dermatoglyphic parameters and brain structure), which share an ectodermal origin and a common developmental window, has not been explored extensively. The current study examined dermatoglyphic correlates of hippocampal volume in antipsychotic-naïve schizophrenia patients in comparison with matched healthy controls. Ridge counts and asymmetry measures for palmar inter-digital areas (a-b, b-c, c-d) were obtained using high resolution digital scans of palms from 89 schizophrenia patients [M:F=48:41] and 48 healthy controls [M:F=30:18]. Brain scans were obtained for a subset of subjects including 26 antipsychotic-naïve patients [M:F=13:13] and 29 healthy controls [M:F=19:10] using 3 T-MRI. Hippocampal volume and palmar ridge counts were measured by blinded raters with good inter-rater reliability using valid methods. Directional asymmetry (DA) of b-c and bilateral hippocampal volume were significantly lower in patients than controls. A significant positive correlation was found between the DA and ridge count of b-c and bilateral anterior hippocampal volume. The study demonstrates the utility of dermatoglyphic markers in identifying structural changes in the brain which may form the basis for neurodevelopmental pathogenesis in schizophrenia. abstract_id: PUBMED:1571744 Dermatoglyphic evidence of fluctuating asymmetry in schizophrenia. Fluctuating asymmetry provides a measure of an organism's capacity to buffer adverse factors that could disturb its development.
It is estimated from the differences between theoretically identical right- and left-sided structures. Dermatoglyphic fluctuating asymmetry has been recently used to investigate developmental disorders. Fingerprints and palm prints of schizophrenic patients, which had been the subjects of an earlier report of conventional dermatoglyphic trait frequencies, were reanalysed to determine their level of fluctuating asymmetry. A review of the diagnostic protocols and clinical records used in the original study indicated that most of the 482 subjects would have met DSM-III-R criteria for schizophrenia. The schizophrenic sample had significantly higher levels of fluctuating asymmetry on four dermatoglyphic traits, the finger-ridge counts, fingerprint patterns, the palmar atd angles and palmar a-b ridge counts, than controls. This finding supports the results of two earlier studies, and its relevance to the roles of genetics, foetal insults, and developmental anomalies of the brain in the aetiology of schizophrenia is discussed. abstract_id: PUBMED:15158923 Neurological soft signs and dermatoglyphic anomalies in twins with schizophrenia. Schizophrenia is associated with altered neural development. We assessed neurological soft signs (NSS) and dermatoglyphic anomalies (total a-b ridge count (TABRC) and total finger ridge count) in 15 pairs of twins concordant and discordant for schizophrenia. Within-pair differences in both NSS and TABRC scores were significantly greater in discordant compared to concordant monozygotic pairs. There was no significant difference in NSS and TABRC scores between subjects with schizophrenia and their co-twins without the illness. However, monozygotic discordant twins with schizophrenia had higher ABRCs on their right hands compared to their co-twins without the illness. These findings suggest that an unidentified environmental event acting between weeks 6 and 15 of gestation affects the development of monozygotic twins who go on to develop schizophrenia but does not have a corresponding effect on their co-twins who do not develop the illness. The effect of such an event on dermatoglyphic profiles appears lateralised to the right hand in affected twins. Answer: The evidence from multiple studies suggests that reduced dermatoglyphic a-b ridge count (TABRC) may be a reliable marker of developmental impairment in schizophrenia. Several studies have found that individuals with schizophrenia have a significantly reduced mean TABRC compared to healthy controls, with a continuous increase in the risk for schizophrenia associated with a reduction in TABRC (PUBMED:11439235; PUBMED:8827858). A meta-analytic review also indicated significant but small effects for TABRC, with lower counts among individuals with schizophrenia (PUBMED:23116885). Furthermore, the presence of dermatoglyphic abnormalities (DAs), including reduced TABRC, has been associated with early developmental disruptions that may be risk factors for later schizophrenia (PUBMED:9657417). Associations between TABRC and cerebral structural measures, such as frontal cerebrospinal fluid (CSF) and fourth ventricular volume, have been observed in schizophrenia patients, supporting the view that cerebral structural abnormalities in these patients may originate early in development (PUBMED:11011835). However, not all studies have found a positive association between TABRC and cerebral structural measures in schizophrenia, with some failing to replicate earlier findings (PUBMED:14610723). 
Additionally, while some studies have reported a significant excess of dermatoglyphic malformations in bipolar disorder, no differences were found for TABRC in this context (PUBMED:9657417). The evidence also suggests that fluctuating dermatoglyphic asymmetry, which may be a marker of polygenic inheritance, is greater in schizophrenic subjects compared to control groups, and correlates with disease severity (PUBMED:3809329; PUBMED:1571744). Moreover, dermatoglyphic markers have been used to identify structural changes in the brain, such as hippocampal volume, which may contribute to the neurodevelopmental pathogenesis in schizophrenia (PUBMED:26385539). In summary, while there is substantial evidence supporting reduced TABRC as a marker of developmental impairment in schizophrenia, some studies have not found this association, and further research is needed to fully understand the relationship between dermatoglyphic features and schizophrenia.
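Several of the studies cited above quantify fluctuating asymmetry from right-left differences in dermatoglyphic traits such as the a-b ridge count. A number of indices are in use (mean absolute difference, size-corrected difference, variance of signed differences); the sketch below computes the simplest of these, the mean |R - L|, on made-up ridge counts.

    # Simple sketch of a fluctuating-asymmetry (FA) index for the a-b ridge count.
    # FA is derived from right-left differences in a trait that is symmetric on
    # average; the mean absolute difference is shown here. Counts are made up.
    right_abrc = [42, 39, 45, 37, 41]   # right-hand a-b ridge counts, one per subject
    left_abrc = [40, 43, 44, 33, 46]    # left-hand a-b ridge counts

    signed_diffs = [r - l for r, l in zip(right_abrc, left_abrc)]
    fa_index = sum(abs(d) for d in signed_diffs) / len(signed_diffs)
    print(f"mean |R-L| = {fa_index:.1f} ridges")   # higher values indicate greater FA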
Instruction: Is paraoxonase 1 a marker of cardiovascular risk in youth with type 1 diabetes? Abstracts: abstract_id: PUBMED:25769648 Is paraoxonase 1 a marker of cardiovascular risk in youth with type 1 diabetes? (Study about 109 cases) Objectives: We propose to verify whether paraoxonase 1 (PON1) activity may be a marker of cardiovascular risk in a young Tunisian population with type 1 diabetes (T1D). Methods: PON1 activity was measured by a kinetic method using paraoxon as substrate. The other parameters were determined by automated methods. Results: One hundred and nine children and adolescents with T1D and 97 healthy subjects were involved in this study. PON1 activity and PON1/HDL-cholesterol ratio were significantly decreased in diabetics (303 ± 174 vs. 372 ± 180 U/L and 221 ± 139 vs. 298 ± 201 U/mmol, P=0.006, P=0.002, respectively) compared to controls. A significant increase in total cholesterol, LDL-c and microalbuminuria was observed in diabetics compared to controls. PON1 activity was decreased by 9.5% in patients with diabetes duration ≥ 6 years, by 28.4% for those with fasting glycemia ≥ 7 mmol/L (P < 0.001), by 14% in those with HbA1c ≥ 8% and by 12.3% for diabetics with dyslipidemia. PON1 activity is reduced when the number of cardiovascular risk factors increases (P < 0.001). Conclusion: PON1 seems to be associated with cardiovascular risk markers in T1D. This result remains to be confirmed. Nevertheless, improving PON1 activity could be a significant target for reducing cardiovascular risk. abstract_id: PUBMED:29084567 The role and function of HDL in patients with diabetes mellitus and the related cardiovascular risk. Background: Diabetes mellitus (DM) is a major public health problem whose prevalence is constantly rising, particularly in low- and middle-income countries. Both diabetes mellitus types (DMT1 and DMT2) are associated with high risk of developing chronic complications, such as retinopathy, nephropathy, neuropathy, endothelial dysfunction, and atherosclerosis. Methods: This is a review of available articles concerning HDL subfractions profile in diabetes mellitus and the related cardiovascular risk. In this review, HDL dysfunction in diabetes, the impact of HDL alterations on the risk of diabetes development, and the association between disturbed HDL particles in DM and cardiovascular risk are discussed. Results: Changes in the amount of circulating lipids, including triglycerides and LDL cholesterol as well as HDL, are also frequent in the course of DMT1 and DMT2. In the normal state, HDL exerts various antiatherogenic properties, including reverse cholesterol transport, antioxidative and anti-inflammatory capacities. However, it has been suggested that in pathological states HDL becomes "dysfunctional", which means that the relative composition of lipids and proteins in HDL, as well as enzymatic activities associated with HDL, such as paraoxonase 1 (PON1) and lipoprotein-associated phospholipase A2 (Lp-PLA2), are altered. HDL properties are compromised in patients with diabetes mellitus (DM), due to oxidative modification and glycation of the HDL protein as well as the transformation of the HDL proteome into a proinflammatory protein. Numerous studies confirm that the ability of HDL to suppress inflammatory signals is significantly reduced in this group of patients. However, the exact underlying mechanisms remain to be unravelled in vivo.
Conclusions: The understanding of the pathological mechanisms underlying HDL dysfunction may enable the development of therapies targeted at specific subpopulations and focused on diminishing cardiovascular risk. abstract_id: PUBMED:11918623 Paraoxonase gene cluster is a genetic marker for early microvascular complications in type 1 diabetes. Background: Paraoxonase is a serum enzyme which prevents oxidation of low-density lipoprotein (LDL) by hydrolyzing lipid peroxides. Two polymorphisms in the PON1 gene have been associated with cardiovascular and microvascular diseases in both diabetic and non-diabetic patients. Aims: The current project was designed to investigate the association between the polymorphisms of two PON genes and diabetic microvascular diseases (retinopathy and microalbuminuria) and any potential linkage between Met54Leu of the PON1 gene and Cys311Ser of the PON2 gene. Methods: Diabetic retinopathy and albumin excretion rate were assessed in 372 adolescents with Type 1 diabetes who were genotyped for the two polymorphisms. Results: We confirmed the increased susceptibility to diabetic retinopathy for the Leu/Leu genotype (odds ratio (OR) 3.34 (confidence interval (CI) 1.95, 5.75), P < 0.0001). The Ser/Ser genotype was significantly more common in those patients with microalbuminuria (albumin excretion rate ≥ 20 microg/min) compared with those with an albumin excretion rate < 20 microg/min (OR 4.72 (CI 2.65, 8.41), P < 0.0001). The Ser311 of PON2 was in strong linkage disequilibrium with Leu54 of the PON1 gene (Delta = 23 x 10^4, P < 0.001). The delta value was higher for those without complications (28 x 10^4, P < 0.001) compared with those with complications (15.5 x 10^4, P < 0.001). Conclusions: This study supports the hypothesis that diabetic microangiopathy is genetically heterogeneous. PON1 Leu/Leu increases the risk for retinopathy and PON2 Ser/Ser increases the risk for microalbuminuria. abstract_id: PUBMED:16140307 High C-reactive protein and low paraoxonase1 in diabetes as risk factors for coronary heart disease. Background: Paraoxonase1 (PON1) is an anti-inflammatory enzyme located on HDL, which protects against the development of atherosclerosis. C-reactive protein (CRP) is a marker of the inflammatory response in CHD. We hypothesised that low PON1 and high CRP found in CHD may be important markers of CHD and that the CRP:PON1 ratio may be an index of the risk of developing atherosclerosis. We have, therefore, compared the levels of PON1 and CRP between control subjects, those with CHD and no diabetes, type 1 diabetes and type 2 diabetes. Methods And Results: PON1 activity differed between the populations in the order: controls > type 1 diabetes > type 2 diabetes > CHD with no diabetes (P < 0.001). CRP concentration also differed between the populations in the order: controls < type 1 diabetes < type 2 diabetes < CHD with no diabetes (P < 0.001). The CRP:PON1 ratio followed the same trend as the CRP concentration (P < 0.001). Both CRP and the CRP:PON1 ratio were associated with the presence of CHD. In the control population only, PON1 was a determinant of CRP concentration. Amongst the diabetics, people with CHD had higher levels of CRP (P < 0.001), and in comparing the control group with the CHD group, the CHD group had a higher level of CRP (P < 0.001).
Conclusions: Higher levels of CRP seem to be generally associated with low levels of PON1 activity, providing a mechanistic link between inflammation and the development of atherosclerosis. However, the relationship between PON1, CRP and atherosclerosis, and the usefulness of the PON1:CRP ratio as a risk factor for CHD, require further evaluation. abstract_id: PUBMED:38247481 Antioxidant and Anti-Inflammatory Functions of High-Density Lipoprotein in Type 1 and Type 2 Diabetes. (1) Background: high-density lipoproteins (HDLs) exhibit antioxidant and anti-inflammatory properties that play an important role in preventing the development of atherosclerotic lesions and possibly also diabetes. In turn, both type 1 diabetes (T1D) and type 2 diabetes (T2D) are susceptible to having deleterious effects on these HDL functions. The objectives of the present review are to expound upon the antioxidant and anti-inflammatory functions of HDLs in both types of diabetes in the setting of atherosclerotic cardiovascular diseases and to discuss the contributions of these HDL functions to the onset of diabetes. (2) Methods: this narrative review is based on the literature available from the PubMed database. (3) Results: several antioxidant functions of HDLs, such as paraoxonase-1 activity, are compromised in T2D, thereby facilitating the pro-atherogenic effects of oxidized low-density lipoproteins. In addition, HDLs exhibit diminished ability to inhibit pro-inflammatory pathways in the vessels of individuals with T2D. Although the literature is less extensive, recent evidence suggests defective antiatherogenic properties of HDL particles in T1D. Lastly, substantial evidence indicates that HDLs play a role in the onset of diabetes by modulating glucose metabolism. (4) Conclusions and perspectives: impaired HDL antioxidant and anti-inflammatory functions present intriguing targets for mitigating cardiovascular risk in individuals with diabetes. Further investigations are needed to clarify the influence of glycaemic control and nephropathy on HDL functionality in patients with T1D. Furthermore, exploring the effects on HDL functionality of novel antidiabetic drugs used in the management of T2D may provide intriguing insights for future research. abstract_id: PUBMED:26884296 Paraoxonase 1 polymorphisms (L55M and Q192R) as a genetic marker of diabetic nephropathy in youth with type 1 diabetes. Introduction: Paraoxonase 1 (PON1) polymorphisms have been widely implicated in diabetes complications. The aim of the study is to evaluate the effects of PON1 polymorphisms (L55M and Q192R) on diabetic nephropathy (DN). Material And Methods: The study involved 116 children and adolescents with type 1 diabetes (T1D) and 91 healthy subjects. Albumin excretion rate (AER) was determined by immunoturbidimetry. PON1 activity was measured by a spectrophotometric method, and genotyping of the PON1 gene was assessed by multiplex PCR followed by RFLP. Results: PON1 activity was inversely correlated with AER (r = -0.245, p = 0.008). PON1 activity was significantly lower (p = 0.037) in patients with nephropathy than in those without (162 [57-618] vs. 316 [37-788] IU/L, respectively). The distribution of AER was, for the L55M polymorphism, MM > LM > LL (p = 0.002), and for the Q192R polymorphism, QQ > QR > RR (p < 0.001). The opposite distribution was noted for PON1 activity (p < 0.001).
The LMQQ and MMQQ haplotypes seem to increase AER (p = 0.004 and p = 0.003, respectively) and to reduce PON1 activity (p = 0.011 and p = 0.052, respectively) in youths with T1D. However, the LLRR haplotype seems to have the opposite effect. Conclusions: This study demonstrated that the PON1 polymorphisms L55M and Q192R seem to be genetic markers involved in the development of DN in T1D. (Endokrynol Pol 2017; 68 (1): 35-41). abstract_id: PUBMED:24793345 Can paraoxonase 1 polymorphisms (L55M and Q192R) protect children with type 1 diabetes against lipid abnormalities? Background: Only a few studies have focused on the possible modulatory role of paraoxonase 1 (PON1) polymorphisms in lipid profiles, especially in children and adolescents with type 1 diabetes (T1D). Objective: We propose to study the association between PON1 polymorphisms (PON1-55 and PON1-192) and the lipid profile in a young Tunisian population with T1D. Methods: The study compared 122 children and adolescents with T1D with 97 controls. Genomic DNA was collected from 116 patients and 91 controls. Lipid parameters were determined by automated methods. PON1 activity was measured by a spectrophotometric method and genotyping of the PON1 gene was assessed by multiplex polymerase chain reaction followed by restriction fragment-length polymorphism. Results: A significant increase in total cholesterol, high-density lipoprotein-cholesterol, low-density lipoprotein-cholesterol (LDL-C), apolipoprotein B (ApoB), and lipoprotein (a) (Lp(a)) and a significant decrease in apolipoprotein A1 (ApoA1), the ApoA1/ApoB ratio, and the PON1 activity/HDL-C ratio were observed in children with T1D compared with controls. In the LLQR haplotype, the group with diabetes showed significantly higher values of total cholesterol, LDL-C, ApoB, Lp(a), and the ApoA1/ApoB ratio compared with the control group. Those with diabetes with the LLQQ haplotype showed a significant decrease in LDL-C and Lp(a) compared with controls (P < 0.0001). Conclusion: PON1 polymorphisms (PON1-55 and PON1-192) seem to be involved in altering the lipid profile in T1D. The LLQR haplotype provided an atherogenic lipid profile in children with T1D compared with controls. The LLQQ haplotype seemed to have a protective effect against the increase in LDL-C and Lp(a), which are heavily involved in the development of cardiovascular diseases. abstract_id: PUBMED:27329016 Shorter telomeres in adults with Type 1 diabetes correlate with diabetes duration, but only weakly with vascular function and risk factors. Objective: To determine if white blood cell (WBC) telomeres are shorter in Type 1 diabetes (T1D) than in subjects without diabetes (non-DB), and shorter in T1D subjects with vs. without vascular complications; and to determine associations with vascular biomarkers. Research Design And Methods: WBC relative telomere length (RTL) was determined by quantitative PCR in a cross-sectional study of 140 non-DB and 199 T1D adults, including 128 subjects without vascular complications (T1DNoCx) and 71 subjects with vascular complications (T1DCx). Relationships of RTL with age, T1D duration, arterial elasticity, pulse pressure and vascular risk factors were determined. Results: RTL did not differ by gender within the T1D and non-DB groups. Age-adjusted RTL was shorter in T1D vs. non-DB subjects (1.48±0.03 AU vs. 1.64±0.04 AU, p=0.002), but did not differ by T1D complication status (T1DNoCX 1.50±0.04 vs. T1DCX 1.46±0.05, p=0.50), nor correlate with arterial elasticity.
Univariate analysis in T1D showed that RTL correlated inversely with age (r=-0.27, p=0.0001), T1D duration (r=-0.16, p=0.03), and pulse pressure (r=-0.15, p=0.04), but not with HbA1c, BP, renal function (serum creatinine, ACR, eGFR), lipids, insulin sensitivity, inflammation (CRP, CAMs) or oxidative stress (OxLDL, OxLDL/LDL-C, MPO, PON-1). Multiple regression analysis showed that the independent determinants of RTL were age and T1D presence (r=0.29, p<0.0001). Conclusions: In this cross-sectional study telomeres were shorter in T1D. RTL correlated inversely with T1D duration, but did not differ by complication status and correlated only weakly with pulse pressure and vascular risk factors. Only age and T1D were independent determinants of RTL. Longitudinal studies are merited. abstract_id: PUBMED:27506748 Functional and proteomic alterations of plasma high density lipoproteins in type 1 diabetes mellitus. Objective: Higher HDL-cholesterol (HDL-C) is linked to lower cardiovascular risk, but individuals with type 1 diabetes mellitus (T1DM) with normal or high HDL-C have higher rates of cardiovascular events compared to age-matched non-diabetic controls (ND). We determined whether altered HDL functions, despite normal HDL-C concentration, may explain the increased cardiovascular risk in T1DM individuals. We also determined whether irreversible posttranslational modifications (PTMs) of HDL-bound proteins occur in T1DM individuals with altered HDL functions. Methods: T1DM with poor glycemic control (T1D-PC, HbA1c≥8.5%, n=15) and T1DM with good glycemic control (T1D-GC, HbA1c≤6.6%, n=15) were compared with equal numbers of NDs, ND-PC and ND-GC respectively, matched for age, sex and body mass index (BMI). We measured the cholesterol efflux capacity (CEC) of HDL in serum using J774 macrophages, the antioxidant function of HDL as the ability to reverse the oxidative damage of LDL, and PON1 activity using a commercially available kit. For proteomic analysis, HDL was isolated by density gradient ultracentrifugation and analyzed by mass spectrometry using a shotgun proteomics method. Results: Plasma HDL-C concentrations in both T1DM groups were similar to those of their ND controls. However, the CEC (%) of T1D-PC (16.9±0.8) and T1D-GC (17.1±1) was lower than that of their respective ND (17.9±1, p=0.01 and 18.2±1.4, p=0.02). HDL antioxidative function was also lower (p<0.05). The abundance of oxidative PTMs of apolipoproteins involved in the CEC and antioxidative functions of HDL was higher in T1D-PC (ApoA4, p=0.041) and T1D-GC (ApoA4, p=0.025 and ApoE, p=0.041) in comparison with ND. Both T1D-PC and T1D-GC groups had a higher abundance of Amadori modification of ApoD (p=0.002 and p=0.041, respectively), and deamidation modification of ApoA4 was higher in T1D-PC (p=0.025). Conclusions: Compromised functions of HDL particles in T1DM individuals, irrespective of glycemic control, could be explained by a higher abundance of irreversible PTMs of HDL proteins. These results lend mechanistic support to the hypothesis that HDL quality rather than quantity determines HDL function in T1DM and suggest that measurements of concentrations of HbA1c and HDL-C are not sufficient as biomarkers of effective treatment to lower cardiovascular risk in T1DM individuals. abstract_id: PUBMED:31092010 Albuminuria, the High-Density Lipoprotein Proteome, and Coronary Artery Calcification in Type 1 Diabetes Mellitus. Objective- Albuminuria is an important risk factor for cardiovascular disease in diabetes mellitus.
We determined whether albuminuria associates with alterations in the proteome of HDL (high-density lipoprotein) of subjects with type 1 diabetes mellitus and whether those alterations associated with coronary artery calcification. Approach and Results- In a cross-sectional study of 191 subjects enrolled in the DCCT (Diabetes Control and Complications Trial)/EDIC study (Epidemiology of Diabetes Interventions and Complications), we used isotope dilution tandem mass spectrometry to quantify 46 proteins in HDL. Stringent statistical analysis demonstrated that 8 proteins associated with albuminuria. Two of those proteins, AMBP (α1-microglobulin/bikunin precursor) and PTGDS (prostaglandin-H2 D-isomerase), strongly and positively associated with the albumin excretion rate (P < 10^-6). Furthermore, PON (paraoxonase) 1 and PON3 levels in HDL strongly and negatively associated with the presence of coronary artery calcium, with odds ratios per 1-SD difference of 0.63 (95% CI, 0.43-0.92; P=0.018) for PON1 and 0.59 (95% CI, 0.40-0.87; P=0.0079) for PON3. Only 1 protein, PON1, associated with both albumin excretion rate and coronary artery calcification. Conclusions- Our observations indicate that the HDL proteome is remodeled in type 1 diabetes mellitus subjects with albuminuria. Moreover, low concentrations of the antiatherosclerotic protein PON1 in HDL associated with both albuminuria and coronary artery calcification, raising the possibility that alterations in HDL protein cargo mediate, in part, the known association of albuminuria with cardiovascular risk in type 1 diabetes mellitus. Visual Overview- An online visual overview is available for this article. Answer: Yes, paraoxonase 1 (PON1) activity may be a marker of cardiovascular risk in youth with type 1 diabetes (T1D). A study involving 109 children and adolescents with T1D found that PON1 activity and the PON1/HDL-cholesterol ratio were significantly decreased in diabetics compared to controls. Additionally, PON1 activity was reduced in patients with longer diabetes duration, higher fasting glycemia, higher HbA1c, and those with dyslipidemia. The study concluded that PON1 is associated with cardiovascular risk markers in T1D, suggesting that improving PON1 activity could be a target for reducing cardiovascular risk (PUBMED:25769648). Furthermore, other studies have supported the role of PON1 as a genetic marker for early microvascular complications in T1D, such as retinopathy and microalbuminuria (PUBMED:11918623). Polymorphisms in the PON1 gene have been associated with diabetic nephropathy in youth with T1D, indicating that PON1 polymorphisms could be genetic markers involved in the development of diabetic nephropathy (PUBMED:26884296). Additionally, PON1 polymorphisms have been implicated in altering the lipid profile in T1D, with certain haplotypes providing an atherogenic lipid profile and others having a protective effect against increases in LDL-C and Lp(a), which are heavily involved in the development of cardiovascular diseases (PUBMED:24793345). In summary, PON1 activity and its genetic polymorphisms are associated with cardiovascular risk factors and complications in youth with T1D, making it a potential marker for cardiovascular risk in this population.
Instruction: Ultrasound examination of the breast with 7.5 MHz and 13 MHz-transducers: scope for improving diagnostic accuracy in complementary breast diagnostics? Abstracts: abstract_id: PUBMED:15948057 Ultrasound examination of the breast with 7.5 MHz and 13 MHz-transducers: scope for improving diagnostic accuracy in complementary breast diagnostics? Aim: Complementary diagnostic methods in the early diagnosis of breast cancer are used to increase diagnostic accuracy and minimize unnecessary invasive diagnostic procedures. The aim of this prospective, open multicenter clinical study was to define the value of high-frequency breast ultrasound with 13 MHz transducers compared to standard breast ultrasound with 7.5 MHz. Method: Data from 810 female patients, aged 45 to 60 years, with 819 suspicious breast lesions were evaluated by four participating centres between October 1996 and December 1997. Standardised breast ultrasound was performed uniformly using an AU4 IDEA diagnostic ultrasound system by Esaote-Biomedica, in addition to a standardised procedure of clinical examination and standard two-view mammography. Analysis of all acquired data and the correlating histopathological findings was done by means of descriptive statistics on the basis of an Access data file (Version 2.0). Results: The histopathological evaluation showed 435 benign and 384 malignant findings. Overall sensitivity and specificity of the clinical examination were 71.1 % and 88.9 %, and for mammography 84.7 % and 76.5 %, respectively. Standard ultrasound with 7.5 MHz reached a sensitivity of 82.6 % and a specificity of 80.8 %; high-frequency ultrasound with 13 MHz came to 87.2 % and 78.4 %, respectively. Regarding tumour size, mammography gave the highest sensitivity in the detection of pre-invasive cancers (DCIS). High-frequency breast ultrasound (13 MHz) proved to have a higher diagnostic accuracy compared to standard breast ultrasound (7.5 MHz) regardless of tumour size. Sensitivity was especially improved in the case of small invasive tumours (pT1a), with 78 % versus 56 %, respectively. Conclusions: We conclude that high-frequency ultrasound is a valuable additive tool, especially in the diagnosis of small tumours, improving diagnostic safety and reducing unnecessary invasive diagnostic procedures. abstract_id: PUBMED:8679727 Ultrasound examination of the female breast: comparison of 7.5 and 13 MHz Purpose: We investigated whether high-resolution ultrasound (13 MHz-scanner) shows smaller lesions and better differentiation than the 7.5 MHz-scanner. Method: Prospectively, sonography was performed on forty-seven patients with a 7.5 MHz-scanner as well as with a 13 MHz-scanner in identical slices. Results: We could obtain markedly more exact diagnoses by using the high-resolution scanner. In two patients, additional satellites of the primary tumor could be found. In four patients, unclear sonographic findings could be identified as cysts. A disadvantage of the 13 MHz-scanner is that mastopathy and benign lesions are more difficult to diagnose. With the high resolution more details can be seen, although the inhomogeneity as well as the irregularity of the margins are also seen more clearly and, therefore, the physician has to reassess his point of view. To optimize the quality of the pictures made by high-resolution ultrasound, it is necessary to adjust the system settings, which is sometimes quite difficult.
Conclusion: The recognition of the smallest lesions and the reliable depiction of cysts indicate that the 13 MHz-scanner is a good additive diagnostic tool alongside the 7.5 MHz-scanner. Therefore, this method may become important for diagnosing multicentricity within carcinomas. abstract_id: PUBMED:30046532 Comparison of 25 MHz and 50 MHz ultrasound biomicroscopy for imaging of the lens and its related diseases. Aim: To compare the results of 25 MHz and 50 MHz ultrasound biomicroscopy (UBM) regarding the image characteristics of the lens and its related diseases and to discuss the application value of 25 MHz UBM in ophthalmology. Methods: A total of 302 patients (455 eyes) were included in this study from November 2014 to May 2015. Patient ages ranged from 5 to 89y (mean±SD: 61.0±17.7y). Different cross-sectional images of the lens were collected to compare and analyze the image characteristics and anterior segment parameters using 25 MHz and 50 MHz UBM in axial and longitudinal scanning modes, respectively. SPSS 19.0 for Windows, paired t-tests and B&A plot analysis were used for data analysis, and a value of P < 0.05 was considered statistically significant. Results: The 25 MHz UBM images displayed the lens shape more clearly than 50 MHz UBM images. Particularly for cataracts, the whole opacity of the lens was shown by 25 MHz UBM, but 50 MHz UBM only showed part of the lens. The means of the anterior segment parameters obtained using 25 MHz and 50 MHz UBM were as follows: central corneal thickness: 0.55±0.03 and 0.51±0.04 mm, respectively; central anterior chamber depth: 2.48±0.54 and 2.56±0.56 mm, respectively; and central lens thickness: 4.26±0.62 and 4.15±0.56 mm, respectively. A statistically significant difference was found between the results obtained with 25 MHz UBM and those obtained with 50 MHz UBM. The two devices showed good agreement in measuring the anterior segment parameters. Conclusion: The 25 MHz UBM had an obvious advantage in showing the lens shape. It can provide reliable imaging of the lens and its related diseases and has a high application value for ophthalmology. abstract_id: PUBMED:32363545 Comparison of radial and meander-like breast ultrasound with respect to diagnostic accuracy and examination time. Purpose: To prospectively compare the diagnostic accuracy of radial breast ultrasound (r-US) to that of conventional meander-like breast ultrasound (m-US), patients of a consecutive, unselected, mixed collective were examined by both scanning methods. Methods: Out of 1948 dual examinations, 150 revealed suspicious lesions resulting in 168 biopsies taken from 148 patients. Histology confirmed breast cancers in 36 cases. Sensitivity, specificity, accuracy, PPV, and NPV were calculated for r-US and m-US. The examination times were recorded. Results: For m-US and r-US, sensitivity (both 88.9%), specificity (86.4% versus 89.4%), accuracy (86.9% versus 89.3%), PPV (64.0% versus 69.6%), NPV (both 98.3%), false-negative rate (both 5.6%), and rate of cancer missed by one method (both 5.6%) were similar. The mean examination time for r-US (14.8 min) was significantly (p < 0.01) shorter than for m-US (22.6 min). Conclusion: Because the diagnostic accuracies of r-US and m-US are comparable, r-US can be considered an alternative to m-US in routine breast US, with the added benefit of a significantly shorter examination time.
abstract_id: PUBMED:2543951 Sonographic diagnosis of breast carcinoma by a 7.5 MHz high-resolution electronic linear array transducer The diagnostic quality of a 7.5 MHz real-time transducer was evaluated in 175 cases of breast lesions, including 106 carcinomas. All cases were histologically confirmed by surgery or excisional biopsy. In the prospective study, accuracy in carcinoma was 84.6% with a true positive ratio of 86.8%. There were 14 false negative cases with carcinoma and 13 false positive cases. Five "early" breast carcinomas within 1 cm in diameter were misdiagnosed as benign breast lesions. The false positive cases were composed of 7 degenerated fibroadenomas, 2 mastopathies and 4 inflammatory changes. In the retrospective study, accuracy in carcinoma was 83.4% with a true positive ratio of 93.4%. Compared with the prospective study, false negative cases decreased and false positive cases increased in retrospect. The predictive value of positive results for carcinoma was about 80% for each of the malignant findings used as criteria for breast carcinoma. The 7.5 MHz electronic, linear-array transducer has several advantages over the 7.5 MHz polymer transducer in the detection and also the precise observation of breast lesions. abstract_id: PUBMED:10575449 An update of B-mode echography in the characterization of nodular thyroid diseases. An echographic study comparing 7.5 and 13 MHz probes Purpose: We investigated B-mode US capabilities in diagnosing and characterizing thyroid nodules and compared our personal findings with those of the few analytical studies in the literature. We also compared the diagnostic accuracy of conventional 7.5 MHz versus more recent 13 MHz transducers. Material And Methods: We examined 136 consecutive patients with a single thyroid nodule: they were 97 women and 39 men, with ages ranging from 15 to 87 years (mean: 37.4). The patients were submitted to scintigraphy and laboratory tests first and then to US, fine-needle biopsy and/or histologic examination. The final diagnosis was made at cytology and/or histology: we had 98 follicular hyperplasias, 20 follicular adenomas and 18 carcinomas. We studied the presence/absence of the halo sign, cystic portions, and microcalcifications; nodule margins and echogenicity relative to the thyroid gland were also studied. Results: The presence of microcalcifications had the highest specificity for malignancy. The sensitivity of this parameter was higher with 13 MHz than with 7.5 MHz transducers. Relative to microcalcifications, absence of cystic portions and irregular margins, 13 MHz US had 64.7-89% accuracy. The halo sign and lesion echogenicity did not permit a reliable differential diagnosis between benign and malignant nodules with either 7.5 or 13 MHz transducers. The association of microcalcifications and irregular margins had the highest accuracy, scoring 86% at 7.5 MHz and 90.5% at 13 MHz. Conclusions: High frequency US is a sensitive tool for diagnosing thyroid nodules. Accurate analysis of the US signs can suggest the benign/malignant nature of a lesion, which must be integrated with color, power and pulsed Doppler findings. abstract_id: PUBMED:1792150 Diagnostic accuracy of breast sonography. Comparison among three different techniques Mechanical arc scanning is widely used for breast sonography in Japan. The authors have used three different kinds of devices over the past seven years. The diagnostic accuracy was compared among three groups of patients with pathologically confirmed breast masses.
In group A, 309 cases (77 carcinomas) were evaluated with a 10 MHz contact compound scanner, and the sensitivity, specificity and accuracy rates were 77.9%, 97.8% and 92.9%, respectively. In group B, 306 cases (56 carcinomas) were evaluated with a 5 or 7.5 MHz mechanical arc scanner, and the sensitivity, specificity and accuracy rates were 89.3%, 84.8% and 85.6%, respectively. In group C, 296 cases (71 carcinomas) were evaluated with a 7.5 MHz real-time scanner, and the sensitivity, specificity and accuracy rates were 95.8%, 92.4% and 93.2%, respectively. In cases with T1 breast cancer, the methods had sensitivities of 60.0%, 85.7% and 94.5%, respectively. The sensitivity of the real-time scanner was not significantly different from that of the mechanical arc scanner. In conclusion, the hand-held real-time scanner with a high-frequency transducer is a simple, useful device with high diagnostic accuracy for breast examination and can be used as a substitute for the mechanical arc scanner. abstract_id: PUBMED:26835671 Strain Elastography Ultrasound: An Overview with Emphasis on Breast Cancer Diagnosis. Strain elastography (SE), which estimates tissue strain, is an adjunct to the conventional ultrasound B-mode examination. We present a short introduction to SE and its clinical use. Furthermore, we present an overview of the 10 largest studies performed on the diagnostic accuracy of SE in breast cancer diagnostics. Eight of 10 studies presented data for both SE and B-mode imaging. Seven studies showed better specificity and accuracy for SE than for B-mode imaging in breast cancer diagnosis. Four studies showed an increase in specificity and accuracy when combining B-mode imaging with SE. The ways of combining B-mode imaging with SE in the diagnosis of breast cancer differed between the five studies. We believe that further studies are needed to establish an optimal algorithm for the combination of B-mode ultrasound and SE in breast cancer. abstract_id: PUBMED:2273044 Advantages of the 5 MHz transducer in the diagnosis of non-palpable breast cancer We surveyed more than 800 neoplastic lesions studied by US in our department. Fifty-five were non-palpable (T0) tumors. In 60% of the cases, positivity or high suspicion of malignancy was demonstrated by US. If the doubtful cases (one sign of positivity) are added, the diagnostic rate is 67%. In the same series, X-ray mammography including positive, suspicious and doubtful cases reached the same percentage, the two methods together giving a final score of 89%. The method used is briefly described and the particular contribution of the 5 MHz mono-transducer is emphasized. The complementarity of X-ray mammography and US studies in the diagnosis of T0 breast neoplasms is stressed. abstract_id: PUBMED:21573095 Variability of standardized echographic ultrasound using 10 MHz and high-resolution 20 MHz B scan in measuring intraocular melanoma. Background: The purpose of this study was to evaluate the applicability and variability of echographic imaging using 10 MHz and high-resolution 20 MHz B scanning for measurement of intraocular tumors. Methods: This prospective consecutive study comprised 27 eyes in 27 patients with uveal melanoma. Each patient was examined by three experienced examiners independently on three occasions within a two-week time frame in a blinded manner. The height of the lesion was measured by all examiners using the 10 MHz B, 20 MHz B, and 8 MHz A scan probes. Additionally, basal diameter was examined using the 10 and 20 MHz B scan.
Results: Tumor height measurements for all examiners using the standardized A scan tended to be higher than for both B scan measurements. Statistical analysis revealed significant differences in tumor height between B and A scan measurements. No difference in tumor height was found between the two B scan techniques (P = 0.239). Basal tumor diameter measurements revealed significant differences between 10 MHz and 20 MHz B scans (P < 0.001 and P = 0.001, respectively). For the 10 MHz B scan, basal diameter results tended to be larger than for the 20 MHz B scan. No difference was found for interobserver variation in all A scan and B scan examinations. The mean standard deviation of the difference in tumor height measurements between the examiners was ±0.24 mm for the 8 MHz A scan, ±0.46 mm for the 10 MHz B scan, and ±0.42 mm for the 20 MHz B scan. Both the 10 MHz and the more precise 20 MHz B scan evaluations underestimated tumor height. Conclusion: The 20 MHz ultrasound probe, despite its theoretically higher resolution, is not able to replace A scan measurements of tumor height. Answer: The use of ultrasound examination in breast diagnostics has been explored with different transducer frequencies to improve diagnostic accuracy and minimize unnecessary invasive procedures. Studies have compared the effectiveness of standard breast ultrasound using 7.5 MHz transducers with high-frequency ultrasound using 13 MHz transducers. One study involving 810 female patients with suspicious breast lesions found that high-frequency ultrasound with 13 MHz transducers had a higher diagnostic accuracy compared to standard ultrasound with 7.5 MHz transducers, particularly in the diagnosis of small tumors (PUBMED:15948057). The sensitivity and specificity of high-frequency ultrasound were 87.2% and 78.4%, respectively, compared to 82.6% and 80.8% for standard ultrasound. High-frequency ultrasound was especially more sensitive in detecting small invasive tumors (pT1a), with a sensitivity of 78% versus 56% for standard ultrasound. Another study comparing 7.5 MHz and 13 MHz transducers in 47 patients found that the high-resolution 13 MHz scanner could detect smaller lesions and provide more exact diagnoses. It also showed that while the 13 MHz scanner could identify additional satellites of the primary tumor and clarify unclear sonographic findings as cysts, it made the diagnosis of mastopathy and benign lesions more challenging due to the increased visibility of details and irregular margins (PUBMED:8679727). In the context of thyroid nodules, a study comparing 7.5 MHz and 13 MHz probes found that the presence of microcalcifications had the highest specificity for malignancy, and the sensitivity of this parameter was higher with 13 MHz than with 7.5 MHz transducers (PUBMED:10575449). Overall, these studies suggest that high-frequency ultrasound transducers, such as the 13 MHz, can be valuable in improving the diagnostic accuracy of breast ultrasound, particularly in detecting smaller lesions and characterizing unclear findings, thus potentially reducing the need for invasive diagnostic procedures.
Instruction: Antegrade enemas for defecation disorders: do they improve the colonic motility? Abstracts: abstract_id: PUBMED:19635308 Antegrade enemas for defecation disorders: do they improve the colonic motility? Purpose: The aim of the study was to describe the changes in colonic motility occurring after chronic antegrade enema use in children and young adults. Methods: Colonic manometry tracings of patients who had used antegrade enemas for at least 6 months and were being evaluated for possible discontinuation of this treatment were retrospectively reviewed. Results: Seven patients (median age of 12 years, range 3-15 years) met our inclusion criteria. Four patients had idiopathic constipation, 2 had tethered cord, and 1 had Hirschsprung disease. Colonic manometry before the use of antegrade enemas showed dysmotility in 6 (86%) children, mostly in the distal colon. None of the patients underwent colonic resection between the 2 studies. All the patients had colonic manometry repeated between 14 and 46 months after the creation of the cecostomy. All patients with abnormal colonic manometry improved with the use of antegrade enemas, with complete normalization of colonic motility in 5 (83%) patients. Conclusion: Use of antegrade enemas alone, without diversion or resection, may improve colonic motility. abstract_id: PUBMED:25079485 Prolonged colonic manometry in children with defecatory disorders. Objectives: Colonic manometry is a test used in the evaluation of children with defecation disorders unresponsive to conventional treatment. The most commonly reported protocol in pediatrics consists of a study that lasts approximately 4 hours. Given the wide physiological variations in colonic motility throughout the day, longer observation may detect clinically relevant information. The aim of the present study was to compare prolonged colonic manometry studies in children referred for colonic manometry with the more traditional short water-perfused technology. Methods: Colonic manometry studies of 19 children (8 boys, mean age 9.4 ± 0.9, range 3.9-16.3) with severe defecation disorders were analyzed. First, a "standard test" was performed with at least 1-hour fasting, 1-hour postprandial, and 1-hour postbisacodyl provocation recording. Afterwards, recordings continued until the next day. Results: In 2 of the 19 children, prolonged recording gave us extra information. In 1 patient with functional nonretentive fecal incontinence who demonstrated no abnormalities in the short recording, 2 long clusters of high-amplitude contractions were noted in the prolonged study, possibly contributing to the fecal incontinence. In another patient, evaluated after failed use of antegrade enemas through a cecostomy, short recordings showed colonic activity only in the most proximal part of the colon, whereas the prolonged study showed normal motility over a larger portion of the colon. Conclusions: Prolonged colonic measurement provides more information regarding colonic motor function and allows detection of motor events missed by the standard shorter manometry study. abstract_id: PUBMED:30337036 Constipation: Beyond the Old Paradigms. Constipation is a common problem in children. Although most children respond to conventional treatment, symptoms persist in a minority. For children with refractory constipation, anorectal and colonic manometry testing can identify a rectal evacuation disorder or colonic motility disorder and guide subsequent management.
Novel medications used in adults with constipation are beginning to be used in children, with promising results. Biofeedback therapy and anal sphincter botulinum toxin injection can be considered for children with a rectal evacuation disorder. Surgical management of constipation includes the use of antegrade continence enemas, sacral nerve stimulation, and colonic resection. abstract_id: PUBMED:37096634 Clinical utility of colonic low-amplitude propagating contractions in children with functional constipation. Background: Colonic high-amplitude propagating contractions (HAPC) are generally accepted as a marker of neuromuscular integrity. Little is known about low-amplitude propagating contractions (LAPCs); we evaluated their clinical utility in children. Methods: Retrospective review of children with functional constipation undergoing low-resolution colon manometry (CM) recording HAPCs and LAPCs (physiologic or bisacodyl-induced) in three groups: constipation, antegrade colonic enemas (ACE), and ileostomy. Outcome (therapy response) was compared to LAPCs in all patients and within groups. We evaluated LAPCs as potentially representing failed HAPCs. Key Results: A total of 445 patients were included (median age 9.0 years, 54% female); 73 had LAPCs. We found no association between LAPCs and outcome (all patients, p = 0.121), corroborated by logistic regression and by excluding HAPCs. We found an association between physiologic LAPCs and outcome that disappears when excluding HAPCs or controlling with logistic regression. We found no association between outcome and bisacodyl-induced LAPCs or LAPC propagation. We found an association between LAPCs and outcome only in the constipation group, which cancels with logistic regression and with exclusion of HAPCs (p = 0.026, 0.062, and 0.243, respectively). We found a higher proportion of patients with LAPCs amongst those with absent or abnormally propagated (absent or partially propagated) HAPCs compared to those with fully propagated HAPCs (p = 0.001 and 0.004, respectively), suggesting LAPCs may represent failed HAPCs. Conclusions/inferences: LAPCs do not seem to have added clinical significance in pediatric functional constipation; CM interpretation could rely primarily on the presence of HAPCs. LAPCs may represent failed HAPCs. Larger studies are needed to further validate these findings. abstract_id: PUBMED:23035840 Factors associated with successful decrease and discontinuation of antegrade continence enemas (ACE) in children with defecation disorders: a study evaluating the effect of ACE on colon motility. Background: Antegrade continence enemas (ACE) have been used in the treatment of defecation disorders in children; little is known about their effect on colon motility and the utility of colon manometry (CM) in predicting long-term ACE outcomes. Methods: Retrospective review of children with constipation undergoing CM before and after ACE to evaluate CM changes and their utility in predicting ACE outcome. Key Results: A total of 40 patients (mean age 8.8 SD 3 years and 53% female patients) were included; 39 of 40 responded to the ACE. Of these 39, 14 (36%) were dependent and 25 (64%) had decreased it (11 of those, or 28%, discontinued it). On repeat CM we found a significant increase in the fasting (P < 0.01) and postprandial (P = 0.03) motility index, the number of bisacodyl-induced high amplitude propagating contractions (HAPCs) (P = 0.03), and total HAPCs (P = 0.02).
The gastrocolonic response to a meal, and the propagation and normalization of HAPCs, improved in 28%, 58%, and 33% of patients, respectively, with CM normalizing in 33% of patients. The baseline CM did not predict ACE outcome. The presence of normal HAPCs on the repeat CM was associated with ACE decrease. Progression and normalization of HAPCs (P = 0.01 and 0.02, respectively) and CM normalization (P = 0.01) on repeat CM were individually associated with ACE decrease. No CM change was associated with ACE discontinuation. Multivariate analysis showed that older age and HAPC normalization on CM predict ACE decrease, and older age is the only predictor of ACE discontinuation. Conclusions & Inferences: Colon motility improves after ACE, and the changes on the repeat CM may assist in predicting ACE outcome. abstract_id: PUBMED:16567185 Colonic manometry as predictor of cecostomy success in children with defecation disorders. Purpose: The aim of this study was to define the predictive value of colonic manometry and contrast enema before cecostomy placement in children with defecation disorders. Methods: Medical records, contrast enema, and colonic manometry studies were reviewed for 32 children with defecation disorders who underwent cecostomy placement between 1999 and 2004. Diagnoses included idiopathic constipation (n = 13), Hirschsprung's disease (n = 2), cerebral palsy (n = 1), imperforate anus (n = 6), spinal abnormality (n = 6), and anal with spinal abnormality (n = 4). Contrast enemas were evaluated for the presence of anatomic abnormalities and the degree of colonic dilatation. Colonic manometry was considered normal when high-amplitude propagating contractions (HAPC) occurred from the proximal to the distal colon. Clinical success was defined as normal defecation frequency with no or occasional fecal incontinence. Results: Colonic manometry was done on 32 and contrast enema on 24 patients before cecostomy. At follow-up, 25 patients (78%) fulfilled the success criteria. Absence of HAPC throughout the colon was related to unsuccessful outcome (P = .03). Colonic response with normal HAPC after bisacodyl administration was predictive of success (P = .03). Presence of colonic dilatation was not associated with colonic dysmotility. Conclusion: Colonic manometry is helpful in predicting the outcome after cecostomy. Patients with generalized colonic dysmotility are less likely to benefit from use of antegrade enemas via cecostomy. A normal colonic response to bisacodyl predicts a favorable outcome. abstract_id: PUBMED:33909507 Characterization of haustral activity in the human colon. Contraction patterns of the human colon are rarely discussed from the perspective of its haustra. Colonic motility was analyzed in 21 healthy subjects using 84-sensor manometry catheters with 1-cm sensor spacing. Capsule endoscopy and manometry showed evidence of narrow rhythmic circular muscle contractions. X-ray images of haustra and sensor locations allowed us to identify manometry motor activity as intrahaustral activity. Two common motor patterns were observed that we infer to be associated with individual haustra: rhythmic pressure activity confined to a single sensor, and activity confined to a section of the colon of 3-6 cm length. Intrahaustral activity was observed by 3-4 sensors. Approximately 50% of the haustra were intermittently active for ∼30% of the time; 2,402 periods of haustral activity were analyzed.
Intrahaustral activity showed rhythmic pressure waves, propagating in mixed directions, 5-30 mmHg in amplitude at a frequency of ∼3 cpm (range 2-6) or ∼12 cpm (range 7-15), or exhibiting a checkerboard segmentation pattern. Boundaries of the haustra showed rhythmic pressure activity with or without elevated baseline pressure. Active haustra often showed no boundary activity, probably allowing transit to neighboring haustra. Haustral boundaries were seen at the same sensor for the 6- to 8-h study duration, indicating that they did not propagate, thereby likely contributing to continence. The present study elucidates the motility characteristics of haustral boundaries and the nature of intrahaustral motor patterns and paves the way for investigating their possible role in the pathophysiology of defecation disorders. NEW & NOTEWORTHY Here, we present the first full characterization and quantification of motor patterns that we infer to be confined to single haustra, both intrahaustral activity and haustral boundary activity, in the human colon using high-resolution manometry. Haustral activity is intermittent but consistently present in about half of the haustra. Intrahaustral activity presents as a cyclic motor pattern of mixed propagation direction dominated by simultaneous pressure waves that can resolve into checkerboard segmentation, allowing for mixing, absorption, and stool formation. abstract_id: PUBMED:28011998 Can balneotherapy improve the bowel motility in chronically constipated middle-aged and elderly patients? Balneotherapy or spa therapy is usually known for different application forms of medicinal waters and its effects on the human body. Our purpose is to demonstrate the effect of balneotherapy on gastrointestinal motility. A total of 35 patients who were treated for osteoarthritis with balneotherapy from November 2013 through March 2015 at our hospital had a consultation at the general surgery department for constipation and defecation disorders. Patients followed with constipation scores, the short-form health survey (SF-12), and a colonic transit time (CTT) study before and after balneotherapy were included in this study, and the data of the patients were analyzed retrospectively. The changes in constipation score, SF-12 score, and CTT after balneotherapy were statistically significant (p < 0.05). The results of our study confirm the clinical finding that a 15-day course of balneotherapy with mineral water from a thermal spring (Bursa, Turkey) improves gastrointestinal motility and reduces laxative consumption in the management of constipation in middle-aged and elderly patients, and it is our belief that treatment with thermal mineral water could considerably improve the quality of life of these patients. abstract_id: PUBMED:7607277 Defaecation disorders in children, colonic transit time versus the Barr-score. It is still unclear how to evaluate the existence of faecal retention or impaction in children with defaecation disorders. To objectivate the presence and degree of constipation, we measured segmental and total colonic transit times (CTT) using radio-opaque markers in 211 constipated children. On clinical grounds, patients (median age 8 years (5-14 years)) could be divided into three groups: constipation, isolated encopresis/soiling and recurrent abdominal pain. Barr-scores, a method for assessment of stool retention using plain abdominal radiographs, were obtained in the first 101 patients, for comparison with CTT measurements as to the clinical outcome.
Of the children with constipation, 48% showed significantly prolonged total and segmental CTT. Surprisingly, 91% and 91%, respectively, of the encopresis/soiling and recurrent abdominal pain children had a total CTT within normal limits, suggesting that no motility disorder was present. Prolonged CTT through all segments, known as colonic inertia, was found in the constipation group only. Based on significant differences in clinical presentation, CTT and colonic transit patterns, encopresis/soiling children formed a separate entity among children with defaecation disorders, compared to children with constipation. Recurrent abdominal pain in children was, in the great majority, not related to constipation. Barr-scores were poorly reproducible, with low inter- and intra-observer reliability. This is the first study which shows that clinical differences in constipated children are associated with different colonic transit patterns. The usefulness of CTT measurements lies in the objectivation of complaints and the discrimination of certain transit patterns. Conclusion: Abdominal radiographs, even when assessed with the Barr-score, proved unreliable in diagnosing constipation. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:38430328 Relationship of Age and Gender to Motility Test Results and Symptoms in Patients with Chronic Constipation. Background/aims: Patients with chronic constipation (CC) exhibit symptoms and functional abnormalities upon testing, but their relationship to age and gender is unclear. We assessed age- and gender-related differences in symptoms, colon transit time, and anorectal motility, sensation, and expulsion. Patients And Methods: Retrospective, post hoc data analysis of patients with CC who underwent Wireless Motility Capsule (WMC), High-Resolution Anorectal Manometry (HR-ARM), Balloon Expulsion Test (BET) and Rectal Sensory Testing (RST). Clinical assessment was made by questionnaires. Standard WMC criteria for colonic transit time (CTT) and the London classification were used for HR-ARM analyses, and regression plots between age, gender, CTT, HR-ARM, RST and BET were calculated. Results: We studied 75 women and 91 men. Abdominal pain, infrequent defecation, incomplete evacuation, defecatory straining, and multiple motility and anorectal function abnormalities were common. Abdominal pain was least frequently, and straining was most frequently, associated with a motility abnormality. For each symptom, the highest prevalence was associated with failed BET. There was a significant increase in CTT with age only in men (p = 0.0006). In men, for each year of age there was a CTT increase of 1.02 h. The prevalence of abdominal pain and incomplete evacuation for females was significantly higher than that for males (both P < 0.05). The prevalence of low anal squeeze pressure for females was significantly higher than that for males, and the prevalence of poor rectal sensation for males was significantly higher than that for females (both P < 0.05). A significant decrease in basal anal and squeeze pressures with age occurred in women (p < 0.0001); an increase in age of one year was associated with a decrease in anal base pressure of 1.2 mmHg. Abnormal CTT and HR-ARM tests were associated with increased symptom frequency, but not severity. Conclusions: There are significant age- and gender-related differences in symptoms, CTT, HR-ARM parameters, rectal sensation, and expulsion that may influence the multifaceted management of constipation.
Answer: Yes, antegrade enemas for defecation disorders can improve colonic motility. A study described changes in colonic motility after chronic antegrade enema use in children and young adults, showing that all patients with abnormal colonic manometry improved with the use of antegrade enema, with a complete normalization of colonic motility in 83% of patients (PUBMED:19635308). Another study found that antegrade continence enemas (ACE) led to significant increases in fasting and postprandial motility index, number of bisacodyl-induced high amplitude propagating contractions (HAPCs), and total HAPCs, indicating an improvement in colon motility after ACE (PUBMED:23035840). Additionally, colonic manometry was found to be helpful in predicting the outcome after cecostomy, with patients showing a normal colonic response to bisacodyl predicting a favorable outcome (PUBMED:16567185). These findings suggest that the use of antegrade enema alone, without the need for diversion or resection, may improve colonic motility in patients with defecation disorders.
Instruction: Does periprostatic block increase the transrectal ultrasound (TRUS) biopsy sepsis rate in men with elevated PSA? Abstracts: abstract_id: PUBMED:23677210 Does periprostatic block increase the transrectal ultrasound (TRUS) biopsy sepsis rate in men with elevated PSA? Introduction: Periprostatic nerve block (PPNB) is a common local anaesthetic technique in transrectal ultrasound-guided (TRUS) prostate biopsy, but concerns remain over the increased theoretical risks of urinary tract infection (UTI) and sepsis from the additional transrectal needle punctures. This study reviewed our biopsy data to assess this risk. Materials And Methods: Retrospective data collected from 177 men who underwent TRUS biopsy between July 2007 and December 2009 in a single institution were analysed. PPNB was administered using 1% xylocaine at the prostatic base and apex and repeated on the contralateral side under ultrasound guidance. Complications, including UTI sepsis, bleeding per rectum and acute retention of urine (ARU), were noted. Every patient was tracked for the first 2 weeks for complications until his clinic review. Demographic profile, biopsy parameters and histological findings were reviewed. Univariate and multivariate analyses of possible risk factors for development of sepsis after TRUS biopsy were performed. Statistical analysis was performed using SPSS 17.0. Results: Ninety (51%) men received PPNB and 87 (49%) did not. The groups were matched in age (PPNB: mean 62.7 ± 5.8 years; without PPNB: mean 64.4 ± 5.7 years) and prebiopsy prostate specific antigen (PSA) levels (PPNB: mean 8.2 ± 3.9 ng/mL; without PPNB: mean 8.3 ± 3.7 ng/mL). The PPNB group had a larger prostate volume, with more cores taken (P < 0.05). On univariate and multivariate analysis controlling for age, PSA, prostate volume, number of cores taken and histological prostatitis, PPNB was not a significant risk factor for sepsis. Sepsis rates were 5.6% in the PPNB group and 5.7% in the other group (P = 0.956). The overall prostate cancer detection rate was 33.3%. Conclusion: The risk of sepsis was not increased in patients who received PPNB, even though this group had larger gland volumes and more biopsy cores taken. abstract_id: PUBMED:29264144 Contemporary outcomes in the detection of prostate cancer using transrectal ultrasound-guided 12-core biopsy in Singaporean men with elevated prostate specific antigen and/or abnormal digital rectal examination. Objective: Despite being the third commonest cancer in Singaporean men, there is a dearth of basic data on the detection rate of prostate cancer and post-procedure complication rates locally using systematic 12-core biopsy. Our objective is to evaluate prostate cancer detection rates using 12-core prostate biopsy based on serum prostate specific antigen (PSA) levels and digital rectal examination (DRE) findings in Singaporean men presenting to a single tertiary centre. The secondary objective is to evaluate the complication rates of transrectal prostate biopsies. Methods: We retrospectively examined 804 men who underwent first transrectal-ultrasound (TRUS) guided 12-core prostate biopsies from January 2012 to April 2014. Prostate biopsies were performed on men presenting to a tertiary institution when their PSA levels were ≥4.0 ng/mL and/or when they had suspicious DRE findings. Results: The overall prostate cancer detection rate was 35.1%.
Regardless of DRE findings, patients were divided into four subgroups based on their serum PSA levels: 0-3.99 ng/mL, 4.00-9.99 ng/mL, 10.00-19.99 ng/mL and ≥20.00 ng/mL, and their detection rates were 9.5%, 20.9%, 38.4% and 72.3%, respectively. The detection rate of cancer based on suspicious DRE findings alone was 59.2%, compared to 36.5% based on a serum PSA cut-off of 4.0 ng/mL alone. The post-biopsy admission rate for sepsis was 1.5%. Conclusion: Using contemporary 12-core biopsy methods, the local prostate cancer detection rate based on serum PSA and DRE findings has increased over the past decade, presumably due to multiple genetic and environmental factors. Post-biopsy sepsis remains an important complication worldwide. abstract_id: PUBMED:26239148 Transrectal ultrasound-guided prostate biopsy in Taiwan: A nationwide database study. Background: For patients with an elevated prostate specific antigen (PSA) level or a suspected lesion detected by digital rectal examination, transrectal ultrasound-guided (TRUS) prostate biopsy is the standard procedure for prostate cancer diagnoses. In Taiwan, TRUS prostate biopsy has not been well-studied on a nationwide scale. This article aimed to study TRUS prostate biopsy in Taiwan and its related complications, according to the claims generated through the National Health Insurance (NHI) program. Methods: We applied for access to claims from the NHI Research Database of Taiwan of all patients who visited the urology clinic during the period of 2006 to 2010. In the 5-year urology profile, we obtained all records, which included admission and ambulatory clinical records. The definition of TRUS biopsy included codes for an ultrasound-guided procedure and for prostate puncture; other codes involving complications such as postbiopsy voiding difficulty, significant bleeding, or infection requiring treatment were also included. Risk factors included age, diagnosis of prostate cancer, hospitalization or nonhospitalization, and the Charlson Comorbidity Index (CCI; with a value of 0, 1, 2 or ≥ 3). Descriptive and comparative analyses were also performed. Results: In the 5-year urology profile, 12,968 TRUS biopsies were performed, of which 6885 were in-patient procedures and 6083 were ambulatory clinic procedures. After the procedures, 1266 (9.76%) biopsies were associated with voiding difficulty; 148 (1.14%) biopsies, with significant bleeding; and 855 (6.59%) biopsies, with infection that required treatment. The prostate cancer diagnosis rate was 36.02%. The overall biopsy-related mortality rate within 30 days was 0.25%, and the postbiopsy sepsis-related mortality rate was 0.13%. Age, diagnosis of cancer, hospitalization, and a CCI value ≥ 1 were all significant factors in univariate analysis and multivariate analysis for postbiopsy voiding difficulty and severe infection. A diagnosis of cancer and a CCI value ≥ 2 were significant factors for significant bleeding after biopsy. Patients diagnosed as having prostate cancer had fewer bleeding complications after biopsy. Conclusion: The most frequent complication was postbiopsy voiding difficulty, followed by infection that required treatment and significant bleeding. The sepsis-related mortality rate was 0.13%. Significant risk factors for postbiopsy complications included age, diagnosis of prostate cancer, hospitalization, and the CCI value. abstract_id: PUBMED:25556709 Transrectal ultrasound-guided biopsy sepsis and the rise in carbapenem antibiotic use.
Background: This study sought to determine the number of hospital admissions for sepsis following transrectal ultrasound-guided (TRUS) biopsy, and the rate of both prophylactic and therapeutic use of carbapenem antibiotics for TRUS biopsy, at a single institution. Methods: A retrospective review of prospectively collected data from the medical records electronic database of Cabrini Health, a private metropolitan hospital, was queried for coding of admissions under any admitting urologist for sepsis and prostate-related infections from 2009 to 2012. Records were examined for whether a TRUS biopsy had been performed within 14 days prior and if a therapeutic carbapenem was required. The database also queried the use of carbapenems as prophylaxis in patients undergoing TRUS biopsy. Results: Of the 63 admissions for TRUS biopsy sepsis, multi-drug-resistant organisms were isolated from 26 (41%). Twenty-three admissions were from the 1937 patients who underwent a TRUS biopsy at Cabrini (a sepsis rate of 1.2%) and 40 were following TRUS biopsies at other centres. Thirty-seven (58.7%) patients received therapeutic carbapenems either empirically, or after culture results. Of the 1937 Cabrini TRUS biopsy patients, 154 (8%) were given a carbapenem as prophylaxis, with a rapid increase in prophylactic use over the 4 years studied from 0.25% to 13%. Conclusion: This study did not show evidence of an increasing rate of hospital admissions for TRUS biopsy sepsis at this institution. However, there was a dramatic uptake in prophylactic administration of carbapenems. Increasing carbapenem use may contribute to development of carbapenem-resistant bacteria. Alternative methods of prostate biopsy that avoid sepsis should be considered. abstract_id: PUBMED:32395330 Ciprofloxacin: single versus multiple doses in transrectal ultrasound guided prostate biopsy. Introduction: There is rising concern regarding overuse of fluoroquinolones due to severe musculoskeletal and neurological side effects, and development of resistant microorganisms. In June 2019, the European Commission recommended fluoroquinolones should not be used routinely for prophylaxis in urological surgical procedures. Methods to reduce unnecessary exposure to fluoroquinolones should be investigated. The aim of this article was to determine differences in hospital admission secondary to sepsis following transrectal ultrasound (TRUS) guided prostate biopsies between patients who received single vs. multiple doses of fluoroquinolones. Material And Methods: A retrospective analysis (June 2017-September 2018) of 200 consecutive TRUS biopsies at a single centre was undertaken. Group 1 (n = 100) received 750 mg ciprofloxacin 1-hr before their procedure followed by 3 days of ciprofloxacin 250 mg BD. Group 2 (n = 100) received a single dose of 750 mg ciprofloxacin 1-hr before the procedure. Midstream urine (MSU) culture results were examined pre-biopsy and 7 days post-biopsy. Data was also gathered on readmission rates to hospital as a result of urosepsis. Results: A total of 1% of patients in each group required hospital admission secondary to Escherichia coli sepsis. A further 4% (n = 4) in Group 1 developed a urinary tract infection requiring antibiotic treatment post biopsy compared with 1% (n = 1) in Group 2. There was no statistically significant difference in development of infectious complications post-biopsy between the two groups (p >0.05).
Conclusions: A single prophylactic dose of 750 mg of ciprofloxacin 1-hour pre-biopsy is as effective as multiple doses for TRUS guided prostate biopsy. Avoiding an unnecessary and prolonged course of fluoroquinolones has advantages in reducing potential side effects and development of resistant pathogens. abstract_id: PUBMED:35390395 Change from transrectal to transperineal ultrasound-guided prostate biopsy under local anaesthetic eliminates sepsis as a complication. Transrectal ultrasound-guided (TRUS) biopsy of the prostate is associated with increased risk of post-procedural sepsis with associated morbidity, mortality, re-admission to hospital, and increased healthcare costs. In the study institution, active surveillance of post-procedural infection complications is performed by clinical nurse specialists for prostate cancer under the guidance of the infection prevention and control team. To protect hospital services for acute medical admissions related to the coronavirus disease 2019 (COVID-19) pandemic, TRUS biopsy services were reduced nationally, with exceptions only for those patients at high risk of prostate cancer. In the study institution, this change prompted a complete move to transperineal (TP) prostate biopsy performed in outpatients under local anaesthetic. TP biopsies eliminated the risk of post-procedural sepsis and, consequently, sepsis-related admission while maintaining a service for prostate cancer diagnosis during the COVID-19 pandemic. abstract_id: PUBMED:34674018 Transrectal ultrasound-guided prostate needle biopsy remains a safe method in confirming a prostate cancer diagnosis: a multicentre Australian analysis of infection rates. Purpose: Worldwide, transrectal ultrasound-guided prostate needle remains the most common method of diagnosing prostate cancer. Due to high infective complications reported, some have suggested it is now time to abandon this technique in preference of a trans-perineal approach. The aim of this study was to report on the infection rates following transrectal ultrasound-guided prostate needle biopsy in multiple Australian centres. Materials And Methods: Data were collected from seven Australian centres across four states and territories that undertake transrectal ultrasound-guided prostate needle biopsies for the diagnosis of prostate cancer, including major metropolitan and regional centres. In four centres, the data were collected prospectively. Rates of readmissions due to infection, urosepsis resulting in intensive care admission and mortality were recorded. Results: 12,240 prostate biopsies were performed in seven Australian centres between July 1998 and December 2020. There were 105 readmissions for infective complications with rates between centres ranging from 0.19 to 2.60% and an overall rate of 0.86%. Admission to intensive care with sepsis ranged from 0 to 0.23% and overall 0.03%. There was no mortality in the 12,240 cases. Conclusion: Infective complications following transrectal ultrasound-guided prostate needle biopsies are very low, occurring in less than 1% of 12,240 biopsies. Though this study included a combination of both prospective and retrospective data and did not offer a comparison with a trans-perineal approach, TRUS prostate biopsy is a safe means of obtaining a prostate cancer diagnosis. Further prospective studies directly comparing the techniques are required prior to abandoning TRUS based upon infectious complications. 
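The pooled figure in PUBMED:34674018 (105 readmissions across 12,240 biopsies, 0.86%) is a simple proportion, and attaching an exact confidence interval to it helps put the 0.19-2.60% between-centre range in context. A minimal Python sketch of that calculation follows; the per-centre counts are hypothetical placeholders chosen only to sum to the reported totals, and scipy is assumed to be available.

# Minimal sketch: pool per-centre readmission counts and attach an exact
# (Clopper-Pearson) 95% CI to the overall rate. Per-centre counts are
# hypothetical; only the totals (105/12,240) mirror PUBMED:34674018.
from scipy.stats import beta

centres = [(9, 1100), (22, 2600), (14, 1900), (31, 3300), (12, 1340), (10, 1000), (7, 1000)]
events = sum(e for e, _ in centres)   # 105 readmissions
n = sum(m for _, m in centres)        # 12,240 biopsies

def clopper_pearson(k, n, alpha=0.05):
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

lo, hi = clopper_pearson(events, n)
print(f"readmission rate {events}/{n} = {100 * events / n:.2f}% "
      f"(95% CI {100 * lo:.2f}%-{100 * hi:.2f}%)")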
abstract_id: PUBMED:29723457 One-puncture one-needle TRUS-guided prostate biopsy for prevention of postoperative infections. Objective: To explore the feasibility and effectiveness of "one-puncture one-needle" transrectal ultrasound (TRUS)-guided prostate biopsy in the prevention of postoperative infections. Methods: We retrospectively analyzed the clinical data about "one-puncture one-needle" (the observation group) and "one-person one-needle" (the control group) TRUS-guided prostate biopsy performed in the Second People's Hospital of Guangdong Province from January 2005 to December 2015, and compared the incidence rates of puncture-related infection between the two strategies. By "one-puncture one-needle", one needle was used for one biopsy puncture, while by "one-person one-needle", one needle was used for all biopsy punctures in one patient and the needle was sterilized with iodophor after each puncture. Results: Totally, 120 patients received 6+1-core or 12+1-core "one-person one-needle" and 466 underwent 12+1-core "one-puncture one-needle" TRUS-guided prostate biopsy. There were no statistically significant differences between the two groups of patients in age, the prostate volume, the serum PSA level, or the detection rate of prostate cancer (P >0.05). Compared with the control group, the observation group showed remarkably lower incidence rates of puncture-related urinary tract infection (7.5% vs 0.9%, P <0.05), fever (5.0% vs 1.1%, P <0.05), bacteriuria (2.5% vs 0.2%, P <0.05), and total infections (16.7% vs 2.6%, P<0.05) postoperatively. Two cases of bacteremia or sepsis were found in each of the groups, with no significant difference between the two. Conclusions: "One-puncture one-needle" TRUS-guided prostate biopsy can effectively prevent puncture-related infections. abstract_id: PUBMED:25858102 Decision analysis model comparing cost of multiparametric magnetic resonance imaging vs. repeat biopsy for detection of prostate cancer in men with prior negative findings on biopsy. Purpose: We compared cost of multiparametric magnetic resonance imaging (MP-MRI) vs. repeat biopsy in detection of prostate cancer (PCa) in men with prior negative findings on biopsy. Methods: A decision tree model compared the strategy of office-based transrectal ultrasound-guided biopsy (TRUS) for men with prior negative findings on biopsy with a strategy of initial MP-MRI with TRUS performed only in cases of abnormal results on imaging. Study end points were cost, number of biopsies, and cancers detected. Cost was based on Medicare reimbursement. Cost of sepsis and minor complications were incorporated into analysis. Sensitivity analyses were performed by varying model assumptions. Results: The baseline model with 24% PCa found that the overall cost for 100 men was $90,400 and $87,700 for TRUS and MP-MRI arms, respectively. The MP-MRI arm resulted in 73 fewer biopsies per 100 men but detected 4 fewer cancers (16 vs 20.4) than the TRUS arm did. A lower risk of PCa resulted in lower costs for the MP-MRI arm and a small difference in detected cancers. At lower cancer rates, MP-MRI is superior to TRUS over a wide range of sensitivity and specificity of MRI. A lower sensitivity of MP-MRI decreases the cost of the MP-MRI, as fewer biopsies are performed, but this also reduces the number of cancers detected.
Conclusions: The use of MP-MRI to select patients for repeat biopsy reduced the number of biopsies needed by 73% but resulted in a few cancers being missed at lower cost when compared with the TRUS arm. Further studies are required to determine whether cancers missed represent clinically significant tumors. abstract_id: PUBMED:36573091 Introduction of surgical site surveillance post transrectal ultrasound (TRUS) guided prostate biopsy and the impact on infection rates. Background: Transrectal ultrasound (TRUS)-guided prostate biopsy is associated with infection rates between 0.3 % and 3.2%. Infectious complications include urinary tract infection, prostatitis, bacteraemia and sepsis. Surgical site surveillance in this patient cohort is becoming increasingly important given global increases in antimicrobial resistance. Methods: Surgical site surveillance for patients undergoing TRUS biopsies was introduced in our hospital in 2017. All patients had a risk assessment form completed to assess for carriage or risk of carriage of multi-drug resistant organisms. An intense analysis was completed on any patient who developed an infection post-TRUS biopsy. Data was fed back on a quarterly basis to a multi-disciplinary working group. Members of this group include a Consultant Microbiologist, Infection Prevention and Control Nurse, Consultant Urologist, Antimicrobial Pharmacists and Clinical Nurse Ward Managers. Results: 784 TRUS-guided biopsy of the prostate procedures were performed between January 1 st 2017 and the end of the third quarter, 2021. The rate of infection post-TRUS was 2.7% in 2017, 3.4% in 2018 and 3.2% in 2019. This improved to 0% in 2020 and 0.8% in the first three quarters of 2021. Conclusions: Several interventions were introduced resulting in a sustained reduction in infection rates in this cohort. These include changing the choice of surgical antibiotic prophylaxis, improvement in the timing of antibiotic prophylaxis and scheduling of other urology procedures. The introduction of surgical site surveillance and multi-disciplinary input has demonstrated a reduction in infection rates post TRUS biopsy. Answer: No, periprostatic block (PPNB) does not increase the transrectal ultrasound (TRUS) biopsy sepsis rate in men with elevated PSA. A study that reviewed biopsy data from 177 men who underwent TRUS biopsy found that the risk of sepsis was not increased in patients who received PPNB, even though this group had larger gland volumes and more biopsy cores taken. The sepsis rates were 5.6% in the PPNB group and 5.7% in the group that did not receive PPNB, which was not statistically significant (P = 0.956) (PUBMED:23677210).
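The decision-analysis model in PUBMED:25858102 comes down to an expected-cost comparison of two strategies: biopsy every man, or image every man and biopsy only those with an abnormal MP-MRI. A minimal Python sketch of that arithmetic is given below; apart from the 24% baseline cancer rate quoted in the abstract, the unit costs, MRI sensitivity and specificity, and sepsis parameters are hypothetical placeholders, not the study's Medicare inputs.

# Sketch of a two-arm decision-tree cost comparison in the spirit of
# PUBMED:25858102. All unit costs and test characteristics below are
# hypothetical placeholders, not the published model inputs.

def trus_arm(n=100, cancer_rate=0.24, cost_biopsy=1000.0,
             sepsis_rate=0.02, cost_sepsis=10000.0):
    # Every man undergoes a repeat TRUS biopsy.
    cost = n * (cost_biopsy + sepsis_rate * cost_sepsis)
    cancers_found = n * cancer_rate            # assumes biopsy detects all cancers
    return cost, n, cancers_found

def mri_arm(n=100, cancer_rate=0.24, cost_mri=700.0, cost_biopsy=1000.0,
            sens=0.80, spec=0.70, sepsis_rate=0.02, cost_sepsis=10000.0):
    # Every man undergoes MP-MRI; only MRI-positive men are biopsied.
    with_ca, without_ca = n * cancer_rate, n * (1 - cancer_rate)
    biopsied = with_ca * sens + without_ca * (1 - spec)
    cost = n * cost_mri + biopsied * (cost_biopsy + sepsis_rate * cost_sepsis)
    cancers_found = with_ca * sens             # MRI false negatives are missed
    return cost, biopsied, cancers_found

for name, arm in (("TRUS", trus_arm), ("MP-MRI", mri_arm)):
    cost, biopsies, cancers = arm()
    print(f"{name:7s} cost=${cost:9.0f}  biopsies/100 men={biopsies:5.1f}  cancers={cancers:4.1f}")

Varying cancer_rate, sens and spec in such a sketch reproduces the qualitative trade-off the abstract describes: fewer biopsies and lower cost in the MP-MRI arm at the price of some missed cancers.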
Instruction: Does red blood cell storage time still influence ICU survival? Abstracts: abstract_id: PUBMED:18757224 Does red blood cell storage time still influence ICU survival? Objective: Few studies have shown that aged packed red blood cell (RBC) transfusion negatively influenced the outcome of ICU patients, probably related to storage lesions which could be decreased by leukodepletion of RBC. The purpose of this study was to evaluate the impact of aged leukodepleted RBC packs on the outcome of ICU patients. Design: Retrospective, observational, cohort study in a Medical Intensive Care Unit. Patients: Consecutive patients admitted during the years 2005 and 2006, and requiring a transfusion. We recorded patient's demographic data, number of RBC unit and age of each RBC, length of ICU, mortality during ICU stay. Results: Five hundred and thirty-four patients were included, with a global mortality of 26.6%, a length of stay in ICU of six days (3-14) and SAPS II of 48 (35-62). An average of 5.9 RBC units was transfused per patient (22.7%<14 days and 57.3%<21 days). The number of RBC was significantly higher in the dead patients group, but the rate of RBC stored less than 21 days was not different (54% versus 60%; p=0.21). In a multivariate logistic model, independent predictors of ICU death were SAPS II (OR=1.02 per point, p<0.001), number of RBC (OR=1.08 per RBC, p<0.001), and length of stay in ICU (p<0.001). Similar results were obtained while introducing the age of RBC as a time-dependent covariate in a multivariate Cox model. Conclusions: RBC transfused in our ICU are old. The ICU outcome is independently associated with the number of leucodepleted RBC transfused, but not with their age. abstract_id: PUBMED:17002627 Effects of storage time of red blood cell transfusions on the prognosis of coronary artery bypass graft patients. Background: In different centers for cardiothoracic surgery throughout the world, different policies are followed concerning the maximum storage time of to-be-transfused red blood cells (RBCs). The aim in this study was to investigate the possible role of the storage time of RBC transfusions on the outcome of coronary artery bypass graft (CABG) surgery patients. Study Design And Methods: In a single-center study, all patients who had undergone CABG surgery in the period 1993 until 1999 were identified. Only those patients who had received standard, allogeneic, buffy coat-depleted, unfiltered RBCs in saline-adenine-glucose-mannitol were entered in the analyses (n = 2732). Endpoints were 30-day survival, hospital stay, and intensive care unit (ICU) stay. Storage time of the perioperative RBC transfusions was analyzed in the following four ways: 1) mean storage time of all perioperative RBC transfusions; 2) storage time of the youngest RBC transfusion; 3) storage time of the oldest RBC transfusion; and 4) comparing outcome in patients receiving only RBCs with a storage time below the median storage of 18 days with patients receiving only RBCs with a storage time above the median. Results: The univariate analyses showed a strong correlation between storage time and the endpoints survival and ICU stay, but also a correlation with an established risk factor such as the number of transfusions. The multivariate analyses showed no independent effect of storage time on survival or ICU stay.
Conclusion: In these analyses, pertaining to 2732 CABG patients, no justification could be found for use of a particular maximum storage time for RBC transfusions in patients undergoing CABG surgery. abstract_id: PUBMED:29372291 Effects of shorter versus longer storage time of transfused red blood cells in adult ICU patients: a systematic review with meta-analysis and Trial Sequential Analysis. Purpose: Patients in the intensive care unit (ICU) are often transfused with red blood cells (RBC). During storage, the RBCs and storage medium undergo changes, which may have clinical consequences. Several trials now have assessed these consequences, and we reviewed the present evidence on the effects of shorter versus longer storage time of transfused RBCs on outcomes in ICU patients. Methods: We conducted a systematic review with meta-analyses and trial sequential analyses (TSA) of randomised clinical trials including adult ICU patients transfused with fresher versus older or standard issue blood. Results: We included seven trials with a total of 18,283 randomised ICU patients; two trials of 7504 patients were judged to have low risk of bias. We observed no effects of fresher versus older blood on death (relative risk 1.04, 95% confidence interval (CI) 0.97-1.11; 7349 patients; TSA-adjusted CI 0.93-1.15), adverse events (1.26, 0.76-2.09; 7332 patients; TSA-adjusted CI 0.16-9.87) or post-transfusion infections (1.07, 0.96-1.20; 7332 patients; TSA-adjusted CI 0.90-1.27). The results were unchanged by including trials with high risk of bias. TSA confirmed the results and the required information size was reached for mortality for a relative risk change of 20%. Conclusions: We may be able to reject a clinically meaningful effect of RBC storage time on mortality in transfused adult ICU patients as our trial sequential analyses reject a 10% relative risk change in death when comparing fresher versus older blood for transfusion. abstract_id: PUBMED:28901549 Effects of red blood cell storage time on transfused patients in the ICU-protocol for a systematic review. Background: Patients in the intensive care unit (ICU) are often anaemic due to blood loss, impaired red blood cell (RBC) production and increased RBC destruction. In some studies, more than half of the patients were treated with RBC transfusion. During storage, the RBC and the storage medium undergo changes, which lead to impaired transportation and delivery of oxygen and may also promote an inflammatory response. Divergent results on the clinical consequences of storage have been reported in both observational studies and randomised trials. Therefore, we aim to gather and review the present evidence to assess the effects of shorter vs. longer storage time of transfused RBCs for ICU patients. Methods: We will conduct a systematic review with meta-analyses and trial sequential analyses of randomised clinical trials, and also include results of severe adverse events from large observational studies. Participants will be adult patients admitted to an ICU and treated with shorter vs. longer stored RBC units. We will systematically search the Cochrane Library, MEDLINE, Embase, BIOSIS, CINAHL and Science Citation Index for relevant literature, and we will follow the recommendation by the Cochrane Collaboration and the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA)-statement.
We will assess the risk of bias and random errors, and we will use the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach to evaluate the overall quality of evidence. Conclusion: We need a high-quality systematic review to summarise the clinical consequences of RBC storage time among ICU patients. abstract_id: PUBMED:26169241 Red cell distribution width and survival in patients hospitalized on a medical ICU. Objectives: Red cell distribution width was shown to reliably predict mortality and morbidity in numerous clinical settings, including patients hospitalized on surgical intensive care units (ICU). Patients hospitalized on an ICU usually comprise a very heterogeneous patient population. The aim of this analysis was to investigate whether (1) RDW is related to survival outcomes in patients hospitalized on a medical ICU and (2) the prognostic value of RDW is dependent on the diagnosis that led to ICU admission. Methods: 829 patients hospitalized on the medical ICU of a tertiary care hospital were retrospectively investigated. Patients were divided in two groups according to the main diagnosis that led to ICU admission. Group 1: non-infectious cardiac disease and group 2: other. The prognostic value of RDW for ICU- and long-term mortality was investigated for the entire patient cohort as well as for the two subgroups. Results: The median RDW of the whole study population was 16.1%. Patients with an RDW above this threshold were exposed to an increased risk for ICU mortality (34.4% vs. 17.2%, p<0.001) and long-term mortality (log-rank p<0.001). Similarly, this cut-off was able to distinguish patients with an elevated risk for death in subgroup 2 (ICU mortality: 37.9% vs. 19.2%, p<0.001; long-term mortality: log-rank p<0.001). In subgroup 1, this value was not able to identify patients with an increased risk for ICU-mortality (17.6% vs. 11.8%, p=0.26) as well as long-term mortality (log-rank p=0.3). Conclusions: Data of this analysis revealed that (1) RDW is a powerful predictor for ICU- and long-term mortality in patients hospitalized on a medical ICU and (2) RDW cut-offs to assess risk for death differ according to the main diagnosis that led to ICU admission. abstract_id: PUBMED:31283834 Storage time of red blood cells among ICU patients with septic shock. Background: We aimed to describe the exposure to blood transfusions and mortality among patients with septic shock. Methods: We did a retrospective cohort study of two cohorts-patients with septic shock registered in a Danish ICU database (2008-2010) and patients from the Transfusion Requirements in Septic Shock (TRISS) trial (2011-2013). We extracted information on blood transfusions issued to all patients. We investigated the number of patients receiving very fresh blood (less than 7 days), very old blood (more than 24 days) and blood with a mixture of storage time. Results: In the Danish cohort, 1637 patients were included of whom 1394 (85%) received 20,239 blood units from 14 days prior to the ICU admission to 90 days after; 33% were transfused before, 77% in the ICU and 36% after ICU. The exposure to exclusively very fresh or very old blood was 3% and 4%, respectively. In the TRISS cohort, 77% of the 937 patients received 5047 RBC units; 3% received exclusively very fresh and 13% very old blood. The point estimate of mortality was higher among patients receiving large amounts of exclusively very fresh and very old blood, but the number of patients was very small.
Conclusions: Patients with septic shock were transfused both before and after ICU. Exposure to blood of less than 7 days or more than 24 days old was limited. We were not able to detect higher mortality among the limited number of patients with septic shock transfused with very fresh or very old blood. abstract_id: PUBMED:28353627 Effects of Storage Time on Glycolysis in Donated Human Blood Units. Background: Donated blood is typically stored before transfusions. During storage, the metabolism of red blood cells changes, possibly causing storage lesions. The changes are storage time dependent and exhibit donor-specific variations. It is necessary to uncover and characterize the responsible molecular mechanisms accounting for such biochemical changes, qualitatively and quantitatively; Study Design and Methods: Based on the integration of metabolic time series data, kinetic models, and a stoichiometric model of the glycolytic pathway, a customized inference method was developed and used to quantify the dynamic changes in glycolytic fluxes during the storage of donated blood units. The method provides a proof of principle for the feasibility of inferences regarding flux characteristics from metabolomics data; Results: Several glycolytic reaction steps change substantially during storage time and vary among different fluxes and donors. The quantification of these storage time effects, which are possibly irreversible, allows for predictions of the transfusion outcome of individual blood units; Conclusion: The improved mechanistic understanding of blood storage, obtained from this computational study, may aid the identification of blood units that age quickly or more slowly during storage, and may ultimately improve transfusion management in clinics. abstract_id: PUBMED:24364006 Influence of storage time and amount of red blood cell transfusion on postoperative renal function: an observational cohort study. Introduction: To identify the impact of storage time and amount of transfused red blood cell units on renal function. Methods: Consecutive transfused patients (n=492), undergoing cardiac surgery at a single centre and receiving at least one red blood cell unit, were pooled in different groups depending on storage time and amount of transfusion. Results: Altogether 2,133 red blood cell units were transfused (mean age 21.87 days). Pre- and intraoperative data were similar between groups. Postoperative serum creatinine (p<0.01), glomerular filtration rate (p<0.01), and urea (p<0.01) showed a significant correlation with the amount of transfused red blood cell units, but not with storage time. Acute kidney insufficiency (creatinine values greater than 2.0 mg/dl or a duplication of the preoperative value) developed in 29% of patients and was associated with red blood cell mean age (p=0.042), absolute age (p=0.028), and amount of transfused (p<0.01) units. Acute kidney failure requiring renal replacement therapy occurred in 9.6% of patients and was associated with the amount of transfusion (p<0.01). Conclusions: Worsening of renal function after cardiac surgery is associated with storage time and amount of transfused red blood cell units. Acute kidney insufficiency was defined as serum creatinine values greater than 2.0 mg/dl or a duplication of the preoperative value (baseline). Acute kidney failure was defined as becoming dependent upon dialysis. abstract_id: PUBMED:19573176 A novel mouse model of red blood cell storage and posttransfusion in vivo survival.
Background: Storage of red blood cells (RBCs) is necessary for an adequate blood supply. However, reports have identified potential negative sequelae of transfusing stored RBCs. An animal model would be useful to investigate the pathophysiology of transfusing stored RBCs. However, it has been reported that storage of rat RBCs in CPDA-1 resulted in an unexpected sudden decline in posttransfusion survival. A mouse model of RBC storage and transfusion was developed to assess survival kinetics of mouse RBCs. Study Design And Methods: RBCs expressing green fluorescent protein were collected in CPDA-1, filter leukoreduced, adjusted to a 75% hematocrit, and stored at 4°C. At weekly intervals, stored RBCs were transfused into C57BL/6 recipients. RBC survival was measured by flow cytometry and chromium-51 labeling. Phosphatidylserine externalization and CD47 expression was also evaluated. Results: Mean 24-hour survivals of transfused RBCs were 99, 91, 64, 54, 30, and 18% after 0, 7, 14, 21, 28, and 35 days of storage, respectively. Stored RBCs showed an initial rapid clearance with subsequent extended survival. Increased surface phosphatidylserine and decreased CD47 expression were also observed. Conclusions: Mouse RBCs showed a progressive decline in survival, as a function of storage time, unlike the precipitous loss of viability reported for rat RBCs. Moreover, changes in the measured surface markers were analogous to trends reported for human RBCs. Together, these findings provide an initial characterization of a novel mouse model of RBC storage with the potential to serve as an experimental platform for studying the pathophysiologic consequences of transfusing stored RBCs. abstract_id: PUBMED:35643539 Effect of storage temperature and time on erythrocyte sedimentation rate. Objective: This paper explores the effect of blood sample storage temperature and time on the erythrocyte sedimentation rate (ESR) by using the Weiss method. Methods: Whole blood samples were collected from 80 patients and diluted 1:9 with sodium citrate solution. Each sample was split into two tubes. Using the Weiss method, ESR was tested within 1 h of collection, and one sample was placed at 4 °C and the other at room temperature (23 ± 2 °C). ESR was then measured at 2, 4, 6, 8, 12, and 24 h. The data were statistically analyzed with consideration for temperature and time. Results: ESR decreased gradually over 6 h at room temperature, but the results were not statistically significant. Similarly, there was no significant difference in the decline of ESR within 8 h at 4 °C. However, ESR results decreased significantly after the samples were stored at room temperature for more than 6 h or at 4 °C for more than 8 h. ESR reduction was lower in the samples stored at 4 °C than in those stored at room temperature over the same time period. Conclusion: Blood sample storage temperature and duration can affect the measurement of ESR using the Weiss method. ESR testing should be completed within 4 h of sample collection in clinical work. Answer: Based on the provided abstracts, the influence of red blood cell (RBC) storage time on ICU survival appears to be a topic with mixed findings. Several studies have investigated the impact of RBC storage time on patient outcomes, with varying conclusions: 1. A retrospective observational cohort study in a Medical Intensive Care Unit found that the ICU outcome was independently associated with the number of leukodepleted RBC transfused, but not with their age (PUBMED:18757224). 
This suggests that the storage time of RBCs did not influence ICU survival in this particular setting. 2. A single-center study on coronary artery bypass graft (CABG) patients showed no independent effect of storage time on survival or ICU stay after analyzing 2732 patients (PUBMED:17002627). This indicates that, for CABG patients, the storage time of RBC transfusions did not impact the prognosis. 3. A systematic review with meta-analysis and Trial Sequential Analysis (TSA) including seven trials with 18,283 randomised ICU patients found no effects of fresher versus older blood on death, adverse events, or post-transfusion infections (PUBMED:29372291). The TSA confirmed the results and reached the required information size for mortality, suggesting that RBC storage time does not have a clinically meaningful effect on mortality in transfused adult ICU patients. 4. An observational cohort study on the influence of storage time and amount of RBC transfusion on postoperative renal function found that worsening of renal function after cardiac surgery is associated with storage time and amount of transfused RBC units (PUBMED:24364006). This indicates a potential influence of storage time on specific postoperative outcomes, such as renal function. 5. A retrospective cohort study among ICU patients with septic shock did not detect higher mortality among the limited number of patients transfused with very fresh or very old blood (PUBMED:31283834). This suggests that, at least in the context of septic shock, the storage time of RBCs may not significantly influence mortality. In summary, the majority of the studies provided do not support a significant influence of RBC storage time on ICU survival (PUBMED:18757224, PUBMED:17002627, PUBMED:29372291). However, there may be specific contexts or patient subgroups where storage time could have an impact on certain outcomes, such as renal function post-cardiac surgery (PUBMED:24364006). Overall, the current evidence suggests that RBC storage time does not have a major impact on ICU survival.
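The summary estimates quoted in the answer above (e.g. relative risk 1.04, 95% CI 0.97-1.11 for death in PUBMED:29372291) come from inverse-variance pooling of per-trial relative risks. A minimal Python sketch of one fixed-effect pooling step follows; the three sets of event counts are invented for illustration and do not correspond to the actual trials in the review.

# Fixed-effect (inverse-variance) pooling of relative risks on the log scale.
# Trial data below are hypothetical placeholders, not the reviewed trials.
import math

# (events_fresh, n_fresh, events_old, n_old) for each hypothetical trial
trials = [(120, 1200, 115, 1210), (340, 2500, 330, 2490), (45, 480, 40, 470)]

weights, log_rrs = [], []
for ef, nf, eo, no in trials:
    log_rr = math.log((ef / nf) / (eo / no))
    var = 1 / ef - 1 / nf + 1 / eo - 1 / no    # variance of the log relative risk
    log_rrs.append(log_rr)
    weights.append(1 / var)

pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
rr, lo, hi = (math.exp(x) for x in (pooled, pooled - 1.96 * se, pooled + 1.96 * se))
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")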
Instruction: Should we still focus that much on cardiovascular mortality in end stage renal disease patients? Abstracts: abstract_id: PUBMED:23620729 Should we still focus that much on cardiovascular mortality in end stage renal disease patients? The CONvective TRAnsport STudy. Background: We studied the distribution of causes of death in the CONTRAST cohort and compared the proportion of cardiovascular deaths with other populations to answer the question whether cardiovascular mortality is still the principal cause of death in end stage renal disease. In addition, we compared patients who died from the three most common death causes. Finally, we aimed to study factors related to dialysis withdrawal. Methods: We used data from CONTRAST, a randomized controlled trial in 714 chronic hemodialysis patients comparing the effects of online hemodiafiltration versus low-flux hemodialysis. Causes of death were adjudicated. The distribution of causes of death was compared to that of the Dutch dialysis registry and of the Dutch general population. Results: In CONTRAST, 231 patients died on treatment. 32% died from cardiovascular disease, 22% due to infection and 23% because of dialysis withdrawal. These proportions were similar to those in the Dutch dialysis registry and the proportional cardiovascular mortality was similar to that of the Dutch general population. Cardiovascular death was more common in patients <60 years. Patients who withdrew were older, had more co-morbidity and a lower mental quality of life at baseline. Patients who withdrew had much co-morbidity. 46% died within 5 days after the last dialysis session. Conclusions: Although the absolute risk of death is much higher, the proportion of cardiovascular deaths in a prevalent end stage renal disease population is similar to that of the general population. In older hemodialysis patients cardiovascular and non-cardiovascular death risk are equally important. Particularly the registration of dialysis withdrawal deserves attention. These findings may be partly limited to the Dutch population. abstract_id: PUBMED:34357124 Aortic Arch Calcification and Cardiomegaly Are Associated with Overall and Cardiovascular Mortality in Hemodialysis Patients. Patients with end-stage renal disease have a higher risk of cardiovascular morbidity and mortality. In this study, we investigated the predictive ability of a combination of cardiothoracic ratio (CTR) and aortic arch calcification (AoAC) for overall and cardiovascular mortality in patients receiving hemodialysis. We also evaluated the predictive power of AoAC and CTR for clinical outcomes. A total of 365 maintenance hemodialysis patients were included, and AoAC and CTR were measured using chest radiography at enrollment. We stratified the patients into four groups according to a median AoAC score of three and CTR of 50%. Multivariable Cox proportional hazards analysis was used to identify the risk factors of mortality. The predictive performance of the model for clinical outcomes was assessed using the χ2 test. Multivariable analysis showed that, compared to the AoAC < 3 and CTR < 50% group, the AoAC ≥ 3 and CTR < 50% group (hazard ratio [HR], 4.576; p < 0.001), and AoAC ≥ 3 and CTR ≥ 50% group (HR, 5.912; p < 0.001) were significantly associated with increased overall mortality.
In addition, the AoAC < 3 and CTR ≥ 50% (HR, 3.806; p = 0.017), AoAC ≥ 3 and CTR < 50% (HR, 4.993; p = 0.002), and AoAC ≥ 3 and CTR ≥ 50% (HR, 8.614; p < 0.001) groups were significantly associated with increased cardiovascular mortality. Furthermore, adding AoAC and CTR to the basic model improved the predictive ability for overall and cardiovascular mortality. The patients who had a high AoAC score and cardiomegaly had the highest overall and cardiovascular mortality among the four groups. Furthermore, adding AoAC and CTR improved the predictive ability for overall and cardiovascular mortality in the hemodialysis patients. abstract_id: PUBMED:30022737 Serum magnesium and cardiovascular mortality in peritoneal dialysis patients: a 5-year prospective cohort study. The aim of this study was to explore the association between serum Mg and cardiovascular mortality in the peritoneal dialysis (PD) population. This prospective cohort study included prevalent PD patients from a single centre. The primary outcome of this study was cardiovascular mortality. Serum Mg was assessed at baseline. A total of 402 patients (57 % male; mean age 49·3±14·9 years) were included. After a median of 49·9 months (interquartile range: 25·9-68·3) of follow-up, sixty-two patients (25·4 %) died of CVD. After adjustment for conventional confounders in multivariate Cox regression models, being in the lower quartile for serum Mg level was independently associated with a higher risk of cardiovascular mortality, with hazards ratios of 2·28 (95 % CI 1·04, 5·01), 1·41 (95 % CI 0·63, 3·16) and 1·62 (95 % CI 0·75, 3·51) for the lowest, second and third quartiles, respectively. A similar trend was observed when all-cause mortality was used as the study endpoint. Further analysis showed that the relationships between lower serum Mg and higher risk of cardiovascular and all-cause mortality were present only in the female subgroup, and not among male patients. The test for interaction indicated that the associations between lower serum Mg and cardiovascular and all-cause mortality differed by sex (P=0·008 and P=0·011, respectively). In conclusion, lower serum Mg was associated with a higher risk of cardiovascular and all-cause mortality in the PD population, especially among female patients. abstract_id: PUBMED:38165561 Relationship between dietary fiber and all-cause mortality, cardiovascular mortality, and cardiovascular disease in patients with chronic kidney disease: a systematic review and meta-analysis. Background: The potential protective effects of dietary fiber against all-cause mortality, cardiovascular mortality, and cardiovascular disease in patients with chronic kidney disease have not been definitively established. To verify this relationship, a systematic review and a meta-analysis were undertaken. Methods: PubMed, The Cochrane Library, Web of Science, Embase, ProQuest, and CINAHL were used to systematically search for prospective cohort studies that investigate the association between dietary fiber and all-cause mortality, cardiovascular mortality, and cardiovascular disease in individuals with chronic kidney disease (CKD). This search was conducted up to and including March 2023. Results: The analysis included 10 cohort studies, with a total of 19,843 patients who were followed up for 1.5-10.1 y. The results indicated a significant negative correlation between dietary fiber and all-cause mortality among patients with CKD (HR 0.80, 95% CI 0.58-0.97, P < 0.001).
Subgroup analysis further revealed that the study population and exposure factors were significantly associated with all-cause mortality (P < 0.001). Increased dietary fiber intake was associated with a reduced risk of cardiovascular mortality (HR 0.78; 95% CI 0.67-0.90) and a reduced incidence of cardiovascular disease (HR 0.87; 95% CI 0.80-0.95) among patients with CKD. Conclusions: The pooled results of our meta-analysis indicated an inverse association between dietary fiber intake and all-cause mortality, cardiovascular mortality, and cardiovascular disease. abstract_id: PUBMED:29913453 Uremic Pruritus is Associated with Two-Year Cardiovascular Mortality in Long Term Hemodialysis Patients. Background/aims: Uremic pruritus (UP) is an unpleasant complication in patients undergoing maintenance dialysis. Cardiovascular and infection related deaths are the major causes of mortality in patients undergoing dialysis. Studies on the correlation between cardiovascular or infection related mortality and UP are limited. Methods: We analyze 866 maintenance hemodialysis (MHD) patients in our hemodialysis centers. Clinical parameters and 24-month cardiovascular and infection-related mortality are recorded. Results: The associations between all-cause, cardiovascular and infection related mortality with clinical data including UP are analyzed. Multivariate Cox regression demonstrated that UP is a significant predictor for 24-month cardiovascular mortality in the MHD patients (Hazard ratio: 3.164; 95% confidence interval, 1.743-5.744; p < 0.001). Conclusion: Uremic pruritus is one of the predictors of 24-month cardiovascular mortality in MHD patients. abstract_id: PUBMED:37010736 The number of valvular insufficiency is a strong predictor of cardiovascular and all-cause mortality in hemodialysis patients. Objectives: To investigate the relationship between the number of valvular insufficiency (VI) and emergency hospitalization or mortality in maintenance hemodialysis (HD) patients. Methods: The maintenance HD patients with cardiac ultrasonography were included. According to the number of VI ≥ 2 or not, the patients were divided into two groups. The differences in emergency hospitalization for acute heart failure, arrhythmia, acute coronary syndrome (ACS) or stroke, cardiovascular mortality, and all-cause mortality between the two groups were compared. Results: Among 217 maintenance HD patients, 81.57% had VI. 121 (55.76%) patients had two or more VI, and 96 (44.24%) with one VI or not. The study subjects were followed up for a median of 47 (3-107) months. At the end of the follow up, 95 patients died (43.78%), of whom 47 (21.66%) patients died because of cardiovascular disease. Age (HR 1.033, 95% CI 1.007-1.061, P = 0.013), number of VI ≥ 2 (HR 2.035, 95% CI 1.083-3.821, P = 0.027) and albumin (HR 0.935, 95% CI 0.881-0.992, P = 0.027) were independent risk factors for cardiovascular mortality. The three parameters were also independent risk factors for all-cause mortality. The patients with number of VI ≥ 2 were more likely to be emergency hospitalized for acute heart failure (56 [46.28%] vs 11 [11.46%], P = 0.001). On the contrary, the number of VI was not associated with emergency hospitalization for arrhythmia, ACS or stroke. Survival analysis results showed that probability of survival was statistically different in the two groups (P < 0.05), whether based on cardiovascular or all-cause mortality.
Based on age, number of VI ≥ 2 and albumin, nomogram models for 5-year cardiovascular and all-cause mortality were built. Conclusions: In maintenance HD patients, the prevalence of VI is prominently high. The number of VI ≥ 2 is associated with emergency hospitalized for acute heart failure, cardiovascular and all-cause mortality. Combining age, number of VI ≥ 2, and albumin can predict cardiovascular and all-cause mortality. abstract_id: PUBMED:33391735 Klotho: a link between cardiovascular and non-cardiovascular mortality. Klotho is a membrane-bound protein acting as an obligatory coreceptor for fibroblast growth factor 23 (FGF23) in the kidney and parathyroid glands. The extracellular portion of its molecule may be cleaved and released into the blood and produces multiple endocrine effects. Klotho exerts anti-inflammatory and antioxidative activities that may explain its ageing suppression effects evidenced in mice; it also modulates mineral metabolism and FGF23 activities and limits their negative impact on cardiovascular system. Clinical studies have found that circulating Klotho is associated with myocardial hypertrophy, coronary artery disease and stroke and may also be involved in the pathogenesis of salt-sensitive hypertension with a mechanism sustained by inflammatory cytokines. As a consequence, patients maintaining high serum levels of Klotho not only show decreased cardiovascular mortality but also non-cardiovascular mortality. Klotho genetic polymorphisms may influence these clinical relationships and predict cardiovascular risk; rs9536314 was the polymorphism most frequently involved in these associations. These findings suggest that Klotho and its genetic polymorphisms may represent a bridge between inflammation, salt sensitivity, hypertension and mortality. This may be particularly relevant in patients with chronic kidney disease who have decreased Klotho levels in tissues and blood. abstract_id: PUBMED:37439196 Effect of kidney disease on all-cause and cardiovascular mortality in patients undergoing coronary angiography. Acute kidney injury (AKI) occurred in 12.8% of patients undergoing surgery and is associated with increased mortality. Chronic kidney disease (CKD) is a well-known risk for death and cardiovascular disease (CVD). Effects of AKI and CKD on patients undergoing coronary angiography (CAG) remain incompletely defined. The aim of our study was to investigate the relationship between acute and CKD and mortality in patients undergoing CAG. The cohort study included 49,194 patients in the multicenter cohort from January 2007 to December 2018. Cox regression analyses and Fine-Gray proportional subdistribution risk regression analysis are used to examine the association between kidney disease and all-cause and cardiovascular mortality. In the present study, 13,989 (28.4%) patients had kidney disease. During follow-up, 6144 patients died, of which 4508 (73.4%) were due to CVD. AKI without CKD (HR: 1.54, 95% CI: 1.36-1.74), CKD without AKI (HR: 2.02, 95% CI: 1.88-2.17), AKI with CKD (HR: 3.26, 95% CI: 2.90-3.66), and end-stage kidney disease (ESKD; HR: 5.63, 95% CI: 4.40-7.20) were significantly associated with all-cause mortality. Adjusted HR (95% CIs) for cardiovascular mortality was significantly elevated among patients with AKI without CKD (1.78 [1.54-2.06]), CKD without AKI (2.28 [2.09-2.49]), AKI with CKD (3.99 [3.47-4.59]), and ESKD (6.46 [4.93-8.46]). 
In conclusion, this study shows that acute or chronic kidney disease is present in up to one-third of patients undergoing CAG and is associated with a substantially increased mortality. These findings highlight the importance of perioperative management of kidney function, especially in patients with CKD. Impact Statement: What is already known on this subject? Acute kidney injury (AKI) occurred in 12.8% of patients undergoing surgery and is linked to a 22.2% increase in mortality. Chronic kidney disease (CKD) is a well-known risk for death and cardiovascular events. Effects of AKI and CKD on patients undergoing coronary angiography (CAG) remain incompletely defined. What do the results of this study add? This study shows that kidney disease is present in up to one-third of patients undergoing CAG and is associated with a substantially increased mortality. AKI and CKD are independent predictors for mortality in patients undergoing CAG. What are the implications of these findings for clinical practice and/or further research? These findings highlight the importance of perioperative management of kidney function, especially in patients with CKD. abstract_id: PUBMED:35656294 Association of Mineralocorticoid Receptor Antagonists With the Mortality and Cardiovascular Effects in Dialysis Patients: A Meta-analysis. Whether mineralocorticoid receptor antagonists (MRA) reduce mortality and cardiovascular effects of dialysis patients remains unclear. A meta-analysis was designed to investigate whether MRA reduce mortality and cardiovascular effects of dialysis patients, with a registration in INPLASY (INPLASY2020120143). The meta-analysis revealed that MRA significantly reduced all-cause mortality (ACM) and cardiovascular mortality (CVM). Patients receiving MRA presented improved left ventricular mass index (LVMI) and left ventricular ejection fraction (LVEF), decreased systolic blood pressure (SBP) and diastolic blood pressure (DBP). There was no significant difference in the serum potassium level between the MRA group and the placebo group. MRA vs. control exerts definite survival and cardiovascular benefits in dialysis patients, including reducing all-cause mortality and cardiovascular mortality, LVMI, and arterial blood pressure, and improving LVEF. In terms of safety, MRA did not increase serum potassium levels for dialysis patients. Systematic Review Registration: (https://inplasy.com/inplasy-protocol-1239-2/), identifier (INPLASY2020120143). abstract_id: PUBMED:35613193 Impact of High Cardiovascular Risk on Hospital Mortality in Intensive Care Patients Hospitalized for COVID-19. Background: Some studies have shown a higher prevalence of deaths in patients with cardiovascular risk factors (CRF) during hospitalization for COVID-19. Objectives: To assess the impact of high cardiovascular risk in patients hospitalized in intensive care for COVID-19. Methods: Retrospective study with patients admitted to an intensive care unit, with a diagnosis of COVID-19 confirmed by RT-PCR, and with at least one troponin measurement during hospitalization. The criteria for defining high cardiovascular risk (HCR) patients were: history of established cardiovascular disease (myocardial infarction, stroke, or peripheral arterial disease), diabetes, chronic kidney disease with clearance < 60ml/min, or presence of 3 CRFs (hypertension, smoking, dyslipidemia, or age > 65 years). The primary outcome of this study is all-cause in-hospital mortality. P<0.05 was considered significant.
Results: This study included 236 patients, mean age = 61.14±16.2 years, with 63.1% men, 55.5% hypertensive, and 33.1% diabetic; 47.4% of the patients also presented HCR. A significant increase in mortality was observed as the number of risk factors increased (0 FRC: 5.9%; 1 FRC: 17.5%; 2 FRC: 32.2% and ≥3 FRC: 41.2%; p=0.001). In the logistic regression adjusted for severity (SAPS3 score), the HCR and myocardial injury group had a higher occurrence of in-hospital mortality (OR 40.38; 95% CI 11.78-138.39). Patients without HCR but with myocardial injury also exhibited a significant association with the primary outcome (OR 16.7; 95% CI 4.45-62.74). Conclusion: In patients hospitalized in intensive care for COVID-19, HCR impacts in-hospital mortality only in patients with myocardial injury. Answer: The question of whether to maintain a strong focus on cardiovascular mortality in end-stage renal disease (ESRD) patients is complex, as cardiovascular disease (CVD) remains a significant cause of death in this population, but other factors also contribute to mortality. In the CONTRAST cohort, cardiovascular disease accounted for 32% of deaths, which is similar to the proportion in the general Dutch population, suggesting that while the absolute risk of death is higher in ESRD patients, the proportion of cardiovascular deaths is comparable (PUBMED:23620729). This indicates that cardiovascular mortality is still a principal cause of death in ESRD patients, especially in those under 60 years of age. However, the study also highlights the importance of non-cardiovascular causes, such as infections and dialysis withdrawal, particularly in older patients. Other studies have identified factors associated with cardiovascular and overall mortality in hemodialysis patients, such as aortic arch calcification and cardiomegaly (PUBMED:34357124), serum magnesium levels (PUBMED:30022737), and uremic pruritus (PUBMED:29913453). These findings suggest that cardiovascular risk factors are still relevant and should be monitored. Moreover, the number of valvular insufficiencies has been shown to be a strong predictor of cardiovascular and all-cause mortality in hemodialysis patients (PUBMED:37010736), and Klotho levels have been linked to both cardiovascular and non-cardiovascular mortality (PUBMED:33391735). Additionally, kidney disease itself has been associated with increased mortality in patients undergoing coronary angiography (PUBMED:38165561). On the other hand, interventions such as increased dietary fiber intake have been associated with reduced cardiovascular mortality and disease in CKD patients (PUBMED:37439196), and mineralocorticoid receptor antagonists have been found to reduce all-cause and cardiovascular mortality in dialysis patients (PUBMED:35656294). In conclusion, while cardiovascular mortality remains a significant concern in ESRD patients, it is important to also consider other factors contributing to mortality. A comprehensive approach that includes managing cardiovascular risk factors alongside other health issues may be the most beneficial for this population.
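Most of the mortality associations cited in this entry (e.g. PUBMED:34357124, PUBMED:30022737, PUBMED:37010736) are hazard ratios from multivariable Cox proportional-hazards models. A minimal Python sketch of such a fit on synthetic data follows, using the lifelines package; the covariate names and simulated effect sizes are illustrative assumptions, not the published data.

# Multivariable Cox proportional-hazards fit on synthetic dialysis-like data.
# Covariates and effect sizes are invented for illustration only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 365
df = pd.DataFrame({
    "age": rng.normal(60, 10, n).round(),
    "aoac_ge3": rng.integers(0, 2, n),   # aortic arch calcification score >= 3
    "ctr_ge50": rng.integers(0, 2, n),   # cardiothoracic ratio >= 50%
    "albumin": rng.normal(3.8, 0.4, n),
})
# Simulated hazard: higher with calcification/cardiomegaly, lower with albumin.
risk = 0.03 * df["age"] + 0.8 * df["aoac_ge3"] + 0.6 * df["ctr_ge50"] - 0.5 * df["albumin"]
df["time"] = rng.exponential(scale=(60 / np.exp(risk - risk.mean())).to_numpy())
df["event"] = (df["time"] < 60).astype(int)   # administrative censoring at 60 months
df["time"] = df["time"].clip(upper=60)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # the exp(coef) column gives the hazard ratios, as in the abstracts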
Instruction: Injuries of the renal pedicle: is renal revascularization justified? Abstracts: abstract_id: PUBMED:9894257 Injuries of the renal pedicle: is renal revascularization justified? Objective: Renal trauma with pedicle lesions may require emergency vascular repair, surveillance in a surgical unit, or immediate or secondary nephrectomy. The objective of this study was to evaluate these various treatment modalities. Material And Methods: 28 patients presenting with renal pedicle trauma, treated in two urological centres between 1985 and 1995 were reviewed. All cases of trauma were investigated by intravenous urography, CT and/or arteriography. 16 patients had associated intra-abdominal lesions. Results: 7 patients underwent vascular repair after a mean interval of 4.8 hours. There were 5 nephrectomies and 2 functional kidneys, including 1 with hypertension. 13 patients underwent first-line nephrectomy: 4 performed as an emergency for haemodynamic instability, and 9 performed as a deferred emergency for silent kidney or secondary haemodynamic disorders. The mean time to diagnosis was 20 hours. No complication was observed in this group. Non-surgical management was decided in 8 patients. The mean time to diagnosis was 7.5 hours. One death was observed in this group, due to associated cerebral lesions. 3 patients subsequently underwent late nephrectomy for severe hypertension and 4 had a persistent silent kidney without sequelae. Overall: 21 nephrectomies, 2 functional kidneys (1 patient was hypertensive), 4 silent kidneys without hypertension and one death were observed. Conclusion: In cases of renal pedicle trauma seen after the 4th hour, the severity of ischaemic lesions and renal sequelae and the small number of kidneys saved despite revascularization surgery argue in favour of immediate elective nephrectomy. abstract_id: PUBMED:23899868 Stent revascularization restores cortical blood flow and reverses tissue hypoxia in atherosclerotic renal artery stenosis but fails to reverse inflammatory pathways or glomerular filtration rate. Background: Atherosclerotic renal artery stenosis (ARAS) is known to reduce renal blood flow, glomerular filtration rate (GFR) and amplify kidney hypoxia, but the relationships between these factors and tubulointerstitial injury in the poststenotic kidney are poorly understood. The purpose of this study was to examine the effect of renal revascularization in ARAS on renal tissue hypoxia and renal injury. Methods And Results: Inpatient studies were performed in patients with ARAS (n=17; >60% occlusion) before and 3 months after stent revascularization, or in patients with essential hypertension (n=32), during fixed Na(+) intake and angiotensin converting enzyme/angiotensin receptors blockers Rx. Single kidney cortical, medullary perfusion, and renal blood flow were measured using multidetector computed tomography, and GFR by iothalamate clearance. Tissue deoxyhemoglobin levels (R(2)*) were measured by blood oxygen level-dependent MRI at 3T, as was fractional kidney hypoxia (percentage of axial area with R(2)*>30/s). In addition, we measured renal vein levels of neutrophil gelatinase-associated lipocalin, monocyte chemoattractant protein-1, and tumor necrosis factor-α. Pre-stent single kidney renal blood flow, perfusion, and GFR were reduced in the poststenotic kidney.
Renal vein neutrophil gelatinase-associated lipocalin, tumor necrosis factor-α, monocyte chemoattractant protein-1, and fractional hypoxia were higher in untreated ARAS than in essential hypertension. After stent revascularization, fractional hypoxia fell (P<0.002) with increased cortical perfusion and blood flow, whereas GFR and neutrophil gelatinase-associated lipocalin, monocyte chemoattractant protein-1, and tumor necrosis factor-α remained unchanged. Conclusions: These data demonstrate that despite reversal of renal hypoxia and partial restoration of renal blood flow after revascularization, inflammatory cytokines and injury biomarkers remained elevated and GFR failed to recover in ARAS. Restoration of vessel patency alone failed to reverse tubulointerstitial damage and partly explains the limited clinical benefit of renal stenting. These results identify potential therapeutic targets for recovery of kidney function in renovascular disease. abstract_id: PUBMED:3820406 Acute renal artery occlusion: when is revascularization justified? Acute renal artery occlusion is an infrequently encountered entity, with a paucity of literature on which to form clinical decisions. During a 20-year period 35 patients were treated for acute renal artery occlusion as a result of embolism (13 patients), thrombosis of a stenosed vessel (16 patients), or trauma (six patients). Patients were treated operatively in 16 cases and nonoperatively in 19 cases. In patients with embolic occlusion, embolectomy was successful in the relief of hypertension but was ineffective in the restoration of renal function. In patients with thrombotic occlusion, thrombectomy and aortorenal bypass were successful in both the reduction of blood pressure and the retrieval of renal function. In this group, salvage was dependent on the presence of a reconstituted distal renal artery, irrespective of the operative delay. In patients with traumatic renal artery occlusion, return of renal function did not occur, despite reperfusion as early as 6 hours after injury. These data suggest that the period in which function of embolized or traumatized kidneys may be preserved has usually passed by the time the diagnosis of renal artery occlusion has been made. By contrast, operative therapy of thrombotic occlusion frequently results in return of renal function, irrespective of the delay in treatment. abstract_id: PUBMED:35833175 Transplant Renal Artery Stenosis Revascularization: Common Distal External Iliac Bypass. Stenoses proximal to transplant renal artery anastomoses are complications leading to allograft dysfunction. This study aimed to evaluate a novel surgical approach to renal allograft revascularization, taking into consideration the length of time elapsed since transplantation. We describe an arterial bypass using a polytetrafluoroethylene (PTFE) graft from the common iliac artery (proximal to the renal artery implantation) to the external iliac artery (distal to the renal artery implantation) that allows adequate revascularization of both the transplant kidney and the lower extremity. This technique provides several advantages when compared with previously described procedures to revascularize a transplanted kidney with an iliac artery stenosis proximal to the allograft implantation site.
Benefits of this technique include (1) no need to repair the stenosis, (2) no need to take down and redo the arterial anastomosis, (3) no need to perform a dissection around the renal hilum of the transplanted kidney, (4) no requirement to address the anastomosis transfer, and (5) no need to perfuse the kidney with preservation fluid at the time of repair and/or (6) avoidance of potential injury to the renal parenchyma and/or hilum during dissections. Adequate perfusion of the organ, as well as of the lower extremity was verified by serial Doppler duplex ultrasound evaluations. Hence, we describe a novel revascularization technique in instances of kidney transplant and lower extremity ischemia. abstract_id: PUBMED:30803138 Urinary mitochondrial DNA copy number identifies renal mitochondrial injury in renovascular hypertensive patients undergoing renal revascularization: A Pilot Study. Aims: Patients with renovascular hypertension (RVH) exhibit elevated urinary mtDNA copy numbers, considered to constitute surrogate markers of renal mitochondrial injury. The modest success of percutaneous transluminal renal angioplasty (PTRA) in restoring renal function in RVH has been postulated to be partly attributable to acute reperfusion injury. We hypothesized that mitoprotection during revascularization would ameliorate PTRA-induced renal mitochondrial injury, reflected in elevated urinary mtDNA copy numbers and improve blood pressure and functional outcomes 3 months later. Methods: We prospectively measured urinary copy number of the mtDNA genes COX3 and ND1 using qPCR in RVH patients before and 24 hrs after PTRA, performed during IV infusion of vehicle (n = 8) or the mitoprotective drug elamipretide (ELAM, 0.05 mg/kg/h, n = 6). Five healthy volunteers (HV) served as controls. Urinary mtDNA levels were also assessed in RVH and normal pigs (n = 7 each), in which renal mitochondrial structure and density were studied ex-vivo. Results: Baseline urinary mtDNA levels were elevated in all RVH patients vs HV and directly correlated with serum creatinine levels. An increase in urinary mtDNA 24 hours after PTRA was blunted in PTRA+ELAM vs PTRA+Placebo. Furthermore, 3-months after PTRA, systolic blood pressure decreased and estimated glomerular filtration rate increased only in ELAM-treated subjects. In RVH pigs, mitochondrial damage was observed using electron microscopy in tubular cells and elevated urinary mtDNA levels correlated inversely with renal mitochondrial density. Conclusions: PTRA leads to an acute rise in urinary mtDNA, reflecting renal mitochondrial injury that in turn inhibits renal recovery. Mitoprotection might minimize PTRA-associated mitochondrial injury and improve renal outcomes after revascularization. abstract_id: PUBMED:28870883 Percutaneous renal artery revascularization after prolonged ischemia secondary to blunt trauma: pooled cohort analysis. Purpose: We aimed to identify factors related to technical and clinical success of percutaneous revascularization for blunt renal arterial trauma. Methods: All cases of percutaneous revascularization for blunt renal arterial trauma were searched in the available literature. We included a case of iatrogenic renal artery occlusion at our institution treated by percutaneous stenting 20 hours after injury. A pooled cohort analysis of percutaneous revascularization for blunt renal artery injury was then performed to analyze factors related to technical and clinical success. 
Clinical failure was defined as development of new hypertension, serum creatinine rise, or significant asymmetry in split renal function. Results: A total of 53 cases have been reported, and 54 cases were analyzed including our case. Median follow-up was 6 months. Technical success was 88.9% and clinical success was 75%. Of 12 treatment failures (25%), 66.7% occurred during the first postprocedure month. Time from injury to revascularization was not a predictor of clinical success (OR=1.00, P = 0.681). Renal artery occlusion was significantly associated with clinical failure (OR=7.50, P = 0.017) and postintervention antiplatelet therapy was significantly associated with treatment success (OR=0.16, P = 0.043). At 37-month follow-up, the stented renal artery in our case remained patent and the patient was normotensive with preserved glomerular filtration rate. Conclusion: Percutaneous revascularization for blunt renal arterial injury resulted in relatively high technical and clinical success. Time-to-revascularization was independent of successful outcomes. Clinical success was significantly associated with a patent renal artery at the time of intervention and with postprocedure antiplatelet therapy. abstract_id: PUBMED:3798306 Revascularization of traumatic thrombosis of the renal artery. Renal artery thrombosis, although well recognized, remains a rare complication of blunt abdominal trauma. In an effort to resolve the current controversy concerning the appropriate therapy, we have reviewed the available literature. Only those instances when the injury was due to blunt trauma and resulted in complete occlusion of the renal artery, documented by roentgenographic means, were included in this review. Avulsion injuries, incomplete occlusion or branch artery injuries were also excluded. In order to be classified as a surgical success, postoperative documentation of renal function and a patent renal artery were required. Only nine successfully performed vascularization procedures were identified. There were four instances of bilateral obstruction with postoperative serum creatinine levels ranging from 1.77 to 7.1 milligrams per deciliter. All required postoperative dialysis ranging from three days to three months in duration. Thirty-five patients with an unilaterally obstructed renal artery underwent attempted revascularization. Five patients, all with a presumed ischemic time of less than 12 hours, had a successful outcome. Postoperatively, four patients demonstrated either a decrease in size or function of the injured kidney. Thirteen eventually required nephrectomy. abstract_id: PUBMED:1740811 Traumatic bilateral renal artery thrombosis diagnosed by computed tomography with successful revascularization: case report. Traumatic bilateral renal artery thrombosis is a rare injury. We found 15 cases previously reported. An additional case report of a 54-year-old man is presented with a review of the literature. The diagnosis was made by computed tomography and confirmed by angiography. Successful revascularization was performed. A high index of suspicion, early diagnosis, and prompt revascularization are essential in obtaining optimal results without hypertension or permanent impairment of renal function. abstract_id: PUBMED:21748626 Traumatic renal artery occlusion treated with an endovascular stent--the limitations of surgical revascularization: report of a case. 
When renal artery occlusion occurs secondary to blunt trauma, the recovery rate of renal function after open revascularization is varied and far from satisfactory. Although the optimal treatment for this type of injury has not been established, percutaneous revascularization by endovascular stenting has recently been advocated for patients with unilateral renal artery occlusion. We herein report a case of blunt renal artery occlusion treated with an endovascular stent. After the placement of the stent, renal arteriography showed multiple nonflow-limiting contrast defects in the distal renal arteries, suggesting peripheral thrombosis. Although the duration of warm renal ischemia appears to be the crucial determinant of renal function, multiple thrombi in the distal renal arteries, which would be undetectable during open surgery, could also affect the functional outcome. The presence of these thrombi may explain the limited success of surgical revascularization in such cases. abstract_id: PUBMED:26908767 Changes in inflammatory biomarkers after renal revascularization in atherosclerotic renal artery stenosis. Background: Atherosclerotic renal artery stenosis (ARAS) activates oxidative stress and chronic inflammatory injury. Contrast imaging and endovascular stenting pose potential hazards for acute kidney injury, particularly when superimposed upon reduced kidney perfusion. Methods: We measured sequential early and long-term changes in circulating inflammatory and injury biomarkers in 12 ARAS subjects subjected to computed tomography imaging and stent revascularization compared with essential hypertensive (EH) subjects of similar age under fixed sodium intake and medication regimens in a clinical research unit. Results: NGAL, TIMP-2, IGFBP7, MCP-1 and TNF-α all were elevated before intervention. Post-stenotic kidney volume, perfusion, blood flow and glomerular filtration rate (GFR) were lower in ARAS than in EH subjects. TIMP-2 and IGFBP7 fell briefly, then rose over 18 h after contrast imaging and stent deployment. Circulating NGAL decreased and remained lower for 27 h. These biomarkers in ARAS returned to baseline after 3 months, while kidney volume, perfusion, blood flow and GFR increased, but remained lower than EH. Conclusions: These divergent patterns of inflammatory signals are consistent with cell cycle arrest (TIMP-2, IGFBP7) and relative protection from acute kidney injury after imaging and stenting. Sustained basal elevation of circulating and renal venous inflammatory biomarkers support ongoing, possibly episodic, renal stress in ARAS that limits toxicity from stent revascularization. Answer: The question of whether renal revascularization is justified in cases of renal pedicle injury is complex and depends on various factors, including the timing of the injury, the type of injury, and the presence of associated conditions. In a study evaluating the treatment modalities for renal pedicle trauma, it was found that in cases seen after the 4th hour, the severity of ischemic lesions and renal sequelae, as well as the small number of kidneys saved despite revascularization surgery, argue in favor of immediately elective nephrectomy (PUBMED:9894257). This suggests that delayed revascularization may not be as beneficial, and immediate nephrectomy might be a more appropriate course of action in certain cases. In contrast, for acute renal artery occlusion, the type of occlusion (embolic, thrombotic, or traumatic) influences the success of revascularization. 
While embolectomy for embolic occlusion was successful in relieving hypertension but not in restoring renal function, treatment of thrombotic occlusion frequently resulted in the return of renal function, irrespective of the delay in treatment. However, in traumatic renal artery occlusion, the return of renal function did not occur despite reperfusion as early as 6 hours after injury (PUBMED:3820406). For atherosclerotic renal artery stenosis (ARAS), stent revascularization restored cortical blood flow and reversed tissue hypoxia but failed to reverse inflammatory pathways or improve glomerular filtration rate (GFR) (PUBMED:23899868). This indicates that while revascularization can improve some aspects of renal function, it may not fully reverse the damage caused by ARAS. In the context of transplant renal artery stenosis, a novel surgical approach to renal allograft revascularization has been described, which provides several advantages over previous procedures and has been shown to be effective in revascularizing the kidney (PUBMED:35833175). For traumatic bilateral renal artery thrombosis, early diagnosis and prompt revascularization are essential for optimal results without hypertension or permanent impairment of renal function (PUBMED:1740811). However, the presence of multiple thrombi in the distal renal arteries may limit the success of surgical revascularization (PUBMED:21748626). In summary, the justification for renal revascularization in cases of renal pedicle injury is not straightforward and must be evaluated on a case-by-case basis, considering the timing of the injury, the type of occlusion, and the potential for recovery of renal function.
Instruction: Reducing urine leakage after radical retropubic prostatectomy: pelvic floor exercises, magnetic innervation or no treatment? Abstracts: abstract_id: PUBMED:23658129 Reducing urine leakage after radical retropubic prostatectomy: pelvic floor exercises, magnetic innervation or no treatment? A quasi-experimental study. Background: In Italy, nurses can use pelvic floor muscle exercises (PFMEs) and extracorporeal magnetic innervation (ExMI) to treat urinary incontinence after radical retropubic prostatectomy (RRP). The efficacy of these treatments remains unclear. Purpose: To compare PFMEs, ExMI, in the management of post-RRP urinary incontinence. Methodology: This study compared PFMEs versus no treatment in reducing bladder continence difficulties, and PFMEs versus ExMI in reducing urine leakage. A total of 87 patients were treated with PFMEs, 23 with ExMI; 22 refused rehabilitation (control group). Findings: Three months after RRP, both interventions reduced the International Prostate Symptom Score, when compared to control group. After 6 months, no significant differences between the treatments were found (p = .8346). After a complete ExMI treatment (6 weeks), 63.88% lost less than 10 grams of urine per day (32.73% in the PFMEs group, p < .0001). Conclusions: PFMEs are useful up to the 3rd month after surgery; ExMI reduces leakages faster than PFMEs. abstract_id: PUBMED:36557065 Efficacy Comparison between Kegel Exercises and Extracorporeal Magnetic Innervation in Treatment of Female Stress Urinary Incontinence: A Randomized Clinical Trial. Background and Objectives: To estimate the effectiveness of Kegel exercises versus extracorporeal magnetic innervation (EMI) in the treatment of stress urinary incontinence (SUI). Materials and Methods: A parallel group, randomized clinical trial was conducted in the Department of Obstetrics and Gynecology, Clinical Hospital Centre Zagreb, Croatia. After assessing the inclusion/exclusion criteria, each eligible participant was randomized to one of the two observed groups by flipping a coin: the first group underwent treatment with Kegel exercises for 8 weeks, while the second group underwent EMI during the same time interval. The primary outcome was the effectiveness of treatment as measured by the ICIQ-UI-SF overall score, eight weeks after the commencement of treatment. Results: During the study period, 117 consecutive patients with SUI symptoms were assessed for eligibility. A total of 94 women constituted the study population, randomized into two groups: Group Kegel (N = 48) and Group EMI (N = 46). After 8 weeks of follow-up, intravaginal pressure values in the EMI group were 30.45 cmH2O vs. the Kegel group, whose values were 23.50 cmH2O (p = 0.001). After 3 months of follow-up, the difference was still observed between the groups (p = 0.001). After the end of treatment and 3 months of follow-up, the values of the ICIQ-UI SF and ICIQ-LUTSqol questionnaires in the EMI group were lower than in the Kegel group (p < 0.001). Treatment satisfaction was overall better in the EMI group than in the Kegel group (p < 0.001). Conclusions: Patients treated with EMI had a lower number of incontinence episodes, a better quality of life, and higher overall satisfaction with treatment than patients who performed Kegel exercises. abstract_id: PUBMED:23432098 Preoperative pelvic floor physiotherapy improves continence after radical retropubic prostatectomy.
Objectives: Urinary incontinence is a predictable sequela of radical retropubic prostatectomy, and is most severe in the early postoperative phase. The present study aimed to evaluate the effect of a physiotherapist-guided pelvic floor muscle training program, commenced preoperatively, on the severity and duration of urinary continence after radical retropubic prostatectomy. Methods: A retrospective analysis of men undergoing radical retropubic prostatectomy by one high-volume surgeon (n = 284) was carried out. The intervention group received physiotherapist-guided pelvic floor muscle training from 4 weeks preoperatively (n = 152), whereas the control group was provided with verbal instruction on pelvic floor muscle exercise by the surgeon alone (n = 132). Postoperatively, all patients received physiotherapist-guided pelvic floor muscle training. The primary outcome measure was 24-h pad weight at 6 weeks and 3 months postoperatively. Secondary outcome measures were the percentage of patients experiencing severe urinary incontinence, and patient-reported time to one and zero pad usage daily. Results: At 6 weeks postoperatively, the 24-h pad weight was significantly lower (9 g vs 17 g, P < 0.001) for the intervention group, which also showed less severe urinary incontinence (24-h pad weight >50 g; 8/152 patients vs 33/132 patients, P < 0.01). There was no significant difference between groups in the 24-h pad weight at 3 months (P = 0.18). Patient-reported time to one and zero pad usage was significantly less for the intervention group (P < 0.05). Multivariate Cox regression showed that preoperative physiotherapist-guided pelvic floor muscle training reduced time to continence (1 pad usage daily) by 28% (P < 0.05). Conclusions: A physiotherapist-guided pelvic floor muscle training program, commenced 4 weeks preoperatively, significantly reduces the duration and severity of early urinary incontinence after radical retropubic prostatectomy. abstract_id: PUBMED:29923602 Mechanical oscillations superimposed on the pelvic floor muscles during Kegel exercises reduce urine leakage in women suffering from stress urinary incontinence: A prospective cohort study with a 2-year follow up. Introduction: New methods of conservative treatment of female stress urinary incontinence are needed. We investigated whether superimposed vibration mechanosignals during Kegel exercises could reduce the amount of urinary leakage after 4 and 6 weeks of training. Material And Methods: Sixty women with stress urinary incontinence were included in this prospective cohort study. Vibration mechanosignals were superimposed during Kegel exercises using an intravaginal device. Each training session consisted of 15 maximal contractions of pelvic floor muscles for 5 s. The women performed training (5 min/day) at home for 4 (n = 60) and 6 (n = 36) weeks. Urine leakage (g) during stress test with standardized bladder volume, and contraction force without and with superimposed mechanical stimulations were measured at inclusion (T0), and after 4 (T2) and 6 (T3) weeks of training using an intravaginal device. Incontinence Questionnaire-Short Form was recorded at T0, and in a sub-cohort of women (n = 36) at 2 years follow up. Results: Mean urine leakage reduced significantly from 20.5 (± 12.2) g at T0 to 4.8 (± 6.7) g at T2 and 1.5 (± 6.7) g at T3. After 4 and 6 weeks of training, urinary leakage was ≤ 4 g on stress test in 44 and 49 of the 60 women, respectively.
At T0, the mean Incontinence Questionnaire-Short Form score was 13 (± 2.4), and at 2 years follow up, the score was 6.3 (± 3.75). Conclusions: Superimposed mechanical stimulation with Kegel exercises significantly reduced urinary leakage in women with stress urinary incontinence. abstract_id: PUBMED:32527102 Possibilities of objectivization of pelvic floor muscle exercises in patients with urine leakage after delivery. Background: Examination of pelvic floor muscle function is very important before starting exercises in patients with urine leakage and other pelvic floor dysfunctions. Perineometer and palpation examination is currently being used. A new trend in physiotherapy is the ultrasound examination of pelvic floor muscles. The examination can be performed by abdominal approach or perineal approach. We evaluate 2D and 3/4D images of pelvic floor muscles. Methods: The International Consultation on Incontinence Questionnaire Urinary Incontinence Short Form (ICIQ-UI SF). OAB-q - overactive bladder questionnaire - short form. The Urinary Incontinence Quality of Life scale (I-QoL) - self-assessment scale for assessing the quality of life of patients with urinary incontinence. Adjusted Oxford scale to assess pelvic floor muscle strength. PERFECT scheme by Laycock and Jerwood. Pelvic floor examination by perineometer (Peritron-Ontario, L4V, Canada). Pelvic floor examination by 2D and 3/4D ultrasound examination (Volunson-i BT 11 Console, VCI volume contrast imaging software; GE Healthcare Austria GmbH & Co OG, Zipf, Austria; RAB4-8-RS 3D/4D 4-8 MHz probe). High intensity exercise of pelvic floor muscles with stabilization elements. Conclusion: The effect of pelvic floor muscle training was objectively proved by the above mentioned objectivization methods with subjective improvement of quality of life. There was also a significant effect of education in USG exercise. abstract_id: PUBMED:27088198 The impact assessment of pelvic floor exercises on symptoms and quality of life of women with stress urinary incontinence Unlabelled: Stress urinary incontinence (SUI) is an involuntary, uncontrolled leakage of urine from the bladder during exercise, sneezing, coughing, laughing, bending or lifting heavy objects. It occurs when an increase in abdominal pressure meets a failing muscular-ligamentous support system. This problem affects both women and men. It can lead to serious mental disorders such as depression, lowered self-esteem and dignity, decline in social status, deterioration of mood, anxiety and a decrease in sexual activity. Treatment of SUI includes surgical and conservative approaches, among them pelvic floor exercises. The Aim: To evaluate the effect of pelvic floor muscle exercises on the intensity and frequency of urine leakage and on quality of life in women with SUI. Materials And Methods: The study was conducted on a group of 30 women with a mean age of 46±4.23 years diagnosed with SUI. The study used a purpose-designed questionnaire consisting of two parts, with a total of 33 questions. All subjects underwent pelvic floor muscle training consisting of two stages. The first stage lasted four weeks and consisted of a set of six exercises; in the second stage the women received an outline of home exercises. Statistical analysis was performed using the statistical package PQStat ver. 1.4.2.324.
Results: Regular pelvic floor muscle training resulted in a 90% reduction in the frequency with which women needed to use the toilet during the day, and 93% of respondents got up less often at night or did not use the toilet at night at all. 30% of respondents reported reduced urine leakage during physical activity, and 17% during sneezing and carrying shopping. A reduced need to use panty liners was also found. 30% of women observed an improvement in the professional sphere, 28% in the sexual sphere and 21% in the social and family sphere. Conclusions: After completion of treatment, a reduction in the number of episodes of uncontrolled micturition was observed, which contributed to the comfort of the surveyed women. After the series of exercises, improvement was noted in interpersonal relations and in private and professional life, along with an increased sense of satisfaction among patients. abstract_id: PUBMED:33690249 Effects of Biofeedback-Guided Pelvic Floor Muscle Training With and Without Extracorporeal Magnetic Innervation Therapy on Stress Incontinence: A Randomized Controlled Trial. Purpose: We evaluated the effects of biofeedback-guided pelvic floor muscle training (EMG-BF), with and without extracorporeal magnetic innervation (EMG-BF+ExMI) therapy on lower urinary tract symptoms based on frequency of stress urinary incontinence (SUI) and grams of urine loss, health-related quality of life, and sexual function in women with SUI. Design: This was a randomized controlled trial. Subjects And Setting: The sample comprised 51 adult women with SUI. Their mean age was 50.92 years (SD 8.88). Twenty-six were randomly allocated to EMG-BF alone and 25 were allocated to undergo EMG-BF+ExMI. Methods: This study's main outcome was lower urinary tract symptoms measured via the 1-hour pad test (grams of urine loss) and a 3-day bladder diary (frequency of stress incontinence episodes). Additional outcome measures were health-related quality of life measured with the Incontinence Quality of Life (I-QOL) questionnaire, sexual function evaluated via the Female Sexual Function Index (FSFI), and pelvic floor muscle contraction force measured via a perineometer and Modified Oxford Scale (MOS). All participants underwent biofeedback-enhanced pelvic floor muscle training using EMG during 20-minute sessions twice weekly for a period of 8 weeks. In addition to the EMG-BF+ExMI group, ExMI was applied during 20-minute sessions twice weekly for a period of 8 weeks. Participants from both groups were asked to perform pelvic floor muscle exercises at home (60 pelvic floor muscle contractions daily, divided into 3 sessions of 20 contractions each). Outcome measures were made at baseline and repeated at the end of treatment. Results: Fifteen (57.7%) in the EMG-BF group and 13 (52.0%) in the EMG-BF+ExMI group achieved dryness. Four participants (15.4%) in the EMG-BF group and 5 (20%) in the EMG-BF+ExMI group experienced improvement. Seven patients (26.9%) in the EMG-BF group and 7 (28%) in the EMG-BF+ExMI group did not benefit from the treatments. There was no statistically significant difference between the groups in terms of cure and improvement (P = .895). Conclusions: Findings indicate that use of magnetic innervation does not improve lower urinary tract symptoms, health-related quality of life, sexual function, and pelvic floor muscle strength when compared to pelvic floor muscle training alone. abstract_id: PUBMED:24741121 Impact of Retropubic vs.
Transobturator Slings for Urinary Incontinence on Myofascial Structures of the Pelvic Floor, Adductor and Abdominal Muscles. Suburethral tension-free slings (tapes or bands) are an essential component in the operative treatment of urinary incontinence. In the present contribution the influence of the type of suburethral sling (retropubic vs. transobturator) on the myofascial structures of the abdominal, adductor and pelvic floor muscles is examined. For this purpose, 70 patients were prospectively observed clinically and physiotherapeutically. Significant differences were seen in the improvement of the pelvic floor musculature (strength, endurance, speed) after placement of a suburethral sling, irrespective of whether it was of the retropubic or the transobturator type. Thus, after surgical treatment patients should be encouraged to undertake further pelvic floor exercising or this should be prescribed for them. There were no significant changes in the abdominal and adductor muscles but there were slight increases with regard to pain level, pain on palpation, and trigger points after placement of both types of sling; thus this is not a criterion in the decision as to which type of sling to use. abstract_id: PUBMED:14972468 Comparative study of effects of extracorporeal magnetic innervation versus electrical stimulation for urinary incontinence after radical prostatectomy. Objectives: To perform a randomized comparative study to investigate the clinical effects of extracorporeal magnetic innervation (ExMI) and functional electrical stimulation (FES) on urinary incontinence after retropubic radical prostatectomy. Methods: Thirty-six patients with urinary incontinence after radical prostatectomy were randomly assigned to three groups (12 patients each in the FES, ExMI, and control groups). For FES, an anal electrode was used. Pulses of 20-Hz square waves at a 300-μs pulse duration were used for 15 minutes twice daily for 1 month. For ExMI, the Neocontrol system was used. The treatment sessions were for 20 minutes, twice a week for 2 months. The frequency of the pulse field was 10 Hz for 10 minutes, followed by a second treatment at 50 Hz for 10 minutes. For the control group, only pelvic floor muscle exercises were performed. Objective measures included bladder diaries, 24-hour pad weight testing, and a quality-of-life survey, at 1, 2, and 4 weeks and 2, 3, 4, 5, and 6 months after removing the catheter. Results: The leakage weight during the 24 hours after removing the catheter was 684, 698, and 664 g for the FES, ExMI, and control groups, respectively. At 1 month, it was 72, 83, and 175 g (FES versus control, P <0.05) and at 2 months was 54, 18, and 92 g (ExMI versus control, P <0.05) in the FES, ExMI, and control groups, respectively. Finally, 6 months later, the average 24-hour leakage weight was less than 10 g in all groups. Quality-of-life measures decreased after surgery, but gradually improved over time in all groups. No complications were noted in any of the groups. Conclusions: ExMI and FES therapies offered earlier continence compared with the control group after radical prostatectomy. We consider ExMI and FES to be recommendable options for patients who want quick improvement of postoperative urinary incontinence. abstract_id: PUBMED:9202555 Pelvic floor exercises as a treatment for post-micturition dribble. Objective: To determine the effectiveness of pelvic floor exercises and urethral milking as treatments for post-micturition dribble.
Patients And Methods: A method of measuring small amounts of urine loss during normal activity was developed; pads were worn for short periods (< 4 h) and then stored in two sealed plastic bags which were weighed within 72 h. Forty-nine men (age range 36-83 years) drawn from a hospital out-patient population, who had not undergone surgery on the bladder, urethra or prostate gland, entered the study. They were randomly assigned to one of three treatment groups; pelvic muscle exercise, urethral milking or counselling. Participants in each group followed the treatment specific to their group for 12 weeks. At 5, 9 and 13 weeks, urine loss was assessed using the method described. Results: The groups were comparable for age, height, weight and pelvic muscle contraction strength and compliance of the men who completed the study was excellent. The outcome measure (improvement in pad weight gain) was strongly influenced by initial pad weight gain, or degree of urine loss at the start of the study and this was treated as a covariate in an analysis of variance model. After allowing for the effects of initial pad weight gain, the counselling group showed no improvement, the urethral milking group showed an adjusted mean improvement in urine loss of 2.9 g after 13 weeks, compared with 4.7 g in the exercise group. Conclusion: Both pelvic floor exercises and urethral milking are effective treatments for post-micturition dribble compared with counselling alone. Pelvic floor exercises were more effective in reducing urine loss than urethral milking in this study. Answer: After radical retropubic prostatectomy (RRP), various management options have been compared for reducing urine leakage, including pelvic floor exercises (PFMEs), extracorporeal magnetic innervation (ExMI), and no treatment. The efficacy of these treatments has been compared in several studies. One study found that both PFMEs and ExMI reduced the International Prostate Symptom Score three months after RRP when compared to no treatment. However, after six months, no significant differences between the treatments were found. Notably, after a complete ExMI treatment (6 weeks), a higher percentage of patients lost less than 10 grams of urine per day compared to the PFMEs group, suggesting that ExMI reduces leakages faster than PFMEs (PUBMED:23658129). Another study compared the efficacy of Kegel exercises and ExMI in the treatment of female stress urinary incontinence (SUI). It found that patients treated with ExMI had a lower number of incontinence episodes, a better quality of life, and higher overall satisfaction with treatment than patients who performed Kegel exercises (PUBMED:36557065). Preoperative pelvic floor physiotherapy, commenced 4 weeks preoperatively, has also been shown to significantly reduce the duration and severity of early urinary incontinence after RRP (PUBMED:23432098). Additionally, mechanical oscillations superimposed on the pelvic floor muscles during Kegel exercises have been shown to significantly reduce urinary leakage in women with stress urinary incontinence (PUBMED:29923602). Furthermore, a trial of biofeedback-guided pelvic floor muscle training with and without ExMI therapy for SUI showed that the addition of magnetic innervation did not improve lower urinary tract symptoms, health-related quality of life, sexual function, and pelvic floor muscle strength when compared to pelvic floor muscle training alone (PUBMED:33690249).
In conclusion, both PFMEs and ExMI have been found to be useful in reducing urine leakage after RRP, with ExMI potentially providing faster results. Preoperative pelvic floor physiotherapy can also be beneficial. However, the addition of magnetic innervation to pelvic floor muscle training may not offer additional benefits over training alone. Receiving no treatment at all is generally less effective than either PFMEs or ExMI.
Instruction: Quality of end-of-life care for cancer patients: does home hospice care matter? Abstracts: abstract_id: PUBMED:35686746 End-of-life care quality for children with cancer who receive palliative care. Background: We previously developed stakeholder-informed quality measures to assess end-of-life care quality for children with cancer. We sought to implement a subset of these quality measures in the multi-center pediatric palliative care (PPC) database. Procedures: We utilized the Shared Data and Research database to evaluate the proportion of childhood cancer decedents from 2017-2021 who, in the last 30 days of life, avoided chemotherapy, mechanical ventilation, intensive care unit admissions, and > 1 hospital admission; were enrolled in hospice services, and reported ≤ 2 highly distressing symptoms. We then explored patient factors associated with the attainment of quality benchmarks. Results: Across 79 decedents, 82% met ≥ 4 quality benchmarks. Most (76%) reported > 2 highly distressing symptoms; 17% were enrolled in hospice. In univariable analyses, patients with an annual household income ≤$50,000 had lower odds of hospice enrollment and avoidance of mechanical ventilation or intensive care unit admissions near end of life (odds ratio [OR] 0.10 [95% confidence interval (C.I.) 0.01, 0.86], p = 0.04; OR 0.13 [0.02, 0.64], p = 0.01; OR 0.36 [0.13, 0.98], p = 0.04, respectively). In multivariable analyses, patients with an income ≤$50,000 remained less likely to enroll in hospice, after adjusting for cancer type (OR 0.10 [0.01, 0.87]; p = 0.04). Conclusions: Childhood cancer decedents who received PPC met a large proportion of quality measures near the end of their life. Yet, many reported highly distressing symptoms. Moreover, patients with lower household incomes appeared less likely to enroll in hospice and more likely to receive intensive hospital services near the end of life. This study identifies opportunities for palliative oncology quality improvement. abstract_id: PUBMED:35951460 Attitudes and Beliefs of End-of-Life Care Among Blackfeet Indians. Disparity in hospice use threatens optimal quality of life during the final stage of life while American Indians/Alaska Natives may not be aware of hospice benefits. Our established Blackfeet members and Montana State University collaborative team conducted a modified Duke End-of-Life Care Survey (8 sections with 60 questions) to assess baseline end-of-life values, beliefs, and attitudes of Blackfeet individuals. In this manuscript, we present the results of 3 sections with 28 questions: Preference of Care; Beliefs About Dying, Truth Telling, and Advance Care Planning; and Hospice Care by examining overall and generational differences. Most participants (n = 92) chose quality of life over quantity of life achieved with various devices if they had an incurable disease (54-82%), would want to know if they were dying (92%) or had cancer (89%), but had not thought or talked about their preference of end-of-life care (30% and 35% respectively). The results portray understandable cultural context as well as generational differences with personal variability. While an affirmative shift towards hospice was emerging, dissemination of accurate hospice information would benefit people in the partner community. In conclusion, an individual-centered approach (understanding individual need first) may be the most appropriate and effective strategy to promote hospice information and its use.
abstract_id: PUBMED:32856023 Caregiver-Reported Quality Measures and Their Correlates in Home Hospice Care. Background: A majority of hospice care is delivered at home, with significant caregiver involvement. Identifying factors associated with caregiver-reported quality measures could help improve hospice care in the United States. Objectives: To identify correlates of caregiver-reported quality measures: burden, satisfaction, and quality of end-of-life (EoL) care in home hospice care. Design: A cross-sectional study was conducted from April 2017 through February 2018. Setting/Subjects: A nonprofit, urban hospice organization. We recruited caregivers whose patients were discharged from home hospice care. Eligible caregiver participants had to be 18 years or older, English-speaking, and listed as a primary caregiver at the time the patient was admitted to hospice. Measures: The (1) short version of the Burden Scale for Family Caregivers; (2) Family Satisfaction with Care; and (3) Caregiver Evaluation of the Quality of End-Of-Life Care. Results: Caregivers (n = 391) had a mean age of 59 years and most were female (n = 297, 76.0%), children of the patient (n = 233, 59.7%), and non-Hispanic White (n = 180, 46.0%). The mean age of home hospice patients was 83 years; a majority had a non-cancer diagnosis (n = 235, 60.1%), were female (n = 250, 63.9%), and were non-Hispanic White (n = 210, 53.7%). Higher symptom scores were significantly associated with greater caregiver burden and lower satisfaction with care; but not lower quality of EoL care. Caregivers who were less comfortable managing patient symptoms during the last week on hospice had higher caregiver burden, lower caregiver satisfaction, and lower ratings of quality of EoL care. Conclusion: Potentially modifiable symptom-related variables were correlated with caregiver-reported quality measures. Our study reinforces the important relationship between the perceived suffering/symptoms of patients and caregivers' hospice experiences. abstract_id: PUBMED:28684928 End-of-Life Transitions and Hospice Utilization for Adolescents: Does Having a Usual Source of Care Matter? Adolescents with life-limiting illnesses have intensive end-of-life trajectories and could benefit from initiation of hospice services. The medical home model, which includes having a usual source of primary care, may help facilitate quality outcomes at the end-of-life for adolescents. The purpose of this study was to determine the relationship between having a usual source of primary care on hospice utilization and end-of-life transitions among adolescents between 15-20 years with a life-limiting illness. A retrospective cohort design used 2007-2010 California Medicaid claims data (n=585). Our dependent variables were hospice utilization (i.e., hospice enrollment, hospice length of stay) and the independent variable was usual source of primary care. Multivariate regression techniques including least squares regression, multivariate logistic regression, and negative binomial regression were used in the analysis of the relationship between usual source of primary care and hospice utilization and end-of-life transitions. Ten percent of our sample utilized hospice services. Having a usual source of primary care was associated with an increase in hospice enrollment, hospice length of stay, and end-of-life transitions. Adolescents with a cancer diagnosis were more likely to enroll in hospice services. 
For adolescents at the end of life, having a usual source of primary care had a significant impact on hospice enrollment and length of stay. This study is among the first to demonstrate a relationship between primary care and hospice use among this vulnerable population. abstract_id: PUBMED:34128937 Quality of life in cancer patients in integrated home oncological care and hospice: comparative study. Background: Measuring quality of life and factors influencing it such as pain and anxiety is a major part of the overall assessment of cancer patients. There are different clinical settings aimed at satisfying the needs of these patients. The purpose of this study was to assess the different perceptions of quality of life in cancer patients in integrated home oncological care (ADI) and hospice. Methods: We invited to participate all subjects suffering from oncological pathology followed with home care activities at ANT, ODO Bat-Bari Nord and inpatients at the hospice "Mons. Aurelio Marena" in Bitonto (BA) from 15/10/2019 to 15/07/2020. During the 4 collection phases, BPI, STAI-Y 1-2, EORTC were administered. Results: 80 subjects were consecutively enrolled, divided in the same proportion between ADI and hospice. At the end of the study the pain intensity in subjects in ADI was significantly lower than baseline (p=0.02) and the level shown by subjects hospitalized in hospice (p=0.01). No differences in anxiety were found between the settings; lower levels were found among ADI subjects (p=0.03) and those living with families (p=0.01). The EORTC QLC-30 scores trend shows a progressive worsening of the quality of life, in particular in the subjects in hospice (p=0.021). Discussion: The research suggests that time at home (ADI) compared to hospice can impact on pain perception, quality of life and anxiety levels. In addition, the presence of the family, and therefore of the close ties that have always accompanied the patient, seems to be a determining factor of support for the individual, able to positively affect levels of anxiety. It is desirable to investigate the prognostic value of the quality of life perceived by patients. abstract_id: PUBMED:28408618 National Policies Fostering Hospice Care Increased Hospice Utilization and Reduced the Invasiveness of End-of-Life Care for Cancer Patients. Background: In 2011, two national policies aiming to foster hospice services for terminal cancer patients took effect in Taiwan. The single-payer National Health Insurance of Taiwan started to reimburse full hospice services. The national hospital accreditation program, which graded all hospitals, incorporated hospice utilization in its evaluation. We assessed the impact of these national policies. Methods: A cohort of 249,394 patients aged ≥18 years who died of cancer between 2008 and 2013 were identified from the National Death Registry. We retrieved utilization data of medical services and compared the health care utilization in the final month of life before and after the implementation of the new policies. Results: After the policy changes, hospice utilization increased from 20.8% to 36.2%. In a multivariate analysis adjusting for patient demographics, cancer features, and hospital characteristics, hospice utilization significantly increased after 2011 (adjusted odds ratio [AOR] 2.35, p < .001), accompanied by a decrease in intensive care unit (ICU) admissions, invasive mechanical ventilation (IMV), and cardiopulmonary resuscitation (CPR; AORs 0.87, 0.75, and 0.80, respectively; all p < .001).
The patients who received hospice services were significantly less likely to receive ICU admissions, IMV, and CPR (AORs 0.20, 0.12, and 0.10, respectively; all p < .001). Hospice utilization was associated with an adjusted net savings of U.S. $696.90 (25.2%, p < .001) per patient in the final month of life. Conclusion: The national policy changes fostering hospice care significantly increased hospice utilization, decreased invasive end-of-life care, and reduced the medical costs of terminal cancer patients. Implications For Practice: National policies fostering hospice care significantly increased hospice utilization, decreased invasive end-of-life care, and reduced the medical costs of terminal cancer patients. abstract_id: PUBMED:27127256 Regulating and Paying for Hospice and Palliative Care: Reflections on the Medicare Hospice Benefit. Hospice began as a social movement outside of mainstream medicine with the goal of helping those dying alone and in unbearable pain in health care institutions. The National Hospice Study, undertaken to test whether hospice improved dying cancer patients' quality of life while saving Medicare money, found hospice care achieved comparable outcomes to traditional cancer care and was less costly as long as hospice lengths of stay were not too long. In 1982, before study results were final, Congress created a Medicare hospice benefit under a capitated per diem payment system restricting further treatment. In 1986 the benefit was extended to beneficiaries living in nursing homes. This change resulted in longer average lengths of stay, explosive growth in the number of hospices, particularly of the for-profit variety, and increases in total Medicare expenditures on hospice care. An increasingly high proportion of beneficiaries receive hospice care. However, over 30 percent are served fewer than seven days before they die, while very long stays are also increasingly common. These and other factors raise quality concerns about hospice being disconnected from the rest of the health care system. We offer suggestions regarding how hospice could be better integrated into the broader health care delivery system. abstract_id: PUBMED:36497668 Hospice Care Improves Patients' Self-Decision Making and Reduces Aggressiveness of End-of-Life Care for Advanced Cancer Patients. The aim of the current study is to evaluate the different degrees of hospice care in improving patients' autonomy in decision-making and reducing aggressiveness of cancer care in terminal-stage cancer patients, especially in reducing polypharmacy and excessive life-sustaining treatments. This was a retrospective cross-sectional study conducted in a single medical center in Taiwan. Patients with advanced cancer who died in 2010-2019 were included and classified into three subgroups: hospice ward admission, hospice shared care, and no hospice care involvement. In total, 8719 patients were enrolled, and 2097 (24.05%) were admitted to a hospice ward; 2107 (24.17%) received hospice shared care, and 4515 (51.78%) had no hospice care. Those admitted to hospice ward had significantly higher rates of having completed do-not-resuscitate order (100%, p < 0.001) and signed the do-not-resuscitate order by themselves (48.83%, p < 0.001), and they had lower aggressiveness of cancer care (2.2, p < 0.001) within the 28 days before death. Hospice ward admission, hospice shared care, and age > 79 years were negatively associated with aggressiveness of cancer care.
In conclusion, our study showed that end-of-life hospice care was related to higher patient autonomy in decision-making and less excessively aggressive cancer care; the influence of care was more overt in patients approaching death. Further clinical efforts should be made to clarify the patient and the families' satisfaction and perceptions of quality after hospice care involvement. abstract_id: PUBMED:30587012 Timely Referral to Hospice Care for Oncology Patients: A Retrospective Review. Hospice care is medical care provided to terminally ill patients with a life expectancy of 6 months or less. Hospice services include symptom control, pain management, palliative care, and other supportive services such as providing for home equipment or oxygen; however, they do not provide for life-prolonging therapies such as chemotherapy. Although oncologic benchmarks suggest patients should be enrolled in hospice 3 months prior to death, studies show that most hospice referrals are being made too late. These shorter stays in hospice result in increased cost of care especially at the end of life with most patients dying on aggressive treatments in the hospital. Thus, identifying barriers to hospice placement is critical in improving the referral process and enhancing the quality of end-of-life care. This retrospective study collected data on 418 oncologic patients who passed in 2015 and categorized patients based on hospice status at the time of death. Our study found that the demographics between hospice and nonhospice patients were not significantly different. Hospice patients spent a median of 10 days in hospice and 71% (n = 161) of patients were in hospice 30 days or less. Additionally, 56% of patients were in hospice 10 days or less. Increased education for patients and health-care providers along with better utilization of palliative care services and incorporating a nurse navigator to help with transitioning patients to hospice would improve earlier referral to hospice care and enhance patients' quality of life. abstract_id: PUBMED:25526387 Quality of end-of-life care for cancer patients: does home hospice care matter? Objectives: Since metastasized cancer patients receive many treatments and services, it is important to ascertain whether home hospice (HH) care makes a meaningful contribution to end-of-life quality for terminal patients. This study examines whether people who had died from metastasized cancer (both recipients of HH care and nonrecipients) were cared for according to palliative indicators and whether HH care made a difference. Study Design: Three to 6 months after the deaths of 193 metastatic cancer patients, members of their families were interviewed face-to-face. Information on their loved ones' utilization of healthcare services in the last 2 months of life was retrieved from computerized administrative files. Results: The patients' average age was 69.5 years (SD=13.9), 56% were men, and 21% received HH care. More patients with HH care than without received opiate medication (92% vs 68%, respectively; P<.01), appropriate treatment for anxiety (57% vs 30%, respectively; P<.01), had advance directives, and received explanations about their rights. Only 5% of HH patients were treated with curative care in the last 2 months of life, compared with 40% of those without HH (P<.01). Of those who received curative care, more of them died at home (56% vs 26%, respectively; P<.01), and more died at the place of their choice (60% vs 30%, respectively; P<.01).
No differences were found regarding healthcare service utilization. Conclusions: The findings demonstrate the valuable contributions of HH and palliative care. Clinicians should consider referring cancer patients to palliative care services and establishing working relationships with HH and palliative care providers. Answer: Yes, home hospice care does matter for the quality of end-of-life care for cancer patients. Studies have shown that home hospice (HH) care can make a meaningful contribution to the quality of end-of-life care for terminal patients. Patients who received home hospice care were more likely to receive opiate medication and appropriate treatment for anxiety, to have advance directives, and to receive explanations about their rights compared to those who did not receive HH care. Additionally, a smaller percentage of HH patients were treated with curative care in the last two months of life, and more of them died at home and at the place of their choice (PUBMED:25526387). Furthermore, hospice care has been associated with higher patient autonomy in decision-making and less excessively aggressive cancer care, particularly as patients approach death. The influence of hospice care is more overt in patients nearing the end of life, and clinical efforts should focus on clarifying patient and family satisfaction and perceptions of quality after hospice care involvement (PUBMED:36497668). National policies fostering hospice care have been shown to significantly increase hospice utilization, decrease invasive end-of-life care, and reduce medical costs for terminal cancer patients (PUBMED:28408618). Additionally, timely referral to hospice care is critical for enhancing the quality of end-of-life care, as studies indicate that most hospice referrals are made too late, resulting in increased costs and patients dying on aggressive treatments in the hospital (PUBMED:30587012). In summary, home hospice care plays a significant role in improving the quality of end-of-life care for cancer patients by ensuring better symptom management, respecting patient preferences, and reducing the aggressiveness of care at the end of life.
Instruction: Are accelerometers a useful tool for measuring disease activity in children with eczema? Abstracts: abstract_id: PUBMED:22970691 Are accelerometers a useful tool for measuring disease activity in children with eczema? Validity, responsiveness to change, and acceptability of use in a clinical trial setting. Background: Actigraphy, which uses accelerometers to record movement, has been proposed as an objective method of itch assessment in eczema. Previous studies have found strong correlations with actigraphy and video surveillance, disease severity and biological markers in patients with eczema. Objectives: To assess the validity of accelerometer data, its responsiveness to change and the practicality and acceptability of accelerometers when used as an outcome measure in a clinical trial. Methods: This study used data collected from 336 participants of the Softened Water Eczema Trial (SWET). Accelerometer data were compared with three standardized scales: Six Area, Six Sign Atopic Dermatitis (SASSAD) severity score, Patient Oriented Eczema Measure (POEM) and Dermatitis Family Impact (DFI). Spearman's rank testing was used for correlations. Results: Only 70% of trial participants had complete data, compared with 96% for the primary outcome (eczema severity - SASSAD). The convergent validity of accelerometer data with other measures of eczema severity was poor: correlation with SASSAD 0·15 (P = 0·02) and POEM 0·10 (P = 0·13). Assessing for divergent validity against quality of life measures, the correlation with the DFI was low (r = 0·29, P < 0·0001). Comparing the change scores from baseline to week 12 for SASSAD, POEM and DFI with the change in accelerometer scores we found low, negative correlations (r = -0·02, P = 0·77; r = -0·12, P = 0·06; and r = -0·01, P = 0·87, respectively). In general, the units were well tolerated but suggestions were made that could improve their usability in children. Conclusions: Actigraphy did not correlate well with disease severity or quality of life when used as an objective outcome measure in a multicentre clinical trial, and was not responsive to change over time. Further work is needed to establish why this might be, and to establish improved methods of distinguishing between eczema-related and eczema-nonrelated movements. abstract_id: PUBMED:23891353 A simple asthma prediction tool for preschool children with wheeze or cough. Background: Many preschool children have wheeze or cough, but only some have asthma later. Existing prediction tools are difficult to apply in clinical practice or exhibit methodological weaknesses. Objective: We sought to develop a simple and robust tool for predicting asthma at school age in preschool children with wheeze or cough. Methods: From a population-based cohort in Leicestershire, United Kingdom, we included 1- to 3-year-old subjects seeing a doctor for wheeze or cough and assessed the prevalence of asthma 5 years later. We considered only noninvasive predictors that are easy to assess in primary care: demographic and perinatal data, eczema, upper and lower respiratory tract symptoms, and family history of atopy. We developed a model using logistic regression, avoided overfitting with the least absolute shrinkage and selection operator penalty, and then simplified it to a practical tool. We performed internal validation and assessed its predictive performance using the scaled Brier score and the area under the receiver operating characteristic curve.
Results: Of 1226 symptomatic children with follow-up information, 345 (28%) had asthma 5 years later. The tool consists of 10 predictors yielding a total score between 0 and 15: sex, age, wheeze without colds, wheeze frequency, activity disturbance, shortness of breath, exercise-related and aeroallergen-related wheeze/cough, eczema, and parental history of asthma/bronchitis. The scaled Brier scores for the internally validated model and tool were 0.20 and 0.16, and the areas under the receiver operating characteristic curves were 0.76 and 0.74, respectively. Conclusion: This tool represents a simple, low-cost, and noninvasive method to predict the risk of later asthma in symptomatic preschool children, which is ready to be tested in other populations. abstract_id: PUBMED:27156181 Associations of Physical Activity and Sedentary Behavior with Atopic Disease in United States Children. Objectives: To determine if eczema, asthma, and hay fever are associated with vigorous physical activity, television/video game usage, and sports participation and if sleep disturbance modifies such associations. Study Design: Data were analyzed from 2 cross-sectional studies including 133 107 children age 6-17 years enrolled in the 2003-2004 and 2007-2008 National Survey of Children's Health. Bivariate and multivariate survey logistic regression models were created to calculate the odds of atopic disease and atopic disease severity on vigorous physical activity, television/video game use, and sports participation. Results: In multivariate logistic regression models controlling for sociodemographic factors, lifetime history of asthma was associated with decreased odds of ≥1 days of vigorous physical activity (aOR, 0.87; 95% CI, 0.77-0.99) and decreased odds of sports participation (0.91; 95% CI, 0.84-0.99). Atopic disease accompanied by sleep disturbance had significantly higher odds of screen time and lower odds of sports participation compared with children with either atopic disease or sleep disturbance alone. Severe eczema (aOR, 0.39; 95% CI, 0.19-0.78), asthma (aOR, 0.29; 95% CI, 0.14-0.61), and hay fever (aOR, 0.48; 95% CI, 0.24-0.97) were all associated with decreased odds of ≥1 days of vigorous physical activity. Moderate (aOR, 0.76; 95% CI, 0.57-0.99) and severe eczema (aOR, 0.45; 95% CI, 0.28-0.73), severe asthma (aOR, 0.47; 95% CI, 0.25-0.89), and hay fever (aOR, 0.53; 95% CI, 0.36-0.61) were associated with decreased odds of sports participation in the past year. Conclusions: Children with severe atopic disease, accompanied by sleep disturbance, have higher risk of sedentary behaviors. abstract_id: PUBMED:31384581 Measuring the quality of life of the families of children with eczema in Hong Kong. Background: Eczema is the most common skin problem among children in Hong Kong. Previous studies have highlighted that the quality of life of the families of children with eczema influences the effects of eczema interventions. However, the Chinese version of the Family Dermatology Life Quality Index (C-FDLQI), a tool for measuring the quality of life of the families of children with eczema, has not yet been validated. Objective: This study examined the psychometric properties of the C-FDLQI among parents and caregivers of children with eczema in Hong Kong. 
Methods: This study evaluated the internal consistency, test-retest reliability and structural validity of the C-FDLQI and its convergent validity by examining its correlations with the SCORing Atopic Dermatitis (SCORAD) and the Cantonese version of the Children's Dermatology Life Quality Index (C-CDLQI) among 147 Chinese parents/caregivers of children with varying degrees of eczema. Results: Based on the ratings by an expert panel, both the content validity index and semantic equivalence of the C-FDLQI were satisfactory (>0.90). The C-FDLQI showed high internal consistency, with a Cronbach α of 0.95. Its test-retest reliability was good, with weighted kappa values for the items ranging from 0.70 to 1.00. The total scores of the C-FDLQI showed positive correlations with those of the C-CDLQI (Pearson r = 0.75, p < 0.001) and SCORAD (Pearson r = 0.62, p < 0.001). Known-group comparisons of the C-FDLQI between the parents/caregivers of children with mild eczema and those of children with moderate to severe eczema showed a significant difference (t = -7.343, p < 0.001), indicating that the C-FDLQI had acceptable convergent validity. Confirmatory factor analysis supported the one-factor structure of the C-FDLQI. Conclusion: The results suggest that the C-FDLQI is a reliable and valid tool for evaluating the quality of life of parents or caregivers of children with eczema in Hong Kong. abstract_id: PUBMED:16792766 A comparative study of impairment of quality of life in children with skin disease and children with other chronic childhood diseases. Background: Chronic disease can have physical and psychological effects which affect social functioning. These effects can be better understood from the perspective of parent and child by the use of health-related quality of life (HRQL) measures. Various HRQL measures are now available, of which generic health measures have been the most widely used. These permit comparison between different diseases and also the normal population. Objectives: To cross-validate a new generic HRQL proxy measure for children, the Children's Life Quality Index (CLQI), with an established speciality-specific dermatological questionnaire, the Children's Dermatology Life Quality Index (CDLQI), in a group of children with chronic skin diseases. The impairment of HRQL in the same group of children with skin disease was then compared with that associated with other common chronic childhood diseases using the CLQI. Methods: The CDLQI was completed by 379 children aged 5-16 years with skin disease of more than 6 months' duration. Their parents (n=379) and parents of 161 children aged 5-16 years with other chronic diseases were also asked to complete a proxy measure, the CLQI. Results: Using linear regression analysis, the CLQI and the CDLQI scores showed a strong linear association (rs=0.72, P<0.001) and on a Bland-Altman plot, reasonably good agreement (expressing scores out of 100, the 95% limits of agreement were from -25.5/100 to 26.7/100). In the child's opinion psoriasis and atopic dermatitis (AD) caused the greatest impairment (CDLQI scores of 30.6% and 30.5%), followed by urticaria (20%) and acne (18%). Using the generic CLQI (scored 0-36), from the parental perspective the highest score was for AD (33%), followed by urticaria (28%), psoriasis (27%) and alopecia (19%).
Comparing this with children with other chronic diseases, those with cerebral palsy had the highest score (38%), followed in descending order by those with generalized AD (33%), renal disease (33%), cystic fibrosis (32%), urticaria (28%), asthma (28%) and psoriasis (27%). Diseases such as epilepsy (24%) and enuresis (24%) scored higher than diabetes (19%), localized eczema (19%), alopecia (19%) and acne (16%). Conclusions: Using the CLQI we have shown that HRQL impairment in children with chronic skin disease is at least equal to that experienced by children with many other chronic diseases of childhood, with AD and psoriasis having the greatest impact on HRQL among chronic skin disorders and only cerebral palsy scoring higher than AD. Cross-validation of the CLQI with the CDLQI in the group of children with skin disease demonstrates a strong linear association and good agreement between the two. abstract_id: PUBMED:21504435 Patch testing is a useful investigation in children with eczema. Background: Allergic contact dermatitis in children is less recognized than in adults. However, recently, allergic contact dermatitis has started to attract more interest as a cause of or contributor to eczema in children, and patch testing has been gaining in recognition as a useful diagnostic tool in this group. Objectives: The aim of this analysis was to investigate the results of patch testing of selected children with eczema of various types (mostly atopic dermatitis) attending the Sheffield Children's Hospital, and to assess potential allergens that might elicit allergic contact dermatitis. Patients And Methods: We analysed retrospectively the patch test results in 110 children aged between 2 and 18 years, referred to a contact dermatitis clinic between April 2002 and December 2008. We looked at the percentages of relevant positive reactions in boys and girls, by age groups, and recorded the outcome of treatment following patch testing. Results: One or more positive allergic reactions of current or past relevance was found in 48/110 children (44%; 29 females and 19 males). There were 94 allergy-positive patch test reactions in 110 patients: 81 had a reaction of current or past relevance, 12 had a reaction of unknown relevance, and 1 had reaction that was a cross-reaction. The commonest allergens with present or past relevance were medicaments, plant allergens, house dust mite, nickel, Amerchol® L101 (a lanolin derivative), and 2-bromo-2-nitropropane-1,3-diol. However, finding a positive allergen was not associated with a better clinical outcome. Conclusions: We have shown that patch testing can identify relevant allergens in 44% of children with eczema. The commonest relevant allergens were medicament allergens, plant allergens, house dust mite, nickel, Amerchol® L101, and 2-bromo-2-nitropropane-1,3-diol. Patch testing can be performed in children as young as 2 years with the proper preparation. abstract_id: PUBMED:38090787 A Mobile Health App for Facilitating Disease Management in Children With Atopic Dermatitis: Feasibility and Impact Study. Background: Inadequate control of atopic dermatitis (AD) increases the frequency of exacerbations and reduces the quality of life. Mobile health apps provide information and communication technology and may increase treatment adherence and facilitate disease management at home. 
The mobile health app, Atopic App, designed for patients and their caregivers, and the associated web-based patient education program, Atopic School, provide an opportunity for improving patients' and caregivers' engagement and adherence to the management of AD. Objective: This noninterventional, observational study aimed to explore the feasibility and potential impact on the management of AD in children by caregivers using the Atopic App mobile health app. Methods: The patient-oriented eczema measure (POEM) and numerical rating scale for the grading of pruritus were used as severity scores (scale range: 0-28). The artificial intelligence model of the app was used to assess the severity of AD based on the eczema area and severity index approach. The deidentified data enabled the analysis of the severity of AD, treatment plan history, potential triggers of flare-ups, usage of available features of the app, and the impact of patient education. Results: During a 12-month period, of the 1223 users who installed the app, 910 (74.4%) registered users were caregivers of children with AD. The web-based Atopic School course was accessed by 266 (29.2%) caregivers of children with AD, 134 (50.4%) of whom completed the course. Usage of the app was significantly more frequent among those who completed the Atopic School program than among those who did not access or did not complete the course (P<.001). Users who completed a second POEM 21 to 27 days apart exhibited a significant improvement of AD severity based on the POEM score (P<.001), with an average improvement of 3.86 (SD 6.85) points. The artificial intelligence severity score and itching score were highly correlated with the POEM score (r=0.35 and r=0.52, respectively). Conclusions: The Atopic App provides valuable real-world data on the epidemiology, severity dynamics, treatment patterns, and exacerbation-trigger correlations in patients with AD. The significant reduction in the POEM score among users of the Atopic App indicates a potential impact of this tool on health care engagement by caregivers of children with AD. abstract_id: PUBMED:17129899 The importance of children's illness beliefs: the Children's Illness Perception Questionnaire (CIPQ) as a reliable assessment tool for eczema and asthma. A lack of information about disease in children can lead to erroneous views such as children believing that hospital admittance or the presence of a disease is a punishment for a perceived wrong. There has thus far been no standard tool available to measure children's illness conceptualizations from a Leventhalian framework. Three groups of children with eczema, asthma and eczema and asthma between the ages of 7 and 12 years of age were recruited. Children were given the Children's Illness Perception Questionnaire (CIPQ), a 26-item instrument adapted from the Illness Perception Questionnaire for adults. A Kuder-Richardson 20 test of reliability for dichotomous data was performed allowing an estimate of the internal consistency of the measurement scales. It can be seen that, for all three illness groups, internal consistency is acceptable for the timeline and consequences scale. The cure/control scale, however, was not internally consistent for any illness group. As health professionals, we need to develop the means to further understand how paediatric illness beliefs relate to specific disease types, age and psychosocial factors and the utility of this instrument is discussed within this context.
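The CIPQ abstract above reports internal consistency from a Kuder-Richardson 20 (KR-20) test, a closed-form reliability statistic for dichotomous items. As a minimal sketch of how KR-20 is computed, assuming a small made-up matrix of yes/no responses rather than the CIPQ data, the snippet below applies the standard formula (k items, item endorsement proportions p, and the variance of the total scores):

```python
import numpy as np

def kuder_richardson_20(responses: np.ndarray) -> float:
    """KR-20 internal-consistency coefficient for dichotomous (0/1) item responses.

    responses: 2-D array of shape (n_respondents, n_items).
    """
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion endorsing each item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Illustrative only: 6 children answering 8 yes/no items (not real CIPQ data).
rng = np.random.default_rng(0)
toy = (rng.random((6, 8)) > 0.4).astype(int)
print(f"KR-20 = {kuder_richardson_20(toy):.2f}")
```

Values of roughly 0.7 or above are conventionally read as acceptable internal consistency, which is the kind of judgement the abstract makes for its timeline and consequences scales.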
abstract_id: PUBMED:32449786 Patient-Oriented Eczema Measure score: A Useful Tool for Web-Based Surveys in Patients with Atopic Dermatitis. The Patient-Oriented Eczema Measure (POEM, 0-28 points) is a self-assessed, repeatable measurement tool for measuring atopic dermatitis (AD) severity. However, whether POEM score is influenced by allergic comorbidities and whether POEM's severity banding is applicable in web-based surveys for AD remain unclear. A web-based questionnaire survey was conducted in 329 patients with AD. POEM, self-reported severity of AD, and comorbidity of allergic diseases including asthma, pollen rhinitis, allergic conjunctivitis, and food allergy were assessed. POEM scores were not affected by a history of comorbid allergic diseases. The severity banding for POEM scores on the web-based survey was as follows: clear/almost clear = 0, mild = 1-8, moderate = 9-21, and severe/very severe = 22-28, which was comparable to previous banding. These results suggest that POEM is useful for determining AD severity, even in web-based surveys. Patients with POEM scores above 9 points may be grouped into moderate, severe, and very severe AD. abstract_id: PUBMED:9496795 Knemometry in children with atopic dermatitis treated with topical glucocorticoids. Recently the knemometer, a lower leg length measuring device, has been introduced for sensitive assessment of systemic activity of exogenous glucocorticoids in children. The aim of this study was to assess by means of knemometry whether the topical glucocorticoid budesonide affects short-term growth in children with atopic dermatitis. Fourteen children 5 to 12 years old were studied in an open longitudinal trial with three periods of 2 weeks duration. In periods 1 (run-in) and 3 (run-out), the children were treated with emollient. In period 2, budesonide cream 0.025% was followed by emollient twice daily to all of the body except the face. Eczema was evaluated according to a score based on extent and activity. Knemometry was performed twice weekly. Compared to the run-in and run-out periods the mean growth rate during budesonide treatment was reduced by 0.11 mm/wk (p > .05) and 0.40 mm/wk (p < .05), respectively. The mean growth rate during run-out was increased by 0.29 mm/wk as compared to run-in (p < .05). Compared to run-in the mean severity indices during budesonide treatment and run-out were reduced by 1.55 (p < .05) and 1.55 points (p < .05), respectively. The concomitant variations in lower leg growth rate and disease activity suggest that short-term treatment with topical glucocorticoids may provide a better growth potential during the weeks after withdrawal of the treatment. Whether this is due to improved disease control needs further study. Being a noninvasive method, knemometry may be useful for comparing different topical glucocorticoids and administration regimens in children in whom vasoconstrictor assays are difficult. Answer: Accelerometers, which are used in actigraphy to record movement, have been proposed as an objective method for assessing itch in children with eczema. However, the study described in abstract PUBMED:22970691 found that actigraphy did not correlate well with disease severity or quality of life when used as an objective outcome measure in a multicenter clinical trial.
The study involved 336 participants from the Softened Water Eczema Trial (SWET), and the accelerometer data were compared with three standardized scales: the Six Area, Six Sign Atopic Dermatitis (SASSAD) severity score, the Patient Oriented Eczema Measure (POEM), and the Dermatitis Family Impact (DFI). The results showed poor convergent validity with other measures of eczema severity, with low correlation coefficients (SASSAD r = 0.15, POEM r = 0.10, and DFI r = 0.29). Additionally, the accelerometer data were not responsive to change over time, as indicated by low, negative correlations when comparing the change scores from baseline to week 12 for SASSAD, POEM, and DFI with the change in accelerometer scores. Despite the general tolerance of the units by participants, the study concluded that further work is needed to establish improved methods of distinguishing between eczema-related and eczema-nonrelated movements. Therefore, based on this study, accelerometers were not found to be a useful tool for measuring disease activity in children with eczema in a clinical trial setting.
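For context, the convergent-validity figures quoted in this answer are Spearman rank correlations between accelerometer output and the severity and quality-of-life scales, as described in PUBMED:22970691. A minimal sketch of that kind of calculation with SciPy, on hypothetical paired measurements (the simulated movement counts and severity scores below are assumptions, not SWET data), is:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired measurements for 50 children (illustrative values only):
# nightly movement counts from the accelerometer and a clinician-scored SASSAD.
rng = np.random.default_rng(1)
movement = rng.gamma(shape=2.0, scale=150.0, size=50)               # actigraphy counts
sassad = np.clip(0.01 * movement + rng.normal(0, 8, 50), 0, None)   # severity score

rho, p_value = spearmanr(movement, sassad)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
# A rho near zero, as reported in the trial, argues against convergent validity.
```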
Instruction: Is there a baseline CD4 cell count that precludes a survival response to modern antiretroviral therapy? Abstracts: abstract_id: PUBMED:12646794 Is there a baseline CD4 cell count that precludes a survival response to modern antiretroviral therapy? Objective: Therapeutic guidelines advise that 200-350 x 10(6) cells/l may approximate an irreversible threshold beyond which response to therapy is compromised. We evaluated whether non-immune-based factors such as physician experience and adherence could affect survival among HIV-infected adults starting HAART. Methods: Analysis of 1416 antiretroviral naive patients who initiated triple therapy between 1 August 1996 and 31 July 2000, and were followed until 31 July 2001. Patients whose physicians had previously enrolled six or more patients were defined as having an experienced physician. Patients who received medications for at least 75% of the time during the first year of HAART were defined as adherent. Cumulative mortality rates and adjusted relative hazards were determined for various CD4 cell count strata. Results: Among patients with < 50 x 10(6) cells/l the adjusted relative hazard of mortality was 5.07 [95% confidence interval (CI), 2.50-10.26] for patients of experienced physicians and was 11.99 (95% CI, 6.33-22.74) among patients with inexperienced physicians, in comparison to patients with > or = 200 x 10(6) cells/l treated by experienced physicians. Similarly, among patients with < 50 x 10(6) cells/l, the adjusted relative hazard of mortality was 6.19 (95% CI, 3.03-12.65) for adherent patients and was 35.71 (95% CI, 16.17-78.85) for non-adherent patients, in comparison to adherent patients with > or = 200 x 10(6) cells/l. Conclusion: Survival rates following the initiation of HAART are dramatically improved among patients starting with CD4 counts < 200 x 10(6) cells/l once adjusted for conservative estimates of physician experience and adherence. Our results indicate that the current emphasis of therapeutic guidelines on initiating therapy at CD4 cell counts above 200 x 10(6) cells/l should be re-examined. abstract_id: PUBMED:15377076 Using baseline CD4 cell count and plasma HIV RNA to guide the initiation of highly active antiretroviral therapy. Conflicting evidence regarding the impact of baseline plasma HIV RNA and CD4 cell count on survival after the initiation of highly active antiretroviral therapy (HAART) in HIV-infected patients has resulted in wide variability in the expert recommendations regarding when to start therapy. Early initiation of HAART may result in avoidable toxicities and premature evolution of resistance, whereas delaying HAART may increase the risk of opportunistic infections and/or preclude a worse virological and clinical response to therapy. While there is widespread consensus that HAART can be delayed to a CD4 cell count of 0.350 x 10(9) cells/L, the range between this threshold and 0.200 x 10(9) cells/L remains controversial. Greater uncertainty surrounds the role of baseline plasma HIV RNA, with some guidelines recommending initiating HAART when this level rises above 55,000 c/mL regardless of baseline CD4 cell count. The following review examines the evidence in support of delaying the initiation of HAART to a CD4 cell count of 0.200 x 10(9) cells/L regardless of plasma HIV RNA levels and outlines supporting data from a Canadian prospective cohort study of antiretroviral naive patients treated with HAART.
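The adjusted relative hazards in PUBMED:12646794 come from survival modelling of mortality across baseline CD4 strata with adjustment for physician experience and adherence. A hedged sketch of that style of analysis, fitting a Cox proportional hazards model with the lifelines library on a synthetic cohort (the covariate names, effect sizes, and censoring scheme below are assumptions, not the study's data), might look like this:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort for illustration only -- not the cohort analysed in PUBMED:12646794.
rng = np.random.default_rng(2)
n = 500
cd4_low = rng.integers(0, 2, n)        # 1 = baseline CD4 < 50, 0 = CD4 >= 200 (assumed coding)
adherent = rng.integers(0, 2, n)       # 1 = received drugs >= 75% of the first year
hazard = 0.02 * np.exp(1.2 * cd4_low - 0.9 * adherent)   # assumed true effects
time_to_event = rng.exponential(1.0 / hazard)            # months to death
follow_up = rng.uniform(12, 60, n)                       # administrative censoring, months

df = pd.DataFrame({
    "duration": np.minimum(time_to_event, follow_up),
    "died": (time_to_event <= follow_up).astype(int),
    "cd4_low": cd4_low,
    "adherent": adherent,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="died")
cph.print_summary()   # exp(coef) corresponds to the adjusted hazard ratios
```

The exp(coef) column of the fitted summary plays the role of the "adjusted relative hazard" the abstract reports for each stratum.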
abstract_id: PUBMED:11722270 HIV viral load response to antiretroviral therapy according to the baseline CD4 cell count and viral load. Context: It is unclear whether delay in initiation of antiretroviral therapy (ART) may lead to a poorer viral load response for patients with human immunodeficiency virus (HIV). Objective: To characterize the relationship of viral load response to ART with baseline CD4 cell count and baseline viral load. Design: Inception cohort of 3430 therapy-naive patients with HIV, of whom 3226 patients had at least 1 viral load count after the start of ART. Setting: Three cohort studies of patients cared for in HIV clinics in Europe between 1996 and 2000. Patients: All patients initiating ART consisting of at least 3 drugs initiated in or after 1996 and for whom CD4 cell count and viral load were available in the prior 6 months (at most). Main Outcome Measures: Viral load decrease to below 500 copies/mL; viral load rebound to above 500 copies/mL (2 consecutive values). Results: Of 3226 patients during the median follow-up of 119 weeks, 2741 (85%) experienced viral suppression to less than 500 copies/mL by 32 weeks. Relative hazards (RHs) of achieving this were 1.08 (95% confidence interval [CI], 0.98-1.21) and 0.94 (95% CI, 0.84-1.04) for baseline CD4 cell counts between 200 and 349 x 10(6)/L and baseline CD4 cell counts lower than 200 x 10(6)/L, respectively, compared with baseline CD4 cell counts of 350 x 10(6)/L or higher, after adjustment for several factors including baseline viral load. For baseline viral load, the RHs were 0.95 (95% CI, 0.84-1.07) and 0.65 (95% CI, 0.58-0.74), for 10 000 to 99 999 and 100 000 copies/mL or greater, respectively, compared with less than 10 000 copies/mL, but the probability of viral load lower than 500 copies/mL at week 32 was similar in all 3 groups. Subsequent rebound above 500 copies/mL was no more likely with a lower baseline CD4 cell count or higher viral load. Conclusion: In this study, lower CD4 cell counts and higher viral loads at baseline were not associated with poorer virological outcome of ART. Those with baseline viral loads of greater than 100 000 copies/mL had a slower rate of achieving viral suppression. abstract_id: PUBMED:14699459 Use of total lymphocyte count for monitoring response to antiretroviral therapy. The CD4 cell count has become a key laboratory measurement in the management of human immunodeficiency virus (HIV) disease. In ideal situations, HIV-infected persons are followed up longitudinally with serial CD4 cell counts to determine disease progression, risk for opportunistic infection, and the need for prophylactic or therapeutic intervention. However, the use of the CD4 cell count in resource-limited settings is often not possible because of lack of availability and high cost. Thus, other laboratory markers have been proposed as substitutes for the CD4 cell count. The data regarding the clinical utility of the total lymphocyte count (TLC) as a potential surrogate marker of immune function in patients with HIV disease are examined. The role of the TLC in the initiation of antiretroviral therapy and opportunistic infection prophylaxis, as well as the role of the TLC in monitoring the response to antiretroviral therapy, are also addressed. abstract_id: PUBMED:15167289 Long-term CD4+ T-cell response to highly active antiretroviral therapy according to baseline CD4+ T-cell count. 
Current treatment guidelines for HIV infection recommend a relatively late initiation of highly active antiretroviral therapy (HAART). Nevertheless, there is still a concern that immune recovery may not be as complete once CD4+ T cells have decreased below a certain threshold. This study addressed the long-term response of CD4+ T-cell counts in patients on HAART and analyzed the influence of baseline CD4+ T-cell counts, baseline viral load, and age. An observational analysis of evolution of CD4+ T cells in 861 antiretroviral therapy-naive chronic HIV-1-infected patients who started treatment consisting of at least 3 drugs in or after 1996 was performed. Patients were classified in 4 groups according to baseline CD4+ T cells: <200 cells/mm3, 200-349 cells/mm3, 350-499 cells/mm3, and >or=500 cells/mm3. The main outcome measures were proportion of patients with CD4+ T cells <200/mm3 and >500/mm3 at last determination and rate of CD4+ T-cell recovery. Patients were followed-up for a median of 173 weeks (interquartile range [IQR], 100-234). There were no differences in follow-up between the 4 groups. CD4+ T cells increased in the whole cohort from a median of 214 cells/mm3 (IQR, 90-355) to 499 cells/mm3 (IQR, 312-733) (P<0.001). Compared with the group with a baseline CD4+ T-cell count of >or=500/mm3, the relative risk of having a last determination of CD4+ T-cell counts >200 cells/mm3 was 0.79 (95% CI, 0.75-0.83), 0.92 (95% CI, 0.89-0.96) and 1 for baseline CD4+ T cells <200 cells/mm3, 200-349 cells/mm3, and 350-499 cells/mm3, respectively. The relative risk of having a last determination of CD4+ T-cell counts >500 cells/mm3 was 0.32 (95% CI, 0.27-0.39, P<0.001), 0.69 (95% CI, 0.60-0.79, P<0.001), and 0.94 (95% CI, 0.83-1.06, P=0.38) for baseline CD4+ T-cell counts <200 cells/mm3, 200-349 cells/mm3, and 350-499 cells/mm3, respectively, compared with a baseline CD4+ T-cell count of >or=500 cells/mm3. The increase in CD4+ T cells from baseline was statistically significant and was maintained for up to 4 years of follow-up. This increase seemed to slow down after approximately 3 years and reached a plateau after 4-5 years of follow-up even in patients who achieved and maintained viral suppression in plasma. Long-term immune recovery is possible regardless of baseline CD4+ T-cell count. However, patients who start therapy with a CD4+ T-cell count <200 cells/mm3 have poorer immunologic outcome as measured by the proportion of patients with CD4+ T cells <200/mm3 or >500/mm3 at last determination. It seems that the immune recovery slows down after approximately 3 years of HAART and reaches a plateau after 4-5 years of HAART. abstract_id: PUBMED:16945078 The impact of malnutrition on survival and the CD4 count response in HIV-infected patients starting antiretroviral therapy. Background: The impact that malnutrition at the time of starting antiretroviral therapy (ART) has on survival and the CD4 count response is not known. Methods: A retrospective cohort study of patients attending the national HIV referral centre in Singapore who had a CD4 count less than 250 cells/microL and a measurement of body weight performed at the time of starting ART was carried out. Demographic and clinical variables were extracted from an existing database. Body mass index (BMI) was calculated from the weight in kilograms divided by the square of the height in metres. Moderate to severe malnutrition was defined as BMI less than 17 kg/m(2).
Intent-to-treat Cox models were used to determine the predictors of survival. Results: A total of 394 patients were included in the analysis, of whom 79 died during a median study follow-up of 2.4 years. Moderate to severe malnutrition was present in 16% of patients at the time of starting ART, and was found to be a significant independent predictor of death [hazard ratio (HR) 2.19, 95% confidence interval (CI) 1.29-3.73, P=0.004 for those with BMI<17 compared with those with BMI>18.5] as were stage of disease (HR 2.47, 95% CI 1.20-5.07, P=0.014 for those who were at stage C compared with those at stage A) and the type of ART [HR 0.50, 95% CI 0.27-0.93, P=0.03 for highly active antiretroviral therapy (HAART) compared with non-HAART treatment]. Malnutrition did not impair the magnitude of the increase in CD4 count at 6 or 12 months. Conclusions: Malnutrition at the time of starting ART was significantly associated with decreased survival, but the effect appeared not to be mediated by impaired immune reconstitution. Given the increasing access to ART in developing countries and the high frequency of HIV-associated wasting, studies of nutritional therapy as an adjunct to the initiation of HAART are urgently needed. abstract_id: PUBMED:11707666 Baseline CD4(+) cell count, not viral load, correlates with virologic suppression induced by potent antiretroviral therapy. Objective: To investigate the relationship between viral load suppression and baseline viral load as well as that between viral load suppression and baseline CD4(+) cell count. Design: Meta-analysis of published and presented studies. Methods: Trials of two nucleoside analogs plus nevirapine, indinavir, nelfinavir, or efavirenz as therapy for antiretroviral treatment-naive patients with HIV infection or AIDS who were followed-up for at least 6 months were included in the meta-analysis. The proportion of patients with viral loads of <200-500 copies/ml at 6 and 12 months (total number of patients, 1619 and 761, respectively) was regressed to the mean or median baseline viral load and CD4(+) cell count. Results: Thirty-six treatment arms from 30 studies were identified. Multivariate regression demonstrated a significant correlation between baseline CD4(+) cell count and virologic suppression at 6 and 12 months (t = 2.85, p = .008; and t = 3.08, p = .010, respectively) but not between baseline viral load and virologic suppression (t = 0.92, p = .365; and t = 1.31, p = .215, respectively). The same pattern was seen in a subanalysis of trials of nevirapine-containing therapy (CD4(+) cell count: t = 2.89, p = .014 at 6 months; viral load suppression: t = 0.84, p = .415). Conclusions: Baseline CD4(+) cell count was a better predictor of virologic suppression induced by triple combination therapy than was baseline viral load. abstract_id: PUBMED:21628668 Impact of baseline HIV-1 tropism on viral response and CD4 cell count gains in HIV-infected patients receiving first-line antiretroviral therapy. Background: Viral tropism influences the natural history of human immunodeficiency type 1 (HIV-1) disease: X4 viruses are associated with faster decreases in CD4 cell count. There is scarce information about the influence of viral tropism on treatment outcomes. Methods: Baseline plasma samples from patients recruited to the ArTEN (Atazanavir/ritonavir vs. Nevirapine on a background of Tenofovir and Emtricitabine) trial were retrospectively tested for HIV-1 tropism using the genotypic tool geno2pheno(FPR=5.75%).
ArTEN compared nevirapine with atazanavir-ritonavir, both along with tenofovir-emtricitabine, in drug-naïve patients. Results: Of 569 ArTEN patients, 428 completed 48 weeks of therapy; 282 of these received nevirapine and 146 of these received atazanavir-ritonavir. Overall, non-B subtypes of HIV-1 were recognized in 96 patients (22%) and X4 viruses were detected in 55 patients (14%). At baseline, patients with X4 viruses had higher plasma HIV RNA levels (5.4 vs 5.2 log copies/mL, respectively; P = .044) and lower CD4 cell counts (145 vs 188 cells/μL, respectively; P < .001) than those with R5 strains. At week 48, virologic responses were lower in patients with X4 viruses than in patients with R5 viruses (77% vs 92%, respectively; P = .009). Multivariate analysis confirmed HIV-1 tropism as an independent predictor of virologic response at week 24 (P = .012). This association was extended to week 48 (P = .007) in clade B viruses. Conversely, CD4 cell count recovery was not influenced by baseline HIV-1 tropism. Conclusions: HIV-1 tropism is an independent predictor of virologic response to first-line antiretroviral therapy. In contrast, it does not seem to influence CD4 cell count recovery. Clinical Trials Registration: NCT00389207. abstract_id: PUBMED:14640384 Total lymphocyte count as a possible surrogate of CD4 cell count to prioritize eligibility for antiretroviral therapy among HIV-infected individuals in resource-limited settings. Objective: To characterize the value of total lymphocyte counts in predicting risk of death among patients initiating triple combination antiretroviral therapy. Methods: Study subjects included antiretroviral-naive persons aged 18 years or older who initiated treatment with triple combination therapy between August 1 1996 and September 30 1999 in a population-based observational cohort of HIV-infected individuals. Total lymphocyte counts as well as CD4 count and plasma viral load were assessed at baseline. Separate Cox proportional hazards models were devised to evaluate the effect on survival of total lymphocyte count in lieu of or with CD4 count after adjustment for other prognostic factors including plasma viral load. Results: A total of 733 antiretroviral-naive persons initiated triple drug combination antiretroviral therapy over the study period with a median follow-up of 29.5 months. In the first analysis, only baseline CD4 cell counts of 50-199 cells/microl or less than 50 cells/microl were associated with an increased risk of mortality [adjusted relative risk (ARR) 2.90; 95% CI: 1.40, 5.98] and (ARR 6.30; 95% CI: 2.93, 13.54), respectively. When CD4 counts were excluded from the analysis as if unavailable, total lymphocyte count of between 0.8 and 1.4 G/l, and less than 0.8 G/l were both significantly associated with an increased risk of mortality (ARR 2.36; 95% CI: 1.16, 4.78) and (ARR 6.17; 95% CI: 2.93, 13.01), respectively. Conclusion: Total lymphocyte count may provide a simple and cost-effective alternative for prioritizing therapy initiation in resource-limited settings. Our results suggest that, if appropriately validated, judicious application of total lymphocyte counts could overcome one of the practical obstacles to more widespread provision of antiretroviral therapy in resource-poor settings. abstract_id: PUBMED:12131562 The role of resistance characteristics of viral strains in the prediction of the response to antiretroviral therapy in HIV infection.
Objective: To study the role of resistance characteristics of viral mutants in the prediction of virologic and immunologic response to antiretroviral therapy in HIV-infection. Methods: This study is based on a mathematical model that generates viral and immunologic dynamics of HIV infection, taking into account drug-resistant mutants and therapy. We analyzed predictive factors of the increase in CD4 cell count and of the decrease in viral load from baseline after 6 months of HAART on a sample of 300 simulated individuals. The set of potential predictors was constituted by patients' state at initiation of therapy and by resistance characteristics of viral strains at that time. Predictive models, obtained by stepwise regression, were selected and compared using Mallows' Cp criterion. Results: In addition to baseline viral load and CD4 cell count, known to influence response to therapy, baseline CD8 cell count and resistance characteristics of detectable strains are shown to improve the accuracy of the prediction. On the contrary, resistance parameters of low frequency viral mutants have no predictive value. Conclusions: Characteristics of preexisting detectable resistant mutants as determinants of virologic and immunologic response to antiretroviral therapy increase the capacity to predict the outcome of the treatment. Therefore, the use of phenotypic and genotypic testing could be crucial and should be considered for the choice of therapy. Answer: Based on the provided abstracts, there does not appear to be a definitive baseline CD4 cell count that precludes a survival response to modern antiretroviral therapy (ART). Several studies suggest that even patients with very low baseline CD4 cell counts can experience significant improvements in survival and immune recovery when treated with highly active antiretroviral therapy (HAART). One study found that survival rates following the initiation of HAART were dramatically improved among patients starting with CD4 counts < 200 x 10^6 cells/l once adjusted for factors such as physician experience and adherence (PUBMED:12646794). This indicates that while lower CD4 counts at the initiation of therapy are associated with higher mortality, this can be mitigated by other factors, suggesting that there is no absolute CD4 threshold beyond which therapy is ineffective. Another study showed that long-term immune recovery is possible regardless of baseline CD4+ T-cell count, although patients who start therapy with a CD4+ T-cell count < 200 cells/mm3 may have a poorer immunologic outcome (PUBMED:15167289). This suggests that while starting HAART at higher CD4 counts may be beneficial, significant benefits are still achievable even at lower counts. Furthermore, a study examining the impact of malnutrition on survival and CD4 count response in HIV-infected patients starting ART found that malnutrition at the time of starting ART was significantly associated with decreased survival, but the effect did not seem to be mediated by impaired immune reconstitution (PUBMED:16945078). This implies that factors other than baseline CD4 count, such as nutritional status, can also influence survival outcomes. In summary, while lower baseline CD4 cell counts may be associated with poorer outcomes, the evidence does not support the existence of a specific CD4 cell count threshold that completely precludes a survival response to modern ART. 
Factors such as physician experience, adherence to therapy, and nutritional status also play critical roles in determining survival and immune recovery in patients on HAART.
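Several of the cohorts cited in this answer rely on survival analysis (cumulative mortality and Cox models) to compare outcomes across baseline CD4 strata. As a rough, purely illustrative sketch of such a comparison, the snippet below fits a Kaplan-Meier curve and a log-rank test on simulated event times for two hypothetical strata using lifelines; none of the numbers reproduce the cited studies.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy survival data for two baseline CD4 strata (assumed values, illustration only).
rng = np.random.default_rng(3)
t_low  = rng.exponential(30, 120)   # months to death, CD4 < 200 stratum
t_high = rng.exponential(60, 120)   # months to death, CD4 >= 200 stratum
obs_low  = t_low  < 48              # event observed before administrative censoring at 48 months
obs_high = t_high < 48
t_low, t_high = np.minimum(t_low, 48), np.minimum(t_high, 48)

km = KaplanMeierFitter()
km.fit(t_low, event_observed=obs_low, label="CD4 < 200")
print(km.survival_function_.tail())   # the high-CD4 stratum would be fitted the same way

result = logrank_test(t_low, t_high, event_observed_A=obs_low, event_observed_B=obs_high)
print(f"log-rank P = {result.p_value:.4f}")
```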
Instruction: Inhibin B and anti-Mullerian hormone: markers of ovarian response in IVF/ICSI patients? Abstracts: abstract_id: PUBMED:30503199 Follicular fluid humanin concentration is related to ovarian reserve markers and clinical pregnancy after IVF-ICSI: a pilot study. Research Question: Is humanin present in the human ovary and follicular fluid? What relationship exists between humanin concentration in the follicular fluid and ovarian reserve and clinical outcomes after IVF and intracytoplasmic sperm injection (ICSI)? Design: Follicular fluid samples were collected from 179 patients undergoing their first IVF or ICSI cycle during oocyte retrieval. Ovarian tissues were collected from two patients undergoing surgery for ovarian cysts. Ovarian humanin localization was analysed using immunofluorescence staining. Expression of humanin in granulosa cells was confirmed by reverse transcription polymerase chain reaction (RT-PCR) analysis. Follicular fluid humanin levels were evaluated with enzyme-linked immunosorbent assay. Relationships between follicular fluid humanin levels and ovarian reserve markers and clinical outcomes were analysed. Results: Strong humanin expression was found in the granulosa cells, oocytes and stromal cells of the ovary. Agarose gel electrophoresis of RT-PCR products showed rich humanin mRNA expression in human granulosa cells (119 bp). Follicular fluid humanin concentrations ranged from 86.40 to 417.60 pg/ml. They significantly correlated with FSH (r = -0.21; P < 0.01), LH (r = -0.18; P = 0.02), antral follicle count (r = 0.27; P < 0.01), anti-Müllerian hormone (r = 0.24; P = 0.03) and inhibin B (r = 0.46; P < 0.01) levels. Patients were subdivided into four groups according to follicular fluid humanin concentration quartiles (Q1-Q4). Patients in Q4 were more likely to achieve a pregnancy than Q1 (OR = 3.60; 95% CI 1.09 to 11.84). Conclusions: Humanin concentration in the follicular fluid was positively associated with ovarian reserve and clinical pregnancy rate. abstract_id: PUBMED:37354554 Androgen and inhibin B levels during ovarian stimulation before and after 8 weeks of low-dose hCG priming in women with low ovarian reserve. Study Question: Does 8 weeks of daily low-dose hCG administration affect androgen or inhibin B levels in serum and/or follicular fluid (FF) during the subsequent IVF/ICSI cycle in women with low ovarian reserve? Summary Answer: Androgen levels in serum and FF, and inhibin B levels in serum, decreased following 8 weeks of hCG administration. What Is Known Already: Recently, we showed that 8 weeks of low-dose hCG priming, in between two IVF/ICSI treatments in women with poor ovarian responder (anti-Müllerian hormone (AMH) <6.29 pmol/l), resulted in more follicles of 2-5 mm and less of 6-10-mm diameter at the start of stimulation and more retrieved oocytes at oocyte retrieval. The duration of stimulation and total FSH consumption was increased in the IVF/ICSI cycle after priming. Hypothetically, hCG priming stimulates intraovarian androgen synthesis causing upregulation of FSH receptors (FSHR) on granulosa cells. It was therefore unexpected that antral follicles were smaller and the stimulation time longer after hCG priming. This might indicate a different mechanism of action than previously suggested.
Study Design, Size, Duration: Blood samples were drawn on stimulation day 1, stimulation days 5-6, trigger day, day of oocyte retrieval, and oocyte retrieval + 5 days in the IVF/ICSI cycles before and after hCG priming (the control and study cycles, respectively). FF was collected from the first aspirated follicle on both sides during oocyte retrieval in both cycles. The study was conducted as a prospective, paired, non-blinded, single-center study conducted between January 2021 and July 2021 at a tertiary care center. The 20 participants underwent two identical IVF/ICSI treatments: a control cycle including elective freezing of all blastocysts and a study cycle with fresh blastocyst transfer. The control and study cycles were separated by 8 weeks (two menstrual cycles) of hCG priming by daily injections of 260 IU recombinant hCG. Participants/materials, Setting, Methods: Women aged 18-40 years with cycle lengths of 23-35 days and AMH &lt;6.29 pmol/l were included. Control and study IVF/ICSI cycles were performed in a fixed GnRH-antagonist protocol. Main Results And The Role Of Chance: Inhibin B was lower on stimulation day 1 after hCG priming (P = 0.05). Dehydroepiandrosterone sulfate (DHEAS) was significantly lower on stimulation day 1 (P = 0.03), and DHEAS and androstenedione were significantly lower on stimulation days 5-6 after priming (P = 0.02 and P = 0.02) The testosterone level in FF was significantly lower in the study cycle (P = 0.008), while the concentrations of inhibin B and androstenedione in the FF did not differ between the study and control cycles. A lower serum inhibin B in the study cycle corresponds with the antral follicles being significantly smaller after priming, and this probably led to a longer stimulation time in the study cycle. This contradicts the theory that hCG priming increases the intraovarian androgen level, which in turn causes more FSHR on developing (antral up to preovulatory) follicles. However, based on this study, we cannot rule out that an increased intra-follicular androgen level was present at initiation of the ovarian stimulation, without elevating the androgen level in serum and that an increased androgen level may have rescued some small antral follicles that would have otherwise undergone atresia by the end of the previous menstrual cycle. We retrieved significantly more oocytes in the Study cycle, and the production of estradiol per follicle ≥10-mm diameter on trigger day was comparable in the study and control cycles, suggesting that the rescued follicles were competent in terms of producing oocytes and steroid hormones. Limitations, Reasons For Caution: The sample size was small, and the study was not randomized. Our study design did not allow for the measurement and comparison of androgen levels or FSHR expression in small antral follicles before and immediately after the hCG-priming period. Wider Implications Of The Findings: The results make us question the mechanism of action behind hCG priming prior to IVF. It is important to design a study with the puncture of small antral follicles before and immediately after priming to investigate the proposed hypothesis. Improved cycle outcomes, i.e. more retrieved oocytes, must be confirmed in a larger, preferably randomized study. Study Funding/competing Interest(s): This study was funded by an unrestricted grant from Gedeon Richter awarded to the institution. A.P. 
reports personal consulting fees from PregLem SA, Novo Nordisk A/S, Ferring Pharmaceuticals A/S, Gedeon Richter Nordics AB, Cryos International, and Merck A/S outside the submitted work and payment or honoraria for lectures from Gedeon Richter Nordics AB, Ferring Pharmaceuticals A/S, Merck A/S, and Theramex and Organon & Co and payment for participation in an advisory board for Preglem. Grants to the institution have been provided by Gedeon Richter Nordics AB, Ferring Pharmaceuticals A/S, and Merck A/S, and equipment and travel support has been given to the institution by Gedeon Richter Nordics AB. The remaining authors have no conflicts of interest to declare. Trial Registration Number: ClinicalTrials.gov Identifier: NCT04643925. abstract_id: PUBMED:15521870 Inhibin B and anti-Mullerian hormone: markers of ovarian response in IVF/ICSI patients? Objective: The objective of this study was to investigate whether follicle stimulating hormone (FSH), anti-Mullerian hormone (AMH) and inhibin B could be useful in predicting the ovarian response to gonadotrophin stimulation in assisted reproduction patients who are considered to be poor responders. Design: Prospective study. Setting: Fertility unit. Sample: Blood samples were collected on day five or six in the early follicular phase of an untreated menstrual cycle. Samples were collected from 69 patients. Methods: Serum samples were assayed for FSH, AMH and inhibin B using commercial immunoassay kits. Main Outcome Measures: Response to gonadotrophin stimulation and number of eggs collected. Results: Among the 69 patients, 52 patients completed an IVF cycle and 17 patients had to cancel the cycle because of poor ovarian response to gonadotrophin stimulation. Mean FSH levels were significantly higher (P < 0.05) in the cancelled group (10.69 +/- 2.27 mIU/mL) compared with the cycle-completed group (7.89 +/- 0.78 mIU/mL). Mean AMH levels were significantly lower (P < 0.01) in the cancelled group (0.175 +/- 0.04 ng/mL) compared with the cycle-completed group (1.13 +/- 0.2 ng/mL). Mean inhibin B levels were significantly lower (P < 0.001) in the cancelled group (70 +/- 12.79 pg/mL) compared with the completed group (126.9 +/- 8.8 pg/mL). Predictive statistics show that AMH is the best single marker and that the combination of FSH, AMH and inhibin B is modestly better than the single marker. Linear regression analysis in the cycle completed patients shows that although FSH (r = 0.25, P < 0.05) and inhibin B (r = 0.35, P < 0.05) have a significant linear association with the number of eggs collected, AMH has the greatest association (r = 0.69, P < 0.001) with the number of eggs collected among the parameters measured. Conclusion: In this particular group of IVF patients, AMH is the best single marker of ovarian response to gonadotrophin stimulation. The combined markers modestly improved the prediction. abstract_id: PUBMED:25364032 Implications of Blood Type for Ovarian Reserve and Infertility - Impact on Oocyte Yield in IVF Patients. Introduction: Diminished ovarian reserve (DOR) has been linked to certain subpopulations and distinct gene polymorphisms. It has even been hypothesized that the AB0 blood group system could be linked to ovarian reserve (OR) as reflected by early follicular phase follicle stimulating hormone (FSH) levels. Although estimation of OR is routinely done using levels of anti-Müllerian hormone (AMH), FSH, estradiol or inhibin B, the diagnostic accuracy of these markers is often limited.
The aim of this study was to evaluate whether there is any correlation between IVF patients' AB0 blood group system and ART outcome. Methods: In this retrospective observational single-center study we investigated the outcome of 1889 IVF cycles carried out between 2005 and 2012 with regard to blood type and OR in different age groups (21-36 years and 37-43 years). The number of cumulus oocyte complexes (COCs) and metaphase II oocytes obtained after ovarian stimulation, fertilization rate (FR), pregnancy rate (PR) and birth rate (BR) were evaluated with respect to maternal age (21-36 and 37-43 years, respectively). Results: We found no significant differences in the average number of COCs after ovum pick-up in either of the age groups. Moreover, the mean number of MII oocytes and 2PN stages were similar for all blood type groups. As regards IVF outcome measured in terms of PR and BR, no significant differences were observed between the different blood groups. In conclusion, no correlation was found between blood type and female fertility. Discussion: The most precise definition of OR is determining the number of competent oocytes. Based on the finding of our study, the hypothesis that there is a correlation between OR and AB0 blood group system can be dismissed for Caucasian IVF patients. abstract_id: PUBMED:16891297 A systematic review of tests predicting ovarian reserve and IVF outcome. The age-related decline of the success in IVF is largely attributable to a progressive decline of ovarian oocyte quality and quantity. Over the past two decades, a number of so-called ovarian reserve tests (ORTs) have been designed to determine oocyte reserve and quality and have been evaluated for their ability to predict the outcome of IVF in terms of oocyte yield and occurrence of pregnancy. Many of these tests have become part of the routine diagnostic procedure for infertility patients who undergo assisted reproductive techniques. The unifying goals are traditionally to find out how a patient will respond to stimulation and what are their chances of pregnancy. Evidence-based medicine has progressively developed as the standard approach for many diagnostic procedures and treatment options in the field of reproductive medicine. We here provide the first comprehensive systematic literature review, including an a priori protocolized information retrieval on all currently available and applied tests, namely early-follicular-phase blood values of FSH, estradiol, inhibin B and anti-Müllerian hormone (AMH), the antral follicle count (AFC), the ovarian volume (OVVOL) and the ovarian blood flow, and furthermore the Clomiphene Citrate Challenge Test (CCCT), the exogenous FSH ORT (EFORT) and the gonadotrophin agonist stimulation test (GAST), all as measures to predict ovarian response and chance of pregnancy. We provide, where possible, an integrated receiver operating characteristic (ROC) analysis and curve of all individual evaluated published papers of each test, as well as a formal judgement upon the clinical value. Our analysis shows that the ORTs known to date have only modest-to-poor predictive properties and are therefore far from suitable for relevant clinical use. Accuracy of testing for the occurrence of poor ovarian response to hyperstimulation appears to be modest. Whether the a priori identification of actual poor responders in the first IVF cycle has any prognostic value for their chances of conception in the course of a series of IVF cycles remains to be established. 
The accuracy of predicting the occurrence of pregnancy is very limited. If a high threshold is used, to prevent couples from wrongly being refused IVF, a very small minority of IVF-indicated cases (approximately 3%) are identified as having unfavourable prospects in an IVF treatment cycle. Although mostly inexpensive and not very demanding, the use of any ORT for outcome prediction cannot be supported. As poor ovarian response will provide some information on OR status, especially if the stimulation is maximal, entering the first cycle of IVF without any prior testing seems to be the preferable strategy. abstract_id: PUBMED:21843890 Comparisons of inhibin B versus antimüllerian hormone in poor ovarian responders undergoing in vitro fertilization. Objective: To evaluate serum inhibin B as a predictor of poor ovarian response in patients undergoing in vitro fertilization/intracytoplasmic sperm injection (IVF-ICSI) and to compare it with the performance of antimüllerian hormone (AMH). Design: Meta-analysis. Setting: University hospital. Patient(s): Patients undergoing IVF. Intervention(s): None. Main Outcome Measure(s): Poor ovarian response in controlled ovarian hyperstimulation (COH). Result(s): Fifteen studies on serum inhibin B and 12 studies on AMH were selected for meta-analysis. Both basal and stimulated inhibin B levels were statistically significantly lower in poor ovarian responders than in controls. The estimated summary receiver operating characteristic (ROC) curves suggested that stimulated inhibin B was more accurate than basal inhibin B and AMH in the prediction of poor ovarian response. Conclusion(s): Both basal and stimulated serum inhibin B levels are lower in poor responders than in controls. Compared with AMH, stimulated inhibin B is a more accurate predictor of ovarian response in patients undergoing IVF, making it a potentially useful tool in future IVF practice. abstract_id: PUBMED:18024272 Correlations between anti-müllerian hormone, inhibin B, and activin A in follicular fluid in IVF/ICSI patients for assessing the maturation and developmental potential of oocytes. Objective: The objective of the present study was to evaluate the correlation between anti-müllerian hormone (AMH), inhibin B, and activin A in follicular fluid from patients receiving treatment with in-vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI), to identify a parameter to assess the maturation and developmental potential of oocytes. - Materials And Methods: AMH, inhibin B, and activin A were measured in follicular fluid from 27 patients undergoing IVF/ICSI treatment for male-factor infertility, tubal occlusion, endometriosis, or anovulation. The values were correlated with the serum estradiol level, the numbers and maturation of the oocytes, and the outcome of IVF/ICSI. - Results: A positive correlation was found between AMH in follicular fluid and the number of oocytes retrieved. High inhibin B levels in follicular fluid and high serum E2 levels indicated a normal ovarian response to stimulation, corresponding to the oocyte numbers, while low inhibin B and 17-beta-estradiol (E2) levels indicated poor responders to stimulation. An activin A/inhibin B ratio of less than 1 and very high inhibin B levels correlated with large numbers of oocytes, while a ratio of 1-2 and high inhibin levels correlated with regular numbers of oocytes. An activin/inhibin ratio of more than 3 and low inhibin levels were found in poor responders.
Pregnancies occurred predominantly in the group with a normal or high response. Patients with elevated ratios for 17-beta-estradiol/AMH, oocyte numbers/AMH, and metaphase II oocyte numbers/AMH had the best chances of becoming pregnant, indicating an inverse correlation between AMH and the maturation and developmental potential of the oocytes. - Conclusions: In IVF/ICSI patients, a positive correlation was found between AMH, inhibin B, and the activin A/inhibin B ratio in follicular fluid, on the one hand; and between serum 17-beta-estradiol levels and the numbers of oocytes retrieved, on the other. The activin A/inhibin B ratio correlated with the number of oocytes retrieved. The ratio for 17-beta-estradiol, oocyte numbers, and metaphase II oocytes relative to AMH indicated the best developmental potential, and it can therefore be assumed that there is a negative correlation between AMH levels and the maturation and quality of oocytes. abstract_id: PUBMED:24431654 "Anti-Mullerian Hormone: Marker for Ovarian Response in Controlled Ovarian Stimulation for IVF Patients": A First Pilot Study in the Indian Population. Objective: To measure the levels of early follicular phase Anti-Mullerian hormone (AMH) in Indian patients of IVF and to evaluate the AMH as a predictive marker of ovarian response in assisted reproductive technology outcome. Methods: Sixty women (age 25-40 years) selected for in vitro fertilization treatment were included in this study. Analysis of day-2 serum samples was done for the AMH, FSH, Inhibin B, and LH by ELISA kit methods. USG was done for the antral follicle count (AFC) and oocytes' retrieval. Hormone parameters were compared and correlated with the oocytes' retrieval count and the AFC. The discriminant analysis was done to compare relevance of different parameters for predicting ovarian response. Results: The Anti-Mullerian hormone showed a significant correlation with the oocytes' retrieval after ovulation induction for IVF (r = 0.648, p &lt; 0.0001) and no correlation was seen with serum FSH, LH, and Inhibin. Serum AMH levels show 80 % sensitivity and 80 % specificity in predicting poor ovarian response. Conclusions: There is a significant correlation between day-2 serum AMH levels and the oocytes' retrieval count in women undergoing ovulation induction for IVF, and the AMH is a good marker as the negative predictive values for the success of ART. There is no correlation found between other hormonal ovarian reserve markers and the oocytes' retrieval count. abstract_id: PUBMED:17114197 Evaluation of the utility of multiple endocrine and ultrasound measures of ovarian reserve in the prediction of cycle cancellation in a high-risk IVF population. Background: Unexpectedly poor response leading to IVF cycle cancellation is a distressing treatment outcome. We have prospectively assessed several markers of ovarian reserve in a high risk IVF population to determine their utility in predicting IVF cycle cancellation. Methods: Eighty-four women at high risk of cycle cancellation due to raised FSH, previous poor response and/or age &gt; or =40 years attending for high-dose short protocol IVF treatment had baseline measures of FSH, inhibin B, anti-Müllerian hormone (AMH), antral follicle count (AFC) and ovarian volume. A GnRH agonist was then administered and, 24 h later, estradiol (E(2)) and inhibin B measures were repeated. Results: Fifty-seven per cent of patients in this study had a poor response to stimulation, and 15% were cancelled. 
Using multivariate logistic regression, we found that day 3 inhibin B levels were the best predictor of cycle cancellation with an area under the receiver operating curve (ROC AUC) = 0.78 (P = 0.017). When only considering baseline variables, mean ovarian volume was the best predictor of cycle cancellation (ROC AUC = 0.78; P = 0.016). AMH concentrations were the best predictor of a poor response (P = 0.003), and AMH was also predictive of cycle cancellation (P = 0.007) with very little inter-cycle variability. None of the parameters studied were predictive of ongoing pregnancy. Conclusions: This group of at-risk patients had a high rate of poor response to stimulation and cancellation. Although several measures of ovarian reserve were able to predict cycle cancellation, none were able to predict pregnancy. AMH was predictive of both cycle cancellation and poor response with little inter-cycle variability. abstract_id: PUBMED:18387961 Relevance of anti-Mullerian hormone measurement in a routine IVF program. Background: Diminished ovarian reserve has become a major cause of infertility. Anti-Mullerian hormone (AMH) seems to be a promising candidate to assess ovarian reserve and predict the response to controlled ovarian hyperstimulation (COH). This prospective study was conducted to evaluate the relevance of AMH in a routine IVF program. Methods: Three hundred and sixteen patients were prospectively enrolled to enter their first IVF/ICSI-cycle. Age, FSH-, inhibin B- and AMH-levels and their predictive values for ovarian response and clinical pregnancy rate were compared by discriminant analyses. Results: A total of 132 oocyte retrievals were performed. A calculated cut-off level < or =1.26 ng/ml AMH alone detected poor responders (< or =4 oocytes) with a sensitivity of 97%, and there was a 98% correct prediction of normal response in COH if levels were above this threshold. With levels <0.5 ng/ml, a correct prediction of very poor response (< or =2 oocytes) was possible in 88% of cases. Levels of AMH > or =0.5 ng/ml were not significantly correlated with clinical pregnancy rates. Conclusions: AMH is a predictor of ovarian response and suitable for screening. Levels < or =1.26 ng/ml are highly predictive of reduced ovarian reserve and should be confirmed by a second line antral follicle count. Measurement of AMH supports clinical decisions, but alone it is not a suitable predictor of IVF success. Answer: Inhibin B and anti-Müllerian hormone (AMH) are indeed markers of ovarian response in IVF/ICSI patients. Inhibin B levels have been shown to correlate with ovarian reserve markers and clinical outcomes after IVF and ICSI. A study found that follicular fluid humanin concentrations, which are related to inhibin B levels, were positively associated with ovarian reserve and clinical pregnancy rate (PUBMED:30503199). Another study demonstrated that inhibin B levels were significantly lower in poor ovarian responders compared to controls, and stimulated inhibin B was more accurate than basal inhibin B and AMH in predicting poor ovarian response (PUBMED:21843890). AMH is also a well-established marker of ovarian response. It has been shown to be the best single marker of ovarian response to gonadotrophin stimulation in IVF patients, with combined markers modestly improving the prediction (PUBMED:15521870). AMH levels have been found to correlate with the number of oocytes retrieved after ovulation induction for IVF, making it a good marker for the success of ART (PUBMED:24431654).
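The cut-off, sensitivity, and ROC figures reported in abstracts such as PUBMED:18387961 and PUBMED:17114197 can be illustrated, as a sketch only, by deriving a threshold from an ROC curve with the Youden index on simulated AMH values; the data and the resulting numbers below are invented, and only the general procedure is meant to mirror that kind of analysis. Because lower AMH indicates poorer reserve, the negated marker is used as the risk score so that higher scores predict poor response.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Simulated example (not the PUBMED:18387961 cohort): AMH in ng/ml and an
# indicator of poor response (assumed here to mean <= 4 oocytes retrieved).
rng = np.random.default_rng(4)
poor = np.r_[np.ones(40), np.zeros(90)].astype(int)
amh = np.r_[rng.lognormal(-0.4, 0.5, 40), rng.lognormal(0.8, 0.5, 90)]

# Lower AMH predicts poor response, so -AMH serves as the risk score.
fpr, tpr, thresholds = roc_curve(poor, -amh)
print(f"AUC = {roc_auc_score(poor, -amh):.2f}")

# Youden index: threshold maximising sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
cutoff = -thresholds[best]
sens, spec = tpr[best], 1 - fpr[best]
print(f"cut-off ~ {cutoff:.2f} ng/ml, sensitivity {sens:.0%}, specificity {spec:.0%}")
```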
Additionally, AMH was predictive of both cycle cancellation and poor response with little inter-cycle variability in a high-risk IVF population (PUBMED:17114197). However, it is important to note that while these markers can predict ovarian response, they have modest-to-poor predictive properties for the occurrence of pregnancy (PUBMED:16891297). AMH levels greater than 0.5 ng/ml were not significantly correlated with clinical pregnancy rates, indicating that while AMH is a predictor of ovarian response and suitable for screening, it is not a suitable predictor of IVF success alone (PUBMED:18387961). In conclusion, both inhibin B and AMH are valuable markers for assessing ovarian response in IVF/ICSI patients. They can help predict the number of oocytes that can be retrieved and the likelihood of a poor ovarian response, but their ability to predict the success of IVF in terms of achieving pregnancy is limited.
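To make the screening statistics quoted in these abstracts concrete (for example, 97% sensitivity for the ≤1.26 ng/ml AMH cut-off, or 80% sensitivity and 80% specificity in the Indian pilot study), the sketch below shows how sensitivity, specificity, and negative predictive value are derived from a 2x2 table of test result versus observed poor response. This is a minimal Python illustration with hypothetical counts; the numbers are not taken from the cited studies.

    # Hypothetical 2x2 table for an AMH cut-off used to predict "poor response"
    tp, fp = 29, 10   # cut-off positive: true poor responders / normal responders
    fn, tn = 1, 92    # cut-off negative: missed poor responders / normal responders

    sensitivity = tp / (tp + fn)   # share of poor responders the cut-off detects
    specificity = tn / (tn + fp)   # share of normal responders correctly excluded
    npv = tn / (tn + fn)           # probability of normal response given a negative test
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, NPV={npv:.2f}")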
Instruction: Laparoscopic colposuspension. Is it cost-effective? Abstracts: abstract_id: PUBMED:9214328 Laparoscopic colposuspension. Is it cost-effective? Background: The laparoscopic approach must be shown to be cost-effective as well as safe and technically effective before being widely adopted. A review of 54 consecutive patients who underwent open and laparoscopic colposuspension is presented and a cost-analysis is performed comparing the two approaches. Methods: This study was a retrospective controlled review of patient records and accounts of in-hospital costs incurred at a private hospital. Results: Theater costs were significantly greater in the laparoscopic group but this was balanced by a shorter length of stay and subsequent reduced accommodation cost. There was no difference in the overall in-hospital costs between the two groups. Conclusion: The laparoscopic surgical approach is safe and effective and by no means more expensive than the open approach. In the future, the laparoscopic approach can only become more cost efficient; techniques will improve and there will be earlier returns to work and, subsequently, greater productivity. abstract_id: PUBMED:27553183 Laparoscopic Burch Colposuspension Using a 3-Trocar System: Tips and Tricks. Study Objective: To describe a technique for performing laparoscopic Burch colposuspension using a 3-trocar system. Design: This educational video provides step-by-step instructions for performing a laparoscopic Burch colposuspension. This study was exempt from institutional review board approval. Setting: Midurethral slings are an effective surgical treatment for women with stress urinary incontinence, but not all patients are candidates for, or desire, vaginal mesh. For stress incontinence, nonmesh surgical procedures include pubovaginal fascial slings and retropubic Burch colposuspension. Colposuspension may be performed via an open or laparoscopic approach. As with other minimally invasive surgeries, laparoscopic colposuspension has decreased blood loss, pain, and length of stay with equivalent outcomes at 2 years compared with open procedures. This video describes a technique for performing laparoscopic Burch colposuspension using a 3-trocar system. Interventions: A laparoscopic Burch colposuspension is described using a 3-trocar system. Detailed step-by-step instructions are given, along with visualization of pertinent anatomy. Supplies needed for this procedure include a 0-degree, 5-mm laparoscope; two 5-mm trocars, 1 to be placed in the umbilicus and 1 in the left lower quadrant; one 5/12-mm trocar to be placed in the right lower quadrant for passing needles; a closed knot pusher; laparoscopic scissors; and 2 needle drivers. This technique assumes that the primary surgeon (located on the patient's left) is right-handed and that both surgeons can suture and tie knots laparoscopically. Tips are highlighted to ensure safety and ensure successful completion of the procedure. Conclusion: Laparoscopic Burch colposuspension offers a nonmesh-based repair for women with stress urinary incontinence using a minimally invasive approach. It is a reasonable alternative to offer patients with stress urinary incontinence who do not desire repair using vaginal mesh. abstract_id: PUBMED:16956333 Cost-effectiveness analysis of open colposuspension versus laparoscopic colposuspension in the treatment of urodynamic stress incontinence. 
Objectives: To compare the cost effectiveness of laparoscopic versus open colposuspension for the treatment of female urinary stress incontinence. Design: Cost utility analysis alongside a randomised controlled trial. Setting: Six gynaecological surgical centres within the UK. Population/sample: Women with proven stress urinary incontinence requiring surgery. Methods: Open abdominal retropubic colposuspension or laparoscopic colposuspension carried out by experienced surgeons. Main Outcome Measures: Cost, measured in pounds sterling, and generic health-related quality of life, measured using the EQ-5D. The latter was used to estimate patient-specific quality-adjusted life years (QALYs). Results: Healthcare resource use over 6-month follow up translated into costs of £1805 for the laparoscopic arm and £1433 for the open arm (differential mean cost £372; 95% credibility interval [CrI]: 274-471). At 6 months, QALYs were slightly higher in the laparoscopic arm relative to the open arm (0.005; 95% CrI: -0.012 to 0.023). Therefore, the cost of each extra QALY in the laparoscopic group (the incremental cost-effectiveness ratio [ICER]) was £74,400 at 6 months. At 24 months, the laparoscopic arm again had a higher mean QALY score compared to the open surgery group. Thus, assuming that beyond 6 months the laparoscopic colposuspension would not lead to any significant additional costs compared with open colposuspension, the ICER was reduced to £9300 at 24 months. Extensive sensitivity analyses were carried out to test assumptions made in the base case scenario. Conclusions: Laparoscopic colposuspension is not cost effective when compared with open colposuspension during the first 6 months following surgery, but it may be cost effective over 24 months. abstract_id: PUBMED:33032847 Laparoscopic latero-abdominal colposuspension: Description of the technique, advantages and preliminary results. Introduction: There are currently various fixation or suspension techniques for pelvic organ prolapse (POP) surgery. Laparoscopic colposacropexy is considered the gold standard. We present the surgical steps of the laparoscopic latero-abdominal colposuspension (LACS) technique and the preliminary results obtained. Material And Methods: Patients with anterior and/or apical compartment symptomatic POP undergoing LACS are included. The Baden-Walker scale, the Overactive Bladder Questionnaire-Short Form (OAB-q SF), the Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire (PISQ-12) and the Patient Global Impression of Improvement (PGI-I) scale were used to assess the degree of prolapse, urinary filling and sexual symptoms and the level of satisfaction before and after surgery, respectively. Conventional laparoscopic material and a polyvinylidene fluoride (PVDF) mesh were used. Results: Eighteen patients were included with a minimum follow-up time of 6 months. The mean surgical time was 70.3 ± 23.8 min. Anatomic correction of prolapse was seen in all cases. Only one recurrence was detected. High levels of patient satisfaction were achieved. Conclusion: LACS allowed the anatomical reconstruction of the pelvic floor and proved to be a minimally invasive, fast, effective, safe and reproducible technique. More series are needed to evaluate its role against laparoscopic colposacropexy. abstract_id: PUBMED:24907550 Technical video: modified laparoscopic colposuspension.
Background: Laparoscopic colposuspension has been shown in some studies to have results equivalent to open colposuspension, and in addition to treating stress incontinence can also reduce anterior vaginal wall compartment prolapse, as described by Burch in 1961 [1]. Study Objective: To demonstrate a novel modified technique for laparoscopic colposuspension. Design: Narrated step-by-step video demonstration of the modified laparoscopic colposuspension technique. Setting: Department of Obstetrics and Gynecology, Royal Surrey County Hospital. Intervention: Initially, 180 mL methylene blue with saline solution is instilled into the bladder for clear identification. Incision and dissection bilaterally, directly onto the ileopectineal ligament (Cooper's ligament), are performed. By using the Kent dissecting knotter, dissection down the space of Retzius to the paravaginal tissues is easily performed. Two 0 Ethibond sutures (Ethicon, Inc., Somerville, NJ) are then placed on each side, between the Cooper's ligament and the paravaginal tissues. These are tied via an extracorporeal knot using the other end of the Kent dissecting knotter. The peritoneal defects are then closed sequentially using 2/0 polyglactin 910 sutures (Vicryl; Ethicon) in a figure-of-eight intracorporeal surgical slip knot technique. Main Results: The patient had second-degree anterior wall prolapse with proven stress incontinence and descent of the bladder neck observed on video urodynamics. At 8 months after surgery she had no symptomatic or measurable prolapse and no stress incontinence. Conclusion: This modified laparoscopic colposuspension procedure can be used in most cases because it is a transperitoneal technique. It requires substantially less dissection than the traditional techniques do, which results in a markedly reduced operative time. abstract_id: PUBMED:9277654 Open compared with laparoscopic approach to Burch colposuspension: a cost analysis. Objective: To compare postoperative course and hospital charges of an open versus laparoscopic approach to Burch colposuspension for the treatment of genuine stress urinary incontinence. Methods: A retrospective chart review was performed to identify all patients undergoing open or laparoscopic Burch colposuspension by the same surgeon over a 2-year period. Patients undergoing additional surgical procedures at the time of colposuspension were excluded from the study. Twenty-one patients underwent open Burch colposuspension and 17 patients underwent laparoscopic colposuspension. Demographic data including age, parity, height, and weight were collected for each group. Both groups also were compared with regard to operative time, operating room charges, estimated blood loss, intraoperative complications, change in postoperative hematocrit, time required to resume normal voiding, length of hospital stay, and total hospital charges. Results: The laparoscopic colposuspension group had significantly longer operative times (110 versus 66 minutes, P < .01) and increased operating room charges ($3479 versus $2138, P < .001). There was no statistical difference in estimated blood loss or change in postoperative hematocrit between the two groups. No major intraoperative complications occurred in either group. Mean length of hospital stay was 1.3 days for the laparoscopic group and 2.1 days for the open group (P < .005). However, total hospital charges for the laparoscopic group were significantly higher ($4960 versus $4079, P < .01).
Conclusion: Laparoscopic colposuspension has been described as a minimally invasive, cost-effective technique for the surgical correction of stress urinary incontinence. Although the laparoscopic approach was found to be associated with a reduction in length of hospital stay, it had significantly higher total hospital charges than the traditional open approach because of expenses associated with increased operative time and use of laparoscopic equipment. abstract_id: PUBMED:9139540 Laparoscopic-extraperitoneal colposuspension Female stress urinary incontinence is a highly prevalent disease with a broad range of surgical approaches with different percentages of success depending on the severity of the condition and the appropriateness of the technique in the indication. A few years ago, laparoscopy was introduced as another therapeutical possibility. The paper presents the Burch-like colposuspension through an extraperitoneal laparoscopic approach for the first time in our country, describing the technique and the preliminary results with a mean follow-up of over 6 months. Our preliminary results, as well as the more numerous from other authors seem to indicate that when a Burch-like colposuspension is indicated, extraperitoneal laparoscopic approach may be the ideal one once the learning curve for laparoscopic surgery is overcome. abstract_id: PUBMED:34712598 Osteitis pubis following laparoscopic Burch colposuspension: A case report. Osteitis pubis is a condition which predominantly affects young athletes. However, it may also occur following uro-gynecological interventions. We report a case of osteitis pubis following laparoscopic Burch colposuspension. There are several theories on the pathogenesis of postoperative osteitis pubis and a wide variety of treatment options have shown inconsistent outcomes. In our case, the condition was diagnosed radiologically and was managed with antibiotics and analgesics, which resulted in complete recovery. abstract_id: PUBMED:30620096 Burch colposuspension. Aims: To evaluate the historic and pathophysiologic issues which led to the development of Burch colposuspension, to describe anatomic and technical aspects of the operation and to provide an update on current evidence. Methods: We have performed a focused literature review and have searched the current available literature about historic dimension, technical descriptions, and efficacy of Burch colposuspension. Results: Burch colposuspension, performed either by an open or a laparoscopic approach, is an effective surgical treatment for stress urinary incontinence. Conclusions: In current recommendations, Burch colposuspension remains an option for secondary treatment. Because midurethral slings have recently become under scrutiny, it may return as a first-line treatment procedure. Both open and laparoscopic Burch colposuspension should therefore nowadays be provided in fellowship programs worldwide. abstract_id: PUBMED:30498710 Which method of distal pancreatectomy is cost-effective among open, laparoscopic, or robotic surgery? Background: The aim of this study was to analyze the clinical and economic impact of robotic distal pancreatectomy, laparoscopic distal pancreatectomy, and open distal pancreatectomy. Methods: All consecutive patients who underwent distal pancreatic resection for benign and malignant diseases between January 2012 and December 2015 were prospectively included. Cost analysis was performed; all charges from patient admission to discharge were considered. 
Results: There were 21 robotic (RDP), 25 laparoscopic (LDP), and 43 open (ODP) procedures. Operative time was longer in the RDP group (RDP = 345 min, LDP = 306 min, ODP = 251 min, P = 0.01). Blood loss was higher in the ODP group (RDP = 192 mL, LDP = 356 mL, ODP = 573 mL, P = 0.0002). Spleen preservation was more frequent in the RDP group (RDP = 66.6%, LDP = 61.9%, ODP = 9.3%, P = 0.001). The rate of patients with Clavien-Dindo > grade III complications was higher in the ODP group (RDP = 0%, LDP = 12%, ODP = 23%, P = 0.01), especially for non-surgical complications, which were more frequent in the ODP group (RDP = 9.5%, LDP = 24%, ODP = 41.8%, P = 0.02). Length of hospital stay was increased in the ODP group (ODP = 19 days, LDP = 13 days, RDP = 11 days, P = 0.007). The total cost of the procedure, including the surgical procedure and postoperative course, was higher in the ODP group (ODP = 30,929 Euros, LDP = 22,150 Euros, RDP = 21,219 Euros, P = 0.02). Conclusions: Cost-effectiveness results of RDP seem to be similar to those of LDP with some better short-term outcomes. Answer: The cost-effectiveness of laparoscopic colposuspension has been evaluated in several studies with varying conclusions. One retrospective controlled review comparing open and laparoscopic colposuspension found no difference in overall in-hospital costs between the two groups, suggesting that the laparoscopic approach is by no means more expensive than the open approach (PUBMED:9214328). Another study, which provided a technique for laparoscopic Burch colposuspension using a 3-trocar system, highlighted the benefits of decreased blood loss, pain, and length of stay with equivalent outcomes at 2 years compared with open procedures (PUBMED:27553183). However, a cost utility analysis alongside a randomized controlled trial indicated that laparoscopic colposuspension was not cost-effective compared with open colposuspension during the first 6 months following surgery. It suggested that it may become cost-effective over 24 months, assuming no significant additional costs beyond 6 months (PUBMED:16956333). Another study reported that laparoscopic colposuspension had significantly higher total hospital charges than the traditional open approach due to expenses associated with increased operative time and use of laparoscopic equipment, despite a reduction in the length of hospital stay (PUBMED:9277654). In contrast, a description of the laparoscopic latero-abdominal colposuspension (LACS) technique and its preliminary results suggested that LACS is a minimally invasive, fast, effective, safe, and reproducible technique, although more series are needed to evaluate its role against laparoscopic colposacropexy (PUBMED:33032847). A technical video on a modified laparoscopic colposuspension technique also suggested reduced operative time compared to traditional techniques (PUBMED:24907550). In summary, the cost-effectiveness of laparoscopic colposuspension is not definitive and seems to depend on various factors, including the time frame of the cost analysis, the surgical technique used, and the costs associated with operative time and equipment. While some studies suggest that it may be cost-effective in the long term or with certain techniques, others indicate higher costs associated with the laparoscopic approach, especially in the short term.
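The ICER figures cited from the randomised trial (PUBMED:16956333) follow directly from the incremental cost divided by the incremental QALYs. A minimal Python sketch of that arithmetic is shown below; the 6-month inputs are the published point estimates, while the roughly 0.04 QALY gain at 24 months is only an implied value back-calculated from the reported £9300 ICER under the trial's assumption of no additional cost after 6 months.

    # ICER = (difference in mean cost) / (difference in mean QALYs)
    delta_cost = 372.0          # GBP, laparoscopic minus open at 6 months
    delta_qaly_6m = 0.005       # QALY difference at 6 months
    icer_6m = delta_cost / delta_qaly_6m
    print(icer_6m)              # 74400.0 GBP per QALY, matching the reported 6-month ICER

    # QALY gain implied by the reported 24-month ICER of ~9300 GBP/QALY
    implied_delta_qaly_24m = delta_cost / 9300.0
    print(round(implied_delta_qaly_24m, 3))   # ~0.04 QALYs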
Instruction: Surgical therapy for adult moyamoya disease. Can surgical revascularization prevent the recurrence of intracerebral hemorrhage? Abstracts: abstract_id: PUBMED:8711799 Surgical therapy for adult moyamoya disease. Can surgical revascularization prevent the recurrence of intracerebral hemorrhage? Background And Purpose: It is well recognized that revascularization surgery using direct and/or indirect bypass provides effective surgical management for pediatric moyamoya disease. However, surgical treatment of the adult hemorrhagic type remains controversial. In this study, the effect of surgery for adult moyamoya disease was investigated. Methods: We analyzed 35 patients with adult moyamoya disease (patient age, over 20 years), 24 patients with initial onset of intracerebral hemorrhage, and 11 patients with initial onset of cerebral ischemia, who underwent both direct bypass surgery of the superficial temporal artery to the middle cerebral artery anastomosis and indirect revascularization of encephalo-duro-arteriomyo-synangiosis. Results: Of 24 patients with hemorrhagic-type disease, 3 showed rebleeding; of 11 patients with the ischemic type, 2 showed intracerebral hemorrhage after surgery. Overall, 5 of 35 patients (14.3%) had hemorrhage after revascularization surgery (mean follow-up period, 6.4 years). Postoperative angiography revealed that direct anastomosis is effective whereas indirect revascularization is not always effective for adult moyamoya disease. Moyamoya vessels, which are supposed to be responsible for hemorrhage, decreased in 25% of patients. Conclusions: Revascularization surgery cannot always prevent rebleeding. However, a decrease in moyamoya vessels was induced by surgery, which may reduce the risk of hemorrhage more effectively than conservative treatment. In cases of adult moyamoya disease, direct bypass is particularly important, since indirect revascularization is not as useful in adult cases as in pediatric cases. abstract_id: PUBMED:34859335 Surgical revascularization vs. conservative treatment for adult hemorrhagic moyamoya disease: analysis of rebleeding in 322 consecutive patients. Whether surgical revascularization can prevent recurrent hemorrhage in hemorrhagic moyamoya disease (HMD) patients remains a matter of debate. This study mainly compares the treatment effect of surgical revascularization with that of conservative treatment in adult HMD patients. We retrospectively enrolled 322 adult HMD patients, including 133 in the revascularization group and 189 in the conservative group. The revascularization group included patients who underwent combined (n = 97) or indirect revascularization alone (n = 36). Ninety-two and forty-one patients underwent unilateral and bilateral revascularization, respectively. The modified Rankin scale (mRS) was used to assess functional status. The comparison was made based on the initial treatment paradigm in two categories: (1) revascularization vs. conservative, (2) unilateral vs. bilateral revascularization. The rebleeding rate was significantly lower in the revascularization group than in the conservative group (14.3% vs. 27.0%, P = 0.007). As for the functional outcomes, the average mRS was significantly better in the revascularization group (1.7 ± 1.5) than in the conservative group (2.8 ± 1.9) (P < 0.001). The death rate in the revascularization group was 8.3% (11/133), compared with 20.1% (38/189) in the conservative group (P = 0.004).
When unilateral and bilateral revascularization were compared within the revascularization group, the annual rebleeding rate was lower in the bilateral group (0.5%/side-year) than in the unilateral group (3.3%/side-year) (P = 0.001). This study demonstrated better treatment efficacy of surgical revascularization than conservative treatment in HMD patients, with regard to both rebleeding rate and mortality rate. Furthermore, bilateral revascularization seems more effective in preventing rebleeding than unilateral revascularization. abstract_id: PUBMED:25655688 The importance of encephalo-myo-synangiosis in surgical revascularization strategies for moyamoya disease in children and adults. Objective: The optimal surgical procedure (direct, indirect, or combined anastomosis) for management of moyamoya disease is still debated. We evaluated the outcome of our broad area revascularization protocol, the Tokyo Daigaku (The University of Tokyo) (TODAI) protocol, analyzing the relative importance of direct, indirect, and combination revascularization strategies to identify the optimal surgical protocol. Methods: The TODAI protocol was used to treat 65 patients with moyamoya disease (91 hemispheres, including 48 in 29 childhood cases collected during 1996-2012). The TODAI protocol combined direct superficial temporal artery (STA)-middle cerebral artery (MCA) anastomosis with indirect revascularization using encephalo-myo-synangiosis (EMS) for patients ≥10 years old or indirect revascularization using encephalo-duro-arterio-synangiosis (EDAS) with EMS for patients ≤9 years old. Clinical outcome was evaluated retrospectively. Digital subtraction angiography was performed for postoperative evaluation of revascularization in 47 patients (62 hemispheres; 27 adults and 35 children). Based on the relative contribution of additional flow from each revascularization path, 4 revascularization patterns were established. Results: The mean follow-up period was 90 months in children and 72 months in adults. Perioperative complications were seen in 4 of 48 operations in children and 1 of 43 operations in adults. Except for 1 child with recurrent transient ischemic attacks and 1 adult with intracerebral hemorrhage, the patients showed excellent clinical outcomes. Postoperative digital subtraction angiography evaluation showed that in STA-MCA anastomosis + EMS cases (34 hemispheres; 25 adults and 9 children), STA-MCA anastomosis provided greater revascularization than EMS (STA-MCA anastomosis > EMS) in 7 hemispheres, the opposite was true (STA-MCA anastomosis < EMS) in 14 hemispheres, an equivalent contribution to revascularization (STA-MCA anastomosis ≈ EMS) was present in 12 hemispheres, and no functioning anastomosis was present in 1 hemisphere. In cases of EDAS + EMS (28 hemispheres; 2 adults and 26 children), all hemispheres showed revascularization: EDAS was dominant to EMS (EDAS > EMS) in 1 hemisphere, the opposite (EMS > EDAS) was true in 14 hemispheres, and EDAS was equivalent to EMS (EDAS ≈ EMS) in 13 hemispheres. EMS plus direct or indirect anastomosis is an effective surgical procedure in adults and children. Conclusions: The TODAI protocol provided efficient revascularization and yielded excellent results in preventing strokes in patients with moyamoya disease with very few complications. EMS had a main role in revascularization in each of the combined techniques.
abstract_id: PUBMED:26159234 Clinical features and outcomes in 154 patients with haemorrhagic moyamoya disease: comparison of conservative treatment and surgical revascularization. Objectives: Rebleeding is an unsatisfactory outcome for patients with haemorrhagic MMD. This study mainly investigated clinical features and outcomes in haemorrhagic MMD. Methods: A retrospective review was performed on a total of 154 patients with haemorrhagic MMD comprising 126 surgically treated and 28 conservatively treated patients. Results: There were 102 female and 52 male patients with a mean age at the initial bleeding of 33.95 years. Preoperative rebleeding occurred in 37 patients, and multivariate Cox regression analysis demonstrated that age at the time of initial bleeding (P < 0.001, HR = 1.093) was a risk factor for preoperative rebleeding. Of 124 patients with surgical revascularization, perioperative ischaemic stroke occurred in five (4.03%) and intracranial bleeding in four (3.23%). The mean follow-up period was 36.12 months. Recurrent bleeding occurred in six (10.17%) of 59 patients treated with direct revascularization, seven (20.69%) of 34 patients treated with indirect revascularization, two (6.45%) of 31 patients treated with combined revascularization and six (21.43%) of 28 patients treated conservatively. Kaplan-Meier analysis revealed no statistical differences in preventing rebleeding between direct, indirect and combined revascularization and conservative treatment (P = 0.311). Conclusions: Age at the initial bleeding is a risk factor for rebleeding in haemorrhagic MMD. Although surgical revascularization shows a tendency to decrease the rebleeding rate, there is no statistical difference between direct revascularization, indirect revascularization, combined revascularization and conservative treatment in preventing rebleeding. Further study is needed to determine whether surgical revascularization is effective in a select population or with certain techniques. abstract_id: PUBMED:29061453 Prevention of the Rerupture of Collateral Artery Aneurysms on the Ventricular Wall by Early Surgical Revascularization in Moyamoya Disease: Report of Two Cases and Review of the Literature. Background: Collateral artery aneurysms are a source of intracranial hemorrhage in moyamoya disease. Several reports have shown that surgical revascularization leads to the obliteration of collateral artery aneurysms. However, its effect on the prevention of rebleeding has not been established, and the optimal timing of the operation remains unclear. The purpose of the present study is to evaluate the effects of surgical revascularization and to investigate the optimal operation timing in patients with moyamoya disease who have ruptured collateral artery aneurysms on the ventricular wall. Case Description: Two patients with moyamoya disease who presented with intraventricular hemorrhage caused by rupture of collateral artery aneurysms on the wall of the lateral ventricle are presented here. In both cases, the aneurysms reruptured approximately 1 month after the initial hemorrhage. Both patients successfully underwent superficial temporal artery-middle cerebral artery anastomosis combined with indirect bypass in the subacute stage. The aneurysms decreased with the development of collateral circulation through the direct bypasses, and rebleeding did not occur after the surgery.
Conclusions: Because ruptured collateral artery aneurysms on the wall of the lateral ventricle in moyamoya disease are prone to rerupture within 1 month, surgical revascularization may be recommended as soon as the patients are stable and able to withstand the operation. abstract_id: PUBMED:38006812 Outcomes after surgical revascularization for adult Moyamoya disease: A Southeast Asian tertiary centre experience. There are numerous studies on the natural history and outcomes of adult Moyamoya disease (MMD) in the literature, but limited data from Southeast Asian cohorts. Hence, we aimed to retrospectively review the clinical characteristics and outcomes after surgical revascularization for adult MMD in our Southeast Asian cohort. Patients were included if they were above 18 years old at the first surgical revascularization for MMD, and underwent surgery between 2012 and 2022 at the National University Hospital, Singapore. The outcomes were transient ischemic attack (TIA), ischemic stroke, intracerebral hemorrhage, and all-cause mortality during the postoperative follow-up period. In total, 26 patients who underwent 27 revascularization procedures were included. Most patients were of Chinese ethnicity, and the mean (SD) age at the time of surgery was 47.7 (12.6) years. The commonest clinical presentation was intracerebral hemorrhage, followed by TIA and ischemic stroke. Direct revascularization with superficial temporal artery-middle cerebral artery (STA-MCA) bypass was the most common procedure (24/27 surgeries, 88.9%). The mean (SD) follow-up duration was 4.2 (2.5) years, during which the overall incidence of postoperative TIA/stroke was 25.9% (7/27 surgeries), with most cases occurring within 7 days postoperatively. There were no mortalities during the postoperative follow-up period. Risk factors for 30-day postoperative TIA/stroke included a higher number of TIAs/strokes preoperatively (p = 0.044) and indirect revascularization (p = 0.028). Diabetes mellitus demonstrated a trend towards an increased risk of 30-day postoperative TIA/stroke, but this was not statistically significant (p = 0.056). These high-risk patients may benefit from more aggressive perioperative antithrombotic and hydration regimens. abstract_id: PUBMED:32717035 Incidental De Novo Cerebral Microhemorrhages are Predictive of Future Symptomatic Macrohemorrhages After Surgical Revascularization in Moyamoya Disease. Background: Patients with moyamoya disease who develop incidental cerebral microhemorrhages (CMHs) on magnetic resonance imaging (MRI) have a higher risk of developing subsequent symptomatic repeat macrohemorrhages. Objective: To evaluate the effect of surgical revascularization on development of de novo CMHs and assess its correlation with repeat hemorrhage rates and functional outcome in hemorrhagic onset moyamoya disease (HOMMD). Methods: We retrospectively reviewed a prospectively managed departmental database of all patients presenting with HOMMD treated between 1987 and 2019. The search yielded 121 patients with adequate MRI follow-up for inclusion into the study. Results: In total, 42 preoperative CMHs were identified in 18 patients (15%). Patients presenting with preoperative CMH were more likely to develop de novo CMH after surgical revascularization. Seven de novo CMHs were identified in 6 patients (5%) on routine postoperative MRI at distinct locations from previous sites of hemorrhage or CMH. Symptomatic repeat macrohemorrhage was confirmed radiographically in 15 patients (12%).
A total of 5 (83%) of the 6 patients with de novo CMHs later suffered symptomatic repeat macrohemorrhage, with 4 of 5 (80%) hemorrhages occurring at sites of previous CMH. On univariate and multivariate analysis, de novo CMH was the only significant variable predictive of developing repeat symptomatic hemorrhage. Development of delayed repeat symptomatic hemorrhage was prognostic for a higher modified Rankin Score and therefore poorer functional status, whereas preoperative functional status was predictive of final outcome. Conclusion: De novo CMHs after surgical revascularization might serve as a radiographic biomarker for refractory disease and suggest patients are at risk for future symptomatic macrohemorrhage. abstract_id: PUBMED:28600750 Intra-operative hemorrhage due to hyperperfusion during direct revascularization surgery in an adult patient with moyamoya disease: a case report. Hemorrhagic complication is one of the notable surgical complications of revascularization surgery for moyamoya disease (MMD). Cerebral hyperperfusion (CHP) has been considered the underlying cause of this complication. It mostly occurs several days after surgery, but intra-operative hemorrhage immediately after bypass has not been reported previously. A 21-year-old woman presented with right thalamic hemorrhage and was diagnosed as having MMD by cerebral angiography. In light of the location of the hemorrhage in the vascular territory of the posterior circulation and the manifestation of transient ischemic attack during the follow-up period, she underwent revascularization surgery to prevent future ischemic attack and rebleeding. The superficial temporal artery (STA) was uneventfully anastomosed to the temporal M4 branch of the middle cerebral artery in an end-to-side manner. A few minutes after the completion of the anastomosis, hemorrhage occurred in the fissure adjacent to the site of anastomosis. Indocyanine green (ICG) video angiography just before hemorrhage showed focal early filling through the STA graft with early venous filling around the site of the anastomosis. The bleeding was controlled by immediate hypotensive therapy (systolic blood pressure 117 to 91 mmHg). The mean blood flows of the STA graft measured by ultrasonic flowmetry before and after hypotensive therapy were 52.8 and 24.2 ml/min, respectively. Single-photon emission computed tomography (SPECT) on the day after surgery showed focal hyperperfusion on the surgical side. Intra-operative ultrasonic flowmetry, ICG, and postoperative SPECT would suggest that CHP was the potential cause of the hemorrhagic complication. This is the first case describing an intra-operative hemorrhagic complication during revascularization surgery for MMD. Surgeons need to be aware of this rare complication and its management method. abstract_id: PUBMED:23498370 Quality of life and psychological impact in adult patients with hemorrhagic moyamoya disease who received no surgical revascularization. Objectives: Surgical treatment for adult hemorrhagic moyamoya disease (MMD) remains controversial. A large proportion of Chinese adult patients with hemorrhagic MMD still choose conservative treatment. In this study, we assessed psychological function and quality of life (QoL) in adult patients with hemorrhagic MMD who received no surgical revascularization.
Methods: 26 adult patients with hemorrhagic MMD who presented with only intraventricular hemorrhage (IVH), 20 patients with spontaneous IVH whose DSA results were negative and 30 healthy controls were identified and matched for age, gender, living area, etc. Psychological function and QoL were evaluated by the Short Form-36 (SF-36), Symptom Check List 90 (SCL-90), Self-rating Depression Scale (SDS), Self-rating Anxiety Scale (SAS) and a daily life questionnaire, respectively, one year after the initial stroke. A multiple logistic regression model was built to screen out the independent risk factors related to depression and reduced QoL. Results: A heavier social and mental burden was observed in adult patients with hemorrhagic MMD compared with the patients with spontaneous IVH. The QoL of cases was particularly reduced in the psychological domains. 19 (73%) cases developed depression, indicating a probable higher incidence of psychological disorder in Asian patients. Multiple logistic regression analysis suggested that the independent risks of reduced QoL and depression involved personality type and education background. Conclusions: Our data revealed that poor education background or an introverted personality type may contribute to the development of depression, with associated QoL impairment, in Chinese adult hemorrhagic MMD patients who received no surgical revascularization. The treatment decisions for these patients should consider the possible improvement of QoL. abstract_id: PUBMED:38267055 Long-term Outcomes of Combined Revascularization Surgery for Moyamoya Disease in the Elderly: A Single Institute Experience. The opportunity to treat older patients with Moyamoya disease (MMD) is increasing. However, the surgical outcomes after combined direct and indirect revascularization for elderly patients with MMD are not fully understood, especially for those ≥60 years old. This retrospective study examined 232 consecutive hemispheres of 165 adults with MMD who underwent combined revascularization. Clinical features and surgical outcomes were compared between the elderly (≥60 years) and nonelderly group (<60 years). Thirteen (5.6%, 64.4 ± 4.0 years old) and 219 hemispheres (94.4%, 40.2 ± 10.8 years old) were included in the elderly and nonelderly group, respectively. The proportion of clinical presentations before surgery did not differ. However, the prevalence of hypertension and hyperlipidemia was significantly higher in the elderly group than in the nonelderly group. Meanwhile, hyperthyroidism was observed only in the nonelderly group. No significant intergroup differences were observed in the incidence of perioperative complications occurring within four weeks postsurgery. Notably, the elderly group was more prone to develop perioperative intracerebral hemorrhage (odds ratio (OR) 3.14, 95% confidence interval (CI) 0.45-13.5) than the nonelderly group. During a median follow-up period of 7.8 years, the incidence of stroke recurrence occurring later than four weeks postsurgery was not significantly different between the groups (hazard ratio, 1.19; 95% CI 0.133-10.6). The prevalence of independent outcomes (76.9% vs. 90.4%, P = 0.14) and mortality (7.7% vs. 1.4%, P = 0.21) did not differ significantly between the elderly and nonelderly groups, respectively. Perioperative intracerebral hemorrhage may be common in the elderly and should be considered to achieve a favorable surgical outcome.
Answer: Surgical revascularization for adult moyamoya disease has been a topic of investigation to determine its efficacy in preventing the recurrence of intracerebral hemorrhage. Studies have shown varying results regarding the effectiveness of surgical interventions. One study analyzed 35 patients with adult moyamoya disease who underwent both direct bypass surgery and indirect revascularization. The results indicated that 14.3% of patients experienced hemorrhage after revascularization surgery over a mean follow-up period of 6.4 years. The study concluded that while revascularization surgery may not always prevent rebleeding, it could reduce the risk of hemorrhage more effectively than conservative treatment, especially with direct bypass being particularly important in adult cases (PUBMED:8711799). Another study involving 322 adult hemorrhagic moyamoya disease patients compared surgical revascularization with conservative treatment. The rebleeding rate was significantly lower in the revascularization group (14.3%) compared to the conservative group (27.0%). Additionally, the study found that bilateral revascularization was more effective in preventing rebleeding than unilateral revascularization (PUBMED:34859335). The importance of encephalo-myo-synangiosis (EMS) in surgical revascularization strategies was highlighted in a study that used the Tokyo Daigaku (TODAI) protocol, combining direct and indirect revascularization. The study reported excellent clinical outcomes with very few complications, suggesting that EMS plays a significant role in revascularization in both adults and children (PUBMED:25655688). However, another study comparing conservative treatment and surgical revascularization in 154 patients with hemorrhagic moyamoya disease found no statistical differences in preventing rebleeding among different surgical methods and conservative treatment (PUBMED:26159234). A case report suggested that early surgical revascularization might prevent the rerupture of collateral artery aneurysms on the ventricular wall in moyamoya disease, recommending surgical intervention as soon as patients are stable enough for the operation (PUBMED:29061453). In summary, while surgical revascularization, particularly direct bypass and bilateral revascularization, appears to reduce the risk of rebleeding in adult moyamoya disease compared to conservative treatment, it may not completely prevent it.
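The rebleeding figures cited in this answer are reported either as cumulative proportions (e.g., 14.3% vs. 27.0%) or as annualized rates per side-year (0.5%/side-year vs. 3.3%/side-year). The annualized form is simply events divided by accumulated follow-up time. The short Python sketch below illustrates that calculation with hypothetical numbers; it is not derived from the raw data of the cited cohorts.

    # Annualized rebleeding rate = number of rebleeds / accumulated follow-up time
    def annual_rate_per_side_year(events, revascularized_sides, mean_follow_up_years):
        side_years = revascularized_sides * mean_follow_up_years
        return events / side_years

    # Hypothetical example: 4 rebleeds among 41 bilaterally revascularized patients
    # (82 treated sides) followed for a mean of 5 years.
    rate = annual_rate_per_side_year(4, 82, 5.0)
    print(f"{rate * 100:.1f}% per side-year")   # about 1.0% per side-year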
Instruction: Do depressive symptoms predict declines in physical performance in an elderly, biracial population? Abstracts: abstract_id: PUBMED:16046375 Do depressive symptoms predict declines in physical performance in an elderly, biracial population? Objective: We investigated whether depressive symptoms, assessed by the 10-item Center for Epidemiological Studies Depression Scale (CES-D), predicted change in physical function in elderly adults. Methods: Participants were from a biracial, population-based sample of adults aged 65 and older (N: 4069; 61% black; 61% female). Physical function was assessed as a summary performance measure of tandem stand, measured walk, and repeated chair stand (mean [standard deviation], 10.3 [3.5]; range, 0-15), commonly used measures of overall physical health in older adults. Generalized estimating equation models estimated physical function across 3 assessments over 5.4 years of follow up as a function of CES-D scores at baseline. Results: Adjusting for age, sex, race, and education, each 1-point higher CES-D score was associated with a 0.34-point lower absolute level of physical performance (p < .0001), but there was no evidence of a CES-D by time interaction (p = .84), indicating that depressive symptoms at baseline were not associated with greater physical performance decline over time. In secondary analyses, with CES-D scores modeled in 4 categories, overall physical performance showed a graded, inverse association across CES-D categories (p's < .0001). However, we observed no threshold effect for depressive symptoms in relation to change in physical performance. Compared with the referent group (CES-D = 0), the 2 middle CES-D categories (CES-D = 1 or 2-3) evidenced some decline in physical performance over time, but the highest CES-D group (CES-D ≥4) showed no significant physical decline over time (p = .89). Conclusion: We observed a strong cross-sectional association between depressive symptoms and overall physical performance. Physical function declined over time, yet depressive symptoms did not consistently contribute to greater decline over an average of 5.4 years of follow up among older adults. Findings highlight the importance of longitudinal models in understanding the relation between depressive symptomatology and physical health. abstract_id: PUBMED:35443643 Association among calf circumference, physical performance, and depression in the elderly Chinese population: a cross-sectional study. Background: Depression and sarcopenia are common diseases in the elderly population. However, the association between them is controversial. Based on the Chinese Longitudinal Healthy Longevity Survey (CLHLS) database, a cross-sectional study was conducted to explore the relationship of calf circumference and physical performance with depression. Methods: From the 8th wave of CLHLS conducted in 2018, data on calf circumference, physical performance, depressive symptoms, and demographic, socioeconomic, and health-related characteristics were collected. Multiple logistic regression was conducted to explore the impact of calf circumference, physical performance and their combination on depressive symptoms. Results: We enrolled a total of 12,227 participants aged 83.4 ± 11.0 years, including 5689 (46.5%) men and 6538 (53.5%) women. Patients with depression were more likely to have low calf circumference (2274 [68.2%] vs. 5406 [60.8%], p < 0.001) and poor physical performance (3 [0, 6] vs. 1 [0, 4], p < 0.001).
A significant multiplicative interaction was found between calf circumference and physical performance in their effect on depression. After adjusting for confounding factors, multiple logistic regression showed that a significant inverse correlation persisted between physical performance and depressive symptoms in the normal (odds ratio [OR] = 1.20, 95% confidence interval [CI]: 1.15-1.26, p < 0.001) and low (OR = 1.14, 95% CI: 1.11-1.18, p < 0.001) calf circumference groups, while the association between calf circumference and depression disappeared. Participants with low calf circumference and poor physical performance were 2.21 times more likely to have depression than those with normal calf circumference and physical performance. All results were found to be robust in sensitivity analyses. Conclusions: Physical performance was significantly associated with depression in the elderly Chinese population. Attention should be paid to assessing depressive symptoms in patients with poor physical performance. abstract_id: PUBMED:33449338 Depressive symptoms predict low physical performance among older Mexican Americans. Background: Depressive symptoms are common in older adults and predict functional dependency. Aims: To examine the ability of depressive symptoms to predict low physical performance over 20 years of follow-up among older Mexican Americans who scored moderate to high in the Short Physical Performance Battery (SPPB) test and were non-disabled at baseline. Methods: Data were from the Hispanic Established Population for the Epidemiologic Study of the Elderly. Our sample included 1545 community-dwelling Mexican American men and women aged 65 and older. Measures included socio-demographics, depressive symptoms, SPPB, handgrip strength, activities of daily living, body mass index (BMI), mini-mental state examination, and self-reports of various medical conditions. Generalized estimating equations were used to estimate the odds ratio of developing low physical performance over time as a function of depressive symptoms. Results: The mean SPPB score at baseline was 8.6 ± 1.4 for those with depressive symptoms and 9.1 ± 1.4 for those without depressive symptoms. The odds ratio of developing low physical performance over time was 1.53 (95% Confidence Interval = 1.27-1.84) for those with depressive symptoms compared with those without depressive symptoms, after controlling for all covariates. Conclusion: Depressive symptoms were a predictor of low physical performance in older Mexican Americans over a 20-year follow-up period. Interventions aimed at preventing decline in physical performance in older adults should address management of their depressive symptoms. abstract_id: PUBMED:27084314 Declines and Impairment in Executive Function Predict Onset of Physical Frailty. Background: Clinical cognitive impairment and physical frailty often co-occur. However, it is unclear whether preclinical impairment or decline in cognitive domains are associated with onset of physical frailty. We tested this hypothesis and further hypothesized that preclinical impairment and decline in executive functioning are more strongly associated with frailty onset than memory or general cognitive performance. Methods: We used 9 years of data from the Women's Health and Aging Study II (six visits) that longitudinally measured psychomotor speed and executive functioning using the Trail Making Test, parts A and B, respectively, and immediate and delayed word-list recall from the Hopkins Verbal Learning Test.
We used Cox proportional hazards models to regress time to frailty on indicators for impairment on these cognitive tests and on rates of change of the tests. Models adjusted for depressive symptoms, age, years of education, and race. Results: Of the 331 women initially free of dementia and frailty, 44 (13%) developed frailty. A binary indicator of impaired executive functioning (Trail Making Test, part B [TMT-B]) was most strongly associated with hazard, or risk, of frailty onset (hazard ratio [HR] = 3.3, 95% confidence interval [CI] = 1.4, 7.6) after adjustment for covariates and other tests. Adjusting for baseline cognitive performance, faster deterioration on TMT-B (HR = 0.6, 95% CI = 0.4, 1.0) was additionally associated with hazard of frailty onset. Conclusions: Findings inform the association of executive functioning with transitions to frailty, suggesting both impairments in and declines in executive functioning are associated with risk of frailty onset. It remains to be determined whether these associations are causal or whether shared aging related or other mechanisms are involved. abstract_id: PUBMED:21584092 A study of major physical disorders among the elderly depressives. Psychiatric evaluation and assessment of common physical illnesses and disabilities was carried out in elderly depressives (aged 60 years and above). Correlation, if any, was seen between depression and physical problems. The 'patient group' comprised of 40 drawn from MHI, Cuttack, having a depressive disorder (ICD-10). The 'control group' of 20 was drawn from the general population with no psychiatric disorder. The presence of physical illness was looked for in both groups. The patient group had physical illnesses, 76% of which were previously undiagnosed. The control group had physical illnesses 71% of which were previously diagnosed. Undiagnosed physical illnesses are more common among elderly patients with depression than among matched control. The physical illnesses contributed in two thirds of the patients. So careful detection and management of physical illness is of equal importance in the management of depression. abstract_id: PUBMED:30084303 Identity Denied: Comparing American or White Identity Denial and Psychological Health Outcomes Among Bicultural and Biracial People. Because bicultural and biracial people have two identities within one social domain (culture or race), their identification is often challenged by others. Although it is established that identity denial is associated with poor psychological health, the processes through which this occurs are less understood. Across two high-powered studies, we tested identity autonomy, the perceived compatibility of identities, and social belonging as mediators of the relationship between identity denial and well-being among bicultural and biracial individuals. Bicultural and biracial participants who experienced challenges to their American or White identities felt less freedom in choosing an identity and perceived their identities as less compatible, which was ultimately associated with greater reports of depressive symptoms and stress. Study 2 replicated these results and measured social belonging, which also accounted for significant variance in well-being. The results suggest the processes were similar across populations, highlighting important implications for the generalizability to other dual-identity populations. 
abstract_id: PUBMED:29941833 Mental Well-Being of Older People in Finland during the First Year in Senior Housing and Its Association with Physical Performance. Growing numbers of older people relocate to senior housing when their physical or mental performance declines. The relocation is known to be one of the most stressful events in the life of older people and affects their mental and physical well-being. More information about the relationships between mental and physical parameters is required. We examined self-reported mental well-being of 81 older people (aged 59-93, living in northern Finland), and changes in it 3 and 12 months after relocation to senior housing. The first measurement was 3 months and the second measurement 12 months after relocation. Most participants were female (70%). Their physical performance was also measured, and associations between these two were analyzed. After 12 months, mental capability was very good or quite good in 38% of participants; however, 22% of participants felt depressive symptoms daily or weekly. Moreover, 39% of participants reported daily or weekly loneliness. After 12 months participants reported a significant increase in forgetting appointments, losing items and difficulties in learning new things. They felt that opportunities to make decisions concerning their own life significantly decreased. Furthermore, their instrumental activities of daily living (IADL), dominant hand's grip strength and walking speed decreased significantly. Opportunities to make decisions concerning their life, feeling safe, loneliness, sleeping problems, negative thoughts as well as fear of falling or having an accident outdoors were associated with these physical parameters. In addition to assessing physical performance and regular exercise, the various components of mental well-being and their interactions with physical performance should be considered during adjustment to senior housing. abstract_id: PUBMED:25307294 Physical frailty predicts incident depressive symptoms in elderly people: prospective findings from the Obu Study of Health Promotion for the Elderly. Objective: The purpose of this study was to determine whether frailty is an important and independent predictor of incident depressive symptoms in elderly people without depressive symptoms at baseline. Design: Fifteen-month prospective study. Setting: General community in Japan. Participants: A total of 3025 community-dwelling elderly people aged 65 years or over without depressive symptoms at baseline. Measurements: The self-rated 15-item Geriatric Depression Scale was used to assess symptoms of depression with a score of 6 or more at baseline and 15-month follow-up. Participants underwent a structured interview designed to obtain demographic factors and frailty status, and completed cognitive testing with the Mini-Mental State Examination and physical performance testing with the Short Physical Performance Battery as potential predictors. Results: At a 15-month follow-up survey, 226 participants (7.5%) reported the development of depressive symptoms. We found that frailty and poor self-rated general health (adjusted odds ratio 1.86, 95% confidence interval 1.30-2.66, P < .01) were independent predictors of incident depressive symptoms.
The odds ratio for depressive symptoms in participants with frailty compared with robust participants was 1.86 (95% confidence interval 1.05-3.28, P = .03) after adjusting for demographic factors, self-rated general health, behavior, living arrangements, Mini-Mental State Examination, Short Physical Performance Battery, and Geriatric Depression Scale scores at baseline. Conclusions: Our findings suggested that frailty and poor self-rated general health were independent predictors of depressive symptoms in community-dwelling elderly people. abstract_id: PUBMED:34188575 Physical Exercise Behaviors and Depressive Symptoms Among Elderly Chinese Women: Focus on Exercise Components. Purpose: Several studies have investigated the association between physical exercise and depressive symptoms in the elderly population. However, the relationship between components of physical exercise such as frequency, intensity, duration, and depressive symptoms remains unclear. This study was conducted on elderly Chinese women to investigate the association between each component of physical exercise and depressive symptoms and to examine the association between physical exercise patterns and depressive symptoms. Patients And Methods: A total of 1429 Chinese women aged ≥60 years were enrolled in this cross-sectional study and provided information on their exercise behaviors through a self-reported questionnaire. Depressive symptoms were assessed using the Zung Self-Rating Depression Scale. Multiple logistic regression analysis was used to estimate adjusted odds ratios (ORs) to measure the association between exercise behaviors and depression. Results: High frequency, moderate intensity, long duration, and exercise habits were significantly associated with lower levels of depressive symptoms. Furthermore, participants with more than one optimal exercise behavior were more likely to have a lower risk of depressive symptoms. These associations were not changed after adjustment for confounding factors. Conclusion: This study shows that exercise behaviors were significantly associated with the risk of developing depression. This finding suggests that keeping an optimal exercise behavior may be associated with a lower risk of depressive symptoms in elderly Chinese women. abstract_id: PUBMED:30171700 The effects of depression, personal characteristics, and some habits on physical activity in the elderly. Purpose: The current study investigated the effects of depression, personal characteristics, and some habits on physical activity levels among the elderly. Design And Methods: This study followed a descriptive design. The population of the study consisted of 11 750 individuals at least 65 years old. The sample size of the study was determined to be 967 by power analysis. Descriptive statistics and logistic models were used for data analysis. Results: Together, age, the presence of a caregiver, chronic disease, and the Geriatric Depression Scale (GDS) score explained 32% of the variance in the physical activity of the elderly. Among these factors, the GDS score alone had the greatest effect. Practical Implications: To maintain health, it is important to reduce depression and increase physical activity among the elderly. Answer: The investigation into whether depressive symptoms predict declines in physical performance in an elderly, biracial population yields mixed results. 
According to the study with PUBMED:16046375, depressive symptoms, as measured by the CES-D, showed a strong cross-sectional association with overall physical performance in elderly adults. However, the study found that depressive symptoms at baseline were not associated with greater physical performance decline over time during the 5.4 years of follow-up. This suggests that while depressive symptoms may be related to lower levels of physical performance at a single point in time, they do not consistently contribute to further decline in physical performance over the observed period. In contrast, another study with PUBMED:33449338 found that depressive symptoms were a predictor of low physical performance in older Mexican Americans over a 20-year follow-up period. This study suggests that depressive symptoms can indeed predict a decline in physical performance among this specific population. It is important to note that these studies examined different populations and used different methodologies, which could account for the differing conclusions. The first study focused on a biracial population with a shorter follow-up period, while the second study focused on older Mexican Americans with a much longer follow-up period. Therefore, while there is evidence to suggest that depressive symptoms may predict declines in physical performance in some elderly populations, the relationship may not be consistent across all groups or time frames.
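For readers less familiar with the statistics quoted in these abstracts, the short Python sketch below shows how an adjusted odds ratio and its 95% confidence interval are obtained from a logistic-regression coefficient. The coefficient and standard error used here are back-calculated from the published frailty estimate (OR 1.86, 95% CI 1.05-3.28) purely for illustration; they are an assumption, not the studies' raw output.

import math

def odds_ratio_with_ci(beta, se, z=1.96):
    # Back-transform a log-odds coefficient into an odds ratio with a 95% CI.
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative values back-calculated from the frailty result quoted above
# (OR 1.86, 95% CI 1.05-3.28); they are assumptions, not study data.
beta_frailty = math.log(1.86)
se_frailty = (math.log(3.28) - math.log(1.05)) / (2 * 1.96)

or_value, ci_low, ci_high = odds_ratio_with_ci(beta_frailty, se_frailty)
print(f"OR {or_value:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")  # ~1.86 (1.05-3.28)

The same back-transformation applies to the other adjusted odds ratios reported for frailty, self-rated health and exercise behaviors in the abstracts above.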
Instruction: Does the systolic pressure variation change in the prone position? Abstracts: abstract_id: PUBMED:19629727 Does the systolic pressure variation change in the prone position? Objective: The systolic pressure variation (SPV) is known to be a sensitive indicator of hypovolemia. However, the SPV may be elevated due to other reasons, such as changes in lung compliance or tidal volumes. Using the SPV to monitor the hemodynamic status of patients in the prone position may, therefore, be problematic due to possible effects of increased abdominal pressure on both venous return and lung compliance. The purpose of this study is to examine whether or not the SPV changes significantly when placing the patient in the prone position. Methods: The arterial pressure waveform was recorded and SPV measured in 25 patients undergoing spine surgery. Patients who were elderly (age > 65 years), obese (BMI > 30), or had a history of lung disease (COPD, asthma) were excluded. Measurements were taken in the supine and prone position and the results were compared using the Paired Student's t-test. A P < 0.05 was considered significant. Values expressed are mean ± standard deviation. Results: The SPV was 6.9 ± 1.9 and 7.0 ± 1.8 mmHg in the supine and prone position respectively. The difference between these two results was not statistically significant. Conclusions: This study is important because it shows for the first time that the SPV does not change significantly in the prone position, and may therefore continue to be used as an indicator of the volume status. It also would appear to indicate that our methods for protecting the chest and abdomen in the prone position are effective. abstract_id: PUBMED:31216847 Comparison of pulse pressure variation and pleth variability index in the prone position in pediatric patients under 2 years old. Background: The assessment of intravascular volume status is very important especially in children during anesthesia. Pulse pressure variation (PPV) and pleth variability index (PVI) are well known parameters for assessing intravascular volume status and fluid responsiveness. We compared PPV and PVI for children aged less than two years who underwent surgery in the prone position. Methods: A total of 27 children were enrolled. We measured PPV and PVI at the same limb during surgery before and after changing the patients' position from supine to prone. We then compared PPV and PVI at each period using a Bland-Altman plot for bias between the two parameters and for any correlation. We also examined the difference between before and after the position change for each parameter, along with peak inspiratory pressure, heart rate and mean blood pressure. Results: The bias between PPV and PVI was -2.2% with 95% limits of agreement of -18.8% to 14.5%, not showing significant correlation at any period. Both PPV and PVI showed no significant difference before and after the position change. Conclusions: No significant correlation between PVI and PPV was observed in children undergoing surgery in the prone position. Further studies relating PVI, PPV, and fluid responsiveness via adequate cardiac output estimation in children aged less than 2 years are required. abstract_id: PUBMED:25664152 The changes of endotracheal tube cuff pressure by the position changes from supine to prone and the flexion and extension of head. Background: The proper cuff pressure is important to prevent complications related to the endotracheal tube (ETT).
We evaluated the change in ETT cuff pressure by changing the position from supine to prone without head movement. Methods: Fifty-five patients scheduled for lumbar spine surgery were enrolled. The neutral angle, defined as the angle at the mandibular angle between the neck midline and the mandibular inferior border, was measured. The initial neutral pressure of the ETT cuff was measured, and the cuff pressure was subsequently adjusted to 26 cmH2O. Flexed or extended angles and cuff pressure were measured in both supine and prone positions when the patient's head was flexed or extended. Initial neutral pressure in prone was compared with adjusted neutral pressure (26 cmH2O) in supine. Flexed and extended pressure were compared with adjusted neutral pressure in supine or prone, respectively. Results: There were no differences between supine and prone position for neutral, flexed, and extended angles. The initial neutral pressure increased after changing position from supine to prone (26.0 vs. 31.5 ± 5.9 cmH2O, P < 0.001). Flexed and extended pressure in supine increased to 38.7 ± 6.7 cmH2O (P < 0.001) and 26.7 ± 4.7 cmH2O (not statistically significant), respectively, compared with the adjusted neutral pressure. Flexed and extended pressure in prone increased to 40.5 ± 8.8 cmH2O (P < 0.001) and 29.9 ± 8.7 cmH2O (P = 0.002), respectively, compared with the adjusted neutral pressure. Conclusions: The position change from supine to prone without head movement can cause a change in ETT cuff pressure. abstract_id: PUBMED:23391343 The supine-to-prone position change induces modification of endotracheal tube cuff pressure accompanied by tube displacement. Study Objectives: To determine whether the supine-to-prone position change displaced the endotracheal tube (ETT) and, if so, whether the displacement related to this change modified ETT cuff pressure. Design: Prospective study. Setting: Operating room of a university hospital. Patients: 132 intubated, adult, ASA physical status 1, 2, and 3 patients undergoing lumbar spine surgery. Interventions And Measurements: After induction of anesthesia, each patient's trachea was intubated. The insertion depth of each ETT was 23 cm for men and 21 cm for women at the upper incisors. In the supine position and after the supine-to-prone position change with the head rotated to the right, the length from the carina to ETT tip and ETT cuff pressure were measured. Main Results: After the supine-to-prone position change, 91.7% of patients had ETT displacement. Of these, 48% of patients' ETT moved ≥ 10 mm, whereas 86.3% of patients had changes in tube cuff pressure. There was a slight but significant correlation between ETT movement and change in cuff pressure. Depending on the position change, ETT cuff pressure decreased and the ETT tended to withdraw. Conclusions: After the supine-to-prone position change, patients had ETT displacement. Such ETT movement may be accompanied by a decrease in cuff pressure.
However, this correlation has not been established during intraoperative hypotension. Our study aimed to assess the correlation between PPV and SVV during hypotension in the prone position and its relationship with cardiac index (CI). Material And Methods: Thirty patients aged 18-70 years of ASA class I-III, undergoing spine procedures in the prone position, were recruited for this prospective observational study. Hemodynamic variables such as heart rate (HR), mean arterial pressure (MAP), PPV, SVV, and CI were measured at baseline (after induction of anesthesia and positioning in the prone position). This set of variables was collected at the time of hypotension (T-before) and after correction (T-after) with either fluids or vasopressors. HR and MAP are presented as median with interquartile range and compared by Mann-Whitney U test. Reliability was measured by intraclass correlation coefficients (ICC). Generalized estimating equations were performed to assess the change of CI with changes in PPV and SVV. Results: A statistically significant linear relationship between PPV and SVV was observed. The ICC between change in PPV and SVV during hypotension was 0.9143, and after the intervention was 0.9091 (P < 0.001). Regression of changes in PPV and SVV on changes in CI depicted a reciprocal change in CI which was not statistically significant. Conclusion: PPV is a reliable surrogate of SVV during intraoperative hypotension in the prone position. abstract_id: PUBMED:30117033 Comparison of ability of pulse pressure variation to predict fluid responsiveness in prone and supine position: an observational study. We aimed to compare the ability of pulse pressure variation (PPV) to predict fluid responsiveness in prone and supine positions and to investigate the effect of body mass index (BMI), intraabdominal pressure (IAP) and static respiratory compliance (CS) on PPV. A total of 88 patients undergoing neurosurgery were included. After standardized anesthesia induction, patients' PPV, stroke volume index (SVI), CS and IAP values were recorded in supine (T1) and prone (T2) positions and after fluid loading (T3). Also, the PPV change percentage (PPVΔ%) between T2 and T1 times was calculated. Patients whose SVI increased more than 15% after the fluid loading were defined as volume responders. In 10 patients, PPVΔ% was ≤ -20%. All of these patients had CST2 < 31 ml/cmH2O, seven had BMI > 30 kg/m2, and two had IAPT2 > 15 mmHg. In 16 patients, PPVΔ% was ≥ 20%. In these patients, 10 had CST2 < 31 ml/cmH2O, 10 had BMI > 30 kg/m2, and 12 had IAPT2 > 15 mmHg. Thirty-nine patients were volume responders. When all patients were examined for predicting fluid responsiveness, the area under the curve (AUC) of PPVT2 (0.790, 95% CI 0.690-0.870) was significantly lower than the AUC of PPVT1 (0.937, 95% CI 0.878-0.997) with ROC analysis (p = 0.002). When patients whose CST2 was < 31 ml/cmH2O and whose BMI was > 30 kg/m2 were excluded from analysis separately, the AUC of PPVT2 became similar to PPVT1. PPV in the prone position can predict fluid responsiveness as well as PPV in the supine position, but only if BMI is < 30 kg/m2 and the CS value in the prone position is > 31 ml/cmH2O.
Design: Prospective, randomized comparison. Setting: Operating room in an American academic medical center. Subjects: 35 randomly recruited adult volunteers. Interventions: Surface pressure on the face was measured in awake subjects placed in the prone position, with the head and neck in the position of most comfort, using both the PP and PV devices. Measurements: Surface pressure was obtained using an array of small transducers embedded in a thin cushion that was interfaced between the face and positioning device. The amount of extension or flexion of the head on the neck was estimated using an angular measurement of the eye-ear line and horizontal line. Main Results: The average surface pressure on the face was less with the PV than with the PP (21 ± 3 mmHg vs. 27 ± 5 mmHg; p < 0.0001). The number of areas where pressure exceeded 30 mmHg and 50 mmHg was lower for the PV than the PP (15 ± 7.5 areas vs. 19 ± 7.2 areas > 30 mmHg; p < 0.05; 5.2 ± 3.3 areas vs. 9.0 ± 5.0 areas > 50 mmHg; p < 0.0001). Pressure on the chin increased with extension of the head or neck (p < 0.05) with both devices. Conclusions: Surface pressure on the face in the prone position is 29% higher with the non-face-contoured PP than with the face-contoured PV. The number of areas on the face where the surface pressure is greater than 50 mmHg is 80% higher with the PP than the PV. Small degrees of head extension increase pressure on the chin. Both devices produce areas of pressure, typically over the chin, which may be associated with local skin damage. Keeping the head and neck in a non-flexed, non-extended position may minimize pressures. abstract_id: PUBMED:9526935 Hemodynamic evaluation of the prone position by transesophageal echocardiography. Study Objective: To evaluate the hemodynamic response in the prone position in surgical patients by measuring the effects of prone positioning on cardiac function using transesophageal echocardiography (TEE). Design: Prospective study. Setting: Elective surgery at a university hospital. Patients: 15 adult ASA physical status I and II patients free of significant coexisting disease undergoing lumbar laminectomy. Interventions And Measurements: Approximately 15 minutes after the induction of general anesthesia, we measured heart rate, blood pressure, and central venous pressure. We also measured left ventricular area (LVA) and fractional area change (FAC) automatically and calculated left ventricular volume (LVV), stroke volume index (SVI), cardiac index (CI), left ventricular ejection fraction (LVEF), left ventricular fractional shortening (LVFS), pulmonary venous flow velocity (PVFV), and pulmonary venous velocity time integral (PVVTI) via TEE. The same measurements were performed approximately 15 minutes after changing to the prone position with longitudinal bolsters. Main Results: In the prone position, there was a significant reduction in end-systolic and end-diastolic LVA and LVV. There was a significant increase in LVEF, LVFS, and FAC in the prone position. In addition, there was diminishment of systolic PVFV and PVVTI and enhancement of diastolic PVFV and PVVTI. SVI and CI did not change significantly in the prone position. Conclusion: The prone position caused LVV to decrease. The prone position also led to decreased systolic PVFV and PVVTI and enhancement of diastolic PVFV and PVVTI.
These changes were probably due to a decrease in venous return caused by inferior vena caval compression, and decreased left ventricular compliance due to increased intrathoracic pressure in the prone position. abstract_id: PUBMED:24565128 The effect of head rotation on intraocular pressure in prone position: a randomized trial. Background And Objectives: Intraocular pressure (IOP), whose elevation decreases perfusion pressure on the optic nerve, increases with prone positioning (1). The aim of our study was to compare the effect of head rotation 45° laterally in the prone position on the increase in IOP of the upper placed and lower placed eyes in patients undergoing percutaneous nephrolithotomy (PCNL). Methods: Forty-five patients were randomly divided into 2 groups. IOP of the patients was recorded bilaterally in the supine position before the operation had started. Patients were turned to the prone position. The head was placed on a prone headrest without external direct compression to both eyes. Patients in Group I were kept in a strictly neutral prone position whereas patients in Group II were placed prone with their heads rotated 45° laterally to the right side. At the end of the operation, patients were turned to the supine position and their IOP was measured immediately. Results: There was no difference related to demographics, duration of surgery, blood loss and fluid input data. IOP values after surgery in the prone position increased significantly compared to preoperative values in both groups (p < 0.05). After surgery in the prone position, IOP values of the upper positioned eyes in Group II were significantly lower than those of Group I and of the lower positioned eyes in Group II (p < 0.05). Conclusion: Prone positioning increases IOP. In patients in the prone position with the head rotated 45° laterally, IOP in the upper positioned eye was significantly lower. abstract_id: PUBMED:32355626 Systematic Review and Meta-Analysis of Prone Position on Intraocular Pressure in Adults Undergoing Surgery. Background: Patients undergoing surgery in the prone position may be at risk for postoperative vision loss associated with increased intraocular pressure. The purpose of this systematic review and meta-analysis is to estimate the magnitude of the increase in intraocular pressure at specific perioperative time points in adult patients. The research question to be addressed is "What is the magnitude of the increase in intraocular pressure at specific perioperative time points in adults undergoing surgery in the prone position?" Methods: Comprehensive search strategies were used to identify nine eligible studies (N = 229). Standardized mean difference effect sizes were calculated for two intraoperative time points. Time points for meta-analysis were selected to achieve the greatest number of comparisons for analysis at each time point. Prediction intervals for each time point were also calculated to show the dispersion of true effect sizes around the mean. Results: Meta-analysis showed that intraocular pressure increased significantly between induction of anesthesia and up to 10 minutes of prone position (T1: standardized mean difference [d] = 2.55; P < .001) and continued to increase significantly until the end of the prone position (T2: d = 3.44; P = .002). Conclusions: Intraocular pressure increases of this magnitude demonstrate the need for implementing interventions to reduce the risk for postoperative vision loss in patients undergoing surgery in the prone position.
Clinical Relevance: Implementing preoperative ophthalmologic examinations for patients undergoing surgery in the prone position may help to reduce the risk for ocular injury. Intraoperative interventions that can be implemented to reduce or mitigate the increase in intraocular pressure include implementing a 5- to 10-degree reverse Trendelenburg prone position, reducing the amount of time the patient is in the prone position, considering staged procedures, monitoring intraocular pressure, providing periodic position changes or rest periods, preventing pressure on the eye, and administering specific medications or anesthetics. Answer: The systolic pressure variation (SPV) does not change significantly in the prone position. A study conducted on 25 patients undergoing spine surgery, who were not elderly or obese and had no history of lung disease, showed that the SPV was 6.9 ± 1.9 mmHg in the supine position and 7.0 ± 1.8 mmHg in the prone position, with no statistically significant difference between the two positions (PUBMED:19629727). This suggests that SPV can continue to be used as an indicator of volume status in patients positioned prone during surgery.
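As an illustration of the analysis behind this answer, the sketch below runs the kind of paired t-test used in PUBMED:19629727 to compare supine and prone SPV. The data are simulated to roughly match the reported summary statistics (6.9 ± 1.9 vs 7.0 ± 1.8 mmHg, n = 25), and the assumed within-patient change is hypothetical; this is not the study's dataset.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 25
spv_supine = rng.normal(6.9, 1.9, n)              # simulated supine SPV values (mmHg)
spv_prone = spv_supine + rng.normal(0.1, 1.0, n)  # assumed small, noisy change after turning prone

t_stat, p_value = stats.ttest_rel(spv_supine, spv_prone)  # paired Student's t-test
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")      # a p-value above 0.05 would mirror the reported result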
Instruction: Acute psychiatric day hospital treatment: is the effectiveness of this treatment approach still questionable? Abstracts: abstract_id: PUBMED:15546056 Acute psychiatric day hospital treatment: is the effectiveness of this treatment approach still questionable? Objective: Currently, there is still a severe lack of methodologically sound empirical studies on acute psychiatric day hospital treatment in German-speaking countries that analyse the effectiveness of this increasingly important mode of service provision. Methods: Within a randomised controlled study design implemented at the Department of Psychiatry and Psychotherapy, Dresden University of Technology, 99 general psychiatric patients received conventional inpatient treatment and 92 patients received acute day hospital treatment. At up to four time-points during the index-treatment episode patients were assessed at different levels of outcome: Psychopathology was rated by researchers using the Brief Psychiatric Rating Scale (24-Item-Version), and patients evaluated their satisfaction with treatment (Patientenbogen zur Behandlungszufriedenheit); at admission and discharge patients also assessed their subjective quality of life (Manchester Assessment of Quality of Life). Mean scale scores of these instruments are used for the intention-to-treat-analysis. Discharge status on these scales as well as mean ratings on these scales within the index-treatment episode serve as measures of effectiveness. For statistically identifying differences between the two settings five linear (co-)variance-analytical models were calculated for each target variable. Four models were adjusted to baseline-rating or to the individual period spent in treatment. Results: Initially, both groups did not differ in their relevant socio-demographic and illness-related features. Day hospital treatment (87,7 days) lasted significantly longer than inpatient treatment (67,8 days). Only results from an unadjusted statistical model as well as from a model adjusted to the individual period of index-hospitalisation demonstrated superior effectiveness of day hospital treatment on the discharge status of psychopathological symptomatology. However, in all statistical models there were no systematic differences of treatment-effectiveness related to satisfaction with treatment and subjective quality of life. Conclusion: For the first time in German-speaking countries, this study provides evidence for the effectiveness of acute day hospital treatment as compared to conventional inpatient treatment. If detailed eligibility criteria for patients are used as defined here, approximately 30 % of the general psychiatric patients in need of acute hospital-based treatment may be cared for in this special mode of day hospital service provision. abstract_id: PUBMED:32523556 Psychiatric Acute Day Hospital as an Alternative to Inpatient Treatment. For the first time in the Swiss health care system, this evaluation study examined whether patients with acute psychiatric illness who were admitted for inpatient treatment could be treated in an acute day hospital instead. The acute day hospital is characterized by the possibility of direct admission of patients without preliminary consultation or waiting time and is open every day of the week. In addition, it was examined whether and to what extent there are cost advantages for day hospital treatment. 
Patients who were admitted to the hospital with a referral for inpatient admission were randomly allocated to either full inpatient treatment or treatment in the acute day hospital. In this pilot study, 44 patients were included. Evidence of efficacy could be provided for both treatment settings based on significant reduction in psychopathological symptoms and improvement in functional level in the course of treatment. There were no significant differences between the two settings in terms of external assessment of symptoms, subjective symptom burden, functional level, quality of life, treatment satisfaction, and number of treatment days. Treatment in the day hospital was about 45% cheaper compared to inpatient treatment. The results show that acutely ill psychiatric patients of different symptom severity can be treated just as well in an acute day hospital instead of being admitted to the hospital. In addition, when direct treatment costs are considered, there are clear cost advantages for day hospital treatment. abstract_id: PUBMED:31160228 Differences between psychiatric disorders in the clinical and functional effectiveness of an acute psychiatric day hospital, for acutely ill psychiatric patients. Introduction: Intensive treatment in acute day-care psychiatric units may represent an efficient alternative to inpatient care. However, there is evidence suggesting that this clinical resource may not be equally effective for every psychiatric disorder. The primary aim of this study was to explore differences between the main psychiatric diagnostic groups in the effectiveness of an acute partial hospitalization program, and to identify predictors of treatment response. Material And Methods: The study was conducted at an acute psychiatric day hospital. Clinical severity was assessed using the BPRS, CGI, and HoNOS scales. Main socio-demographic variables were also recorded. Patients were clustered into 4 wide diagnostic groups (i.e., non-affective psychosis, bipolar, depressive, and personality disorders) to facilitate statistical analyses. Results: A total of 331 participants were recruited, 115 of whom (34.7%) were diagnosed with non-affective psychosis, 97 (28.3%) with bipolar disorder, 92 (27.8%) with affective disorder, and 27 (8.2%) with personality disorder.
Search Methods: We searched the Cochrane Schizophrenia Group Trials Register (June 2010) which is based on regular searches of MEDLINE, EMBASE, CINAHL and PsycINFO. We approached trialists to identify unpublished studies. Selection Criteria: Randomised controlled trials of day hospital versus inpatient care, for people with acute psychiatric disorders. Studies were ineligible if a majority of participants were under 18 or over 65, or had a primary diagnosis of substance abuse or organic brain disorder. Data Collection And Analysis: Two review authors independently extracted and cross-checked data. We calculated risk ratios (RR) and 95% confidence intervals (CI) for dichotomous data. We calculated weighted or standardised means for continuous data. Day hospital trials tend to present similar outcomes in slightly different formats, making it difficult to synthesise data. We therefore sought individual patient data so that we could re-analyse outcomes in a common format. Main Results: Ten trials (involving 2685 people) met the inclusion criteria. We obtained individual patient data for four trials (involving 646 people). We found no difference in the number lost to follow-up by one year between day hospital care and inpatient care (5 RCTs, n = 1694, RR 0.94 CI 0.82 to 1.08). There is moderate evidence that the duration of index admission is longer for patients in day hospital care than inpatient care (4 RCTs, n = 1582, WMD 27.47 CI 3.96 to 50.98). There is very low evidence that the duration of day patient care (adjusted days/month) is longer for patients in day hospital care than inpatient care (3 RCTs, n = 265, WMD 2.34 days/month CI 1.97 to 2.70). There is no difference between day hospital care and inpatient care for the risk of being readmitted to in/day patient care after discharge (5 RCTs, n = 667, RR 0.91 CI 0.72 to 1.15). It is likely that there is no difference between day hospital care and inpatient care for being unemployed at the end of the study (1 RCT, n = 179, RR 0.88 CI 0.66 to 1.19), for quality of life (1 RCT, n = 1117, MD 0.01 CI -0.13 to 0.15) or for treatment satisfaction (1 RCT, n = 1117, MD 0.06 CI -0.18 to 0.30). Authors' Conclusions: Caring for people in acute day hospitals is as effective as inpatient care in treating acutely ill psychiatric patients. However, further data are still needed on the cost effectiveness of day hospitals.
The relatives' mental well-being was markedly impaired, and only that of the inpatients' relatives improved slightly during treatment. Conclusions: Treating the acutely mentally ill as day hospital patients does not result in greater burden on relatives compared to treating them as inpatients. Independently of treatment setting, relatives of psychiatric patients should be actively approached and offered information, coping strategies, and help. Further research should include qualitative methods. abstract_id: PUBMED:2807898 Early experience in the establishment of an integrated psychiatric day hospital. This paper describes the development of a new day-care provision for psychiatric patients. A purpose-built psychiatric day hospital was established in 1986 at the Royal Victoria Hospital, Edinburgh. A description is given of the first two years of operation. The unit's aim, to integrate treatment of both long-term and acute patients, has been successfully maintained after two years. Patient progress was assessed using the Morningside Rehabilitation Status Scale (MRSS). A problem area identified was that of patients who would not engage with the day hospital. abstract_id: PUBMED:17106839 On the efficacy of acute psychiatric day-care treatment in a one-year follow-up. A comparison to inpatient treatment within a randomised controlled trial Objective: To compare the effectiveness of acute psychiatric day-hospital treatment and inpatient treatment with respect to a one-year follow-up. Method: Within a randomised controlled trial, patients and relatives were assessed at different levels of outcome three months and 12 months after patients' discharge using the Brief Psychiatric Rating Scale (24-Item-Version), the Manchester Assessment of Quality of Life (MANSA), the Groningen Social Disabilities Schedule (GSDS), the Berlin Inventory for the Assessment of Needs (BeBI), and the Involvement Evaluation Questionnaire (IEQ). Using estimation and tests of contrasts in linear models of analysis of variance with a structured covariance matrix, the analyses included data from all n = 191 patients included at the German centre of the study. Results: With respect to all measures, day-hospital treatment proved to be at least as effective as inpatient care. Conclusion: The study supports earlier findings that showed no differences in long-term effectiveness of acute psychiatric day-hospital treatment as compared to inpatient treatment. abstract_id: PUBMED:10125057 Mobilizing affect: a possible effect of day hospital treatment for chronic psychiatric patients. A study of 82 Psychiatric Day Hospital patients was undertaken to identify the program's specific effects on individuals with longer standing (i.e., chronic) psychiatric disability. Sociodemographic information and self-ratings, staff ratings and significant-other ratings were used to identify changes in functioning during the 3-week treatment as well as during the period 3 months after treatment. Findings suggest (1) that the Day Hospital patients were as seriously psychiatrically impaired as psychiatric inpatients, (2) that, as a group, they demonstrated a significant improvement in symptoms and functioning, and (3) that the more chronic patients displayed a distinctive pattern of decreased hostility and increased anxiety over the course of treatment. Findings are discussed in relation to the proposition that mobilizing the chronic patient's affect is an important factor in reengaging the therapeutic process.
abstract_id: PUBMED:12535505 Day hospital versus admission for acute psychiatric disorders. Background: Inpatient treatment is an expensive way of caring for people with acute psychiatric disorders. It has been proposed that many of those currently treated as inpatients could be cared for in acute psychiatric day hospitals. Objectives: To assess the effects of day hospital versus inpatient care for people with acute psychiatric disorders. Search Strategy: We searched the Cochrane Controlled Trials Register (Cochrane Library, issue 4, 2000), MEDLINE (January 1966 to December 2000), EMBASE (1980 to December 2000), CINAHL (1982 to December 2000), PsycLIT (1966 to December 2000), and the reference lists of articles. We approached trialists to identify unpublished studies. Selection Criteria: Randomised controlled trials of day hospital versus inpatient care, for people with acute psychiatric disorders. Studies were ineligible if a majority of participants were under 18 or over 65, or had a primary diagnosis of substance abuse or organic brain disorder. Data Collection And Analysis: Data were extracted independently by two reviewers and cross-checked. Relative risks and 95% confidence intervals (CI) were calculated for dichotomous data. Weighted or standardised means were calculated for continuous data. Day hospital trials tend to present similar outcomes in slightly different formats, making it difficult to synthesise data. Individual patient data were therefore sought so that outcomes could be reanalysed in a common format. Main Results: Nine trials (involving 1568 people) met the inclusion criteria. Individual patient data were obtained for four trials (involving 594 people). Combined data suggested that, at the most pessimistic estimate, day hospital treatment was feasible for 23% (n=2268, CI 21 to 25) of those currently admitted to inpatient care. Individual patient data from three trials showed no difference in number of days in hospital between day hospital patients and controls (n=465, 3 RCTs, WMD -0.38 days/month CI -1.32 to 0.55). However, compared to controls, people randomised to day hospital care spent significantly more days in day hospital care (n=265, 3 RCTs, WMD 2.34 days/month CI 1.97 to 2.70) and significantly fewer days in inpatient care (n=265, 3 RCTs, WMD -2.75 days/month CI -3.63 to -1.87). There was no significant difference in readmission rates between day hospital patients and controls (n=667, 5 RCTs, RR 0.91 CI 0.72 to 1.15). For patients judged suitable for day hospital care, individual patient data from three trials showed a significant time-treatment interaction, indicating a more rapid improvement in mental state (n=407, Chi-squared 9.66, p=0.002), but not social functioning (n=295, Chi-squared 0.006, p=0.941) amongst patients treated in the day hospital. Four of five trials found that day hospital care was cheaper than inpatient care (with cost reductions ranging from 20.9 to 36.9%). Reviewer's Conclusions: Caring for people in acute day hospitals can achieve substantial reductions in the numbers of people needing inpatient care, whilst improving patient outcome. abstract_id: PUBMED:28265867 Acute day hospital as a cost-effective alternative to inpatient therapy Background: This is a study of the cost-benefit advantage of the model project acute day hospital (ATK) compared to inpatient care. Methods: The study was performed retrospectively and examined the direct costs and the Global Assessment of Functioning (GAF) in a matched sample. 
Results: Compared with inpatient therapy, day clinic treatment showed a cost-benefit advantage of 2.68:1, with a slightly delayed increase in GAF value. Conclusion: Treatment in the ATK is less than half as expensive as inpatient treatment, but with a slightly slower improvement in symptoms. Answer: The effectiveness of acute psychiatric day hospital treatment is supported by several studies, indicating that it can be a viable alternative to conventional inpatient treatment for individuals with acute psychiatric disorders. A study conducted in German-speaking countries provided evidence for the effectiveness of acute day hospital treatment compared to conventional inpatient treatment, showing that day hospital treatment, which lasted significantly longer, was superior in terms of psychopathological symptomatology at discharge, although no systematic differences were found related to satisfaction with treatment and subjective quality of life (PUBMED:15546056). Similarly, an evaluation study in the Swiss health care system found that acutely ill psychiatric patients could be treated just as effectively in an acute day hospital as in inpatient settings, with the added benefit of being about 45% cheaper (PUBMED:32523556). Another study highlighted that the effectiveness of acute psychiatric day hospital care differs between diagnostic groups, with patients diagnosed with bipolar disorder showing greater improvement than those with psychosis or depressive disorder (PUBMED:31160228). A Cochrane review concluded that caring for people in acute day hospitals is as effective as inpatient care in treating acutely ill psychiatric patients, although further data on cost-effectiveness were needed (PUBMED:22161384). Research also indicated that treating acutely mentally ill patients as day hospital patients does not result in greater burden on relatives compared to treating them as inpatients (PUBMED:16094549). A one-year follow-up study supported the long-term effectiveness of acute psychiatric day-hospital treatment, showing it to be at least as effective as inpatient care (PUBMED:17106839). Furthermore, a study on chronic psychiatric patients suggested that day hospital treatment could lead to significant improvements in symptoms and functioning (PUBMED:10125057). Lastly, a cost-benefit analysis demonstrated that treatment in an acute day hospital costs less than half as much as inpatient treatment, albeit with a slightly slower improvement in symptoms (PUBMED:28265867). In conclusion, the effectiveness of acute psychiatric day hospital treatment is not as questionable as it once might have been, with evidence suggesting it can be a cost-effective alternative to inpatient care for certain patient groups, without compromising on treatment outcomes.
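Because the cost findings above are reported in two different forms (a ratio of 2.68:1 in PUBMED:28265867 and a roughly 45% saving in PUBMED:32523556), the small sketch below shows the arithmetic that makes them comparable. It assumes both figures refer to direct treatment costs per episode, as described in the abstracts; the conversion itself is generic.

def saving_from_ratio(inpatient_to_day_ratio):
    # A ratio of 2.68:1 means day-hospital care costs 1/2.68 of inpatient care.
    day_cost_fraction = 1.0 / inpatient_to_day_ratio
    return 1.0 - day_cost_fraction  # fraction saved relative to inpatient care

def ratio_from_saving(saving_fraction):
    # The inverse conversion: a 45% saving corresponds to a cost ratio of about 1.82:1.
    return 1.0 / (1.0 - saving_fraction)

print(f"2.68:1 cost ratio -> about {saving_from_ratio(2.68):.0%} cheaper")   # ~63%
print(f"45% cheaper       -> cost ratio about {ratio_from_saving(0.45):.2f}:1")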
Instruction: Can procalcitonin be a diagnostic marker for catheter-related blood stream infection in children? Abstracts: abstract_id: PUBMED:27131015 Can procalcitonin be a diagnostic marker for catheter-related blood stream infection in children? Objective: The potential role of procalcitonin (PCT) in the diagnosis of catheter-related bloodstream infections (CRBSIs) is still unclear and requires further research. The diagnostic value of serum PCT for the diagnosis of CRBSI in children is evaluated here. Method: This study was conducted between October 2013 and November 2014, and included patients with suspected CRBSI from 1 month to 18 years of age who were febrile, with no focus of infection, and had a central venous catheter. Levels of PCT and other serum markers were measured, and their utility as CRBSI markers was assessed. Additionally, the clinical performance of a new, automated, rapid, and quantitative assay for the detection of PCT was tested. Results: Among the 49 patients, 24 were diagnosed with CRBSI. The PCT-Kryptor and PCT-RTA values were significantly higher in proven CRBSI compared to those in unproven CRBSI (p=0.03 and p=0.03, respectively). There were no differences in white blood cell count and C-reactive protein (CRP) levels between proven CRBSI and unproven CRBSI. Among the 24 patients with CRBSI, CRP was significantly higher among those with Gram-negative bacterial infection than in those with Gram-positive bacterial infections. PCT-Kryptor was also significantly higher among patients with Gram-negative bacterial infection than in those with Gram-positive bacterial infections (p=0.01 and p=0.02, respectively). Conclusions: The authors suggest that PCT could be a helpful rapid diagnostic marker in children with suspected CRBSIs. abstract_id: PUBMED:28919353 Presepsin: A new marker of catheter related blood stream infections in pediatric patients. Background: Catheter related blood stream infections (CRBSI) are mostly preventable hospital-acquired conditions. We aimed to investigate the value of presepsin in detection of CRBSI in hospitalized children. Methods: Hospitalized pediatric patients who had clinical suspicion of CRBSI were followed. Results of peripheral blood cultures and blood cultures from central venous catheters, procalcitonin (PCT), C-reactive protein (CRP), and total white blood cell (WBC) counts were recorded. Serum samples for presepsin were studied at the same time as the samples of healthy controls. Patients with positive blood cultures were defined as proven CRBSI and those with negative cultures as suspected CRBSI. Results: Fifty-eight patients and 80 healthy controls were included in the study. The proven CRBSI group consisted of 36 patients (62%) with positive blood cultures and was compared with the suspected CRBSI group (n = 22, 36%) with negative culture results. There was no difference between the proven and suspected CRBSI groups concerning WBC, PCT, CRP and presepsin. Presepsin was significantly higher in the patient groups when compared with healthy controls. The receiver operating characteristic curve area under the curve was 0.98 (95% CI: 0.97-1) and the best cut-off value was 990 pg/ml. Conclusion: In hospitalized pediatric patients with CRBSI, presepsin may be a helpful rapid marker in early diagnosis. abstract_id: PUBMED:31910808 Early diagnostic value of serum procalcitonin levels for catheter-related blood stream infection in first-ever acute ischemic stroke patients.
Objective: The traditional approaches for diagnosing catheter-related bloodstream infection (CRBSI) are time consuming and cannot meet clinical requirements. Our aim was to investigate the value of serum procalcitonin (PCT) in predicting CRBSI in first-ever acute ischemic stroke patients with central venous catheters (CVCs). Methods: This was a retrospective study. First-ever acute ischemic stroke patients hospitalized in the neurological intensive care unit (NICU) of Aerospace Center Hospital and the NICU of Beijing Chaoyang Hospital between January 2010 and December 2017 with clinically suspected CRBSI were enrolled. Peripheral blood white blood cell (WBC) count, neutrophil percentage (NE%), serum PCT levels, dwell time of catheterization and outcome of the patients were collected. According to whether CRBSI was diagnosed, patients were divided into a CRBSI group and a no-CRBSI group. We used receiver operating characteristic (ROC) curve analysis to evaluate the value of serum PCT levels in predicting CRBSI in patients with clinically suspected CRBSI. Results: Forty-five patients with suspected CRBSI were included in this study, and 13 patients were diagnosed with CRBSI. Compared with the no-CRBSI group, the maximum body temperature (Tmax) (p = 0.036) and the PCT levels (P = 0.013) in the CRBSI group were both significantly higher. The areas under the ROC curve for the serum PCT levels and Tmax in predicting CRBSI were 0.803 (95% CI, 0.660-0.946) and 0.680 (95% CI, 0.529-0.832), respectively. The PCT cut-off value was 0.780 ng/ml, with sensitivity 69.23%, specificity 87.50%, positive predictive value 69.23% and negative predictive value 87.50%. Conclusion: It could be helpful to adopt PCT as a rapid diagnostic biomarker for first-ever acute stroke patients with suspected CRBSI. abstract_id: PUBMED:28904808 Intracardiac fistula: an unusual complication of catheter-related blood stream infection. In end stage renal disease patients on dialysis, the use of a catheter as vascular access is associated with a significant risk of sepsis compared to an arterio-venous fistula. Our case emphasizes the importance of having a high index of suspicion for unusual complications in patients presenting with possible catheter-related blood stream infection and the early use of complementary tools such as trans-oesophageal echocardiography whenever applicable. abstract_id: PUBMED:30333275 Catheter-related Blood Stream Infection in a Patient with Hemodialysis. A 31-year-old patient came to visit the outpatient clinic at the hospital for his routine twice-weekly hemodialysis (HD) session. During HD, the patient suddenly developed a fever with shivering. At that time, a diagnosis of catheter-related blood stream infection (CR-BSI) was made, the HD catheter (catheter double lumen, CDL) was removed and the patient was hospitalized.
abstract_id: PUBMED:30159055 Role of Procalcitonin As an Inflammatory Marker in a Sample of Egyptian Children with Simple Obesity. Background: Obesity is a multifactorial disease, associated with metabolic disorders and chronic low-grade inflammation. Procalcitonin (PCT) is well known as a biomarker of infection and systemic inflammation. Recently, it has shown potential as a marker for chronic low-grade inflammation. Aim: This study aims to evaluate the role of serum PCT as an inflammatory biomarker in the diagnosis of obesity-related low-grade inflammation. Method: In this case-control study, 50 obese and 35 normal weight children and adolescents aged 5-15 years were enrolled. Anthropometric parameters were measured in all subjects. Blood samples were collected for measurement of lipid profile, blood glucose, insulin, high sensitivity-CRP (Hs-CRP) and serum procalcitonin. Serum PCT levels were assessed using enzyme-linked immunosorbent assay. Results: Obese participants had higher concentrations of serum PCT, total cholesterol, triglycerides, LDL-c, glucose and Hs-CRP than the control group. On correlation analysis, procalcitonin had a significant positive correlation with BMI z-score (P = 0.02), insulin (P = 0.00), insulin resistance (HOMA-IR) (P = 0.006), Hs-CRP (P = 0.02), total cholesterol (P = 0.04) and triglycerides (P = 0.00) in the obese group. Conclusion: The increased serum procalcitonin concentrations were closely related to measures of adiposity, Hs-CRP and insulin resistance, suggesting that PCT may be an excellent biomarker for obesity-related chronic low-grade inflammation in children and adolescents. abstract_id: PUBMED:24082609 Catheter related blood stream infections in the paediatric intensive care unit: A descriptive study. Context: Catheter related blood stream infections (CRBSI) contribute significantly to morbidity, mortality and costs in the intensive care unit (ICU). The patient profile, infrastructure and resources in the ICU are different in the developing world as compared to western countries. Studies regarding CRBSI from the pediatric intensive care unit (PICU) are scanty in the Indian literature. Aims: To determine the frequency and risk factors of CRBSI in children admitted to the PICU. Settings And Design: Descriptive study done in the PICU of a tertiary care teaching hospital over a period of four months. Materials And Methods: Study children were followed up from the time of catheterization till discharge. Their clinical and treatment details were recorded and blood culture was done every 72 h, starting at 48 h after catheterization. The adherence of doctors to Centre for Disease Control (CDC) guidelines for catheter insertion was assessed using a checklist. Statistical Analysis: Clinical parameters were compared between colonized and non-colonized subjects and between patients with and without CRBSI. Unpaired t-test and Chi-square test were used to test the significance of observed differences. Results: Out of the 41 children, 21 developed colonization of their central venous catheter (66.24/1000 catheter days), and two developed CRBSI (6.3/1000 catheter days). Infants had a higher risk for developing colonization (P = 0.01). There was 85% adherence to CDC guidelines for catheter insertion. Conclusions: The incidence of CRBSI and catheter colonization is high in our PICU in spite of good catheter insertion practices. Hence further studies to establish the role of adherence to catheter maintenance practices in reducing the risk of CRBSI are required.
The role of a composite package of interventions including insertion and maintenance bundles specifically targeting infants needs to be studied to bring down the catheter colonization as well as CRBSI rates. abstract_id: PUBMED:22442890 Catheter related blood stream infections--prevalence and interventions Background: Catheter related blood stream infections are a significant complication of intensive care, with a worldwide prevalence rate of around 5 cases per 1000 catheter-days. Only scanty Czech data have been published. Our study monitored the occurrence of catheter-related blood stream infections in a high dependency unit of a regional hospital. Methods: In 2008 we commenced monitoring the occurrence rate of catheter-related blood stream infections in short-term central venous catheters without antimicrobial coating. We organized a training session for medical staff and started to strictly adhere to published guidelines. After two years of keeping a register we analysed individual cases as proven, possible, or not proven blood stream infections. Results: From March 2008 to March 2010 we inserted 142 central venous catheters for a total of 1423 catheter-days (median 9 days). Ten catheters were removed after a median of 17 days due to unexplained pyrexia. Blood stream infection was proven in 4 cases and possible in 2 cases. We noted a total of 2.81 proven cases, and 4.22 proven and/or possible cases, of blood stream infection per 1000 catheter-days. Conclusion: The register of catheter related blood stream infections is an inexpensive and time-efficient tool that improves the quality of intensive care. abstract_id: PUBMED:36329285 The cost of hospitalizations for treatment of hemodialysis catheter-associated blood stream infections in children: a retrospective cohort study. Background: Hospitalization costs for treatment of hemodialysis (HD) catheter-associated blood stream infections (CA-BSI) in adults are high. No studies have evaluated hospitalization costs for HD CA-BSI in children or identified factors associated with high-cost hospitalizations. Methods: We analyzed 160 HD CA-BSIs from the Standardizing Care to Improve Outcomes in Pediatric End-stage Kidney Disease (SCOPE) collaborative database linked to hospitalization encounters in the Pediatric Health Information System (PHIS) database. Charge-to-cost ratios were used to convert hospitalization charges reported in the PHIS database to estimated hospital costs. Generalized linear mixed modeling was used to assess the relationship between higher-cost hospitalization (cost above the 50th percentile) and patient and clinical characteristics. Generalized linear regression models were used to assess differences in mean service line costs between higher- and lower-cost hospitalizations. Results: The median (IQR) length of stay for HD CA-BSI hospitalization was 5 (3-10) days. The median (IQR) cost for HD CA-BSI hospitalization was $18,375 ($11,584-$36,266). ICU stay (aOR 5.44, 95% CI 1.62-18.26, p = 0.01) and need for a catheter procedure (aOR = 6.08, 95% CI 2.45-15.07, p < 0.001) were associated with higher-cost hospitalization. Conclusions: Hospitalizations for HD CA-BSIs in children are often multiple days and are associated with substantial costs. Interventions to reduce CA-BSI may reduce hospitalization costs for children who receive chronic HD. abstract_id: PUBMED:35495370 Case report: Catheter related blood stream infection caused by Candida vulturna.
Candida vulturna is a newly emerging Candida species belonging to the Candida haemulonii species complex of the family Metschnikowiaceae. C. vulturna has been isolated from numerous clinical samples since its discovery. We report a case of catheter related blood stream infection in which C. vulturna was isolated from the blood of a patient after prolonged antibiotic therapy for a recurrent infection of a retroperitoneal cyst. The blood isolate was identified to species level by a molecular assay targeting the D1/D2 regions of the 26S rDNA gene. The patient improved with administration of intravenous micafungin despite the lack of antifungal susceptibility breakpoints. Answer: Yes, procalcitonin (PCT) can be a diagnostic marker for catheter-related bloodstream infection (CRBSI) in children. A study conducted between October 2013 and November 2014, which included pediatric patients with suspected CRBSI, found that PCT levels were significantly higher in proven CRBSI cases compared to unproven CRBSI cases. The study also tested the clinical performance of a new, automated, rapid, and quantitative assay for the detection of PCT and suggested that PCT could be a helpful rapid diagnostic marker in children with suspected CRBSIs (PUBMED:27131015). Another study investigating the value of serum PCT in predicting CRBSI in first-ever acute ischemic stroke patients with central venous catheters (CVCs) concluded that PCT could be a helpful biomarker for early diagnosis of CRBSI in this patient population (PUBMED:31910808). These findings support the potential role of PCT as a diagnostic marker for CRBSI in pediatric patients.
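To make the diagnostic-accuracy figures behind this answer concrete, the sketch below computes sensitivity, specificity, PPV and NPV from a 2x2 table. The counts are back-calculated from PUBMED:31910808 (13 CRBSI and 32 non-CRBSI patients; sensitivity 69.23% and specificity 87.50% at the 0.780 ng/ml PCT cut-off) and are an inference for illustration, not the published raw table.

def diagnostic_metrics(tp, fn, fp, tn):
    # Standard 2x2 diagnostic-test metrics.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Counts inferred from the reported percentages (assumption, for illustration only):
# 9 true positives, 4 false negatives, 4 false positives, 28 true negatives.
sens, spec, ppv, npv = diagnostic_metrics(tp=9, fn=4, fp=4, tn=28)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, PPV {ppv:.2%}, NPV {npv:.2%}")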
Instruction: Choice of approach for hepatectomy for hepatocellular carcinoma located in the caudate lobe: isolated or combined lobectomy? Abstracts: abstract_id: PUBMED:22876044 Choice of approach for hepatectomy for hepatocellular carcinoma located in the caudate lobe: isolated or combined lobectomy? Aim: To investigate the significance of the surgical approaches in the prognosis of hepatocellular carcinoma (HCC) located in the caudate lobe with a multivariate regression analysis using a Cox proportional hazard model. Methods: Thirty-six patients with HCC underwent caudate lobectomy at a single tertiary referral center between January 1995 and June 2010. In this series, left-sided, right-sided and bilateral approaches were used. The outcomes of patients who underwent isolated caudate lobectomy or caudate lobectomy combined with an additional partial hepatectomy were compared. The survival curves of the isolated and combined resection groups were generated by the Kaplan-Meier method and compared by a log-rank test. Results: Sixteen (44.4%) of 36 patients underwent isolated total or partial caudate lobectomy whereas 20 (55.6%) received a total or partial caudate lobectomy combined with an additional partial hepatectomy. The median diameter of the tumor was 6.7 cm (range, 2.1-15.8 cm). Patients who underwent an isolated caudate lobectomy had significantly longer operative time (240 min vs 170 min), longer length of hospital stay (18 d vs 13 d) and more blood loss (780 mL vs 270 mL) than patients who underwent a combined caudate lobectomy (P < 0.05). There were no perioperative deaths in either group of patients. The complication rate was higher in the patients who underwent an isolated caudate lobectomy than in those who underwent combined caudate lobectomy (31.3% vs 10.0%, P < 0.05). The 1-, 3- and 5-year disease-free survival rates for the isolated caudate lobectomy and the combined caudate lobectomy groups were 54.5%, 6.5% and 0% and 85.8%, 37.6% and 0%, respectively (P < 0.05). The corresponding overall survival rates were 73.8%, 18.5% and 0% and 93.1%, 43.6% and 6.7% (P < 0.05). Conclusion: The caudate lobectomy combined with an additional partial hepatectomy is preferred because this approach is technically less demanding and offers an adequate surgical margin. abstract_id: PUBMED:28715719 Isolated caudate lobectomy: Left-sided approach. Case reports. Introduction: The caudate lobe is a distinct liver lobe and surgical resection requires expertise and precise anatomic knowledge. The left-sided approach was described for resection of small tumors originating in the Spiegel lobe, but the procedure has now been performed even for tumors of more than five centimeters. The aim of this study is to present three cases of caudate lobe tumor that underwent isolated lobectomy by a left-sided approach. Presentation Of Case: Three patients, with colorectal cancer metastasis, hepatocellular carcinoma and neuroendocrine tumor metastasis, underwent resection. After a modified Makuuchi incision, early control of the short hepatic and short portal veins before hepatectomy was performed. The operative time was 200, 270 and 230 min respectively. No blood transfusion was used and no postoperative complications were observed. The length of stay was 7, 11 and 5 days respectively. Discussion: Some approaches have been described to access and resect tumors of the caudate lobe, including the left-sided approach, right-sided approach, combined left- and right-sided approach and the anterior transhepatic approach.
For liver resection in patients with malignant disease, parenchymal preservation is important in order to avoid postoperative liver failure or because of the risk of a second hepatectomy. In these patients isolated caudate lobectomy is a safe option. Conclusion: Isolated caudate lobectomy is a feasible procedure. A left-sided approach can be performed even for tumors larger than 5 cm. abstract_id: PUBMED:30242643 Laparoscopic Extended Left Hemi-Hepatectomy plus Caudate Lobectomy for Caudate Lobe Hepatocellular Carcinoma. Background: Laparoscopic hepatectomy of the caudate lobe is classified as one of the most difficult procedures to perform. For a malignant caudate lobe tumor that is close to the hepatic veins, extended hemi-hepatectomy may be more suitable. Methods: A 60-year-old man was diagnosed with hepatitis B virus infection-related hepatocellular carcinoma (HCC). His liver function was Child-Pugh A and the ICG-15 test was 2.1%. Abdominal CT showed a 5 × 6 cm mass located in the caudate lobe with the middle and left hepatic veins encroached. Caudate lobectomy was not adopted because of suspected hepatic vein invasion by the HCC. Instead, laparoscopic extended left hemi-hepatectomy plus caudate lobectomy was planned. Results: The patient was placed in the supine position. Three 12-mm trocars and two 5-mm trocars were used. After full mobilization, the caudate lobe was exposed. The third porta hepatis was dissected before parenchymal transection. The cut line was along the right side of the middle hepatic vein. A Pringle maneuver (15 min clamping and 5 min release; total Pringle time was 60 min over 4 clampings) was performed during transection. The superficial tissue was divided using ultrasonic shears, while the deeper tissue was divided using LigaSure. The left pedicle was dissected and transected meticulously. The main trunk of the right hepatic vein was continuously exposed from the caudal side. A linear stapler was used to transect the middle and left hepatic veins from the root. Bipolar coagulation was used for hemostasis. The specimen was removed through a suprapubic incision. The operation time was 200 min and estimated blood loss was 100 ml. HCC was confirmed by postoperative pathological examination. The postoperative course was uneventful. Conclusions: Laparoscopic extended left hemi-hepatectomy plus caudate lobectomy is feasible and safe for caudate lobe HCC with suspected hepatic vein invasion. abstract_id: PUBMED:35790286 Standardized and Feasible Laparoscopic Approach for Tumors Located in the Caudate Lobe. Background/aim: Although laparoscopic hepatectomy has been widely used in the management of liver tumors for its reduced invasiveness and magnified view, in the caudate lobe it remains challenging, especially for patients with cirrhosis. Thus, this study aimed to evaluate patients undergoing laparoscopic hepatectomy for hepatic tumors in the caudate lobe and to establish strategies for performing this procedure. Patients And Methods: Laparoscopic hepatectomy in the caudate lobe was performed in nine patients. We performed inflow control to reduce bleeding during hepatic transection and retraction of the left lateral section to the cranial side to obtain a sufficient surgical field using a Nathanson liver retractor. We approached tumors in the Spiegel lobe (SP) from the caudal side for segment 1 (S1) partial hepatectomy and from the caudal and left side for Spiegel lobectomy, the lower paracaval portion (PC) from the caudal side for S1 partial hepatectomy, and the upper PC from the caudal and bilateral sides for total caudate lobectomy.
Results: In 6 cases the tumors were in the SP and in 3 cases in the PC. The types of laparoscopic hepatectomy performed were total caudate lobectomy (n=1), Spiegel lobectomy (n=2), and partial hepatectomy of segment 1 (n=6). All the tumors were curatively resected, and no patient had complications. Operative time for tumors located in the PC was significantly longer than that for tumors located in the SP. Laparoscopic hepatectomy in the caudate lobe was safely performed for five patients with liver cirrhosis. Conclusion: Laparoscopic hepatectomy in the caudate lobe may become the standard surgical technique with hepatic inflow control, sufficient surgical field exposure, and an appropriate approach. abstract_id: PUBMED:19936187 Anterior hepatic transection for caudate lobectomy. Resection of the caudate lobe (segment I - dorsal sector, segment IX - right paracaval region, or both) is often technically difficult due to the lobe's location deep in the hepatic parenchyma and because it is adjacent to the major hepatic vessels (e.g., the left and middle hepatic veins). A literature search was conducted using Ovid MEDLINE for the terms "caudate lobectomy" and "anterior hepatic transection" (AHT) covering 1992 to 2007. AHT was used in 110 caudate lobectomies that are discussed in this review. Isolated caudate lobectomy was performed on 28 (25.4%) patients, with 11 cases (11%) associated with hepatectomy, while 1 (0.9%) was associated with anterior segmentectomy. Complete caudate lobectomy was performed on 82 (74.5%) patients. Hepatocellular carcinoma was observed in 106 (96.3%) patients, while 1 (0.9%) had hemangioma and 3 (2.7%) had metastatic caudate tumors. AHT was used in 108 (98.1%) caudate resections, while AHT associated with a right-sided approach was performed in 2 (1.8%) cases. AHT is recommended for tumors located in the paracaval portion of the caudate lobe (segment IX). AHT is usually a safe and potentially curative surgical option. abstract_id: PUBMED:27069152 Modified Liver Hanging Maneuver for En-bloc Right-sided Hepatectomy Combined with Total Caudate Lobectomy for Colon-Cancer Liver Metastasis and Hepatocellular Carcinoma. Background: A right-sided hepatectomy with total caudate lobectomy is indicated for colorectal-cancer liver metastases (CLM) and hepatocellular carcinomas (HCC) located in the caudate lobe with extension to the right lobe of the liver. Caudate-lobe resection (i.e. segmentectomy 1 according to the Brisbane terminology) is one of the most difficult types of hepatectomy to carry out radically and safely. The deep portion of hepatic transection around the caudate lobe, hepatic veins and inferior vena cava is a critical source of massive bleeding. Prolonged transection can increase blood loss. Patients And Methods: We analyzed the outcome of 10 patients who underwent right-sided hepatectomy with caudate lobectomy using a modified liver hanging maneuver (mLHM) in comparison with 16 patients who underwent the operation without mLHM. Results: Blood loss during liver transection and blood loss per unit area of cut surface were significantly less in the mLHM group (p=0.014 and 0.015, respectively). In patients diagnosed pathologically with liver impairment, transection time was significantly shorter in the mLHM group (p=0.038), as were red blood cell transfusion volume (p=0.042) and blood loss (p=0.049) during transection.
Conclusion: Use of mLHM can potentially improve surgical outcomes by reducing blood loss and transection time, which are especially important for patients with liver impairment. abstract_id: PUBMED:35609476 Hepatocellular carcinoma with situs inversus totalis treated by caudate lobectomy: A case report. Introduction: Situs inversus totalis (SIT) is a congenital anatomical variant in which organs and vasculature are positioned in a mirror-image relationship to the normal condition. Therefore, the surgical procedures need to be carefully planned with these factors in mind. Case Presentation: A 57-year-old man with SIT was diagnosed with a hepatocellular carcinoma (HCC) and was scheduled for caudate lobectomy. As preoperative preparation, 3D reconstructed images were created based on the contrast-enhanced CT images, and careful simulations were performed on the vascular anomalies and location of the tumor. There was a replaced left hepatic artery forming a common trunk with a left gastric artery. In addition, using media player software, a previous caudate lobectomy video was played in right and left inverted mode to simulate the abdominal surgical field image in SIT. The operative time was 285 min, and the blood loss was 440 ml. The careful preoperative simulation allowed us to proceed with the surgery without significant discomfort. Conclusion: Even in the case of hepatocellular carcinoma with SIT, hepatectomy for hepatocellular carcinoma can be safely performed by careful preoperative simulations. abstract_id: PUBMED:36074095 A Double Suspension Technique for Laparoscopic Isolated Caudate Lobectomy. Background: Laparoscopic isolated caudate lobectomy is still a challenging procedure for hepatobiliary surgeons because of its deep location and narrow operating space. Hilar exposure and adequate operation space play an important role during laparoscopic caudate lobectomy. Very few references are available on this technique, and in this study, we present a new suspension technique to assist laparoscopic caudate lobectomy. Materials and Methods: The data of patients with caudate hepatic tumors who underwent laparoscopic isolated caudate lobectomy with or without the double suspension technique at the Eastern Hepatobiliary Surgery Hospital were retrospectively analyzed. Results: A total of 25 patients underwent laparoscopic isolated caudate lobectomy at Eastern Hepatobiliary Surgery Hospital between June 2016 and March 2022. Eight patients had perioperative complications, and no patient died within 30 days after surgery. There were no significant differences between the two groups in terms of conversion rate (8.3% versus 7.7%; P = .954), complication rate (25.0% versus 38.5%; P = .480), length of stay (8.0 [6.0-11.0] days versus 9.0 [6.0-19.0] days; P = .098), and postoperative liver function changes. Patients who underwent resection in the suspension group had shorter operation time (154.9 ± 44.3 minutes versus 224 ± 86.3 minutes; P = .018), inferior vena cava dissection time (30.1 ± 5.4 minutes versus 44.8 ± 7.4 minutes; P < .001), and less bleeding (125.0 [20-800.0] mL versus 350 [80-850.0] mL, P = .011). Conclusions: This double suspension technique is a safe and feasible method to assist laparoscopic caudate lobectomy. It provides clear exposure and adequate surgical space, thereby shortening the operation time and reducing intraoperative blood loss. abstract_id: PUBMED:29151697 Anatomic isolated caudate lobectomy: Is it possible to establish a standard surgical flow?
Aim: To establish the surgical flow for anatomic isolated caudate lobe resection. Methods: The study was approved by the ethics committee of the Second Affiliated Hospital Zhejiang University School of Medicine (SAHZU). From April 2004 to July 2014, 20 patients were enrolled who underwent anatomic isolated caudate lobectomy at SAHZU. Clinical and postoperative pathological data were analyzed. Results: Of the total 20 cases, 4 received isolated complete caudate lobectomy (20%) and 16 received isolated partial caudate lobectomy (80%). There were 4 cases with the left approach (4/20, 20%), 6 cases with the right approach (6/20, 30%), 7 cases with the bilateral combined approach (7/20, 35%), 3 cases with the anterior approach (3/20, 15%), and the hanging maneuver was also combined in 2 cases. The median tumor size was 5.5 cm (2-12 cm). The median intra-operative blood loss was 600 mL (200-5700 mL). The median intra-operative blood transfusion volume was 250 mL (0-2400 mL). The median operation time was 255 min (110-510 min). The median post-operative hospital stay was 14 d (7-30 d). The 1- and 3-year survival rates for malignant tumor were 88.9% and 49.4%, respectively. Conclusion: Caudate lobectomy was a challenging procedure. It was demonstrated that anatomic isolated caudate lobectomy can be done safely and effectively. abstract_id: PUBMED:33411037 Isolated caudate lobectomy using a modified hanging maneuver. Background: The caudate lobe is located deep in the dorsal portion of the liver. Complete resection is an extremely demanding surgery due to the limited surgical field, especially in cases with severe intra-abdominal complications. A major concern of isolated caudate lobectomy is the difficulty associated with securing the contralateral visual field during parenchymal transection. To overcome this issue, we present a new technique for isolated caudate lobectomy that uses a modified hanging maneuver. Methods: We performed an anatomical isolated caudate lobectomy via the high dorsal resection technique using our new modified hanging maneuver in two patients with HCC in November and December 2019. Results: Patient 1 was severely obese, so the upper abdominal cavity was occupied by a large amount of great omental fat, and fibrous adhesions were observed around the spleen. Patient 2 had undergone six preoperative treatments, and a high degree of adhesion was observed in the abdominal cavity around the liver. It was difficult to secure the surgical field due to severe abdominal complications in both cases. The total operation times in these two cases were 617 and 763 min, respectively, while the liver parenchymal dissection times of the caudate lobe were 96 and 108 min, respectively. The resection margin was negative in both patients (R0). Neither patient had any complications after surgery; both were discharged on postoperative day 14. Conclusion: Our modified hanging maneuver is useful, particularly in cases with a narrow surgical field due to severe adhesions, bulky tumors, and/or hypertrophy of the Spiegel lobe. Answer: The choice of approach for hepatectomy for hepatocellular carcinoma (HCC) located in the caudate lobe can be either isolated caudate lobectomy or combined caudate lobectomy with additional partial hepatectomy. The decision on which approach to use depends on various factors, including the technical demands of the surgery, the size of the tumor, and the potential for achieving adequate surgical margins. 
One study suggests that caudate lobectomy combined with an additional partial hepatectomy is preferred because this approach is technically less demanding and offers an adequate surgical margin. Patients who underwent combined caudate lobectomy had significantly shorter operative times, shorter hospital stays, less blood loss, and lower complication rates compared to those who underwent isolated caudate lobectomy. Additionally, the combined approach showed better disease-free and overall survival rates (PUBMED:22876044). However, isolated caudate lobectomy can also be a feasible procedure. A left-sided approach has been described for resection of small tumors originating in the Spiegel lobe and has been performed even for tumors larger than five centimeters. In the reported cases, no blood transfusion was used, and no postoperative complications were observed, with relatively short lengths of stay (PUBMED:28715719). Laparoscopic approaches have also been explored, with studies indicating that laparoscopic extended left hemi-hepatectomy plus caudate lobectomy can be feasible and safe for caudate lobe HCC with suspected hepatic vein invasion (PUBMED:30242643). Another study highlighted that laparoscopic hepatectomy in the caudate lobe may become the standard surgical technique with hepatic inflow control, sufficient surgical field exposure, and an appropriate approach (PUBMED:35790286). In conclusion, the choice between isolated and combined lobectomy for HCC in the caudate lobe should be individualized based on the specific characteristics of the tumor and the patient, as well as the surgeon's expertise and preference. Both approaches have been shown to be viable, with the combined approach potentially offering better outcomes in terms of operative time, blood loss, and survival rates, while isolated lobectomy can be safe and feasible, particularly with the use of advanced laparoscopic techniques.
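The survival figures cited in this answer (for example, the 1-, 3- and 5-year disease-free and overall survival rates in PUBMED:22876044) come from Kaplan-Meier curves compared with a log-rank test. As a minimal, purely illustrative sketch of that general method, assuming invented follow-up times rather than data from any of the studies above, the Python code below computes a Kaplan-Meier survival curve from (time, event) pairs.

# Minimal Kaplan-Meier product-limit estimator. The times and event flags are
# invented for illustration; event=1 means the event (e.g., recurrence or death)
# occurred, event=0 means the observation was censored at that time.
def kaplan_meier(times, events):
    """Return a list of (time, survival probability) at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        censored = sum(1 for tt, e in data if tt == t and e == 0)
        if deaths > 0:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= deaths + censored
        i += deaths + censored
    return curve

if __name__ == "__main__":
    # Hypothetical disease-free survival times in months (not study data).
    times = [6, 9, 12, 12, 18, 24, 30, 36, 40, 48]
    events = [1, 1, 1, 0, 1, 1, 0, 1, 0, 0]
    for t, s in kaplan_meier(times, events):
        print(f"month {t}: S(t) = {s:.2f}")

Comparing two such curves (isolated versus combined lobectomy) would then use a log-rank test on the event tables at each event time, a step normally delegated to a dedicated statistics library.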
Instruction: Are hematinic deficiencies the cause of anemia in chronic heart failure? Abstracts: abstract_id: PUBMED:15131553 Are hematinic deficiencies the cause of anemia in chronic heart failure? Background: Anemia in chronic heart failure (CHF) is common, varying in prevalence between 14.4% and 55%, and is more frequent in patients with more severe heart failure. Patients with CHF who have anemia have a poorer quality of life, higher hospital admission rates, and reduced exercise tolerance. We explored the relation between hematinic levels and hemoglobin (Hb) levels and exercise tolerance in a group of patients with CHF. Methods: We analyzed data from 173 patients with left ventricular systolic dysfunction (LVSD), 123 patients with symptoms of heart failure, but preserved left ventricular (LV) systolic function ("diastolic dysfunction"), and 58 control subjects of similar age. Each underwent echocardiography, a 6-minute walk test, and blood tests for renal function and Hb and hematinic levels (vitamin B12, iron, and folate). We classified patients as having no anemia (Hb level >12.5 g/dL), mild anemia (Hb level from 11.5-12.5 g/dL), or moderate anemia (Hb level <11.5 g/dL). Results: Of patients with LVSD, 16% had moderate anemia and 19% had mild anemia. Of patients with preserved LV function, 16% had moderate anemia and 17% had mild anemia. Four control subjects had a Hb level <12.5 g/dL. Of all patients, 6% were vitamin B12 deficient, 13% were iron deficient, and 8% were folate deficient. There was no difference between patients with LVSD and the diastolic dysfunction group. In patients with LVSD, the average Hb level was lower in New York Heart Association class III than classes II and I. The distance walked in 6 minutes correlated with Hb level in both groups of patients with CHF (r = 0.29; P <.0001). Patients with anemia achieved a lower pVO2 (15.0 [2.3] vs 19.5 [4.4], P <.05). Peak oxygen consumption correlated with Hb level (r = 0.21, P <.05) in the patients, but not in the control subjects. In patients with anemia, the mean creatinine level was higher than in patients with a Hb level >12.5 g/dL, but there was no clear relationship with simple regression. Hematocrit level and mean corpuscular volume were not different in the patients with diastolic dysfunction, patients with LV dysfunction, or the control subjects. Hematocrit levels were not influenced by diuretic dose. Patients with anemia were not more likely to be hematinic deficient than patients without anemia. Conclusions: Patients with symptoms and signs of CHF have a high prevalence of anemia (34%) whether they have LV dysfunction or diastolic dysfunction, but few patients have hematinic deficiency. Hemoglobin levels correlate with subjective and objective measures of severity and renal function. abstract_id: PUBMED:27254409 Pharmacotherapy for comorbidities in chronic heart failure: a focus on hematinic deficiencies, diabetes mellitus and hyperkalemia. Introduction: Chronic heart failure (HF) is frequently accompanied by one or more comorbidities. The presence of comorbidities in chronic HF is strongly correlated to HF severity and impaired outcome. Areas Covered: This review will address several comorbidities with high prevalence and/or high impact in patients with chronic HF, including diabetes, anemia, hematinic deficiencies, and hyperkalemia. The background and subsequent pharmacotherapeutic options of these comorbidities will be discussed. For this review, a MEDLINE search was performed.
Expert Opinion: Heart failure is increasingly considered a multimorbid syndrome, including metabolic derangements and chronic inflammation. Persistent metabolic derangements and low-grade inflammation might lead to progression of HF and the development of comorbidities. Although several comorbidity-specific drugs became available in the past decade, most of these therapies are studied in relatively small cohorts using surrogate end-points. Therefore, larger studies are needed to address whether treating these comorbidities will improve patient outcome in chronic HF. abstract_id: PUBMED:27439011 Prevalence and Outcomes of Anemia and Hematinic Deficiencies in Patients With Chronic Heart Failure. Importance: Detailed information on the prevalence, associations, and consequences of anemia and iron deficiency in epidemiologically representative outpatients with chronic heart failure (HF) is lacking. Objective: To investigate the epidemiology of anemia and iron deficiency in a broad range of patients referred to a cardiology clinic with suspected HF. Design, Setting, And Participants: We collected clinical data, including hemoglobin, serum iron, transferrin saturation, and serum ferritin concentrations, on consecutive patients referred with suspected HF to a single outpatient clinic serving a local community from January 1, 2001, through December 31, 2010. Follow-up data were censored on December 13, 2011. Patients underwent phenotyping by echocardiography and plasma N-terminal pro-brain natriuretic peptide measurement and were followed for up to 10 years. Main Outcome Measures: Prevalences of anemia and iron deficiency and their interrelationship, all-cause mortality, and cardiovascular mortality. Results: Of 4456 patients enrolled in the study, the median (interquartile range) age was 73 (65-79) years, 2696 (60.5%) were men, and 1791 (40.2%) had left ventricular systolic dysfunction (LVSD). Of those without LVSD, plasma N-terminal pro-brain natriuretic peptide concentration was greater than 400 pg/mL in 1172 (26.3%), less than 400 pg/mL in 841 (18.9%), and not measured in 652 (14.6%). Overall, 1237 patients (27.8%) had anemia, with a higher prevalence (987 [33.3%]) in patients who met the criteria for HF with or without LVSD. Depending on the definition applied, iron deficiency was present in 270 (43.2%) to 425 (68.0%) of patients with and 260 (14.7%) to 624 (35.3%) of patients without anemia. Lower hemoglobin (hazard ratio 0.92; 95% CI, 0.89-0.95; P < .001) and serum iron (hazard ratio 0.98; 95% CI, 0.97-0.99; P = .007) concentrations were independently associated with higher all-cause and cardiovascular mortality in multivariable analyses. Conclusions And Relevance: Anemia is common in patients with HF and often associated with iron deficiency. Both anemia and iron deficiency are associated with an increase in all-cause and cardiovascular mortality and might both be therapeutic targets in this population. abstract_id: PUBMED:28382470 Comorbidities in Heart Failure. Comorbidities frequently accompany chronic heart failure (HF), contributing to increased morbidity and mortality, and an impaired quality of life. We describe the prevalence of several high-impact comorbidities in chronic HF patients and their impact on morbidity and mortality. Furthermore, we try to explain the underlying pathophysiological processes and the complex interaction between chronic HF and specific comorbidities.
Although common risk factors are likely to contribute, it is reasonable to believe that factors associated with HF might cause other comorbidities and vice versa. Potential factors are inflammation, neurohormonal activation, and hemodynamic changes. abstract_id: PUBMED:18607519 Anemia and erythropoietin in heart failure. Anemia is frequently observed in patients with chronic heart failure (CHF) and is related to an impaired outcome. The origin of anemia in CHF is diverse and is associated with several factors including renal failure, resistance of the bone marrow to erythropoietin (EPO), hematinic deficiencies, and medication use. Recently, several small-scale clinical trials have shown that EPO treatment might improve clinical parameters in anemic heart failure patients. In addition, several preclinical studies have shown that EPO possesses non-hematopoietic effects. This current review focuses on the etiology, consequences, and treatment of anemia in heart failure patients. The pleiotropic effects of EPO in an experimental setting will also be discussed. Heart Fail Monit 2008;6(1):28-33. abstract_id: PUBMED:16860030 Anemia, renal dysfunction, and their interaction in patients with chronic heart failure. Anemia and renal dysfunction (RD) are frequent complications seen in chronic heart failure (HF). However, the prevalence and interaction of these co-morbidities in a representative population of outpatients with chronic HF is poorly described. In this study, it was sought to determine the association between RD and anemia in patients with HF enrolled in a community-based HF program. Nine hundred fifty-five patients with HF due to left ventricular systolic dysfunction were investigated for the prevalence of anemia and its cause and followed for a median of 531 days. Anemia was defined as hemoglobin < 12.0 g/dl in women and < 13.0 g/dl in men. RD was defined as a calculated glomerular filtration rate of < 60 ml/min. The prevalence of anemia was 32%. Fifty-three percent of patients with and 27% of those without anemia had > or = 1 test suggesting hematinic deficiency. The prevalence of RD was 54%. Forty-one percent of patients with and 22% of patients without RD had anemia, with similar proportions associated with iron deficiency in the presence or absence of RD. Anemia and RD independently predicted a worse outcome, and this effect was additive. In conclusion, in outpatients with chronic HF, anemia and RD are common and co-exist but confer independent prognostic information. A deficiency of conventional hematinic factors may cause about 1/3 of anemia in this clinical setting. abstract_id: PUBMED:16172283 Levels of hematopoiesis inhibitor N-acetyl-seryl-aspartyl-lysyl-proline partially explain the occurrence of anemia in heart failure. Background: Anemia is common in patients with chronic heart failure (CHF) and is associated with a poor prognosis. However, only a minority of patients with CHF have impaired renal function or underlying hematinic deficiencies. It has been shown that inhibition of the renin-angiotensin system is associated with the development of anemia. The aim of the present study was to determine possible mechanisms linking anemia to renin-angiotensin system activity in CHF patients. Methods And Results: We initially evaluated 98 patients with advanced stable CHF who were treated with ACE inhibitors (left ventricular ejection fraction, 28+/-1%; age, 69+/-1 years; 80% male), 10 of whom had an unexplained anemia (normal hematinics and no renal failure).
These 10 anemic patients were matched with 10 nonanemic patients in terms of age and left ventricular ejection fraction. Serum ACE activity was 73% lower in anemic CHF patients compared with nonanemic CHF patients (P=0.018). Moreover, serum of these patients inhibited in vitro the proliferation of bone marrow-derived erythropoietic progenitor cells of healthy donors by 17% (P=0.003). Levels of the hematopoiesis inhibitor N-acetyl-seryl-aspartyl-lysyl-proline (Ac-SDKP), which is almost exclusively degraded by ACE, were significantly higher in anemic CHF patients and were clearly correlated to erythroid progenitor cell proliferation (r=-0.64, P=0.001). Conclusions: Serum ACE activity is markedly lower in anemic CHF patients, and serum of these patients inhibits hematopoiesis. The clear correlation between Ac-SDKP and proliferation of erythroid progenitor cells suggests an inhibitory role of Ac-SDKP on hematopoiesis in CHF patients, which may explain the observed anemia in patients treated with ACE inhibitors. abstract_id: PUBMED:35436828 Parenteral Iron in Heart Failure: An Indian Perspective. Iron deficiency (ID) is a clinically significant comorbidity usually reported with acute and chronic heart failure (HF) and associated with prognostic outcomes, independent of anemia. The exact cause of ID and anemia and their association with HF is not entirely clear. Current evidence highlights neuro-hormonal and proinflammatory cytokine activation and renal dysfunction favoring the development of anemia and ID. Intravenous iron therapy (IV Iron) improves exercise capacity, HF-associated symptoms and health-related quality of life. Oral iron therapy might be less effective compared to IV Iron in HF patients. At the same time, large, well-designed cardiovascular outcome studies are warranted to establish the long-term efficacy and safety of IV Iron in patients with HF with coexisting ID. In India, the high prevalence of anemia increases the burden of ID in patients with HF. HF being a complex multifactorial disease, it is essential to understand the association of ID with HF, which can be easily corrected to improve patient outcomes. At the same time, there is a need to generate more robust clinical evidence on IV Iron therapy in Indian patients with HF. abstract_id: PUBMED:18392791 Approaches to the treatment of anaemia in patients with chronic heart failure. An association between anaemia, poor functional status and, compared to non-anaemic patients, worse clinical status and a higher risk of hospitalisation and death has been consistently reported in chronic heart failure (CHF), although cause and effect has not been proven. While it is attractive to think that correction of a co-morbidity that exacerbates already diminished delivery of oxygen to the tissues in heart failure is likely to be beneficial, the possible haemodynamic effects of increasing haemoglobin, for example vasoconstriction, might not be. Consequently, the balance of benefit and risk of anaemia correction in CHF is uncertain, may vary according to the severity of anaemia (and other factors) and needs to be properly evaluated. To date, most studies of anaemia correction in CHF have used erythropoiesis stimulating agents (ESAs). The trials with erythropoietin have been of small size, uncontrolled or unblinded/single blind, raising concerns about interpretation of subjective outcomes. In addition, the analyses of these trials have been suboptimal.
Two double-blind, placebo-controlled, darbepoetin studies have been published in full. Neither showed an improvement in functional capacity or consistent effect on patient reported symptoms/quality of life. Darbepoetin is, however, currently being tested in a large-scale, phase III morbidity and mortality trial, the Reduction of Events with Darbepoetin alfa in Heart Failure (RED-HF), which should contribute important information on the safety and efficacy of ESAs in this syndrome. Other approaches, notably parenteral iron supplementation, are also being evaluated and other agents for anaemia correction are under development. abstract_id: PUBMED:32843482 Efficacy and safety of iron therapy in patients with chronic heart failure and iron deficiency: a systematic review and meta-analysis based on 15 randomised controlled trials. Trials studying iron administration in patients with chronic heart failure (CHF) and iron deficiency (ID) have sprung up in recent years but the results remain inconsistent. The aim of this meta-analysis was to comprehensively evaluate the efficacy and safety of iron therapy in patients with CHF and ID. A literature search was conducted across PubMed, Embase, Cochrane Library, OVID and Web of Science up to 31 July 2019 to search for randomised controlled trials (RCT) comparing iron therapy with placebo in CHF with ID, regardless of the presence of anaemia. Published studies reporting data on any of the following outcomes were included: all-cause death, cardiovascular hospitalisation, adverse events, New York Heart Association (NYHA) functional class, left ventricular ejection fraction (LVEF), N-terminal pro b-type natriuretic peptide, peak oxygen consumption, 6 min walking test (6MWT) distance and quality of life (QoL) parameters. 15 RCTs with a total of 1627 patients (911 in iron therapy and 716 in control) were included. Iron therapy was demonstrated to reduce the risk of cardiovascular hospitalisation (OR 0.35, 95% CI 0.12 to 0.99, p=0.049), but was ineffective in reducing all-cause death (OR 0.59, 95% CI 0.33 to 1.06, p=0.078) or cardiovascular death (OR 0.80, 95% CI 0.39 to 1.63, p=0.540). Iron therapy resulted in a reduction in NYHA class (mean difference (MD) -0.73, 95% CI -0.99 to -0.47, p<0.001), an increase in LVEF (MD +4.35, 95% CI 0.69 to 8.00, p=0.020), 6MWT distance (MD +35.44, 95% CI 11.55 to 59.33, p=0.004) and an improvement in QoL: EQ-5D score (MD +4.07, 95% CI 0.84 to 7.31, p=0.014); Minnesota Living With Heart Failure Questionnaire score (MD -19.47, 95% CI -23.36 to -15.59, p<0.001) and Patients Global Assessment (PGA) scale (MD 0.71, 95% CI 0.32 to 1.10, p<0.001). There was no significant difference in adverse events or serious adverse events between the iron treatment group and the control group. Iron therapy reduces cardiovascular hospitalisation in patients with CHF with ID, and additionally improves cardiac function, exercise capacity and QoL in patients with CHF with ID and anaemia, without an increase in adverse events. Answer: Hematinic deficiencies are not the primary cause of anemia in chronic heart failure (CHF). While anemia is common in patients with CHF, few patients have hematinic deficiency.
A study analyzing data from patients with left ventricular systolic dysfunction (LVSD) and those with symptoms of heart failure but preserved LV systolic function found that only a small percentage were deficient in vitamin B12 (6%), iron (13%), or folate (8%), indicating that hematinic deficiencies are not the predominant cause of anemia in this population (PUBMED:15131553). Moreover, the prevalence of anemia in patients with CHF is high, but the association between anemia and hematinic deficiencies is not straightforward. For instance, in a community-based HF program, while anemia and renal dysfunction (RD) were common and co-existed, they conferred independent prognostic information, and a deficiency of conventional hematinic factors may cause about one-third of anemia in this clinical setting (PUBMED:16860030). Additionally, the etiology of anemia in CHF is diverse and associated with several factors, including renal failure, resistance of the bone marrow to erythropoietin (EPO), hematinic deficiencies, and medication use (PUBMED:18607519). Furthermore, levels of the hematopoiesis inhibitor N-acetyl-seryl-aspartyl-lysyl-proline (Ac-SDKP) were found to be higher in anemic CHF patients, suggesting an inhibitory role on hematopoiesis that may explain the observed anemia in patients treated with ACE inhibitors (PUBMED:16172283). In summary, while hematinic deficiencies can contribute to anemia in CHF, they are not the sole cause. Anemia in CHF is multifactorial, with other factors such as renal dysfunction, inflammation, neurohormonal activation, and medication use also playing significant roles.
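The meta-analysis summarized above (PUBMED:32843482) reports pooled odds ratios with 95% confidence intervals for outcomes such as cardiovascular hospitalisation and all-cause death. As a rough, hedged illustration of how study-level odds ratios are commonly pooled, the Python sketch below implements generic inverse-variance fixed-effect pooling; the three input studies are invented, and the published analysis may well have used a different model (for example Mantel-Haenszel or random effects).

import math

# Illustrative inverse-variance fixed-effect pooling of study-level odds ratios.
# Each study contributes (odds ratio, lower 95% CI, upper 95% CI); the example
# values are invented and are not the trials from the cited meta-analysis.
def pool_odds_ratios(studies):
    """Pool (OR, ci_low, ci_high) tuples; return the pooled OR with a 95% CI."""
    weights = []
    log_ors = []
    for or_value, lo, hi in studies:
        # Recover the standard error of log(OR) from the reported 95% CI width.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weights.append(1.0 / se ** 2)
        log_ors.append(math.log(or_value))
    pooled_log = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

if __name__ == "__main__":
    hypothetical_studies = [(0.45, 0.20, 0.99), (0.70, 0.35, 1.40), (0.55, 0.25, 1.20)]
    pooled_or, lo, hi = pool_odds_ratios(hypothetical_studies)
    print(f"pooled OR {pooled_or:.2f} (95% CI {lo:.2f}-{hi:.2f})")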
Instruction: Hypertension, pregnancy and weather: is seasonality involved? Abstracts: abstract_id: PUBMED:28300655 Health Conditions and Racial Differences Among Justice-Involved Adolescents, 2009 to 2014. Objective: Providers can optimize care for high-risk adolescents by understanding the health risks among the 1 million US adolescents who interact with the justice system each year. We compared the prevalence of physical health, substance use, and mood disorders among adolescents with and without recent justice involvement and analyzed differences according to race/ethnicity. Methods: Cross-sectional analysis using the 2009 to 2014 National Survey on Drug Use and Health. Prevalence data were adjusted for sociodemographic differences between adolescents with and without justice involvement. Justice-involved adolescents had a history of past year arrest, parole/probation, or juvenile detention. Results: Our sample consisted of adolescents aged 12 to 17 years with (n = 5149) and without (n = 97,976) past year justice involvement. In adjusted analyses, adolescents involved at any level of the justice system had a significantly higher prevalence of substance use disorders (P < .001), mood disorders (P < .001), and sexually transmitted infections (P < .01). Adolescents on parole/probation or in juvenile detention in the past year had a higher prevalence of asthma (P < .05) and hypertension (P < .05) compared with adolescents without justice involvement. Among justice-involved adolescents, African American adolescents were significantly less likely to have a substance use disorder (P < .001) or mood disorder (P < .01) compared with white or Hispanic adolescents, but had significantly higher prevalence of physical health disorders (P < .01). Conclusions: Adolescents involved at all levels of the justice system have high-risk health profiles compared with the general adolescent population, although these risks differ across racial/ethnic groups. Policymakers and health care providers should ensure access to coordinated, high-quality health care for adolescents involved at all levels of the justice system. abstract_id: PUBMED:38158777 Outcome of children with multicystic dysplastic kidney: Does involved side matter? Background: Multicystic dysplastic kidney (MCDK) is a common anomaly detected on antenatal ultrasound. We aimed to assess the profile of children with MCDK and to investigate whether the involved side has any effect on outcome. Methods: Thirty-nine patients with MCDK and 20 controls were enrolled. Patients with estimated glomerular filtration rate (eGFR) values over 90 mL/min/1.73 m2 were compared with controls. Comparison was made between the involved sides. Results: MCDK was right-sided in 20 (51.3%) and left-sided in 19 (48.7%) patients. 33.3% had additional urinary tract abnormality, 10.2% had systemic abnormality. 82% showed contralateral kidney enlargement. 48.7% involuted, 17.9% underwent nephrectomy. 35.8% suffered from urinary tract infection (UTI). 5.1% had renal scarring (RS). 30% developed microalbuminuria. 12.8% were complicated by hypertension. 17.9% progressed to chronic kidney disease (CKD). Hypertension was an independent risk factor for developing CKD. Blood pressure, cystatin C and urine microalbumin/creatinine levels were increased, and eGFR values were decreased in patients compared to controls. No significant difference was found between the two sides for rates of involution, UTI, RS, nephrectomy, and additional abnormality.
Cystatin C levels were higher on the right than left sides (p = .033). Conclusion: Children with MCDK are predisposed to renal deterioration even at normal eGFR values. Although cystatin C levels tended to increase in right-sided patients, the involved side seemed to have no significant effect on renal outcome. Hypertension was the main determinant for progression to CKD in MCDK. abstract_id: PUBMED:10429523 Adrenomedullin. A new peptide involved in the regulation of the cardiovascular system. Adrenomedullin was originally discovered in human pheochromocytoma but is now known to be widely distributed in various organs. Adrenomedullin is a potent vasodilator peptide that exerts major effects on cardiovascular function. Plasma adrenomedullin concentration is increased in patients with cardiovascular diseases such as hypertension, congestive heart failure, myocardial infarction, renal failure and other diseases. The present review summarizes the recent advances in adrenomedullin research and demonstrates that adrenomedullin is one of the important vasoactive peptides involved in the physiology and pathophysiology of the cardiovascular system. abstract_id: PUBMED:30780153 Pulmonary hypertension secondary to congenital diaphragmatic hernia: factors and pathways involved in pulmonary vascular remodeling. Congenital diaphragmatic hernia (CDH) is a severe birth defect that is characterized by pulmonary hypoplasia and pulmonary hypertension (PHTN). PHTN secondary to CDH is a result of vascular remodeling, a structural alteration in the pulmonary vessel wall that occurs in the fetus. Factors involved in vascular remodeling have been reported in several studies, but their interactions remain unclear. To help understand PHTN pathophysiology and design novel preventative and treatment strategies, we have conducted a systematic review of the literature and comprehensively analyzed all factors and pathways involved in the pathogenesis of pulmonary vascular remodeling secondary to CDH in the nitrofen model. Moreover, we have linked the dysregulated factors with pathways involved in human CDH. Of the 358 full-text articles screened, 75 studies reported factors that play a critical role in vascular remodeling secondary to CDH. Overall, the impairment of epithelial homeostasis present in pulmonary hypoplasia results in altered signaling to endothelial cells, leading to endothelial dysfunction. This causes an impairment of the crosstalk between endothelial cells and pulmonary artery smooth muscle cells, resulting in increased smooth muscle cell proliferation, resistance to apoptosis, and vasoconstriction, which clinically translate into PHTN. abstract_id: PUBMED:23671712 Comparative proteomics analysis suggests that placental mitochondria are involved in the development of pre-eclampsia. Introduction: Pre-eclampsia (PE), a severe pregnancy-specific disease characterized by the new onset of hypertension, proteinuria, edema, and a series of other systemic disorders, is a state of widespread mitochondrial dysfunction of the placenta. Methods: We compared the morphology of mitochondria in pre-eclamptic and normotensive placentae using electron microscopy. To reveal the systematic protein expression changes of placental mitochondria that might explain the pathogenesis of PE, we performed iTRAQ analysis combined with liquid chromatography-tandem mass spectrometry (LC-MS/MS) on differentially expressed placental mitochondria proteins from 4 normotensive and 4 pre-eclamptic pregnancies.
Bioinformatics analysis was used to find the relevant processes that these differentially expressed proteins were involved in. Three differentially expressed proteins were chosen for confirmation by Western blotting and immunohistochemistry. Results: Morphological data demonstrated degenerative and apoptotic changes in the mitochondria of pre-eclamptic placentae. We found four proteins were upregulated and 22 proteins were downregulated in pre-eclamptic placentae compared with normotensive placentae. Bioinformatics analysis showed that these proteins were involved in many critical processes in the development of pre-eclampsia such as apoptosis, fatty acid oxidation, the respiratory chain, reactive oxygen species generation, the tricarboxylic acid cycle and oxidative stress. Conclusions: This preliminary work provides a better understanding of the proteomic alterations of mitochondria from pre-eclamptic placentae and may aid in our understanding of the importance of mitochondria in the development of pre-eclampsia. abstract_id: PUBMED:30195776 Long Noncoding RNA 00473 Is Involved in Preeclampsia by LSD1 Binding-Regulated TFPI2 Transcription in Trophoblast Cells. Preeclampsia (PE) is a syndrome manifested by high blood pressure that can develop in the latter half of pregnancy; however, the underlying mechanisms are not understood. Recent evidence points to the function of noncoding RNAs (ncRNAs) as novel regulators of the invasion, migration, proliferation, and apoptosis of trophoblasts involved in the development of placental vasculature. Here, we investigated the role of long intergenic ncRNA 00473 (linc00473) in PE and the associated molecular mechanisms. The expression of linc00473 was downregulated in the placenta of patients with severe PE as revealed by qRT-PCR analysis. In vitro, linc00473 knockdown in trophoblast cell lines HTR-8/SVneo, JAR, and JEG3 significantly inhibited cell proliferation and promoted apoptosis, whereas linc00473 overexpression stimulated trophoblast proliferation. The mechanistic insights were provided by RNA-seq and qRT-PCR, which revealed that linc00473 could regulate the transcription of genes relevant to cell growth, migration, and apoptosis. In particular, linc00473 inhibited the expression of tissue factor pathway inhibitor 2 (TFPI2) through binding to lysine-specific demethylase 1 (LSD1). These results indicate that linc00473 could be involved in the pathogenesis and development of PE and may be a candidate biomarker as well as therapeutic target for this disease. abstract_id: PUBMED:38219630 Placentation and complications of ART pregnancy. An update on the different possible etiopathogenic mechanisms involved in the development of obstetric complications. Introduction: The percentage of infertile couples is increasing all over the world, especially in Italy, with a high number of children born in our country through assisted reproductive techniques (ART). However, pregnancies obtained by ART have increased potential obstetrical risks, which could be caused by the development of the fetus-placenta unit, above all by the evolution of placentation. These can be summarized as miscarriage, chromosomal abnormalities, preterm delivery, multiple pregnancy, IUGR, placenta previa, abruptio placentae, preeclampsia and hypertensive disorders, and postpartum hemorrhage.
Methods: The aim of this article is to evaluate the hypothetical mechanisms involved in the placentation process and in the etiopathology of disorders of ART pregnancies, giving an updated overview of the different etiopathogenetic pathways and features. In this scenario, we provide an updated review of the etiopathogenesis of abnormal placentation in ART pregnancies. Results: Several features and etiopathogenetic characteristics might have different impacts, such as advanced maternal age, poor ovarian reserve, oocyte quality and the causes of subfertility themselves, as well as the ART techniques themselves, including hormonal treatments and laboratory techniques such as gamete and embryo culture, cryopreservation versus fresh embryo transfer (ET), and the number of embryos transferred. Conclusion: To further explore the molecular mechanisms behind placentation in ART pregnancies, further studies are necessary to gain a better understanding of the various aspects involved, particularly those which are not fully comprehended. This could prove beneficial to clinicians in both ART care and obstetric care, as it could help to stratify obstetrical risk and decrease complications in women undergoing ART, as well as perinatal disorders in their children. Correct placentation is essential for a successful pregnancy for both mother and baby. abstract_id: PUBMED:36835024 The Oncogenic Theory of Preeclampsia: Is Amniotic Mesenchymal Stem Cells-Derived PLAC1 Involved? The pathomechanisms of preeclampsia (PE), a complication of late pregnancy characterized by hypertension and proteinuria, and due to improper placentation, are not well known. Mesenchymal stem cells derived from the amniotic membrane (AMSCs) may play a role in PE pathogenesis as placental homeostasis regulators. PLACenta-specific protein 1 (PLAC1) is a transmembrane antigen involved in trophoblast proliferation that is found to be associated with cancer progression. We studied PLAC1 in human AMSCs obtained from control subjects (n = 4) and PE patients (n = 7), measuring the levels of mRNA expression (RT-PCR) and secreted protein (ELISA on conditioned medium). Lower levels of PLAC1 mRNA expression were observed in PE AMSCs as compared with Caco2 cells (positive controls), but not in non-PE AMSCs. PLAC1 antigen was detectable in conditioned medium obtained from PE AMSCs, whereas it was undetectable in that obtained from non-PE AMSCs. Our data suggest that abnormal shedding of PLAC1 from AMSC plasma membranes, likely by metalloproteinases, may contribute to trophoblast proliferation, supporting its role in the oncogenic theory of PE. abstract_id: PUBMED:21858206 Bothrops jararaca peptide with anti-hypertensive action normalizes endothelium dysfunction involved in physiopathology of preeclampsia. Preeclampsia, a pregnancy-specific syndrome characterized by hypertension, proteinuria and edema, is a major cause of fetal and maternal morbidity and mortality especially in developing countries. Bj-PRO-10c, a proline-rich peptide isolated from Bothrops jararaca venom, has been reported to have potent anti-hypertensive effects. Recently, we have shown that Bj-PRO-10c-induced anti-hypertensive actions involved NO production in spontaneously hypertensive rats. Using in vitro studies we now show that Bj-PRO-10c was able to increase NO production in human umbilical vein endothelial cells from hypertensive pregnant women (HUVEC-PE) to levels observed in HUVEC of normotensive women.
Moreover, in the presence of the peptide, eNOS expression as well as argininosuccinate synthase activity, the key rate-limiting enzyme of the citrulline-NO cycle, were enhanced. In addition, excessive superoxide production due to NO deficiency, one of the major deleterious effects of the disease, was inhibited by Bj-PRO-10c. Bj-PRO-10c induced intracellular calcium fluxes in both, HUVEC-PE and HUVEC, which, however, led to activation of eNOS expression only in HUVEC-PE. Since Bj-PRO-10c promoted biological effects in HUVEC from patients suffering from the disorder and not in normotensive pregnant women, we hypothesize that Bj-PRO-10c induces its anti-hypertensive effect in mothers with preeclampsia. Such properties may initiate the development of novel therapeutics for treating preeclampsia. abstract_id: PUBMED:25142008 Effect of a low-protein diet during pregnancy on expression of genes involved in cardiac hypertrophy in fetal and adult mouse offspring. Gene markers for cardiomyocyte growth, proliferation and remodeling were examined in mouse fetuses and adult male offspring exposed to maternal low-protein (LP) diet during pregnancy. Whole heart volume, measured by magnetic resonance imaging, was smaller in day 15 LP fetuses v. those from chow-fed dams (C), whereas heart volume was greater in adult LP v. C offspring. These LP offspring were hypertensive and had larger cardiomyocytes v. C animals. The mRNA levels of cyclin G1, a marker for cell growth, were lower in LP fetal hearts v. C hearts, but similar in the left ventricle of adult LP and C offspring. Opposite trends were found in brain natriuretic peptide levels (a marker of cardiac hypertrophy). Thus, maternal LP during pregnancy results in smaller fetal hearts and is accompanied by changes in expression of genes involved in cardiomyocyte growth, which are associated with cardiac hypertrophy and hypertension in adulthood. Answer: The provided abstracts do not directly address the question of whether seasonality is involved in the relationship between hypertension and pregnancy. None of the abstracts mention seasonality or weather as factors in the development or outcomes of hypertension during pregnancy. Instead, they focus on various other aspects related to hypertension in the context of pregnancy and other health conditions: - PUBMED:28300655 discusses the prevalence of health conditions, including hypertension, among justice-involved adolescents and notes racial differences in the prevalence of these conditions. - PUBMED:38158777 investigates the outcomes of children with multicystic dysplastic kidney (MCDK) and whether the involved side affects the outcome, mentioning hypertension as a complication. - PUBMED:10429523 reviews the role of adrenomedullin, a vasodilator peptide, in the regulation of the cardiovascular system and its increased concentration in patients with cardiovascular diseases such as hypertension. - PUBMED:30780153 focuses on pulmonary hypertension secondary to congenital diaphragmatic hernia and the factors involved in pulmonary vascular remodeling. - PUBMED:23671712 suggests that placental mitochondria are involved in the development of pre-eclampsia, a condition characterized by hypertension during pregnancy. - PUBMED:30195776 explores the role of a long noncoding RNA in preeclampsia, a hypertensive disorder of pregnancy. 
- PUBMED:38219630 provides an update on the etiopathogenic mechanisms involved in the development of obstetric complications in pregnancies achieved through assisted reproductive techniques (ART), including hypertensive disorders. - PUBMED:36835024 investigates the role of PLAC1 in preeclampsia, a pregnancy complication involving hypertension. - PUBMED:21858206 discusses a peptide from Bothrops jararaca venom with anti-hypertensive action and its potential role in normalizing endothelium dysfunction in preeclampsia. - PUBMED:25142008 examines the effect of a low-protein diet during pregnancy on gene expression related to cardiac hypertrophy and hypertension in offspring. In summary, while these abstracts provide valuable information on hypertension in various contexts, including pregnancy, they do not provide evidence or discussion about the involvement of seasonality or weather in the development or management of hypertension during pregnancy.
Instruction: Is there a role for peptide receptor radionuclide therapy in medullary thyroid cancer? Abstracts: abstract_id: PUBMED:36198028 Peptide Receptor Radionuclide Therapy. The concept of using a targeting molecule labeled with a diagnostic radionuclide for positron emission tomography or single photon emission computed tomography imaging, with the potential to demonstrate that tumoricidal radiation can be delivered to tumoral sites by administration of the same or a similar targeting molecule labeled with a therapeutic radionuclide, is termed "theranostics." Peptide receptor radionuclide therapy (PRRT) with radiolabeled somatostatin analogs (SSAs) is a well-established second/third-line theranostic treatment for somatostatin receptor-positive well-differentiated (neuro-)endocrine neoplasms (NENs). PRRT with 177Lu-DOTATATE was approved by the regulatory authorities in 2017 and 2018 for selected patients with low-grade well-differentiated gastroenteropancreatic (GEP) NENs. It improves progression-free survival as well as quality of life of GEP NEN patients. Favorable symptomatic and biochemical responses using PRRT with 177Lu-DOTATATE have also been reported in patients with functioning metastatic GEP NENs like metastatic insulinomas, Verner-Morrison syndromes (VIPomas), glucagonomas, and gastrinomas and patients with carcinoid syndrome. This therapy might also become a valuable therapeutic option for inoperable low-grade bronchopulmonary NENs, inoperable or progressive pheochromocytomas and paragangliomas, and medullary thyroid carcinomas. First-line PRRT with 177Lu-DOTATATE and combinations of this therapy with cytotoxic drugs are currently under investigation. New radiolabeled somatostatin receptor ligands include SSAs coupled with alpha radiation-emitting radionuclides and somatostatin receptor antagonists coupled with radionuclides. abstract_id: PUBMED:33138305 Advances in the Management of Medullary Thyroid Carcinoma: Focus on Peptide Receptor Radionuclide Therapy. Effective treatment options in advanced/progressive/metastatic medullary thyroid carcinoma (MTC) are currently limited. As in other neuroendocrine neoplasms (NENs), peptide receptor radionuclide therapy (PRRT) has been used as a therapeutic option in MTC. To date, however, there are no published reviews dealing with PRRT approaches. We performed an in-depth narrative review on the studies published in this field and collected information on registered clinical trials related to this topic. We identified 19 published studies, collectively involving more than 200 patients with MTC, and four registered clinical trials. Most cases of MTC were treated with PRRT with somatostatin analogues (SSAs) radiolabelled with yttrium-90 (90Y) and lutetium-177 (177Lu). These radiopharmaceuticals show efficacy in the treatment of patients with MTC, with a favourable radiological response (stable disease, partial response or complete response) in more than 60% of cases, coupled with low toxicity. As MTC specifically also expresses cholecystokinin receptors (CCK2Rs), PRRT with this target has also been tried, and some randomised trials are ongoing. Overall, PRRT seems to have an effective role and might be considered in the therapeutic strategy of advanced/progressive/metastatic MTC. abstract_id: PUBMED:25471282 Can peptide receptor radionuclide therapy (PRRT) be useful in radioiodine-refractory differentiated thyroid cancer?
We report on a 70-year-old man affected by radioiodine-refractory differentiated thyroid cancer (DTC) in whom metastases were treated by peptide receptor radionuclide therapy (PRRT). Seven years earlier, the patient had undergone total thyroidectomy. Pathological examination was conclusive for DTC. The patient underwent several radioiodine treatments (RaIT). The last post-therapy whole body scan (pT-WBS) performed five days after RaIT did not show abnormal radioiodine uptake, but the serum thyroglobulin (Tg) value was high in the absence of thyroglobulin antibodies (Tg-Ab). In-111 DTPA-pentetreotide scintigraphy showed several lung lesions with high somatostatin receptor density. The patient underwent PRRT using Lu-177 DOTATOC. The pT-WBS confirmed the metastases already demonstrated by In-111 DTPA-pentetreotide but negative for radioiodine uptake. abstract_id: PUBMED:15153440 Peptide receptor radionuclide therapy. On their plasma membranes, cells express receptor proteins with high affinity for regulatory peptides, such as somatostatin. Changes in the density of these receptors during disease, for example, overexpression in many tumors, provide the basis for new imaging methods. The first peptide analogues successfully applied for visualization of receptor-positive tumors were radiolabeled somatostatin analogues. The next step was to label these analogues with therapeutic radionuclides for peptide receptor radionuclide therapy (PRRT). Results from preclinical and clinical multicenter studies already have shown an effective therapeutic response when using radiolabeled somatostatin analogues to treat receptor-positive tumors. Infusion of positively charged amino acids reduces kidney uptake, enlarging the therapeutic window. For PRRT of CCK-B receptor-positive tumors, such as medullary thyroid carcinoma, radiolabeled minigastrin analogues currently are being successfully applied. The combination of different therapy modalities holds interest as a means of improving the clinical therapeutic effects of radiolabeled peptides. The combination of different radionuclides, such as (177)Lu- and (90)Y-labeled somatostatin analogues, to reach a wider tumor region of high curability, has been described. A variety of other peptide-based radioligands, such as bombesin and NPY(Y(1)) analogues, receptors for which are expressed on common cancers such as prostate and breast cancer, are currently under development and in different phases of (pre)clinical investigation. Multireceptor tumor targeting using the combination of bombesin and NPY(Y(1)) analogues is promising for scintigraphy and PRRT of breast carcinomas and their lymph node metastases. abstract_id: PUBMED:30953466 Peptide receptor radionuclide therapy in patients with medullary thyroid carcinoma: predictors and pitfalls. Background: For progressive metastatic medullary thyroid carcinoma (MTC), the available treatment options with tyrosine kinase inhibitors result in grade 3-4 adverse events in a large number of patients. Peptide Receptor Radionuclide Therapy (PRRT), which has also been suggested to be a useful treatment for MTC, is usually well tolerated, but evidence on its effectiveness is very limited. Methods: Retrospective evaluation of treatment effects of PRRT in a highly selected group of MTC patients, with progressive disease or refractory symptoms. In addition, a retrospective evaluation of uptake on historical 111In-DTPA-octreotide scans was performed in patients with detectable tumor size > 1 cm.
Results: Over the last 17 years, 10 MTC patients were treated with PRRT. Four out of 10 patients showed stable disease at first follow-up (8 months after start of therapy) whereas the other 6 were progressive. Patients with stable disease were characterized by a combination of both a high uptake on 111In-DTPA-octreotide scan (uptake grade ≥ 3) and a positive somatostatin receptor type 2a (SSTR2a) expression of the tumor by immunohistochemistry. Retrospective evaluation of historical 111In-DTPA-octreotide scans of 35 non-treated MTC patients revealed low uptake (uptake grade 1) in the vast majority of patients 31/35 (89%) with intermediate uptake (uptake grade 2) in the remaining 4/35 (11%). Conclusions: PRRT using 177Lu-octreotate could be considered as a treatment in those patients with high uptake on 111In-DTPA-octreotide scan (uptake grade 3) and positive SSTR2a expression in tumor histology. Since this high uptake was present in a very limited number of patients, this treatment is only suitable in a selected group of MTC patients. abstract_id: PUBMED:17653893 Peptide Receptor Radionuclide Therapy with radiolabelled somatostatin analogues in patients with somatostatin receptor positive tumours. Peptide Receptor Radionuclide Therapy (PRRT) with radiolabelled somatostatin analogues is a promising treatment option for patients with inoperable or metastasised neuroendocrine tumours. Symptomatic improvement may occur with all of the various (111)In, (90)Y, or (177)Lu-labelled somatostatin analogues that have been used. Since tumour size reduction was seldom achieved with (111)Indium labelled somatostatin analogues, radiolabelled somatostatin analogues with beta-emitting isotopes like (90)Y and (177)Lu were developed. Reported anti-tumour effects of [(90)Y-DOTA(0),Tyr(3)]octreotide vary considerably between various studies: Tumour regression of 50% or more was achieved in 9 to 33% (mean 22%). With [(177)Lu-DOTA(0),Tyr(3)]octreotate treatments, tumour regression of 50% or more was achieved in 28% of patients and tumour regression of 25 to 50% in 19% of patients, stable disease was demonstrated in 35% and progressive disease in 18%. Predictive factors for tumour remission were high tumour uptake on somatostatin receptor scintigraphy and limited amount of liver metastases. The side-effects of PRRT are few and mostly mild, certainly when using renal protective agents: Serious side-effects like myelodysplastic syndrome or renal failure are rare. The median duration of the therapy response for [(90)Y-DOTA(0),Tyr(3)]octreotide and [(177)Lu-DOTA(0),Tyr(3)]octreotate is 30 months and more than 36 months respectively. Lastly, quality of life improves significantly after treatment with [(177)Lu-DOTA(0),Tyr(3)]octreotate. These data compare favourably with the limited number of alternative treatment approaches, like chemotherapy. If more widespread use of PRRT is possible, such therapy might become the therapy of first choice in patients with metastasised or inoperable gastroenteropancreatic neuroendocrine tumours. Also the role in somatostatin receptor expressing non-GEP tumours, like metastasised paraganglioma/pheochromocytoma and non-radioiodine-avid differentiated thyroid carcinoma might become more important. abstract_id: PUBMED:24380044 Peptide receptor radionuclide therapy of treatment-refractory metastatic thyroid cancer using (90)Yttrium and (177)Lutetium labeled somatostatin analogs: toxicity, response and survival analysis. 
The overall survival rate of non-radioiodine avid differentiated (follicular, papillary, medullary) thyroid carcinoma is significantly lower than for patients with iodine-avid lesions. The purpose of this study was to evaluate toxicity and efficacy (response and survival) of peptide receptor radionuclide therapy (PRRT) in non-radioiodine-avid or radioiodine therapy refractory thyroid cancer patients. Sixteen non-radioiodine-avid and/or radioiodine therapy refractory thyroid cancer patients, including follicular thyroid carcinoma (n = 4), medullary thyroid carcinoma (n = 8), Hürthle cell thyroid carcinoma (n = 3), and mixed carcinoma (n = 1) were treated with PRRT by using (90)Yttrium and/or (177)Lutetium labeled somatostatin analogs. (68)Ga somatostatin receptor PET/CT was used to determine the somatostatin receptor density in the residual tumor/metastatic lesions and to assess the treatment response. Hematological profiles and renal function were periodically examined after treatment. By using fractionated regimen, only mild, reversible hematological toxicity (grade 1) or nephrotoxicity (grade 1) were seen. Response assessment (using EORTC criteria) was performed in 11 patients treated with 2 or more (maximum 5) cycles of PRRT and showed disease stabilization in 4 (36.4%) patients. Two patients (18.2%) showed partial remission, in the remaining 5 patients (45.5%) disease remained progressive. Kaplan-Meier analysis resulted in a mean survival after the first PRRT of 4.2 years (95% CI, range 2.9-5.5) and median progression free survival of 25 months (inter-quartiles: 12-43). In non-radioiodine-avid/radioiodine therapy refractory thyroid cancer patients, PRRT is a promising therapeutic option with minimal toxicity, good response rate and excellent survival benefits. abstract_id: PUBMED:34379772 Metastatic Medullary Thyroid Cancer: The Role of 68Gallium-DOTA-Somatostatin Analogue PET/CT and Peptide Receptor Radionuclide Therapy. Context: Metastatic medullary thyroid cancer (MTC) is a rare malignancy with minimal treatment options. Many, but not all, MTCs express somatostatin receptors. Objective: Our aim was to explore the role of 68Ga-DOTA-somatostatin analogue (SSA) positron emission tomography (PET)/computed tomography (CT) in patients with metastatic MTC and to determine their eligibility for peptide receptor radionuclide therapy (PRRT). Methods: We retrospectively identified patients with metastatic MTC who had 68Ga-DOTA-SSA PET/CT at 5 centers. We collected characteristics on contrast-enhanced CT, 68Ga-DOTA-SSA and 18F-FDG PET/CT. The efficacy of PRRT was explored in a subgroup of patients. Kaplan-Meier analysis was used to estimate time to treatment failure (TTF) and overall survival (OS). Results: Seventy-one patients were included (10 local recurrence, 61 distant disease). Of the patients with distant disease, 16 (26%) had ≥50% of disease sites with tracer avidity greater than background liver, including 10 (10/61, 16%) with >90%. In 19 patients with contemporaneous contrast-enhanced CT, no disease regions were independently identified on 68Ga-DOTA-SSA PET/CT. Thirty-five patients had an 18F-FDG PET/CT, with 18F-FDG positive/68Ga-DOTA-SSA negative metastases identified in 15 (43%). Twenty-one patients had PRRT with a median TTF of 14 months (95% CI 8-25) and a median OS of 63 months (95% CI 21-not reached). Of the entire cohort, the median OS was 323 months (95% CI 152-not reached).
Predictors of poorer OS included a short calcitonin doubling-time (≤24 months), strong 18F-FDG avidity, and age ≥60 years. Conclusions: The prevalence of high tumor avidity on 68Ga-DOTA-SSA PET/CT is low in the setting of metastatic MTC; nevertheless, PRRT may still be a viable treatment option in select patients. abstract_id: PUBMED:33354174 Clinical efficacy of 177Lu-DOTATATE peptide receptor radionuclide therapy in thyroglobulin-elevated negative iodine scintigraphy: A "not-so-promising" result compared to GEP-NETs. This study aimed at assessing the performance of 177Lu-DOTATATE-based peptide receptor radionuclide therapy (PRRT) in de-differentiated thyroid carcinoma thyroglobulin-elevated negative iodine scintigraphy (TENIS) in terms of clinical efficacy and outcome. This is a retrospective analysis of patients of TENIS who had undergone PRRT in a tertiary care setting. The selected patients were analyzed for the following parameters: (i) the patient characteristics, (ii) the metastatic burden, (iii) study of PRRT cycles and activity, (iv) response assessment (undertaken by three-parameter scale: symptomatic including Karnofsky/Lansky Performance scoring, biochemical and scan features) employing predefined criteria (detailed in methods), and (v) Grade III/IV hematological or renal toxicity. According to the qualitative uptake of the tracer in somatostatin receptor (SSTR)-based imaging (with either 99mTc-HYNIC-TOC/68Ga-DOTATATE), the lesions were divided into the following four categories: Grade 0: no uptake, Grade I: uptake less than the liver but more than background, Grade II: uptake equal to the liver, and Grade III: uptake more than the liver. A total of eight patients of TENIS who had undergone 177Lu-DOTATATE were retrieved. Among those eight patients, the follow-up duration (from the time of the 1st PRRT cycle) at the time of analysis ranged from 7 to 52 months, with an average of 34 months. At the time of assessment, two (25%) out of the eight patients had expired due to extensive metastatic disease and 6 (75%) were alive. On symptomatic response, complete disappearance of symptoms was found in one patient (12.5%), whereas three patients (37.5%) showed partial improvement in symptoms after PRRT and four patients (50%) showed worsening of and appearance of new symptoms. On biochemical response, reduction in serum thyroglobulin (TG) was found in three patients (37.5%) after PRRT and increase in serum TG was noticed in the rest of five patients (62.5%). Imaging response showed stable scan in two patients (25%) and progressive disease (PD) in six patients (75%), following a progression-free survival ranging from 7 to 16 months, when they were considered for tyrosine kinase inhibitors in view of PD. There was no obvious evidence of Grade III/IV hematological or renal toxicity in any of the patients, suggesting that the therapy in this group of patients is well tolerated. In addition, we also observed that most patients of TENIS showed low-grade uptake on SSTR-based imaging (Grade II as per our semi-quantitative scale), with only one patient showing Grade III uptake. 
177Lu-DOTATATE PRRT demonstrates modest response in SSTR-positive metastatic TENIS patients: (i) low SSTR expression and tracer avidity, and correspondingly lesser degree of targeting by the therapeutic agent and (ii) the fact that most of the TENIS patients usually have fluorodeoxyglucose (FDG)-avid disease, where high FDG avidity is commensurate with aggressive biology and could be the reason for the relatively less response documented. Larger prospective data need to be accrued in this domain in view of its well tolerability and nonavailability of better efficacious and less toxic treatment at present; however, this needs to be tried in receptor-positive cases with high-grade uptake (Score III/IV) for a definitive conclusion. abstract_id: PUBMED:26384593 Role of Fluorodeoxyglucose PET/Computed Tomography in Targeted Radionuclide Therapy for Endocrine Malignancies. This review provides practical guidance for clinicians involved in the management of endocrine malignancies, including endocrinologists, medical oncologists, surgeons and nuclear medicine specialists regarding the indications and use of 2-fluoro-2-deoxy-d-glucose F-18 (FDG) PET/computed tomography (CT), particularly with respect to targeted radionuclide therapy. Key principles of FDG PET/CT for radionuclide therapy are explored in detail using gastroenteropancreatic neuroendocrine tumors as a prototype endocrine malignancy. The relevant literature is reviewed, and practical application in this new and emerging field is highlighted with the use of case examples. Answer: Yes, there is a role for peptide receptor radionuclide therapy (PRRT) in medullary thyroid cancer (MTC). PRRT has been used as a therapeutic option in advanced, progressive, or metastatic MTC, which is a neuroendocrine neoplasm (NEN) that can express somatostatin receptors (SSTRs) and, in some cases, cholecystokinin receptors (CCK2Rs) (PUBMED:33138305). The majority of MTC cases have been treated with PRRT using somatostatin analogues (SSAs) radiolabelled with 90 yttrium (90Y) and 177 lutetium (177Lu), showing efficacy with a favorable radiological response in more than 60% of cases and low toxicity (PUBMED:33138305). Additionally, PRRT with radiolabelled somatostatin analogues has been suggested as a useful treatment for MTC, usually well tolerated, although evidence of its effectiveness is limited (PUBMED:30953466). In a retrospective evaluation of PRRT in MTC patients, stable disease was observed in patients with high uptake on 111In-DTPA-octreotide scan and positive SSTR2a expression by immunohistochemistry (PUBMED:30953466). Furthermore, PRRT with radiolabelled minigastrin analogues targeting CCK-B receptors, which are also expressed in MTC, has been applied and is currently under investigation in randomized trials (PUBMED:33138305). However, the prevalence of high tumor avidity on 68Ga-DOTA-somatostatin analogue PET/CT is low in metastatic MTC, but PRRT may still be a viable treatment option for select patients (PUBMED:34379772). The clinical efficacy of 177Lu-DOTATATE PRRT in thyroglobulin-elevated negative iodine scintigraphy (TENIS) patients, which includes some MTC cases, has shown modest responses, suggesting that PRRT should be considered in receptor-positive cases with high-grade uptake (PUBMED:33354174).
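The time-to-treatment-failure and overall-survival figures quoted in this section (for example, the median TTF of 14 months and median OS of 63 months after PRRT in PUBMED:34379772, and the survival estimates in PUBMED:24380044) are Kaplan-Meier estimates. As a generic reminder of the method, not a description of the cited studies' own analysis, the product-limit estimator is

\[ \hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right), \]

where the \( t_i \) are the observed event times, \( d_i \) is the number of events (deaths or treatment failures) at \( t_i \), and \( n_i \) is the number of patients still at risk just before \( t_i \). The reported median OS or TTF is the smallest \( t \) at which \( \hat{S}(t) \) falls to 0.5 or below; censored patients count toward \( n_i \) up to their censoring time but never toward \( d_i \), which is why some medians are reported as "not reached."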
Instruction: Is lateral pin fixation for displaced supracondylar fractures of the humerus better than crossed pins in children? Abstracts: abstract_id: PUBMED:23653099 Is lateral pin fixation for displaced supracondylar fractures of the humerus better than crossed pins in children? Background: Closed reduction and percutaneous pin fixation is considered standard management for displaced supracondylar fractures of the humerus in children. However, controversy exists regarding whether to use an isolated lateral entry or a crossed medial and lateral pinning technique. Questions/purposes: We performed a meta-analysis of randomized controlled trials (RCTs) to compare (1) the risk of iatrogenic ulnar nerve injury caused by pin fixation, (2) the quality of fracture reduction in terms of the radiographic outcomes, and (3) function in terms of criteria of Flynn et al. and elbow ROM, and other surgical complications caused by pin fixation. Methods: We searched PubMed, Embase, the Cochrane Library, and other unpublished studies without language restriction. Seven RCTs involving 521 patients were included. Two authors independently assessed the methodologic quality of the included studies with use of the Detsky score. The median Detsky quality score of the included trials was 15.7 points. Dichotomous variables were presented as risk ratios (RRs) or risk difference with 95% confidence intervals (CIs) and continuous data were measured as mean differences with 95% CI. Statistical heterogeneity between studies was formally tested with standard chi-square test and I(2) statistic. For the primary objective, a funnel plot of the primary end point and Egger's test were performed to detect publication bias. Results: The pooled RR suggested that iatrogenic ulnar nerve injury was higher with the crossed pinning technique than with the lateral entry technique (RR, 0.30; 95% CI, 0.10-0.89). No publication bias was further detected. There were no statistical differences in radiographic outcomes, function, and other surgical complications. No significant heterogeneity was found in these pooled results. Conclusions: We conclude that the crossed pinning fixation is more at risk for iatrogenic ulnar nerve injury than the lateral pinning technique. Therefore, we recommend the lateral pinning technique for supracondylar fractures of the humerus in children. abstract_id: PUBMED:7560029 Clinical evaluation of crossed-pin versus lateral-pin fixation in displaced supracondylar humerus fractures. The radiographs and patient charts of 47 children treated with closed reduction and percutaneous pin fixation of displaced supracondylar humerus fractures were reviewed. Twenty-seven fractures were fixed with crossed medial and lateral pins. Twenty fractures were treated with two parallel laterally placed pins. Baumann's angle on the anteroposterior elbow film and the humerocapitellar angle on the lateral elbow film were independently measured by the three authors on initial postoperative films and on films taken at the time of pin removal. No statistically significant differences regarding maintenance of reduction were found when comparing the two fixation groups. There were two complications in the medial pin group (one cubitus varus and one ulnar nerve injury) and none in the lateral-pin group. We conclude that crossed-pin fixation offers no clinically significant advantage over two laterally placed pins in the treatment of supracondylar humerus fractures. 
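For context on the pooled statistics reported in the meta-analysis above (PUBMED:23653099), the formulas below are the standard textbook definitions of a per-trial risk ratio, its confidence interval, and the I^2 heterogeneity statistic; they are shown only to make the reported RR, 95% CI, and I^2 values easier to interpret and are not taken from that study's analysis.

\[ RR = \frac{a/(a+b)}{c/(c+d)}, \qquad SE_{\ln RR} = \sqrt{\frac{1}{a} - \frac{1}{a+b} + \frac{1}{c} - \frac{1}{c+d}}, \qquad 95\%\ CI = \exp\!\left( \ln RR \pm 1.96 \cdot SE_{\ln RR} \right) \]

\[ I^2 = \max\!\left( 0, \frac{Q - df}{Q} \right) \times 100\% \]

Here a and c are the numbers of patients with the outcome (e.g., iatrogenic ulnar nerve injury) in the two pinning arms, b and d the numbers without it, Q is Cochran's heterogeneity statistic, and df is the number of pooled trials minus one. A pooled RR whose 95% CI excludes 1.0, such as the 0.30 (0.10-0.89) reported above, indicates a statistically significant difference between the two techniques.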
abstract_id: PUBMED:29922066 Medial comminution as a risk factor for the stability after lateral-only pin fixation for pediatric supracondylar humerus fracture: an audit. Background And Purpose: Closed reduction and lateral-only pin fixation is one of the common treatment methods for displaced supracondylar fracture in children. However, several risk factors related to the stability have been reported. The aim of this study was to evaluate the medial comminution as a potential risk factor related to the stability after appropriate lateral-only pin fixation for Gartland type III supracondylar humerus fracture. Methods: Sixty-seven patients with type III supracondylar fractures who were under the age of 12 years were included. Immediate postoperative and final Baumann and humerocapitellar angles were measured. Pin separation at fracture site was evaluated to estimate the proper pin placement. Presence of the medial comminution was recorded when two pediatric orthopedic surgeons agreed to the loss of cortical contact at the medial column by the small butterfly fragment or comminuted fracture fragments. Factors including age, sex, body mass index, pin number, pin separation at fracture site, and medial comminution were analyzed. Results: Medial comminution was noted in 20 patients (29.8%). The average pin separation at fracture site was significantly decreased in patients with medial comminution compared to patients without medial comminution (P=0.017). A presence of medial comminution was associated with a 4.151-fold increase in the log odds for the Baumann angle changes of more than average difference between immediate postoperative and final follow-up angle (P=0.020). Conclusion: When lateral-only pin fixation is applied for Gartland type III supracondylar humerus fracture in children, the medial comminution may be a risk factor for the stability because of the narrow pin separation at fracture site. We recommend additional medial pin fixation for supracondylar humerus fracture with medial comminution. abstract_id: PUBMED:21966310 The role of lateral-entry Steinmann pins in the treatment of pediatric supracondylar humerus fractures. Purpose: Loss of pin fixation in supracondylar fractures can occur with failure to achieve bicortical fixation. Bicortical fixation may be challenging for those pins that attempt to penetrate the diaphyseal cortex, where the bone is thick. Lateral-entry Steinmann pins may allow for better penetration through cortical bone because they are more rigid than typical Kirschner wires. Methods: A retrospective review of 16 children with type III supracondylar fractures treated by a single surgeon using Steinmann pins was undertaken. The average age at presentation was 6 years. Following closed reduction, all fractures were maintained with three lateral-entry pins. At least one Steinmann pin was placed in the lateral column of the distal humerus in each pin construct. Results: Follow-up radiographs indicated a mean Baumann's angle of 72.9° (range 64°-82°). There was no statistically significant change in the Baumann's angle or axial alignment at final follow-up. All but one fracture healed in an anatomic position on the lateral view. Conclusions: Steinmann pins placed through a lateral-entry point are effective in controlling the reduction of high-grade supracondylar fractures. The fixation is excellent and avoids potential ulnar nerve complications of medial entry. 
abstract_id: PUBMED:26972812 Increased pin diameter improves torsional stability in supracondylar humerus fractures: an experimental study. Background: Pediatric supracondylar humerus fractures are the most common elbow fractures seen in children, and account for 16 % of all pediatric fractures. Closed reduction and percutaneous pin fixation is the current treatment technique of choice for displaced supracondylar fractures of the distal humerus in children. The purpose of this study was to determine whether pin diameter affects the torsional strength of supracondylar humerus fractures treated by closed reduction and pin fixation. Methods: Pediatric sawbone humeri simulating a Gartland type III fracture were utilized. Four different pin configurations were compared. Specimens were subjected to a torsional load producing internal rotation of the distal fragment. The stability provided by 1.25- and 1.6-mm pins was compared. Results: The amount of torque required to produce 15° and 25° of rotation was greater using larger diameter pins in all models tested. The two lateral and one medial large pin (1.6 mm) configuration required the highest amount of torque to produce both 15° and 25° of rotation. Conclusions: In a synthetic pediatric humerus model of supracondylar humerus fractures, larger diameter pins (1.6 mm) provided increased stability compared with small diameter pins (1.25 mm). Fixation using larger diameter pins created a stronger construct and improved the strength of fixation. abstract_id: PUBMED:10906858 Crossed pin fixation of displaced supracondylar humerus fractures in children. The results of 42 children with displaced supracondylar fractures of the humerus (six Gartland Type II and 36 Gartland Type III) treated with crossed pin fixation are reported. In 37 fractures (88%) the teardrop configuration was restored successfully. All fractures healed without loss of reduction. No patients had iatrogenic ulnar nerve injury. Crossed-pin fixation of supracondylar humeral fractures is a safe and effective way of maintaining skeletal stability in children. Careful technique safeguards against ulnar nerve injury. abstract_id: PUBMED:38249197 How Kirschner Wires Crossing Each Other at the Fracture Site Affect Radiological and Clinical Results in Children With Gartland Type 3 Supracondylar Humerus Fractures? Background In this study, we compared two groups of children with Gartland Type 3 supracondylar humerus fractures with respect to the crossing point of Kirschner wires (K-wires) in terms of radiological and clinical results after closed reduction and fixation with the crossed-pin technique. We hypothesized that even if medial and lateral pins cross each other at the fracture line, satisfactory radiological and clinical results would be achieved with the crossed-pin technique. Methodology A total of 59 patients with Gartland extension Type 3 supracondylar humerus fractures who underwent closed reduction and percutaneous crossed-pin fixation were included in the study. K-wires were crossing each other proximal to the fracture site in the proximal crossing group and at the fracture level in the fracture site crossing group. Loss of reduction, Baumann angle, shaft condyle angle, range of motion, and carrying angle were compared between the two groups. Results There were 43 males and 16 females in this study, with a mean age of 5.3±2.4 years. The average follow-up duration was 21.9 ± 5.2 weeks. 
In terms of loss of reduction in the coronal and sagittal planes, there was no statistical difference between the two groups. When the Baumann angle and shaft condylar angle of both groups were analyzed, no statistically significant differences were found at both early postoperative examination and final follow-up. Conclusions Although the crossing point of K-wires has been shown to be an important factor in the protection of reduction in biomechanical studies, it was not a significant factor for loss of reduction in this study. Factors except for the crossing point of K-wires may play a more important role in the outcomes of crossed-pin fixation. abstract_id: PUBMED:16765353 Lateral versus crossed wire fixation for displaced extension supracondylar humeral fractures in children. Reduction and percutaneous pin fixation is widely accepted treatment for displaced humeral supracondylar fractures in children, but the best pin configuration is still debatable. This study examined the outcome for crossed and lateral pins placement in type IIB and III supracondylar humeral fractures. Clinical notes and radiographs of 131 children with an average age of 6 years were retrospectively reviewed. Lateral pins fixation was used in 66 children and crossed wires in 65. The groups were similar with regard to gender, age, follow-up, severity of displacement and number of closed/open reductions. There was no statistical difference between the two groups either clinically or radiologically in the quality of outcome. However, postoperative ulnar nerve injuries occurred in 6% of patients treated with crossed wire fixation, whilst none of the group with pins inserted laterally suffered this complication. We recommend fixation of displaced humeral supracondylar fractures with two or three lateral pins inserted parallel or in a divergent fashion. This method of fixation gives similar results to crossed wires but prevents iatrogenic ulnar nerve injuries. abstract_id: PUBMED:31706361 A two-stage retrospective analysis to determine the effect of entry point on higher exit of proximal pins in lateral pinning of supracondylar humerus fracture in children. Background: Kirschner wire fixation remains to be the mainstream treatment modality in unstable or displaced supracondylar humerus fracture in children, with divergent lateral pins being the most preferred due to their sufficient stability and decreased risk of ulnar nerve injury. However, the entry point at which the proximal lateral pin can be inserted to achieve a more proximal exit and maximum divergence has not been reported. This study retrospectively analyzed the characteristics and factors influencing the entry and exit points of the proximal lateral pins. Methods: The study was divided into two stages. In stage one, the entry and exit points of the proximal pins of lateral pinning configuration were analyzed from intra-operative radiographs of children treated for extension-type supracondylar humerus fractures. The coronal and sagittal pin angles formed by the proximal pins were also measured. Using the findings of stage one, we intentionally tried to achieve a more proximal exit with the proximal pins in stage two. Comparisons between groups of patients treated by random and intentional pinnings were done statistically. Results: In the first stage, 47 (29.2%) of the 161 proximal pins exited above the metaphyseal-diaphyseal junction (MDJ) region. Of these, 85.1% entered from lateral and posterior to the ossific nucleus of the capitellum (ONC). 
The pin angles averaged 58.4° and 90.5° in the coronal and sagittal planes respectively. In the second stage, 47 (65.3%) proximal pins in the intended group exited above the MDJ region, while only 32 (36%) in the random group exited above the MDJ region. Conclusion: While aiming at the upper border of the distal MDJ during pinning, lateral pins can easily achieve a higher, proximal exit above the MDJ if inserted from lateral and posterior to the ONC and parallel to the humeral shaft in the sagittal plane. Higher exit can also be easily achieved in younger patients and patients fixated with smaller diameter pins. abstract_id: PUBMED:16531696 Closed reduction and percutaneous lateral pin fixation in the treatment of displaced supracondylar fractures of the humerus in children. Objectives: We evaluated the results of closed reduction and percutaneous lateral-pin fixation in the treatment of displaced supracondylar fractures of the humerus in children. Methods: Thirty-four children (25 boys, 9 girls; mean age 7.2 years; range 3 to 13 years) were treated for displaced supracondylar fractures of the humerus. Five patients had Gartland type 2, and 29 patients had type 3 fractures. After closed reduction, lateral-pin fixation was performed with two parallel (n=11) or crossed (n=23) K-wires. Three K-wires were used in five patients. The mean duration of fixation was 3.8 weeks (range 3 to 6 weeks). For comparison with the normal side, the Baumann and carrying angles were measured on anteroposterior, and the humerocapitellar angle on lateral radiographs. The range of motion of the elbow was assessed clinically. The results were evaluated according to the criteria of Flynn et al. after a mean follow-up of 22.6 months (range 10 to 48 months). Results: Union was achieved in all the patients. Complications such as pin-tract infections, myositis ossificans, compartment syndrome, or nerve injuries did not occur. According to the criteria of Flynn et al., functional and radiographic results were satisfactory in all the patients (100%) and in 33 patients (97.1%), respectively. One patient developed cubitus varus of 14 degrees. No significant differences were found between the mean Baumann, humerocapitellar, and carrying angles of the normal and affected sides (p>0.05). Conclusion: Closed reduction and percutaneous lateral pinning proved an efficient, reliable, and safe method in the treatment of displaced supracondylar fractures of the humerus in children. Answer: Based on the provided abstracts, lateral pin fixation appears to be a preferred method over crossed pins for treating displaced supracondylar fractures of the humerus in children. The meta-analysis of randomized controlled trials (RCTs) in abstract PUBMED:23653099 indicates that lateral pinning technique has a lower risk of iatrogenic ulnar nerve injury compared to the crossed pinning technique. The study found no statistical differences in radiographic outcomes, function, and other surgical complications between the two methods, leading to the recommendation of the lateral pinning technique for these fractures in children (PUBMED:23653099). Supporting this conclusion, the clinical evaluation in abstract PUBMED:7560029 found no clinically significant advantage of crossed-pin fixation over lateral-pin fixation in terms of maintaining reduction, and there were fewer complications in the lateral-pin group.
Similarly, abstract PUBMED:16765353 reported that lateral pins fixation prevented iatrogenic ulnar nerve injuries, which occurred in 6% of patients treated with crossed wire fixation, and recommended lateral pins insertion as a safer method. However, abstract PUBMED:29922066 suggests that medial comminution may be a risk factor for stability after lateral-only pin fixation, recommending additional medial pin fixation for supracondylar humerus fractures with medial comminution. Abstract PUBMED:21966310 supports the use of lateral-entry Steinmann pins for their rigidity and effectiveness in controlling the reduction of high-grade supracondylar fractures without the risk of ulnar nerve complications associated with medial entry. Abstract PUBMED:26972812 emphasizes that larger diameter pins provide increased stability compared to smaller diameter pins, which could be a consideration in the choice of pinning technique. Abstract PUBMED:10906858 reports successful outcomes with crossed-pin fixation without ulnar nerve injury, suggesting that careful technique can mitigate the risks associated with this method. Abstract PUBMED:38249197 indicates that the crossing point of Kirschner wires may not be a significant factor for loss of reduction, suggesting that other factors may be more important in the outcomes of crossed-pin fixation. Abstract PUBMED:31706361 discusses the entry and exit points of lateral pins, providing insights into achieving a more proximal exit and maximum divergence, which could enhance the stability of lateral pinning.
Instruction: Unplanned attempts to quit smoking: missed opportunities for health promotion? Abstracts: abstract_id: PUBMED:19681806 Unplanned attempts to quit smoking: missed opportunities for health promotion? Objectives: To investigate the occurrence, determinants and reported success of unplanned and planned attempts to quit smoking, and sources of support used in these attempts. Design: Cross-sectional questionnaire survey of 3512 current and ex-smokers. Setting: Twenty-four general practices in Nottinghamshire, UK. Participants: Individuals who reported making a quit attempt within the last 6 months. Measurements: Occurrence, triggers for, support used and success of planned and unplanned quit attempts. Results: A total of 1805 (51.4%) participants returned completed questionnaires, reporting 394 quit attempts made within the previous 6 months of which 37% were unplanned. Males were significantly more likely to make an unplanned quit attempt [adjusted odds ratio (OR) 1.60, 95% confidence interval (CI) 1.04-2.46], but the occurrence of unplanned quit attempts did not differ significantly by socio-economic group or amount smoked. The most common triggers for unplanned quit attempts were advice from a general practitioner or health professional (27.9%) and health problems (24.5%). 5.4% and 4.1% of unplanned quit attempts used National Health Service cessation services on a one-to-one and group basis, respectively, and more than half (51.7%) were made without any support. Nevertheless, unplanned attempts were more likely to be reported to be successful (adjusted OR 2.01, 95% CI 1.23-3.27, P < 0.01). Conclusions: Unplanned quit attempts are common among smokers in all socio-demographic groups, are triggered commonly by advice from a health professional and are more likely to succeed; however, the majority of these unplanned attempts are unsupported. It is important to develop methods of providing behavioural and/or pharmacological support for these attempts, and determine whether these increase cessation rates still further. abstract_id: PUBMED:20642512 Unplanned attempts to quit smoking: a qualitative exploration. Aims: To gain a greater understanding of the process of unplanned attempts to quit smoking and the use of support in such attempts. Design: Qualitative study using semi-structured interviews with 20 smokers and ex-smokers. Setting: Twenty-four general practices in Nottinghamshire, UK. Participants: Smokers and ex-smokers who reported that their most recent attempt to quit smoking was unplanned. Measurements: Descriptions of the unplanned quit attempts and reported use of support within these. Findings: Smokers who report making 'unplanned' quit attempts exhibit substantial variation in what they mean by this; many quit attempts reported as 'unplanned' were actually delayed and involved some planning and use of cessation support. Conclusions: Reported 'unplanned' quit attempts often involve elements of planning and delay for quitters to access cessation support. It is important, therefore, that smoking cessation services offer flexible and adaptable support which can be used readily by potential quitters. abstract_id: PUBMED:28367399 Stress-related expectations about smoking cessation and future quit attempts and abstinence - a prospective study in daily smokers who wish to quit. Smokers who wish to quit may refrain from doing so if they expect to experience more stress after having given up.
We test if stress-related expectations about smoking cessation are associated with quit attempts and abstinence among smokers who are motivated to quit. The study included 1809 daily smokers in Denmark in 2011-2013. Stress-related expectations (do you think you will be more, less or equally stressed as a non-smoker?) were measured at baseline. Quit attempts, 30-day point prevalence abstinence and prolonged abstinence (defined as having been abstinent since baseline), were measured after 3, 8 and 14 months. We found that the association between expecting to be more stressed if giving up smoking differed between participants who had previously attempted to quit and those who had not: In participants who previously attempted to quit (47%), expecting to be more stressed was associated with significantly lower odds of abstinence compared to smokers who expected the same or a lower level of stress (odds ratios were 0.49 (95% CI: 0.31-0.79) for 30-day abstinence and was 0.28 (95% CI: 0.08-0.99) for prolonged abstinence). In participants who had not previously attempted to quit, expectations about stress were not associated with abstinence. Results indicate that expectations about stress in relation to smoking cessation are an important determinant of cessation in smokers who previously attempted to quit. Addressing stress and how to handle stressful situations may increase the likelihood of a successful quit attempt. abstract_id: PUBMED:37247291 Smoking Cessation, Quit Attempts and Predictive Factors among Vietnamese Adults in 2020. Objective: This study aims to describe the updated smoking cessation and quit attempt rates and associated factors among Vietnamese adults in 2020. Methods: Data on tobacco use among adults in Vietnam in 2020 was derived from the Provincial Global Adult Tobacco Survey. The participants in the study were people aged 15 and older. A total of 81,600 people were surveyed across 34 provinces and cities. Multi-level logistic regression was used to examine the associations between individual and province-level factors on smoking cessation and quit attempts. Results: The smoking cessation and quit attempt rates varied significantly across the 34 provinces. The average rates of people who quit smoking and attempted to quit were 6.3% and 37.2%, respectively. The factors associated with smoking cessation were sex, age group, region, education level, occupation, marital status, and perception of the harmful effects of smoking. Attempts to quit were significantly associated with sex, education level, marital status, perception of the harmful effects of smoking, and visiting health facilities in the past 12 months. Conclusions: These results may be useful in formulating future smoking cessation policies and identifying priority target groups for future interventions. However, more longitudinal and follow-up studies are needed to prove a causal relationship between these factors and future smoking cessation behaviors. abstract_id: PUBMED:38403997 What Motivates Betel Quid Chewers to Quit? An Analysis of Several Cessation-Relevant Variables. Introduction: Betel quid (BQ) is globally the fourth most consumed psychoactive substance. It is consumed by an estimated 600 million people worldwide, accounting for nearly 8% of the world's population. There have been very few studies assessing chewers' motivation to quit. Objectives: In the current study, we sought to understand the relationship between several cessation-relevant variables and chewers' motivation to quit. 
Hypotheses: Based on analogous research on cigarette smoking, we hypothesized that the following cessation-relevant variables would be associated with motivation to quit: health risk perceptions, number of chews per day, cost, degree of BQ dependence, withdrawal symptoms, number of quit attempts, reasons for use, personal health improvement, and type of BQ chewed. Methods: A total of 351 adult BQ chewers from Guam participated in the survey and served as the sample for the analyses. Results: Majority of chewers want to quit and intend to quit. Chewers relatively high in motivation to quit evinced greater health risk perceptions of BQ chewing, greater perceived health benefits to quitting, and a greater number of past quit attempts, compared to those relatively low in motivation to quit. Conclusions: Understanding which factors are associated with chewers' motivation to quit can be helpful for designing BQ cessation programs. The results suggest that BQ cessation programs could be improved by an increased emphasis on information about the negative health effects of BQ chewing and relapse-prevention. abstract_id: PUBMED:31104040 Effects of a multi-behavioral health promotion program at worksite on smoking patterns and quit behavior. Background: Tobacco use is associated with various severe health risks. Therefore, the need to decrease smoking rates is a great public health concern. The workplace has capability as a setting through which large groups of smokers can be reached to encourage smoking cessation. Objective: The aim of the present study was to evaluate effects of a multi behavioral worksite health promotion intervention. The primary outcome was the change of smoking rate. Secondary outcomes were changes in smoking attitudes and readiness to stop smoking among employees over an intervention period of 12 months. Method: 112 and 110 employees were enrolled in the intervention and control arm respectively. The intervention group received a 12-month multicomponent health promotion intervention. One of the main elements of the multicomponent intervention was a smoking cessation and counseling program. During the pilot year, participants completed a self-evaluation questionnaire at baseline and again after 12 months to related outcomes and changes. Results: Results showed that participants' quit behavior and smoking behavior changed over time in the intervention group (IG). Readiness to quit smoking also increased in the IG compared to the comparison group (CG). Some positive intervention effects were observed for cognitive factors (e.g., changes attitudes towards smoking). Baseline willingness to change smoking behavior was significantly improved over time. Conclusions: This study showed initial results of a long-term multicomponent worksite health promotion program with regard to changes in smoking behavior, attitudes towards smoking and readiness to quit smoking. The evaluation suggests that a worksite health promotion program may lead to improvements in smoking behavior for a number of workers. abstract_id: PUBMED:35358964 Mental Health Symptoms and Associations with Tobacco Smoking, Dependence, Motivation, and Attempts to Quit: Findings from a Population Survey in Germany (DEBRA Study). 
Introduction: This study aimed to estimate prevalence rates of mental health symptoms (anxiety, depression, and overall psychological distress) by tobacco smoking status, and associations between such symptoms and the level of dependence, motivation, and attempts to quit smoking in the German population. Methods: Cross-sectional analysis of data from six waves of a nationally representative household survey collected in 2018/19 (N = 11,937 respondents aged ≥18). Mental health symptoms were assessed with the Patient Health Questionnaire-4. Associations with smoking status, dependence, motivation to quit, and ≥1 past-year quit attempt (yes/no) were analysed with adjusted regression models among the total group, and among subgroups of current (n = 3,248) and past-year smokers (quit ≤12 months ago, n = 3,357). Results: Weighted prevalence rates of mental health symptoms among current, former, and never smokers were: 4.1%, 2.4%, 2.5% (anxiety), 5.4%, 4.7%, 4.0% (depression), and 3.1%, 2.5%, 2.4% (psychological distress). Current versus never smokers were more likely to report symptoms of anxiety and depression. Smokers with higher versus lower levels of dependence were more likely to report higher levels of all three mental health symptoms. Higher versus lower levels of overall psychological distress were associated with a higher motivation to quit smoking and, among past-year smokers, with higher odds of reporting a past-year quit attempt. Conclusions: We found various relevant associations between mental health symptoms and smoking behaviour. Healthcare professionals need to be informed about these associations and trained to effectively support this vulnerable group in translating their motivation into abstinence. abstract_id: PUBMED:32171957 If at first you don't succeed, when should you try again? A prospective study of failed quit attempts and subsequent smoking cessation. Objective: To assess the association between likelihood of success of smoking cessation attempts and time since most recent attempt. Methods: Prospective study of 823 smokers who reported a failed quit attempt in the last 12 months at baseline and ≥1 quit attempt over 6-month follow-up. The input variable was time in months between the end (and in an exploratory analysis, the start) of the most recent failed quit attempt reported retrospectively at baseline and start of the first attempt made during the 6-month follow-up period. The outcome variable was success in the latter quit attempt. Results: Success rates for failed quitters who waited <3, 3-6, and 6-12 months between their failed quit attempt ending and making a subsequent quit attempt were 13.8%, 17.5%, and 19.0% respectively. After adjustment for covariates, the odds of cessation relative to those who made a subsequent quit attempt within 3 months were 1.42 (95%CI 0.79-2.55) and 1.52 (95%CI 0.81-2.86) for those who waited 3-6 and 6-12 months respectively before trying again. Bayes factors indicated the data were insensitive. The exploratory analysis showed the odds of cessation were 1.55 (95%CI 0.78-3.08), 1.92 (95%CI 0.94-3.92), and 2.47 (95%CI 1.04-5.83) greater for those with an interval of 3-6, 6-12, and 12-18 months respectively than those who tried again within 3 months. Conclusions: While pre-planned analyses were inconclusive, exploratory analysis of retrospective reports of quit attempts and success suggested the likelihood of success of quit attempts may be positively associated with number of months since beginning a prior quit attempt.
However, only the longest inter-quit interval examined (12-18 months) was associated with significantly greater odds of quit success relative to a <3 month interval in fully adjusted models; all other comparisons were inconclusive. abstract_id: PUBMED:24333037 Prevalence of unassisted quit attempts in population-based studies: a systematic review of the literature. Aims: The idea that most smokers quit without formal assistance is widely accepted, however, few studies have been referenced as evidence. The purpose of this study is to systematically review the literature to determine what proportion of adult smokers report attempting to quit unassisted in population-based studies. Methods: A four stage strategy was used to conduct a search of the literature including searching 9 electronic databases (PUBMED, MEDLINE (OVID) (1948-), EMBASE (1947-), CINAHL, ISI Web of Science with conference proceedings, PsycINFO (1806-), Scopus, Conference Papers Index, and Digital Dissertations), the gray literature, online forums and hand searches. Results: A total of 26 population-based prevalence studies of unassisted quitting were identified, which presented data collected from 1986 through 2010, in 9 countries. Unassisted quit attempts ranged from a high of 95.3% in a study in Christchurch, New Zealand, between 1998 and 1999, to a low of 40.6% in a national Australian study conducted between 2008 and 2009. In 24 of the 26 studies reviewed, a majority of quit attempts were unassisted. Conclusions: This systematic review demonstrates that a majority of quit attempts in population-based studies to date are unassisted. However, across and within countries over time, it appears that there is a trend toward lower prevalence of making quit attempts without reported assistance or intervention. abstract_id: PUBMED:24837754 Predictors of successful and unsuccessful quit attempts among smokers motivated to quit. Introduction: Despite their positive motivation to quit, many smokers do not attempt to quit or relapse soon after their quit attempt. This study investigated the predictors of successful and unsuccessful quit attempts among smokers motivated to quit smoking. Methods: We conducted secondary data analysis among respondents motivated to quit within 6 months, randomized to the control group (N=570) of a Web-based smoking cessation intervention study. Using chi-square tests and ANOVA with Tukey post hoc comparisons, we investigated baseline differences by smoking status (successful quitter/relapse/persistent smoker) assessed after 6 weeks (N=214). To identify independent predictors of smoking status, multivariate multinomial logistic regression analyses were conducted. Results: Successful quitters at 6-week follow-up (26%) had reported significantly higher baseline levels of self-efficacy than relapsers (45%) and persistent smokers (29%). Furthermore, both successful quitters and relapsers had reported a significantly higher baseline intention to quit than persistent smokers and successful quitters had reported significantly more preparatory planning at baseline than persistent smokers. Results from regression analyses showed that smokers' baseline intention to quit positively predicted quit attempts reported after 6 weeks, while self-efficacy positively predicted quit attempt success. Conclusions: Different factors appear to play a role in predicting quit attempts and their success.
Whereas intention to quit only appeared to play a role in predicting quit attempts, self-efficacy was the main factor predicting quit attempt success. More research is needed to determine the role of preparatory planning and plan enactment and to investigate whether these findings can be replicated on the long term. Answer: Unplanned attempts to quit smoking are indeed common and represent potential missed opportunities for health promotion. A study found that 37% of quit attempts were unplanned, and these were more likely to be successful compared to planned attempts (adjusted odds ratio 2.01) (PUBMED:19681806). However, the majority of these unplanned attempts were made without any support, despite being commonly triggered by advice from a health professional or health problems. This suggests that there is a significant opportunity to provide behavioral and/or pharmacological support to increase cessation rates further (PUBMED:19681806). Qualitative research has shown that what smokers report as 'unplanned' quit attempts often involve some degree of planning and delay, allowing them to access cessation support (PUBMED:20642512). This indicates that smoking cessation services need to offer flexible and adaptable support that can be readily used by those attempting to quit, whether their attempts are planned or unplanned (PUBMED:20642512). Moreover, stress-related expectations about smoking cessation are an important determinant of cessation, especially among smokers who have previously attempted to quit. Addressing stress and providing strategies to handle stressful situations may increase the likelihood of a successful quit attempt (PUBMED:28367399). In summary, unplanned quit attempts are a significant phenomenon among smokers and are more likely to lead to success. However, these attempts are often unsupported, highlighting a gap in health promotion efforts. By providing timely and adaptable support, particularly addressing stress management, health professionals can capitalize on these opportunities to assist smokers in their cessation efforts (PUBMED:19681806; PUBMED:20642512; PUBMED:28367399).
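Several of the abstracts in this section report adjusted odds ratios from logistic regression (for example, the adjusted OR of 2.01, 95% CI 1.23-3.27, for the success of unplanned quit attempts in PUBMED:19681806). As a generic reminder of how such figures arise, rather than a description of any one study's model:

\[ \ln\!\left( \frac{p}{1-p} \right) = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k, \qquad OR_j = e^{\beta_j}, \qquad 95\%\ CI = \exp\!\left( \beta_j \pm 1.96 \cdot SE(\beta_j) \right) \]

Here p is the probability of the outcome (e.g., a reported successful quit attempt), the x_j are the predictor and covariate values, and OR_j is the multiplicative change in the odds of the outcome per unit change in x_j with the other covariates held fixed; a confidence interval that excludes 1.0 corresponds to significance at the 5% level.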
Instruction: Does device-based testing save time during automatic implantable cardioverter-defibrillator implantation? Abstracts: abstract_id: PUBMED:29428133 Implantable Cardioverter Defibrillator Implantation with or Without Defibrillation Testing. Defibrillation testing (DFT) during implantable cardioverter-defibrillator (ICD) implantation is still considered standard of care in some, but in increasingly fewer centers. The goal is to ensure that the device system functions as intended by testing in the controlled laboratory setting. Although safe, complications can occur and DFT is associated with an increased procedural time and cost. DFT is useful in assessing device function when programming changes or patient characteristics raise concerns regarding ICD efficacy. DFT remains standard of practice following implantation of subcutaneous ICDs and other specific circumstances. Implanting physicians should remain familiar with the process of DFT and situations where it is useful for individual patients. abstract_id: PUBMED:28073910 Implantation of the Subcutaneous Implantable Cardioverter-Defibrillator: An Evaluation of 4 Implantation Techniques. Background: Alternative techniques to the traditional 3-incision subcutaneous implantation of the subcutaneous implantable cardioverter-defibrillator may offer procedural and cosmetic advantages. We evaluate 4 different implant techniques of the subcutaneous implantable cardioverter-defibrillator. Methods And Results: Patients implanted with subcutaneous implantable cardioverter-defibrillators from 2 hospitals between 2009 and 2016 were included. Four implantation techniques were used depending on physician preference and patient characteristics. The 2- and 3-incision techniques both place the pulse generator subcutaneously, but the 2-incision technique omits the superior parasternal incision for lead positioning. Submuscular implantation places the pulse generator underneath the serratus anterior muscle and subfascial implantation underneath the fascial layer on the anterior side of the serratus anterior muscle. Reported outcomes include perioperative parameters, defibrillation testing, and clinical follow-up. A total of 246 patients were included with a median age of 47 years and 37% female. Fifty-four patients were implanted with the 3-incision technique, 118 with the 2-incision technique, 38 with submuscular, and 37 with subfascial. Defibrillation test efficacy and shock lead impedance during testing did not differ among the groups; respectively, P=0.46 and P=0.18. The 2-incision technique resulted in the shortest procedure duration and time-to-hospital discharge compared with the other techniques (P<0.001). A total of 18 complications occurred, but there were no significant differences between the groups (P=0.21). All infections occurred in subcutaneous implants (3-incision, n=3; 2-incision, n=4). In the 2-incision group, there were no lead displacements. Conclusions: The presented implantation techniques are feasible alternatives to the standard 3-incision subcutaneous implantation, and the 2-incision technique resulted in shortest procedure duration. abstract_id: PUBMED:15129791 Does device-based testing save time during automatic implantable cardioverter-defibrillator implantation? Background: Defibrillation testing can be done either via an external cardiac defibrillator or directly via the implanted defibrillator during implantation (device-based testing).
The advantage of one testing methodology over the other has not been adequately studied. Methods And Results: Seventy-four patients (72% men) were randomized into two groups depending on the defibrillation testing methodology used: external cardiac defibrillation and device-based testing groups. R-wave, pacing threshold, pacing impedance, defibrillation threshold, defibrillation pathway impedance and total procedure time were not significantly different between the two groups. Conclusions: Device-based testing did not significantly reduce the procedure time. Lead and defibrillation parameters were similar in both the groups; lead repositioning and replacement were required in three patients in the external cardiac defibrillation group. abstract_id: PUBMED:28289531 Critical analysis of ineffective post implantation implantable cardioverter-defibrillator-testing. Aim: Testing of the implantable cardioverter-defibrillator is done at the time of implantation. We investigate whether any testing should be performed. Methods: All consecutive patients between January 2006 and December 2008 undergoing implantable cardioverter-defibrillator (ICD) implantation/replacement (a total of 634 patients) were included in the retrospective study. Results: Sixteen patients (2.5%) were not tested (9 with LA/LV-thrombus, 7 due to operator's decision). Analyzed were 618 patients [76% men, 66.4 ± 11 years, 24% secondary prevention (SP), 46% with left ventricular ejection fraction (LVEF) < 20%, 56% had coronary artery disease (CAD)] undergoing defibrillation safety testing (SMT) with an energy of 21 ± 2.3 J. In 22/618 patients (3.6%) induced ventricular fibrillation (VF) could not be terminated with maximum energy of the ICD. Six of those (27%) had successful SMT after system modification or shock lead repositioning, 14 patients (64%) received a subcutaneous electrode array. Younger age (P = 0.0003), non-CAD (P = 0.007) and VF as index event for SP (P = 0.05) were associated with a higher incidence of ineffective SMT. LVEF < 20% and incomplete revascularisation in patients with CAD had no impact on SMT. Conclusion: Defibrillation testing is well-tolerated. An ineffective SMT occurred in 4% and two thirds of those needed implantation of a subcutaneous electrode array to pass a SMT > 10 J. abstract_id: PUBMED:29766894 Prospective Evaluation of Implantable Cardioverter-Defibrillator Lead Function During and After Left Ventricular Assist Device Implantation. Objectives: This study investigated the mechanism of lead malfunction by monitoring lead parameters throughout left ventricular assist device (LVAD) implantation. Background: Implantable cardioverter-defibrillator (ICD) lead malfunction can occur after LVAD implantation. Methods: ICD lead data were prospectively evaluated during and after LVAD implantation and at 12 pre-specified intraoperative time points. Results: We prospectively evaluated 32 patients with ICDs who underwent LVAD implantation, of whom 20 patients underwent serial testing at 12 intraoperative steps. Post-operative right ventricle (RV) sensing had decreased by >50% from baseline in 7 patients (22%), with RV sensing improving at 1 to 7 weeks in 2 patients (28.6%). Nine patients (28.1%) had >10-ohm (Ω) high-voltage (HV) impedance changes from baseline to final impedance. In all 5 patients with >50% decrease in RV sensing and all 7 patients with a >10-Ω HV impedance change who underwent intraoperative testing, changes were not detected until after weaning from cardiopulmonary bypass.
Patients with decreased RV lead sensing >50% (n = 7) had lower glomerular filtration rates (48.7 ± 21.9 ml/min/1.73 m2 vs. 68.4 ± 22.5 ml/min/1.73 m2, respectively, p = 0.0489), were more likely to have undergone concomitant RVAD placement (42.9% vs. 0%, respectively, p = 0.0071), concomitant tricuspid valve surgery (57.1% vs. 16%, p = 0.0469), or to have had cardiac tamponade or unplanned return to the operating room (57.1% vs. 12%, p = 0.0258). Conclusions: ICD lead malfunction can occur following LVAD implantation but may improve over time. Intraoperative RV sensing and HV impedance changes were not detected until after weaning from cardiopulmonary bypass, suggesting the mechanism of RV lead malfunction may be related to LV unloading and concomitant leftward septal shift. A conservative approach is warranted in many patients with ICD parameter changes after LVAD implantation because parameter abnormalities may improve over time. (Implantable Cardioverter Defibrillator (ICD) Function During Ventricular Assist Device (VAD) Implantation; NCT01576562). abstract_id: PUBMED:33404997 Wearable cardioverter defibrillator: bridging for implantable defibrillators in left ventricular assist device patients. There are currently conflicting data available regarding the use of implantable cardioverter-defibrillators (ICD) in left ventricular assist device (LVAD) patients. While the benefit of an ICD in heart failure patients is well demonstrated, such benefit has failed to reach the LVAD population. In the absence of randomized controlled trial data on ICD use in LVAD recipients, major societal guidelines disagree when it comes to the routine implantation of a permanent defibrillator in prospective ventricular assist device patients. Alternative permanent defibrillator strategies have been suggested for the LVAD population, such as subcutaneous implantable cardioverter defibrillators (S-ICDs), but eligibility of patients for such practice remains disappointing. Although most heart failure patients undergoing LVAD implantation already bear an ICD, clinicians are left with the decision of implanting an ICD de novo in a substantial number of patients. Wearable cardioverter defibrillators could prove beneficial in LVAD recipients by utilizing them as a bridge to decision towards the implantation of a permanent defibrillator. abstract_id: PUBMED:23608953 Implantable cardioverter defibrillator. This article aims to give an overview of important articles in the field of implantable cardioverter defibrillator (ICD) therapy in 2012. Important publications concern analyses on therapy efficacy and safety of the subcutaneous ICD, gender-specific differences in the complication rate and prognosis after ICD implantation, the necessity of intraoperative testing of the defibrillation threshold and the impact of preventive measures to reduce ICD therapies on prognosis after device implantation. The relevance of the study findings for daily clinical practice is briefly discussed. abstract_id: PUBMED:31180377 Relation of multicenter automatic defibrillator implantation trial implantable cardioverter-defibrillator score with long-term cardiovascular events in patients with implantable cardioverter-defibrillator.
Objective: To test the hypothesis that multicenter automatic defibrillator implantation trial (MADIT) - implantable cardioverter-defibrillator (ICD) scores predict replacement requirement, appropriate shock, and long-term adverse cardiovascular events in a mixed population including both primary and secondary prevention. Methods: The study has a retrospective design. Patients who were implanted with an ICD in the cardiology clinic of Atatürk University Faculty of Medicine between 2000 and 2013 were included in the study. For this purpose, 1394 patients who were implanted with a device in our clinic were reviewed. Then, those who were implanted with a permanent pacemaker (n=1005), cardiac resynchronization treatment (CRT) (n=45) and CRT-ICD (n=198) were excluded. Results: A total of 146 patients (98 males, 67.1%) with a mean age of 61.1 (±14.8) years were recruited. The median follow-up time was 21.5 months (mean 30.6±25.9 months; minimum 4 months, and maximum 120 months). The median MADIT-ICD score was 2. MADIT-ICD scores were categorized as low in 15.1%, intermediate in 57.5%, and high in 27.4% of patients. Accordingly, MADIT-ICD scores (1.29 [1.00-1.68], p=0.050), hemoglobin (0.86 [0.75-0.99], p=0.047), and left ventricular ejection fraction (EF) (0.97 [0.94-0.99], p=0.023) were determined as independent predictors of major adverse cardiovascular events in the long-term follow-up of the ICD-implanted population. Conclusion: In this study, we showed that there was an independent association of long-term adverse cardiovascular events with MADIT-ICD score, hemoglobin, and EF in patients implanted with an ICD. abstract_id: PUBMED:28678000 Implantable cardioverter defibrillator knowledge and end-of-life device deactivation: A cross-sectional survey. Background: End-of-life implantable cardioverter defibrillator deactivation discussions should commence before device implantation and be ongoing, yet many implantable cardioverter defibrillators remain active in patients' last days. Aim: To examine associations among implantable cardioverter defibrillator knowledge, patient characteristics and attitudes to implantable cardioverter defibrillator deactivation. Design: Cross-sectional survey using the Experiences, Attitudes and Knowledge of End-of-Life Issues in Implantable Cardioverter Defibrillator Patients Questionnaire. Participants were classified as having insufficient or sufficient implantable cardioverter defibrillator knowledge and the two groups were compared. Setting/participants: Implantable cardioverter defibrillator recipients (n = 270, mean age 61 ± 14 years; 73% male) were recruited from cardiology and implantable cardioverter defibrillator clinics attached to two tertiary hospitals in Melbourne, Australia, and two in Kentucky, the United States. Results: Participants with insufficient implantable cardioverter defibrillator knowledge (n = 77, 29%) were significantly older (mean age 66 vs 60 years, p = 0.001), less likely to be Caucasian (77% vs 87%, p = 0.047), less likely to have received implantable cardioverter defibrillator shocks (26% vs 40%, p = 0.031), and more likely to have indications of mild cognitive impairment (Montreal Cognitive Assessment score <24: 44% vs 16%, p < 0.001). Insufficient implantable cardioverter defibrillator knowledge was associated with attitudes suggesting unwillingness to discuss implantable cardioverter defibrillator deactivation, even during the last days towards end of life (p < 0.05).
Conclusion: Implantable cardioverter defibrillator recipients, especially those who are older or have mild cognitive impairment, often have limited knowledge about implantable cardioverter defibrillator deactivation. This study identified several potential teachable moments throughout the patients' treatment trajectory. An interdisciplinary approach is required to ensure that discussions about implantable cardioverter defibrillator deactivation issues are initiated at appropriate time points, with family members ideally also included. abstract_id: PUBMED:27721868 Combined Subpectoral Implantation of Implantable Cardioverter-Defibrillator and Augmentation Mammoplasty in a Young Female Patient. Subcutaneous implantation of a cardiac implantable electronic device is the standard method. Occasionally, subpectoral cardiac implantable electronic device (CIED) implantation via axillary incisions is performed in young female patients for cosmetic purposes. Because subpectoral CIED implantation and augmentation mammoplasty involve the same layer, it is feasible to perform both procedures simultaneously. We report a case of combined subpectoral implantation of an implantable cardioverter-defibrillator and augmentation mammoplasty via the axillary approach in a young female patient with dilated cardiomyopathy and small breasts. Answer: Device-based testing during automatic implantable cardioverter-defibrillator (ICD) implantation does not significantly reduce the procedure time. A study comparing device-based testing to external cardiac defibrillation during ICD implantation found that the total procedure time, as well as R-wave, pacing threshold, pacing impedance, defibrillation threshold, and defibrillation pathway impedance, were not significantly different between the two groups. Lead repositioning and replacement were required in three patients in the external cardiac defibrillation group, but overall, the advantage of one testing methodology over the other in terms of time savings was not demonstrated (PUBMED:15129791).
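Note on methods: the MADIT-ICD abstract above (PUBMED:31180377) reports ratio-style estimates with confidence intervals (e.g., 1.29 [1.00-1.68]) for score, hemoglobin, and ejection fraction as independent predictors of long-term adverse events, which implies a multivariable time-to-event model. The following Python sketch shows how such a Cox proportional-hazards analysis is commonly set up with the lifelines package; it is illustrative only, and every column name and value below is an assumption rather than data from the cited study.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level data (toy values; real analyses need far more events per covariate)
df = pd.DataFrame({
    "followup_months": [12, 30, 45, 8, 60, 24, 36, 50, 18, 42],
    "mace":            [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],   # 1 = major adverse cardiovascular event observed
    "madit_icd_score": [3, 1, 1, 3, 2, 1, 2, 1, 3, 0],
    "hemoglobin":      [11.2, 13.5, 12.1, 10.8, 14.0, 13.0, 11.9, 13.8, 12.5, 13.2],
    "lvef":            [20, 35, 25, 18, 40, 30, 22, 38, 28, 45],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_months", event_col="mace")
cph.print_summary()  # prints hazard ratios with 95% confidence intervals, analogous to the bracketed estimates quoted above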
Instruction: Visual Estimation of the Severity of Aortic Stenosis and the Calcium Burden by 2-Dimensional Echocardiography: Is It Reliable? Abstracts: abstract_id: PUBMED:26307124 Visual Estimation of the Severity of Aortic Stenosis and the Calcium Burden by 2-Dimensional Echocardiography: Is It Reliable? Objectives: Guidelines have recommended aortic valve surgery in asymptomatic patients with severe aortic stenosis and a large aortic valve calcium burden. The purpose of this study was to determine whether visual assessment of aortic valve calcium and stenosis severity are reliable based on 2-dimensional echocardiography alone. Methods: We prospectively enrolled 68 patients with aortic stenosis and compared them with 30 control participants without aortic stenosis. All had aortic valve calcium score assessment by computed tomography. In a random order, 2-dimensional images without hemodynamic data were independently reviewed by 2 level 3-trained echocardiographers, who then classified these patients into categories based on aortic valve calcium and stenosis severity. Results: The 68 patients (mean age ± SD, 74 ± 10 years) were classified as having mild (n = 28), moderate (n = 22), and severe (n = 18) aortic stenosis. When the observers were asked to grade the degree of valve calcification, the agreement between them was poor (κ = 0.33-0.39). The visual ability to determine stenosis severity compared with Doppler echocardiography had high specificity (81% and 88% for observers 1 and 2). However, sensitivity was unacceptably low (56%-67%), and the positive predictive value was poor (44%-50%). Agreement was fair (κ = 0.58-0.69) between the observers for determining severe stenosis. Conclusions: Our results suggest that visual assessment of aortic valve calcium has high interobserver variability; the visual ability to determine severe aortic stenosis has low sensitivity but high specificity. Our results may have important implications for treatment of patients with aortic stenosis and guiding the use of handheld echocardiography. Further research with larger cohorts is needed to validate the variability, sensitivity, and specificity reported in our study. abstract_id: PUBMED:33040296 Validity of visual assessment of aortic valve morphology in patients with aortic stenosis using two-dimensional echocardiography. The diagnostic value of a visual assessment of aortic valve (AV) morphology for grading aortic stenosis (AS) remains unclear. A visual score (VS) for assessing the AV was developed and its reliability with respect to Doppler measurements and the calcium score (ctCS) derived by multislice computed tomography was evaluated. Ninety-nine patients with AS of various severity and 38 patients without AS were included in the analysis. Echocardiographic studies were evaluated using the new VS which includes echogenicity, thickening, localization of lesions and leaflet mobility, with a total score ranging from 0 to 11. The association of VS with ctCS and the severity of AS was analyzed. There was a significant correlation of VS with AV hemodynamic parameters and with ctCS. The cut-off value for the detection of AS of any grade was a VS of 6 (sensitivity 95%, specificity 85% for women; sensitivity 85%, specificity 88% for men). A VS of 9 for women and of 10 for men was able to predict severe AS with a high specificity (96% in women and 94% in men, AUC 0.8 and 0.86, respectively).
The same cut-off values were identified for the detection of ctCS of ≥ 1600 AU and ≥ 3000 AU with a specificity of 77% and 82% (AUC 0.69 and 0.81, respectively). Assessment of aortic valve morphology can serve as an additional diagnostic tool for the detection of AS and an estimation of its severity. abstract_id: PUBMED:34344508 A Novel Two-Dimensional Echocardiography Method to Objectively Quantify Aortic Valve Calcium and Predict Aortic Stenosis Severity. Aortic valve calcium (AVC) is a strong predictor of aortic stenosis (AS) severity and is typically calculated by multidetector computed tomography (MDCT). We propose a novel method using pixel density quantification software to objectively quantify AVC by two-dimensional (2D) transthoracic echocardiography (TTE) and distinguish severe from non-severe AS. A total of 90 patients (mean age 76 ± 10 years, 75% male, mean AV gradient 32 ± 11 mmHg, peak AV velocity 3.6 ± 0.6 m/s, AV area (AVA) 1.0 ± 0.3 cm2, dimensionless index (DI) 0.27 ± 0.08) with suspected severe aortic stenosis undergoing 2D echocardiography were retrospectively evaluated. Parasternal short axis aortic valve views were used to calculate a gain-independent ratio between the average pixel density of the entire aortic valve in short axis at end diastole and the average pixel density of the aortic annulus in short axis (2D-AVC ratio). The 2D-AVC ratio was compared to echocardiographic hemodynamic parameters associated with AS, MDCT AVC quantification, and expert reader interpretation of AS severity based on echocardiographic AVC interpretation. The 2D-AVC ratio exhibited strong correlations with mean AV gradient (r = 0.72, p < 0.001), peak AV velocity (r = 0.74, p < 0.001), AVC quantified by MDCT (r = 0.71, p < 0.001) and excellent accuracy in distinguishing severe from non-severe AS (area under the curve = 0.93). Conversely, expert reader interpretation of AS severity based on echocardiographic AVC was not significantly related to AV mean gradient (t = 0.23, p = 0.64), AVA (t = 2.94, p = 0.11), peak velocity (t = 0.59, p = 0.46), or DI (t = 0.02, p = 0.89). In conclusion, these data suggest that the 2D-AVC ratio may be a complementary method for AS severity adjudication that is readily quantifiable at time of TTE. abstract_id: PUBMED:23891412 Impact of three-dimensional echocardiography on classification of the severity of aortic stenosis. Background: Owing to its elliptical shape, the left ventricle outflow tract (LVOT) area is underestimated by two-dimensional (2D) diameter-based calculations which assume a circular shape. This results in overestimation of aortic stenosis (AS) by the continuity equation. In cases of moderate to severe AS, this overestimation can affect intraoperative clinical decision making (expectant management versus replacement). The purpose of this intraoperative study was to compare the aortic valve area calculated by 2D diameter based and three-dimensional (3D) derived LVOT area via transesophageal echocardiography (TEE) and its impact on severity of AS. Methods: The LVOT area was calculated using intraoperative 2D and 3D TEE data from patients undergoing aortic valve replacement (AVR) and coronary artery bypass graft (CABG) surgery using the 2D diameter (RADIUS), 3D planimetry (PLANE), and 3D biplane (π·x·y) measurement (ELLIPSE) methods. For each method, the LVOT area was used to determine the aortic valve area by the continuity equation and the severity of AS categorized as mild, moderate, or severe.
Results: A total of 66 patients completed the study. The RADIUS method (3.5 ± 0.9 cm(2)) underestimated LVOT area by 21% (p < 0.05) compared with the PLANE method (4.1 ± 0.1 cm(2)) and by 18% (p < 0.05) compared with the ELLIPSE method (4.0 ± 0.9 cm(2)). There was no significant difference between the two 3D methods, namely, PLANE and ELLIPSE. Seven AVR patients (18%) and 1 CABG surgery patient (6%) who had originally been classified as severe AS by the 2D method were reclassified as moderate AS by the 3D methods (p < 0.001). Conclusions: Three-dimensional echocardiography has the potential to impact surgical decision making in cases of moderate to severe AS. abstract_id: PUBMED:37635033 A systematic review of contrast-enhanced computed tomography calcium scoring methodologies and impact of aortic valve calcium burden on TAVI clinical outcomes. Different methodologies have been used to assess the role of AV calcification (AVC) on TAVI outcomes. This systematic review aims to describe the burden of AVC, synthesize the different methods of calcium score quantification, and evaluate the impact of AVC on outcomes after TAVI. We included studies of TAVI patients who had reported AV calcium scoring by contrast-enhanced multidetector CT and the Agatston method. The impact of calcification on TAVI outcomes without restrictions on follow-up time or outcome type was evaluated. Results were reported descriptively, and a meta-analysis was conducted when feasible. Sixty-eight articles were included, with sample sizes ranging from 23 to 1425 patients. Contrast-enhanced calcium scoring was reported in 30 studies, calcium volume score in 28 studies, and unique scoring methods in two. All studies with calcium volume scores had variable protocols, but most utilized a modified Agatston method with variable attenuation threshold values of 300-850 HU. Eight studies used the Agatston method, with the overall mean AV calcium score in studies published from 2010 to 2012 of 3342.9 AU [95% CI: 3150.4; 3535.4, I2 = 0%]. The overall mean score was lower and heterogenous in studies published from 2014 to 2020 (2658.9 AU [95% CI: 2517.3; 2800.5, I2 = 79%]). Most studies reported a positive association between calcium burden and increased risk of adverse outcomes, including implantation of permanent pacemaker (7/8 studies), paravalvular leak (13/13 studies), and risk of aortic rupture (2/2 studies). AVC quantification methodology with contrast-enhanced CT is still variable. AVC negatively impacts TAVI outcomes independently of the quantification method. abstract_id: PUBMED:15518622 Relation of calcium-phosphorus product to the severity of aortic stenosis in patients with normal renal function. Calcium-phosphorus product (CaxP) has been associated with severity of aortic stenosis (AS) in dialysis patients, but it is unknown whether a relation exists in patients with normal renal function. One hundred seven patients with AS and normal serum creatinine were studied to determine whether there was an association between CaxP and AS severity, and it was found that CaxP was inversely related to AS severity, as measured by aortic valve area and transvalvular gradients. abstract_id: PUBMED:22808832 Correlation of calcification on excised aortic valves by micro-computed tomography with severity of aortic stenosis.
Background And Aim Of The Study: The quantification of incidentally found aortic valve calcification on computed tomography (CT) is not performed routinely, as data relating to the accuracy of aortic valve calcium for estimating the severity of aortic stenosis (AS) is neither consistent nor validated. As aortic valve calcium quantification by CT is confounded by wall and coronary ostial calcification, as well as motion artifact, the ex-vivo micro-computed tomography (micro-CT) of stenotic aortic valves allows a precise measurement of the amounts of calcium present. The study aim, using excised aortic valves from patients with confirmed AS, was to determine if the amount of calcium on micro-CT correlated with the severity of AS. Methods: Each of 35 aortic valves that had been excised from patients during surgical valve replacement were examined using micro-CT imaging. The amount of calcium present was determined by absolute and proportional values of calcium volume in the specimen. Subsequently, the correlation between calcium volume and preoperative mean aortic valve gradient (MAVG), peak transaortic velocity (V(max)), and aortic valve area (AVA) on echocardiography, was evaluated. Results: The mean calcium volume across all valves was 603.2 +/- 398.5 mm3, and the mean ratio of calcium volume to total valve volume was 0.36 +/- 0.16. The mean aortic valve gradient correlated positively with both calcium volume and ratio (r = 0.72, p < 0.001). V(max) also correlated positively with the calcium volume and ratio (r = 0.69 and 0.76 respectively; p < 0.001). A logarithmic curvilinear model proved to be the best fit to the correlation. A calcium volume of 480 mm3 showed sensitivity and specificity of 0.76 and 0.83, respectively, for a diagnosis of severe AS, while a calcium ratio of 0.37 yielded sensitivity and specificity of 0.82 and 0.94, respectively. Conclusion: A radiological estimation of calcium amount by volume, and its proportion to the total valve volume, were shown to serve as good predictive parameters for severe AS. An estimation of the calcium volume may serve as a complementary measure for determining the severity of AS when aortic valve calcification is identified on CT imaging. abstract_id: PUBMED:30061016 Direct Comparison of Severity Grading Assessed by Two-Dimensional, Three-Dimensional, and Doppler Echocardiography for Predicting Prognosis in Asymptomatic Aortic Stenosis. Background: Reliable assessment of aortic stenosis (AS) severity relies on stroke volume (SV) determination using Doppler echocardiography, but it can also be estimated with two-dimensional/three-dimensional echocardiography (2DE/3DE). The aim of this study was to compare SV measurements and AS subgroup classifications among the three modalities and determine their prognostic strength in asymptomatic AS. Methods: We prospectively enrolled 359 patients with asymptomatic AS. SV was determined using three methods, and the patients were divided into four AS subgroups according to indexed aortic valve area (iAVA) and SV index (SVI) determined by each method and mean pressure gradient. The primary end point was major adverse cardiovascular events (MACEs), which included cardiac death, ventricular fibrillation, heart failure, and aortic valve replacement. We also assessed the presence or absence of upper septal hypertrophy. Results: Doppler-derived SVI was significantly larger than that derived from 2DE/3DE with modest correlations (r = 0.33 and 0.47).
Thus, group classification varied substantially by modality. During the median follow-up period of 17 months, 112 patients developed a major adverse cardiovascular event. Although iAVA assessed by Doppler echocardiography had a significantly better net reclassification improvement compared with iAVA by 2DE or 3DE, prognostic values were nearly identical among the three methods. Ventricular septal geometry affected the accuracy of risk stratification. Conclusions: AS severity grading varied considerably according to the methods applied for calculating SV. Thus, SV measurements are not interchangeable, even though their prognostic power is similar. Hence, examiners should select one of the three methods to assess AS severity and should use the same method in longitudinal examinations. abstract_id: PUBMED:27887818 Three-Dimensional Morphology of the Left Ventricular Outflow Tract: Impact on Grading Aortic Stenosis Severity. Background: Left ventricular outflow tract (LVOT) measurement is a critical step in the quantification of aortic valve area. The assumption of a circular morphology of the LVOT may induce some errors. The aim of this study was to assess the three-dimensional (3D) morphology of the LVOT and its impact on grading aortic stenosis severity. Methods: Fifty-eight patients with aortic stenosis were studied retrospectively. LVOT dimensions were measured using 3D transesophageal echocardiography at three levels: at the hinge points (HP) of the aortic valve and at 4 and 8 mm proximal to the annular plane. Results were compared with standard two-dimensional echocardiographic measurements. Results: Three-dimensional transesophageal echocardiography showed a funnel shape that was more circular at the HP and more elliptical at 4 and 8 mm proximal to the annular plane (circularity index = 0.92 vs 0.83 vs 0.76, P < .001). Cross-sectional area was smaller at the HP and larger at 4 and 8 mm from the annular plane (3.6 vs 3.9 vs 4.1 cm2, P = .001). The best correlation between two-dimensional and 3D transesophageal echocardiographic dimensions was at the HP (intraclass correlation coefficient = 0.75; 95% CI, 0.59-0.86). When the HP approach was selected, there was a reduction in the percentage of patients with low flow (from 41% to 29%). Conclusions: A large portion of patients with aortic stenosis have funnel-shaped and elliptical LVOTs, a morphology that is more pronounced in the region farther from the annular plane. Two-dimensional LVOT measurement closer to the annular plane has the best correlation with 3D measurements. Measurement of the LVOT closer to the annular plane should be encouraged to reduce measurement errors. abstract_id: PUBMED:15365294 Images in geriatric cardiology. Usefulness of live three-dimensional transthoracic echocardiography in aortic valve stenosis evaluation. Aortic valve stenosis (AS) severity can be estimated by various modalities. Due to some of the limitations of the currently available methods, the usefulness of live three-dimensional transthoracic echocardiography (3D TTE) in the assessment of AS was explored. Live 3D TTE was able to visualize the aortic valve orifice in all 11 patients studied. Live 3D TTE correctly estimated the severity of AS in all 10 patients in whom AS severity could be evaluated at surgery. These included eight patients with severe AS and two with moderate AS. Two of these 10 patients with AS had associated hypertrophic cardiomyopathy and underwent myectomy at the time of aortic valve replacement.
Aortic valve orifice area measurements by live 3D TTE correlated well with intraoperative three-dimensional transesophageal echocardiographic reconstruction measurements (r=0.85) but not as well with two-dimensional transesophageal echocardiography measurements (r=0.64). Live 3D TTE measurements of the aortic valve orifice area also did not correlate well with two-dimensional transthoracic echocardiography measurements (r=0.46) but the number of patients studied with two-dimensional transthoracic echocardiography was smaller (only seven) and four of these did not undergo two-dimensional transthoracic echocardiography at the authors' institution. Altogether, four patients with severe AS by live 3D TTE, and subsequently confirmed at surgery, were misdiagnosed as having moderate AS by two-dimensional transthoracic echocardiography. Because it is completely noninvasive and views the aortic valve in three dimensions, 3D TTE could be a useful complement to the existing modalities in the evaluation of AS severity. Answer: The reliability of visual estimation of the severity of aortic stenosis (AS) and the calcium burden by 2-dimensional echocardiography (2DE) has been investigated in several studies, and the findings suggest that while it has high specificity, its sensitivity and interobserver agreement may be limited. One study found that visual assessment of aortic valve calcium by 2DE had high interobserver variability, and the ability to visually determine severe AS had low sensitivity (56%-67%) but high specificity (81%-88%). The positive predictive value was also poor (44%-50%), and the agreement between observers for determining severe stenosis was fair (κ= 0.58-0.69) (PUBMED:26307124). Another study developed a visual score (VS) for assessing aortic valve morphology, which includes echogenicity, thickening, localization of lesions, and leaflet mobility. This study found a significant correlation of VS with aortic valve hemodynamic parameters and calcium score (ctCS) derived by multislice computed tomography. The cut-off values for the detection of severe AS had high specificity (96% in women and 94% in men) (PUBMED:33040296). A novel method using pixel density quantification software to objectively quantify aortic valve calcium (AVC) by 2DE was proposed, which showed strong correlations with mean aortic valve gradient, peak aortic valve velocity, and AVC quantified by multidetector computed tomography (MDCT). This method exhibited excellent accuracy in distinguishing severe from non-severe AS (PUBMED:34344508). In contrast, three-dimensional echocardiography (3DE) has been shown to potentially impact surgical decision-making in cases of moderate to severe AS, as it may provide a more accurate assessment of the left ventricular outflow tract (LVOT) area and, consequently, the severity of AS (PUBMED:23891412). In summary, while visual estimation of AS severity and calcium burden by 2DE can be specific, its reliability is challenged by low sensitivity and interobserver variability. Objective quantification methods and the use of 3DE may provide more accurate assessments.
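Note on methods: two of the abstracts above (PUBMED:23891412, PUBMED:27887818) hinge on how the left ventricular outflow tract (LVOT) area enters the continuity equation, AVA = (LVOT area x VTI_LVOT) / VTI_AV, and on whether the LVOT is treated as a circle or an ellipse. The short Python sketch below is illustrative only: the measurements are invented, not taken from any cited study, and the conventional 1.0 cm2 cut-off is used simply to show how an underestimated LVOT area can shift a patient across a severity threshold, as described in the 3D echocardiography abstract.

import math

# Hypothetical measurements (illustrative only)
lvot_diameter_cm = 2.0      # single 2D diameter (assumed circular LVOT)
lvot_minor_cm = 2.0         # elliptical minor axis from 3D biplane imaging
lvot_major_cm = 2.5         # elliptical major axis from 3D biplane imaging
vti_lvot_cm = 15.0          # LVOT velocity-time integral
vti_av_cm = 50.0            # aortic valve velocity-time integral

area_circle = math.pi * (lvot_diameter_cm / 2) ** 2                  # pi * r^2
area_ellipse = math.pi * (lvot_minor_cm / 2) * (lvot_major_cm / 2)   # pi * a * b

for label, area in [("circular LVOT", area_circle), ("elliptical LVOT", area_ellipse)]:
    ava = area * vti_lvot_cm / vti_av_cm                             # continuity equation
    grade = "severe" if ava < 1.0 else "not severe"
    print(f"{label}: LVOT area = {area:.2f} cm^2, AVA = {ava:.2f} cm^2 -> {grade}")

With these made-up numbers the circular assumption yields an AVA of about 0.94 cm^2 (graded severe), while the elliptical assumption yields about 1.18 cm^2 (not severe), mirroring the kind of reclassification from severe to moderate AS reported when 3D-derived LVOT areas were used.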
Instruction: Do injections of 5-fluorouracil after trabeculectomy have toxic effects on the anterior segment? Abstracts: abstract_id: PUBMED:15969640 Do injections of 5-fluorouracil after trabeculectomy have toxic effects on the anterior segment? Objective: To discourage fibrosis of the filtering bleb, 5-fluorouracil (5-FU) may be injected after trabeculectomy. 5-FU is an antimetabolite that also can damage extraocular tissues at concentrations as low as 0.5%. This study ascertained whether repeated injection of 5-FU has toxic effects on intraocular structures. Methods: After unilateral trabeculectomy in anesthetized New Zealand rabbits, 5-FU (5.0 mg/0.1 mL) was injected at the trabeculectomy site every 5 days for 15 days. Evaluation included slit-lamp examination, confocal microscopy, and intraocular pressure (IOP). After sacrifice, aqueous humor (AH) was drawn and eyes excised for scanning electron microscopy (SEM) and light microscopy. Results: The 5-FU injection did not decrease IOP beyond trabeculectomy alone. Bleb height remained constant, thickness increased, and vascularity decreased. No changes in cornea or anterior segment were observed. No inflammation was observed in the bleb or surrounding tissues by slit-lamp or histologic examination. Protein in AH increased from 0.6 +/- 0.5 microg/mL at baseline to 19.8 +/- 4.4 microg/mL after trabeculectomy but only to 0.9 +/- 0.6 microg/mL after trabeculectomy plus 5-FU. Both in vivo confocal microscopy and SEM revealed deleterious effects on corneal epithelial and endothelial cells with a minor shift toward smaller cells. Conclusions: In this study 5-FU did not provoke an intraocular inflammatory response and had minimal effect on extraocular structures. Changes in corneal epithelium and endothelium detectable by confocal microscopy suggest a small toxic effect. These in vivo measurements by confocal microscopy were confirmed by SEM. Repeated administration did not cause additional cumulative toxic effects in the anterior segment. Therefore, multiple injections of 5-FU into the filtering bleb pose minimal risk to intraocular structures. abstract_id: PUBMED:11272780 Indocyanine green anterior segment angiography for studying conjunctival vascular changes after trabeculectomy. Background: The aim of the study was to evaluate the use of indocyanine green (ICG) for angiography of the anterior segment to characterize conjunctival and episcleral vasculature changes after trabeculectomy. Methods: This was a prospective evaluation of anterior segment ICG angiography in 10 eyes of 10 patients undergoing trabeculectomy for the first time. Trabeculectomy was performed with intraoperative sponge application of 5-fluorouracil (5 cases) or mitomycin C (5 cases). Anterior segment ICG angiography was performed prior to surgery, then at 2 weeks and 2 months after surgery. Results: With ICG, the anterior segment vessels were well delineated, including deep episcleral veins, which have not been clearly shown in previous angiographic techniques. Late phases of the angiogram could also be studied. The vascular alterations after trabeculectomy noted included loss of vascularity over the bleb area and vascular anastomoses along the perimeter of the avascular bleb. Conclusions: Angiography using ICG has potential as an investigative tool to study the conjunctival and episcleral vasculature changes after trabeculectomy. abstract_id: PUBMED:8446328 Anterior chamber reaction after mitomycin and 5-fluorouracil trabeculectomy: a comparative study.
We measured aqueous flare in 16 glaucomatous eyes after trabeculectomy in which 5-fluorouracil (5-FU) or mitomycin C (MMC) had been used as an adjunctive therapy. The eyes were divided into a 5-FU and an MMC group, matched for factors that might influence the postoperative inflammatory response to intraocular surgery. Seven eyes of seven patients received subconjunctival injections of 5-FU (50 mg in 2 weeks) and nine eyes of nine patients were given 0.2 mg/0.5 mL MMC intraoperatively. The aqueous flare converted to an albumin concentration (mg/dL) was significantly higher in the 5-FU group than in the MMC group (359.6 +/- 113.8 mg/dL and 143.2 +/- 46.7 mg/dL, respectively; Mann-Whitney U test, P < .05) on the second postoperative day. Intraoperative MMC appears to be no more harmful to the blood-aqueous barrier than 5-FU. abstract_id: PUBMED:11861206 Autologous blood injections for treating hypotonies after trabeculectomy. The authors carried out a retrospective study in order to assess the efficacy of intrableb autologous blood injections after trabeculectomy. The indication for treatment was hypotony associated with overfiltration. Twelve eyes of 12 patients including seven men (58.3%) and five women (41.67%) underwent from one to four (mean 1.7) subconjunctival injections. The age of the patients ranged from 31 to 66 years (mean 52.4 years). All the patients were diagnosed with open-angle glaucoma. Three eyes underwent trabeculectomy with mitomycin C, one with 5-fluorouracil and eight with no antimetabolite. The mean post-needling period was 12.3 months (ranging from 7 to 28 months). After intrableb blood injections, the average intraocular pressure increased from 2.7 ± 1.2 mmHg (ranging from 0 to 6 mmHg) to 8.2 ± 4.2 mmHg (ranging from 4 to 16 mmHg). The difference was statistically significant (P < 0.5). After treatment, the average visual acuity increased from 1.8/10 to 3.2/10. This difference was not statistically significant (P > 0.5). However, the procedure was ineffective in two patients (16.7%) as regards intraocular pressure and in seven patients (58.3%) as regards visual acuity. Hyphema, the most frequent complication (58.3% of our cases), is usually small, transient, and without sequelae. Although it may be delayed, it may be large and may induce ocular hypertension (10% of our cases), or it may be associated with intravitreal blood. abstract_id: PUBMED:20148657 Use of 5-Fluorouracil injections to reduce the risk of trabeculectomy bleb failure after cataract surgery. Purpose: To determine whether the use of postoperative subconjunctival 5-fluorouracil (5-FU) reduces the risk of trabeculectomy bleb failure after uncomplicated small incisional cataract surgery. Methods: Twenty-five consecutive patients with primary open-angle glaucoma and a functioning trabeculectomy bleb and who underwent uncomplicated phacoemulsification surgery were given subconjunctival injections of 5 mg 5-FU at 2, 4, and 12 weeks after cataract surgery (5-FU group). The mean postoperative intraocular pressure (IOP) over a 2-year period and the trabeculectomy survival rate, as determined by Kaplan-Meier survival analysis, was compared with a historical series of patients who had undergone cataract surgery in the presence of a filtering trabeculectomy bleb, but who had not received 5-FU (control group). Results: After a 2-year follow-up period, there was no significant difference in the mean IOP between the 5-FU (15.1 mm Hg SD 3.1) and control (15.3 mm Hg SD 3.3) groups (P = 0.67).
An IOP > 21 mm Hg at any time point after the first postoperative month after cataract surgery was found in 4.0% of cases in the 5-FU group and 16.7% of cases in the control group (P = 0.78). Using Kaplan-Meier survival analysis, the difference in the cumulative probability of survival between the 5-FU and control groups was not significant (P = 0.30). Conclusion: Cataract surgery is a significant risk factor for trabeculectomy bleb failure. The use of subconjunctival 5-FU injections at 2, 4, and 12 weeks after cataract surgery in elderly white patients with primary open-angle glaucoma does not reduce the risk of trabeculectomy failure. abstract_id: PUBMED:2696671 Effect of postoperative subconjunctival 5-fluorouracil injections on the surgical outcome of trabeculectomy in the Japanese. A controlled study was carried out to evaluate the effect of postoperative subconjunctival 5-fluorouracil (5-FU) injections on the surgical outcomes of trabeculectomy in the Japanese (a total of 196 eyes in 157 patients). The eyes that had undergone trabeculectomy with postoperative 5-FU (5-FU group) included 36 eyes with primary open-angle glaucoma (POAG) and 17 with secondary glaucoma (SG) undergoing their first or second trabeculectomy. There were also 34 eyes with refractory glaucoma. The eyes that had had trabeculectomy without postoperative 5-FU (control group) included 46 POAG and 31 SG eyes undergoing their first or second trabeculectomy and 24 refractory glaucoma eyes. The surgical techniques and postoperative care were virtually identical between the two groups, except that the control group did not receive 5-FU. The results were analyzed by means of a life table method and a postoperative intraocular pressure (IOP) level equal to or less than 20 mmHg was adopted as the criterion for successful IOP control. In the 5-FU group, the success probability (%) at the 3-year follow-up was 93.9 +/- 4.2 (SE) for POAG eyes, 93.8 +/- 6.1 for SG eyes, and 86.7 +/- 5.6 for refractory glaucoma eyes. In the control group, it was 55.0 +/- 7.9, 37.2 +/- 13.5, and 16.1 +/- 7.4, respectively. The difference in success probability between the 5-FU and control groups was highly significant (P less than 0.001 or 0.01). In the POAG and SG eyes, the mean postoperative IOP was significantly lower in the 5-FU group than in the control group. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:1574291 Subconjunctival injection of 5-fluorouracil following trabeculectomy for congenital and infantile glaucoma. Trabeculectomy and subsequent subconjunctival injections of 5-fluorouracil (5-FU) were performed in four eyes (two children) with congenital glaucoma. Each of these eyes had previously undergone either goniotomy, trabeculotomy, or both; these procedures, however, had failed to control intraocular pressure (IOP) and progressive optic nerve damage. Sixteen and a half months (+/- 1.5 months) after the trabeculectomy and 5-FU treatments, the IOP in these eyes was in the low teens and there was no evidence of further optic-nerve or visual-field deterioration. Although trabeculectomy has been shown to be unsuccessful in managing congenital glaucoma, when it is done with the adjunct of subconjunctival injections of 5-FU, it may be advisable in these cases after previous surgery has failed. abstract_id: PUBMED:9340415 Use of 5-fluorouracil shortly after trabeculectomy. Purpose: To present our own results of treatment with 5-FU in cases with high intraocular pressure and increased cicatrization shortly after trabeculectomy.
Material And Methods: Subconjunctival injections of 5-FU were applied in 12 patients in whom, immediately after trabeculectomy, intraocular pressure was above 22 mmHg and conjunctiva on the filtering bleb was thickened. Results: Normalization of the intraocular pressure and good function of the filtering bleb was achieved in 83.3% of patients. abstract_id: PUBMED:1335565 Toxic effects of 5-fluorouracil on fibroblasts following trabeculectomy. The inhibitory effect of 5-fluorouracil (5-FU) on fibroblast proliferation is well established. In addition, toxic effects of 5-FU on existing fibroblasts, in rabbits and in vitro, were demonstrated. We examined human subconjunctival scar tissue which was removed during Molteno tube implantation. Surgery was performed 9 weeks after filtering surgery with 5-FU that resulted in bleb scarring. In the tissue, intracytoplasmic vacuoles were detected in some myofibroblasts, with no visible collagen in their vicinity. This presumed toxic effect of 5-FU may be one explanation for the adequacy of fewer than twice 5-FU injections daily following filtering surgery, and for less than 14 days, as originally recommended for inhibiting bleb scarring. abstract_id: PUBMED:20973895 Combined trabeculotomy-trabeculectomy augmented with 5-fluorouracil in paediatric glaucoma. Background: To describe our experience of combined trabeculotomy-trabeculectomy in paediatric glaucomas with a special emphasis on the use of 5-fluorouracil and releasable sutures. Design: Retrospective review carried out at Manchester Royal Eye Hospital, UK, a tertiary referral centre. Participants: Twenty-nine eyes of 21 consecutive patients with congenital glaucoma undergoing combined trabeculotomy-trabeculectomy augmented with 5-fluorouracil. Methods: 5-Fluorouracil augmented combined trabeculotomy-trabeculectomy was carried out with intense postoperative management and suture adjustment of releasable sutures within the first 3 weeks after surgery. Peribleb 5-fluorouracil injections were given repeatedly if there were signs of aggressive bleb scarring. Main Outcome Measures: Absolute success was defined as intraocular pressure of 21 mmHg or less, clear cornea and absence of progressive glaucomatous optic disc changes at last follow up, whereas qualified success was defined as these endpoints with anti-glaucoma medication. Results: Absolute success was achieved in 19 out of 29 eyes (65.5%), and a further 4 (13.8%) had qualified success. There was no difference in the surgical outcomes of primary infantile glaucoma and secondary causes of paediatric glaucoma such as anterior segment dysgenesis. Combined trabeculotomy-trabeculectomy had a significantly greater success rate as a secondary procedure rather than as a primary procedure. Conclusion: 5-Fluorouracil-enhanced combined trabeculotomy-trabeculectomy with releasable sutures appears to be an effective procedure for congenital glaucoma refractory to goniotomy. It is less effective as a primary procedure when severe corneal haze prevents goniotomy in newborn congenital glaucoma. Intense postoperative monitoring including active bleb manipulation with needling and 5-fluorouracil injections may increase the success of the procedure. Answer: Injections of 5-fluorouracil (5-FU) after trabeculectomy do not appear to have significant toxic effects on the anterior segment. 
A study conducted on New Zealand rabbits found that repeated injections of 5-FU at the trabeculectomy site did not provoke an intraocular inflammatory response and had minimal effect on extraocular structures. Although changes in corneal epithelium and endothelium were detectable by confocal microscopy, suggesting a small toxic effect, these changes were confirmed by scanning electron microscopy (SEM) to be minor and did not cause additional cumulative toxic effects in the anterior segment (PUBMED:15969640). Furthermore, a comparative study measuring aqueous flare in glaucomatous eyes after trabeculectomy with either 5-FU or mitomycin C (MMC) as adjunctive therapy found that intraoperative MMC appears to be no more harmful to the blood-aqueous barrier than 5-FU (PUBMED:8446328). In clinical practice, 5-FU injections have been used to reduce the risk of trabeculectomy bleb failure after cataract surgery, although a study found that the use of subconjunctival 5-FU injections did not reduce the risk of trabeculectomy failure (PUBMED:20148657). Additionally, 5-FU has been used postoperatively to improve the surgical outcomes of trabeculectomy in Japanese patients, significantly increasing the success probability of intraocular pressure control (PUBMED:2696671). In summary, while there may be minor toxic effects on corneal cells, the overall evidence suggests that 5-FU injections after trabeculectomy pose minimal risk to intraocular structures and do not result in significant toxic effects on the anterior segment.
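Note on methods: the bleb-failure comparison cited above (PUBMED:20148657) relies on Kaplan-Meier survival analysis, and the Japanese series (PUBMED:2696671) uses a closely related life-table method. The Python sketch below shows how such a comparison of trabeculectomy bleb survival between a 5-FU group and a control group is typically set up with the lifelines package; it is a minimal illustration, and the follow-up times, event flags, and group labels are assumptions rather than data from either study.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months until bleb failure (failed = 1) or censoring (failed = 0)
df = pd.DataFrame({
    "months": [24, 24, 18, 24, 12, 24, 24, 6, 20, 24],
    "failed": [0, 0, 1, 0, 1, 0, 0, 1, 1, 0],
    "group":  ["5FU"] * 5 + ["control"] * 5,
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["failed"], label=name)
    print(name, "estimated bleb survival at 24 months:", kmf.predict(24))

fu, ctrl = df[df.group == "5FU"], df[df.group == "control"]
result = logrank_test(fu["months"], ctrl["months"],
                      event_observed_A=fu["failed"], event_observed_B=ctrl["failed"])
print("log-rank p-value:", result.p_value)  # the group comparison reported as P values in the cited abstracts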