Instruction: Is there a bone-preserving bone remodelling in short-stem prosthesis? Abstracts: abstract_id: PUBMED:29100185 Numerical evaluation of bone remodelling and adaptation considering different hip prosthesis designs. Background: The change in mechanical properties of femoral cortical bone tissue surrounding the stem of the hip endoprosthesis is one of the causes of implant instability. We present an analysis used to determine the best conditions for long-term functioning of the bone-implant system, which will lead to improvement of treatment results. Methods: In the present paper, a finite element method coupled with a bone remodelling model is used to evaluate how different three-dimensional prosthesis models influence distribution of the density of bone tissue. The remodelling process begins after the density field is obtained from a computed tomography scan. Then, an isotropic Stanford model is employed to solve the bone remodelling process and verify bone tissue adaptation in relation to different prosthesis models. Findings: The study results show that the long-stem models tend not to transmit loads to proximal regions of bone, which causes the stress-shielding effect. Short stems or application in the calcar region provide a favourable environment for transfer of loads to the proximal region, which allows for maintenance of bone density and, in some cases, for a positive variation, which causes absence of the aseptic loosening of an implant. In the case of hip resurfacing, bone mineral density changes slightly and is closest to an intact femur. Interpretation: Installation of an implant modifies density distribution and stress field in the bone. Thus, bone tissue is stimulated in a different way than before total hip replacement, which evidences Wolff's law, according to which bone tissue adapts itself to the loads imposed on it. The results suggest that potential stress shielding in the proximal femur and cortical hypertrophy in the distal femur may, in part, be reduced through the use of shorter stems, instead of long ones, provided stem fixation is adequate. abstract_id: PUBMED:20714981 Is there a bone-preserving bone remodelling in short-stem prosthesis? DEXA analysis with the Nanos total hip arthroplasty Background: It has been suggested that the use of a short-stem prosthesis could conserve proximal bone by proximal load transfer. Proximal stress shielding should be reduced, a phenomenon that has been associated with bone resorption around traditional stems. Bone remodelling of a metaphyseal fixed stem (Nanos, Smith & Nephew Int.) was analysed by dual-energy x-ray absorptiometry. Patients And Method: This study included 36 patients undergoing total hip replacement using the Nanos short stem in comparison to 36 patients operated on with a traditional long-stemmed femoral stem (Alloclassic). In all cases a threaded cup was inserted. The two groups did not differ in regard to BMI or in regard to bone quality (BMD). The patients with the short-stem prosthesis were on average slightly younger (54.2 years [range: 29 to 75]) than the patients with the long-stem prosthesis (61.1 years [range: 39 to 71]). A prospective clinical analysis was done by the Harris hip score (HHS) and the Sutherland score to evaluate the social quality of life. With a minimum follow-up of 12 months in all cases, radiological changes in regard to stem subsidence, periprosthetic osteolysis or linear radiolucencies were analysed.
The changes of periprosthetic bone density were examined with DEXA in all patients 3 and 12 months postoperatively. Results: No patients required reoperation because of loosening or subsidence of the short-stem prosthesis. The HHS improved from a mean of 43.1 (range: 9 to 51) to 96.5 points (range: 79 to 100) in the short-stem group and to 91.3 points (range: 61 to 100) in the group of patients with long-stemmed femoral component. Radiographic follow-up revealed no evidence of component loosening or migration of the short-stem. Along the greater trochanter an osteolysis of the bone structure was found in two cases. A decrease of the proximal periprosthetic bone density (Gruen zone I, -6.4%) and in zone VII (-7.2%) were measured. An increase of the BMD in the lateral inferior region (Gruen zone II, +9.7%) superior to the polished tip of the short stem was observed over a period of one year after implantation. At the polished tip of the prosthesis a significant change of bone density in zone III (+1.03%) and in zone V (+0.7%) could not be observed. Conclusion: The desired proximal load transfer of a short-stemmed implant in the metaphyseal region of the proximal femur could not be reached with this device. On the basis of the excellent clinical results of the patients operated with the Nanos short-stem prosthesis we conclude that the component induces bone ingrowth in the lateral/distal region of the proximal femur. abstract_id: PUBMED:33132094 Mid-term gender-specific differences in periprosthetic bone remodelling after implantation of a curved bone-preserving hip stem. Background: The implant-specific periprosthetic bone remodelling in the proximal femur is considered to be an important factor influencing the long-term survival of cementless hip stems. Particularly data of gender-specific differences regarding bone-preserving stems are very rare in literature and mainly limited to short-term investigations. Therefore, we investigated at mid-term one arm of a prospective randomised study to evaluate if there is an influence of gender on implant-specific stress shielding after implantation of a curved bone preserving hip stem (Fitmore) 5 years postoperatively. Hypothesis: We hypothesised there will be no gender-specific differences in periprosthetic bone remodelling. Patients And Methods: A total of 20 female and 37 male patients underwent total hip arthroplasty using the Fitmore stem. Clinical, radiological as well as osteodensitometric examinations were performed preoperatively, 7 days and 3, 12 and 60 months postoperatively. Clinical data collection included the Western Ontario and McMaster Universities Arthritis Index (WOMAC) and the Harris Hip Score (HHS). Periprosthetic bone mineral density (BMD) was measured using Dual Energy X-ray Absorptiometry (DXA) and the periprosthetic bone was divided into 7 regions of interest (ROI) for analysis. The results at 3, 12 and 60 months were compared with the first postoperative measurement after 7 days to obtain a percentage change. Results: Periprosthetic BMD showed a decrease in all 7 ROIs for both groups 5 years postoperatively referred to the baseline value, except ROI 3 (0.8%, p=0.761), representing the distal lateral part of the stem, and ROI 5 (0.3%, p=0.688), representing the distal medial part of the stem in the male cohort. Significant gender differences were found in ROI 1 (-16.0% vs. -3.5%, p=0.016) and ROI 6 (-9.9% vs. -2.1%, p=0.04) in favour of the male patients. 
Clinical results showed no significant gender differences 5 years postoperatively with regard to WOMAC (mean 0.4 (±0.8, 0-3.3) in women vs. 0.3 (±0.8, 0-4.2) in men, p=0.76) and HHS (mean 93.0 (±9.7, 66.0-100.0) in women vs. 93.9 (±11.5, 53.0-100.0) in men, p=0.36). Conclusion: Proximal stress shielding was observed independent of gender 5 years postoperatively. However, there was a significantly lower bone loss proximal lateral and medial below the calcar in male patients, indicating a more physiological load transfer. [ClinicalTrials.gov identifier: NCT03147131 (Study ID D.3067-244/10). Registered 10 May 2017 - retrospectively registered, https://clinicaltrials.gov/ct2/show/NCT03147131?term=Bieger&draw=2&rank=1] LEVEL OF EVIDENCE: IV; prospective study without control group. abstract_id: PUBMED:34435238 Hydroxyapatite-coated compaction short stem represents a characteristic pattern of peri-prosthetic bone remodelling after total hip arthroplasty. Purpose: We aimed to investigate the differences in peri-prosthetic bone remodelling between the full hydroxyapatite (HA)-collared compaction short stem and the short tapered-wedge stem. Methods: This retrospective cohort study enrolled 159 consecutive patients (159 joints) undergoing total hip arthroplasty (THA) using the full HA compaction short (n = 64) and short tapered-wedge (n = 95) stems. Body mass index (BMI), peri-prosthetic bone mineral density (BMD), and clinical factors, including the Japanese Orthopaedic Association score and the University of California Los Angeles (UCLA) activity score were assessed and compared. Results: Stem related complications were seen in three cases. Both groups showed similar peri-prosthetic BMD changes. Peri-prosthetic BMD was almost maintained in the distal femur and Gruen zone 6 with both type of stems, but significant BMD loss was found in zones 1 and 7 in both groups and in zone 2 of the full HA compaction stem group. No significant correlations were found between the proximal femoral BMD changes and the age, BMI, and UCLA score in both the full HA compaction and tapered-wedge stem groups. Femoral bone shape affected the peri-prosthetic BMD changes in the tapered-wedge stem group but not in the full HA compaction group. The stem collar of the full HA compaction stem did not affect peri-prosthetic BMD, but unique bone remodelling in the calcar region was observed in 27.6% cases. A significant difference in the peri-prosthetic BMD changes at Gruen zone 2 was found in patients with or without thigh pain. Conclusion: Peri-prosthetic bone remodelling remained unaffected by clinical and radiographic factors after THA with the new short full HA compaction stem. Therefore, this new stem may be useful in a variety of cases. abstract_id: PUBMED:26560021 Radiologic bone adaptations on a cementless short-stem shoulder prosthesis. Background: This study evaluated the timing and location of radiologic bone adaptations related to shoulder arthroplasty using a single type of cementless short-stem shoulder prosthesis. Methods: Uncemented short-stem shoulder arthroplasties were evaluated in 52 patients at a mean age of 71.6 years (range, 58.1-86.6) with a minimum clinical and radiologic follow-up of 2 years (mean, 32 months; range, 23-52 months). All radiographs were analyzed for inclination of the stem, filling ratio of metaphysis and diaphysis, bone remodeling around the stem, radiolucent lines around the glenoid, and subsidence of the humeral stem. 
Finally, the radiographic and clinical findings were compared between patients with low and high bone adaptations. Results: At final follow-up, no loosening, subsidence, or osteolysis was seen. High bone adaptations were present in 27 patients (51.9%). Cortical thinning and osteopenia in the medial cortex (82.7%) and spot welds in the lateral cortex (78.6%) were the most frequently occurring bone adaptations. Patients with high bone adaptations had significantly higher metaphyseal (0.60 ± 0.05 vs. 0.56 ± 0.06; P = .024) and diaphyseal filling ratio (0.66 ± 0.04 vs. 0.61 ± 0.06; P = .019) at 2-year follow-up than patients with low bone adaptations. Clinical outcome was not influenced by the radiographic changes. Conclusion: The clinical and radiologic results of the short-stem shoulder arthroplasty are comparable to those with the third and fourth generations of standard stem arthroplasty. Higher filling ratios in the metaphysis and the diaphysis were significantly associated with the occurrence of high bone adaptations. abstract_id: PUBMED:32219497 Bone remodelling and integration of two different types of short stem: a dual-energy X-ray absorptiometry study. Purpose: Different kinds of bone-preserving hip stems have been created to ensure a more physiological distribution of forces on the femur. The aim of this research is to evaluate the density reaction of the periprosthetic bone with different prosthetic implant designs on dual-energy X-ray absorptiometry (DXA). Methods: This is a prospective, single-centre study assessing bone remodelling changes after implantation of two short hip stems, dividing the patients into two groups according to the implant used: 20 in group A, Metha (B-Braun), and 16 in group B, SMF (Smith and Nephew). All participants had a pre-operative and a post-operative (24 months) DXA evaluating the changes in bone mineral density (BMD) that occurred in the five Gruen zones. Results: Compared to the pre-operative value, differences in BMD percentage were statistically significant only in ROI 4 (p < 0.05), with an increase in both groups (9 and 18%, respectively). The average increase in BMD was 7.3% and 7.2% in the two groups. Conclusion: According to our study, both stems have proved able to provide good load distribution across the metaphyseal region favouring proper system integration. Nonetheless, further studies with longer follow-up and larger populations are certainly needed to strengthen these conclusions. abstract_id: PUBMED:32538203 Is diaphyseal fixation of short neck-retaining stem prostheses related to the size of the implant? Introduction: Short-stem hip prostheses present variable proximal femoral bone radiological findings. The aims of this study were to analyse, in our patients with implanted collum femoris-preserving (CFP) stems, cancellous bone remodelling, cortical distal hypertrophy and pedestal formation, and the relationship between those radiological changes that suggest distal fixation with the size of the stem. Methods: From October 2001 to December 2012 a total of 199 consecutive primary total hip arthroplasties in 180 patients were performed at our department using the CFP stem and followed up for a minimum of 5 years until December 2017. Results: Stress shielding was present in 74% of oversized stem cases, but in normal or undersized stems, stress shielding was present in 8.5%. Cortical hypertrophy was observed in 49% of the oversized stems and in 6% of the normal or undersized ones.
Finally, differences in pedestal formation were not statistically significant (p = 0.089), being present in 16.3% of the oversized stems and in 6% of normal or undersized ones. Conclusions: Oversized stems cause more stress shielding and cortical hypertrophy in the distal part of the stem, which indicates distal fixation with bigger stem sizes. abstract_id: PUBMED:29178044 Periprosthetic bone remodelling of short-stem total hip arthroplasty: a systematic review. Purpose: Short-stem hip arthroplasty (SHA) was designed to preserve bone stock and provide an improved load transfer. To gain more evidence regarding the load transfer, this review analysed the periprosthetic bone remodelling of SHA in comparison to standard hip arthroplasty (THA). Methods: PubMed and ScienceDirect were screened to extract dual-energy X-ray absorptiometry (DXA) studies evaluating the periprosthetic bone remodelling of SHA and two proven THA designs. From the studies included, the postoperative change in periprosthetic bone mineral density (BMD) after one year and the trend over two years was determined. Results: Fifteen studies with four SHA (CFP, Metha, Nanos, Fitmore) and two THA (CLS and Bicontact) designs were included. All SHA and THA stems revealed an initial decrease at the calcar and major trochanter (Gruen 1 and 7), with the Metha, Nanos and Fitmore showing a smaller and more balanced remodelling compared to THA. The pattern after one year and the trend over two years argue for a metaphyseal anchorage of the Metha and Nanos, whereas the Fitmore and CFP seem to anchor meta-diaphyseally. Clearly different patterns of bone remodelling were observed between all four SHAs. Conclusions: Periprosthetic bone remodelling is also present in SHA, with the main bone reduction observed proximally. However, certain SHA stems show a more balanced remodelling compared to THA, arguing for a favourable load transfer. Also, the femoral length where bone remodelling occurs is clearly shorter in SHA. As distinctively different patterns between the SHA designs were observed, they should not be judged as a single implant group. abstract_id: PUBMED:28784121 Bone preserving level of osteotomy in short-stem total hip arthroplasty does not influence stress shielding dimensions - a comparing finite elements analysis. Background: The main objective of every new development in total hip arthroplasty (THA) is the longest possible survival of the implant. Periprosthetic stress shielding is a scientifically proven phenomenon which leads to inadvertent bone loss. So far, many studies have analysed whether implanting different hip stem prostheses results in significant preservation of bone stock. The aim of this preclinical study was to investigate design-dependent differences of the stress shielding effect after implantation of a selection of short-stem THA-prostheses that are currently available. Methods: Based on computerised tomography (CT), a finite elements (FE) model was generated and a virtual THA was performed with different stem designs of the implant. Stems were chosen by osteotomy level at the femoral neck (collum, partial collum, trochanter sparing, trochanter harming). Analyses were performed with previously validated FE models to identify changes in the strain energy density (SED). Results: In the trochanteric region, only the collum-type stem demonstrated a biomechanical behaviour similar to the native femur.
In contrast, no difference in biomechanical behaviour was found between partial collum, trochanter harming and trochanter sparing models. All of the short-stem prostheses showed lower stress shielding than a standard stem. Conclusion: Based on the results of this study, we cannot confirm that the design of current short stem THA-implants leads to a different stress shielding effect with regard to the level of osteotomy. Somewhat unexpectedly, we found bone stock protection in metadiaphyseal bone when simulating a more distal osteotomy. Further clinical and biomechanical research including long-term results is needed to understand the influence of short-stem THA on bone remodelling and to find the optimal stem design for a reduction of the stress shielding effect. abstract_id: PUBMED:21849737 Initial stability of a new uncemented short-stem prosthesis, Spiron®, in dog bone. Background: Use of the proximal part of the femur in total hip arthroplasty enables preservation of the distal femur for later revisions. To use this advantage, different types of short-stem prosthesis have been developed in recent years. Although cementless hip arthroplasty is not common in the treatment of canine osteoarthritis, the use of cementless short-stems might be an alternative therapy. The new cementless short-stem prosthesis called Spiron® is self-tapping, and is constructed with a conical shape with threads. We measured the relative motion in the bone/prosthesis interface with specified loads in the femora of dogs to investigate two aspects: the primary stability of two systems of uncemented prosthesis with different principles of anchoring, and the theoretical use of the Spiron® in dog bone. We measured the cyclic behaviour (i.e., reversible, elastic), subsidence (i.e., irreversible, plastic, migration) and maximal applied load. Methods: Twenty-four pairs of fresh femur bones from adult German shepherd dogs were used. After measuring the total bone mineral density (TBMD), 16 bones were used in each of the short-stem prosthesis group (group A), the Zweymuller prosthesis group (group B), and the no-prosthesis control group (group C). Micromotion between bone and prostheses was measured for 16,200 N axial load steps, beginning with 200 N and increasing to 3000 N (1600 cycles/femur). Simple analysis of variance and non-parametric tests were used to compare the groups. Results: The Spiron prosthesis had significantly less motion in the bone/prosthesis interface compared with the Zweymuller prosthesis. Conclusions: The new principle of anchoring of the Spiron short-stem prosthesis may provide higher primary stability compared with conventional techniques. The findings of this study support the assumption that the use of the Spiron prosthesis to treat osteoarthritis in the dog is feasible. Answer: Yes, there is evidence of bone-preserving bone remodeling in short-stem prostheses. Studies have suggested that short-stem prostheses can conserve proximal bone by transferring loads proximally, which should reduce proximal stress shielding, a phenomenon associated with bone resorption around traditional long stems (PUBMED:20714981). The use of short stems, such as the Nanos total hip arthroplasty, has been shown to induce bone ingrowth in the lateral/distal region of the proximal femur, although the desired proximal load transfer in the metaphyseal region could not be reached with this device (PUBMED:20714981).
Additionally, other research has indicated that short-stem designs, which aim to preserve bone stock and provide improved load transfer, do show periprosthetic bone remodeling with the main bone reduction observed proximally. However, certain short-stem hip arthroplasty (SHA) stems show a more balanced remodeling compared to traditional hip arthroplasty (THA), suggesting a favorable load transfer (PUBMED:29178044). Moreover, the femoral length where bone remodeling occurs is clearly shorter in SHA compared to THA, and distinctively different patterns between SHA designs were observed, indicating that they should not be judged as a single implant group (PUBMED:29178044). Finite element analysis has also been used to evaluate how different hip prosthesis designs influence the distribution of bone tissue density, with results suggesting that shorter stems may reduce potential stress shielding in the proximal femur and cortical hypertrophy in the distal femur, provided stem fixation is adequate (PUBMED:29100185). In summary, short-stem prostheses are associated with bone-preserving remodeling, although the extent and pattern of this remodeling can vary depending on the specific design of the prosthesis.
Instruction: Does pregnancy increase cardiac risk for LQT1 patients with the KCNQ1-A341V mutation? Abstracts: abstract_id: PUBMED:17010804 Does pregnancy increase cardiac risk for LQT1 patients with the KCNQ1-A341V mutation? Objectives: The purpose of this study was to assess the pregnancy-related cardiovascular risk in LQT1 patients. Background: Only 1 study addressed this issue in genotyped patients and reported that the highest risk is for LQT2 patients. Methods: This case-control study, performed in a cohort of patients from 22 families affected by LQT1 and all sharing the common KCNQ1-A341V mutation, involved 36 mutation carriers and 24 of their unaffected sisters for a total of 182 pregnancies. Results: There were 3 (2.6%) cardiac events (2 cardiac arrests) in the 115 LQT1 pregnancies. Because they occurred only among the 27 mothers with previous symptoms, all off-therapy, the risk for symptomatic patients is 11%, but decreases to 0 in symptomatic patients treated with beta-blockers. Carriers and control subjects did not differ for the incidence of miscarriage (10% vs. 15%). Cesarean sections (C-sections), elective or owing to fetal distress, were performed more often in carriers than in non-carriers (27% vs. 14%). Beta-blocker therapy did not influence the prevalence of fetal distress. Among the infants born to carriers, all those with fetal distress were carriers of the A341V mutation (10 of 10, 100%). Among the offspring of the carriers, 48 of 92 (52%) were mutation carriers, and of those, 15% died suddenly at age 14 +/- 6 years. Conclusions: Women affected by the common KCNQ1-A341V mutation are at low risk for cardiac events during pregnancy and without excess risk of miscarriage; their infants delivered by C-section because of fetal distress are extremely likely to also be mutation carriers. Beta-blockers remain recommended. These conclusions likely apply to most LQT1 patients. abstract_id: PUBMED:15851119 Identification of a common genetic substrate underlying postpartum cardiac events in congenital long QT syndrome. Objectives: The aim of this study was to elucidate the genetic basis for long QT syndrome (LQTS) in patients with a personal or family history of postpartum cardiac events. Background: The postpartum period is a time of increased arrhythmogenic susceptibility in women with LQTS. Methods: Between August 1997 and May 2003, 388 unrelated patients (260 females, average age at diagnosis, 23 years, and average QTc, 482 ms) were referred to Mayo Clinic's Sudden Death Genomics Laboratory for LQTS genetic testing. Comprehensive mutational analysis of the 5 LQTS-causing channel genes was performed. The postpartum period was defined as the 20 weeks after delivery. Cardiac events included sudden cardiac death, aborted cardiac arrest, and syncope. The presence of a personal and/or family history of cardiac events during postpartum period was determined by review of the medical records and/or phone interviews and was blinded to the status of genetic testing. Results: Fourteen patients (3.6% of cohort) had personal (n = 4) and/or family history (n = 11) of cardiac events during the defined postpartum period. Thirteen of 14 patients (93%) possessed an LQT2 mutation and 1 had an LQT1 mutation. Postpartum cardiac events were found more commonly in patients with LQT2 (13 of 80, 16%) than in patients with LQT1 (1 of 103, <1%, P = .0001). Conclusions: There is a relatively gene-specific molecular basis underlying cardiac events during the postpartum period in LQTS. 
Along with previous gene-specific associations involving swimming and LQT1 as well as auditory triggers and LQT2, this association between postpartum cardiac events and LQT2 can facilitate strategic genotyping. abstract_id: PUBMED:26019114 Third trimester fetal heart rate predicts phenotype and mutation burden in the type 1 long QT syndrome. Background: Early diagnosis and risk stratification is of clinical importance in the long QT syndrome (LQTS), however, little genotype-specific data are available regarding fetal LQTS. We investigate third trimester fetal heart rate, routinely recorded within public maternal health care, as a possible marker for LQT1 genotype and phenotype. Methods And Results: This retrospective study includes 184 fetuses from 2 LQT1 founder populations segregating p.Y111C and p.R518X (74 noncarriers and 110 KCNQ1 mutation carriers, whereof 13 double mutation carriers). Pedigree-based measured genotype analysis revealed significant associations between fetal heart rate, genotype, and phenotype; mean third trimester prelabor fetal heart rates obtained from obstetric records (gestational week 29-41) were lower per added mutation (no mutation, 143±5 beats per minute; single mutation, 134±8 beats per minute; double mutations, 111±6 beats per minute; P<0.0001), and lower in symptomatic versus asymptomatic mutation carriers (122±10 versus 137±9 beats per minute; P<0.0001). Strong correlations between fetal heart rate and neonatal heart rate (r=0.700; P<0.001), and postnatal QTc (r=-0.762; P<0.001) were found. In a multivariable model, fetal genotype explained the majority of variance in fetal heart rate (-10 beats per minute per added mutation; P<1.0×10(-23)). Arrhythmia symptoms and intrauterine β-blocker exposure each predicted -7 beats per minute, P<0.0001. Conclusions: In this study including 184 fetuses from 2 LQT1 founder populations, third trimester fetal heart rate discriminated between fetal genotypes and correlated with severity of postnatal cardiac phenotype. This finding strengthens the role of fetal heart rate in the early detection and risk stratification of LQTS, particularly for fetuses with double mutations, at high risk of early life-threatening arrhythmias. abstract_id: PUBMED:16599043 Case report congenital LQTS--an electrocardiographic and genotype correlation. The congenital Long QT Syndrome (LQTS) is characterized by abnormally prolonged ventricular repolarization due to inherited defect in cardiac sodium and potassium channels, which predisposes the patients to syncope, ventricular arrhythmias, and sudden cardiac death. Early diagnosis and preventive treatment are instrumental to prevent sudden cardiac death in patients with the congenital LQTS. The diagnostic criteria for congenital LQTS are based on certain electrocardiographic findings and clinical history. Recently genotype specific electrocardiographic pattern in the congenital LQTS has also been described. Recent studies suggest feasibility of genotype specific treatment of LQTS and in near future, mutation specific treatment will probably become a novel approach to this potentially fatal syndrome. We describe two cases that fulfilled the electrocardiographic and historical diagnostic criteria with morphology on electrocardiogram (ECG) suggestive of LQT1 genotype. 
abstract_id: PUBMED:33611903 Analyses of triggers for recurrent cardiac events in 38 patients with symptomatic long QT syndrome Objective: To evaluate the main triggers of recurrent cardiac events in patients with symptomatic congenital long QT syndrome (cLQTS). Methods: In this retrospective case analysis study, clinical characteristics were reviewed for the 38 of 66 symptomatic cLQTS patients who had recurrent cardiac events after the first visit. General clinical data such as gender, age, clinical presentation, family history and treatment were collected, and auxiliary examination results such as electrocardiograms and genetic testing were analyzed. LQTS-related cardiac events were defined as arrhythmogenic syncope, implantable cardioverter defibrillator (ICD) shock, inappropriate ICD shock, aborted cardiac arrest, sudden cardiac death or ventricular tachycardia. Results: A total of 38 patients with recurrent symptoms were enrolled in this study, including 30 females (79%) and 14 children (37%). The average age of onset was (15.6±14.6) years, and the recurrence time was (3.6±3.5) years. Subtype analysis showed that there were 11 cases (29%) of LQT1 (including 2 cases of Jervell and Lange-Nielsen syndrome), 19 cases (50%) of LQT2, 5 cases (13%) of LQT3 and 3 cases (8%) of other rare subtypes (1 LQT5, 1 LQT7 and 1 LQT11) in this patient cohort. LQT1 patients experienced recurrent cardiac events due to drug withdrawal (6 (55%)), specific triggers (exercise and emotional excitement) (4 (36%)) and medication adjustment (1 (9%)). For LQT2 patients, main triggers for cardiac events were drug withdrawal (16 (84%)) and specific triggers (shock, sound stimulation, waking up) (6 (32%)). One patient (5%) had recurrent syncope after pregnancy. One patient (20%) had inappropriate ICD shock. For LQT3 patients, 4 (80%) patients developed syncope during resting state, and 1 (20%) developed ventricular tachycardia during exercise test. One LQT5 patient experienced syncope and ICD shock under specific triggers (emotional excitement). One LQT11 patient had repeated ICD shocks under specific inducement (fatigue). One LQT7 patient experienced inappropriate ICD shock. Left cardiac sympathetic denervation (LCSD) significantly alleviated the symptoms in 2 children with Jervell and Lange-Nielsen syndrome (JLNS) after ineffective β-blocker medication. Nadolol succeeded in eliminating cardiac events in one patient with LQT2 after ineffective metoprolol medication. Mexiletine significantly improved symptoms in 2 patients with LQT2 after ineffective β-blocker medication. Conclusions: Medication withdrawal is an important trigger of the recurrence of cardiac events among patients with symptomatic congenital long QT syndrome. abstract_id: PUBMED:16109388 De novo KCNQ1 mutation responsible for atrial fibrillation and short QT syndrome in utero. Objective: We describe a genetic basis for atrial fibrillation and short QT syndrome in utero. Heterologous expression of the mutant channel was used to define the physiological consequences of the mutation. Methods: A baby girl was born at 38 weeks after induction of delivery that was prompted by bradycardia and irregular rhythm. ECG revealed atrial fibrillation with slow ventricular response and short QT interval. Genetic analysis identified a de novo missense mutation in the potassium channel KCNQ1 (V141M).
To characterize the physiological consequences of the V141M mutation, Xenopus laevis oocytes were injected with cRNA encoding wild-type (wt) KCNQ1 or mutant V141M KCNQ1 subunits, with or without KCNE1. Results: Ionic currents were recorded using standard two-microelectrode voltage clamp techniques. In the absence of KCNE1, wtKCNQ1 and V141M KCNQ1 currents had similar biophysical properties. Coexpression of wtKCNQ1+KCNE1 subunits induced the typical slowly activating and voltage-dependent delayed rectifier K(+) current, I(Ks). In contrast, oocytes injected with cRNA encoding V141M KCNQ1+KCNE1 subunits exhibited an instantaneous and voltage-independent K(+)-selective current. Coexpression of V141M and wtKCNQ1 with KCNE1 induced a current with intermediate biophysical properties. Computer modeling showed that the mutation would shorten action potential duration of human ventricular myocytes and abolish pacemaker activity of the sinoatrial node. Conclusions: The description of a novel, de novo gain of function mutation in KCNQ1, responsible for atrial fibrillation and short QT syndrome in utero indicates that some of these cases may have a genetic basis and confirms a previous hypothesis that gain of function mutations in KCNQ1 channels can shorten the duration of ventricular and atrial action potentials. abstract_id: PUBMED:17349890 Long QT syndrome and pregnancy. Objectives: This study was designed to investigate the clinical course of women with long QT syndrome (LQTS) throughout their potential childbearing years. Background: Only limited data exist regarding the risks associated with pregnancy in women with LQTS. Methods: The risk of experiencing an adverse cardiac event, including syncope, aborted cardiac arrest, and sudden death, during and after pregnancy was analyzed for women who had their first birth from 1980 to 2003 (n = 391). Time-dependent Kaplan-Meier and Cox proportional hazard methods were used to evaluate the risk of cardiac events during different peripartum periods. Results: Compared with a time period before a woman's first conception, the pregnancy time was associated with a reduced risk of cardiac events (hazard ratio [HR] 0.28, 95% confidence interval [CI] 0.10 to 0.76, p = 0.01), whereas the 9-month postpartum time had an increased risk (HR 2.7, 95% CI 1.8 to 4.3, p < 0.001). After the 9-month postpartum period, the risk was similar to the period before the first conception (HR 0.91, 95% CI 0.55 to 1.5, p = 0.70). Genotype analysis (n = 153) showed that women with the LQT2 genotype were more likely to experience a cardiac event than women with the LQT1 or LQT3 genotype. The cardiac event risk during the high-risk postpartum period was reduced among women using beta-blocker therapy (HR 0.34, 95% CI 0.14 to 0.84, p = 0.02). Conclusions: Women with LQTS have a reduced risk for cardiac events during pregnancy, but an increased risk during the 9-month postpartum period, especially among women with the LQT2 genotype. Beta-blockers were associated with a reduction in cardiac events during the high-risk postpartum time period. abstract_id: PUBMED:26022593 Jervell and Lange-Nielsen syndrome with homozygous missense mutation of the KCNQ1 gene. Jervell and Lange-Nielsen syndrome (JLNS) is an autosomal recessive cardioauditory ion channel disorder characterized by congenital bilateral sensorineural deafness and long QT interval. JLNS is a ventricular repolarization abnormality and is caused by mutations in the KCNQ1 or KCNE1 gene. 
It has a high mortality rate in childhood due to ventricular tachyarrhythmias, episodes of torsade de pointes which may cause syncope or sudden cardiac death. Here, we present a 4.5-year-old female patient who had a history of syncope and congenital sensorineural deafness. She had a cochlear implant operation at 15 months of age and received an implantable cardioverter defibrillator (ICD) at 3 years of age because of recurrent syncope attacks. Five months after cochlear implant placement, she could say her first words and is now able to speak. With β-blocker therapy and ICD, she has remained syncope-free for a year. On the current admission, the family visited the genetics department to learn about the possibility of prenatal diagnosis of sensorineural deafness, as the mother was 9 weeks pregnant. A diagnosis of JLNS was established for the first time, and a homozygous missense mutation in the KCNQ1 gene (c.128 G>A, p.R243H) was detected. Heterozygous mutations of KCNQ1 were identified in both parents, thereby allowing future prenatal diagnoses. The family obtained prenatal diagnosis for the current pregnancy, and fetal KCNQ1 analysis revealed the same homozygous mutation. The pregnancy was terminated at the 12th week of gestation. The case presented here is the third molecularly confirmed Turkish JLNS case; it emphasizes the importance of timely genetic diagnosis, which allows appropriate genetic counseling and prenatal diagnosis, as well as proper management of the condition. abstract_id: PUBMED:28292826 Arrhythmia risk and β-blocker therapy in pregnant women with long QT syndrome. Background: Pregnancy is one of the biggest concerns for women with long QT syndrome (LQTS). Objectives: This study investigated pregnancy-related arrhythmic risk and the efficacy and safety of β-blocker therapy for lethal ventricular arrhythmias in pregnant women with LQTS (LQT-P) and their babies. Methods: 136 pregnancies in 76 LQT-P (29±5 years old; 22 LQT1, 36 LQT2, one LQT3, and 17 genotype-unknown) were enrolled. We retrospectively analysed their clinical and electrophysiological characteristics and pregnancy outcomes in the presence (BB group: n=42) or absence of β-blocker therapy (non-BB group: n=94). Results: All of the BB group had been diagnosed with LQTS with previous events, whereas 65% of the non-BB group had not been diagnosed at pregnancy. Pregnancy increased heart rate in the non-BB group; however, no significant difference was observed in QT and Tpeak-Tend intervals between the two groups. In the BB group, only two events occurred at postpartum, whereas 12 events occurred in the non-BB group during pregnancy (n=6) or postpartum period (n=6). The frequency of spontaneous abortion did not differ between the two groups. Fetal growth rate and proportion of infants with congenital malformation were similar between the two groups, but premature delivery and low birthweight infants were more common in those taking BB (OR 4.79, 95% CI 1.51 to 15.21 and OR 3.25, 95% CI 1.17 to 9.09, respectively). Conclusions: Early diagnosis and β-blocker therapy for high-risk patients with LQTS are important for prevention of cardiac events during pregnancy and the postpartum period, and β-blocker therapy may be tolerated for babies in LQT-P cases. abstract_id: PUBMED:23995044 Arrhythmia phenotype during fetal life suggests long-QT syndrome genotype: risk stratification of perinatal long-QT syndrome. 
Background: Fetal arrhythmias characteristic of long QT syndrome (LQTS) include torsades de pointes (TdP) and/or 2° atrioventricular block, but sinus bradycardia, defined as fetal heart rate<3% for gestational age, is most common. We hypothesized that prenatal rhythm phenotype might predict LQTS genotype and facilitate improved risk stratification and management. Method And Results: Records of subjects exhibiting fetal LQTS arrhythmias were reviewed. Fetal echocardiograms, neonatal ECG, and genetic testing were evaluated. We studied 43 subjects exhibiting fetal LQTS arrhythmias: TdP±2° atrioventricular block (group 1, n=7), isolated 2° atrioventricular block (group 2, n=4), and sinus bradycardia (group 3, n=32). Mutations in known LQTS genes were found in 95% of subjects tested. SCN5A mutations occurred in 71% of group 1, whereas 91% of subjects with KCNQ1 mutations were in group 3. Small numbers of subjects with KCNH2 mutations (n=4) were scattered in all 3 groups. Age at presentation did not differ among groups, and most subjects (n=42) were live-born with gestational ages of 37.5±2.8 weeks (mean±SD). However, those with TdP were typically delivered earlier. Prenatal treatment in group 1 terminated (n=2) or improved (n=4) TdP. The neonatal heart rate-corrected QT interval (mean±SE) of group 1 (664.7±24.9) was longer than neonatal heart rate-corrected QT interval in both group 2 (491.2±27.6; P=0.004) and group 3 (483.1±13.7; P<0.001). Despite medical and pacemaker therapy, postnatal cardiac arrest (n=4) or sudden death (n=1) was common among subjects with fetal/neonatal TdP. Conclusions: Rhythm phenotypes of fetal LQTS have genotype-suggestive features that, along with heart rate-corrected QT interval duration, may risk stratify perinatal management. Answer: Pregnancy does not appear to significantly increase cardiac risk for LQT1 patients with the KCNQ1-A341V mutation. A study involving 36 mutation carriers and 24 of their unaffected sisters, totaling 182 pregnancies, found that there were only 3 (2.6%) cardiac events (2 cardiac arrests) in the 115 LQT1 pregnancies. Importantly, these events occurred only among the 27 mothers who had previous symptoms and were not on therapy. For symptomatic patients off-therapy, the risk was 11%, but it decreased to 0 for symptomatic patients treated with beta-blockers. The study concluded that women affected by the common KCNQ1-A341V mutation are at low risk for cardiac events during pregnancy and do not have an excess risk of miscarriage. Moreover, their infants delivered by C-section due to fetal distress are extremely likely to also be mutation carriers. Beta-blockers are recommended and these conclusions likely apply to most LQT1 patients (PUBMED:17010804). In contrast, another study found that postpartum cardiac events were more commonly associated with LQT2 mutations than with LQT1, suggesting a gene-specific risk that is higher for LQT2 patients during the postpartum period (PUBMED:15851119). However, this does not directly address the risk during pregnancy for LQT1 patients with the specific KCNQ1-A341V mutation. Additionally, a study on long QT syndrome and pregnancy found that women with LQTS have a reduced risk for cardiac events during pregnancy but an increased risk during the 9-month postpartum period, especially among women with the LQT2 genotype. Beta-blockers were associated with a reduction in cardiac events during the high-risk postpartum time period (PUBMED:17349890). 
This further supports the notion that the postpartum period, rather than pregnancy itself, may present a higher risk for cardiac events in LQTS patients, and that beta-blocker therapy is beneficial. In summary, pregnancy does not significantly increase cardiac risk for LQT1 patients with the KCNQ1-A341V mutation, especially when treated with beta-blockers. The postpartum period, however, may present an increased risk, particularly for LQT2 patients.
Instruction: Does submucosal fibrosis affect the results of endoscopic submucosal dissection of early gastric tumors? Abstracts: abstract_id: PUBMED:25083097 Endoscopic submucosal tunnel dissection salvage technique for ulcerative early gastric cancer. Endoscopic submucosal dissection is an effective treatment modality for early gastric cancer (EGC), though the submucosal fibrosis found in ulcerative EGC is an obstacle to successful treatment. This report presents two cases of ulcerative EGC with severe fibrosis in two males, 73 and 80 years old. As endoscopic ultrasonography suggested that the EGCs had invaded the submucosal layer, the endoscopic submucosal tunnel dissection salvage technique was utilized for complete resection of the lesions. Although surgical gastrectomy was originally scheduled, the two patients had severe coronary heart disease, and surgeries were refused because of the risks associated with their heart conditions. The endoscopic submucosal tunnel dissection salvage technique procedures described in these cases were performed under conscious sedation, and were completed within 30 min. The complete en bloc resection of EGC using the endoscopic submucosal tunnel dissection salvage technique was possible with a free resection margin, and no other complications were noted during the procedure. This is the first known report concerning the use of the endoscopic submucosal tunnel dissection salvage technique for treatment of ulcerative EGC. We demonstrate that the endoscopic submucosal tunnel dissection salvage technique is a feasible method showing several advantages over endoscopic submucosal dissection for cases of EGC with fibrosis. abstract_id: PUBMED:31887810 Efficacy of Current Traction Techniques for Endoscopic Submucosal Dissection. This systematic review aimed to assess the efficacy of the current approach to tissue traction during the endoscopic submucosal dissection (ESD) of superficial esophageal cancer, early gastric cancer, and colorectal neoplasms. We performed a systematic electronic literature search of articles published in PubMed and selected comparative studies to investigate the treatment outcomes of traction-assisted versus conventional ESD. Using the keywords, we retrieved 381 articles, including five eligible articles on the esophagus, 13 on the stomach, and 12 on the colorectum. A total of seven randomized controlled trials and 23 retrospective studies were identified. Clip line traction and submucosal tunneling were effective in reducing the procedural time during esophageal ESD. The efficacy of traction methods in gastric ESD varied in terms of the devices and strategies used depending on the lesion location and degree of submucosal fibrosis. Several prospective and retrospective studies utilized traction devices without the need to reinsert the colonoscope. When pocket creation is included, the traction devices and methods effectively shorten the procedural time during colorectal ESD. Although the efficacy is dependent on the organ and tumor locations, several traction techniques have been demonstrated to be efficacious in facilitating ESD by maintaining satisfactory traction during dissection. abstract_id: PUBMED:30077787 AGA Institute Clinical Practice Update: Endoscopic Submucosal Dissection in the United States.
Endoscopic submucosal dissection (ESD) is an established endoscopic resection method in Asian countries and is increasingly practiced in Europe and by early adopters in the United States for removal of early cancers and large lesions from the luminal gastrointestinal tract. The intent of this expert review is to provide an update regarding the clinical practice of ESD with a particular focus on its use in the United States. This review is framed around the 16 best practice advice points agreed upon by the authors, which reflect landmark and recent published articles in this field. This expert review also reflects our experience as advanced endoscopists with extensive experience in performing and teaching others to perform ESD in the United States. Best Practice Advice 1: Endoscopic submucosal dissection should be recognized as a mature endoscopic technique that enables complete removal of lesions that are too large for en bloc endoscopic mucosal resection or are at increased risk of containing cancer. Best Practice Advice 2: The safety and feasibility of endoscopic submucosal dissection for early gastric cancer is well established. The absolute indications for curative endoscopic resection include moderately and well-differentiated, nonulcerated, mucosal lesions that are ≤2 cm in size. Best Practice Advice 3: Other relative (expanded) indications for gastric endoscopic submucosal dissection include moderately and well-differentiated superficial cancers that are >2 cm, lesions ≤3 cm with ulceration or that contain early submucosal invasion, and poorly differentiated superficial cancers ≤2 cm in size. The risk of lymph node metastasis when endoscopic submucosal dissection is performed for these indications is higher than when it is performed for absolute indications but remains acceptably low. Best Practice Advice 4: Endoscopic submucosal dissection may be considered in selected patients with Barrett's esophagus with the following features: large or bulky area of nodularity, lesions with a high likelihood of superficial submucosal invasion, recurrent dysplasia, endoscopic mucosal resection specimen showing invasive carcinoma with positive margins, equivocal preprocedural histology, and intramucosal carcinoma. Best Practice Advice 5: Endoscopic submucosal dissection is the primary modality for treatment of squamous cell dysplasia and cancer confined to the superficial esophageal mucosa. Any degree of submucosal invasion carries an increased risk of lymph node metastasis and alternative/additional therapy should be considered. Best Practice Advice 6: Duodenal endoscopic submucosal dissection is associated with an increased risk of intraprocedural perforation and delayed adverse events. Duodenal endoscopic submucosal dissection should be limited to endoscopists with extensive experience in performing endoscopic submucosal dissection in other locations. It is strongly suggested that endoscopists in the United States refrain from performing duodenal endoscopic submucosal dissection during the early phase of their endoscopic submucosal dissection practice. Best Practice Advice 7: All colorectal lesions should be evaluated for suitability for endoscopic resection. Accumulating evidence has shown that the majority of colorectal neoplasms without signs of deep submucosal invasion or advanced cancer can be treated by advanced endoscopic resection techniques.
Best Practice Advice 8: Colorectal neoplasms containing dysplasia confined to the mucosa have no risk for lymph node metastasis and endoscopic resection should be considered as the criterion standard. Best Practice Advice 9: Large (>2 cm) colorectal lesions frequently (>43%) require piecemeal removal when endoscopic mucosal resection is used, which is associated with increased (up to 20%) rates of recurrent neoplasia. Endoscopic submucosal dissection enables higher rates of en bloc resection and lower recurrence rates for these lesions. Patients with large complex colorectal polyps should be referred to a high-volume, specialized center for endoscopic removal by endoscopic mucosal resection or endoscopic submucosal dissection. Best Practice Advice 10: Endoscopic resection for colorectal lesions offers significant cost benefit compared with surgery, and case-based endoscopic submucosal dissection selection for high-risk lesions could offer cost savings. Best Practice Advice 11: Endoscopists in the United States embarking on performing endoscopic submucosal dissection should be familiar with currently available endoscopic tissue closure devices. Both clip closure and endoscopic suturing techniques have been shown to be effective in managing intraprocedural perforation. Complete closure of a post-endoscopic submucosal dissection site may be considered in certain circumstances based on patient factors, procedural factors, and the location of the lesion. Best Practice Advice 12: Careful coagulation of exposed blood vessels in the resection site may reduce the risk of delayed bleeding after endoscopic submucosal dissection. The use of low-voltage coagulation current is recommended for this technique. Best Practice Advice 13: Endoscopists should affix the endoscopic submucosal dissection specimen to a flat surface (eg, pin the specimen to cork board) and immerse it in formalin. An expert gastrointestinal pathologist should evaluate the specimen for margin involvement, degree of differentiation, presence or absence of lymphovascular invasion, depth of submucosal invasion (if present), and tumor budding. Best Practice Advice 14: Acquiring high-level competency in endoscopic submucosal dissection is achievable in the United States. Alternative educational models should be used in the United States because of the limited number of experts and the differing prevalence of gastrointestinal luminal diseases as compared with Asia. Best Practice Advice 15: The endoscopic submucosal dissection educational model most suited for the current environment in the United States is a stepwise approach consisting of didactic self-study, attending training courses with increasing levels of complexity, self-practice on animal models, and observation of live cases performed by experts. Endoscopists should perform their initial endoscopic submucosal dissections on patients with lesions that have well-established indications for endoscopic submucosal dissection and are of the lowest technical complexity. Best Practice Advice 16: Endoscopists in the United States who perform endoluminal resection should educate referring physicians to avoid practices that may induce submucosal fibrosis hampering future endoscopic mucosal resection or endoscopic submucosal dissection. These practices include tattooing in close proximity to or beneath a lesion for marking and partial snare resection of a portion of a lesion for histopathology. 
abstract_id: PUBMED:35686215 A successful case of endoscopic submucosal dissection using the water pressure method for early gastric cancer with severe fibrosis. Video 1: A successful case of endoscopic submucosal dissection using the water pressure method for early gastric cancer. abstract_id: PUBMED:37120374 Magnetic ring-assisted endoscopic submucosal dissection for gastric lesions with submucosal fibrosis: A preliminary study in beagle model. Background: During endoscopic submucosal dissection (ESD) for gastric lesions with fibrosis, appropriate traction could provide clear submucosal dissection visualization to improve safety and efficiency of procedures. Therefore, the aim of this study was to evaluate the feasibility of magnetic ring-assisted ESD (MRA-ESD) for gastric fibrotic lesions. Method: In eight healthy beagles, 2-3 mL of 50% glucose solution was injected into the submucosal layer of the stomach to induce gastric fibrotic lesions. A week after submucosal injection, two endoscopists at different levels performed MRA-ESD or standard ESD (S-ESD) for gastric simulated lesions, respectively. The magnetic traction system consisted of an external handheld magnet and an internal magnetic ring. The feasibility and procedure outcomes of the magnetic traction system were mainly evaluated. Results: Forty-eight gastric simulated lesions with ulceration were confirmed to have submucosal fibrosis formation by preoperative endoscopic ultrasonography. The magnetic traction system could be easily established, only took 1.57 min, and allowed excellent submucosal visualization. The total procedure time was significantly shorter in the MRA-ESD group than in the S-ESD group for both endoscopists (mean: 46.83 vs. 25.09 min, p < 0.001), and this difference was accentuated in the non-skilled endoscopist. There was a significant difference between the two groups in bleeding and perforation rates. Histological analysis showed that the resection depth was a little deeper around the fibrotic portion in the S-ESD group (p < 0.001). Conclusion: The magnetic ring-assisted ESD technique may be an effective and safe treatment for gastric fibrotic lesions and may shorten the endoscopic learning curve for non-skilled endoscopists. abstract_id: PUBMED:23189214 Endoscopic submucosal dissection and surgical treatment for gastrointestinal cancer. Endoscopic submucosal dissection (ESD) is widely used in Japan as a minimally invasive treatment for early gastric cancer. The application of ESD has expanded to the esophagus and colorectum. The indication criteria for endoscopic resection (ER) are established for each organ in Japan. Additional treatment, including surgery with lymph node dissection, is recommended when pathological examinations of resected specimens do not meet the criteria. Repeat ER for locally recurrent gastrointestinal tumors may be difficult because of submucosal fibrosis, and surgical resection is required in these cases. However, ESD enables complete resection in 82%-100% of locally recurrent tumors. Transanal endoscopic microsurgery (TEM) is a well-developed surgical procedure for the local excision of rectal tumors. ESD may be superior to TEM alone for superficial rectal tumors. Perforation is a major complication of ESD, and it is traditionally treated using salvage laparotomy. However, immediate endoscopic closure followed by adequate intensive treatment may avoid the need for surgical treatment for perforations that occur during ESD.
A second primary tumor in the remnant stomach after gastrectomy or a tumor in the reconstructed organ after esophageal resection has traditionally required surgical treatment because of the technical difficulty of ER. However, ESD enables complete resection in 74%-92% of these lesions. Trials of a combination of ESD and laparoscopic surgery for the resection of gastric submucosal tumors or the performance of sentinel lymph node biopsy after ESD have been reported, but the latter procedure requires a careful evaluation of its clinical feasibility. abstract_id: PUBMED:24368936 Repeat endoscopic submucosal dissection for recurrent gastric cancers after endoscopic submucosal dissection. Aim: To clarify the safety and efficacy of repeat endoscopic submucosal dissection (re-ESD) for locally recurrent gastric cancers after ESD. Methods: A retrospective evaluation was performed of the therapeutic efficacy, complications and follow-up results from ESD treatment for early gastric cancers in 521 consecutive patients with 616 lesions at St. Luke`s International Hospital between April 2004 and November 2012. In addition, tumor size, the size of resected specimens and the operation time were compared between re-ESD and initial ESD procedures. A flex knife was used as the primary surgical device and a hook knife was used in cases with severe fibrosis in the submucosal layer. Continuous variables were analyzed using the non-parametric Mann-Whitney U test and are expressed as medians (range). Categorical variables were analyzed using a Fisher's exact test and are reported as proportions. Statistical significance was defined as a P-value less than 0.05. Results: The number of cases in the re-ESD group and the initial ESD group were 5 and 611, respectively. The median time interval from the initial ESD to re-ESD was 14 (range, 4-44 mo). En bloc resection with free lateral and vertical margins was successfully performed in all re-ESD cases without any complications. No local or distant recurrence was observed during the median follow-up period of 48 (range, 11-56 mo). Tumor size was not significantly different between the re-ESD group and the initial ESD group (median 22 mm vs 11 mm, P = 0.09), although the size of resected specimens was significantly larger in the re-ESD group (median 47 mm vs 34 mm, P < 0.05). There was a non-significant increase observed in re-ESD operation time compared to initial ESD (median 202 min vs 67 min, respectively, P = 0.06). Conclusion: Despite the low patient number and short follow-up, the results suggest that re-ESD is a safe and effective endoscopic treatment for recurrent gastric cancer after ESD. abstract_id: PUBMED:36292169 Endoscopic Submucosal Dissection in Patients with Early Gastric Cancer in the Remnant Stomach. Endoscopic submucosal dissection (ESD) in patients with early gastric cancers (EGCs) in the remnant stomach is technically difficult, owing to the limited space and fibrosis under the suture lines and anastomoses. Conversely, ESD for patients with EGCs in the remnant stomach is less invasive and provides better quality of life than completion total gastrectomy. To clarify the effectiveness and safety of ESD, we reviewed the medical records of patients with EGCs in the remnant stomach who underwent ESD between July 2006 and October 2020 at our institution. All identified patients were included in the analysis. Of 25 patients with 27 lesions, the en bloc and R0 resection rates were 88.9% and 85.2%, respectively. 
Neither perforation nor postoperative bleeding was observed. During a median follow-up period of 48 (range, 5-162) months, the 5-year overall survival rate was 71.0%, whereas the 5-year cause-specific survival rate was 100%. No obvious differences in the outcomes of procedures with suture line involvement and without suture line or anastomosis involvement were noted. In conclusion, ESD was effective and safe in patients with EGCs in the remnant stomach despite the suture line involvement. abstract_id: PUBMED:22726467 Does submucosal fibrosis affect the results of endoscopic submucosal dissection of early gastric tumors? Background: Endoscopic submucosal dissection (ESD) is an effective treatment of early gastric tumors, but submucosal fibrosis can be an obstacle to successful ESD. Objective: To examine the association between endoscopic and pathologic factors and submucosal fibrosis in early gastric tumors, and to measure the association between degree of submucosal fibrosis and outcomes of ESD. Design: A retrospective study. Setting: An academic medical center. Patients: From November 2006 to April 2011, 161 patients with 167 early gastric tumors treated by ESD. Intervention: ESD. Main Outcome Measurements: Endoscopic and pathologic factors related to submucosal fibrosis. Procedure time, en bloc resection rate, and complications according to degree of submucosal fibrosis. Results: In univariate analysis, the presence of endoscopic submucosal fibrosis was significantly related to tumor size, location, ulceration, histologic findings, and submucosal invasion. Multivariate analysis for these factors showed that endoscopic submucosal fibrosis was independently associated with lesions in tumor size greater than 30 mm, in the proximal portion of the stomach, and more common in adenocarcinomas than in adenomas. After correction for multiple testing, only the middle of the stomach as a locational risk factor retains statistical significance. Also, the more advanced the endoscopic submucosal fibrosis, the longer the time required for ESD (P < .0001). The severity of endoscopic submucosal fibrosis was associated with a lower en bloc resection rate and with abundant immediate bleeding. Limitations: Retrospective, single-center study. Conclusion: Submucosal fibrosis of early gastric tumors is closely related to tumor size, location, ulceration, histologic findings, and submucosal invasion. Moreover, the greater the degree of submucosal fibrosis the longer the time taken for the ESD procedure and the higher the frequency of complications such as perforation and immediate bleeding. abstract_id: PUBMED:26879552 Endoscopic Submucosal Dissection for Early Gastric Neoplasia Occurring in the Remnant Stomach after Distal Gastrectomy. Background/aims: Endoscopic submucosal dissection (ESD) for tumors occurring in the remnant stomach is technically difficult to perform because of limited working space and severe fibrosis and staples present around the suture line. We aimed to elucidate the feasibility and clinical outcomes of performing ESD for tumors in the remnant stomach. Methods: Between December 2007 and January 2013, 18 patients underwent ESD for tumors (six adenomas and 12 differentiated-type early gastric cancers [EGCs]) occurring in the remnant stomach after distal gastrectomy. Clinicopathologic features and clinical outcomes after ESD were retrospectively analyzed. Results: Two-thirds of the lesions were located on the body, and half were located on the suture line. 
En bloc resection, R0 resection, and en bloc with R0 resection rates were 88.9%, 100%, and 88.9%, respectively. Curative resection rate for EGC was 91.7%. Perforation occurred in one patient (5.6%) and was successfully managed by endoscopic closure with metallic clips and conservative management. There was no significant bleeding after ESD. During a median follow-up of 47.5 months, no local, metachronous, or extragastric recurrence was seen for either EGC or adenoma lesions. Conclusions: ESD is a feasible and effective treatment modality and can be considered a primary intervention for early gastric neoplasia occurring in the remnant stomach. Answer: Yes, submucosal fibrosis does affect the results of endoscopic submucosal dissection (ESD) of early gastric tumors. Submucosal fibrosis can be an obstacle to successful ESD, as it is associated with a longer procedure time, a lower en bloc resection rate, and a higher frequency of complications such as perforation and immediate bleeding (PUBMED:22726467). The presence of endoscopic submucosal fibrosis is significantly related to tumor size, location, ulceration, histologic findings, and submucosal invasion (PUBMED:22726467). Moreover, the greater the degree of submucosal fibrosis, the longer the time taken for the ESD procedure (PUBMED:22726467). In cases where severe fibrosis is present, such as in ulcerative early gastric cancer (EGC), alternative techniques like the endoscopic submucosal tunnel dissection salvage technique have been utilized to achieve complete resection of the lesions (PUBMED:25083097). This technique has shown several advantages over conventional ESD in cases of EGC with fibrosis (PUBMED:25083097). Other studies have also explored different traction techniques to facilitate ESD by maintaining satisfactory traction during dissection, which can be particularly useful in the presence of submucosal fibrosis (PUBMED:31887810). Magnetic ring-assisted ESD has been evaluated as a feasible and safe treatment for gastric fibrotic lesions, potentially shortening the endoscopic learning curve for non-skilled endoscopists (PUBMED:37120374). Overall, submucosal fibrosis is a significant factor that can complicate the ESD procedure for early gastric tumors, and it necessitates the use of specialized techniques and devices to improve the safety and efficacy of the procedure.
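Several abstracts above summarize small-sample comparisons of operating times with a non-parametric test; PUBMED:24368936 explicitly analyses continuous variables with the Mann-Whitney U test. A minimal sketch of such a comparison, using purely synthetic procedure times rather than data from any of the cited studies, is:

from scipy import stats

# Synthetic procedure times in minutes -- placeholders only, not study data.
re_esd_times = [180, 195, 202, 210, 233]          # hypothetical repeat-ESD cases
initial_esd_times = [45, 58, 67, 70, 72, 80, 95]  # hypothetical initial-ESD cases

# Two-sided Mann-Whitney U test, as used for continuous variables in PUBMED:24368936.
u_stat, p_value = stats.mannwhitneyu(re_esd_times, initial_esd_times, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")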
Instruction: Air leaks following pulmonary resection for lung cancer: is it a patient or surgeon related problem? Abstracts: abstract_id: PUBMED:22943333 Air leaks following pulmonary resection for lung cancer: is it a patient or surgeon related problem? Introduction: Prolonged air leak (PAL) is the most common complication after partial lung resection and the most important determinant of length of hospital stay for patients post-operatively. The aim of this study was to determine the risk factors involved in developing air leaks and the consequences of PAL. Methods: All patients undergoing lung resection between January 2002 and December 2007 in our hospital were studied retrospectively. Univariate analysis to predict risk factors for developing post-operative air leaks included patient demographics, smoking status, pulmonary function tests, disease aetiology (benign, malignant), neoadjuvant therapy (pre-operative radiotherapy/chemotherapy), extent and type of resection, and different consultant surgeons' practice. A logistic regression model was used for multivariate analysis. Results: A total of 1,911 lung resections were performed over the 6-year study period. An air leak lasting more than 6 days post-operatively was present in 129 patients (6.7%). This included 100 out of the 1,250 patients (8%) from the lobectomy group and 29 out of the 661 patients (4.4%) from the wedge/segmentectomy group. Using the multivariate analysis, the risk factors for developing an air leak included a low predicted forced expiratory volume in 1 second (pFEV(1)) (p<0.001), performing an upper lobectomy (p=0.002) and different consultant practice (p=0.02). PAL was associated with increased length of stay (p<0.0001), in-hospital mortality (p=0.003) and intensive care unit readmission (p=0.05). Conclusions: Air leaks after pulmonary resections were at an acceptable rate in our series. Particular patients are at a higher risk but meticulous surgical technique is vital in reducing their incidence. Our study shows that pFEV1 is the strongest predictor of post-operative air leaks. abstract_id: PUBMED:33882977 Low suction on digital drainage devices promptly improves post-operative air leaks following lung resection operations: a retrospective study. Background: We investigated the most effective suction pressure for preventing or promptly improving postoperative air leaks on digital drainage devices after lung resection. Methods: We retrospectively analyzed the postoperative data of 242 patients who were monitored with a digital drainage system after pulmonary resection in our institution between December 2017 and June 2020. We divided the patients into three groups according to the suction pressure used: A (low-pressure suction group: -5 cm H2O), B (intermediate-pressure group: -10 cm H2O), and C (high-pressure suction group: -20 cm H2O). We evaluated the duration of air leaks, timing of chest tube replacement, the amount of postoperative air leak, volume of fluid drained before chest tube removal, and the total number of air leaks during drainage. Results: In total, 217 patients were included in this study. The duration of air leaks differed significantly between the groups, decreasing most in group A and least in group C (P = 0.019). Timing of chest tube replacement, on the other hand, did not significantly differ between the three groups (P = 0.126).
The number of postoperative air leaks just after surgery did not significantly differ between the three groups (P = 0.175), but the number of air leaks on postoperative day 1 was lowest in group A, intermediate in group B, and greatest in group C (P = 0.033). The maximum amount of air leak during drainage was lowest in group A, intermediate in group B, and highest in group C (P = 0.036). Volume of fluid drained before chest tube removal did not significantly differ between the three groups (P = 0.986). Conclusion: Low-pressure suction after pulmonary resection seems to avoid or promptly improve postoperative air leaks on digital drainage devices after lung resection. Trial Registration: This is a single-institution, retrospective analysis-based study of data from an electronic database. The study protocol was approved by the Akashi Medical Center Institutional Research Ethics Board (approval number: 2020-9). abstract_id: PUBMED:27955681 Prospective evaluation of biodegradable polymeric sealant for intraoperative air leaks. Background: A biodegradable polymeric sealant has been previously shown to reduce postoperative air leaks after open pulmonary resection. The aim of this study was to evaluate safety and efficacy during minimally invasive pulmonary resection. Methods: In a multicenter prospective single-arm trial, 112 patients with a median age of 69 years (range 34-87 years) were treated with sealant for at least one intraoperative air leak after standard methods of repair (sutures, staples or cautery) following minimally invasive pulmonary resection (Video-Assisted Thoracic Surgery (VATS) or Robotic-Assisted). Patients were followed in hospital and 1 month after surgery for procedure-related and device-related complications and presence of air leak. Results: Forty patients had VATS and 72 patients had Robotic-Assisted procedures, with the majority (80/112, 71%) undergoing anatomic resection (61 lobectomy, 13 segmentectomy, 6 bilobectomy). There were no device-related adverse events. The overall morbidity rate was 41% (46/112), with major complications occurring in 16.1% (18/112). In-hospital mortality and 30-day mortality were 1.9% (2/103). The majority of intraoperative air leaks (107/133, 81%) were sealed after sealant application, and an additional 16% (21/133) were considered reduced. Forty-nine percent of patients (55/112) were free of air leak throughout the entire postoperative study period. Median chest tube duration was 2 days (range 1-46 days), and median length of hospitalization was 3 days (range 1-20 days). Conclusions: This study demonstrated that use of a biodegradable polymer for closure of intraoperative air leaks as an adjunct to standard methods is safe and effective following minimally invasive pulmonary resection. Trial Registration: ClinicalTrials.gov: NCT01867658. Registered 3 May 2013. abstract_id: PUBMED:21525031 Air leaks following pulmonary resection for malignancy: risk factors, qualitative and quantitative analysis. Air leaks are a common complication of pulmonary resection. The aims of this study were to analyze risk factors for postoperative air leak and to evaluate the role of air leak measurement in identifying patients at increased risk for cardiorespiratory morbidity and prolonged air leak. From March to December 2009, 142 consecutive patients underwent pulmonary resection for malignancy and were prospectively followed up. Preoperative and intraoperative risk factors for air leak were evaluated. Air leaks were qualitatively and quantitatively labeled twice daily.
There were 52 (36.6%) patients who had an air leak on day 1, and 32 (22.5%) who had an air leak on day 2. Air leak was ≥180 ml/min in 12 (37.5%) of these patients. Independent predictors of air leak on day 2 included type of pulmonary resection, presence of adhesions, and incomplete fissures. Cardiorespiratory morbidity was significantly higher (34.4%) in patients who experienced air leak on day 2 than in those who did not (10.9%) (P=0.002). Nine (75%) out of 12 patients with air leak ≥180 ml/min on day 2 had prolonged air leak (greater than five days) (P=0.0001). abstract_id: PUBMED:9641332 Technique to reduce air leaks after pulmonary lobectomy. Objective: Patients undergoing pulmonary resections often present with postoperative air leaks of varying magnitude and duration; this complication is more frequent with incomplete or absent interlobar fissures. Small leaks close spontaneously within 5-7 days; larger leaks may persist longer and could be associated with increased morbidity and prolonged hospitalization. We evaluated the role of different techniques to complete interlobar fissures before pulmonary lobectomy to prevent postoperative air leaks and reduce hospital stay and costs. Methods: A total of 30 patients undergoing pulmonary lobectomy for lung cancer and presenting incomplete interlobar fissures that needed to be opened both anteriorly and posteriorly were randomized into three groups. In Group I, fissures were created with a GIA stapler and buttressed with bovine pericardial sleeves. In Group II, we used TA 55 staplers alone; in Group III we used the 'old fashion' cautery, clamps and silk ties. The three groups were homogeneous for age, type of pulmonary resection and stage of the tumor. The duration of postoperative air leaks and hospital stay were compared with one-way analysis of variance. Results: Postoperative air leaks for Groups I, II and III persisted for 2 +/- 0.94, 5.3 +/- 2 and 5.3 +/- 1.7 days, respectively. Mean hospital stay was 4.4 +/- 0.96, 7.8 +/- 2.14 and 7.2 +/- 1.5 days, respectively. The difference between groups in terms of duration of postoperative air leaks and hospital stay was statistically significant (P = 0.0001). Conclusions: The use of GIA staplers and pericardial sleeves to complete interlobar fissures for pulmonary lobectomy significantly reduces the duration of postoperative air leaks and hospital stay; no complications were associated with this technique. abstract_id: PUBMED:11687173 Surgical sealant for preventing air leaks after pulmonary resections in patients with lung cancer. Background: Postoperative air leak is a frequent complication after pulmonary resection for lung cancer. It may cause serious complications, such as empyema, or prolong the need for chest tube and hospitalisation. Surgical sealants of different types have been developed to prevent or to reduce postoperative air leaks. A systematic review was therefore undertaken to evaluate the evidence on their effectiveness. Objectives: To evaluate the effectiveness of surgical sealants in preventing or in reducing postoperative air leaks after pulmonary resection for lung cancer. Search Strategy: Electronic databases and bibliographies were searched, and hand searching of conference proceedings was conducted to identify published and unpublished trials.
Selection Criteria: Randomised controlled clinical trials were included in which standard closure techniques plus a sealant were compared with the same intervention with no use of any sealant in patients undergoing elective pulmonary resection, provided that a large proportion of the patients studied had undergone pulmonary resection for lung cancer. Data Collection And Analysis: Two reviewers independently selected the trials to be included in the review, assessed the methodological quality of each trial and extracted data using a standardised form. Because of several limitations, narrative synthesis was used at this stage. Main Results: Two hundred and thirty-two patients from 4 trials were included. In two trials no differences were found between treatment and control patients in terms of reduction of duration of air leaks, chest tube drainage, hospitalisation or complications attributable to prolonged intercostal drainage. In the other two trials, postoperative air leaks were significantly reduced in the treatment groups, but there were no differences in hospital stay, complications, fever, intraoperative and postoperative intubation times, chest tube drainage or cost. Reviewers' Conclusions: Systematic use of surgical sealants in clinical practice cannot be recommended at the moment. More randomised controlled clinical trials are needed. abstract_id: PUBMED:16034884 Surgical sealant for preventing air leaks after pulmonary resections in patients with lung cancer. Background: Postoperative air leak is a frequent complication after pulmonary resection for lung cancer. It may cause serious complications, such as empyema, or prolong the need for chest tube and hospitalisation. Surgical sealants of different types have been developed to prevent or to reduce postoperative air leaks. A systematic review was therefore undertaken to evaluate the evidence on their effectiveness. Objectives: To evaluate the effectiveness of surgical sealants in preventing or in reducing postoperative air leaks after pulmonary resection for lung cancer. Search Strategy: The electronic databases MEDLINE (1966 to 2004), EMBASE (1974 to 2004), Cancerlit (1993 to 2004), the Cochrane Central Register of Controlled Trials (The Cochrane Library, Issue 3/2004) and listed references were searched, and handsearching of conference proceedings was conducted to identify published and unpublished trials. Selection Criteria: Randomised controlled clinical trials were included in which standard closure techniques plus a sealant were compared with the same intervention with no use of any sealant in patients undergoing elective pulmonary resection, provided that a large proportion of the patients included in the studies had undergone pulmonary resection for lung cancer. Data Collection And Analysis: Three reviewers independently selected the trials to be included in the review, assessed the methodological quality of each trial and extracted data using a standardised form. Because of several limitations, narrative synthesis was used at this stage. Main Results: Twelve trials, with 1097 patients in total, were included. In eight trials there was a statistically significant difference between treatment and control patients in reducing postoperative air leaks. However, this reduction translated into a significant reduction of hospital stay in only one trial. In only one trial were the time to chest drain removal and the percentage of patients with persistent air leak significantly reduced in the treatment group.
Authors' Conclusions: Although surgical sealants seem to reduce postoperative air leaks, length of hospitalisation is not affected and infectious complications may be increased. Therefore, systematic use of surgical sealants in clinical practice cannot be recommended at the moment. More randomised controlled clinical trials are needed. abstract_id: PUBMED:17320580 Pleurodesis with an autologous blood patch to prevent persistent air leaks after lobectomy. Objective: Air leakage after pulmonary lobectomy is a well-known problem often contributing to extended hospitalization. Many techniques have been proposed to prevent and treat air leakage, but none have been proved incontrovertibly effective. We evaluated the role of an autologous blood patch after pulmonary lobectomy. Methods: Twenty-five patients with air leaks on the sixth postoperative day after lobectomy were enrolled in this study. They were randomly assigned to 2 groups: group A (12 patients), with 50 mL of autologous blood infused in the pleural cavity; and group B (13 patients), with 100 mL of blood infused. These 2 groups were retrospectively compared with the last 15 patients showing the presence of air leaks for at least 6 days (group C) (in this group the duration of leakage after the sixth postoperative day was compared). We recorded the duration of posttreatment air leaks and hospitalization. Results: Air leaks stopped 2.3 +/- 0.6 days after the procedure in group A, 1.5 +/- 0.6 days after the procedure in group B, and after 6.3 +/- 3.7 days in group C. The air leakage disappeared within 72 hours in all patients in groups A and B. There was a statistically significant difference in the duration of drainage between groups A and B (P = .005), groups A and C (P = .0009), and groups B and C (P = .0001), showing the effectiveness of an autologous blood patch, particularly with 100 mL of blood. Conclusions: Management of air leaks after lobectomy with an autologous blood patch is easy, safe, and effective, and does not add costs. It may become the gold standard treatment early in the postoperative course. abstract_id: PUBMED:20091536 Surgical sealant for preventing air leaks after pulmonary resections in patients with lung cancer. Background: Postoperative air leak is a frequent complication after pulmonary resection for lung cancer. It may cause serious complications, such as empyema, or prolong the need for chest tube and hospitalization. Different types of surgical sealants have been developed to prevent or to reduce postoperative air leaks. A systematic review was therefore undertaken to evaluate the evidence on their effectiveness. Objectives: To evaluate the effectiveness of surgical sealants in preventing or reducing postoperative air leaks after pulmonary resection for lung cancer. Search Strategy: We searched the electronic databases MEDLINE (1966 to September 2008), EMBASE (1974 to September 2008), and the Cochrane Central Register of Controlled Trials (CENTRAL)(The Cochrane Library, Issue 3, 2008) and listed references. We hand searched conference proceedings to identify published and unpublished trials. Selection Criteria: We included randomized controlled clinical trials in which standard closure techniques plus a sealant were compared with the same intervention with no use of any sealant in patients undergoing elective pulmonary resection provided that a large proportion of the patients studied had undergone pulmonary resection for lung cancer. 
Data Collection And Analysis: Four reviewers independently selected the trials to be included in the review, assessed methodological quality of each trial and extracted data using a standardized form. Because of several limitations, narrative synthesis was used at this stage. Main Results: Sixteen trials, with 1642 randomized patients in total were included. In thirteen trials there were differences between treatment and control patients in reducing postoperative air leaks. This reduction proved to be significant in six trials. Three trials showed a significant reduction in time to chest drain removal in the treatment group. In two trials, the percentage of patients with persistent air leak was significantly smaller in the treatment group. Finally, three trials including 352 patients showed a statistically significant reduction in length of hospital stay. Authors' Conclusions: Surgical sealants reduce postoperative air leaks and time to chest drain removal but this reduction is not always associated with a reduction in length of postoperative hospital stay. Therefore, systematic use of surgical sealants with the objective of reducing hospital stay cannot be recommended at the moment. More and larger randomized controlled clinical trials are needed. abstract_id: PUBMED:28965258 Low-voltage coagulation, polyglycolic acid sheets, and fibrin glue to control air leaks in lung surgery. Objective: We evaluated the efficacy of low-voltage coagulation (LVC) with polyglycolic acid (PGA) sheets (Neoveil, GUNZE Ltd., Japan) and fibrin glue to control intraoperative alveolar air leaks after lung surgery. Methods: We included 176 patients with non-small cell lung cancer who underwent thoracoscopic lobectomies. When alveolar air leak was confirmed after lung resection, we applied LVC system to the pleural defect followed by layers of PGA sheets and fibrin glue (n = 40). We then analyzed postoperative air leaks (rate of occurrence and duration time). Results: 73% of patients (29/40 cases) experienced no postoperative air leaks. Although 11 patients experienced air leaks after surgery, there were no prolonged air leaks (>7 days) (resolution time, 3.5 ± 1.4 days; range, 2-6 days). Two patients required drainage for late-onset air leaks, but their conditions improved without further treatment. There were no further adverse events. Conclusions: The use of LVC with PGA sheets and fibrin glue following pulmonary resection efficiently prevented both intraoperative air leaks and prolonged air leaks after lung surgery. Answer: Air leaks following pulmonary resection for lung cancer can be influenced by both patient-related and surgeon-related factors. Patient-related factors include the patient's pulmonary function and the nature of their disease. For instance, a low predicted forced expiratory volume in 1 second (pFEV1) is a strong predictor of post-operative air leaks, indicating that patients with poorer pulmonary function are at higher risk (PUBMED:22943333). Additionally, the type of pulmonary resection, presence of adhesions, and incomplete fissures are independent predictors of air leaks (PUBMED:21525031). Moreover, the presence of air leaks can lead to increased length of hospital stay, in-hospital mortality, and intensive care unit readmission (PUBMED:22943333). Surgeon-related factors include the surgical technique and the type of intervention used to prevent or manage air leaks. 
Different consultant surgeons' practices were found to be a risk factor for developing air leaks, suggesting that surgical technique and experience may play a role (PUBMED:22943333). Techniques to complete interlobar fissures, such as using GIA staplers and pericardial sleeves, have been shown to significantly reduce the duration of postoperative air leaks and hospital stay (PUBMED:9641332). The use of biodegradable polymeric sealants and autologous blood patches has also been evaluated, with some studies showing effectiveness in reducing air leaks after minimally invasive pulmonary resection (PUBMED:27955681) and after lobectomy (PUBMED:17320580), respectively. Moreover, the use of low suction on digital drainage devices has been shown to promptly improve post-operative air leaks following lung resection operations (PUBMED:33882977). The application of low-voltage coagulation with polyglycolic acid sheets and fibrin glue has also been effective in preventing both intraoperative and prolonged air leaks after lung surgery (PUBMED:28965258). However, the effectiveness of surgical sealants in preventing or reducing postoperative air leaks has been debated, with some systematic reviews suggesting that while they may reduce air leaks, they do not always lead to a reduction in hospital stay, and their systematic use cannot be recommended without further evidence (PUBMED:16034884, PUBMED:20091536).
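The counts behind the 8% versus 4.4% comparison cited above (PUBMED:22943333: prolonged air leak in 100 of 1,250 lobectomies and 29 of 661 wedge/segmentectomies) can be turned into an unadjusted odds ratio with a log-based 95% confidence interval, as sketched below. This is illustrative only; the paper's actual risk-factor analysis used multivariate logistic regression.

import math

# Counts reported in PUBMED:22943333: prolonged air leak (PAL) by extent of resection.
pal_lob, n_lob = 100, 1250      # lobectomy group
pal_wedge, n_wedge = 29, 661    # wedge/segmentectomy group

print(f"PAL rate: lobectomy {pal_lob / n_lob:.1%}, wedge/segmentectomy {pal_wedge / n_wedge:.1%}")

# Unadjusted odds ratio with a Woolf (log) 95% confidence interval.
a, b = pal_lob, n_lob - pal_lob
c, d = pal_wedge, n_wedge - pal_wedge
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"Unadjusted OR = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")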
Instruction: A single training center's experience with 200 consecutive cases of diverticulitis: can all patients be approached laparoscopically? Abstracts: abstract_id: PUBMED:18347863 A single training center's experience with 200 consecutive cases of diverticulitis: can all patients be approached laparoscopically? Background: This study aimed to evaluate the outcomes for consecutive patients with diverticular disease who underwent elective laparoscopic sigmoid colectomy. Methods: Data for this patient population were collected by chart review and analyzed retrospectively. Results: Between December 2001 and March 2007, 200 consecutive patients (93 men and 107 women) with an average age of 55 years were identified. All cases were managed by one of two colorectal surgeons. Of the 200 patients, 158 had recurrent diverticulitis, 20 had fistulas, 12 had abscesses, 8 had strictures, 1 had a mass, and 1 had a bleed. The mean operative time was 159 min, and the conversion rate was 8%. A total of 30 early postoperative complications occurred in 26 patients, including wound infection (n = 9), ileus (n = 8), Clostridium difficile colitis (n = 3), urinary retention (n = 3), pelvic abscess (n = 2), deep vein thrombosis and pulmonary embolism (n = 1), pneumonia (n = 1), urinary tract infection (n = 1), anastomotic leak (n = 1), and small bowel obstruction (n = 1). Late complications experienced by 11 patients included Clostridium difficile colitis (n = 3), incisional hernia (n = 3), wound infection (n = 3), wound hematoma (n = 1), and intraabdominal hemorrhage (n = 1). Conclusions: The authors believe it is feasible to offer elective laparoscopic sigmoid colectomy to all patients with symptomatic diverticular disease despite preoperative risk factors. abstract_id: PUBMED:9527055 Laparoscopically assisted anterior resection for diverticular disease: follow-up of 100 consecutive patients. Purpose: The objectives of this study were to refine the technique of laparoscopically assisted anterior resection (LAR) for diverticular disease and to analyze the morbidity and mortality rates, and longer-term follow-up of the first 100 consecutive patients. Methods: Data were collected prospectively, and follow-up was performed by an independent assessor using a standardized questionnaire. Results: The median duration of surgery was 180 minutes, the median time for passage of flatus was 2 days after surgery, and the median length of hospital stay was 4 days. Overall, the morbidity rate was 21%, and the wound infection rate was 5%. There were no deaths. Eight patients underwent open laparotomy. The rate of complications was significantly greater in the latter group of patients (75%) than in those who underwent laparoscopy (16%, p = 0.002). The comparison between the first 20 and the last 20 patients revealed a significantly shorter duration of surgery (median 225 min vs. 150 min; p < 0.0001) and decreased length of stay (6 days vs. 4 days, p < 0.0001). Apart from a nonsignificant increase in the length of surgery, there were no differences in other study parameters when comparisons were made between those patients who underwent LAR for complicated diverticular disease and those who underwent LAR for uncomplicated diverticular disease. Follow-up: Ninety patients were available for follow-up at a median time of 37 months. Ninety-three percent of the patients reported that the surgery had improved their symptoms. No patient required hospitalization, and no one was treated with antibiotics for recurrent symptoms.
Conclusion: Laparoscopically assisted anterior resection for diverticular disease has acceptable morbidity and mortality rates and a median postoperative hospital stay of only 4 days. Follow-up investigations revealed no recurrence of diverticulitis, and patients reported satisfaction regarding cosmetic and functional results. abstract_id: PUBMED:21887555 Single incision laparoscopic colorectal surgery: a single surgeon experience of 102 consecutive cases. Background: Due to the recent heightened interest in even less invasive surgery, single port laparoscopic colorectal surgery is quickly gaining acceptance. While this access technique was first described in 2007 for colorectal resective procedures, large series are lacking. Methods: Between January 2009 and October 2010, all patients undergoing single port colorectal surgery performed by a single surgeon were prospectively entered into an IRB-approved database and studied with regard to perioperative events, morbidity, and mortality. Results: One hundred and two consecutive patients underwent a single port colorectal procedure. Mean age was 47 years (9-93 years), and average body mass index was 26 kg/m(2) (15-39 kg/m(2)). Primary diagnoses included ulcerative colitis (51), neoplasia (23), Crohn's disease (14), diverticulitis (11), familial adenomatous polyposis (1), and other (2). Procedures included 23 total colectomies, 40 segmental colectomies, and 19 other procedures. There was 1 conversion to an open operation, and 18 (18%) patients required placement of additional ports (1 port: N = 13; 2 ports: N = 2; 3 ports: N = 3). Average operating room time was 99 min (13-245), mean length of incision was 3.7 cm (1.2-7.8 cm), and average estimated blood loss was 140 ml (0-750 ml). There was one postoperative death, and 39 (38%) patients experienced minor postoperative complications. Mean lymph node harvest for oncologic resections was 44 (14-142). The average length of hospital stay was 5.9 days (2-24 days). Conclusions: With proper patient selection and laparoscopic experience, single port colorectal surgery can be performed for even the most complex colorectal procedures. Further studies are needed to assess the benefits that single port colorectal surgery has over a conventional laparoscopic approach. abstract_id: PUBMED:10550344 Laparoscopically guided reversal of Hartmann's procedure Unlabelled: Morbidity and mortality after reversal of Hartmann's procedure following perforated sigmoid diverticulitis are high and the rate of intestinal restoration is low. Aim: To investigate whether laparoscopically assisted reversal of Hartmann's procedure is technically feasible and whether the laparoscopic procedure offers any benefit to the patient. Method: Nineteen patients were investigated. The postoperative course was followed prospectively. All patients were reinvestigated 9 months after surgery. Results: Laparoscopic reversal of Hartmann's procedure was attempted in 19 patients. One patient did not want the laparoscopic technique. In two cases (11 %) conversion to the conventional technique was necessary; thus, 16 patients were operated laparoscopically. Median operative time was 114 (65-180) min. With the exception of three wound infections no immediate postoperative complications were noticed. Patients' convalescence was fast. First evacuation took place 3.3 (3-5) days after surgery, complete oral nutrition 3.6 (3-5) days after surgery. Duration of postoperative hospitalisation was 7.5 (5-12) days. 
One patient later developed a clinically significant anastomotic stricture that required endoscopic dilatation. Conclusion: Laparoscopically assisted Hartmann's reversal is technically demanding but feasible. Postoperative morbidity is low, duration of hospitalisation short, convalescence fast. Thus, good arguments exist for performing reversal of Hartmann's procedures laparoscopically. abstract_id: PUBMED:25829061 Single-port laparoscopic resection for diverticular disease: experiences with more than 300 consecutive patients. Background: Single-port laparoscopic surgery (SILS) is a new minimally invasive technique which has been developed to minimize surgical access trauma. For colorectal resection, the access trauma can be limited to the single incision that is needed for specimen extraction anyway, but dissection might be more demanding than in multiport laparoscopic surgery. The aim of this study was to evaluate the usefulness of SILS for the treatment of diverticular disease of the sigmoid colon. Methods: Between July 2009 and December 2013, a total of 329 consecutive patients with intended SILS sigmoid colectomy for complicated or frequently recurring diverticulitis were studied. Clinical data were collected in a prospective database. Telephone follow-ups were performed to evaluate long-term morbidity and quality of life. Results: Of the 329 patients (139 male) with intended SILS sigmoid colectomy, 309 were successfully operated on using the SILS technique, while 20 (6.1%) were converted to open surgery. The mean duration of surgery was 153.5 (65-434) min. The total morbidity rate was 18.3%. Anastomotic leakage was the most serious complication, occurring in 13 patients (leak rate 4%), with one resulting death (mortality rate 0.3%). Quality of life had significantly improved 6 months after surgery in comparison with the preoperative value. At a mean follow-up of 18.6 months, 16 patients (4.9%) had an incisional hernia and one patient had recurrent diverticulitis. Conclusion: In spite of an almost 5% incisional hernia rate 6 months after surgery, single-incision sigmoid colectomy for diverticulitis is feasible and safe and is therefore an alternative to multiport laparoscopic surgery. Further trials are necessary to evaluate its benefits over multiport laparoscopic surgery. abstract_id: PUBMED:32048755 Initial clinical experience of single-incision robotic colorectal surgery with da Vinci SP platform. Background: The da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA) was introduced to overcome the limitations of single-incision laparoscopic surgery, which is challenging due to its restrictions regarding triangulation and retraction. The purpose of this article is to describe the initial experience with single-incision surgery using the da Vinci Single-Port Platform (dVSP). Methods: The medical records of patients with colorectal disease who underwent single-incision robotic surgery using the dVSP were retrospectively reviewed. Results: Five patients with appendiceal and colorectal cancer, and two with diverticulitis, were enrolled. All procedures were completed using a pure single-incision approach, with the exception of the low anterior resection. There were two minor complications. For patients with colorectal cancer, the number of retrieved lymph nodes and the status of the resection margin were acceptable, and cosmetic results were satisfactory. Conclusion: The dVSP is a novel surgical platform that can be used as an alternative surgical modality for colorectal surgery.
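Single-arm series such as PUBMED:25829061 report event rates (20 of 329 conversions, 13 of 329 anastomotic leaks) without interval estimates. A minimal sketch of how a Wilson 95% confidence interval could be attached to such proportions follows; the counts are taken from the abstract, while the choice of interval method is an assumption of this sketch, not the authors'.

import math

def wilson_ci(events, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = events / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Counts reported in PUBMED:25829061 (n = 329 intended SILS sigmoid colectomies).
for label, events in [("conversion to open surgery", 20), ("anastomotic leak", 13)]:
    low, high = wilson_ci(events, 329)
    print(f"{label}: {events}/329 = {events / 329:.1%} (95% CI {low:.1%} to {high:.1%})")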
abstract_id: PUBMED:25303915 Single-Incision Robotic Colectomy (SIRC) case series: initial experience at a single center. Background: Laparoscopic colectomy has been associated with favorable outcomes when compared to open colectomy. Single-Incision Robotic Colectomy (SIRC) is a novel procedure hypothesized to improve upon conventional three-port laparoscopic colectomy. We hereby present and analyze our institution's initial experience with SIRC. Methods: We performed a retrospective review of 59 patients who underwent SIRC between May 2010 and September 2013, attempting to identify factors associated with conversion rate and postoperative complication rate. Results: Our study included 34 males (57.6%) and 25 females (42.4%). The mean age was 60.3 years (range 29-92 years), and the mean BMI was 26.6 kg/m(2) (range 14.9-39.7 kg/m(2)). We identified 31 right hemicolectomies (53.4%), 20 sigmoid colectomies (34.5%), 5 left hemicolectomies (1.7%), 2 low anterior resections (3.5%), and 1 total colectomy (1.7%). The overall median operative time was 188 min with an interquartile range of 79 min. Surgical indications included diverticulitis (n = 23, 39.0%), benign colonic mass (n = 18, 30.5%), colon cancer (n = 16, 27.1%), familial adenomatous polyposis (n = 1, 1.7%), and Crohn's disease (n = 1, 1.7%). There were four conversions to open procedure (6.8%), three conversions to multiport robotic procedure (5.1%), and one conversion to single-port laparoscopic procedure (1.7%). Reasons for conversions include difficulty mobilizing the colon and robotic equipment malfunction. Conversions were associated with both higher complication rates (62.5 vs 25.5%, p = 0.035) and longer LOS (7.4 vs 4.0 days, p = 0.0003). Postoperative complications occurred in 16 of the 59 cases (27.1%). Higher BMI was the only significant risk factor for postoperative complications. The overall median LOS was 4 ± 2 days, while the median estimated blood loss was 100 ± 90 ml. Conclusions: Our experience has shown that SIRC can be a safe and feasible procedure for both benign and malignant disease. Patient selection is the key to improving surgical outcomes in SIRC. abstract_id: PUBMED:33211162 Initial clinical experience with Single-Port robotic (SP r) left colectomy using the SP surgical system: description of the technique. Background: The daVinci Single-Port (SP) robot is a new robotic platform designed to overcome the challenges of Single-Incision Laparoscopic Surgery. The objective of this study is to demonstrate the feasibility and technical aspects of SP robotic (SP r) left colectomy using the SP platform. Methods: Under Institutional Review Board approval and registration on ClinicalTrials.gov, we performed SP rLeft colectomy using the daVinci SP surgical system on four patients. The primary end-point of this study was to report and describe the technical feasibility to perform SP rLeft colectomy. The secondary end-points included perioperative metrics and outcomes. Results: Four patients underwent successful SP rLeft colectomy for diverticulitis through a single incision (average size: 4.4 cm) without intraoperative complications or conversions. The robot was docked 2.7 times on average (range 2-4). The average docking time was 8.4 min (range: 3-33 min). The mean estimated blood loss was 91 mL (range: 20-250 mL). There were no morbidities or mortalities. Patients were discharged on POD 2 and 3. 
Conclusion: In this initial clinical series, we demonstrated SP robotic left colectomy to be feasible and safe to perform in select patients. The SP robot's single-arm design and flexible instruments have been shown to provide excellent visualization and retraction with minimal collisions. We predict that the SP robot will be widely utilized in the field of colorectal surgery as it becomes available to colorectal surgeons. Further experience and larger studies are needed to define the advantages and identify the problems with SP robotic left colectomy. abstract_id: PUBMED:9876737 Laparoscopically assisted surgery for colonic perforation with peritonitis--a case report. Elective laparoscopic colonic surgery is increasingly recognized as feasible and perhaps preferable. A case of laparoscopically assisted surgery for trauma to the rectum with bacterial peritonitis is presented. It presents an example of the application of this modality to the treatment of iatrogenic colon perforations and perhaps selected diverticulitis. abstract_id: PUBMED:30251070 Laparoscopic Surgery for Diverticular Fistulas: Outcomes of 111 Consecutive Cases at a Single Institution. Background: The purpose of this study was to review our experience with laparoscopic colectomy and fistula resection, to evaluate the frequency of conversion to open surgery, and to compare the perioperative courses of the complete laparoscopic and conversion groups. Methods: This study is a retrospective analysis of 111 consecutive adult patients with diverticular fistulae diagnosed clinically or radiographically over 11 years at a single institution. Five patients were excluded for preoperative comorbidities. The remaining 106 consecutive patients underwent minimally invasive sigmoid colectomy with primary anastomosis. Preoperative, intraoperative, and postoperative variables were collected from the colorectal surgery service database. A retrospective cohort analysis was performed between the laparoscopic and converted groups. Results: Within the group, 47% had colovesical fistulas, followed by colovaginal, coloenteric, colocutaneous, and colocolonic fistulas. The overall conversion rate to laparotomy was 34.7% (n = 37). The most common reason for conversion was dense fibrosis. Mean operative time was similar between groups. Combined postoperative complications occurred in 26.4% of patients (21.4% laparoscopic and 37.8% converted, p = 0.075). Length of stay was significantly shorter in the laparoscopic group (5.8 vs 8.1 days, p = 0.014). There were two anastomotic leaks, both in the open group. There were no 30-day mortalities. Conclusions: Laparoscopic sigmoid colectomy for diverticular fistula is safe, with complication rates comparable to open sigmoid resection. We identify a conversion rate that allows the majority of patients to benefit from minimally invasive procedures. Answer: Based on the single training center's experience with 200 consecutive cases of diverticulitis, it appears that it is feasible to offer elective laparoscopic sigmoid colectomy to all patients with symptomatic diverticular disease despite preoperative risk factors (PUBMED:18347863). The study conducted between December 2001 and March 2007 involved 200 consecutive patients managed by one of two colorectal surgeons. The patients presented with various complications of diverticular disease, including recurrent diverticulitis, fistulas, abscesses, strictures, a mass, and a bleed. The mean operative time was 159 minutes, and the conversion rate to open surgery was 8%.
The early postoperative complications included wound infection, ileus, Clostridium difficile colitis, urinary retention, pelvic abscess, deep vein thrombosis and pulmonary embolism, pneumonia, urinary tract infection, anastomotic leak, and small bowel obstruction. Late complications included Clostridium difficile colitis, incisional hernia, wound infection, wound hematoma, and intraabdominal hemorrhage. Despite these complications, the authors concluded that elective laparoscopic sigmoid colectomy could be offered to all patients with symptomatic diverticular disease.
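The fistula series cited above (PUBMED:30251070) compares complication rates of 37.8% in converted versus 21.4% in completed laparoscopic cases (p = 0.075). A sketch of how such a comparison can be tested with Fisher's exact test is shown below; the cell counts are approximate reconstructions from the reported percentages (roughly 14 of 37 converted and 14 of 69 laparoscopic cases with complications) and may differ slightly from the study data.

from scipy.stats import fisher_exact

# Approximate counts reconstructed from the percentages in PUBMED:30251070 -- illustration only.
table = [
    [14, 37 - 14],  # converted: with complications, without complications
    [14, 69 - 14],  # completed laparoscopically: with complications, without complications
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.3f}")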
Instruction: Can early neurosonology predict outcome in acute stroke? Abstracts: abstract_id: PUBMED:29334161 The Role of Neurosonology in the Diagnosis and Management of Patients with Carotid Artery Disease: A Review. Carotid artery disease (CAD) is a common cause of ischemic stroke with high rates of recurrence. Carotid endarterectomy (CEA) or carotid artery stenting (CAS) is highly recommended for the secondary prevention of symptomatic CAD during the first 14 days following the index event of transient ischemic attack or minor stroke. CEA or CAS may also be offered in selected cases with severe asymptomatic stenosis. Herein, we review the utility of neurosonology in the diagnosis and pre-/peri-interventional assessment of CAD patients who undergo carotid revascularization procedures. Carotid ultrasound may provide invaluable information on plaque echogenicity, ulceration, risk of thrombosis, and rupture. Transcranial Doppler or transcranial color-coded sonography may further assist by mapping collateral circulation, evaluating the impairment of vasomotor reactivity, and detecting microembolization or reperfusion hemorrhage in real time. Neurosonology examinations are indispensable bedside tools assisting in the diagnosis, risk stratification, peri-interventional monitoring, and follow-up of patients with CAD. abstract_id: PUBMED:28692175 Neurosonology Accuracy for Isolated Acute Vestibular Syndromes. Objectives: The clinical approach to acute vestibular syndromes is often complex for the physician. Neurosonology offers a noninvasive method to study the cervicocephalic circulation when a vascular etiology is suspected. We aim to evaluate the diagnostic accuracy of a vascular neurosonological exam in isolated acute vestibular syndrome. Methods: We included all patients with acute isolated vestibular syndrome who underwent cerebrovascular ultrasound and magnetic resonance imaging between 2011 and 2015. Those with any clinical sign of brainstem lesion on presentation were excluded. All patients underwent the neuroimaging study (brain computed tomography and magnetic resonance imaging) and neurologic surveillance. The neurosonological exam included all intra- and extracranial segments of the vertebrobasilar circulation. A positive ultrasound exam was defined as the presence of stenotic or occlusive disease in any of these segments related to the infarcted area. Results: A total of 108 patients were included: 60 (53.6%) were males (mean age: 60.75 years; standard deviation: 14.17). In 27 patients (25.0%) a cerebral ischemic lesion was found to be the cause of the vertigo. Neurosonological assessment showed a sensitivity of 40.7% (95% confidence interval (CI): 22.4; 61.2), specificity of 100% (95% CI: 95.5; 100.0), positive predictive value (PPV) of 100% (95% CI: 71.5; 100.0), and negative predictive value (NPV) of 83.5% (95% CI: 74.6; 90.3). Conclusions: Our study suggests that cerebrovascular ultrasound is a highly specific method for the diagnosis of cerebrovascular vertigo. However, its low sensitivity makes it a poor candidate for screening. abstract_id: PUBMED:20299389 Early functional magnetic resonance imaging activations predict language outcome after stroke. An accurate prediction of system-specific recovery after stroke is essential to provide rehabilitation therapy based on individual needs.
We explored the usefulness of functional magnetic resonance imaging scans from an auditory language comprehension experiment to predict individual language recovery in 21 aphasic stroke patients. Subjects with at least moderate language impairment received extensive language testing 2 weeks and 6 months after left-hemispheric stroke. A multivariate machine learning technique was used to predict language outcome 6 months after stroke. In addition, we aimed to predict the degree of language improvement over 6 months. Based on functional magnetic resonance imaging data from language-relevant areas, 76% of patients were correctly separated into those with good and those with bad language performance 6 months after stroke. Accuracy further improved (86% correct assignments) when age and language score were entered alongside functional magnetic resonance imaging data into the fully automatic classifier. A similar accuracy was reached when predicting the degree of language improvement based on imaging, age and language performance. No prediction better than chance level was achieved when exploring the usefulness of diffusion-weighted imaging as well as functional magnetic resonance imaging acquired two days after stroke. This study demonstrates the high potential of current machine learning techniques to predict system-specific clinical outcome even for a disease as heterogeneous as stroke. The best prediction of language recovery is achieved when the brain activation potential after system-specific stimulation is assessed in the second week post stroke. More intensive early rehabilitation could be provided for those with a predicted poor recovery, and the extension to other systems, for example motor and attention, seems feasible. abstract_id: PUBMED:23250680 Knowledge of vascular status for therapeutic decision-making in acute ischemic stroke: which is the role of neurosonology? In recent years there has been increasing interest in knowing the vascular status of patients with acute stroke. Detection and localization of an artery occlusion are of great interest for an accurate prognosis and the selection of the most appropriate recanalizing therapy. Neurosonology is a useful diagnostic tool for studying vascular status in patients with acute stroke. Different situations in which ultrasound offers valuable diagnostic information are reviewed, such as middle cerebral artery (MCA) occlusion, 'T' internal carotid artery (ICA) occlusion, 'tandem' ICA-MCA occlusion, monitoring of intra-cranial artery occlusions, acute occlusion of the extracranial ICA, and free-floating thrombus in the ICA. Neurosonology offers evident advantages compared with other diagnostic techniques: it is faster, dynamic, cheaper, harmless, and accessible, allows real-time monitoring of the patient's vascular status, avoids delays in acute treatments and has a therapeutic effect (sonothrombolysis). Neurosonology has an essential role in the diagnosis of vascular status and in therapeutic decision-making for acute ischemic stroke patients. abstract_id: PUBMED:12470856 Design of a multicentre study on neurosonology in acute ischaemic stroke. A project of the neurosonology research group of the World Federation of Neurology. This report summarises the design and organisation of a multicentre study on neurosonology in acute ischaemic stroke.
The Neurosonology in Acute Ischaemic Stroke Study will determine whether extracranial and transcranial Doppler and duplex sonography performed within 6 h after onset of stroke improves prediction of functional outcome if applied in addition to routine diagnostic admission investigations, i.e. medical history, standardised neurological examination, brain imaging by computed or magnetic resonance tomography, electrocardiography, and baseline laboratory examination. The primary hypothesis is that there is a consistent and persuasive difference between patients with an occluded middle cerebral artery and those with an open artery in terms of the functional deficit after 3 months. Power calculations are based on the assumption of alpha=0.05 (two-sided test) and a probability of a maximally mild functional deficit of 0.4. Detection of a 20% difference with a power of 0.8 resulted in a calculated sample of 400 patients to be observed. The calculation took into consideration that only 50% of admitted patients would have a moderate to severe neurological deficit, of whom only 30% would have an occlusion of the corresponding middle cerebral artery. Furthermore, the study is designed to evaluate a difference in functional outcome in relation to the occurrence and time of recanalisation in patients presenting with an initially occluded middle cerebral artery. abstract_id: PUBMED:33505280 Stroke in Asia: Neurosonology in Neurocritical Care. N/A abstract_id: PUBMED:35933692 Early surrogates of outcome after thrombectomy in posterior circulation stroke. Background: Early surrogates for functional outcome in anterior circulation stroke have been described, with the National Institutes of Health Stroke Scale (NIHSS) at 24 h being reported as the most accurate metric. We compare the discriminatory power of established definitions of early neurological improvement (ENI) and NIHSS scores at admission and 24 h to predict functional outcome at 90 days after thrombectomy in posterior circulation stroke (PCS). Methods: All patients enrolled in the German Stroke Registry (June 2015-December 2019) with PCS and at least vertebral or basilar artery occlusions were included. NIHSS scores at admission and at 24 h and ENI definitions (improvement of 8 or 10 NIHSS points, or an NIHSS score of 0 or 1, at 24 h) were compared for predicting functional outcome at 90 days. Favourable and good outcome were defined as modified Rankin Scale (mRS) scores of 0-2 and 0-3, respectively. Multivariable logistic regression analysis was conducted to identify factors impairing predictive power. Results: Three hundred and eighty-seven patients were included. NIHSS at 24 h had the highest discriminative power, with a receiver operating characteristic area under the curve of 0.87 (95% confidence interval: 0.83; 0.90) for good and 0.89 (0.85; 0.92) for favourable outcome; optimal cut-off values were ≤9 and ≤5. Higher age (odds ratio = 1.10 [1.05; 1.16]), adverse events during treatment (9.46 [1.52; 72.5]) and until discharge (18.34 [2.33; 172]), and high NIHSS scores at 24 h (1.29 [1.10; 1.53]) were independent predictors for turning the outcome prognosis from good (mRS ≤3) to poor (mRS ≥4). Conclusions: NIHSS at 24 h of ≤9 points serves best as a surrogate for good functional outcome after thrombectomy in PCS. Advanced age, severe neurological symptoms at admission and adverse events decrease its predictive value.
N/A abstract_id: PUBMED:18845799 Can early neurosonology predict outcome in acute stroke?: a metaanalysis of prognostic clinical effect sizes related to the vascular status. Background And Purpose: Prediction of short- and long-term prognosis is an important issue in acute stroke care. This metaanalysis explores the prognostic value of initial bed-side transcranial ultrasound in acute stroke. Methods: All studies prospectively applying TCCS or TCD within 24 hours of symptom onset in acute stroke, with a minimal cohort size of 20 patients, and reporting clinical outcome variables in relation to the vascular findings were included into this metaanalysis. Study quality was assessed by 2 independent reviewers. Results: Twenty-five studies with 1813 included patients identified by electronic and manual search fulfilled the inclusion criteria. Middle cerebral artery (MCA) occlusion was associated with a significantly increased risk for a fatal course of stroke (OR 2.46, 95% CI 1.33 to 4.52). Patients with patent MCA were more likely to clinically improve within 4 days than patients with MCA occlusion (OR 11.11, 95% CI 5.44 to 22.69). Full recanalization within 6 hours after symptom onset was highly significantly associated with clinical improvement within 48 hours (OR 5.64, 95% CI 3.82 to 8.31) and functional independence after 3 months (OR 6.07, 95% CI 3.94 to 9.35). Conclusions: Transcranial ultrasound provides important information on prognosis in patients with acute stroke. abstract_id: PUBMED:34754192 High Serum S100A12 Levels Predict Poor Outcome After Acute Primary Intracerebral Hemorrhage. Objective: Intracerebral hemorrhage (ICH) triggers an inflammatory cascade that damages brain tissues and worsens functional outcome. S100A12 functions to promote brain inflammation. We aimed to investigate the relationship between serum S100A12 levels and functional outcome in ICH patients. Methods: Serum S100A12 levels were measured in 101 ICH patients hospitalized within 24 h after symptom onset. Poor functional outcome was defined as a modified Rankin scale of 3 or greater at 3 months after stroke. Early neurologic deterioration was defined as an increase of ≥4 points in the National Institutes of Health Stroke Scale (NIHSS) score or death at 24 hours from symptoms onset. Results: High serum S100A12 levels were independently correlated with NIHSS score (t = 5.384, P < 0.001), hematoma volume (t = 4.221, P < 0.001) and serum C-reactive protein levels (t = 5.068, P < 0.001). Serum S100A12 levels were substantially higher in patients with a poor outcome (median, 66.5 versus 37.7 ng/mL; P < 0.001) or early neurological deterioration (median, 76.5 versus 40.1 ng/mL; P < 0.001) than in the other remainders, independently predicted a poor outcome (odds ratio, 1.035; 95% confidence interval, 1.007-1.064; P = 0.015) and early neurologic deterioration (odds ratio,1.032; 95% confidence interval, 1.003-1.060; P = 0.027), and significantly discriminated a poor outcome (area under curve, 0.794; 95% confidence interval, 0.702-0.868) and early neurologic deterioration (area under curve, 0.760; 95% confidence interval, 0.664-0.839) under receiver operating characteristic curve. Conclusion: High serum S100A12 levels at admission are highly associated with the extent of inflammatory response, severity, a poor functional outcome and early neurologic deterioration in ICH patients, substantializing serum S100A12 as a promising prognostic biomarker for ICH. 
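A brief aside on the sample-size statement in the study-design abstract above (PUBMED:12470856): the quoted parameters (alpha = 0.05 two-sided, power = 0.8, a baseline probability of 0.4 and detection of a 20% difference) correspond to a standard two-proportion calculation. The sketch below reproduces that arithmetic; it is illustrative only, the 0.4-versus-0.6 split is an assumed reading of "a 20% difference", and the published total of 400 additionally reflects the screening assumptions stated in the abstract rather than the per-group formula alone.

```python
# Illustrative two-proportion sample-size calculation (not the authors' code).
# Parameters taken from the PUBMED:12470856 abstract: alpha = 0.05 (two-sided),
# power = 0.8, baseline probability 0.4, absolute difference 0.20 (assumed 0.4 vs 0.6).
from scipy.stats import norm

alpha, power = 0.05, 0.80
p1, p2 = 0.40, 0.60            # probability of a mild deficit with vs without MCA occlusion
z_a = norm.ppf(1 - alpha / 2)  # ~1.96
z_b = norm.ppf(power)          # ~0.84

# Classic normal-approximation formula for two independent proportions
n_per_group = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
print(round(n_per_group))      # ~94 patients per comparison group

# The published total of 400 screened patients also reflects the abstract's
# eligibility assumptions (50% with a moderate-to-severe deficit, of whom 30%
# have an MCA occlusion), which are applied on top of the per-group estimate.
```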
Answer: Yes, early neurosonology can predict outcome in acute stroke. A meta-analysis exploring the prognostic value of initial bedside transcranial ultrasound in acute stroke found that middle cerebral artery (MCA) occlusion was associated with a significantly increased risk for a fatal course of stroke. Patients with patent MCA were more likely to clinically improve within 4 days than patients with MCA occlusion. Full recanalization within 6 hours after symptom onset was highly significantly associated with clinical improvement within 48 hours and functional independence after 3 months (PUBMED:18845799). Additionally, neurosonology is a useful diagnostic tool for studying vascular status in patients with acute stroke, offering valuable diagnostic information in various situations such as MCA occlusion, 'T' internal carotid artery occlusion, and monitoring of intracranial artery occlusions. It has an essential role in the diagnosis of vascular status and in therapeutic decision-making of acute ischemic stroke patients (PUBMED:23250680). Furthermore, the Neurosonology in Acute Ischaemic Stroke Study is designed to determine whether extracranial and transcranial Doppler and duplex sonography performed within 6 hours after onset of stroke improves prediction of functional outcome when applied in addition to routine diagnostic admission investigations (PUBMED:12470856). In summary, early neurosonology has been shown to be a valuable tool in predicting outcomes in acute stroke, particularly when assessing the status of the middle cerebral artery and the presence of occlusions or recanalization.
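For readers unfamiliar with how the prognostic effect sizes quoted in the meta-analysis above (for example, OR 5.64, 95% CI 3.82 to 8.31 for recanalisation and early clinical improvement, PUBMED:18845799) are derived, the following minimal sketch shows the standard odds-ratio and Wald confidence-interval calculation from a single 2×2 table. The counts are hypothetical and do not reproduce any study in the meta-analysis.

```python
# Hypothetical 2x2 table: rows = recanalised vs not, columns = improved vs not.
# Counts are illustrative only; they do not reproduce PUBMED:18845799.
import math

a, b = 60, 20   # recanalised: improved / not improved
c, d = 30, 70   # not recanalised: improved / not improved

or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = 1.96  # two-sided 95% interval
lo = math.exp(math.log(or_hat) - z * se_log_or)
hi = math.exp(math.log(or_hat) + z * se_log_or)

print(f"OR = {or_hat:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# A pooled meta-analytic OR is obtained by combining such study-level
# log-odds ratios, weighted by the inverse of their variances.
```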
Instruction: Is membrane extraction in cases of exudative age-related macular degeneration still up-to-date? Abstracts: abstract_id: PUBMED:14573972 Is membrane extraction in cases of exudative age-related macular degeneration still up-to-date? A 4-year résumé. Background: Age-related macular degeneration (AMD) is a frequent cause of an irreversible loss of the ability to read. The non-exudative form of AMD has not been therapeutically approached in the past in contrast to the exudative form with choroidal neovascularizations (CNVs). Parafoveal laser coagulation can be applied, and in cases of subfoveal location a pars plana vitrectomy with subretinal resection of the CNV is possible. Material And Methods: Since 1995, we have operated 46 eyes of 45 patients with CNV developing from AMD. Patient ages ranged from 63 to 85 years (mean 71.8 years). Pre- and postoperatively we performed vision tests, fluorescence angiographies with sodium fluorescein and indocyanine green. Follow-up times ranged from 3 to 28 months (mean 12.3 months). Results: Pre-operative vision was 0.10 (range: hand movements to 0.4). Postoperative vision at the end of the follow-up period was 0.12 (range: hand movements to 0.4). Vision at the end of the follow-up was lower in 41%, unchanged in 20% and improved in 39%. In 43 eyes, a non-exudative form of AMD developed. Two eyes had a recurrent CNV, which was removed successfully with a second pars plana vitrectomy. Three patients developed a retinal detachment, which was successfully treated by pars plana vitrectomy, encircling buckle and gas tamponade. Conclusions: We still have to wait for the results of the photodynamic study trials and a randomized study of macular dislocation. Subretinal removal of the CNV by pars plana vitrectomy allows a stabilization of the visual function in most of our cases of AMD. This method inhibits the development of large pseudotumour-like scars. Postoperatively remaining pigment epithelial defects with choroidal atrophies however limit a visual rehabilitation so that reading vision can only be achieved in cases with good pre-operative vision. Long-term results of photodynamic therapy are still lacking and have to show its effectiveness over greater time spans. abstract_id: PUBMED:8874425 Age-related macular degeneration after extracapsular cataract extraction with intraocular lens implantation. Purpose: To evaluate the course of age-related maculopathy after cataract surgery. Methods: Included were 47 patients with bilateral, symmetric, early age-related macular degeneration (AMD), documented by fluorescein angiography, who underwent extracapsular cataract extraction with intraocular lens implantation in one eye. The fellow eye served as the control. The patients were retrospectively reviewed or prospectively followed. Results: Wet AMD developed in nine eyes (19.1%) that were treated with surgery compared with two fellow eyes (4.3%). It was detected within 3 months of surgery in four (44.4%) of the nine affected eyes and within 6 to 12 months of surgery in four other eyes (44.4%). Progression to wet AMD occurred significantly more often in men than in women (P < 0.05). Soft drusen were found as a significant ocular risk factor (P < 0.05). The final visual outcome was poor in all eyes with such progression. Conclusions: In this study, progression of AMD occurred more often in the surgical eyes compared with the fellow eyes. However, the reasons for the progression of AMD after cataract surgery are still uncertain. 
Further prospective studies are needed to investigate this observation. abstract_id: PUBMED:7516148 Granulomatous reaction to Bruch's membrane in age-related macular degeneration. The histopathologic features of a granulomatous reaction in one eye of a patient with neovascular age-related macular degeneration are presented. Multiple multinucleated giant cells were found in intimate association with Bruch's membrane and at the margin of Bruch's membrane defects. Multinucleated giant cells appear to participate in the breakdown of Bruch's membrane and, together with diffuse disease of the retinal pigment epithelium and changes in the physicochemical properties of Bruch's membrane, may provide angiogenic stimulus for choroidal neovascularization in age-related macular degeneration. abstract_id: PUBMED:22542780 Understanding age-related macular degeneration (AMD): relationships between the photoreceptor/retinal pigment epithelium/Bruch's membrane/choriocapillaris complex. There is a mutualistic symbiotic relationship between the components of the photoreceptor/retinal pigment epithelium (RPE)/Bruch's membrane (BrMb)/choriocapillaris (CC) complex that is lost in AMD. Which component in the photoreceptor/RPE/BrMb/CC complex is affected first appears to depend on the type of AMD. In atrophic AMD (~85-90% of cases), it appears that large confluent drusen formation and hyperpigmentation (presumably dysfunction in RPE) are the initial insult and the resorption of these drusen and loss of RPE (hypopigmentation) can be predictive for progression of geographic atrophy (GA). The death and dysfunction of photoreceptors and CC appear to be secondary events to loss in RPE. In neovascular AMD (~10-15% of cases), the loss of choroidal vasculature may be the initial insult to the complex. Loss of CC with an intact RPE monolayer in wet AMD has been observed. This may be due to reduction in blood supply because of large vessel stenosis. Furthermore, the environment of the CC, basement membrane and intercapillary septa, is a proinflammatory milieu with accumulation of complement components as well as proinflammatory molecules like CRP during AMD. In this toxic milieu, CC die or become dysfunction making adjacent RPE hypoxic. These hypoxic cells then produce angiogenic substances like VEGF that stimulate growth of new vessels from CC, resulting in choroidal neovascularization (CNV). The loss of CC might also be a stimulus for drusen formation since the disposal system for retinal debris and exocytosed material from RPE would be limited. Ultimately, the photoreceptors die of lack of nutrients, leakage of serum components from the neovascularization, and scar formation. Therefore, the mutualistic symbiotic relationship within the photoreceptor/RPE/BrMb/CC complex is lost in both forms of AMD. Loss of this functionally integrated relationship results in death and dysfunction of all of the components in the complex. abstract_id: PUBMED:33847999 Bruch's Membrane and the Choroid in Age-Related Macular Degeneration. A healthy choroidal vasculature is necessary to support the retinal pigment epithelium (RPE) and photoreceptors, because there is a mutualistic symbiotic relationship between the components of the photoreceptor/retinal pigment epithelium (RPE)/Bruch's membrane (BrMb)/choriocapillaris (CC) complex. This relationship is compromised in age-related macular degeneration (AMD) by the dysfunction or death of the choroidal vasculature. 
This chapter will provide a basic description of the human Bruch's membrane and choroidal anatomy and physiology and how they change in AMD. The choriocapillaris is the lobular, fenestrated capillary system of the choroid. It lies immediately posterior to the pentalaminar Bruch's membrane (BrMb). The blood supply for this system is the intermediate blood vessels of Sattler's layer and the large blood vessels in Haller's layer. In geographic atrophy (GA), an advanced form of dry AMD, large confluent drusen form on BrMb, and hyperpigmentation (presumably dysfunction in RPE) appears to be the initial insult. The resorption of these drusen and loss of RPE (hypopigmentation) can be predictive for progression of GA. The death and dysfunction of CC and photoreceptors appear to be secondary events to loss in RPE. The loss of choroidal vasculature may be the initial insult in neovascular AMD (nAMD). We have observed a loss of CC with an intact RPE monolayer in nAMD, leaving the overlying RPE hypoxic. These hypoxic cells then produce angiogenic substances like vascular endothelial growth factor (VEGF), which stimulate growth of new vessels from CC, resulting in choroidal neovascularization (CNV). Reduction in blood supply to the CC, often stenosis of intermediate and large blood vessels, is associated with CC loss. The polymorphisms in the complement system components are associated with AMD. In addition, the environment of the CC, basement membrane and intercapillary septa, is a proinflammatory milieu with accumulation of proinflammatory molecules like CRP and complement components during AMD. In this toxic milieu, CC die or become dysfunctional even early in AMD. The loss of CC might be a stimulus for drusen formation since the disposal system for retinal debris and exocytosed material from RPE would be limited. Ultimately, the photoreceptors die of lack of nutrients, leakage of serum components from the neovascularization, and scar formation. Therefore, the mutualistic symbiotic relationship of the photoreceptor/RPE/BrMb/CC complex is lost in both forms of AMD. Loss of this functionally integrated relationship results in death and dysfunction of all of the components in the complex. abstract_id: PUBMED:34037377 Targeted Noninvasive Treatment of Choroidal Neovascularization by Hybrid Cell-Membrane-Cloaked Biomimetic Nanoparticles. Choroidal neovascularization (CNV) is the leading cause of vision loss in many blinding diseases, but current antiangiogenic therapies with invasive intravitreal injection suffer from poor patient compliance and a rate of devastating ocular complications. Here, we develop an alternative antiangiogenic agent based on hybrid cell-membrane-cloaked nanoparticles for noninvasively targeted treatment of CNV. The retinal endotheliocyte membrane coating provides as-fabricated nanoagents with homotypic targeting capability and binding ability to the vascular endothelial growth factor. The fusion of red blood cell membranes protects the hybrid membrane-coated nanoparticles from phagocytosis by macrophages. In a laser-induced wet age-related macular degeneration mouse model, a significantly enhanced accumulation is observed in CNV regions after intravenous delivery of the hybrid membrane-coated nanoparticles. Moreover, an excellent therapeutic efficacy is achieved in reducing the leakage and area of CNV. Overall, the biomimetic antiangiogenic nanoagents provide an effective approach for noninvasive treatment of CNV.
abstract_id: PUBMED:9545783 Characteristics of drusen and changes in Bruch's membrane in eyes with age-related macular degeneration. Histological study Background: Different types of drusen and changes in Bruch's membrane have been associated with age-related macular degeneration (AMD). Methods: We compared 51 eyes with different stages of AMD with 40 age-matched controls using light microscopy. The degree of calcification of Bruch's membrane, fragmentation of Bruch's membrane, number of different types of drusen, and basal laminar deposit (BLD) were assessed. Results: In the macular area, the presence of basal laminar deposit was most strongly associated with the presence of AMD. There was a statistically significant difference observed in the degree of calcification and fragmentation of Bruch's membrane in eyes with AMD as compared to controls. Eyes with AMD displayed significantly more soft, confluent, and large drusen as compared to controls. Conclusion: Calcification and fragmentation of Bruch's membrane, soft, confluent, and large drusen and BLD but not hard drusen correlated strongly with the histologic presence of AMD. Calcification and fragmentation of Bruch's membrane seem to facilitate ingrowth of choroidal neovascular membranes with consecutive development of exudative AMD. abstract_id: PUBMED:17071125 Maculoplasty for age-related macular degeneration: reengineering Bruch's membrane and the human macula. Age-related macular degeneration (AMD) is the leading cause of blindness in the western world. Over the last decade, there have been significant advances in the management of exudative AMD with the introduction of anti-VEGF drugs; however, many patients with exudative AMD continue to lose vision and there are no effective treatments for advanced exudative AMD or geographic atrophy. Initial attempts at macular reconstruction using cellular transplantation have not been effective in reversing vision loss. Herein we discuss the current status of surgical attempts to reconstruct damaged subretinal anatomy in advanced AMD. We reinforce the concept of maculoplasty for advanced AMD, which is defined as reconstruction of macular anatomy in patients with advanced vision loss. Successful maculoplasty is a three-step process that includes replacing or repairing damaged cells (using transplantation, translocation or stimulation of autologous cell proliferation); immune suppression (if allografts are used to replace damaged cells); and reconstruction or replacement of Bruch's membrane (to restore the integrity of the substrate for proper cell attachment). In the current article we will review the rationale for maculoplasty in advanced AMD, and discuss the results of initial clinical attempts at macular reconstruction. We will then discuss the role of Bruch's membrane damage in limiting transplant survival and visual recovery, and discuss the effects of age-related changes within human Bruch's membrane on the initial attachment and subsequent proliferation of transplanted cells. We will discuss attempts to repair Bruch's membrane by coating with extracellular matrix ligands, anatomic reconstitution of the inner collagen layer, and the effects of Bruch's membrane reconstruction of ultrastuctural anatomy and subsequent cell behavior. Lastly, we will emphasize the importance of continued efforts required for successful maculoplasty. abstract_id: PUBMED:9046265 Characteristics of Drusen and Bruch's membrane in postmortem eyes with age-related macular degeneration. 
We performed a histopathologic study to compare eyes with different stages of age-related macular degeneration (AMD) with age-matched eyes to identify characteristics associated with exudative vs nonexudative AMD. We analyzed 51 eyes, which were obtained from an eye bank, from 40 donors with different stages of AMD and compared them with 40 age-matched control eyes. The eyes were processed for light microscopy, and the degree of calcification of Bruch's membrane, fragmentation of Bruch's membrane, the number of different types of drusen, and the presence of basal laminar (linear) deposit were assessed in the macular and extramacular regions. In the macular area, a statistically significant difference was observed for the degree of calcification (P = .02) and fragmentation (P = .03) of Bruch's membrane in eyes with exudative AMD (1.6 and 5 per eye, respectively) compared with eyes with nonexudative AMD (0.8 and 1 per eye, respectively) and control eyes (0.8 and 0 per eye, respectively). Eyes with AMD displayed notably softer, more confluent, and larger drusen and basal laminar (linear) deposit in the macular area compared with control eyes. Calcification and fragmentation of Bruch's membrane, soft, confluent, and large drusen, and basal laminar (linear) deposit but not hard drusen correlate with the histological presence of AMD. The degree of calcification and fragmentation of Bruch's membrane is greater in eyes with exudative compared with nonexudative AMD. abstract_id: PUBMED:15309487 Accumulation of lipid particles in Bruch's membrane of LDL receptor knockout mice as a model of age-related macular degeneration Background: Atherosclerosis is a suspected risk factor for the development of neovascular age-related macular degeneration (AMD). Methods: We used a well-established murine knockout model with low-density lipoprotein (LDL) receptor deficiency for atherosclerotic vascular pathogenesis to evaluate changes in Bruch's membrane due to high cholesterol levels. Blood cholesterol levels were modified by the diet fed (standard rodent diet or high-fat diet western type). Animals were sacrificed and plasma cholesterol levels were determined. Eyes were examined by transmission electron microscopy (TEM). Results: Plasma total cholesterol levels were highest in LDL receptor-deficient mice after high-fat diet and elevated in LDL receptor-deficient mice after standard diet compared to control mice with and without special high-fat diet. While Bruch's membranes of control animals did not exhibit any visible changes by TEM even after a high-fat diet, membrane-bound translucent particles were seen in all membranes in knockout mice. The amount of these particles was substantially increased and membranes were thickened in knockout animals following high-fat diet with additional deposits of non-membrane-bound particles. Conclusion: LDL receptor-deficient mice exhibited a degeneration of Bruch's membrane with accumulation of lipid particles, which is further increased after fat intake due to elevated blood lipid levels. In our opinion, this animal model is suitable for investigating more aspects in the pathogenesis of neovascular AMD. Answer: Based on the provided abstracts, membrane extraction, specifically subretinal removal of choroidal neovascularization (CNV) by pars plana vitrectomy, was considered a treatment option for exudative age-related macular degeneration (AMD) to stabilize visual function in most cases (PUBMED:14573972). 
However, the abstract also notes that postoperative remaining pigment epithelial defects with choroidal atrophies limit visual rehabilitation, suggesting that while the procedure can stabilize vision, it may not significantly improve it, especially in cases where pre-operative vision is not good. The abstracts also discuss the complexity of AMD and the importance of the mutualistic symbiotic relationship between the photoreceptor/retinal pigment epithelium (RPE)/Bruch's membrane/choriocapillaris complex, which is lost in AMD (PUBMED:22542780, PUBMED:33847999). The loss of this relationship results in the death and dysfunction of all components in the complex, indicating that treatments need to address multiple aspects of the disease. Furthermore, new treatment approaches are being explored, such as the use of hybrid cell-membrane-cloaked biomimetic nanoparticles for noninvasive targeted treatment of CNV (PUBMED:34037377). This suggests that while membrane extraction may still be used, there is ongoing research into alternative and potentially less invasive treatments. In conclusion, while membrane extraction has been a treatment for exudative AMD, the field is evolving with new studies and potential therapies that may offer improved outcomes or less invasive options. The abstracts do not provide a definitive answer on whether membrane extraction is still up-to-date, but they do highlight the complexity of treating AMD and the need for continued research into effective therapies.
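One methodological note on the cataract-surgery series cited above (PUBMED:8874425, progression to wet AMD in 9 of 47 operated eyes versus 2 of 47 fellow eyes): a paired fellow-eye design of this kind is usually tested on the discordant pairs with McNemar's test. The abstract reports only the marginal counts, so the pair table below is hypothetical (chosen to be consistent with those margins) and serves purely to illustrate the mechanics, not to reproduce the authors' analysis.

```python
# Paired fellow-eye comparison (cf. PUBMED:8874425: 9/47 operated vs 2/47 fellow
# eyes progressing to wet AMD). A paired design is usually tested on the
# discordant pairs with McNemar's test; the abstract reports only marginal
# counts, so the 2x2 pair table below is hypothetical.
from statsmodels.stats.contingency_tables import mcnemar

#                          fellow eye progressed   fellow eye stable
# operated eye progressed           1                      8
# operated eye stable               1                     37
table = [[1, 8],
         [1, 37]]

result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(f"p-value = {result.pvalue:.3f}")
```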
Instruction: Endovascular treatment of diabetic foot in a selected population of patients with below-the-knee disease: is the angiosome model effective? Abstracts: abstract_id: PUBMED:23358605 Endovascular treatment of diabetic foot in a selected population of patients with below-the-knee disease: is the angiosome model effective? Purpose: To evaluate the efficacy of percutaneous transluminal angioplasty (PTA) in a selected population of diabetic patients with below-the-knee (BTK) disease and to analyze the reliability of the angiosome model. Methods: We made a retrospective analysis of the results of PTA performed in 201 diabetic patients with BTK-only disease treated at our institute from January 2005 to December 2011. We evaluated the postoperative technical success, and at 1, 6, and 12 months' follow-up, we assessed the rates and values of partial and complete ulcer healing, restenosis, major and minor amputation, limb salvage, and percutaneous oximetry (TcPO2) (Student's t test). We used the angiosome model to compare different clinicolaboratory outcomes in patients treated by direct revascularization (DR) from patients treated with indirect revascularization (IR) technique by Student's t test and the χ(2) test. Results: At a mean ± standard deviation follow-up of 17.5 ± 12 months, we observed a mortality rate of 3.5 %, a major amputation rate of 9.4 %, and a limb salvage rate of 87 % with a statistically significant increase of TcPO2 values at follow-up compared to baseline (p < 0.05). In 34 patients, treatment was performed with the IR technique and in 167 by DR; in both groups, there was a statistically significant increase of TcPO2 values at follow-up compared to baseline (p < 0.05), without statistically significant differences in therapeutic efficacy. Conclusion: PTA of the BTK-only disease is a safe and effective option. The DR technique is the first treatment option; we believe, however, that IR is similarly effective, with good results over time. abstract_id: PUBMED:22231527 Endovascular procedures and new insights in diabetic limb salvage. Critical limb ischemia (CLI) is affecting an increasing number of patients, mainly due to an ageing population and the growing number of diabetics. Clinically, CLI is characterized by rest pain, non-healing foot wounds and gangrene, due to insufficient arterial blood supply. Limb preservation should be the goal in patients with diabetic foot due to tibial occlusive disease. As surgery is associated with considerable morbidity and mortality rates, endovascular therapy can offer a valuable alternative. Small-diameter below-the-knee arteries that were previously unamenable to surgical methods, can now be reached and treated. Currently, many endovascular techniques are available, from regular PTA and bare metal stents to drug-coated balloons and drug-eluting stents. In our opinion the results of endovascular therapy for below-the-knee vessels will be further improved by the continuous technical evolution and new material developments. In the light of the current evolution towards minimally invasive techniques, an increasing number of experienced centers will be able to treat the vast majority of all below-the-knee arterial pathology by endovascular means. abstract_id: PUBMED:32123676 A Single-Center Experience on Below-The-Knee Endovascular Treatment in Diabetic Patients. Diabetic ulceration of the foot is a major global medical, social and economic problem and is the most frequent end-point of diabetic complications. 
A retrospective analysis from February 2017 to May 2019 of diabetic patients presenting with below-the-knee arterial disease (PAD) was carried out. Only patients treated with endovascular techniques as first choice treatment were evaluated. Outcomes measured were perioperative mortality and morbidity. Freedom from occlusion, secondary patency and amputation rate were all registered. Additional maneuvers including stenting or angioplasty with drug eluting balloon (DEB) were reported. A total of 167 (101 male/66 female) patients with a mean age of 71 years were included in the study. Rutherford categories 3, 4, 5 and 6 were reported in 5, 7, 110 and 45 patients, respectively. No perioperative mortality was reported. Morbidity occurred in 4 (4.4%) cases and consisted of pseudoaneurysm. Additional stenting during the first procedure was required in 7 (4%) patients, and a drug eluting balloon was needed in 56 (33%) patients. At 1-year follow-up, estimated freedom from occlusion and secondary patency were 70% and 80%, respectively. The major amputation rate was 2.4%, and the minor amputation rate was 41.9%. In our experience, extreme revascularization in search of distal direct flow reduces the rate of amputations with an increase in ulcer healing. New materials and techniques such as drug eluting technology, used properly, can improve outcome. abstract_id: PUBMED:36223156 Endovascular treatment for critical limb ischemia in patients with diabetes mellitus: new opportunities and prospects Diabetes mellitus (DM) is still one of the most common endocrine diseases despite all available technologies in modern medicine. In recent years, it was shown that the severity and duration of DM are closely associated with vascular wall lesions (so-called micro- and macroangiopathy). One of the severe clinical signs is damage to lower limb arteries followed by trophic and purulent-necrotic lesions of soft tissues (diabetic foot syndrome) and risk of amputation. The authors review the possibilities of endovascular treatment of critical limb ischemia in patients with diabetes mellitus. The features of endovascular interventions depending on clinical and morphological peculiarities of vascular lesions are discussed. The authors compared the results of open and endovascular treatment of lower limb ischemia and determined further prospects for improving the treatment of these patients. abstract_id: PUBMED:35655119 Feasibility of the Complete Endovascular Reconstruction of the Trifurcation (CERT) Technique for Revascularisation in Chronic Limb Threatening Ischemia. Background: Revascularisation of patients with chronic limb threatening ischaemia due to arterial lesions in the below the knee segment can be challenging. This study describes a novel technique that allows a complete endovascular reconstruction of the trifurcation (CERT) utilising stents in the below the knee segment when conventional techniques are exhausted, or have failed to deliver an acceptable result, leading to remaining outflow compromise. Methods: Eight patients with Rutherford 5 chronic limb threatening ischaemia underwent CERT between January 1st, 2018 and January 1st, 2020. All patients underwent ultrasound at 6 weeks post operatively and then at variable intervals until the completion of the follow up period in March 2020. Results: Technical success of the CERT technique was achieved in all patients. Six patients had anterior tibial artery/Tibioperoneal trunk reconstructions, whilst 2 patients were stented directly into the posterior tibial and peroneal arteries.
Five patients (63%) achieved wound healing. All-cause mortality was 25% (2 patients) with 1 patient achieving wound healing prior to death. Two stents were occluded during the follow up period. The first was asymptomatic and had achieved wound healing. The second was symptomatic with stent occlusion and a delayed presentation with Rutherford 3 acute limb ischaemia. Conclusions: Complete endovascular reconstruction of the trifurcation is a feasible option to achieve revascularisation in patients with tissue loss and below the knee arterial lesions allowing a continuous reconstruction of the trifurcation segment keeping the anatomical configuration intact. Clinical outcomes appear acceptable however larger series are needed. abstract_id: PUBMED:35530835 The Role of Endovascular Procedure for Peripheral Arterial Disease in Diabetic Patients With Chronic Limb-Threatening Ischemia. Type 2 diabetes mellitus is a major risk factor for all forms of cardiovascular diseases, including peripheral arterial disease (PAD). Chronic limb-threatening ischemia (CLTI) is determined by the presence of ischemic rest pain, and may or may not be accompanied by tissue loss (such as ulcers and gangrene) or infection. Treatments for CLTI consist of wound treatment, infection control, and ischemia control by arterial revascularization, which can be performed by either open surgical procedure (bypass) or an endovascular approach. We present two cases of chronic limb-threatening ischemia, one with an above-knee lesion and the other with a below-knee lesion. In addition to good wound treatment and glucose control as the risk factor management, we performed endovascular therapy in both patients. Both patients showed good wound improvement after the procedure. abstract_id: PUBMED:19937784 Indications and clinical outcomes for below knee endovascular therapy: review article. Chronic critical limb ischemia (CLI) still represents the most common cause for amputation and frequently the possibility for peripheral revascularization, particularly in below knee (BK) arteries, is not adequately evaluated before amputation. This may also be due to the fact that even today, there's some confusion about results of the endovascular treatment in this territory. Diabetics, representing the population most frequently affected by CLI, have specific clinical characteristics, the so called diabetic foot syndrome, which cannot be compared with the situation in nondiabetic patients with ischemic ulcers. Measuring the success of BK endovascular therapy can be a difficult issue, considering that it is often the work of a multidisciplinary team. The clinical benefit of BK endovascular therapy often shows a large discrepancy from the primary patency. While ulcer healing, limb salvage, and reintervention rates are usually low after BK endovascular therapy, rates of restenosis remain excessively high. Nevertheless, the positive impact of revascularization on mortality, which mainly depends on the major amputation rate reduction, is also evident. This review article summarizes indications and clinical outcomes after BK endovascular therapy with special attention to the role of diabetes mellitus in patients with CLI. abstract_id: PUBMED:22271721 Diabetic foot and PAD: the endovascular approach. Diabetic foot ulceration (DFU) is recognized as one of the most serious complications of diabetes. Active revascularisation plays a crucial role in achieving ulcer healing. 
Non-surgical, minimally invasive revascularisation options for DFU have expanded over the last decade and have become a prominent tool to prevent amputation. Endovascular treatment of arterial DFU lesions is mainly concentrated in the below-the-knee arteries. The outcome of both open surgery and endovascular treatment is, broadly speaking, the same for the endpoints of ulcer healing and limb salvage and lies between 78% and 85%. The choice between endovascular treatment and open surgery should always be the outcome of a team discussion. Local expertise plays an important role in these discussions. In many institutions, the endovascular approach has now become the first-choice treatment option. The revascularisation of below-the-knee vessels needs experienced hands, team discussion and the right set of devices. Centralisation in DFU centres is therefore probably the best guarantee of the best outcome. abstract_id: PUBMED:37610397 Comparison Between the Efficacy of Spinal Cord Stimulation and of Endovascular Revascularization in the Treatment of Diabetic Foot Ulcers: A Retrospective Observational Study. Objective: We aimed to compare the effects of spinal cord stimulation (SCS) with those of endovascular revascularization on the treatment of diabetic foot ulcers. Materials And Methods: A total of 104 patients with diabetic foot ulcers who met the inclusion criteria were retrospectively analyzed and classified into the SCS treatment group (n = 46) and the endovascular revascularization treatment group (n = 46). The quality-of-life scores (Quality of Life Scale for Patients with Liver Cancer v2.0), visual pain analog scale score, lower limb skin temperature, lower limb arterial ultrasound results, and lower extremity electromyography results were analyzed to compare the efficacy of the two treatments for diabetic foot ulcers in the two groups before surgery and six months after surgery. Results: A total of 92 patients (men: 73.9%, mean age: 66.51 ± 11.67 years) completed the six-month postoperative follow-up period. The patients in the SCS treatment group had a higher quality-of-life score (25.54% vs 13.77%, p < 0.05), a larger reduction in pain scores (69.18% vs 37.21%, p < 0.05), and a larger reduction in foot temperature (18.56% vs 7.24%, p < 0.05) than those in the endovascular revascularization treatment group at six months after surgery. The degree of vasodilation in the lower limbs on color Doppler arterial ultrasound and the nerve conduction velocity were higher in the SCS treatment group than in the endovascular revascularization treatment group at six months after surgery (p < 0.05). Conclusion: SCS was more effective than endovascular revascularization in improving quality of life, relieving pain, improving lower limb skin temperature, increasing lower limb blood flow, and improving nerve conduction in patients with diabetic foot ulcers at six months after surgery. abstract_id: PUBMED:30537989 Angioplasty and stenting for below the knee ulcers in diabetic patients: protocol for a systematic review. Background: The worldwide incidence and prevalence of diabetes mellitus (DM) are increasing. DM has a high social and economic burden due to its complications and associated disorders. Peripheral arterial disease (PAD) is closely related to DM. More than 85% of patients with DM will develop PAD in their lifetime, and between 10 and 25% of patients with DM will have a foot ulcer.
In such cases, it is important to determine for each patient whether it is necessary and feasible to revascularise the affected limb as well as the optimal technique. Percutaneous transluminal angioplasty (PTA) is designed to restore blood flow through the vessel lumen by various devices including balloons, drug-coated balloons, bare stents, drug-eluting stents and endovascular atherectomes. This systematic review aims to evaluate the effects of PTA in the treatment of lower limb arterial ulcers in diabetic patients. Methods: We will search randomised controlled trials (RCTs) and quasi-RCTs in the following databases (e.g., MEDLINE via PubMed, EMBASE, Lilacs, Cochrane Central Register of Controlled Trials, Ibecs, CINAHL, AMED, World Health Organization International Clinical Trials Registry Platform, ClinicalTrials.gov , and OpenGrey). Our search strategy will use the following free-text terms and controlled vocabulary (e.g., Emtree, MeSH) for 'foot ulcer', 'leg ulcer', 'diabetic foot', 'Peripheral Arterial Disease', 'Diabetes Complications', 'Peripheral Vascular Diseases', 'critical limb ischemia', 'below the knee ulcer', 'angioplasty', 'stents', 'stenting', and 'endovascular procedures'. There will be no limits on date or language of publication. Two authors will, independently, select studies and assess the data from them. Risks of bias (RoB) of included studies will be evaluated using the Cochrane's RoB tool. If possible, we will perform and report structured summaries of the included studies and meta-analyses. Results are not available as this is a protocol for a systematic review, and we are currently in the phase of building a sensitive search strategy. Discussion: While there are several available endovascular techniques for revascularisation, it is unclear which technique has better outcomes for ulcers below the knee in diabetic patients. A systematic review is required to validate and demonstrate these techniques and their outcomes to allow an evidence-based clinical decision. Systematic Review Registration: PROSPERO CRD42017065171. Answer: The efficacy of percutaneous transluminal angioplasty (PTA) in treating diabetic patients with below-the-knee (BTK) disease was evaluated, and the angiosome model's reliability was analyzed in a retrospective study. The study found that PTA is a safe and effective option for BTK-only disease, with a limb salvage rate of 87% and a significant increase in transcutaneous partial oxygen pressure (TcPO2) values at follow-up compared to baseline. The study compared the outcomes of patients treated by direct revascularization (DR) with those treated by indirect revascularization (IR) and found no statistically significant differences in therapeutic efficacy between the two groups, suggesting that both DR and IR are effective treatment options (PUBMED:23358605). In summary, the angiosome model, which involves direct revascularization of the affected angiosome, appears to be effective in the endovascular treatment of diabetic foot in patients with below-the-knee disease. However, indirect revascularization also shows good results, indicating that the angiosome model is not the only effective approach for this patient population.
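As an aside, the direct-versus-indirect revascularization comparison in that study (PUBMED:23358605) rests on two standard tests: Student's t test for the paired change in TcPO2 and the chi-square test for categorical outcomes such as limb salvage. The sketch below illustrates both; all counts and measurements in it are invented for demonstration and are not the study's data.

```python
# Illustration of the two tests named in PUBMED:23358605 (Student's t test and
# chi-square) for comparing direct (DR) vs indirect (IR) revascularisation.
# All numbers below are invented for demonstration and are not the study's data.
import numpy as np
from scipy.stats import chi2_contingency, ttest_rel

# Chi-square: limb salvage vs major amputation by revascularisation route
#                  salvage  amputation
table = np.array([[150, 17],    # DR group (hypothetical)
                  [ 30,  4]])   # IR group (hypothetical)
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"chi-square p = {p_chi:.2f}")

# Paired t-test: TcPO2 (mmHg) at baseline vs follow-up in the same limbs
rng = np.random.default_rng(0)
baseline = rng.normal(25, 6, size=30)
followup = baseline + rng.normal(18, 8, size=30)   # simulated post-PTA rise
t, p_t = ttest_rel(followup, baseline)
print(f"paired t-test p = {p_t:.3g}")
```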
Instruction: Should maximal urethral closure pressure be performed before midurethral sling surgery for stress incontinence? Abstracts: abstract_id: PUBMED:27010558 Should maximal urethral closure pressure be performed before midurethral sling surgery for stress incontinence? A time to revisit. Introduction And Hypothesis: Maximum urethral closure pressure (MUCP) provides an objective assessment of urethral integrity, but its role in predicting outcome after midurethral sling (MUS) placement is debatable and current practice in the UK is variable. The study was carried out to determine if lower preoperative MUCP is associated with poor outcome following MUS. Method: The study was a retrospective review of the British Society of Urogynaecology (BSUG) database and urodynamics (UDS) data. Patients who reported outcome as "no improvement", "worse" or "much worse" on the Patient Global Impression of Improvement (PGII) scale were identified as having a poor outcome. Patients who reported "a little improvement", "improved" and "very much improved" on the PGII were thought to have a good outcome. The preoperative demographics, UDS findings and quality of life (International Consultation of Incontinence questionnaires [ICIQ-SF]) data of the two groups were compared. Result: A total of 236 women were identified for the study. Of these, 24 women (10.2 %) had a poor outcome. Of the remaining women reporting a good outcome, 50 cases were randomly selected. All urodynamic parameters, including mean functional urethral length (FUL), bladder capacity, and Qmax, were similar, except for mean MUCP 37.05 cm H2O, which was significantly lower in group 1 (poor outcome 37.05 cm H2O) compared with a mean MUCP of 50.6 cm H2O in group 2 (good outcome; p = 0.005). Conclusion: We conclude that failure following MUS is associated with preoperatively lower MUCP, which can be used as a predictor of failure. abstract_id: PUBMED:26927242 Is single incision midurethral sling effective in patients with low maximal urethral closure pressure? Objective: To ascertain whether low preoperational maximal urethral closure pressure (MUCP) affects the outcomes of single incision sling (SIS) procedures and changes MUCP values postsurgery. Material And Methods: There were 112 (MUCP ≥ 40 cmH2O, n = 88; MUCP < 40 cmH2O, n = 24) consecutive women with urodynamic stress incontinence who had undergone SIS (MiniArc) procedures included in this study. The threshold of 40 cmH2O was used since it has been shown to be a significant risk factor for failed incontinence surgery. Clinical outcomes were assessed by the cough stress test, the 1-hour pad test, the Incontinence Impact Questionnaire-Short Form, the Urogenital Distress Inventory six-item questionnaire, the Sexual Questionnaire-SF, and postoperative changes in the urodynamic parameters. A comparison of the 1-year follow-up data is presented. Results: Three months postsurgery, a significant decrease was observed in the 1-hour pad test, from 20.6 g preoperatively to 0.73 g postoperatively (p < 0.001). The objective cure rate was 82.1% without any significant differences between the two groups (p = 0.202). At 3 months and 1 year after surgery, significantly decreasing Urogenital Distress Inventory six-item questionnaire and Incontinence Impact Questionnaire-Short Form, and increasing Sexual Questionnaire-SF scores were observed in both groups, without any significant differences between the two groups. 
No statistically significant difference in the subjective cure rate was noted between the two groups at the 3-month and 18.4-month follow-ups. The postoperative MUCP was significantly decreased in the MUCP ≥ 40 group (p < 0.05) while significantly increased in the MUCP < 40 group (p = 0.006). Conclusions: These results suggest that SIS is a safe and highly effective treatment for urodynamic stress incontinence even in women with low MUCP at a mean follow-up of 18.4 months. Evaluation of the outcomes with more subjects after a longer follow-up period is necessary. abstract_id: PUBMED:30575112 The mechanics of urethral closure, incontinence and midurethral sling repair. Part 1 original experimental studies. (1990). Aims: To summarize the mechanics of urethral closure, incontinence, and midurethral sling repair, a work in 3 parts: Part 1. Original scientific studies (1990). Part 2. Experimental validation of reliance of the closure mechanisms on a competent PUL (1993-2003). Part 3. Surgery (1990-2016). Methods: Part 1. Two unrelated observations in the mid 1980s led to the discovery of the MUS: a hemostat applied on one side of the midurethral area of the vagina controlled urine loss on coughing without bladder neck elevation; an implanted Teflon tape caused a collagenous reaction. It was hypothesized that urinary stress incontinence (USI) was caused by collagen loss in the pubourethral ligament (PUL) and that a tape implanted in the exact position of the PUL would reinforce it and cure USI. A tape removable at 6 weeks was configured as an inverted "U" in the vagina and lowered sequentially. Results: At a certain point, the patient was continent on coughing but was able to pass urine freely. This proved the mechanism for continence was not obstructive. Post-op X-rays showed no elevation of the bladder neck. This invalidated Enhorning's Theory. Ultrasound showed closure of the distal urethra from behind and descent of the vaginal fornix on straining. This indicated there were two closure mechanisms: distal urethral and bladder neck. Three months following sling removal, there was a 50% failure rate. Conclusions: The 1990 results indicated a permanent sling was required for the MUS. Further proofs were required for the proposed musculoelastic mechanisms. abstract_id: PUBMED:30525259 The mechanics of urethral closure, incontinence, and midurethral sling repair Part 3 surgical applications (1990-2016). Part 3 briefly summarizes further development in midurethral sling (MUS) instruments and technique following the 1990 prototype operations, then critically examines the whole MUS surgical methodology, 1990 to present day. The aim is to identify positive and negative aspects of these methodologies which can be usefully applied to improve current MUS surgery. ANIMAL EXPERIMENTS: 1987-1988 proved that a collagenous neoligament could be formed by implantation of a tape. There was a wide variation in tissue reaction to implanted tapes. Inflammatory tissue reaction was very different from bacterial infection and was safe even when a sinus was formed. MUS METHODOLOGY: The key factor in avoiding major vessel and nerve injuries is to penetrate the perianal membrane with scissors, then insert the applicator. Importantly, this reveals any bleeding which could otherwise accumulate in the Space of Retzius and only be controlled by digital pressure.
The balance between too tight (retention) and too loose (incontinence) is analyzed in terms of the exponential relationship between urethral diameter and urine flow; why elastic tapes are more likely to cause post-operative urinary retention; how to minimize retention by tightening against an indwelling No. 18 Foley catheter; the importance of routinely repairing the distal closure mechanism with a purse-string suture to the external ligaments and fascial layer of the vagina; why minislings avoid most of the serious MUS complications; why a tensioned minisling allows greater precision when tightening the sling; and how anchors and individually knitted tapes give hope that tape erosions may decrease. abstract_id: PUBMED:30575103 The mechanics of urethral closure, incontinence and midurethral sling repair. Part 2 further experimental validation (1993-2003). In Part 1, the original 1990 science behind the MUS, the hypothesized closure mechanisms, and the prototype MUS itself were presented. The next phase of MUS development began in 1990 in collaboration with the late Ulf Ulmsten. It had two arms: further development of the prototype MUS, and further anatomical, imaging and urodynamic studies to validate the role of PUL in the closure mechanisms. A second series of prototype MUS operations performed under LA/sedation resulted in a permanently implanted polypropylene sling and the MUS as it is known today. The tape was elevated until no urine leaked on coughing. This demonstrated that the artificial PUL neoligament needed to be at a specific length to work. Anatomical, EMG and video ultrasound, and X-ray studies confirmed that three directional muscles contracted against the pubourethral (PUL) and uterosacral (USL) ligaments. The contribution of the horseshoe-shaped rhabdosphincter (RS) to continence was directly tested with pressure measurements under live surgery conditions. It was concluded that the RS was responsible for pressure generation but not continence. Continence was a consequence of intraurethral resistance to flow created by the distal and proximal urethral closure mechanisms, both governed ultimately by the Law of Poiseuille. CONCLUSIONS: The key element in curing USI is creation of a competent PUL using the collagenous neoligament surgical principle described in Part 1. This creates a firm insertion point for the three directional muscle forces, restoring their contractile strength and closure. abstract_id: PUBMED:34003308 Impact of intrinsic sphincter deficiency on mid-urethral sling outcomes. Introduction And Hypothesis: Our primary objective was to study outcomes of patients with intrinsic sphincter deficiency (ISD) following mid-urethral slings (MUS) at 1 year. Our secondary objective was to delineate factors affecting success in these patients. Methods: Six hundred eighty-eight patients who had MUS between January 2004 and April 2017 were reviewed retrospectively; 48 women were preoperatively diagnosed with ISD. All completed urodynamic studies and validated quality-of-life (QOL) questionnaires at baseline and 1 year. Primary outcomes were objective and subjective cure of stress incontinence, defined as no involuntary urine leakage during filling cystometry and 1-h pad test < 2 g and negative response to Urogenital Distress Inventory-6 Question 3. Ultrasound was performed to determine tape position, urethral mobility and kinking at 1 year. Results: Women with ISD had significantly lower objective and subjective cure rates of 52.1% and 47.9%, respectively, compared to an overall of 88.2% and 85.9%.
QOL scores significantly improved in those with successful surgeries. The sling type did not make a difference. Multivariate logistic regression identified reduced urethral mobility [OR 2.11 (1.24-3.75)], lower maximum urethral closure pressure (MUCP) [OR 1.61 (1.05-3.41)] and tape position [OR 3.12 (1.41-8.71)] to be associated with higher odds of failed slings for women with ISD. Conclusions: Although there are good overall success in women undergoing MUS, those with ISD have significantly lower cure rates at 1 year. Factors related to failure include reduced urethral mobility, low MUCP and relative tape position further away from the bladder neck. Optimal management of patients with ISD and reduced urethral mobility remains challenging. abstract_id: PUBMED:27641921 The Impact of Midurethral Sling Surgery on Sexual Activity and Function in Women With Stress Urinary Incontinence. Introduction: Stress urinary incontinence has a negative impact on sexual function. Aim: To assess the effect of midurethral sling surgery on sexual activity and function in women with stress urinary incontinence. Methods: This is a secondary analysis of the Value of Urodynamics Prior to Stress Incontinence Surgery (VUSIS-II) study, which assessed the value of urodynamics in women with (predominantly) stress urinary incontinence. Patients who underwent retropubic or transobturator sling surgery were included in the present study if information was available on sexual activity before and 12 months after surgery. Data were collected from a self-report validated questionnaire combined with non-validated questions. The association between midurethral sling surgery and sexual function (coital incontinence, satisfaction, and dyspareunia) was compared with McNemar χ(2) tests for nominal data and paired t-tests for ordinal data. Potentially influential factors were analyzed with univariable and multivariable logistic regression analyses. Main Outcome Measures: Changes in sexual activity and sexual function after midurethral sling surgery. Results: Information on sexual activity was available in 293 of the 578 women (51%) included in the VUSIS-II study. At baseline, 252 of 293 patients (86%) were sexually active vs 244 of 293 (83%) after 12 months. More patients with cured stress urinary incontinence were sexually active postoperatively (213 of 247 [86%] vs 31 of 46 [67%], P < .01). There was a significant decrease in coital incontinence (120 of 236 [51%] preoperatively vs 16 of 236 [7%] postoperatively, P < .01). De novo dyspareunia was present in 21 of 238 women (9%). There was a greater improvement in coital incontinence after placement of the retropubic sling compared with the transobturator sling (odds ratio = 2.04, 95% CI = 1.10-3.80, P = .02). Conclusion: These data show that midurethral sling surgery has an overall positive influence on sexual function in women with stress urinary incontinence. The retropubic sling is more effective than the transobturator sling for improvement of coital incontinence. De novo dyspareunia was present in 1 of 11 women. abstract_id: PUBMED:37917942 Comparison of Morbidity and Retreatment After Urethral Bulking or Midurethral Sling at the Time of Pelvic Organ Prolapse Repair. Objective: To compare postprocedure retreatment rates for stress incontinence in patients who underwent either midurethral sling or urethral bulking at the time of concomitant repair of pelvic organ prolapse (POP). Methods: This was a retrospective cohort study using data from the Premier Healthcare Database. 
Using Current Procedural Terminology codes, we identified patients who were undergoing POP repair and concomitant urethral bulking or midurethral sling between the years 2001 and 2018. Patients who underwent concomitant nongynecologic surgery, Burch urethropexy, or oncologic surgery, and those who did not undergo concomitant POP and anti-incontinence surgery, were excluded. Additional data collected included patient demographics, hospital characteristics, surgeon volume, and comorbidities. The primary outcome was a repeat anti-incontinence procedure at 2 years, and the secondary outcome was the composite complication rate. Results: Over the study period, 540 (0.59%) patients underwent urethral bulking, and 91,005 (99.41%) patients underwent midurethral sling. The rate of a second procedure within 2 years was higher for urethral bulking, compared with midurethral sling (9.07% vs 1.11%, P <.001); in the urethral bulking group, 4.81% underwent repeat urethral bulking and 4.81% underwent midurethral sling. In the midurethral sling group, 0.77% underwent repeat midurethral sling and 0.36% underwent urethral bulking. After adjusting for confounders, midurethral sling was associated with a decreased odds of a repeat anti-incontinence procedure at 2 years (adjusted odds ratio 0.11, 95% CI 0.08-0.16). The probability of any complication at 2 years was higher with urethral bulking (23.0% vs 15.0%, P <.001). Conclusion: Urethral bulking at the time of POP repair is associated with a higher rate of repeat procedure and postoperative morbidity up to 2 years after surgery. abstract_id: PUBMED:17014810 Is transobturator tape as effective as tension-free vaginal tape in patients with borderline maximum urethral closure pressure? Introduction: The purpose of this study was to compare transobturator tape (MONARC) with tension-free vaginal tape in patients with borderline low maximum urethral closure pressure. Study Design: Historical cohort analysis of 3-month outcomes in 145 subjects (MONARC = 85; tension-free vaginal tape = 60). A cut-off point of 42 cm H2O for preoperative maximum urethral closure pressure was identified as predictor of success in the entire cohort. The cohort was stratified by sling type and analyzed. Outcome variables included urodynamic stress incontinence, urethral pressure profiles, subjective stress incontinence symptoms, and complications. Results: The relative risk of postoperative urodynamic stress incontinence 3 months after surgery in patients with a preoperative maximum urethral closure pressure of 42 cm or less H2O was 5.89 (1.02 to 33.90, 95% confidence interval) when we compared MONARC with tension-free vaginal tape. Subjects in the MONARC and tension-free vaginal tape groups did not differ significantly in baseline characteristics. We defined subjects as failures if they demonstrated postoperative objective stress incontinence on multichannel urodynamic testing. Conclusion: In subjects with maximum urethral closure pressure of 42 cm or less H2O, the MONARC was nearly 6 times more likely to fail than tension-free vaginal tape at 3 months after surgery. Long-term follow-up and randomized controlled trials are needed. abstract_id: PUBMED:32107882 Mid-urethral sling revision for mesh exposure-long-term outcomes of two surgical techniques from a comparative clinical retrospective cohort study. Objective: To compare vaginal closure with versus without sling excision in the management of vaginal sling exposure following mid-urethral sling (MUS). 
Design: Clinical retrospective cohort study. Setting: Tertiary urogynaecological centre in Australia. Population: Women with urodynamic stress urinary incontinence (SUI) who had a MUS (n = 2823) during 1999-2017 with a follow-up period up to December 2018. Thirty-three women (1%) had sling exposure and 31 required surgical intervention (1%). Methods: Clinical review with analysis of surgical database and patient records. Main Outcome Measures: The primary outcome was successful closure and resolution of exposure-related symptoms without the need for re-surgery. Secondary outcomes were repeat procedure and recurrent incontinence following revision. Results: Mean follow up was 103 months. Of the 20 women with a primary excision and closure approach, 19 had successful closure. Seven of 11 women with simple vaginal closure without excision needed another surgery for recurrent mesh exposure. Recurrence of stress incontinence did not occur in any of the four who had 'successful' closure without excision. Of those who had sling division/removal without a concomitant stress continence procedure, 32% (7/22) required further surgery. Conclusion: Sling excision and repair have better outcomes with less recurrence of sling exposure compared with simple closure. Following sling removal, one of three women will develop SUI recurrence and require surgery. TWEETABLE ABSTRACT: #Slingexcision &repair leads to less recurrence of exposure versus #simpleclosure for #slingmeshexposure. Answer: The role of maximal urethral closure pressure (MUCP) in predicting the outcome of midurethral sling (MUS) surgery for stress incontinence is a subject of debate. However, several studies suggest that MUCP can be a valuable predictor of surgical outcomes. A retrospective review of the British Society of Urogynaecology database indicated that lower preoperative MUCP is associated with poorer outcomes following MUS placement. Specifically, the study found that failure following MUS is associated with preoperatively lower MUCP, suggesting that MUCP can be used as a predictor of failure (PUBMED:27010558). Another study examining the effectiveness of single incision sling (SIS) procedures in patients with low MUCP found that SIS is a safe and highly effective treatment for urodynamic stress incontinence even in women with low MUCP at a mean follow-up of 18.4 months. This suggests that while MUCP is an important factor, it may not be the sole determinant of success for certain types of MUS procedures (PUBMED:26927242). Research into the mechanics of urethral closure and MUS repair has provided insights into the importance of a competent pubourethral ligament (PUL) and the role of a collagenous neoligament in restoring continence. These studies have highlighted the complexity of the mechanisms involved in urethral closure and the importance of surgical technique in MUS outcomes (PUBMED:30575112, PUBMED:30525259, PUBMED:30575103). A study focusing on patients with intrinsic sphincter deficiency (ISD) found that these patients had significantly lower cure rates following MUS at 1 year. Factors such as reduced urethral mobility, lower MUCP, and tape position were associated with higher odds of failed slings in women with ISD (PUBMED:34003308). In conclusion, while MUCP is an important factor to consider, it should not be the only criterion for determining the suitability of MUS surgery for stress incontinence. 
The decision to measure MUCP before MUS surgery, and how much weight to give the result, should take into account the type of sling procedure, the patient's specific condition (such as the presence of ISD), and other individual factors that may influence surgical outcomes.
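As a supplement to this answer, the failure analyses cited above report adjusted odds ratios for predictors such as low MUCP, and PUBMED:17014810 reports a roughly six-fold relative risk of failure below an MUCP of 42 cm H2O. A minimal sketch of how these two effect measures are derived from a 2x2 outcome table is shown below; the counts are hypothetical and do not come from any of the cited studies.

```python
# Minimal sketch: odds ratio and relative risk from a 2x2 outcome table.
# The counts below are hypothetical and only illustrate the arithmetic;
# they are not data from any of the cited studies.

def odds_ratio(a, b, c, d):
    """a/b = exposed failures/successes, c/d = unexposed failures/successes."""
    return (a / b) / (c / d)

def relative_risk(a, b, c, d):
    """Risk of failure in the exposed vs the unexposed group."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical example: 10 failures among 40 women with MUCP <= 42 cm H2O
# versus 4 failures among 100 women with MUCP > 42 cm H2O.
a, b = 10, 30   # low-MUCP group: failures, successes
c, d = 4, 96    # normal-MUCP group: failures, successes

print(f"Odds ratio: {odds_ratio(a, b, c, d):.2f}")        # 8.00
print(f"Relative risk: {relative_risk(a, b, c, d):.2f}")  # 6.25
```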
Instruction: Can a state-of-the-art D-dimer test be used to determine the need for CT imaging in patients suspected of having pulmonary embolism? Abstracts: abstract_id: PUBMED:12238542 Can a state-of-the-art D-dimer test be used to determine the need for CT imaging in patients suspected of having pulmonary embolism? Rationale And Objectives: The purpose of this study was to determine whether a simple rapid blood test can obviate computed tomography (CT) in a sizable percentage of patients suspected of having pulmonary embolism, based on the hypothesis that negative D-dimer results could eliminate any further search for pulmonary embolism. Materials And Methods: At the authors' institution, 2,121 sequential patients underwent a whole-blood antibody agglutination test for cross-linked fibrin degradation products (D-dimer). Of these patients, 844 had positive test results and were not further considered. A retrospective review included reports of all multisection combined CT venographic and pulmonary angiographic studies obtained within 48 hours of the D-dimer assay for the 1,277 patients with negative D-dimer results; 229 (18%) of these 1,277 patients underwent combined CT venography and pulmonary angiography, usually within 24 hours. Results: Retrospective review of the imaging examinations that were discrepant with the D-dimer results revealed only three false-negative D-dimer results. Of the 229 patients in whom combined CT venography and pulmonary angiography was performed for suspected pulmonary embolism, 226 (98.7%) had no evidence of acute pulmonary embolism or deep venous thrombosis. The negative predictive value of a negative D-dimer result was therefore 98.7% (confidence interval, 96.2%-99.7%). Conclusion: The D-dimer assay is a simple rapid blood test that is sensitive to the presence of acute thrombosis. Very few patients with negative results have acute deep venous thrombosis or pulmonary embolism, with combined CT venography and pulmonary angiography used as the reference standard. abstract_id: PUBMED:33870000 Using Quantitative D-Dimer to Determine the Need for Pulmonary CT Angiography in COVID-19 Patients. Introduction: COVID-19 has been frequently cited as a condition causing a pro-inflammatory state leading to hypercoagulopathy and increased risk for venous thromboembolism. This condition has thus prompted prior studies and screening models that utilize D-dimer for pulmonary embolism (PE) into question. The limited research to date has failed to provide tools or guidance regarding what COVID-19 positive patients should receive pulmonary CT angiography screening. This knowledge gap has led to missed diagnoses, CT overutilization, and increased morbidity and mortality. Objective: The purpose of this study was to examine the utility of the quantitative D-dimer lab marker in a convenience sample of 426 COVID-19 positive patients to assist providers in determining the utility of pulmonary CT angiography. Methods: The authors conducted a retrospective analysis on all COVID-19 positive patients within the Henry Ford Medical System between March 1st, 2020 through April 30th, 2020 who received pulmonary CT angiography and had a quantitative D-dimer lab drawn within 24 hours of CT imaging. Results: Our sampling criteria yielded a total of n = 426 patients, of whom 347 (81.5%) were negative for PE and 79 (18.5%) were positive for PE. The average D-dimer in the negative PE group was 2.95 μg./mL. (SD 4.26), significantly different than the 9.15 μg./mL. 
(SD 6.80) positive PE group (P < 0.05; 95% CI -7.8, -4.6). Theoretically, applying the traditional ≤ 0.5 μg./mL. D-dimer cut-off to our data would yield a sensitivity of 100% and specificity of 7.49% for exclusion of PE. Based on these results, the authors would be able to increase the D-dimer threshold to < 0.89 μg./mL. to maintain their sensitivity to 100% and raise the specificity to 27.95%. Observing a D-dimer cut-off value of ≤ 1.28 μg./mL. would reduce sensitivity to 97.47% but increase the specificity to 57.93%. Conclusions: These study results support the utilization of alternative D-dimer thresholds to exclude PE in COVID-19 patients. Based on these findings, providers may be able to observe increased D-dimer cut-off values to reduce unnecessary pulmonary CT angiography scans. abstract_id: PUBMED:30084035 State-of-the-Art Imaging for the Evaluation of Pulmonary Embolism. Purpose Of Review: CT angiography has become the gold standard for evaluation of suspected pulmonary embolism; however, continuous evolution in radiology has led to new imaging approaches that offer improved options for detection and characterization of pulmonary embolism while exposing patients to lower contrast and radiation dose. The purpose of this review is to summarize state of the art imaging approaches for the evaluation of pulmonary embolism, focusing on technical innovations in this field. Recent Findings: The introduction of dual-energy CT has resulted in the ability to add functional and prognostic information beyond the morphologic assessment of the pulmonary arteries and potentially offer improved image quality without additional radiation burden. New approaches and strategies in CT scanning have resulted in decreased radiation exposure as well as a significant decrease in contrast material used without decreasing the sensitivity for detection of pulmonary embolism. Continuous developments and improvements in MR angiography techniques offer a valuable and efficient option for certain patient populations without the risk of radiation exposure. Improvements in the technical success rate and reliability of this modality will mean more widespread use in the future. Moving beyond planar ventilation/perfusion (V/Q) scintigraphy, nuclear imaging offers several new approaches, including the use of single photon emission computed tomography (SPECT) and SPECT/CT resulting in superior diagnostic performance and a decrease in nondiagnostic studies, potentially surpassing the diagnostic capabilities of computed tomography pulmonary angiography. Ongoing research in the use of V/Q PET/CT demonstrates superior temporal and spatial resolution and quantitative capabilities compared to SPECT-CT; this modality will likely play an increasing role in the detection and characterization of pulmonary embolism. The field of pulmonary embolism imaging has demonstrated continuous evolution in both development of novel techniques and improvement in current technologies, resulting in better detection, decreased radiation exposure, and enhanced functional information beyond morphologic characterization of the pulmonary vasculature. abstract_id: PUBMED:25990714 A simple decision rule including D-dimer to reduce the need for computed tomography scanning in patients with suspected pulmonary embolism. Background: An 'unlikely' clinical decision rule with a negative D-dimer result safely excludes pulmonary embolism (PE) in 30% of presenting patients. We aimed to simplify this diagnostic approach and to increase its efficiency. 
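As an aside on the COVID-19 D-dimer study above (PUBMED:33870000): the reported trade-off between sensitivity and specificity at cut-offs of 0.5, 0.89 and 1.28 μg/mL is simply a threshold scan over a 2x2 table. A minimal sketch of that calculation follows; the patient values and PE labels here are hypothetical placeholders, not the study's data.

```python
# Sketch of a D-dimer threshold scan for PE rule-out.
# d_dimer and has_pe are hypothetical placeholders standing in for
# per-patient D-dimer values (ug/mL) and CT-confirmed PE labels.

def sens_spec(d_dimer, has_pe, cutoff):
    """A result >= cutoff counts as positive; PE is ruled out below it."""
    tp = sum(1 for x, y in zip(d_dimer, has_pe) if x >= cutoff and y)
    fn = sum(1 for x, y in zip(d_dimer, has_pe) if x < cutoff and y)
    tn = sum(1 for x, y in zip(d_dimer, has_pe) if x < cutoff and not y)
    fp = sum(1 for x, y in zip(d_dimer, has_pe) if x >= cutoff and not y)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

d_dimer = [0.3, 0.6, 0.9, 1.4, 2.5, 7.8, 0.4, 1.1]            # hypothetical values
has_pe  = [False, False, False, False, True, True, False, True]

for cutoff in (0.5, 0.89, 1.28):   # cut-offs discussed in PUBMED:33870000
    se, sp = sens_spec(d_dimer, has_pe, cutoff)
    print(f"cutoff {cutoff:>4} ug/mL  sensitivity {se:.2f}  specificity {sp:.2f}")
```

Raising the cut-off trades sensitivity for specificity, which is exactly the behavior the authors describe when moving from 0.5 to 0.89 to 1.28 μg/mL.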
Methods: Data for 723 consecutive patients with suspected PE were analyzed (prevalence of PE, 22%). After constructing a logistic regression model with the D-dimer test result and items from the Wells' score, we identified the most prevalent combinations of influential items and selected new D-dimer positivity thresholds. The performance was separately validated with data from 2785 consecutive patients with suspected PE. Results: Three Wells items significantly added incremental value to the D-dimer test: hemoptysis, signs of deep vein thrombosis and 'PE most likely'. Based on the most frequent combinations of these three items, we identified two groups: (i) none of these three items positive (41%); (ii) one or more of these items positive (59%). When applying a 1000 μg/L D-dimer threshold in group 1 and 500 μg/L in group 2, PE could be excluded without CT scanning in 36%, at a false-negative rate of 1.2% (95%, 0.04-3.3%). In the validation set, these proportions were 46% and 1.9% (95% CI, 1.2-2.7%), respectively. Using the conventional Wells score with a normal D-dimer result, these rates were, respectively, 22% and 0.6% (95% CI, 0.10-2.4%). Conclusion: Combining Wells items with the D-dimer test resulted in a simplified decision rule, which reduces the need for CT scanning in patients with suspected PE. A prospective validation is required before it can be implemented in clinical practice. abstract_id: PUBMED:29806797 State-of-the-art pulmonary arterial imaging - Part 1. The pulmonary arteries are affected by a variety of congenital and acquired abnormalities. Multiple state-of-the art imaging modalities are available to evaluate these pulmonary arterial abnormalities, including computed tomography (CT), magnetic resonance imaging (MRI), echocardiography, nuclear medicine imaging and catheter pulmonary angiography. In part one of this two-part series on state-of-the art pulmonary arterial imaging, we review these imaging modalities, focusing particularly on CT and MRI. We also review the utility of these imaging modalities in the evaluation of pulmonary thromboembolism. abstract_id: PUBMED:15006941 Clinical utility of D-dimer in patients with suspected pulmonary embolism and nondiagnostic lung scans or negative CT findings. Background: The diagnosis of pulmonary embolism is difficult because the clinical diagnosis is nonspecific and all of the objective tests have limitations. The assay for plasma d-dimer may be useful as an exclusion test if results are negative. We conducted a prospective cohort study that evaluated the clinical utility (usefulness) of an automated quantitative d-dimer test in the diagnosis of patients with suspected pulmonary embolism. Methods: Consecutive eligible patients who had clinically suspected PE with nondiagnostic lung scans or negative helical CT scan of the chest results underwent d-dimer testing. Results: The d-dimer results were negative in 11 of 103 inpatients (10.6%, 95% confidence interval [CI], 5.5 to 18.3%) and 7 of 22 outpatients (31.8%, 95% CI, 13.9 to 54.9%; p = 0.02). Conclusions: Measurement of plasma d-dimer is of limited clinical utility for inpatients with clinically suspected pulmonary embolism and nondiagnostic lung scans or negative helical CT results at a US academic health center. abstract_id: PUBMED:11798202 The use of a D-dimer assay in patients undergoing CT pulmonary angiography for suspected pulmonary embolus. 
Purpose: To assess the ability of a semi-quantitative latex agglutination D-dimer test (Accuclot), combined with bedside measurements of arterial oxygen saturation, respiratory and cardiac rates, to exclude pulmonary embolism (PE) on computed tomographic pulmonary angiography (CTPA). Materials And Methods: All patients referred to our CT unit for investigation of suspected acute pulmonary embolism were enrolled. Pulse oximetry, respiratory rate, heart rate and blood sampling for D-dimer testing were carried out just before CT. A high resolution CT (HRCT) of the chest was followed by a CT pulmonary angiogram (CTPA). The images were independently interpreted at a workstation with cine-paging and 2D reformation facilities by three consultant radiologists blinded to the clinical and laboratory data. If positive, the level of the most proximal embolus was recorded. Discordant imaging results were re-read collectively and consensus achieved. Results: A total of 101 patients were enrolled. The CTPA was positive for PE in 28/101 (28%). The D-dimer was positive in 65/101 (65%). Twenty-six patients had a positive CT and positive D-dimer, two a positive CT but negative D-dimer, 39 a negative CT and positive D-dimer, and 34 a negative CT and negative D-dimer. The negative predictive value of the Accuclot D-dimer test for excluding a pulmonary embolus on spiral CT was 0.94. Combining the D-dimer result with pulse oximetry (normal SaO2 ≥ 90%) improved the negative predictive value to 0.97. Conclusion: A negative Accuclot D-dimer assay proved highly predictive for a negative CT pulmonary angiogram in suspected acute pulmonary embolus. If this D-dimer assay had been included in the diagnostic algorithm for these patients, a negative D-dimer would have rendered CTPA unnecessary in 36% of patients. abstract_id: PUBMED:34237678 Beyond the d-dimer - Machine-learning assisted pre-test probability evaluation in patients with suspected pulmonary embolism and elevated d-dimers. Introduction: Acute pulmonary embolism (PE) is a leading cardiovascular cause of death and a common indication for emergency computed tomography (CT). Nonetheless, in clinical routine most CTs performed for suspicion of PE exclude the suspected diagnosis. As patients with low to intermediate risk for PE are triaged according to the d-dimer, its relatively low specificity and widespread elevation among the elderly might be an underlying issue. The aim of this study was to find potential predictors based on initial emergency blood tests in patients with elevated d-dimers and suspected PE to further increase pre-test probability. Methods: In this retrospective study all patients at the local university hospital's emergency room from 2009 to 2019 with suspected PE, emergency blood testing and CT were included. Cluster analysis was performed to separate groups with distinct laboratory parameter profiles and PE frequencies were compared. Machine learning algorithms were trained on the groups to predict individual PE probability based on emergency laboratory parameters. Results: Overall, PE frequency among the 2045 analyzed patients was 41%. Three clusters with significant differences (p ≤ 0.05) in PE frequency were identified: C1 showed a PE frequency of 43%, C2 40% and C3 33%. Laboratory parameter profiles (e.g. creatinine) differed significantly between clusters (p ≤ 0.0001). Both logistic regression and support-vector machines were able to predict clusters with an accuracy of over 90%.
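A minimal sketch of how such a cluster-then-classify pipeline could be assembled is shown below. The abstract does not name the clustering algorithm or the full laboratory panel, so k-means, the feature names, and the synthetic data here are assumptions for illustration only, not the authors' actual method.

```python
# Sketch of a cluster-then-classify pipeline on emergency lab values,
# loosely modeled on the approach described in PUBMED:34237678.
# Assumptions: k-means clustering (algorithm not named in the abstract),
# a hypothetical lab panel, and synthetic random data.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical lab panel: d-dimer, creatinine, CRP, hemoglobin, leukocytes.
X = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 5))

# Step 1: separate patients into three lab-profile clusters.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)

# Step 2: train a classifier to predict cluster membership from the raw labs.
X_train, X_test, y_train, y_test = train_test_split(
    X, clusters, test_size=0.3, random_state=0, stratify=clusters
)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(f"Cluster prediction accuracy: {clf.score(X_test, y_test):.2f}")
```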
Discussion: Initial blood parameters seem to enable further differentiation of patients with suspected PE and elevated d-dimers to raise pre-test probability of PE. Machine-learning-based prediction models might help to further narrow down CT indications in the future. abstract_id: PUBMED:21654131 D-dimer assays--a help or hindrance in suspected pulmonary thromboembolism assessment? Background: Suspected pulmonary thromboembolism (PTE) is a common presentation to acute medical units and can cause diagnostic difficulty. National guidelines on PTE management highlight the need for clinical probability assessment and D-dimer assays to ensure appropriate use of diagnostic imaging. D-dimers are used widely in UK hospitals, yet concern exists regarding their misuse. Aims: In this study we aimed to assess the impact of the introduction of D-dimer assays, combined with clinical probability assessment, for evaluation of suspected PTE in our unit. Materials And Methods: This was a prospective audit of all patients presenting with suspected PTE over two 12-week periods, exactly 1 year apart. D-dimers were introduced into our unit between these two periods. We recorded the clinical probability score, potential causes of false-positive D-dimer assay, diagnostic imaging result, patient outcome, admission rates, and length of inpatient stay. Statistical Analysis: Categorical variables were compared using a 2 x 2 chi-square test or Fisher's exact test. Groups were compared utilizing the two-sample t-test or Mann-Whitney U test. Results: A total of 190 patients were included in the study; 65% were female. PTE was confirmed in 8.4%. Patients in both audit periods were comparable with regard to suitability for D-dimer measurement. Following D-dimer introduction, 40 out of 110 patients in period 2 could be discharged directly from the emergency department. Of those admitted to hospital, the median length of stay was significantly reduced in period 2 (3 days in period 1 vs 1 day in period 2; P=0.0007). Use of diagnostic imaging was significantly reduced following the introduction of D-dimers (90% in period 1 vs 40% in period 2; P<0.0001). The positive diagnostic yield for PTE on CT pulmonary angiogram (CTPA) increased significantly from 10% in period 1 to 23% in period 2 (P=0.039). Conclusion: D-dimers must be used judiciously in the assessment of suspected PTE. Appropriate use of D-dimers can provide many benefits, including reductions in diagnostic imaging (and thus radiation exposure), admission rates, and length of inpatient stay. abstract_id: PUBMED:10726049 Exclusion and diagnosis of pulmonary embolism by a rapid ELISA D-dimer test and noninvasive imaging techniques within the context of a clinical model. A negative rapid ELISA D-dimer test alone in out-patients with a low to moderate clinical probability (CP) on pulmonary embolism (PE) is predicted to safely exclude pulmonary embolism. The combination of a negative rapid ELISA D-dimer test and a low to moderate CP on PE followed by compression ultrasonography (CUS) for the detection of deep vein thrombosis (DVT) is safe and cost-effective as it reduces the need for noninvasive imaging techniques to about 50% to 60% of outpatients with suspected PE. A high probability ventilation-perfusion (VP) scan or a positive spiral CT consistent with PE and the detection of DVT by CUS are currently considered to be clear indications for anticoagulant treatment. 
Subsequent pulmonary angiography (PA) is the gold standard diagnostic strategy to exclude or diagnose PE in suspected outpatients with a negative CUS, a positive rapid ELISA D-dimer test, and a nondiagnostic VP scan or negative spiral CT to prevent overtreatment with anticoagulants. However, the willingness of clinicians and the availability of resources to perform PA is restricted, a fact that has provided an impetus for clinical investigators to search for alternative noninvasive strategies to exclude or detect venous thromboembolism (VTE). Serial CUS testing for the detection of DVT in patients with a low to moderate CP on PE and a nondiagnostic VP scan or negative spiral CT is predicted to be safe and will reduce the need for PA to less than 10% or even less than 5%. This noninvasive serial CUS strategy restricts the need for invasive PA to a minor group of patients (< 5%) with the combination of a low CP on PE and high probability VP scan or the combination of a nondiagnostic VP scan or negative spiral CT and a high CP on PE. Prospective evaluations are warranted to implement and to validate the advantages and the disadvantages of the various combinations of noninvasive strategies and to compare serial CUS testing versus PA in randomized clinical management studies of outpatients with suspected pulmonary embolism. Answer: Yes, a state-of-the-art D-dimer test can be used to determine the need for CT imaging in patients suspected of having pulmonary embolism (PE). Studies have shown that a negative D-dimer result can effectively rule out PE in a significant number of patients, potentially reducing the need for computed tomography (CT) imaging. For instance, one study found that the negative predictive value of a negative D-dimer result was 98.7%, suggesting that very few patients with negative results have acute deep venous thrombosis or PE when combined CT venography and pulmonary angiography are used as the reference standard (PUBMED:12238542). Another study indicated that alternative D-dimer thresholds could be utilized to exclude PE in COVID-19 patients, which could help reduce unnecessary pulmonary CT angiography scans (PUBMED:33870000). Moreover, a decision rule combining Wells items with the D-dimer test resulted in a simplified approach that reduces the need for CT scanning in patients with suspected PE (PUBMED:25990714). Additionally, a negative D-dimer assay was found to be highly predictive for a negative CT pulmonary angiogram in suspected acute pulmonary embolus, suggesting that including this assay in the diagnostic algorithm could prevent unnecessary CTPA in a substantial proportion of patients (PUBMED:11798202). However, it is important to note that the clinical utility of D-dimer testing may be limited for inpatients with clinically suspected PE and nondiagnostic lung scans or negative helical CT results (PUBMED:15006941). Furthermore, the use of D-dimer assays must be judicious, and their appropriate use can lead to benefits such as reduced diagnostic imaging, admission rates, and length of inpatient stay (PUBMED:21654131). In summary, a state-of-the-art D-dimer test can be a valuable tool in determining the need for CT imaging in patients suspected of having PE, but its use should be carefully integrated with clinical assessment and other diagnostic strategies.
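As a supplement to this answer, the simplified decision rule from PUBMED:25990714 is an explicit triage algorithm: three Wells items (hemoptysis, signs of deep vein thrombosis, "PE most likely") select the D-dimer threshold, 1000 μg/L when none of the items is positive and 500 μg/L otherwise. The sketch below illustrates that published logic as described in the abstract; it is not a validated clinical tool, and the patient values are hypothetical.

```python
# Sketch of the two-tier D-dimer triage described in PUBMED:25990714:
# three Wells items choose the D-dimer threshold used to defer CT
# pulmonary angiography. Illustration only; not a validated clinical tool.

def ct_needed(d_dimer_ug_per_l, hemoptysis, signs_of_dvt, pe_most_likely):
    """Return True when the rule cannot exclude PE and CT is still indicated."""
    any_item_positive = hemoptysis or signs_of_dvt or pe_most_likely
    threshold = 500 if any_item_positive else 1000   # ug/L
    return d_dimer_ug_per_l >= threshold

# Hypothetical patients: (D-dimer in ug/L, hemoptysis, DVT signs, PE most likely)
patients = [
    (800, False, False, False),   # no items positive, below 1000 -> no CT
    (800, False, True,  False),   # one item positive, above 500  -> CT
    (450, True,  False, False),   # one item positive, below 500  -> no CT
]
for p in patients:
    print(p, "-> CT indicated" if ct_needed(*p) else "-> PE excluded by rule")
```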
Instruction: Can body mass index predict survival outcomes in patients treated with radical nephroureterectomy for upper-tract urothelial carcinoma? Abstracts: abstract_id: PUBMED:23931096 Influence of body mass index on oncological outcomes in patients with upper urinary tract urothelial carcinoma treated with radical nephroureterectomy. Objective: To investigate the association between body mass index and oncological outcomes in Chinese patients who had undergone radical nephroureterectomy for upper urinary tract urothelial carcinoma. Methods: Between August 1998 and October 2009, 236 consecutive Chinese patients underwent radical nephroureterectomy for upper urinary tract urothelial carcinoma at Sun Yat-sen University Cancer Center (Guangzhou, China). Body mass index data were available for 230 (97.5%) of these patients. All 230 patients were classified into three groups according to the body mass index criteria for Asians, issued by the Asia Cohort Consortium: underweight, body mass index <18.5 kg/m(2) (n = 21, 9.1%); normal weight, body mass index ≥18.5 and <25 kg/m(2) (n = 151, 65.7%); and obesity, body mass index ≥25 kg/m(2), (n = 58, 25.2%). Spearman's rank correlation, Kaplan-Meier plots and Cox proportional hazards regression model were used to analyze the data. Results: Being underweight was significantly associated with lymph node metastasis (P = 0.017) and Eastern Cooperative Oncology Group performance status (P = 0.003). Univariate analysis showed recurrence-free survival and cancer-specific survival were significantly worse in underweight patients than in patients with normal weight or obese patients. After adjustments for other clinicopathological variables, multivariate analysis confirmed that recurrence-free survival and cancer-specific survival were significantly worse in underweight patients than in patients with normal weight or obese patients (recurrence-free survival P = 0.014, cancer-specific survival P = 0.015). Conclusions: Preoperative underweight is an independent predictor of unfavorable recurrence-free survival and cancer-specific survival in Chinese patients with upper urinary tract urothelial carcinoma treated by radical nephroureterectomy, whereas obesity is associated with superior recurrence-free survival and cancer-specific survival. Further studies, including a multi-institutional, prospective, Asian cohort study, are required to confirm these findings. abstract_id: PUBMED:29032451 Impact of body mass index on the oncological outcomes of patients treated with radical nephroureterectomy for upper tract urothelial carcinoma. Purpose: To evaluate the association between body mass index (BMI) and oncological outcomes in patients treated with radical nephroureterectomy (RNU) for upper tract urothelial carcinoma (UTUC). Methods: We retrospectively reviewed 237 consecutive patients treated with RNU for UTUC at our institution between 1990 and 2012. Univariable and multivariable cox regression models investigated the association of BMI with disease recurrence, cancer-specific mortality, and overall mortality. Results: From the 237 patients, 104 (44%) had a BMI < 25 kg/m2, 88 (37%) had a BMI between 25 and 29.9 kg/m2, and 45 (19%) had a BMI ≥ 30 kg/m2 at the time of surgery. Within a median follow-up of 44 months (IQR: 24-79), 53 patients (22.4%) experienced a disease recurrence, 85 patients (35.9%) had bladder recurrence, and 44 patients (18.6%) died from the disease. 
The 5 year recurrence-free and cancer-specific survival rates were, respectively, 32 and 56% for BMI ≥ 30 kg/m2, 45 and 74% for patients with BMI 25-29.9 kg/m2, and 69 and 81% for patients with BMI < 25 kg/m2. In multivariable analyses that adjusted for the effects of the standard clinico-pathological features, BMI ≥ 30 kg/m2 was associated with a higher risk of disease recurrence (HR 3.23; 95% CI 2.3-6.6, p < 0.001) and cancer-specific mortality (HR 3.84; 95% CI 2.8-6.5; p < 0.001). Conclusions: Obesity was independently associated with higher risks of disease recurrence and cancer-specific mortality in patients treated with RNU for UTUC. abstract_id: PUBMED:29356359 Impact of body mass index on the oncological outcomes of patients with upper and lower urinary tract cancers treated with radical surgery: A multi-institutional retrospective study. Aim: To evaluate the impact of body mass index (BMI) on the oncological outcomes of urothelial carcinoma (UC) patients. Patients And Methods: We retrospectively analyzed data from 818 patients with upper tract urothelial cancer (UTUC) and bladder cancer (BC) who were treated with radical nephroureterectomy (RNU) or radical cystectomy (RC) between 1990 and 2015 at six different institutions in Japan. Patients with distant metastasis at diagnosis and those who received neoadjuvant therapies were excluded, leaving 727 eligible patients (UTUC: n = 441; BC: n = 286). Patients were classified into four groups according to World Health Organization BMI criteria: underweight (BMI <18.5 kg/m2 ), normal weight (BMI 18.5-25 kg/m2 ), overweight (BMI 25.1-30 kg/m2 ), and obese (BMI >30 kg/m2 ). Results: Overweight UTUC and BC patients achieved significantly better cancer-specific survival (CSS) than the other three groups. However, obese UTUC and BC patients had significantly worse CSS than the other three groups (UTUC: P = 0.031; BC: P = 0.0019). Multivariate analysis of BC patients demonstrated that obesity was an independent predictor of unfavorable CSS (hazard ratio [HR] = 7.47; P = 0.002) and that being underweight was an independent predictor of favorable CSS (HR = 0.37; P = 0.029). However, BMI was not a prognostic factor for CSS in UTUC patients according to multivariate analysis. Conclusions: Obesity was an independent predictor of BC patients requiring RC. Conversely, being underweight was associated with a favorable prognosis for BC patients. However, BMI was not an independent prognostic factor in patients with upper urinary tract cancer. abstract_id: PUBMED:37924334 Influence of preoperative body mass index on prognosis for patients with upper urinary tract urothelial carcinoma treated with radical nephroureterectomy. Purpose: The impact of body mass index (BMI) on patients with upper urinary tract urothelial carcinoma (UTUC) undergoing radical nephroureterectomy (RNU) is controversial. Increasing evidence suggests an age-dependent relationship between obesity and outcomes for some solid organ tumors. Herein, we aimed to assess the prognostic value of preoperative BMI in UTUC patients treated with RNU in Taiwan. Methods: This was a retrospective single-center study of 468 UTUC patients undergoing RNU during January 2010-December 2017, with preoperative BMI classification and subgroup analysis based on ages of < or ≥ 70 years. All UTUC patients underwent RNU and bladder cuff excision. Overall survival (OS), cancer-specific survival, and disease-free survival (DFS) were analyzed. 
Fisher's exact test, Mann-Whitney U test, Kaplan-Meier method, and Cox regression model were used for data analysis. Results: The median follow-up duration was 36 months. Patients with higher versus lower BMI (cutoff: 25 kg/m2) showed no differences in OS; older patients had poor OS (hazard ratio [HR] 1.74; 95% confidence interval [CI] 1.24-2.40; p < 0.001). Older age was an independent predictor of poor OS in multivariate Cox regression analysis (p = 0.001). Younger patients with higher BMI (p = 0.02) had better DFS than older patients with no BMI-related survival differences. Higher BMI was an independent predictor of favorable DFS in younger patients in multivariate Cox regression analysis (HR, 0.53; 95% CI 0.28-0.99; p = 0.043). Conclusion: Younger UTUC patients with higher BMI were independently associated with a favorable DFS. abstract_id: PUBMED:32318857 Interethnic differences in the impact of body mass index on upper tract urothelial carcinoma following radical nephroureterectomy. Purpose: Inconsistent prognostic implications of body mass index (BMI) in upper tract urothelial carcinoma (UTUC) have been reported across different ethnicities. In this study, we aimed to analyze the oncologic role of BMI in Asian and Caucasian patients with UTUC. Methods: We retrospectively collected data from 648 Asian Taiwanese and 213 Caucasian American patients who underwent radical nephroureterectomy for UTUC. We compared clinicopathologic features among groups categorized by different BMI. Kaplan-Meier method and Cox regression model were used to examine the impact of BMI on recurrence and survival by ethnicity. Results: According to ethnicity-specific criteria, overweight and obesity were found in 151 (23.2%) and 215 (33.2%) Asians, and 79 (37.1%) and 78 (36.6%) Caucasians, respectively. No significant association between BMI and disease characteristics was detected in both ethnicities. On multivariate analysis, overweight and obese Asians had significantly lower recurrence than those with normal weight (HR 0.631, 95% CI 0.413-0.966; HR 0.695, 95% CI 0.493-0.981, respectively), and obesity was an independent prognostic factor for favorable cancer-specific and overall survival (HR 0.521, 95% CI 0.342-0.794; HR 0.545, 95% CI 0.386-0.769, respectively). There was no significant difference in outcomes among normal, overweight and obese Caucasians, but obese patients had a relatively poorer 5-year RFS, CSS, and OS rates of 52.8%, 60.5%, and 47.2%, compared to 54.9%, 69.1%, and 54.9% for normal weight patients. Conclusion: Higher BMI was associated with improved outcomes in Asian patients with UTUC. Interethnic differences could influence preoperative counseling or prediction modeling in patients with UTUC. abstract_id: PUBMED:37994335 Comparison of survival outcomes between laparoscopic versus open radical nephroureterectomy in upper tract urothelial cancer patients: Experiences of a tertiary care single center. Objectives: To test for differences in overall and recurrence-free survival between laparoscopic and open surgical approaches in patients undergoing radical nephroureterectomy (RNU) for upper tract urothelial carcinoma (UTUC). Materials And Methods: We retrospectively identified patients treated for UTUC from 2010 to 2020 from our institutional database. Patients undergoing laparoscopic or open RNU with no suspicion of metastasis (cM0) were for the current study population. Patients with suspected metastases at diagnosis (cM1) or those undergoing other surgical treatments were excluded. 
Tabulation was performed according to the laparoscopic versus open surgical approach. Kaplan-Meier plots were used to test for differences in overall and recurrence-free survival with regard to the surgical approach. Furthermore, separate Kaplan-Meier plots were used to test the effect of preoperative ureterorenoscopy on overall and recurrence-free survival within the overall study cohort. Results: Of the 59 patients who underwent nephroureterectomy, 29% (n = 17) underwent laparoscopic nephroureterectomy, whereas 71% (n = 42) underwent open nephroureterectomy. Patient and tumor characteristics were comparable between groups (p ≥ 0.2). The median overall survival was 93 and 73 months in the laparoscopic nephroureterectomy group compared to the open nephroureterectomy group (p = 0.5), respectively. The median recurrence-free survival did not differ between open and laparoscopic nephroureterectomies (73 months for both groups; p = 0.9). Furthermore, the median overall and recurrence-free survival rates did not differ between patients treated with and without preoperative ureterorenoscopy. Conclusions: The results of this retrospective, single-center institution showed that overall and recurrence-free survival rates did not differ between patients with UTUC treated with laparoscopic and open RNU. Furthermore, preoperative ureterorenoscopy before RNU was not associated with higher overall or recurrence-free survival rates. abstract_id: PUBMED:26139078 Can body mass index predict survival outcomes in patients treated with radical nephroureterectomy for upper-tract urothelial carcinoma? Purpose: To assess the relationship between body mass index (BMI) and survival outcomes in Korean patients with upper-tract urothelial carcinoma (UTUC). Methods: A single-institutional retrospective analysis was conducted using clinical and pathological data of 445 UTUC patients who had undergone radical nephroureterectomy with bladder cuff excision from 1997 to 2012. Enrolled patients were classified into normal weight (BMI < 23 kg/m(2)), overweight (BMI 23-24.9 kg/m(2)), and obese (BMI ≥ 25 kg/m(2)) in accordance with BMI cutoffs for Asian populations. The impact of BMI on intravesical recurrence (IVR)-free survival, overall survival (OS), and cancer-specific survival (CSS) was evaluated using Kaplan-Meier analysis with the log-rank test and the Cox proportional hazard model. Results: The median BMI of all patients was 24.2 kg/m(2) (interquartile range 22.2-25.8). There were no significant differences in the IVR-free survival rates according to BMI classification (p = 0.488). The 5-year OS and CSS rates were 58.8, 66.3, and 76.3 % (p = 0.057) and 67.4, 69.3, and 81.5 % (p = 0.021) in the normal, overweight, and obese groups, respectively. In the univariate analysis, obesity (BMI ≥ 25 kg/m(2)) was a significant predictor of better OS [hazard ratio (HR) 0.63; 95 % confidence interval (CI) 0.43-0.92, p = 0.017] and CSS (HR 0.53; 95 % CI 0.33-0.84, p = 0.007) than normal weight. However, these associations could not be confirmed in the multivariable analysis after adjusting for other clinicopathological factors, such as tumor stage, tumor grade, lymphovascular invasion, and surgical margin. Conclusions: Our study results are inconclusive, in that, the multivariate analysis did not identify the influence of BMI on survival, although higher BMI appears clinically associated with favorable survival outcomes in Korean patients with UTUC. 
abstract_id: PUBMED:29137333 Preoperative chronic kidney disease predicts poor oncological outcomes after radical nephroureterectomy in patients with upper urinary tract urothelial carcinoma. Objective: To evaluate the impact of preoperative chronic kidney disease (CKD) on oncological outcomes in patients with upper tract urothelial carcinoma who underwent radical nephroureterectomy. Methods: A total of 426 patients who underwent radical nephroureterectomy at five medical centers between February 1995 and February 2017 were retrospectively examined. Oncological outcomes, including intravesical recurrence-free, visceral recurrence-free, cancer-specific, and overall survival rates (intravesical RFS, visceral RFS, CSS, and OS, respectively) stratified by preoperative CKD status (CKD vs. non-CKD) were investigated. Cox proportional hazards regression analysis was performed using inverse probability of treatment weighting (IPTW) to evaluate the impact of preoperative CKD on prognosis and a prognostic factor-based risk stratification nomogram was developed. Results: Of the 426 patients, 250 (59%) were diagnosed with CKD before radical nephroureterectomy. Before the background adjustment, intravesical RFS, visceral RFS, CSS, and OS after radical nephroureterectomy were significantly shorter in the CKD group than in the non-CKD group. Background-adjusted IPTW analysis demonstrated that preoperative CKD was significantly associated with poor visceral RFS, CSS, and OS after radical nephroureterectomy. Intravesical RFS was not significantly associated with preoperative CKD. The nomogram for predicting 5-year visceral RFS and CSS probability demonstrated a significant correlation with actual visceral RFS and CSS (c-index = 0.85 and 0.83, respectively). Conclusions: Upper tract urothelial carcinoma patients with preoperative CKD had a significantly lower survival probability than those without CKD. abstract_id: PUBMED:29435873 Oncologic outcomes for open and laparoscopic radical nephroureterectomy in patients with upper tract urothelial carcinoma. Background: Oncologic benefits of laparoscopic radical nephroureterectomy (LNU) are unclear. We aimed to evaluate the impact of surgical approach for radical nephroureterectomy on oncologic outcomes in patients with locally advanced upper tract urothelial carcinoma (UTUC). Methods: Of 426 patients who underwent radical nephroureterectomy at five medical centers between February 1995 and February 2017, we retrospectively investigated oncological outcomes in 229 with locally advanced UTUC (stages cT3-4 and/or cN+). The surgical approach was classified as open nephroureterectomy (ONU) or LNU, and oncologic outcomes, including intravesical recurrence-free survival (RFS), visceral RFS, cancer-specific survival (CSS), and overall survival (OS), were compared between the groups. The inverse probability of treatment weighting (IPTW)-adjusted Cox-regression analyses was performed to evaluate the impact of LNU on the prognosis. Results: Of the 229 patients, 48 (21%) underwent LNU. There were significant differences in patient backgrounds, including preoperative renal function, lymph-node involvement, lymphovascular invasion, and surgical margins, between the groups. Before the background adjustment, intravesical RFS, visceral RFS, CSS, and OS were significantly inferior in the ONU group than in the LNU group. 
However, in the IPTW-adjusted Cox-regression analysis, no significant differences were observed in intravesical RFS (hazard ratio [HR], 0.65; P = 0.476), visceral RFS (HR, 0.46; P = 0.109), CSS (HR, 0.48; P = 0.233), and OS (HR, 0.40; P = 0.147). Conclusion: Surgical approaches were not independently associated with prognosis in patients with locally advanced UTUC. abstract_id: PUBMED:26425047 Preoperative Underweight Patients with Upper Tract Urothelial Carcinoma Survive Less after Radical Nephroureterectomy. The prognostic impact of body mass index (BMI) in patients with upper tract urothelial carcinoma (UTUC) is an ongoing debate. Our study aimed to investigate the prognostic role of BMI in patients treated with radical nephroureterectomy (RNU) for UTUC from a multi-institutional Korean collaboration. We retrospectively reviewed data from 440 patients who underwent RNU for UTUC at four institutions in Korea. To avoid biasing the survival estimates, patients who had previous or concomitant muscle-invasive bladder tumors were excluded. BMI was categorized into approximate quartiles with the lowest quartile assigned to the reference group. Kaplan-Meier and multivariate Cox regression analyses were performed to assess the influence of BMI on survival. The lower quartile BMI group showed significantly increased overall mortality (OM) and cancer specific mortality (CSM) compared to the 25%-50% quartiles and upper quartile BMI groups. Kaplan-Meier estimates showed similar results. Based on multivariate Cox regression analysis, preoperative BMI as a continuous variable was an independent predictor for OM and CSM. In conclusion, preoperative underweight patients with UTUC in Korea survive less after RNU. Preoperative BMI may provide additional prognostic information to establish risk factors. Answer: The relationship between body mass index (BMI) and survival outcomes in patients treated with radical nephroureterectomy for upper tract urothelial carcinoma (UTUC) has been investigated in several studies, and the findings suggest that BMI can indeed have a predictive value for survival outcomes, although the direction and strength of this association may vary. Some studies have found that being underweight is associated with worse oncological outcomes. For instance, underweight patients had significantly worse recurrence-free survival and cancer-specific survival compared to those with normal weight or obesity (PUBMED:23931096). Similarly, another study concluded that preoperative underweight patients with UTUC had increased overall mortality and cancer-specific mortality (PUBMED:26425047). Conversely, obesity has been linked to both poorer and better outcomes in different studies. One study reported that obesity was independently associated with higher risks of disease recurrence and cancer-specific mortality (PUBMED:29032451), while another study found that overweight UTUC and bladder cancer patients achieved significantly better cancer-specific survival than other groups, but obese UTUC and bladder cancer patients had significantly worse cancer-specific survival (PUBMED:29356359). However, a study from Taiwan indicated that younger UTUC patients with higher BMI were independently associated with a favorable disease-free survival (PUBMED:37924334), and another study found that higher BMI was associated with improved outcomes in Asian patients with UTUC (PUBMED:32318857). In contrast, some studies have reported inconclusive or no significant association between BMI and survival outcomes. 
For example, a study in Korean patients did not find BMI to be an independent predictor of survival outcomes after adjusting for other clinicopathological factors (PUBMED:26139078). Another study found no differences in overall survival between patients with higher versus lower BMI (PUBMED:37924334). Overall, the evidence suggests that BMI may have a predictive role in the survival outcomes of patients with UTUC treated with radical nephroureterectomy, but the relationship is complex and may be influenced by factors such as ethnicity, age, and other clinicopathological variables. Further research is needed to clarify the prognostic significance of BMI in this patient population.
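As a methodological footnote to this answer, most of the cited studies assess BMI with Kaplan-Meier curves (log-rank test) and Cox proportional hazards models adjusted for stage and grade. A minimal sketch of that workflow using the lifelines package is shown below; the data frame is synthetic and the column names are assumptions, not any study's actual variables.

```python
# Sketch of the survival workflow used across the BMI/UTUC studies:
# Kaplan-Meier estimates by BMI category plus a Cox model adjusting for
# clinicopathological covariates. Synthetic data, assumed column names.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "months": rng.exponential(scale=60, size=n),   # follow-up time
    "death": rng.integers(0, 2, size=n),           # event indicator
    "bmi": rng.normal(24, 4, size=n),
    "high_stage": rng.integers(0, 2, size=n),
    "high_grade": rng.integers(0, 2, size=n),
})
df["obese"] = (df["bmi"] >= 25).astype(int)  # Asian cut-off used in PUBMED:23931096

# Kaplan-Meier estimate per BMI group.
km = KaplanMeierFitter()
for label, group in df.groupby("obese"):
    km.fit(group["months"], group["death"], label=f"obese={label}")
    print(label, km.median_survival_time_)

# Cox model: does the BMI category remain prognostic after adjustment?
cph = CoxPHFitter()
cph.fit(df[["months", "death", "obese", "high_stage", "high_grade"]],
        duration_col="months", event_col="death")
cph.print_summary()
```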
Instruction: Does Degenerative Lumbar Spine Disease Influence Femoroacetabular Flexion in Patients Undergoing Total Hip Arthroplasty? Abstracts: abstract_id: PUBMED:27020429 Does Degenerative Lumbar Spine Disease Influence Femoroacetabular Flexion in Patients Undergoing Total Hip Arthroplasty? Background: Sitting pelvic tilt dictates the proximity of the rim of the acetabulum to the proximal femur and, therefore, the risk of impingement in patients undergoing total hip arthroplasty (THA). Sitting position is achieved through a combination of lumbar spine segmental motions and/or femoroacetabular articular motion in the lumbar-pelvic-femoral complex. Multilevel degenerative disc disease (DDD) may limit spine flexion and therefore increase femoroacetabular flexion in patients having THAs, but this has not been well characterized. Therefore, we measured standing and sitting lumbar-pelvic-femoral alignment in patients with radiographic signs of DDD and in patients with no radiographic signs of spine arthrosis. Questions/purposes: We asked: (1) Is there a difference in standing and sitting lumbar-pelvic-femoral alignment before surgery among patients undergoing THA who have no radiographic signs of spine arthrosis compared with those with preexisting lumbar DDD? (2) Do patients with lumbar DDD experience less spine flexion moving from a standing to a sitting position and therefore compensate with more femoroacetabular flexion compared with patients who have no radiographic signs of arthrosis? Methods: Three hundred twenty-five patients undergoing primary THA had preoperative low-dose EOS spine-to-ankle lateral radiographs in standing and sitting positions. Eighty-three patients were excluded from this study for scoliosis (39 patients), spondylolysis (15 patients), not having five lumbar vertebrae (7 patients), surgical or disease fusion (11 patients), or poor image quality attributable to high BMI (11 patients). In the remaining 242 of 325 patients (75%), two observers categorized the lumbar spine as either without radiographic arthrosis or having DDD based on defined radiographic criteria. Sacral slope, lumbar lordosis, and proximal femur angles were measured, and these angles were used to calculate lumbar spine flexion and femoroacetabular flexion in standing and sitting positions. Patients were aligned in a standardized sitting position so that their femurs were parallel to the floor to achieve approximately 90° of apparent hip flexion. Results: After controlling for age, sex, and BMI, we found patients with DDD spines had a mean of 5° more posterior pelvic tilt (95% CI, -2° to -8° lower sacral slope angles; p < 0.01) and 7° less lumbar lordosis (95% CI, -10° to -3°; p < 0.01) in the standing position compared with patients without radiographic arthrosis. However, in the sitting position, patients with DDD spines had 4° less posterior pelvic tilt (95% CI, 1°-7° higher sacral slope angles; p = 0.02). From standing to sitting position, patients with DDD spines experienced 10° less spine flexion (95% CI, -14° to -7°; p < 0.01) and 10° more femoroacetabular flexion (95% CI, 6° to 14°; p < 0.01). Conclusions: Most patients undergoing THA sit in a similar range of pelvic tilt, with a small mean difference in pelvic tilt between patients with DDD spines and those without radiographic arthrosis. However, in general, the mechanism by which patients with DDD of the lumbar spine achieve sitting differs from those without spine arthrosis with less spine flexion and more femoroacetabular flexion. 
Clinical Relevance: When planning THA, it may be important to consider which patients sit with less posterior pelvic tilt and those who rotate their pelvises forward to achieve a sitting position, as both mechanisms will limit or reduce the functional anteversion of the acetabular component in a patient with a THA. Our study provides some additional perspective on normal relationships between pelvic tilt and femoroacetabular flexion, but further research might better characterize this relationship in outliers and the possible implications for posterior instability after THA. abstract_id: PUBMED:27998660 Prosthetic Dislocation and Revision After Primary Total Hip Arthroplasty in Lumbar Fusion Patients: A Propensity Score Matched-Pair Analysis. Background: Lumbar-pelvic fusion reduces the variation in pelvic tilt in functional situations by reducing lumbar spine flexibility, which is thought to be important in maintaining stability of a total hip arthroplasty (THA). We compared dislocation and revision rates for patients with lumbar fusion and subsequent THA to a matched comparison cohort with hip and spine degenerative changes undergoing only THA. Methods: We identified patients in New York State who underwent primary elective lumbar fusion for degenerative disc disease pathology and subsequent THA between January 2005 and December 2012. A propensity score match was performed to compare 934 patients with prior lumbar fusion to 934 patients with only THA according to age, gender, race, Deyo comorbidity score, year of surgery, and surgeon volume. Revision and dislocation rates were assessed at 3, 6, and 12 months post-THA. Results: At 12 months, patients with prior lumbar fusion had significantly increased rates of THA dislocation (control: 0.4%; fusion: 3.0%; P < .001) and revision (control: 0.9%; fusion: 3.9%; P < .001). At 12 months, fusion patients were 7.19 times more likely to dislocate their THA (P < .001) and 4.64 times more likely to undergo revision (P < .001). Conclusion: Patients undergoing lumbar fusion and subsequent THA have significantly higher risks of dislocation and revision of their hip arthroplasty than a matched cohort of patients with similar hip and spine pathology but only undergoing THA. During preoperative consultation for patients with prior lumbar fusion, orthopedic surgeons must educate the patient and family about the increased risk of dislocation and revision. abstract_id: PUBMED:32922702 Total hip arthroplasty and lumbar spine disorders: Plain co-existence or mutual influence? Lumbar spine disorders (LSD) might influence the outcome after total hip arthroplasty (THA). Despite a known common prevalence of LSD and degenerative hip disorders, this study investigates their mutual influence in case of co-existence with the purpose to advance surgeons planning and patient's prognosis. Patients with and without LSD were compared before and at the one-year postoperative examination. For clinical evaluation the WOMAC was assessed. The radiological analysis focused on cup anteversion and inclination. The total group included 203 consecutive patients. The overall incidence of LSD was 51.0%. Patients with LSD were on average 4.3 years older and had a 1.8 higher BMI than non-LSD patients (P<0.05). The cup positioning and the clinical results were comparable between both groups before and at the last time of follow up (P>0.05). 
No hip dislocations or clinical signs of impingement were seen. We can conclude that there is a high degree of co-existence of LSD and hip disorders. However, a strong negative impact of LSD on clinical or radiologic results could not be confirmed in our study. abstract_id: PUBMED:37088220 Spine or Hip First? Outcomes in Patients Undergoing Sequential Lumbar Spine or Hip Surgery. Background: Lumbar spine pathology frequently coexists in patients who have hip arthrosis. There is controversy on whether lumbar or hip pathology should be first addressed. The purpose of this study was to evaluate the outcomes of sequential lumbar spine (LSP) or hip arthroplasty (THA). Methods: Using a large national database from 2010 to 2020, we reviewed the records of 241,279 patients who had concurrent hip arthritis and lumbar spine disease defined as spinal stenosis, lumbar radiculopathy, or degenerative disc disease. During the study period, 6,458 (2.7%) patients with concurrent hip/spine disease underwent sequential operative treatment of either the hip joint or lumbar spine within 2 years. The rates of subsequent surgery in either the hip or the spine, opioid requirements, and rates of hip dislocation were determined and compared using Chi-squared analyses. Results: Patients undergoing THA first had lower risk of subsequent spinal procedure compared to patients who had spinal procedures first (5.7 versus 23.7%, P < .001). This disparity was maintained up to 5 years (P < .001). Opioid requirements at 1 year were highest in patients who underwent spinal procedures only (836 pills/patient) compared to any other group: THA only (566 pills/patient), LSP and then THA (564 pills/patient), THA and LSP (586 pills/patient). Also, THA following LSP was associated with significantly higher rates of dislocation compared to patients undergoing THA first (3.2 versus 1.9%, P < .001). Conclusion: Total hip arthroplasty first in patients who have concurrent spine disease was associated with lower risk of subsequent surgery, opioid requirement, and risk of postoperative instability compared to patients having the lumbar procedure first. abstract_id: PUBMED:27444852 Prior Lumbar Spinal Arthrodesis Increases Risk of Prosthetic-Related Complication in Total Hip Arthroplasty. Background: Degenerative hip disorders often coexist with degenerative changes of the lumbar spine. Limited data on this patient population suggest inferior functional improvement and pain relief after surgical management. The purpose of this study is to compare the rates of prosthetic-related complication after primary total hip arthroplasty (THA) in patients with and without prior lumbar spine arthrodesis (SA). Methods: Medicare patients (n = 811,601) undergoing primary THA were identified and grouped by length of prior SA (no fusion, 1-2 levels fused [S-SAHA], and ≥3 levels fused [L-SAHA]). Results: Compared with controls, patients with prior SA had significantly higher rates of complications including dislocation (control: 2.36%; S-SAHA: 4.26%; and L-SAHA: 7.51%), revision (control: 3.43%, S-SAHA: 5.55%, and L-SAHA: 7.77%), loosening (control: 1.33%, S-SAHA: 2.10%, and L-SAHA: 3.04%), and any prosthetic-related complication (control: 7.33%, S-SAHA: 11.15% [relative risk: 1.52], and L-SAHA: 14.16% [relative risk: 1.93]) within 24 months (P < .001). Conclusion: The interplay of coexisting degenerative hip and spine disease deserves further attention from both arthroplasty and spine surgeons.
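Several of the cohort comparisons above (e.g., PUBMED:27998660) rest on propensity score matching to pair fusion patients with otherwise similar THA-only controls. A minimal sketch of one common 1:1 nearest-neighbor implementation follows; the covariates, the greedy matching without replacement, and the synthetic data are assumptions for illustration, and the cited study's exact matching variables and caliper rules may differ.

```python
# Sketch of 1:1 nearest-neighbor propensity score matching, loosely
# following the design of PUBMED:27998660 (fusion vs. THA-only patients).
# Synthetic data; covariates and matching details are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 400
X = np.column_stack([
    rng.normal(65, 10, n),     # age
    rng.integers(0, 2, n),     # sex
    rng.normal(2, 1, n),       # comorbidity score
])
treated = rng.integers(0, 2, n).astype(bool)   # prior lumbar fusion yes/no

# Step 1: propensity score = P(fusion | covariates).
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbor match on the propensity score.
controls = list(np.where(~treated)[0])
pairs = []
for i in np.where(treated)[0]:
    if not controls:
        break                                  # no unmatched controls left
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    controls.remove(j)                         # match without replacement

print(f"Matched {len(pairs)} fusion patients to controls")
```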
abstract_id: PUBMED:28358974 The Impact of Lumbar Spine Disease and Deformity on Total Hip Arthroplasty Outcomes. Concomitant spine and hip disease in patients undergoing total hip arthroplasty (THA) presents a management challenge. Degenerative lumbar spine conditions are known to decrease lumbar lordosis and limit lumbar flexion and extension, leading to altered pelvic mechanics and increased demand for hip motion. In this study, the effect of lumbar spine disease on complications after primary THA was assessed. The Medicare database was searched from 2005 to 2012 using International Classification of Diseases, Ninth Revision, procedure codes for primary THA and diagnosis codes for preoperative diagnoses of lumbosacral spondylosis, lumbar disk herniation, acquired spondylolisthesis, and degenerative disk disease. The control group consisted of all patients without a lumbar spine diagnosis who underwent THA. The risk ratios for prosthetic hip dislocation, revision THA, periprosthetic fracture, and infection were significantly higher for all 4 lumbar diseases at all time points relative to controls. The average complication risk ratios at 90 days were 1.59 for lumbosacral spondylosis, 1.62 for disk herniation, 1.65 for spondylolisthesis, and 1.53 for degenerative disk disease. The average complication risk ratios at 2 years were 1.66 for lumbosacral spondylosis, 1.73 for disk herniation, 1.65 for spondylolisthesis, and 1.59 for degenerative disk disease. Prosthetic hip dislocation was the most common complication at 2 years in all 4 spinal disease cohorts, with risk ratios ranging from 1.76 to 2.00. This study shows a significant increase in the risk of complications following THA in patients with lumbar spine disease. [Orthopedics. 2017; 40(3):e520-e525.]. abstract_id: PUBMED:32089366 The Majority of Total Hip Arthroplasty Patients With a Stiff Spine Do Not Have an Instrumented Fusion. Background: Total hip arthroplasty (THA) patients with limited lumbar flexion (LF) have increased rates of dislocation. An instrumented spinal fusion is a well-recognized cause whose risk increases with increasing number of levels fused. However, many patients without an instrumented fusion (IF) also exhibit abnormal spinopelvic mobility. The purpose of this study was to understand the proportion of THA patients without an IF that have a stiff spine (SS) and behave as if they are surgically fused. Methods: A retrospective analysis was performed on 6340 primary THA patients, all of whom had preoperative spinopelvic measurements. Any IF of the lumbar spine was observed on the lateral standing radiograph and recorded. SS was classified by LF ≤ 20°, and the percentage of patients with an IF and limited LF was determined. Results: Three hundred fifty-six (6%) patients had a SS, and only 67 (19%) had an IF. Of the entire 6340 patients, 207 (3%) had an IF. Of these 207, only 67 (32%) had a SS. Conclusions: The vast majority (81%) of THA patients with a SS do not have an IF. We recommend preoperative spinopelvic assessment of all patients undergoing THA, as only a minority of those with limited LF have an IF and may otherwise be overlooked. Lumbar degenerative disc disease is common in THA patients, limits the available LF in the same way an IF might and potentially increases the risk of dislocation in this subset of patients. Level Of Evidence: III. 
abstract_id: PUBMED:31809465 Surgical Treatment of Patients With Dual Hip and Spinal Degenerative Disease: Effect of Surgical Sequence of Spinal Fusion and Total Hip Arthroplasty on Postoperative Complications. Study Design: Retrospective study. Objective: To determine how lumbar spinal fusion-total hip arthroplasty (LSF-THA) operative sequence would affect THA outcomes. Summary Of Background Data: Outcomes following THA in patients with a history of lumbar spinal degenerative disease and fusion are incompletely understood. Methods: The PearlDiver Research Program (http://www.pearldiverinc.com) was used to identify patients undergoing primary THA. Patients were divided into four cohorts: 1) Primary THA without spine pathology, 2) remote LSF prior to hip pathology and THA, and patients with concurrent hip and spinal pathology that had 3) THA following LSF, and 4) THA prior to LSF. Postoperative complications and opioid use were assessed with multivariable logistic regression to determine the effect of spinal degenerative disease and operative sequence. Results: Between 2007 and 2017, 85,595 patients underwent primary THA, of whom 93.6% had THA without lumbar spine degenerative disease and 0.7% had a history of remote LSF; of those with concurrent hip and spine pathology, 1.6% had THA prior to LSF, and 2.4% had THA following LSF. Patients with hip and lumbar spine pathology who underwent THA prior to LSF had significantly higher rates of dislocation (aOR = 2.46, P < 0.0001), infection (aOR = 2.65, P < 0.0001), revision surgery (aOR = 1.91, P < 0.0001), and postoperative opioid use at 1 month (aOR = 1.63, P < 0.001), 3 months (aOR = 1.80, P < 0.001), 6 months (aOR = 2.69, P < 0.001), and 12 months (aOR = 3.28, P < 0.001) compared with those treated with THA following LSF. Conclusion: Patients with degenerative hip and lumbar spine pathology who undergo THA prior to LSF have a significantly increased risk of postoperative dislocation, infection, revision surgery, and prolonged opioid use compared with THA after LSF. Surgeons should consider the effect of the surgical sequence of THA and LSF on outcomes for patients with this dual pathology. Shared decision making between patients, spine surgeons, and arthroplasty surgeons is necessary to optimize outcomes in patients with concomitant hip and spine pathology. Level Of Evidence: 3. abstract_id: PUBMED:37158334 Spinopelvic challenges in primary total hip arthroplasty. There is no universal safe zone for cup orientation. Patients with spinal arthrodesis or a degenerative lumbar spine are at increased risk of dislocation. The relative contributions of the hip (femur and acetabulum) and of the spine (lumbar spine) in body motion must be considered together. The pelvis links the two and influences both acetabular orientation (i.e. hip flexion/extension) and sagittal balance/lumbar lordosis (i.e. spine flexion/extension). Examination of the spino-pelvic motion can be done through clinical examination and standard radiographs or stereographic imaging. A single, lateral, standing spinopelvic radiograph would be able to provide most of the relevant information required for screening and pre-operative planning. A significant variability in static and dynamic spinopelvic characteristics exists amongst healthy volunteers without known spinal or hip pathology.
The stiff, arthritic hip leads to greater changes in pelvic tilt (changes are almost doubled), with an associated obligatory change in lumbar lordosis to maintain upright posture (lumbar lordosis is reduced to counterbalance the reduction in sacral slope). Following total hip arthroplasty and restoration of hip flexion, spinopelvic characteristics tend to change/normalize (towards those of age-matched healthy volunteers). The static spinopelvic parameters that are directly associated with increased risk of dislocation are lumbo-pelvic mismatch (pelvic incidence - lumbar lordosis angle >10°), high pelvic tilt (>19°), and low sacral slope when standing. A high combined sagittal index (CSI) when standing (>245°) is associated with increased risk of anterior instability, whilst low CSI when standing (<205°) is associated with increased risk of posterior instability. Aiming to achieve an optimum CSI when standing within 205-245° (with a narrower target for those with spinal disease) whilst ensuring the coronal cup orientation targets are achieved (inclination/version of 40/20 ±10°) is our preferred method. abstract_id: PUBMED:26883265 The effect of total hip arthroplasty on sagittal spinal-pelvic-leg alignment and low back pain in patients with severe hip osteoarthritis. Purpose: Sagittal spinopelvic malalignment has been reported in spinal disorders such as low back pain (LBP), and restoration of normal alignment is targeted when treating these disorders. Abnormal sagittal spinal-pelvic-leg alignment has been reported in patients with severe hip osteoarthritis (OA), who have a high prevalence of associated LBP. This prospective longitudinal study aimed to investigate changes in sagittal spinal-pelvic-leg alignment after total hip arthroplasty (THA) in patients with severe hip OA, and whether these changes contribute to LBP relief. Methods: Patients undergoing primary THA due to severe unilateral hip OA were recruited. Physical examination and X-ray films were taken to rule out any spinal disorder. Sagittal alignment of pelvis, hip, and spine was analyzed on lateral radiographs taken before (baseline) and 1 year after (follow-up) THA. Functional instruments were completed by patients, including the visual analog scale (VAS) for LBP, the Roland-Morris Disability Questionnaire (RMDQ), and the Harris Hip Score (HHS). Comparisons were carried out at baseline and follow-up, and between patients with and without LBP. Results: The 69 recruited patients showed significantly reduced hip flexion and improved global spinal balance at follow-up compared with baseline. LBP was reported by 39 patients (56.5 %) before surgery; at follow-up, 17 reported complete resolution, while 22 reported significant relief. Significant decreases in VAS and RMDQ scores in the lumbar spine and an increase in hip HHS were observed. Conclusions: THA in patients with severe hip OA could help correct abnormal sagittal spinal-pelvic-leg alignment and relieve comorbid LBP. Improvements in hip flexion and global spinal balance might be involved in the mechanism of LBP relief. Answer: Yes, degenerative lumbar spine disease (DDD) does influence femoroacetabular flexion in patients undergoing total hip arthroplasty (THA). A study found that patients with DDD spines experienced 10° less spine flexion and 10° more femoroacetabular flexion when moving from a standing to a sitting position compared with patients who had no radiographic signs of arthrosis (PUBMED:27020429).
This suggests that the mechanism by which patients with DDD of the lumbar spine achieve a sitting position differs from that of patients without spine arthrosis, with less spine flexion and more femoroacetabular flexion. This has clinical relevance for THA planning, as it may be important to identify which patients sit with less posterior pelvic tilt and which rotate their pelvises forward to achieve a sitting position, as both mechanisms can limit or reduce the functional anteversion of the acetabular component in a patient with THA (PUBMED:27020429).
Instruction: Doctor, is my teddy bear okay? Abstracts: abstract_id: PUBMED:34374472 What do children think about doctors' communication at the Teddy Bear Hospital? Aim: Excellent communication is essential for health professionals working with children. Teddy Bear Hospital (TBH) is an innovative method of developing paediatric communication skills in health-care students. By exploring the child's perspective of medical students' communication at the TBH, we sought to better understand the role TBH plays in the development of the communication skills in medical students. Methods: Semi-structured interviews were conducted with 31 children, aged 3-8 years old, who were attending a TBH run by third year medical students at the Royal Children's Hospital in Melbourne. These interviews were recorded and transcribed after which themes were generated by inductive content analysis using the programme NVivo 12. Results: Children used mostly positive language when describing interactions with teddy doctors. However, almost half of the children could not recall the medical students explaining why their teddy was sick or how their teddy would get better. Furthermore, many teddies returned from TBH with medical issues different to their initial presentation. Conclusions: The communication described at TBH was overwhelmingly positive with children describing little difference between medical students and actual doctors. However, the mismatch in teddy medical issues before and after a visit to TBH along with the lack of understanding on teddy health management plans, suggests the need for further evidence-based training in communication skills for medical students to improve their ability to communicate with very young children. abstract_id: PUBMED:23818510 Does the 'Teddy Bear Hospital' enhance preschool children's knowledge? A pilot study with a pre/post-case control design in Germany. The 'Teddy Bear Hospital' is a medical students' project, which has been increasingly established in many countries. To evaluate this concept, we examined the effects of a German Teddy Bear Hospital on children's knowledge relating to their body, health and disease. Using a quasi-experimental pre/post design, we examined 131 preschool children from 14 German kindergartens with pictorial interview-based scales. The analysis of covariance revealed that the children who visited the Teddy Bear Hospital had a significantly better knowledge concerning their body, health and disease than the children from the control group. This German Teddy Bear Hospital is a good health education vehicle for preschool children. abstract_id: PUBMED:18847160 Doctor, is my teddy bear okay? The "Teddy Bear Hospital" as a method to reduce children's fear of hospitalization. Background: Children report various types of fear in the context of hospitalization, such as fear of separation from the family, having injections and blood tests, staying in the hospital for a long time, and being told "bad news" about their health. Objectives: To examine the effects of the "Teddy Bear Hospital" method on preschool children's fear of future hospitalization. Methods: The study group comprised 41 preschool children aged 3-6.5 years (mean 5.1 +/- 0.7 years), and 50 preschool children, age matched and from a similar residential area, served as the control group. Assessment included a simple one-item visual analog scale of anxiety about hospitalization. 
This was assessed individually one day prior to the intervention and again a week after the intervention in both groups. Results: While baseline levels of anxiety were not different between groups [t(89) = 0.4, NS], children in the "Teddy Bear Hospital" group reported significantly lower levels of anxiety than the control group at follow-up. Conclusions: Our results indicate that by initiating a controlled pain-free encounter with the medical environment in the form of a "Teddy Bear Hospital", we can reduce children's anxiety about hospitalization. abstract_id: PUBMED:30220524 More than just teddy bears: Unconventional transmission agents in the operating room. Introduction: Surgical site infection (SSI) following orthopedic surgery can have a substantial impact on patients and families. The rate remains high, ranging from 0.5% to 8.5% in pediatric spine surgery. It is common to allow children to bring a teddy bear (or similar toy) to the surgical ward to help reduce the stress of surgery. We hypothesize that despite their known benefits for children, teddies would increase the bacterial load in the surgical room. Methods: A blinded descriptive study was conducted from June 2015 to September 2016. The study included children entering the hospital through the emergency ward for a traumatic cause requiring surgery. Patients admitted for infectious problems and those who had been hospitalized less than 6 months before the inclusion date were excluded. A picture of the teddy was taken and stored in a blind fashion. The AFNOR (Association française de normalisation) standardized rules for bacteriological surface control and the ISO/DIS 14698 protocol were strictly followed. Two independent observers performed blind bacteriologic analyses of the teddy bears with bacteria identification and colony counts. Photos of the teddy bears were then analyzed by two blinded, independent observers: one doctor and one parent from outside the hospital. Cleanliness and fluffiness of the toy were evaluated using a numeric scale. Results: Bacteria were identified on 100% of the 53 teddies included. The mean number of bacteria was 182.5±49.8 CFU/25 cm2. Eight teddies (15.1%) tested positive for potential pathogenic bacteria (two Staphylococcus aureus, one Acinetobacter ursingii, four Acinetobacter baumannii, one Pseudomonas stutzeri). Three teddies (5.7%) tested positive for fungi. The median cleanliness score was 2 (interquartile range (IQR)=1) if rated by the doctor and 2 (IQR=1) if rated by the parent. No statistical difference was found between these two values in the global teddy bear population. We found no statistical link between the number of CFUs and the cleanliness scores given by the doctor. The median fluffiness score given by the parent was 2 (IQR=1). Looking at the corresponding CFUs, we found a statistically significant difference between each stage of fluffiness, with a higher stage showing a higher CFU count (P<0.0001). Conclusion: Despite their documented benefits for the child, teddy bears are not appropriate in the surgical room.
The Teddy Bear Hospital (TBH) is an innovation whereby children bring their teddies while visiting volunteers who assume the role of healthcare practitioners. This approach is effective in reducing the children's anxieties about hospitalization and increasing their health knowledge. Therefore, our objective is to explore healthcare practitioners' (HCP) views on the content of TBH and its approach as a personal safety module toward preventing CSA. Eighteen in-depth interviews were conducted. Interviews were thematically analyzed. Participants suggested the TBH method as a good approach to teaching prevention of CSA among preschoolers. Four main themes emerged from this study: (1) educating children about personal safety, (2) moral values and faith as a medium to prevent child sexual abuse, (3) addressing social media use in children, and (4) general approach to content delivery. The involvement of parents is crucial. Addressing moral values and faith and the usage of social media platforms are also essential factors to look into. abstract_id: PUBMED:34057270 Integrating elements of teddy bear therapy into cognitive behavioral therapy for a child with obsessive-compulsive disorder: A case study. Problem: Childhood obsessive-compulsive disorder (OCD) can chronically affect functioning across a multitude of areas. Cognitive behavioral therapy (CBT) is well-evidenced as an effective treatment option; however, there is less research on how CBT for OCD can best be adapted to meet the specific needs of younger children. Integrating CBT with forms of therapy that incorporate play and externalization may be particularly appropriate for this age group. However, more research is needed detailing how this could be carried out in clinical settings. Methods: This study meets this need by describing the treatment of an 8-year-old boy with OCD. An evidence-based CBT approach was used, integrated with teddy-bear therapy (TBT). This study employs a single-case A-B design to explore the acceptability and benefits of using an integrated CBT/TBT treatment approach. Findings And Conclusions: A reduction in ritualistic behavior and anxiety was seen following treatment, with qualitative feedback from the client and his family showing the inclusion of TBT to be experienced as acceptable and useful. All therapy goals were met by the end of treatment, though the parental scores on the Revised Child Anxiety and Depression Scale indicated ongoing clinically significant OCD symptoms. Implications for clinical practice and future research are discussed. abstract_id: PUBMED:15529904 Teddy bear in the heart. In a patient with native aortic valve endocarditis, transoesophageal echocardiography yielded a teddy bear appearance, which has not been reported so far. A perivalvular abscess (right ear), the superior vena cava in cross section (left ear) and the dilated (post-stenotic) aortic root (face) made up the teddy bear. This was not a cuddlesome toy but an ominous sign. The genesis of perivalvular abscess as well as the role of transoesophageal echocardiography in its diagnosis and treatment are briefly reviewed. abstract_id: PUBMED:22261387 "Teddy bear granuloma", a rare condition: a case report of a 3-year-old child. Conjunctival synthetic fiber granulomas, or "Teddy bear granulomas", are rare granulomatous responses to synthetic fabric fibers. We report the case of a 3-year-old boy with no prior infectious or traumatic history, brought in by his parents for an incidentally discovered conjunctival growth in his right eye.
Slit lamp examination revealed a 10-mm growth in the inferior fornix surrounding a small greyish foreign body. Surgical excision and histopathology revealed a granulomatous inflammatory cell response with foreign body giant cells surrounding exogenous material. This foreign material was birefringent in polarized light, very suggestive of synthetic fabric fibers, which permitted the diagnosis of Teddy bear granuloma. Synthetic fiber granulomas present in children as unilateral, more or less inflammatory growths in the inferior conjunctival fornix. Surgical excision with histopathology makes the diagnosis and effects the cure. abstract_id: PUBMED:35230194 Clinical and histopathological features of conjunctival "Teddy bear" granuloma: A case series. Purpose: To describe the clinical features, histopathological findings, and prognosis of conjunctival Teddy bear granuloma (TBG) diagnosed in a Mexican ophthalmologic referral center over a period of 64 years. Methods: We reviewed clinical and histopathological material from all patients with documented conjunctival TBG. Patients' age, gender, location, clinical signs and symptoms, duration, treatment, and specimen size were evaluated. Formalin-fixed, paraffin-embedded, 5-micron hematoxylin-eosin-stained slides and periodic acid-Schiff special stain, as well as examination under polarized light microscopy, were performed in all cases. Results: A total of 5 conjunctival TBG were collected. The ages of the patients ranged between 2 and 11 with a mean of 6 years. The female to male ratio was 4:1. The right inferior fornix was involved in 4 cases (80%) with a clinical diagnosis of conjunctival mass of unknown etiology. Treatment consisted of surgical removal by slit-lamp biomicroscopy (3 cases) and in the operating room (2 cases). No complications were reported during a short follow-up period. Histopathologically, numerous cross-sections of synthetic and non-synthetic fibers surrounded by multinucleated foreign body giant cells, epithelioid histiocytes, lymphocytes, plasma cells and sparse neutrophils were seen. Polarized light microscopy demonstrated numerous birefringent, colorful foreign fibers. The follow-up period of the patients ranged between 1 and 32 weeks with a mean of 10 weeks. Conclusions: Most patients with TBG are children with vague, non-specific symptoms that hinder precise clinical diagnosis. The treatment of choice involves surgical excision of the foreign body granuloma followed by topical corticosteroids. All surgical specimens obtained must be sent for histopathological examination. Special awareness is recommended during COVID-19 pandemic lockdown to prevent potential exposure to fibers that may cause conjunctival TBG. abstract_id: PUBMED:9128510 Teddy says "Hi!": Teddy bear clinics revisited. Teddy bear/doll clinics continue to offer a feasible, effective opportunity to reach the community's children and their health information needs. Many thanks to Sunny B. Lee, RN, BS (Swedish Covenant Hospital, Chicago), and Kathy Powell, RN, PhD, and Barb Pierce, RN, MS (Children's Hospital, Birmingham). Answer: The "Teddy Bear Hospital" (TBH) is an innovative method used to help reduce children's fear of hospitalization and to develop paediatric communication skills in health-care students. Children often report fears related to hospitalization, such as fear of separation from their family, having injections and blood tests, staying in the hospital for a long time, and receiving bad news about their health (PUBMED:18847160).
TBH aims to create a controlled, pain-free encounter with the medical environment, which has been shown to significantly reduce children's anxiety about hospitalization (PUBMED:18847160). Children generally use positive language when describing their interactions with the medical students, or "teddy doctors," at TBH, and they perceive little difference between these students and actual doctors (PUBMED:34374472). However, some children may not recall the medical students explaining why their teddy was sick or how it would get better, and there can be a mismatch in teddy medical issues before and after a visit to TBH (PUBMED:34374472). This suggests that while TBH is a positive experience, there may be room for improvement in the communication skills training of medical students to ensure they can effectively communicate with very young children (PUBMED:34374472). In addition to reducing fear, TBH has also been found to enhance preschool children's knowledge about their body, health, and disease (PUBMED:23818510). It serves as a health education vehicle, helping children to better understand these concepts in a less intimidating environment (PUBMED:23818510). While TBH has many benefits, it is important to note that teddy bears or similar toys brought into the surgical ward can increase the bacterial load in the surgical room, potentially posing a risk for surgical site infections (PUBMED:30220524). Therefore, despite their comfort to children, teddy bears may not be appropriate in the surgical room (PUBMED:30220524). In summary, the Teddy Bear Hospital is a valuable tool for reducing children's fear of hospitalization and enhancing their health knowledge, but it is important to ensure that the communication skills of medical students are effectively developed and that the potential risks of bringing toys into sterile environments are managed.
Instruction: Is there a role for bi-atrial pacing resynchronisation therapy in the management of drug-refractory atrial fibrillation in patients unsuitable for left atrial ablation? Abstracts: abstract_id: PUBMED:21127382 Is there a role for bi-atrial pacing resynchronisation therapy in the management of drug-refractory atrial fibrillation in patients unsuitable for left atrial ablation? Background: This retrospective cohort study evaluated whether the long-term outcome of atrial resynchronisation therapy using bi-atrial pacing (BiaP) to treat atrial fibrillation (AF) was effective in patients deemed unfit for left atrial (LA) ablation procedures. Methods And Results: The patient population comprised 2 groups: those deemed suitable for LA ablation (n = 14) and those who were not (n = 17). Both groups underwent BiaP, and outcomes were evaluated by comparing symptoms, AF duration, admissions and antiarrhythmic drugs (AAD) for an equal period of time pre and post implantation. Median follow-up was 24 months (range 8-66 months) for the unsuitable group and 31 months (range 7-84 months) for the suitable group. Efficacy in reduction of both AF and symptoms was similar (P = 0.44) in both groups (unsuitable group: 13/17; suitable group: 8/14). There was significant improvement in median AF episodes/week pre and post BiaP in both groups (unsuitable group AF reduction: 5 days/week, P = 0.001; suitable group AF reduction: 4.9 days/week, P = 0.03); the improvement was similar in both groups (P = 0.33). There was a significant reduction in the median number of admissions for AF in both groups (unsuitable group: P = 0.003; suitable group: P = 0.01) and this reduction was also similar (P = 0.70). The median number of AAD was also reduced to a similar degree (P = 0.83) in both groups (suitable group: P = 0.004; unsuitable group: P = 0.001). Conclusions: Atrial resynchronisation therapy is effective in the long-term management of drug-resistant AF in patients unsuitable for LA ablation, leading to significant reductions in symptoms, AF duration, admissions and AAD. abstract_id: PUBMED:28687248 Device Therapy for Rate Control: Pacing, Resynchronisation and AV Node Ablation. Atrioventricular node ablation (AVNA) is generally reserved for patients whose atrial fibrillation (AF) is refractory to all other therapeutic options, since the recipients will often become pacemaker dependent. In such patients, this approach may prove particularly useful, especially if a tachycardia-induced cardiomyopathy is suspected. Historically, an "ablate and pace" approach has involved AVNA and right ventricular pacing, with or without an atrial lead. There is also an evolving role for atrioventricular node ablation in patients with AF who require cardiac resynchronisation therapy for treatment of systolic heart failure. A mortality benefit over pharmacotherapy has been demonstrated in observational studies and this concept is being further investigated in multi-centre randomised controlled trials. abstract_id: PUBMED:37187494 His-Purkinje conduction system pacing and atrioventricular node ablation in treatment of persistent atrial fibrillation refractory to multiple ablation procedures: A case report. In patients with symptomatic atrial fibrillation refractory to optimal medical therapy, atrioventricular node ablation followed by permanent pacemaker implantation is an effective treatment option. A 66-year-old woman with symptomatic persistent atrial fibrillation refractory to multiple ablation procedures was referred to our institution.
After optimal drug therapy, the patient still had obvious symptoms. Sequential His-Purkinje conduction system pacing and atrioventricular node ablation were performed. Left bundle branch pacing was used as a backup pacing method if thresholds of His bundle pacing were too high or loss of His bundle capture occurred in the follow-up. At the 6-month follow-up, the European Heart Rhythm Association classification for AF was improved, the score of the Atrial Fibrillation Effect on Quality of Life was enhanced, and the 6-Minute Walk Test was ameliorated. The present case was subjected to His-Purkinje conduction system pacing in combination with atrioventricular node ablation as treatment for a symptomatic persistent atrial fibrillation refractory to multiple ablation procedures, and this procedure alleviated symptoms and improved the quality of life in a short-term follow-up. abstract_id: PUBMED:25080310 Left atrial reverse remodeling and prevention of progression of atrial fibrillation with atrial resynchronization device therapy utilizing dual-site right atrial pacing in patients with atrial fibrillation refractory to antiarrhythmic drugs or catheter ablation. Introduction: Dual-site right atrial pacing (DAP) produces electrical atrial resynchronization but its long-term effect on the atrial mechanical function in patients with refractory atrial fibrillation (AF) has not been studied. Methods: Drug-refractory paroxysmal (PAF) and persistent AF (PRAF) patients previously implanted with a dual-site right atrial pacemaker (DAP) with minimal ventricular pacing modes (AAIR or DDDR mode with long AV delay) were studied. Echocardiographic structural (left atrial diameter [LAD] and left ventricular [LV] end diastolic diameter [EDD], end systolic diameter [ESD]) and functional (ejection fraction [EF]) parameters were serially assessed prior to, after medium-term (n = 39) and long-term (n = 34) exposure to DAP. Results: During medium-term follow-up (n = 4.5 months), there was improvement in left atrial function. Mean peak A wave flow velocity increased with DAP as compared to baseline (75 ± 19 vs. 63 ± 23 cm/s, p = 0.003). The long-term impact of DAP was studied with baseline findings being compared with last follow-up data with a mean interval of 37 ± 25 (range 7-145) months. Mean LAD declined from 45 ± 5 mm at baseline to 42 ± 7 mm (p = 0.003). Mean LVEF was unchanged from 52 ± 9 % at baseline and 54 ± 6 % at last follow-up (p = 0.3). There was no significant change in LV dimensions with mean LVEDD being 51 ± 6 mm at baseline and 53 ± 5 mm at last follow-up (p = 0.3). Mean LVESD also remained unchanged from 35 ± 6 mm at baseline to 33 ± 6 mm at last follow-up (p = 0.47). During long-term follow-up, 30 patients (89 %) remained in sinus or atrial paced rhythm as assessed by device diagnostics at 3 years. Conclusions: DAP can achieve long-term atrial reverse remodeling and preserve LV systolic function. DAP when added to antiarrhythmic drug (AAD) and/or catheter ablation (ABL) maintains long-term rhythm control and prevents AF progression in elderly refractory AF patients. Reverse remodeling with DAP may contribute to long-term rhythm control. abstract_id: PUBMED:11992027 Catheter ablation of inducible atrial flutter, in combination with atrial pacing and antiarrhythmic drugs ("hybrid therapy") improves rhythm control in patients with refractory atrial fibrillation. 
Unlabelled: Atrial flutter or tachycardia may coexist with atrial fibrillation [AF] and can be treated with ablation techniques in an attempt to reduce the total AF burden. The role of ablation of latent atrial tachyarrhythmias elicited at electrophysiologic study, in conjunction with atrial pacing and antiarrhythmic drugs, in patients with refractory AF has not been evaluated. We evaluated the efficacy of catheter ablation of electrically induced atrial flutter or atrial tachycardia in improving rhythm control in patients with refractory AF. Methods: Consecutive patients with refractory AF, with spontaneous atrial flutter (Group 1) or without spontaneous atrial flutter (Group 2), underwent programmed stimulation in a baseline drug-free state. All patients had electrically induced atrial flutter or tachycardia. Radiofrequency ablation of the arrhythmia substrate was performed in all patients. Primary endpoints evaluated for patient outcome in both groups included maintenance of rhythm control and freedom from recurrent atrial tachyarrhythmias. Results: Forty-three patients, with a mean age of 66 +/- 13 years, were studied. Group 1 consisted of 22 patients while Group 2 had 21 patients. Ablation of the tricuspid valve-inferior vena caval isthmus was performed in 41 patients who had common atrial flutter induced at electrophysiologic study. Ablation of other atrial sites was performed in 8 patients with induced atypical flutter and 4 patients with induced atrial tachycardia. Ten of these patients had ablation of more than one arrhythmia. Seventeen patients (40%) had atrial pacing instituted and 28 patients remained on a class 1/3 antiarrhythmic drug. During a mean follow-up of 26 +/- 14 months, 33 patients (82.5%) remained in rhythm control. Actuarial analysis showed 96% of patients in rhythm control at 6 months, 94% at 12 months, and 90% at 24 months. Freedom from symptomatic AF recurrence was 64% at 6 months, 58% at 12 months, and 42% at 24 months. The outcome for both of these endpoints was similar for Group 1 and Group 2 (p = NS). The AF-free interval increased significantly from 7 +/- 9 days to 172 +/- 121 days (p < 0.01) after ablation. This increase was again similar in both groups. In the 14 patients who did not receive atrial pacing and who remained on the same class 1/3 antiarrhythmic drug, the AF-free interval increased from 18 +/- 17 days to 212 +/- 102 days (p < 0.01). Conclusions: We conclude that electrophysiologic studies can elicit latent atrial flutter or tachycardia in patients with refractory AF without spontaneous monomorphic atrial tachyarrhythmias. Catheter ablation of electrically induced atrial flutter or tachycardia, either alone or with atrial pacing and an antiarrhythmic drug, may improve rhythm control and reduce AF recurrences. This is similar in patients with and without spontaneous atrial flutter and refractory AF.
Patients were randomized to atrial pacing or no pacing therapy. The time to first recurrence of sustained PAF was the primary study outcome event. Following AV node ablation, patients were randomized to the DDDR or VDD mode in a crossover study design. Patients were followed in each mode for 6 months. The time course of PAF recurrence was compared for each pacing mode. abstract_id: PUBMED:31705185 Long-term experience of atrioventricular node ablation in patients with refractory atrial arrhythmias. Atrial fibrillation and other atrial tachyarrhythmias are increasing with age and concomitant morbidity. First options in symptomatic patients are drug treatment and catheter ablation. Nevertheless, a considerable number of patients suffer from refractory atrial tachyarrhythmias despite treatment. Atrioventricular node ablation (AVNA) may be helpful in many of these patients. Therefore, we investigated AVNA patients with a long-term follow-up. We enrolled 82 patients with a follow-up longer than 1 year receiving AVNA for drug- and ablation-resistant atrial tachyarrhythmias (AA) in a retrospective manner. Mean follow-up duration was 48 ± 24 months. 50% of the patients initially received AVNA to optimize biventricular pacing in cardiac resynchronization therapy, the other 50% because of refractory symptomatic tachyarrhythmias. Persistent AV block was achieved in every patient. Symptom relief and patient satisfaction were high during follow-up. Due to system upgrades there were 63% of patients with a biventricular system during follow-up. In these patients, left-ventricular ejection fraction (LV-EF) increased by 7% (42-49%) after ablation. AVNA is effective in increasing biventricular pacing as well as for symptom relief in patients with refractory atrial tachyarrhythmias. AVNA should be considered as a valuable option in patients with refractory atrial tachyarrhythmias lacking other treatment options. abstract_id: PUBMED:28496905 Role of Bi-Atrial Pacing In Slowing The Progression of Paroxysmal Atrial Fibrillation To Permanent Atrial Fibrillation. Introduction: Bi-atrial lead placement combined with atrial overdrive pacing has demonstrated a reduction in percent time mode switched and mode switches per day. This retrospective analysis compared long term outcomes of patients with right atrial overdrive pacing alone (DAO) to patients having atrial overdrive with bi-atrial leads (BIA) in slowing the progression of paroxysmal atrial fibrillation (PAF) to permanent continuous atrial fibrillation (CAF). Methods: Thirty-three patients age 76.6 (+/-1.96) from our prior investigation were selected. The DAO control group (N=16) had received a standard right atrial pacing lead. The BIA group (N=17) had pacing leads placed in the right atrium and coronary sinus. Patients were followed for a mean 1217 days (+/-838). Days of CAF was classified as the date of final mode switch until analysis. Results: A total of 40,171 follow-up days were evaluated. The mean follow-up for both cohorts was 1217 days (+/-838). The DAO group consisted of 15,318 days (mean 957 +/-761) and the BIA group 24,853 days (mean 1461 +/-854). A lower total number of days were spent in CAF in the BIA group versus the DAO group, 1380 vs 2197 respectively. Corrected for follow-up duration, 5.55% days in CAF was seen in the BIA group vs. 14.34% in the DAO group which did not reach statistical significance. 
Conclusions: Although BIA overdrive pacing initially demonstrated reduced time in mode switch compared to DAO alone, this analysis did not detect a reduction in progression to CAF. More subjects or a longer follow-up would be needed. abstract_id: PUBMED:37304966 The role of conduction system pacing in patients with atrial fibrillation. Conduction system pacing (CSP) has emerged as a promising novel delivery method for Cardiac Resynchronisation Therapy (CRT), providing an alternative to conventional biventricular epicardial (BiV) pacing in indicated patients. Despite increasing popularity and widespread uptake, CSP has rarely been specifically examined in patients with atrial fibrillation (AF), a cohort which forms a significant proportion of the heart failure (HF) population. In this review, we first examine the mechanistic evidence for the importance of sinus rhythm (SR) in CSP by allowing adjustment of atrioventricular delays (AVD) to achieve the optimal electrical response, and thus, whether the efficacy of CSP may be significantly attenuated compared to conventional BiV pacing in the presence of AF. We next evaluate the largest clinical body of evidence in this field, related to patients receiving CSP following atrioventricular nodal ablation (AVNA) for AF. Finally, we discuss how future research may be designed to address the vital question of how effective CSP is in AF patients, and the potential hurdles we may face in delivering such studies. abstract_id: PUBMED:24054481 The role of ablation of the atrioventricular junction in patients with heart failure and atrial fibrillation. Ablation of the atrioventricular junction (AVJ) is a technically easy procedure that is safe and has a high success rate as an intervention for effective ventricular rate control in patients with symptomatic atrial fibrillation. AVJ ablation has been reported to improve quality of life, left ventricular ejection fraction, and exercise duration in these patients and to minimize the incidence of inappropriate shocks. Because right ventricular pacing after AVJ ablation may result in a decrease in left ventricular function and worsening of heart failure symptoms, there is increasing evidence to support the effectiveness of cardiac resynchronization therapy in atrial fibrillation populations. Answer: Yes, there is a role for bi-atrial pacing resynchronization therapy in the management of drug-refractory atrial fibrillation in patients unsuitable for left atrial ablation. A retrospective cohort study evaluated the long-term outcome of atrial resynchronization therapy using bi-atrial pacing (BiaP) in such patients and found it to be effective. The study included patients deemed unsuitable for left atrial (LA) ablation and compared their outcomes with those suitable for the procedure. Both groups underwent BiaP and showed similar efficacy in the reduction of atrial fibrillation (AF) and symptoms. There was a significant improvement in median AF episodes per week, a reduction in the median number of admissions for AF, and a decrease in the median number of antiarrhythmic drugs (AAD) used post-implantation in both groups. The study concluded that atrial resynchronization therapy leads to significant reductions in symptoms, AF duration, admissions, and AAD in the long-term management of drug-resistant AF in patients unsuitable for LA ablation (PUBMED:21127382).
Instruction: Is there a unique measuring method to assess obesity? Abstracts: abstract_id: PUBMED:18842512 Is there a unique measuring method to assess obesity? Unlabelled: In the literature, there is no uniformly accepted method available for assessing the degree of obesity. Aim: To determine to what extent insulin resistance and serum levels of leptin and resistin are altered in persons categorized on the basis of body-mass index (BMI), body fat percentage, and abdominal circumference. Methods: 101 volunteer boys and 115 girls participated in the studies. Body height was measured; body mass, abdominal circumference, and body composition were determined with an InBody3 bioimpedance instrument. Body mass index and body fat percentage were calculated by the instrument. Concentrations of serum glucose, insulin, leptin, and resistin were determined. Insulin resistance was calculated using the homeostasis model (HOMA-IR). Results: Body fat percentage and serum levels of leptin and resistin were significantly higher in girls than in boys. Increases in BMI, body fat percentage, and abdominal circumference were associated with significant elevation of both HOMA-IR and serum leptin concentrations. In overweight boys categorized by body fat percentage as obese, the serum leptin concentrations were significantly higher than in their non-obese counterparts. Conclusion: Determination of body composition would be important concerning the follow-up of biochemical changes occurring in the body during the course of both epidemiological studies and nutritional interventions. abstract_id: PUBMED:35189956 My nutrition index: a method for measuring optimal daily nutrient intake. Background: Adequate nutrition is essential for individual and population level health. However, determining adequacy of daily nutrient intake in research studies is often challenging given the unique nutritional needs of individuals. Herein, we examine construct, predictive, criterion, content, and concurrent validity of a dietary analytic tool, My Nutrition Index (MNI), for measuring nutrient intake in relation to personalized daily nutrient intake guidelines. MNI gauges adequacy of an individual's daily nutrient intake based on his or her unique demographic and lifestyle characteristics. MNI accounts for potential adverse effects of inadequate and excess nutrient consumption. Methods: MNI, calculated based on 34 nutrients, provides an overall index score ranging from 0 to 100, with higher scores reflecting a more nutritious diet. We calculated MNI scores for 7154 participants ages 18-65 in the National Health and Nutrition Examination Surveys (2007-2014) by using average nutrient intakes from two 24-h dietary recalls. Survey-weighted binary logistic regression models were used to assess associations between MNI scores and obesity, depression, health perceptions, and past or present cardiovascular disease. Results: Higher MNI scores were associated with lower prevalence of self-reported cardiovascular disease (OR = 0.69, CI: 0.52, 0.92, p = 0.012), depression (OR = 0.76, CI: 0.65, 0.90, p < 0.001), and obesity (OR = 0.92, CI: 0.87, 0.99, p = 0.016), as well as more favorable health perceptions (OR = 1.24, CI: 1.13, 1.37, p < 0.001). Conclusions: MNI provides an individualized approach for measuring adequacy/sufficiency of daily nutrient intake that can validly be employed to assess relationships between nutrition and health outcomes in research studies.
abstract_id: PUBMED:31119911 Impact of body fat and obesity on tissue dielectric constant (TDC) as a method to assess breast cancer treatment-related lymphedema (BCRL). Obesity is linked to the risk of breast cancer and treatment-related lymphedema (BCRL). Thus, knowledge of how obesity, or more specifically total body fat percentage (TBF) and body mass index (BMI), affect measurements that are used to detect or track lymphedema is clinically important. Tissue dielectric constant (TDC) is one measure used to help characterize lymphedema features, detect its presence, and assess treatment-related changes. The goal of this research was to determine the extent to which TDC values depend on TBF and BMI. TDC was measured on both forearms (2.5 mm depth) in 250 women (18-72 years) along with TBF (impedance, 50 kHz). TBF was 12.2%-54.4% (median = 29.3%) and BMI was 14.7 kg/m2-44.3 kg/m2 (median = 22.6 kg/m2). TDC values and interarm ratios were compared between subgroups that had TBF and BMI values in lower vs. upper quartiles. Subjects in the upper quartile had slightly lower TDC values (1.3 TDC units, p < 0.01) that was at most a 5% differential. Contrastingly, TDC interarm ratios were not dependent on TBF or BMI levels. These findings suggest that when tracking lymphedema changes using the TDC method, treatment-related or temporal changes in a woman's TBF or BMI are unlikely to significantly impact TDC values or their interarm ratios. abstract_id: PUBMED:31667091 A novel method for measuring diet-induced thermogenesis in mice. Diet-induced thermogenesis (DIT) refers to energy expenditure (EE) related to food consumption. Enhancing DIT can lead to weight loss. Factors that increase DIT are expected to lower body mass index and body fat mass. Although various methods have been developed for measuring DIT in humans, there is currently no method available for calculating absolute DIT values in mice. Therefore, we attempted to measure DIT in mice by applying the method more commonly used for humans. Mouse energy metabolism was first measured under fasting conditions; EE was plotted against the square root of the activity count, and a linear regression equation was fit to the data. Then, energy metabolism was measured in mice that were allowed to feed ad libitum, and EE was plotted in the same way. We calculated the DIT by subtracting the predicted EE value from the fed EE value for the same activity count. The methodology for measuring DIT in mice may be helpful for researching ways of combatting obesity by increasing DIT. • The methodology for measuring absolute DIT values in mice is developed. • For mice, the proportion of DIT compared with calorie intake and EE are 12.3% and 21.7%, respectively. abstract_id: PUBMED:6485692 A method for measuring the size of the gastric outlet in obesity surgery. A simple method was evolved for measuring the gastric outlet size following surgery for obesity. Measurements are made through a fiberoptic endoscope, using a Fogarty catheter. A "phantom" study showed the method to be accurate and superior to endoscopic estimation without a reference balloon. High interexaminer and intraexaminer reproducibility was confirmed by independent measurements in patients. abstract_id: PUBMED:21965061 Accuracy of measuring tape and vertebral-level methods to determine shoulder internal rotation. Background: Goniometers can be used to assess shoulder ROM with reasonable accuracy, but not internal rotation.
Vertebral level, as determined by the hand-behind-the-back method, is used frequently but its reproducibility is questionable. We therefore devised a new measuring tape-based method for determining vertebral level. Questions/purposes: We (1) compared the accuracy of a measuring tape-based and conventional vertebral-level method; (2) determined whether BMI affects their accuracy; and (3) devised a formula for converting distances measured using a measuring tape to vertebral levels. Patients And Methods: We assessed internal rotation in 61 patients with shoulder pain. An electrode was taped to the skin where the thumb reached maximally behind the back. The vertebral-level method involved determining the vertebral level of the electrode by palpating bony landmarks, whereas the measuring tape method involved measuring the distance between the C7 spinous process and the electrode. True vertebral levels of the electrode were confirmed by radiography. Results: In nonobese patients, the accuracies of the upper thoracic and lumbar-level measurements were better for the measuring tape method than the vertebral-level method (r = 0.861 and 0.700, respectively, in the upper thoracic region; 0.913 and 0.710, respectively, in the lumbar region). Patient BMI affected the accuracy of the vertebral-level method but not that of the measuring tape method. The distances obtained using the measuring tape method could be converted into vertebral-level units using the formula: estimated vertebral level = 0.031 × [distance between C7 spinous process and thumb behind back] - 0.044 × [patient height] + 7.277. Conclusions: The measuring tape-based method reflected shoulder internal rotation with higher accuracy than the vertebral-level method, and unlike the vertebral-level method, the measuring tape method was not affected by obesity. Level Of Evidence: Level II, diagnostic study. See Guidelines for Authors for a complete description of levels of evidence. abstract_id: PUBMED:1202548 Measuring body image. Preliminary report on a new method. A new, very simple and inexpensive method for measuring body image is described. A number of female patients with different somatic complaints were investigated. A specific patterning of body image related to the localization of somatic complaints is revealed. Anorexic and obese patients have an overall larger body image, while the distortions in other conditions mostly involve elevation in the vertical axis. Different aspects of this way of measuring body image are discussed, but the difference in patterning is clear. This is a preliminary report on the new method. abstract_id: PUBMED:26199893 Wound Measurement Techniques: Comparing the Use of Ruler Method, 2D Imaging and 3D Scanner. The statistics on the growing number of non-healing wounds are alarming. In the United States, chronic wounds affect 6.5 million patients.
An estimated US $25 billion is spent annually on treatment of chronic wounds and the burden is rapidly growing due to increasing health care costs, an aging population and a sharp rise in the incidence of diabetes and obesity worldwide.(1) Accurate wound measurement techniques will help health care personnel to monitor the wounds, which will indirectly help improve care.(7,9) The clinical practice of measuring wounds has not improved even today.(2,3) A common method like the ruler method to measure wounds has poor interrater and intrarater reliability.(2,3) Measuring the greatest length by the greatest width perpendicular to the greatest length, the perpendicular method, is more valid and reliable than other ruler-based methods.(2) Another common method like acetate tracing is more accurate than the ruler method but still has its disadvantages. These common measurement techniques are time consuming with variable inaccuracies. In this study, volumetric measurements taken with a non-contact 3-D scanner are benchmarked against the common ruler method, acetate grid tracing, and 2-D image planimetry volumetric measurement technique. A liquid volumetric fill method is used as the control volume. Results support the hypothesis that the 3-D scanner consistently shows accurate volumetric measurements in comparison to standard volumetric measurements obtained by the waterfill technique (average difference of 11%). The 3-D scanner measurement technique was found more reliable and valid compared to the other three techniques, the ruler method (average difference of 75%), acetate grid tracing (average difference of 41%), and 2D planimetric measurements (average difference of 52%). Acetate tracing showed more accurate measurements compared to the ruler method (average difference of 41% (acetate tracing) compared to 75% (ruler method)). Improving the accuracy in measuring chronic wounds might improve overall care of patients with non-healing wounds. This study consistently shows that the 3-D scanner is a more accurate, quicker, and safer method for measuring wounds. abstract_id: PUBMED:26850199 Novel method to assess arterial insufficiency in rodent hind limb. Background: Lack of techniques to assess maximal blood flow capacity thwarts the use of rodent models of arterial insufficiency to evaluate therapies for intermittent claudication. We evaluated femoral vein outflow (VO) in combination with stimulated muscle contraction as a potential method to assess functional hind limb arterial reserve and therapeutic efficacy in a rodent model of subcritical limb ischemia. Materials And Methods: VO was measured with perivascular flow probes at rest and during stimulated calf muscle contraction in young, healthy rats (Wistar Kyoto, WKY; lean Zucker rats, LZR) and rats with cardiovascular risk factors (spontaneously hypertensive [SHR]; obese Zucker rats [OZR]) with acute and/or chronic femoral arterial occlusion. Therapeutic efficacy was assessed by administration of Ramipril or Losartan to SHR after femoral artery excision. Results: VO measurement in WKY demonstrated the utility of this method to assess hind limb perfusion at rest and during calf muscle contraction. Although application to diseased models (OZR and SHR) demonstrated normal resting perfusion compared with contralateral limbs, a significant reduction in reserve capacity was uncovered with muscle stimulation. Administration of Ramipril and Losartan demonstrated significant improvement in functional arterial reserve.
Conclusions: The results demonstrate that this novel method to assess distal limb perfusion in small rodents with subcritical limb ischemia is sufficient to unmask perfusion deficits not apparent at rest, detect impaired compensation in diseased animal models with risk factors, and assess therapeutic efficacy. The approach provides a significant advance in methods to investigate potential mechanisms and novel therapies for subcritical limb ischemia in preclinical rodent models. abstract_id: PUBMED:29200824 Pharmacodynamic testing and new validated HPLC method to assess the interchangeability between multi-source orlistat capsules. Background: Orlistat is an irreversible inhibitor of the lipase enzyme that prevents triglycerides from being digested, thereby inhibiting triglyceride hydrolysis and absorption. The resultant reduced calorie uptake enables a positive effect on weight control. Systemic absorption of the drug is, therefore, not necessary for its mode of action. An alternative in vitro study (pharmacodynamic) has been introduced for this drug, as in vivo bioavailability studies are irrelevant with regard to the achievement of the product's intended purposes. Objectives: To develop a new validated high-performance liquid chromatography (HPLC) method for the analysis of orlistat and to assess the potency and equivalence of three orlistat formulations using the pharmacodynamic method as a surrogate indicator of pharmaceutical interchangeability. Methods: A new HPLC method was developed for the analysis and for the dissolution studies of orlistat in capsules. Pancreatic lipase activity was measured for three different capsule products: Orlislim®, Slimcare®, and Xenical® (G1, G2, and the brand, respectively). Porcine pancreatic lipase and p-nitrophenyl butyrate (PNPB) were placed in a pH 7.4 reaction buffer at 37°C, and substrate hydrolysis was monitored by measuring absorbance changes at 410 nm; this was repeated on six capsules of each product. The inhibition was expressed as the concentration of product that inhibited 50% of the activity of pancreatic lipase (IC50). Results: The new analytical method was suitable for orlistat analysis. Values of IC50 from regression lines and equations were 6.14, 8.43, and 7.80 μg/mL for Orlislim®, Xenical®, and Slimcare®, respectively. Conclusion: Pharmacodynamic studies of lipase inhibition could be used to support in vitro dissolution, which demonstrates interchangeability between generic and branded orlistat capsules. Moreover, it could be suggested as an alternative tool to bioequivalence studies for orlistat oral products. Answer: No, there is not a uniformly accepted unique measuring method to assess obesity. Various methods are used to assess the degree of obesity, including body mass index (BMI), body fat percentage, and abdominal circumference. These methods can be associated with biochemical changes in the body, such as alterations in insulin resistance and in serum levels of leptin and resistin, which can be significant in understanding obesity-related health issues (PUBMED:18842512). Each method has its own advantages and limitations, and the choice of method may depend on the specific context or research objectives.
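For reference, the anthropometric and biochemical indices cited in this answer are computed with standard formulas; the exact conventions and cut-offs vary between studies, so the expressions below are the commonly quoted forms rather than values taken from the abstracts: BMI = body mass (kg) / height (m)^2, and HOMA-IR = [fasting glucose (mmol/L) × fasting insulin (µU/mL)] / 22.5. As a worked example under these conventions, a person weighing 80 kg at a height of 1.75 m has BMI = 80 / 1.75^2 ≈ 26.1 kg/m^2, and a fasting glucose of 5.0 mmol/L with a fasting insulin of 10 µU/mL gives HOMA-IR = (5.0 × 10) / 22.5 ≈ 2.2.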
Instruction: Hepatitis E infection in patients with severe alcoholic hepatitis: is there a place for systematic screening? Abstracts: abstract_id: PUBMED:26308710 Hepatitis E infection in patients with severe alcoholic hepatitis: is there a place for systematic screening? Background: One study has suggested that markers of acute hepatitis E virus (HEV) infection are present in 3.6% of patients with severe alcoholic hepatitis (AH). However, these preliminary results have not been validated, and the impact of HEV infection on 6-month survival is unknown. Aims: The aims of this study were to evaluate the prevalence of HEV infection markers in an external cohort of patients with histologically proven severe AH and to assess the impact of markers of acute HEV infection on 6-month survival and the need for liver transplantation (LT). Patients And Methods: Patients admitted for severe AH from January 2008 to June 2014 were analysed. HEV serology (IgM and IgG) was retrospectively performed. Results: Ninety-three patients were analysed (male sex 77.4%, age 53±9 years, Maddrey discriminant function 65±32, MELD score 24±6). Six patients (6.5%) had markers of acute HEV infection (IgM+ and IgG+), 11 (11.8%) had markers of past HEV infection (IgG+ and IgM-) and 76 (81.7%) had negative serology (IgM- and IgG-). Initial presentation and biological characteristics were not different between IgM+ and IgM- patients, except for the aspartate aminotransferase level (P<0.001). Markers of acute HEV infection had no impact on response to corticosteroids, 1-, 3- or 6-month survival, or the need for LT. Three patients showed symptomatic acute HEV infection at the onset of acute AH; two were treated with ribavirin during the acute phase, of whom one died and one underwent LT. Conclusion: Markers of acute HEV infection were present in 6.5% of patients in our cohort of cirrhotics with histologically proven severe AH, without any impact on short-term or long-term outcome. Whether systematic screening for acute HEV infection in this population should be performed remains an unsolved question. abstract_id: PUBMED:24904954 Hepatitis E infection in patients with severe acute alcoholic hepatitis. Background & Aims: Hepatitis E virus (HEV) infection is a known cause of acute-on-chronic liver failure in developing countries, but its role in Western countries remains unknown. The HEV burden in the setting of severe acute alcoholic hepatitis (AAH) was assessed. Methods: Patients admitted for severe AAH from 2007 to 2013, with available sera and histologically proven AAH, were included and managed according to current European guidelines. At admission, clinical and biological characteristics were collected; HEV serology and RNA detection were retrospectively performed. Results: Eighty-four patients were included. Mean age was 50.8 ± 9.6 years, 65.5% were male, 91.7% were cirrhotic and 33.3% presented with encephalopathy. Mean MELD and Maddrey scores were 32.4 ± 11.4 and 73.3 ± 37, respectively. Liver biopsy showed mild, moderate and severe hepatitis in 25 (29.8%), 23 (27.4%) and 32 (38.1%) patients, respectively. Steroids were given to 61 patients (72.6%), of whom 35 (57.4%) presented corticoresistance (mean Lille score: 0.78 ± 0.21). During hospitalization, 24 patients (28.6%) died and 11 (13.1%) were transplanted. Three patients (3.6%) presented markers of acute HEV infection and 21 (25%) markers of past HEV infection. Patients with acute infection were male and cirrhotic, and 2/3 presented with encephalopathy.
Steroids were given to two patients, without any response. The third patient died. None were transplanted. Conclusions: A substantial proportion of patients with severe AAH had markers of acute HEV infection, with similar clinical presentation and outcomes. Larger studies are needed to evaluate the impact of HEV on AAH management, resistance to steroids, and outcome. abstract_id: PUBMED:20721624 Non-hepatic insults are common acute precipitants in patients with acute on chronic liver failure (ACLF). Introduction: Acute-on-chronic liver failure (ACLF) is a newly coined term to describe the simultaneous coexistence of two liver conditions, one of them being chronic or long-standing and the other acute or recent. There are limited data on this entity. This study was performed to review our experience with ACLF patients at a tertiary care centre. Patients And Methods: ACLF was defined as per the Asian Pacific Association for the Study of the Liver (APASL) criteria, except for including non-hepatic insults as precipitating events. Based on the type of acute insult, patients were divided into type I (non-hepatic injury) and type II (hepatic injury, further divided into IIA, acute viral hepatitis (AVH) on underlying chronic liver disease (CLD); IIB, other acute hepatic insults such as drugs/toxins; and IIC, worsening caused by the same underlying disease). Patients were also analyzed for the mode of presentation, severity of liver illness, presence of acute kidney injury and other organ failure, hospital stay and final outcome. Results: One hundred two patients with ACLF (85 males, mean age 44 ± 12.5 years) were included in the study; they accounted for 49% of all liver failures and 27% of all admissions during the study period. Sixty patients (59%) had known cirrhosis, whereas 42 (41%) patients presented for the first time as ACLF, unaware of the underlying CLD. Sixty-two (60%) patients had type I ACLF while 40 (40%) patients had type II ACLF. Infections (47%) were the most common non-hepatic causes of acute deterioration in type I ACLF. Amongst type II, acute viral hepatitis (IIA) accounted for six patients (4 hepatitis E virus, 2 hepatitis A virus) and type IIC was the most common, with alcoholic hepatitis accounting for 30 (29%) patients. Acute kidney injury was present in 47 (46%) and hypotension in 36 (35%) patients. Hypoxemia requiring ventilatory support occurred in 22 (21%) patients. Mean hospital stay was 9.7 ± 6 days (range 2-27 days). Forty-seven (46%) patients either died or left hospital in a very sick state. Conclusion: ACLF is a common problem in our clinical practice. Non-hepatic insults like non-hepatotropic infections/sepsis are common acute precipitating events.
Results: The overall HEV seroprevalence in the cohort of subjects with cirrhosis was 25% (35/140), compared to 4% in the healthy control group [12/300; OR = 8; (95% CI = 4-15.99); p<0.05]. HEV seropositivity was significantly higher in alcohol-related cirrhosis compared to other causes of cirrhosis [39.5% vs. 12.4%; OR = 4.71; (95% CI = 1.9-11.6); p<0.05] and to healthy controls [OR = 15.7; (95% CI = 6.8-36.4); p = 0.0001]. HEV seroprevalence in alcohol-related cirrhosis versus alcohol use disorder alone was 39.5% vs. 12.5% [OR = 4.58; (95% CI = 1.81-11.58); p<0.001]. Conclusion: We found a high seroprevalence of HEV in patients with cirrhosis and in individuals with alcohol use disorder. The simultaneous presence of both factors (cirrhosis + alcohol) showed a stronger association with HEV infection. Larger studies with prospective follow-up are needed to further clarify this interaction. abstract_id: PUBMED:34025837 High prevalence of anti-HEV antibodies among patients with immunosuppression and hepatic disorders in eastern Poland. Introduction: The incidence of hepatitis E virus (HEV) infections in Poland is largely unknown. This study aimed to describe the seroprevalence of markers of HEV infection among patients with immunodeficiency of diverse etiology and patients with advanced chronic liver diseases. Material And Methods: Four hundred fifty patients were enrolled; among them, 180 persons were solid organ transplant recipients, 90 patients were HIV-infected and 180 persons had confirmed liver cirrhosis of different etiology. Serum anti-HEV-IgG, IgM antibodies and HEV-antigen were detected by ELISA (Wantai, China). Results: In the group of transplant recipients, serum anti-HEV-IgG antibodies were detected in 40.6%, IgM in 1.1% and HEV-Ag in 2.8% of subjects. In the HIV-infected population, 37.7% had anti-HEV-IgG, 1.1% had anti-HEV-IgM and none had HEV-Ag. Among patients with advanced chronic liver diseases, the highest prevalence of anti-HEV-IgG was recorded in alcohol-related liver cirrhosis (52.1%) (p = 0.049). In the population of all liver cirrhotics, anti-HEV-IgG seroprevalence was 48.3%, anti-HEV-IgM seroprevalence was 5.0% and HEV-Ag seroprevalence was 1.7%. Older age and male gender were significant risk factors associated with increased anti-HEV-IgG prevalence (p = 0.0004 and p = 0.02, respectively). Conclusions: In this large cohort, a high seroprevalence of anti-HEV-IgG was detected in comparison with other European countries, with the highest rates in patients with alcoholic liver disease and in transplant recipients.
Among these HBV-infected patients, 77.10% were 26 to 55 years old. From 2005 to 2007, there were 39 patients with alcoholic liver failure. In the past two years, there were 23 patients with drug-induced liver failure. The improvement rate of the 1977 patients after treatment was 35.56%. The improvement rate for HEV-related liver failure was higher than that for drug-induced liver failure (P < 0.05); no statistically significant differences were found between the other groups (P > 0.05). Conclusion: Different types of liver failure have different predominant causes. HBV infection is the most common cause in our 1977 patients. In the past two years, the numbers of drug-induced and alcoholic liver failures have been increasing. abstract_id: PUBMED:30307619 Retrospectively seroprevalence study on anti-HEV-IgG antibody in patients with chronic hepatitis or liver cirrhosis in a Chinese teaching hospital. Background And Objective: To investigate the serum anti-hepatitis E virus (HEV) antibody positive rate in patients with different types of chronic hepatitis (CH) or cirrhosis. Methods: A total of 1751 hospitalized patients diagnosed with mono-CH or cirrhosis between 2011 and 2016 were chart-reviewed. Results: The total anti-HEV-IgG positive rate was 1.33% (13/981) in CH patients, which was significantly lower than that (6.49%; 50/770) in cirrhosis patients (odds ratio [OR], 4.78 [2.51-9.10]; P = 0.00). The comparison of anti-HEV-IgG positive rates between CH and cirrhosis groups of the same etiology was as follows: chronic hepatitis B 1.27% (10/790) versus hepatitis B virus (HBV)-related cirrhosis 4.21% (22/522) (OR, 3.04 [1.36-6.77]; P = 0.00); chronic alcoholic hepatitis 1.41% (1/71) versus alcoholic cirrhosis 9.40% (11/117) (OR, 8.00 [1.00-64.25]; P = 0.03); chronic autoimmune hepatitis 1.69% (1/59) versus autoimmune cirrhosis 13.33% (12/90) (OR, 13.11 [1.49-115.27]; P = 0.01); these differences were statistically significant. For chronic hepatitis C 3.23% (1/31) versus hepatitis C virus-related cirrhosis 10.81% (4/37) (OR, 4.40 [0.45-43.53]; P > 0.05) and chronic NASH 0.00% (0/30) versus NASH-related cirrhosis 25.00% (1/4) (P > 0.05), the differences were not statistically significant. Anti-HEV-IgG positive rates were also compared among the different CH groups, and no significant difference was found. Likewise, when anti-HEV-IgG positive rates were compared among the different cirrhosis groups, the positive rates of both alcoholic cirrhosis (9.40%) and autoimmune cirrhosis (13.33%) were significantly higher than that of HBV-related cirrhosis (4.21%) (P < 0.05). Conclusion: We observed that cirrhosis patients had a significantly higher anti-HEV-IgG positive rate compared with CH patients, especially those with HBV-related, alcohol-related, and autoimmune-related cirrhosis (after adjusting for age). Additionally, it seems that patients with alcoholic cirrhosis and autoimmune cirrhosis are more susceptible to HEV infection, given the significantly higher anti-HEV-IgG positive rates in these patients. abstract_id: PUBMED:28002308 Clinical Updates in Women's Health Care Summary: Liver Disease: Reproductive Considerations. All women are at risk of acute and chronic liver diseases. Of particular importance are those diseases that exclusively affect pregnant women and have adverse effects on maternal, fetal, or neonatal outcomes.
Acute viral hepatitis is an important cause of liver disease in pregnant women, and hepatitis E infection is associated with substantial mortality. An increasing number of women have chronic liver diseases caused by viral hepatitis, alcoholic liver disease and nonalcoholic fatty liver disease, autoimmune liver diseases, and genetic liver diseases. The presence of chronic liver diseases or cirrhosis in pregnant or nonpregnant women requires alterations in gynecologic care, including contraception, pregnancy planning, cervical cancer screening, human papillomavirus vaccination, and postmenopausal hormone therapy. Women who have had liver and other solid organ transplantation require gynecologic care tailored to their immunosuppressed status. Collaboration between obstetrician-gynecologists and hepatologists is essential to provide optimal care to women with acute or chronic liver diseases. Timely referral for evaluation for liver transplantation is mandatory for all women with acute liver failure. abstract_id: PUBMED:27552296 High prevalence of anti-hepatitis E virus antibodies in outpatients with chronic liver disease in a university medical center in Germany. Aim/objectives/background: Hepatitis E virus (HEV) is an emerging disease in developed countries. HEV seroprevalence ranges from 3.2 to 10% in Europe, but is higher in endemic areas such as southern France. In Germany, an increasing incidence of HEV infections has been reported recently. Risk factors for the acquisition of HEV are incompletely understood. Methods: We screened 295 consecutive patients with chronic liver disease attending the outpatient department at Charité University Hospital for HEV seroprevalence. Epidemiological characteristics were analyzed and patients were questioned for risk factors using a standardized questionnaire. A total of 78 patients without known liver disease were also tested for HEV IgG. Results: Out of 295 screened patients, 62 tested positive for HEV-IgG. Overall, 50% of the HEV-positive patients were women and 23.8% had underlying liver cirrhosis. HEV-positive patients were older than HEV-negative patients (mean age 56 vs. 48.6 years). Seroprevalence increased with age from 13% in patients 30-39 years of age to 36.4% in patients 70-79 years of age. Of the total, 46.7% of HEV-IgG-positive patients had contact with domestic animals and 38.3% had received blood transfusions. A total of 50% of the HEV-IgG-positive patients had regularly consumed uncooked meat and 45% had regularly consumed wild game or wild boar, which was significantly more frequent than in HEV-IgG-negative patients. Conclusion: HEV-IgG seroprevalence was 21% in a cohort of patients with chronic liver disease and 24.4% in a cohort of patients without known liver disease. The higher seroprevalence found among elderly patients suggests a lifetime accumulation of risk of exposure to HEV. The results from this study imply that regular testing should be performed for HEV in developed countries in case of liver disease of unknown etiology. abstract_id: PUBMED:11283083 Molecular characteristic-based epidemiology of hepatitis B, C, and E viruses and GB virus C/hepatitis G virus in Myanmar. We carried out a molecular characteristic-based epidemiological survey of various hepatitis viruses, including hepatitis B virus (HBV), hepatitis C virus (HCV), hepatitis E virus (HEV), and GB virus C (GBV-C)/hepatitis G virus (HGV), in Myanmar. 
The study population of 403 subjects consisted of 213 healthy individuals residing in the city of Yangon, Myanmar, and the surrounding suburbs and 190 liver disease patients (155 virus-related liver disease patients and 35 nonviral disease patients). The infection rates of the viruses among the 213 healthy subjects were as follows: 8% for HBV (16 patients), 2% for HCV (4 patients), and 8% for GBV-C/HGV (17 patients). In contrast, for 155 patients with acute hepatitis, chronic hepatitis, liver cirrhosis, or hepatocellular carcinoma, the infection rates were 30% for HBV (46 patients), 27% for HCV (41 patients), and 11% for GBV-C/HGV (17 patients). In the nonviral liver disease group of 35 patients with alcoholic liver disease, fatty liver, liver abscess, and biliary disease, the infection rates were 6% for HBV (2 patients), 20% for HCV (7 patients), and 26% for GBV-C/HGV (9 patients). The most common viral genotypes were type C of HBV (77%), type 3b of HCV (67%), and type 2 of GBV-C/HGV (67%). Moreover, testing for HEV among 371 subjects resulted in the detection of anti-HEV immunoglobulin G (IgG) in 117 patients (32%). The age prevalence of anti-HEV IgG was 3% for patients younger than 20 years and 30% or more for patients 20 years of age or older. Furthermore, a high prevalence of anti-HEV IgG (24%) was also found in swine living together with humans in Yangon. These results suggest that these hepatitis virus infections are widespread in Myanmar and have led to a high incidence of acute and chronic liver disease patients in the region. Answer: The question of whether there is a place for systematic screening for Hepatitis E infection in patients with severe alcoholic hepatitis (AH) is addressed in several studies, but the conclusions are not definitive. One study found that markers of acute Hepatitis E virus (HEV) infection were present in 6.5% of a cohort of patients with histologically proven severe AH, but these markers did not impact short-term or long-term outcomes, including response to corticosteroids, survival, or the need for liver transplantation (LT) (PUBMED:26308710). This suggests that while HEV infection is present in a subset of patients with severe AH, routine screening may not alter clinical management or prognosis. Another study reported a 3.6% prevalence of markers of acute HEV infection in patients with severe acute alcoholic hepatitis (AAH), with similar clinical presentations and outcomes compared to those without HEV infection (PUBMED:24904954). This study also noted that larger studies are needed to evaluate the impact of HEV on AAH management, resistance to steroids, and outcomes, indicating that the role of systematic screening remains uncertain. In a broader context, non-hepatic insults are common precipitants in patients with acute on chronic liver failure (ACLF), and infections are a frequent non-hepatic cause of acute deterioration (PUBMED:20721624). This highlights the complexity of managing patients with severe liver disease, where multiple factors may contribute to their clinical state. A study from Argentina found a high seroprevalence of HEV in patients with cirrhosis, particularly in those with alcohol-related cirrhosis, suggesting a potential association between alcohol use, cirrhosis, and HEV infection (PUBMED:31648288). Similarly, a study from eastern Poland reported high seroprevalence of anti-HEV antibodies among patients with hepatic disorders and immunosuppression, with the highest rates in patients with alcoholic liver disease (PUBMED:34025837). 
In summary, while there is evidence of HEV infection in patients with severe AH and cirrhosis, particularly alcohol-related, the impact of this infection on clinical outcomes is not clear. The decision to implement systematic screening for HEV in this patient population may depend on factors such as local HEV prevalence, the potential for altering clinical management, and the availability of effective treatments.
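Several of the abstracts above summarize seroprevalence comparisons as odds ratios with 95% confidence intervals. The sketch below is a minimal worked example of that calculation, using the standard 2x2-table odds ratio with a log-normal (Woolf) confidence interval; the input counts are taken from the Argentine cohort (35/140 seropositive patients with cirrhosis versus 12/300 healthy controls, PUBMED:31648288), and the code is only an illustration of the arithmetic, not the authors' own analysis.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Anti-HEV IgG: 35 of 140 patients with cirrhosis vs. 12 of 300 healthy controls.
or_, lower, upper = odds_ratio_ci(a=35, b=140 - 35, c=12, d=300 - 12)
print(f"OR = {or_:.1f}, 95% CI {lower:.1f}-{upper:.1f}")  # OR = 8.0, 95% CI 4.0-16.0
```

The result reproduces the reported OR of 8 with a 95% CI of roughly 4 to 16.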
Instruction: The use of audit to identify maternal mortality in different settings: is it just a difference between the rich and the poor? Abstracts: abstract_id: PUBMED:18019905 The use of audit to identify maternal mortality in different settings: is it just a difference between the rich and the poor? Objective: To illustrate how maternal mortality audit identifies different causes of and contributing factors to maternal deaths in different settings in low- and high-income countries and how this can lead to local solutions in reducing maternal deaths. Design: Descriptive study of maternal mortality from different settings and review of data on the history of reducing maternal mortality in what are now high-income countries. Settings: Kalabo district in Zambia, Farafenni division in The Gambia, Onandjokwe district in Namibia, and the Netherlands. Population: Population of rural areas in Zambia and The Gambia, peri-urban population in Namibia and nationwide data from The Netherlands. Methods: Data from facility-based maternal mortality audits from three African hospitals and data from the latest confidential enquiry in The Netherlands. Main Outcome Measures: Maternal mortality ratio (MMR), causes (direct and indirect) and characteristics. Results: MMR ranged from 10 per 100,000 (the Netherlands) to 1540 per 100,000 (The Gambia). Differences in causes of deaths were characterized by HIV/AIDS in Namibia, sepsis and HIV/AIDS in Zambia, (pre-)eclampsia in the Netherlands and obstructed labour in The Gambia. Conclusion: Differences in maternal mortality are more than just differences between the rich and poor. Acknowledging the magnitude of maternal mortality and harnessing a strong political will to tackle the issues are important factors. However, there is no single, general solution to reduce maternal mortality, and identification of problems needs to be promoted through audit, both national and local. abstract_id: PUBMED:17491578 The use of audit to identify maternal mortality in different settings: is it just a difference between the rich and the poor? Objective: To illustrate how maternal mortality audit identifies different causes of and contributing factors to maternal deaths in different settings in low- and high-income countries and how this can lead to local solutions in reducing maternal deaths. Design: Descriptive study of maternal mortality from different settings and review of data on the history of reducing maternal mortality in what are now high-income countries. Settings: Kalabo district in Zambia, Farafenni division in The Gambia, Onandjokwe district in Namibia, and the Netherlands. Population: Population of rural areas in Zambia and The Gambia, peri-urban population in Namibia and nationwide data from the Netherlands. Methods: Data from facility-based maternal mortality audits from three African hospitals and data from the latest confidential enquiry in the Netherlands. Main Outcome Measures: Maternal mortality ratio (MMR), causes (direct and indirect) and characteristics. Results: MMR ranged from 10 per 100,000 (the Netherlands) to 1,540 per 100,000 (The Gambia). Differences in causes of deaths were characterized by HIV/AIDS in Namibia, sepsis and HIV/AIDS in Zambia, (pre-)eclampsia in The Netherlands and obstructed labour in The Gambia. Conclusion: Differences in maternal mortality are more than just differences between the rich and poor. Acknowledging the magnitude of maternal mortality and harnessing a strong political will to tackle the issues are important factors. 
However, there is no single, general solution to reduce maternal mortality, and identification of problems needs to be promoted through audit, both national and local. abstract_id: PUBMED:18270496 The use of audit to identify maternal mortality in different settings: is it just a difference between the rich and the poor? Objective: To illustrate how maternal mortality audit identifies different causes of and contributing factors to maternal deaths in different settings in low- and high-income countries and how this can lead to local solutions in reducing maternal deaths. Design: Descriptive study of maternal mortality from different settings and review of data on the history of reducing maternal mortality in what are now high-income countries. Settings: Kalabo district in Zambia, Farafenni division in The Gambia, Onandjokwe district in Namibia, and The Netherlands. Population: Population of rural areas in Zambia and The Gambia, peri-urban population in Namibia and nationwide data from The Netherlands. Methods: Data from facility-based maternal mortality audits from three African hospitals and data from the latest confidential enquiry in The Netherlands. Main Outcome Measures: Maternal mortality ratio (MMR), causes (direct and indirect) and characteristics. Results: MMR ranged from 10 per 100,000 (The Netherlands) to 1,540 per 100,000 (The Gambia). Differences in causes of deaths were characterized by HIV/AIDS in Namibia, sepsis and HIV/AIDS in Zambia, (pre-)eclampsia in The Netherlands and obstructed labour in The Gambia. Conclusion: Differences in maternal mortality are more than just differences between the rich and poor. Acknowledging the magnitude of maternal mortality and harnessing a strong political will to tackle the issues are important factors. However, there is no single, general solution to reduce maternal mortality, and identification of problems needs to be promoted through audit, both national and local. abstract_id: PUBMED:19242072 Maternal mortality: an autopsy audit. Background: The process of audit standardizes protocols in departments and has long-term benefits. Maternal autopsies, though routinely performed, deserve special attention. Aims: This study was carried out to calculate the maternal mortality ratio (MMR) in a tertiary care hospital and to correlate the final cause of death with the clinical diagnosis. An audit of maternal autopsies was carried out to evaluate current practices, identify fallacies and suggest corrective measures to rectify them. Materials And Methods: Eighty-nine autopsies of maternal deaths from the period 2003 to 2007 were studied in detail, along with the corresponding clinical details. Results: There were 158 maternal deaths and 13,940 live births in this five-year period. The maternal mortality rate was found to be very high (1133 per 100,000 live births) in our institution, with a high number of complicated referral cases (68/89 cases, 76%). Of the 89 autopsies, acute fulminant viral hepatitis was the commonest cause of indirect maternal deaths (37 cases, 41.5%). This was followed by direct causes like pregnancy-induced hypertension (12 cases, 13.4%) and puerperal sepsis (10 cases, 11.2%). Certain fallacies were noted during the audit process. Conclusion: During the audit it was realized that in maternal mortality autopsies, special emphasis should be given to clinicopathologic correlation, microbiological studies, identification of thromboembolic phenomena and adequate sectioning of relevant organs.
We found difficulty in identification of placental bed in the uterus in postpartum autopsies. A systematic approach can help us for better understanding of the pathophysiology of diseases occurring in pregnancy. abstract_id: PUBMED:16235307 Critical incident audit and feedback to improve perinatal and maternal mortality and morbidity. Background: Audit and feedback of critical incidents is an established part of obstetric practice. However, the effect on perinatal and maternal mortality is unclear. The potential harmful effects and costs are unknown. Objectives: Is critical incident audit and feedback effective in reducing the perinatal mortality rate, the maternal mortality ratio, and severe neonatal and maternal morbidity? Search Strategy: We searched the Cochrane Pregnancy and Childbirth Group Trials Register (January 2005), the Cochrane Effective Practice and Organisation of Care Group Trials Register (January 2005), MEDLINE (1965 to December 2004), EMBASE (1965 to December 2004), SCIBASE (1965 to December 2004) and the World Health Organization systematic review of maternal mortality and morbidity database (January 1997 to December 2002). Selection Criteria: Randomized trials of audit (defined as any summary of clinical performance over a specified period of time) and feedback (method of feeding that information back to the clinicians) that reported objectively measured professional practice in a healthcare setting or healthcare outcomes. Data Collection And Analysis: No suitable trials were found. Main Results: None. Authors' Conclusions: The necessity of recording the number and cause of deaths is not in question. Mortality rates are essential in identifying problems within the healthcare system. Maternal and perinatal death reviews should continue to be held, until further information is available. The evidence from serial data clearly suggests more benefit than harm. Feedback is essential in any audit system. The most effective mechanisms for this are unknown, but it must be directed at the relevant people. abstract_id: PUBMED:18720637 Maternal mortality in Bahrain 1987-2004: an audit of causes of avoidable death. The aim of this report was to establish the national maternal mortality rate in Bahrain over the period 1987-2004, to identify preventable factors in maternal deaths and to make recommendations for safe motherhood. There were 60 maternal deaths out of 243 232 deliveries giving an average maternal mortality rate of 24.7 per 100 000 total births. The main causes of death were sickle-cell disease (25.0%), hypertension (18.3%), embolism (13.3%), haemorrhage (13.3%), heart disease (11.7%), infection (8.3%) and other (10.0%). In an audit of care, 17 (28.3%) out of 60 deaths were judged to be avoidable, nearly half of which were due to a shortage of intensive care beds. We recommend that a confidential enquiry of maternal deaths be conducted at the national level every 3 to 5 years. abstract_id: PUBMED:28851302 Maternal mortality audit in Suriname between 2010 and 2014, a reproductive age mortality survey. Background: The fifth Millennium Development Goal (MDG-5) aimed to improve maternal health, targeting a maternal mortality ratio (MMR) reduction of 75% between 1990 and 2015. The objective of this study was to identify all maternal deaths in Suriname, determine the extent of underreporting, estimate the reduction, audit the maternal deaths and assess underlying causes and substandard care factors. 
Methods: A reproductive age mortality survey was conducted in Suriname (a South American upper-middle-income country) between 2010 and 2014 to identify all maternal deaths in the country. The MMR was compared to vital statistics and to a previous confidential enquiry from 1991 to 1993 with an MMR of 226. A maternal mortality committee audited the maternal deaths and identified underlying causes and substandard care factors. Results: In the study period, 65 maternal deaths were identified in 50,051 live births, indicating an MMR of 130 per 100,000 live births and a 42% reduction in maternal mortality over the past 25 years. Vital registration indicated an MMR of 96, implying underreporting of 26%. Maternal deaths mostly occurred in the urban hospitals (84%), and the causes were classified as direct (63%), indirect (32%) or unspecified (5%). Major underlying causes were obstetric and non-obstetric sepsis (27%) and haemorrhage (20%). Substandard care factors (95%) were mostly health professional-related (80%), due to delay in diagnosis (59%), delay or wrong treatment (78%) or inadequate monitoring (59%). Substandard care factors most likely led to death in 47% of the cases. Conclusion: Despite the reduction in maternal mortality, Suriname did not reach MDG-5 in 2015. Steps to reach the Sustainable Development Goal in 2030 (MMR ≤ 70 per 100,000 live births) and eliminate preventable deaths include improving data surveillance, installing a maternal death review committee, and implementing national guidelines for prevention and management of major complications of pregnancy, childbirth and puerperium. abstract_id: PUBMED:29113880 Enhanced system for maternal mortality surveillance in France, context and methods Maternal mortality, despite its rarity in rich countries, remains a fundamental indicator of maternal health. It is considered a "sentinel event", the consequence of often cumulative dysfunctions of the health care system. In addition to the classical epidemiological surveillance outcomes (number of deaths, maternal mortality ratio and identification of the subgroups of women at risk), its study allows an accurate analysis of each deceased woman's trajectory to identify opportunities for improvement in the content or organization of care; correcting these will make it possible to prevent not only deaths but also upstream morbid events arising from the same dysfunctions. To achieve this dual epidemiological and clinical audit objective, an ad hoc enhanced system is needed. France has had such a system since 1996, the National Confidential enquiry into maternal deaths (ENCMM), coordinated by the Inserm Epopé team. The first step is the multi-source identification (direct declaration, death certificate, birth certificates, hospital discharge data) of women who died during pregnancy or within one year of its end. The second step is the collection of detailed information for each death by a pair of clinical assessors. The third step is the review of these anonymized documents by the National Committee of Experts on Maternal Mortality, which judges whether the death is maternal (causal link) and makes a judgment on the adequacy of care and the avoidability of death. The synthesis of the information thus collected for maternal deaths in the period 2010-2012 is the subject of the latest report. abstract_id: PUBMED:28058194 Maternal Mortality in the Main Referral Hospital in Angola, 2010-2014: Understanding the Context for Maternal Deaths Amidst Poor Documentation.
Background: Increasing global health efforts have focused on preventing pregnancy-related maternal deaths, but the factors that contribute to maternal deaths in specific high-burden nations are poorly understood. The aim of this study was to identify factors that influence the occurrence of maternal deaths in a regional maternity hospital in Kuando Kubango province of Angola. Methods: The study was a retrospective cross-sectional analysis of case notes of all maternal deaths and deliveries that were recorded from 2010 to 2014. The information collected included data on pregnancy, labor and the post-natal period, retrieved from case notes and the delivery register. Results: During the period under study, a total of 7,158 live births were recorded, among which 131 maternal deaths occurred, giving an overall maternal mortality ratio of 1,830 per 100,000 live births. The causes of death and their relative importance were similar over the period reviewed. Direct obstetric causes accounted for 51% of all deaths. The major causes were hemorrhage (15%), puerperal sepsis (13%), eclampsia (11%) and ruptured uterus (10%). In addition, indirect non-obstetric medical causes such as malaria, anemia, hepatitis, AIDS and cardiovascular diseases accounted for 49% of all maternal deaths. Documentation of personal data and clinical case management was poor. The factors significantly associated with maternal death were place of domicile (P=0.0001) and distance to the hospital (P=0.0001). Conclusion And Global Health Implication: The study demonstrated that the MMR in this maternity hospital is very high, higher than the WHO 2014 estimates, and that the province is yet to achieve the desired MDG 5 target by the end of 2015. A reversal of the present state requires data-driven planning in order to improve access to and use of Maternal Health Services (MHS) and ultimately lower the number of pregnancy-related maternal deaths. abstract_id: PUBMED:24069772 Audit of maternal mortality ratio and causes of maternal deaths in the largest maternity hospital in Cairo, Egypt (Kasr Al Aini) in 2008 and 2009: lessons learned. This study examined maternal deaths at Cairo University Maternity Hospital between January 2008 and December 2009. The aim was to calculate the Maternal Mortality Ratio (MMR) as well as to identify the causes and predisposing factors of maternal deaths. Data were collected from the files of the hospitalized pregnant women in the hospital. There were 38 maternal deaths, and the MMR was 79 per 100,000 live births for the two years examined. The main causes of death were obstetric hemorrhage, hypertensive disorders of pregnancy and cardiac arrest. Substandard medical care and delay in seeking medical advice were two contributing factors to the maternal deaths recorded. Audit and publication of MMRs from all obstetric hospitals is recommended, to allow comparison and to identify areas for improvement. Answer: The use of audit to identify maternal mortality in different settings reveals that the differences in maternal mortality are not merely a matter of rich versus poor countries. Maternal mortality audits have shown that the causes of maternal deaths vary significantly across different settings, indicating that local factors play a crucial role in maternal health outcomes.
For instance, in a descriptive study that included data from Zambia, The Gambia, Namibia, and the Netherlands, the maternal mortality ratio (MMR) ranged from as low as 10 per 100,000 in the Netherlands to as high as 1,540 per 100,000 in The Gambia. The causes of death differed markedly, with HIV/AIDS being a significant factor in Namibia, sepsis and HIV/AIDS in Zambia, (pre-)eclampsia in the Netherlands, and obstructed labor in The Gambia (PUBMED:18019905; PUBMED:17491578; PUBMED:18270496). These findings underscore that while economic disparities may influence maternal health outcomes, they do not fully account for the variations in maternal mortality. Other factors, such as the prevalence of specific health conditions, the quality of healthcare systems, and the availability of medical resources, contribute to the differences observed. For example, in Bahrain, the main causes of maternal death were identified as sickle-cell disease, hypertension, embolism, and hemorrhage, with an audit suggesting that nearly half of the avoidable deaths were due to a shortage of intensive care beds (PUBMED:18720637). Moreover, audits have highlighted the importance of acknowledging the magnitude of maternal mortality and the need for strong political will to address the issue. There is no single, universal solution to reduce maternal mortality; instead, problem identification through national and local audits is crucial for developing targeted interventions (PUBMED:18019905; PUBMED:17491578; PUBMED:18270496). In summary, maternal mortality audits demonstrate that the differences in maternal mortality transcend economic status and are influenced by a complex interplay of local health challenges, healthcare system capabilities, and political commitment. These audits are essential for identifying specific problems and guiding the implementation of effective, context-specific solutions to improve maternal health outcomes.
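All of the maternal mortality figures quoted in this entry use the same definition: maternal deaths per 100,000 live births over the period studied. A minimal sketch of that arithmetic is shown below, using the Suriname counts (65 deaths among 50,051 live births, PUBMED:28851302) to reproduce the reported MMR of about 130 and the roughly 26% underreporting relative to vital registration; it is an illustration only, not a re-analysis.

```python
def maternal_mortality_ratio(maternal_deaths, live_births):
    """Maternal deaths per 100,000 live births."""
    return maternal_deaths / live_births * 100_000

# Suriname reproductive age mortality survey, 2010-2014.
survey_mmr = maternal_mortality_ratio(65, 50_051)           # ~129.9, reported as 130
registry_mmr = 96                                           # MMR from vital registration alone
underreporting = (survey_mmr - registry_mmr) / survey_mmr   # ~0.26, i.e. ~26% of deaths missed
print(f"MMR = {survey_mmr:.0f} per 100,000; underreporting = {underreporting:.0%}")
```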
Instruction: Does coronary artery bypass grafting alone correct moderate ischemic mitral regurgitation? Abstracts: abstract_id: PUBMED:11568033 Does coronary artery bypass grafting alone correct moderate ischemic mitral regurgitation? Background: The optimal management of moderate (3+ on a scale of 0 to 4+) ischemic mitral regurgitation (MR) remains controversial. Some advocate CABG alone, whereas others favor concomitant mitral annuloplasty. To clarify the optimal management of these patients, we evaluated the early impact of isolated CABG on moderate ischemic MR. Methods And Results: Between January 1992 and August 1999, 136 patients (54% male, mean age 70.5 years, mean New York Heart Association class 2.7, mean ejection fraction 38.1%) with a preoperative diagnosis of moderate ischemic MR, without leaflet prolapse or pathology, underwent isolated CABG. Thirty-eight (28%) of 136 patients had intraoperative transesophageal echocardiography (TEE) before CABG, and 68 (50%) had postoperative transthoracic echocardiography (TTE) within 6 weeks of surgery. The subgroups of patients undergoing intraoperative TEE and postoperative TTE had preoperative characteristics similar to the overall group. The 30-day operative mortality was 2.9%. Intraoperative TEE downgraded the severity of MR to mild or less (0 to 2+) in 89%. On postoperative TTE, 40% continued to have at least moderate MR (3 to 4+), 51% improved somewhat to mild (2+) MR, and only 9% had resolution of their MR (0 to 1+). The mean preoperative, intraoperative, and postoperative MR grades were 3.0+/-0.0, 1.4+/-1.0, and 2.3+/-0.8, respectively (P<0.001). Conclusions: CABG alone for moderate ischemic MR leaves many patients with significant residual MR and may not be the optimal therapy for most patients. Intraoperative TEE may significantly underestimate the severity of ischemic MR. A preoperative diagnosis of moderate MR may warrant concomitant mitral annuloplasty. abstract_id: PUBMED:31666451 Risk Factors for Moderate or More Residual Regurgitation in Patients with Moderate Chronic Ischemic Mitral Regurgitation Undergoing Surgical Revascularization Alone. Few reports have focused on which patients with moderate ischemic mitral regurgitation (IMR) were not good candidates for coronary artery bypass grafting (CABG) alone. This single-center study aimed to assess risk factors for moderate or more residual regurgitation within two years after CABG alone for the treatment of moderate chronic IMR to optimize the operation strategy and prognosis. A total of 189 eligible patients were entered into a failure group (n = 108) or an improved group (n = 81) according to whether moderate or more residual regurgitation occurred within two years after surgery. Baseline and surgical characteristics were analyzed, and clinical outcomes were compared between groups.
During a median follow-up of 40 months, compared with the improved group, the failure group was more likely to present with New York Heart Association (NYHA) class III-IV and cardiac re-hospitalization (57.4% versus 11.1%, P < 0.001, and 13.9% versus 4.9%, P = 0.043, respectively) and had worse cumulative survival (χ2 = 4.259, log-rank P = 0.039). Patients suffering from moderate chronic IMR secondary to prior MI (rather than chronic ischemia) with anterior wall motion abnormalities (rather than inferior-posterior wall motion abnormalities) may not be good candidates for CABG alone, and may have a poor prognosis after CABG alone. abstract_id: PUBMED:27671215 Short- and Medium-Term Effects of Combined Mitral Valve Surgery and Coronary Artery Bypass Grafting Versus Coronary Artery Bypass Grafting Alone for Patients with Moderate Ischemic Mitral Regurgitation: A Meta-Analysis. Objective: To investigate the short- and medium-term effects of combined mitral valve surgery (MVS) and coronary artery bypass grafting (CABG) versus CABG alone for patients with moderate ischemic mitral regurgitation (IMR). Design: Meta-analysis of 4 randomized controlled trials (RCTs) and 5 observational studies. Setting: Hospitals that perform cardiac surgery. Participants: The study included 1,256 cardiac surgery patients from 4 RCTs and 5 observational studies. Interventions: None. Measurements And Main Results: Four RCTs and 5 observational studies were included in this meta-analysis. Concomitant MVS significantly reduced the residual rate of postoperative IMR (moderate or severe) (RCTs: OR -0.32, 95% confidence interval [CI] -0.58 to -0.07, p = 0.01; observational studies: OR -0.23, 95% CI -0.34 to -0.12, p<0.0001) and the proportion of surviving patients with New York Heart Association class III or IV (RCTs: OR 0.45, 95% CI 0.31-1.8, p = 0.008), but did not improve early mortality (RCTs: OR 0.91, 95% CI 0.30-2.74, p = 0.87; observational studies: OR 1.63, 95% CI 0.88-3.05, p = 0.12) or medium-term mortality (RCTs: OR 0.89, 95% CI 0.46-1.74, p = 0.73; observational studies: OR 0.94, 95% CI 0.65-1.37, p = 0.48) compared with CABG alone. Moreover, adding the mitral valve procedure did not significantly increase the risk of stroke (RCTs: OR 2.27, 95% CI 0.73-7.08, p = 0.16; observational studies: OR 0.55, 95% CI 0.10-3.06, p = 0.50). Conclusions: The potential benefits of combined MVS and CABG could outweigh its risks for patients with moderate IMR. abstract_id: PUBMED:35236195 Comparative study of coronary artery bypass grafting combined with off-pump mitral valvuloplasty versus coronary artery bypass grafting alone in patients with moderate ischemic mitral regurgitation. Introduction: Whether mitral surgery should be performed simultaneously with coronary artery bypass grafting (CABG) in patients with moderate ischemic mitral regurgitation (MIMR) is controversial. This study was performed to introduce a method of off-pump mitral valvuloplasty after off-pump CABG (OPCABG) and compare it with OPCABG alone. Methods: Eighty-three patients with MIMR underwent OPCABG. Among them, 21 patients (Group A) underwent posterior mitral annuloplasty without cardiopulmonary bypass, and 62 patients (Group B) underwent OPCABG alone. The primary endpoint of follow-up was the mitral regurgitation area.
Results: The mean mitral regurgitant area in Groups A and B was 6.42 ± 1.02 and 5.49 ± 1.24 cm2 preoperatively (p = .479), 2.93 ± 1.35 and 3.28 ± 1.93 cm2 at 1 week postoperatively (p = .516), 3.06 ± 2.16 and 3.09 ± 1.85 cm2 at 3 months postoperatively (p = .839), and 3.02 ± 1.60 and 3.7 cm2 (median) at 1 year postoperatively (p = .043). There was less regurgitation in Group A at the mid-term. Intragroup comparison showed significant differences between the preoperative and postoperative values in both groups, with no difference in the regurgitant area at each postoperative time point in Group A but a significant difference between 3 months and 1 year postoperatively in Group B (p = .042). Multiple linear regression showed that the mid-term mitral regurgitant area changes were negatively correlated with graft flow and positively correlated with age. Conclusion: In patients with MIMR who underwent OPCABG plus off-pump mitral valve annuloplasty, the mitral regurgitant area was smaller and mitral regurgitation recurrence was less frequent at the mid-term follow-up. abstract_id: PUBMED:29888544 Should the mitral valve be repaired for moderate ischemic mitral regurgitation at the time of revascularization surgery? Background: Ischemic mitral regurgitation (IMR) is associated with increased mortality and recurrent congestive heart failure following coronary artery bypass graft (CABG) surgery. While mitral surgery should be undertaken for severe MR during CABG, the treatment of moderate IMR remains controversial. We conducted a meta-analysis to determine the outcomes of CABG alone and CABG combined with mitral valve repair (MVr) in moderate IMR. Methods: A literature search was conducted in PubMed, Ovid, and Embase, which yielded 643 articles. Eleven studies (seven observational studies and four randomized controlled trials) with a total of 1406 patients were included (CABG alone = 864 and CABG plus MVr = 542). Results: There was no difference in operative mortality (odds ratio 1.56, 95% confidence interval [CI] 0.92-2.71) or long-term survival at 1 or 5 years (hazard ratio 0.98, 95% CI 0.71-1.35, P = 0.49) between the two groups, and little evidence of heterogeneity was found in the studies (I2 = 0.0, P = 0.562). There was significantly greater improvement in MR grade (weighted mean difference [WMD] -1.15, 95% CI -1.67 to -0.064, P < 0.001) and left ventricular systolic diameter (WMD -3.02, 95% CI -4.85 to -1.18, P = 0.001) following CABG and MVr compared to CABG alone. No difference in postoperative functional class or ejection fraction was found. Conclusions: Our results show that in the presence of moderate IMR, adding MVr to revascularization reduces MR grade on follow-up echocardiography and promotes ventricular remodeling, with no improvement in long-term survival or functional class. abstract_id: PUBMED:26837498 Does Surgical Repair of Moderate Ischemic Mitral Regurgitation Improve Survival? A Systematic Review. Mitral regurgitation (MR) is one of the common complications in myocardial infarction (MI) patients. Almost half of post-MI patients have MR (ischemic MR),(17) which is moderate to severe (grade II-IV). Whether there is a mortality benefit of performing mitral valve repair (MVR) along with coronary artery bypass grafting (CABG) in patients with post-MI moderate MR remains inconclusive. The literature search was done using the PubMed, Google Scholar, Ovid, and Medline databases.
Studies which included post-MI patients with moderate ischemic MR and reported mortality outcomes of performing CABG and MVR were chosen for the systematic review. Our preliminary literature search identified 194 studies, of which 11 met our inclusion criteria. Nine studies showed no survival benefit of performing simultaneous MVR and CABG. One study demonstrated a survival benefit of CABG plus MVR only in New York Heart Association (NYHA) class III-IV patients, and one study suggested a survival benefit of CABG plus MVR compared with CABG alone in patients with ischemic MR irrespective of preoperative NYHA functional class. Review of the current literature showed mixed results in terms of improvement in functional status but failed to show any survival benefit of performing MVR along with CABG. Limitations of these studies include small sample sizes, differences in baseline demographic variables, and short follow-up periods, which might influence the outcomes. Prospective randomized studies are required to establish a clear benefit of performing MVR simultaneously with CABG. abstract_id: PUBMED:19594070 Moderate ischemic mitral regurgitation: to operate or to ignore? Introduction: Moderate ischemic mitral regurgitation (MR) is characterized by significant, symptomatic multivessel coronary disease and mitral regurgitation 2-3+. Case Outline: A 60-year-old patient was admitted to the Cardiovascular Institute "Dedinje" with symptoms of unstable angina pectoris. He had survived a myocardial infarction (inferoposterolateral localization) 8 years earlier. On admission, echocardiography revealed regional wall motion disturbances of the left ventricle with an ejection fraction of 25% and mitral regurgitation 2+. The patient underwent a triple coronary bypass with surgical correction of the mitral regurgitation. The postoperative course was normal. Conclusion: Several authors are against surgical correction of moderate ischemic MR for several reasons: revascularization of ischemic areas will improve regional wall motion and correct the MR, while mitral valve surgery adds significantly to the operative risk of coronary surgery. Other authors, however, favour a combined operation, emphasizing that in many patients coronary surgery alone will not correct a moderate ischemic MR. Today there is no consensus on whether to operate on moderate ischemic MR or to ignore it. Some recent studies underscore significant predictors of long-term survival in these patients: NYHA (New York Heart Association) class and left ventricular ejection fraction. In that respect, a combined operation should be recommended in patients with heart failure and NYHA class III or IV. abstract_id: PUBMED:23960603 Moderate ischemic mitral regurgitation: Is there a case for early intervention? Ischemic mitral regurgitation (IMR) results from left ventricular remodelling after myocardial infarction and severely affects cardiovascular mortality and morbidity. Ischemic mitral valve regurgitation also represents a negative prognostic factor for long-term survival in patients undergoing surgical myocardial revascularization. While severe mitral regurgitation should always be corrected during a coronary artery bypass operation, decision-making is more difficult in patients with a moderate degree of regurgitation. In this review, we wish to highlight the negative impact of IMR on long-term survival and discuss the available evidence for surgical correction of IMR at the time of coronary revascularization.
abstract_id: PUBMED:19876417 Coronary revascularization alone or with mitral valve repair: outcomes in patients with moderate ischemic mitral regurgitation. We sought to evaluate retrospectively the outcomes of patients at our hospital who had moderate ischemic mitral regurgitation and who underwent coronary artery bypass grafting (CABG) alone or with concomitant mitral valve repair (CABG+MVr). A total of 83 patients had a reduced left ventricular ejection fraction and moderate mitral regurgitation: 28 patients underwent CABG+MVr, and 55 underwent CABG alone. Changes in mitral regurgitation, functional class, and left ventricular ejection fraction were compared in both groups. The mean follow-up was 5.1 +/- 3.6 years (range, 0.1-15.1 yr). Reduction of 2 mitral-regurgitation grades was found in 85% of CABG+MVr patients versus 14% of CABG-only patients (P < 0.0001) at 1 year, and in 56% versus 14% at 5 years, respectively (P = 0.1), along with improvements in left ventricular ejection fraction and functional class. One- and 5-year survival rates were similar in the CABG+MVr and CABG-only groups: 96% +/- 3% versus 96% +/- 4%, and 87% +/- 5% versus 81% +/- 8%, respectively (P = NS). Propensity analysis showed similar results. Recurrent (3+ or 4+) mitral regurgitation was found in 22% and 47% at late follow-up, respectively. In patients with moderate ischemic mitral regurgitation, either surgical approach led to an improvement in functional class. Early and intermediate-term mortality rates were low with either CABG or CABG+MVr. However, an increased rate of late recurrent mitral regurgitation in the CABG+MVr group was observed. abstract_id: PUBMED:31929898 Elderly Patients with Moderate Chronic Ischemic Mitral Regurgitation: Coronary Artery Bypass Grafting Alone or Concomitant Mitral Annuloplasty? Background: An increasing number of elderly patients with ischemic mitral regurgitation (IMR) are referred for coronary artery bypass grafting (CABG). However, data about the management of elderly patients with moderate IMR are scanty. This study evaluates the impacts of two surgical approaches (CABG alone or concomitant mitral annuloplasty (MAP)) on in-hospital and midterm outcomes, in an attempt to determine an appropriate treatment option for elderly patients with moderate chronic IMR. Methods: All eligible patients over 65 years of age were included and were entered into either a MAP group (patients undergoing CABG plus MAP, n = 96) or a CABG group (patients receiving CABG alone, n = 104). Baseline and surgical characteristics were analyzed, and in-hospital and midterm outcomes between groups were compared after propensity score-matching (1 : 1). Results: Using propensity score-matching, 82 pairs of patients were successfully established in a 1 : 1 ratio. No significant differences between the two matched groups were found regarding surgical mortality (4.9% vs. 1.2%, p=0.173) and major postoperative morbidity. A total of 150 patients (76 in the MAP group and 74 in the CABG group) received regular follow-up visits, with a median duration of 37 months. Compared with the CABG group, the MAP group had similar overall survival but better recurrent MR-free survival (stratified log-rank p, 0.492 and < 0.001, respectively). Using Cox regression, the MAP group as compared with the CABG group did not affect midterm survival probability (propensity score-adjusted hazard ratio, 0.854; 95% confidence interval, 0.571-2.729, p=0.630).
Additionally, patients in the MAP group had a significantly lower ratio of NYHA class III-IV at the latest follow-up by comparison with patients in the CABG group (19.7% vs. 35.5%, p=0.033). Conclusion: Compared with CABG alone, concomitant mitral annuloplasty is associated with improved midterm outcomes (including reduced IMR recurrence and improved cardiac functional class) but shares similar surgical mortality and major postoperative morbidity and may be a promising treatment option for elderly patients with moderate chronic IMR. Answer: Coronary artery bypass grafting (CABG) alone does not consistently correct moderate ischemic mitral regurgitation (MR). Studies have shown that while CABG alone may lead to some improvement in MR, many patients continue to have significant residual MR postoperatively. For instance, one study found that after isolated CABG, 40% of patients continued to have at least moderate MR, 51% improved to mild MR, and only 9% had resolution of their MR (PUBMED:11568033). Another study identified prior myocardial infarction and anterior/inferior-posterior wall motion abnormality as independent risk factors for moderate or more residual regurgitation after CABG alone (PUBMED:31666451). A meta-analysis comparing combined mitral valve surgery (MVS) and CABG versus CABG alone for patients with moderate ischemic MR found that concomitant MVS significantly reduced the residual rate of postoperative IMR and the proportion of surviving patients with New York Heart Association class III or IV, but did not improve early or medium-term mortality (PUBMED:27671215). Another study suggested that off-pump mitral valvuloplasty combined with CABG resulted in smaller mitral regurgitant areas and less frequent recurrence of mitral regurgitation at mid-term follow-up compared to CABG alone (PUBMED:35236195). A meta-analysis on the outcomes of CABG alone and combined with mitral valve repair (MVr) in moderate IMR showed that adding MVr to revascularization reduces MR grade on follow-up echocardiography and promotes ventricular remodeling, with no improvement in long-term survival or functional class (PUBMED:29888544). A systematic review also indicated that there is no clear survival benefit of performing MVR along with CABG in patients with post-myocardial infarction moderate MR (PUBMED:26837498). In summary, while CABG alone may lead to some degree of improvement in moderate ischemic MR, it often leaves patients with significant residual MR. Concomitant mitral valve interventions may provide better outcomes in terms of MR severity reduction and functional class improvement, although the impact on long-term survival is less clear. Therefore, CABG alone may not be the optimal therapy for most patients with moderate ischemic MR, and a preoperative diagnosis of moderate MR may warrant consideration of concomitant mitral annuloplasty (PUBMED:11568033).
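A side note on methodology: the elderly IMR study above (PUBMED:31929898) compares CABG alone with CABG plus mitral annuloplasty after 1:1 propensity score matching and then reports a propensity score-adjusted hazard ratio from Cox regression. The sketch below illustrates how that kind of analysis is commonly set up; it is a minimal illustration under stated assumptions, not the authors' code, and the cohort, the covariates (age, LVEF) and the greedy nearest-neighbour matching are hypothetical stand-ins.

```python
# Minimal sketch (hypothetical data): 1:1 propensity score matching followed by
# Cox regression, in the spirit of the PUBMED:31929898 analysis.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def match_one_to_one(df, treat_col, covariates, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on the propensity score, without replacement."""
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0].copy()
    matched_rows = []
    for idx, row in treated.iterrows():
        if controls.empty:
            break
        diff = (controls["ps"] - row["ps"]).abs()
        if diff.min() > caliper:
            continue  # no acceptable control within the caliper
        best = diff.idxmin()
        matched_rows.extend([idx, best])
        controls = controls.drop(index=best)
    return df.loc[matched_rows]

# Hypothetical cohort: group = 1 for CABG plus annuloplasty, 0 for CABG alone.
rng = np.random.default_rng(0)
n = 200
cohort = pd.DataFrame({
    "group": rng.integers(0, 2, n),
    "age": rng.normal(72, 5, n),
    "lvef": rng.normal(45, 8, n),
    "months": rng.exponential(36, n),   # follow-up time
    "death": rng.integers(0, 2, n),     # 1 = event observed
})

matched = match_one_to_one(cohort, "group", ["age", "lvef"])

# Cox proportional-hazards model on the matched cohort; exp(coef) for 'group'
# corresponds to the kind of hazard ratio reported in the abstract.
cph = CoxPHFitter()
cph.fit(matched[["group", "age", "lvef", "months", "death"]],
        duration_col="months", event_col="death")
cph.print_summary()
```

With real data one would also check covariate balance after matching (for example, standardized mean differences) before fitting the survival model.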
Instruction: Hypervascular hepatocellular carcinoma: can double arterial phase imaging with multidetector CT improve tumor depiction in the cirrhotic liver? Abstracts: abstract_id: PUBMED:12185057 Hypervascular hepatocellular carcinoma: can double arterial phase imaging with multidetector CT improve tumor depiction in the cirrhotic liver? Objective: We assessed the efficacy of double arterial phase CT with multidetector CT for the detection of hypervascular hepatocellular carcinoma in the cirrhotic liver. Materials And Methods: Double arterial phase images with multidetector CT were evaluated using quantitative, qualitative, and receiver operating characteristic analyses for 59 patients with 78 hepatocellular carcinomas. Early and late arterial phase (double arterial phase) CT scans were obtained at a fixed time of 25 and 40 sec, respectively, after administration of contrast material. Total dose and injection rate of contrast material were 100 mL and 3 mL/sec, respectively. Results: On the basis of the receiver operating characteristic curves, the mean area under the curve values of the late (0.98) and combined arterial phase CT scans (0.98) were equivalent, and both were significantly greater than the mean of the early arterial phase CT scans (0.842) for detecting hepatocellular carcinoma (p < 0.05). The mean relative sensitivity values obtained with the late (69/78, 88%) and combined arterial phase CT scans (70/78, 90%) were also equivalent and were significantly greater than those obtained with the early arterial phase CT scans (52/78, 67%; p < 0.001). Conclusion: Double arterial phase CT with multidetector CT showed no significant improvement in effectiveness compared with single late arterial phase CT used alone for detecting hypervascular hepatocellular carcinoma in the cirrhotic liver. abstract_id: PUBMED:31367557 Characterization of microvessels and parenchyma in in-line phase contrast imaging CT: healthy liver, cirrhosis and hepatocellular carcinoma. Background: Hepatocellular carcinoma (HCC) is a cancer with a poor prognosis, and approximately 80% of HCC cases develop from cirrhosis. Imaging techniques in the clinic seem to be insufficient for revealing the microstructures of liver disease. In recent years, phase contrast imaging CT (PCI-CT) has opened new avenues for biomedical applications owing to its unprecedented spatial and contrast resolution. The aim of this study was to present three-dimensional (3D) visualization of human healthy liver, cirrhosis and HCC using a PCI-CT technique called in-line phase contrast imaging CT (ILPCI-CT) and to quantitatively evaluate the variations of these tissues, focusing on the liver parenchyma and microvasculature. Methods: Tissue samples from 9 surgical specimens of normal liver (n=3), cirrhotic liver (n=2), and HCC (n=4) were imaged using ILPCI-CT at the Shanghai Synchrotron Radiation Facility (SSRF) without contrast agents. 3D visualization of all ex vivo liver samples are presented. To quantitatively evaluate the vessel features, the vessel branch angles of each sample were clearly depicted. Additionally, radiomic features of the liver parenchyma extracted from the 3D images were measured. To evaluate the stability of the features, the percent coefficient of variation (%COV) was calculated for each radiomic feature. A %COV <30 was considered to be low variation. Finally, one-way ANOVA, followed by Tukey's test, was used to determine significant changes among the different liver specimens. 
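A small aside on the feature-screening step just described: the percent coefficient of variation used in PUBMED:31367557 is simply 100 × standard deviation / mean, with features kept as "low variation" when %COV < 30. The snippet below sketches that computation; the feature table and feature names are hypothetical, and this is only one common way to implement the screen, not the authors' code.

```python
# Minimal sketch (hypothetical data): screening radiomic features by percent
# coefficient of variation, %COV = 100 * SD / mean, keeping features with %COV < 30.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
features = pd.DataFrame({
    "firstorder_entropy": rng.normal(4.2, 0.4, 9),        # 9 liver specimens
    "glcm_inverse_variance": rng.normal(0.35, 0.04, 9),
    "glszm_zone_entropy": rng.normal(6.1, 2.9, 9),
})

pct_cov = 100 * features.std(ddof=1) / features.mean()
low_variation = pct_cov[pct_cov.abs() < 30]

print(pct_cov.round(1))
print("Low-variation features:", list(low_variation.index))
```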
Results: ILPCI-CT allows for a clearer view of the architecture of the vessels and reveals more structural details than does conventional radiography. Combined with the 3D visualization technique, ILPCI-CT enables the acquisition of an accurate description of the 3D vessel morphology in liver samples. Qualitative descriptions and quantitative assessment of microvessels demonstrated clear differences among human healthy liver, cirrhotic liver and HCC. In total, 38 (approximately 51%) radiomic features had low variation, including 11 first-order features, 16 GLCM features, 6 GLRLM features and 5 GLSZM features. The differences in the mean vessel branch angles and 3 radiomic features (first-order entropy, GLCM-inverse variance and GLCM-sum entropy) were statistically significant among the three groups of samples. Conclusions: ILPCI-CT may allow for morphologic descriptions and quantitative evaluation of vessel microstructures and parenchyma in human healthy liver, cirrhotic liver and HCC. Vessel branch angles and radiomic features extracted from liver parenchyma images can be used to distinguish the three kinds of liver tissues. abstract_id: PUBMED:26340309 Combined parenchymal and vascular imaging: High spatiotemporal resolution arterial evaluation of hepatocellular carcinoma. Purpose: To assess the ability of high-resolution arterial phase imaging of hepatocellular carcinoma (HCC) to provide combined vascular characterization and parenchymal evaluation. Materials And Methods: Thirty-eight consecutive studies in cirrhotic patients with HCC scanned with a view-shared 2-point-Dixon-based Differential Subsampling with Cartesian Ordering (DISCO) sequence were analyzed. Lesion contrast relative to precontrast and adjacent parenchyma was evaluated and compared using a Fisher's exact test. Visibility of hepatic arteries and tumor feeding vessels were graded on a 5-point scale. Catheter angiography was used as a reference standard for arterial anatomy. Results: The high spatiotemporal multiphasic acquisition allowed imaging of both the angiographic and late arterial phase in 30 of 38 studies with good image quality. Maximal lesion enhancement compared to precontrast occurred more frequently during the late arterial phase compared to maximal lesion-to-adjacent, which occurred more frequently during the early arterial phase (P < 0.001). Common and proper hepatic arteries were visualized adequately in 100%, right hepatic artery in 94-97%, left hepatic artery in 94%, and segmental vessel in 83% of cases. Arterial variants were detected with sensitivity of 87-100% and specificity of 100%. Conclusion: High spatiotemporal resolution arterial phase imaging provides multiple angiographic and arterial phases in a single breath-hold, enabling accurate depiction of vascular anatomy while maintain optimal arterial phase imaging for characterization of focal lesions. abstract_id: PUBMED:20720069 Intraindividual comparison of gadoxetate disodium-enhanced MR imaging and 64-section multidetector CT in the Detection of hepatocellular carcinoma in patients with cirrhosis. Purpose: To prospectively compare gadoxetate disodium-enhanced magnetic resonance (MR) imaging with multiphasic 64-section multidetector computed tomography (CT) in the detection of hepatocellular carcinoma (HCC) in patients with cirrhosis. Materials And Methods: Institutional review board approval and informed patient consent were obtained for this prospective study. 
Fifty-eight patients (39 men, 19 women; mean age, 63 years; age range, 35-84 years) underwent gadoxetate disodium-enhanced MR imaging and multiphasic 64-section multidetector CT. The imaging examinations were performed within 30 days of each other. The two sets of images were qualitatively analyzed in random order by three independent readers in a blinded and retrospective fashion. Using strict diagnostic criteria for HCC, readers classified all detected lesions with use of a four-point confidence scale. The reference standard was a combination of pathologic proof, conclusive imaging findings, and substantial tumor growth at follow-up CT or MR imaging (range of follow-up, 90-370 days). The diagnostic accuracy, sensitivity, and positive predictive value were compared between the two image sets. Interreader variability was assessed. The accuracy of each imaging method was determined by using an adjusted modified chi(2) test. Results: Eighty-seven HCCs (mean size +/- standard deviation, 1.8 cm +/- 1.5; range, 0.3-7.0 cm) were confirmed in 42 of the 58 patients. Regardless of lesion size, the average diagnostic accuracy and sensitivity for all readers were significantly greater with gadoxetate disodium-enhanced MR imaging (average diagnostic accuracy: 0.88, 95% confidence interval [CI]: 0.80, 0.97; average sensitivity: 0.85, 95% CI: 0.74, 0.96) than with multidetector CT (average diagnostic accuracy: 0.74, 95% CI: 0.65, 0.82; average sensitivity: 0.69, 95% CI: 0.59, 0.79) (P < .001 for each). No significant difference in positive predictive value was observed between the two image sets for each reader. Interreader agreement was good to excellent. Conclusion: Compared with multiphasic 64-section multidetector CT, gadoxetate disodium-enhanced MR imaging yields significantly higher diagnostic accuracy and sensitivity in the detection of HCC in patients with cirrhosis. abstract_id: PUBMED:17051556 Diagnostic imaging of hepatocellular carcinoma in patients with cirrhosis before liver transplantation. Key Concepts: 1. The lack of whole-liver explant correlation has led to an overestimation of the sensitivity of imaging tests for the diagnosis of HCC in the radiological literature. 2. Ultrasound is insensitive for the diagnosis of HCC in the cirrhotic liver and should not be used for the detection of focal liver lesions in this setting. 3. Although magnetic resonance (MR) imaging is more sensitive than multidetector 3-phase computed tomography (CT) for the diagnosis of regenerative and dysplastic nodules it is probably no better than CT for detection of HCC and has a lower false-positive rate. 4. Approximately 10-30% of nodules measuring <2 cm seen only on the hepatic arterial phase at CT or MR imaging represent small HCC and vigilant surveillance imaging is required as interval growth is the best indicator of malignancy. abstract_id: PUBMED:17222371 Helical double-phase CT scan imaging features of hepatocellular carcinoma and pathology of false-positive lesions Background & Objective: The helical double-phase CT scan imaging features of hepatocellular carcinoma (HCC) overlap those of other hepatic lesions. This study was to investigate the helical double-phase CT scan imaging features of HCC to improve diagnosis accuracy. Methods: Double-phase CT data and pathologic data of 52 HCC patients, received resection in Cancer Center of Sun Yat-sen University from Dec. 2000 to Dec. 2002, were analyzed. The double-phase CT features of HCC lesions were summarized. 
The pathology of the false-positive lesions was analyzed. Results: CT scans showed 56 lesions in the 52 patients: 51 were cancer lesions, including 49 HCC lesions and 2 mixed lesions of HCC and cholangioma, and 5 were false-positive lesions. In the arterial phase these HCC lesions showed obvious heterogeneous enhancement, and in the portal vein phase they showed heterogeneous low density. Necrosis was seen in all massive lesions, but was seldom seen in nodular and small lesions. Most lesions had clear borders and amicula. The pathologic diagnoses of the 5 false-positive lesions were hepatic cirrhosis with hepatocellular nodular hyperplasia, regenerative nodule, hepatic cirrhosis, bile duct calculus accompanied by inflammatory reaction, and fibrous hyperplasia. Conclusions: Helical double-phase CT scanning can be used to diagnose typical HCC lesions. There are no obvious differences on helical double-phase CT scans between HCC lesions and false-positive lesions. The diagnosis of HCC must be based on clinical information, follow-up or biopsy. abstract_id: PUBMED:15383731 Multidetector row CT and MR imaging in diagnosing hepatocellular carcinoma. It is important to diagnose hepatocellular carcinoma (HCC) correctly and at an early stage, because viral hepatitis and liver cirrhosis are often complicated by HCC. Noncontrast and enhanced CT and MRI are very useful to visualize and diagnose HCC objectively. In particular, CT and MR imaging with a dynamic study is essential to diagnose HCC, because it is usually hypervascular. Dynamic CT and MR studies also improve differential diagnosis in the characterization of the tumor. However, to perform a useful dynamic study, it is necessary to use a CT unit which can perform a helical scan, or an MR system with a fast imaging technique available that can obtain more than 15 slices within a single breath hold. Tissue-specific contrast medium, such as superparamagnetic iron oxide, which is available only on MRI, is also useful for the diagnosis of HCC. abstract_id: PUBMED:20616586 Depiction of portal supply in early hepatocellular carcinoma and dysplastic nodule: value of pure arterial ultrasound imaging in hepatocellular carcinoma. Ultrasound (US) contrast agents such as SonoVue and Sonazoid are commercially available worldwide. Innovation in contrast agents and advances in new US technologies have dramatically changed both diagnostic and treatment strategies for hepatocellular carcinoma (HCC). Recently, the breakthrough technique of pure arterial phase (PAP) US imaging, which depicts only intranodular arterial supply by use of maximum intensity projection (MIP) images, was developed from advanced raw data-storing and accumulation technologies. A total of 8 dysplastic nodules (DNs), 16 early HCCs, 5 nodule-in-nodule type early HCCs and 48 overt HCCs were included in this study. All 8 DNs (100%) showed arterial hypovascularity in the PAP, followed by preserved portal perfusion at the portal phase and isouptake at the Kupffer phase on Sonazoid-enhanced contrast US. A total of 12 out of 16 early HCCs (75%) showed similar patterns on vascular and Kupffer phase imaging of contrast-enhanced ultrasonography. The remaining 4 HCCs showed a slightly hypervascular pattern without venous washout and slightly decreased Kupffer uptake. All 5 nodule-in-nodule type early HCCs presented partial arterial enhancement within a hypovascular nodule at the PAP, followed by an isovascular pattern at the portal phase and a partial Kupffer defect within isouptake nodules.
All 48 overt HCCs showed a hypervascular pattern with a Kupffer defect on contrast-enhanced ultrasonography. This technique can clearly identify whether the blood supply in the tumor is of arterial or portal origin, and facilitates the noninvasive characterization of nodular lesions associated with liver cirrhosis. In conclusion, this newly developed innovative technique can depict pure portal supply in early HCC and DN, enabling differentiation of premalignant lesions and early HCCs from overt HCC even though dynamic CT or MRI does not have such capabilities. abstract_id: PUBMED:17179357 Dynamic CT for detecting small hepatocellular carcinoma: usefulness of delayed phase imaging. Objective: The purpose of this retrospective study was to determine the usefulness of delayed phase imaging for detecting small (≤2 cm) hepatocellular carcinomas (HCCs) in patients with liver cirrhosis. Materials And Methods: Triphasic (arterial, portal venous, and delayed phases) dynamic CT was performed in 33 patients with 48 HCCs proven histopathologically and in 65 control subjects. Arterial, portal venous, and delayed phase images were obtained 30 seconds, 68-70 seconds, and 5 minutes after the start of contrast material injection, respectively. Three blinded observers reviewed the images independently and evaluated tumor attenuation. Diagnostic performance for the combination of phases was assessed using receiver operating characteristic (ROC) curve analysis. Results: On arterial phase images, 28 of the 48 HCCs were hyperattenuating, nine were isoattenuating, and 11 were hypoattenuating. On portal venous phase images, three tumors were hyperattenuating, 17 were isoattenuating, and 28 were hypoattenuating. On delayed phase images, five tumors were isoattenuating, and 43 were hypoattenuating. The mean sensitivity for the combination of arterial and portal venous phase imaging was 86.8%, that for the combination of arterial and delayed phase imaging was 90.3%, and that for the combination of all three phases was 93.8%. The area under the composite ROC curve (Az) for the combination of all three phases (Az = 0.940) was significantly higher than that for the combination of arterial and portal venous phase imaging (Az = 0.917) and for the combination of arterial and delayed phase imaging (Az = 0.922). Conclusion: Delayed phase imaging is useful for detecting small HCCs and should be included in dynamic CT examinations of patients with liver cirrhosis. abstract_id: PUBMED:11471323 Liver hyperdensity during arterial phase on CT exams. The use of helical CT, an infusion pump and non-ionic contrast media has enabled the evaluation of different hepatic circulatory phases during contrast injection. Starting the acquisition of scans 20 to 30 seconds after injection at a rate of 3 to 4 ml/sec, the arterial enhancement of the liver is depicted. THROMBOSIS OR COMPRESSION OF THE PORTAL VEIN: Hypervascular triangle-shaped areas with a peripheral base can be seen, secondary to the increased arterial flow that compensates for the diminished portal flow. ARTERIOPORTAL SHUNTS: This condition can be caused by tumors such as hepatocellular adenocarcinomas and hemangiomas, trauma, interventional procedures, cirrhosis, AVMs and surgery. INFLAMMATORY LESIONS: Hypervascular areas can be seen during the arterial phase in abscesses or cholecystitis, returning to their normal appearance in the portal phase.
ANATOMIC VARIANTS: Third veins coming from the periphery (capsular veins, accessory cystic vein and an aberrant gastric vein) supply enhanced blood earlier than the portal circulation. OTHER CAUSES: In liver cirrhosis, diffuse hyperattenuated areas can be seen during the arterial circulation. In right-sided heart failure, pericardial disease and Budd-Chiari syndrome, "mosaic areas" can also be noted. In other patients, the cause of these perfusion disorders was considered unknown. TUMORS: Well-differentiated hepatocellular carcinoma is a lesion with a predominant arterial blood supply, thus generally appearing hyperdense in this phase. Hemangiomas may appear as highly hyperdense lesions in the arterial phase and can be misinterpreted as HCC if smaller than 2 cm (30% of cases). Focal nodular hyperplasia is a benign lesion (a vascular malformation associated with focal nodules of hepatocellular hyperplasia) with an increased arterial blood supply. Hepatic adenomas show marked hypervascularity during the arterial phase and, if large, they may present a small central scar and/or capsule. Low- or high-grade dysplastic nodules can sometimes be seen as hypervascular areas during the arterial phase. Although most metastases are depicted as hypodense lesions, they can sometimes show arterial hypervascularity, as with carcinoid and pancreatic islet cell metastases. Answer: The efficacy of double arterial phase imaging with multidetector CT for the detection of hypervascular hepatocellular carcinoma (HCC) in the cirrhotic liver was assessed in a study involving 59 patients with 78 hepatocellular carcinomas. The study compared early and late arterial phase CT scans obtained at fixed times after the administration of contrast material. The results indicated that the mean area under the curve values of the late arterial phase CT scans and combined arterial phase CT scans were equivalent and significantly greater than those of the early arterial phase CT scans for detecting HCC. Similarly, the mean relative sensitivity values obtained with the late and combined arterial phase CT scans were also equivalent and significantly greater than those obtained with the early arterial phase CT scans. The conclusion drawn from this study was that double arterial phase CT with multidetector CT showed no significant improvement in effectiveness compared with single late arterial phase CT used alone for detecting hypervascular HCC in the cirrhotic liver (PUBMED:12185057).
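A brief methodological note on the answer above: the comparison rests on per-phase receiver operating characteristic (ROC) analysis, in which reader confidence scores for each phase are scored against the lesion ground truth and summarized as areas under the curve (0.842 for the early phase versus 0.98 for the late and combined phases) together with per-phase sensitivities. The snippet below sketches how such AUC and sensitivity values are computed; the scores, threshold and data are hypothetical, and the formal paired comparison of the ROC curves used in the paper is not reproduced here.

```python
# Minimal sketch (hypothetical data): per-phase ROC AUC and sensitivity for
# hypervascular HCC detection, echoing the early- vs late-arterial-phase comparison.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
truth = rng.integers(0, 2, 78)                                    # 1 = proven HCC
early = truth * rng.normal(1.5, 1.0, 78) + rng.normal(0, 1, 78)   # reader confidence
late = truth * rng.normal(3.0, 1.0, 78) + rng.normal(0, 1, 78)

for phase, scores in [("early arterial", early), ("late arterial", late)]:
    auc = roc_auc_score(truth, scores)
    called_positive = scores > 1.0                                # hypothetical cut-off
    sensitivity = (called_positive & (truth == 1)).sum() / (truth == 1).sum()
    print(f"{phase}: AUC = {auc:.3f}, sensitivity = {sensitivity:.2f}")
```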
Instruction: Bleeding complications during cardiac electronic device implantation in patients receiving antithrombotic therapy: is there any value of local tranexamic acid? Abstracts: abstract_id: PUBMED:27105588 Bleeding complications during cardiac electronic device implantation in patients receiving antithrombotic therapy: is there any value of local tranexamic acid? Background: The perioperative use of antithrombotic therapy is associated with increased bleeding risk after cardiac implantable electronic device (CIED) implantation. Topical application of tranexamic acid (TXA) is effective in reducing bleeding complications after various surgical operations. However, there is no information regarding local TXA application during CIED procedures. The purpose of our study was to evaluate bleeding complication rates during CIED implantation with and without topical TXA use in patients receiving antithrombotic treatment. Methods: We conducted a retrospective analysis of consecutive patients undergoing CIED implantation while receiving warfarin, dual antiplatelet therapy (DAPT), or warfarin plus DAPT. The study population was classified into two groups according to the presence or absence of topical TXA use during CIED implantation. Pocket hematoma (PH), major bleeding complications (MBC) and thromboembolic events occurring within 90 days were compared. Results: A total of 135 consecutive patients were identified and included in the analysis. The mean age was 60 ± 11 years. Topical TXA application during implantation was reported in 52 patients (TXA group). The remaining 83 patients were assigned to the control group. PH occurred in 7.7% of patients in the TXA group and 26.5% of patients in the control group (P = 0.013). MBC were reported in 5.8% of patients in the TXA group and 20.5% of patients in the control group (P = 0.024). Univariate logistic regression analysis identified age, history of recent stent implantation, periprocedural spironolactone use, periprocedural warfarin use, perioperative warfarin plus DAPT use, cardiac resynchronization therapy, and topical TXA application during CIED implantation as predictors of PH. Multivariate analysis showed that perioperative warfarin plus DAPT use (OR = 10.874, 95% CI: 2.496-47.365, P = 0.001) and topical TXA application during the CIED procedure (OR = 0.059, 95% CI: 0.012-0.300, P = 0.001) were independent predictors of PH. Perioperative warfarin plus DAPT use and topical TXA application were also found to be independent predictors of MBC in multivariate analyses. No thromboembolic complications were recorded in the study group. Conclusion: The present study demonstrated that topical TXA application during CIED implantation is associated with reduced PH and MBC in patients with high bleeding risk. abstract_id: PUBMED:35798057 Pulmonary hemorrhage after cardiac resynchronization therapy device implantation - A systematic review. Cardiac implantable electronic devices are being increasingly used for a variety of cardiovascular diseases. We describe a rare case of massive hemoptysis after device implantation. The patient was managed conservatively with reversal of anticoagulation and inhaled tranexamic acid and had a successful recovery. A systematic review accompanies the case presentation. The modality and difficulty of access appear to play a significant role in precipitating bleeding, believed to be the result of direct injury to the pulmonary parenchyma and vasculature.
The condition is often self-limiting; however, anticoagulation reversal, intubation, endobronchial intervention, and transarterial embolization may be indicated in more severe pulmonary hemorrhage. abstract_id: PUBMED:30155575 Local haemostatic measures after tooth removal in patients on antithrombotic therapy: a systematic review. Objective: The interruption of antithrombotics prior to tooth removal because of the fear of bleeding or following postoperative bleeding increases the risk of thromboembolic events. The aim of this systematic review was to investigate which local haemostatic measures can effectively prevent postoperative bleeding in patients continuing oral antithrombotics. Methods: A systematic review was conducted by running a search in PubMed, Embase, Web of Science and Cochrane Library. Clinical randomised trials investigating bleeding and haemostatics after tooth removal in patients on antithrombotics were identified. Results: In total, 15 articles were included. The investigated haemostatics included gauze pressure, tranexamic acid-soaked gauze, sponges, glue, calcium sulfate, plant extract Ankaferd Blood Stopper, epsilon-aminocaproic acid and tranexamic acid. In patients treated with vitamin K antagonists, tranexamic acid mouthwash significantly reduced bleeding compared to placebo. Further, histoacryl glue was proven better than gelatin sponges. Other studies failed to show significant differences between haemostatics, but bleeding events were low. Conclusions: Tranexamic acid seems to effectively reduce bleeding, although its superiority to other haemostatics was not proven. In view of the rapidly changing landscape of antithrombotics and the lack of standardization of bleeding outcome, adequately powered clinical studies are required to optimise postoperative management in patients on antithrombotics. Clinical Relevance: In order to optimise postoperative management, the best haemostatics over different patient groups have to be identified and implemented in guidelines. abstract_id: PUBMED:25444433 Implantation of a left ventricular assist device as a destination therapy in Duchenne muscular dystrophy patients with end stage cardiac failure: management and lessons learned. Duchenne muscular dystrophy (DMD) is an X-linked recessive disorder, characterized by progressive skeletal muscle weakness, loss of ambulation, and death secondary to cardiac or respiratory failure. End-stage dilated cardiomyopathy (DCM) is a frequent finding in DMD patients, they are rarely candidates for cardiac transplantation. Recently, the use of ventricular assist devices as a destination therapy (DT) as an alternative to cardiac transplantation in DMD patients has been described. Preoperative planning and patient selection play a significant role in the successful postoperative course of these patients. We describe the preoperative, intraoperative and postoperative management of Jarvik 2000 implantation in 4 DMD pediatric (age range 12-17 years) patients. We also describe the complications that may occur. The most frequent were bleeding and difficulty in weaning from mechanical ventilation. 
Our standard protocol includes: 1) preoperative multidisciplinary evaluation and selection, 2) preoperative and postoperative non-invasive ventilation and cough machine cycles, 3) intraoperative use of near infrared spectroscopy (NIRS) and transesophageal echocardiography, 4) attention to surgical blood loss, with use of tranexamic acid and prothrombin complexes, 5) early extubation and 6) avoiding the use of nasogastric feeding tubes and nasal temperature probes. Our case reports describe the use of the Jarvik 2000 as a destination therapy in young patients, emphasizing the use of ventricular assist devices as a new therapeutic option in DMD. abstract_id: PUBMED:30804689 Management of ST-elevation myocardial infarction in the setting of anterior epistaxis: focused on antiplatelet and antithrombotic therapies. Background: Antiplatelet and antithrombotic therapies are part of standard core treatments for ST-elevation myocardial infarction (STEMI). The effectiveness of these therapies, however, is often offset by the resultant hemorrhagic complications, which in turn carry a significantly worse prognosis. Acute myocardial infarction (AMI) accompanied by acute bleeding, such as anterior epistaxis, is common and raises a potential dilemma in deciding appropriate management, as the standard medical strategy may place patients under immediate threat by aggravating the ongoing bleeding. Case Description: A 46-year-old male patient with late-onset infero-posterolateral STEMI and anterior epistaxis was admitted to the emergency ward of Mangusada Regional Hospital. The patient had a long-standing history of uncontrolled hypertension and had previously been treated with tranexamic acid to stop the nasal bleeding. Neither percutaneous coronary intervention nor fibrinolysis was performed due to financial issues, and the patient was managed conservatively with adequate medications, including dual antiplatelet therapy with aspirin and clopidogrel and anticoagulation with unfractionated heparin. No active bleeding was observed during in-hospital treatment and the patient was then discharged after 8 days with complete improvement of symptoms and resolution of the ST-segment elevation. Conclusion: This case report highlights the treatment strategy for patients with myocardial infarction in the setting of acute bleeding, focusing on antiplatelet and anticoagulant therapies. We also discuss the potential association between tranexamic acid and arterial thromboembolic complications resulting in AMI. abstract_id: PUBMED:18931203 A randomized controlled trial of cell salvage in routine cardiac surgery. Background: Previous trials have indicated that cell salvage may reduce allogeneic blood transfusion during cardiac surgery, but these studies have limitations, including inconsistent use of other blood transfusion-sparing strategies. We designed a randomized controlled trial to determine whether routine cell salvage for elective uncomplicated cardiac surgery reduces blood transfusion and is cost effective in the setting of a rigorous transfusion protocol and routine administration of antifibrinolytics. Methods: Two hundred thirteen patients presenting for first-time coronary artery bypass grafting and/or cardiac valve surgery were prospectively randomized to control or cell salvage groups. The latter group had blood aspirated during surgery and mediastinal drainage from the first 6 h after surgery processed in a cell saver device and autotransfused.
All patients received tranexamic acid and were subjected to an algorithm for red blood cell and hemostatic blood factor transfusion. Results: There was no difference between the two groups in the proportion of patients exposed to allogeneic blood (32% in both groups, relative risk 1.0, P = 0.89). At current blood product and cell saver prices, the use of cell salvage increased the costs per patient by a minimum of $103. When patients who had mediastinal re-exploration for bleeding were excluded (as planned in the protocol), significantly fewer units of allogeneic red blood cells were transfused in the cell salvage group compared with the control group (65 vs 100 U, relative risk 0.71, P = 0.04). Conclusion: In patients undergoing routine first-time cardiac surgery in an institution with a rigorous blood conservation program, the routine use of cell salvage does not further reduce the proportion of patients exposed to allogeneic blood transfusion. However, patients who do not have excessive bleeding after surgery receive significantly fewer units of blood with cell salvage. Although the use of cell salvage may reduce the demand for blood products during cardiac surgery, this comes at an increased cost to the institution. abstract_id: PUBMED:18931201 Tranexamic acid and aprotinin in primary cardiac operations: an analysis of 220 cardiac surgical patients treated with tranexamic acid or aprotinin. Background: Antifibrinolytics are widely used in cardiac surgery to reduce bleeding. The rate of allogeneic blood transfusion, even in primary cardiac operations with low blood loss, is still high. In the present study we evaluated the impact of tranexamic acid compared to aprotinin on the transfusion incidence in cardiac surgical patients with a low risk of bleeding. Methods: This prospective, randomized, double-blind study included 220 patients undergoing primary coronary artery revascularization (coronary artery bypass grafting [CABG]) or aortic valve replacement (AVR). Randomized in blocks of 20, patients received either tranexamic acid (approximately 6 g) or full-dose aprotinin (approximately 5-6 × 10⁶ Kallikrein Inhibiting Units). Transfusion was guided by a strict transfusion algorithm. Molecular markers of hemostasis were determined to assess differences in the mode of action of the two drugs. Primary end-points were the incidence of allogeneic red cell transfusion and 24-h postoperative blood loss. Data were analyzed according to the intention-to-treat principle and compared using the chi-square and Mann-Whitney U-test. Results: Two hundred twenty patients were enrolled (CABG: 134, AVR: 86). In the aprotinin group, 47% of patients received allogeneic blood during the hospital stay as compared to 61% in the tranexamic acid group (P = 0.036). Aprotinin conferred a 23% reduction in allogeneic transfusion risk (RR 0.77, 95% CI 0.53-0.88). Overall, no significant difference in postoperative bleeding was observed, although 24-h blood loss was reduced in aprotinin-treated CABG patients (500, 350-750 mL vs 650, 475-875 mL (median, 25th-75th percentile); P = 0.039). Despite the lower transfusion rate, the hemoglobin concentration on the first postoperative day was higher in the aprotinin group (11.3, 9.9-12.1 vs 10.6, 9.9-11.6 mg/dL; P = 0.023). The fibrinolytic activity at the end of the operation, determined by D-dimer, was comparable in both groups (0.15, 0.11-0.17 mg/L [aprotinin] versus 0.18, 0.12-0.24 mg/L [tranexamic acid]).
The activated partial thromboplastin time was prolonged up to 4 h postoperatively in the aprotinin group, while the heparin requirement was reduced: 19% of the patients in the aprotinin group and 45% in the tranexamic acid group received at least one additional bolus heparin during cardiopulmonary bypass (P < 0.001). Troponin T levels postoperatively and on postoperative day 1 were significantly higher in the tranexamic acid group (P = 0.017). No differences in renal, cardiac, or mortality outcomes were observed. Conclusion: Considering the rate of transfusion of red blood cells, tranexamic acid was slightly inferior in patients undergoing CABG, but there was no difference in patients receiving AVR. Tranexamic acid seems to be less effective in operations with increased bleeding such as CABG. Clinical benefit depends on specific patient and institution characteristics (ClinicalTrials.gov NCT00396760). abstract_id: PUBMED:19413940 Minor dentoalveolar surgery in patients ungergoing antithrombotic therapy Minor oral dentoalveolar surgery can be performed safely without interrupting treatment with vitamin K antagonists provided the INR is within therapeutic range. Platelet inhibitors such as aspirin and clopidogrel may increase the risk of bleeding, but the risk of disabling or fatal sequelae is generally higher if the treatment is stopped. Application of local haemostatic agents and postoperative mouthwashes with tranexamic acid are recommended. Any changes in antithrombotic therapy must be undertaken in collaboration with the patient's prescribing physician. abstract_id: PUBMED:31520092 Fast track concepts in total knee arthroplasty: use of tranexamic acid and local intra-articular anesthesia technique Objective: Fast track concepts are used to reduce the risk of perioperative and postoperative complications after total knee arthroplasty. Indications: The described concepts are used for patients with indications for the implantation of a total knee prosthesis. Contraindications: Contraindications for fast track concepts are aged patients, dementia, American Society of Anesthesiologists (ASA) grade IV and implantation of large revision or tumor prostheses. Contraindications for tranexamic acid are bleeding in the urinary tract, caution in cases of known epilepsy, individual risk assessment in existing thromboses or increased thrombosis risk, fresh myocardial infarction, conditions following fresh pulmonary embolism, percutaneous transluminal coronary angioplasty (PTCA) and stent implantation. Contraindications for ropivacaine are hypersensitivity (allergy) to ropivacaine and other amide type topical anesthetics and hypovolemia. Surgical Technique: Preoperative administration of 1 g tranexamic acid and intraoperative local infiltration anesthesia are carried out. After femoral and tibial bone resection and before cementing the femoral and tibial components, approximately 40 ml of ropivacaine (2%) is injected into the posterior capsule. This is followed by injection of the medial and lateral collateral ligaments with approximately 20 ml each and infiltration of Hoffa's fat pad and the extensor apparatus also with approximately 20 ml local anesthetic. After cementing, the subcutaneous tissue is infiltrated with approximately 50 ml ropivacaine solution. Postoperative Management: On the same day as the operation the patient is mobilized with the help of a physiotherapist. The patient should, if possible, walk a few steps on crutches. 
Systemic analgesic treatment is carried out according to step II of the World Health Organization (WHO) analgesic ladder, with a weak opioid and a step I non-opioid analgesic (nonsteroidal anti-inflammatory drug, NSAID, and/or metamizole). Gabapentin can be used as an adjuvant comedication. Medicinal thrombosis prophylaxis is carried out with a low molecular weight heparin for 2 weeks postoperatively. Results: In 100 patients who preoperatively received 1 g of tranexamic acid and intra-articular infiltration anesthesia, pain on the evening of the day of the operation averaged 2.1 (±1.8) on the numeric pain rating scale (NPRS). In one patient, there was a sensory deficit of the lower leg and foot. A motor deficit was not observed. A total of 90 patients were able to raise and straighten the leg. On the day of surgery, 68 patients were able to walk more than 10 steps and 22 patients could be mobilized to a standing position. The mean length of hospital stay was 6.6 days (5-11 days). No infections, thromboses or pulmonary embolisms occurred. abstract_id: PUBMED:31966979 When do we need more than local compression to control intraoral haemorrhage? Objectives: The aims of this study were to determine the effectiveness of local compression in patients presenting to the emergency room with intraoral bleeding and to identify when complex haemostatic measures may be required. Materials And Methods: Five hundred forty patients who had experienced intraoral haemorrhage were retrospectively reviewed. The outcome variable was the haemostasis method used, i.e., simple (local compression with gauze) or complex (an alternative method after local compression had failed). Predictor variables were sex, age, American Society of Anesthesiologists (ASA) class, hepatic cirrhosis, bleeding disorder, use of antithrombotic agents, and site/cause of haemorrhage. Results: The mean patient age was 48.9 ± 23.9 years, 53.5% were male, 42.8% were ASA class II or higher, and 23.7% were taking antithrombotic agents. Local compression was used most often (68.1%), followed by local haemostatic agents, sutures, systemic tranexamic acid or blood products, and electrocautery. The most common site of bleeding was the gingiva (91.7%), and the most common cause was tooth extraction (45.7%). Risk factors for needing a complex haemostasis method were use of antithrombotic agents (odds ratio 2.047, P=0.009) and minor oral surgery (excluding extraction and implant procedures; odds ratio 6.081, P=0.001).
No thromboembolic complications were recorded in the study group, suggesting that TXA can be used safely without increasing the risk of thromboembolic events (PUBMED:27105588).
Instruction: Does the pattern of injury in elderly pedestrian trauma mirror that of the younger pedestrian? Abstracts: abstract_id: PUBMED:23546344 Elderly pedestrian neurotrauma: A descriptive study from a premier neurotrauma center in India. Context: Experience with elderly pedestrian neurotrauma at a major neurotrauma tertiary center. Aims: To highlight the specific injuries and outcome of the elderly pedestrian neurotrauma patients within the city of Bangalore and its surrounding districts. Settings And Design: A retrospective study consisting of demographic data, clinical findings, radiological details, and outcomes. Materials And Methods: A study was conducted at the casualty services, in which 143 consecutive elderly pedestrian (age >60 years) head injury victims were studied from June to September 2009. The records from the hospital mortuary were analyzed from 2007 to 2009. An analysis of 77 elderly patients who died as a pedestrian in accidents during this period was performed. Statistical Analysis Used: SPSS 15. Results: The elderly pedestrians constituted 27% (143/529) of all pedestrian traumas. Two wheelers were the most common accident vehicle (56.6%, 81/143). Most of the injuries (38.5%, 55/143) occurred during peak traffic hours, that is, 4 pm to 9 pm. Majority sustained moderate to severe head injury (61%, 87/143). More than three-fourths of patients required a computed tomography (CT) scan (77%, 110/143), in which there was a higher frequency of contusion (31.5%, 45/143), and subdural hemorrhage (23.1%, 33/143). Most of the injured (43.3%, 13/30) underwent surgery for intracranial hematoma. The mortality rate was 22.8% (8/35). Nearly one-fourth of conducted postmortems among pedestrians belonged to the elderly age group (77/326, 23.6%). Conclusions: Elderly pedestrian neurotrauma patients sustain a more severe injury as evident by poorer Glasgow Coma Score (GCS) scores and CT scan findings, and hence have a higher mortality rate. abstract_id: PUBMED:21109262 Does the pattern of injury in elderly pedestrian trauma mirror that of the younger pedestrian? Background: Walking is the primary mode of transportation for people aged 65 y and over; hence pedestrian injuries are a substantial source of morbidity and mortality among elderly patients in the United States. This study is aimed at evaluating the pattern of injury in the elderly pedestrians and how it differs from younger patients. Methods: Retrospective analysis of the National Trauma Data Bank (2002-2006) was performed, with inclusion criteria defined as pedestrian injuries based on ICD-9 codes, excluding age < 15 y. The following age categories in years were created: 15-24 (reference group), 25-34, 35-44, 45-54, 55-64, 65-74, 75-84, and 85-89. The injury prevalence was compared, and multivariate regression for mortality was conducted adjusting for demographic and injury characteristics. Results: A total of 79,307 patients were analyzed. Superficial injuries were the most common at 29.1%, with lower extremity fractures and intracranial injuries following at 25.1% and 21.4% respectively. The very elderly (75-84 and 85-89) had significantly higher rates of fractures of the pelvis(16.2% and 16.8% versus 8.1% in the youngest group), upper (19.3% and 18.4% versus 9.8%), lower extremities (31.1% and 31.9% versus 22.5%) and intracranial injuries (25.5% and 28.7% versus 22.4%), but sustained lower rates of hepatic (2.3% and 1.7% versus 3.0%) injuries, with no difference seen in pancreatic, splenic, and genitourinary injuries. 
On multivariate analysis, very elderly patients were six to eight times more likely to die (OR 6.24 and 8.27, P < 0.001). Conclusion: Elderly patients have higher rates of fractures and intracranial injuries, with markedly worse mortality after pedestrian trauma. abstract_id: PUBMED:27494040 Injury pattern in lethal motorbikes-pedestrian collisions, in the area of Barcelona, Spain. Introduction: There are several studies on the injury pattern in M1-type vehicle-pedestrian collisions, and based on them, there have been several changes in automobiles for pedestrian protection. However, the lack of sufficient studies on the injury pattern in motorbike-pedestrian collisions has led to a lack of design optimization for these vehicles. The objective of this research is to study the injury pattern of pedestrians involved in collisions with motorized two-wheeled vehicles. Methods: A retrospective descriptive study of pedestrian deaths after collisions with motorcycles in an urban area, Barcelona, was performed. The cases were collected from the Forensic Pathology Service database of the Institute of Legal Medicine of Catalonia. The selected cases were categorized as pedestrian-motorcycle collisions occurring between January 1st, 2005 and December 31st, 2014. Data were collected from the autopsy, medical, and police reports. The collected information was then analyzed using Microsoft Excel statistical functions. Results: Traumatic brain injury was the main cause of death in pedestrians hit by motorized two-wheeled vehicles (62.85%). The most frequent injury was subarachnoid hemorrhage, in 71.4% of cases, followed by cerebral contusions and skull base fractures (65.7%). By contrast, pelvic fractures and tibia fractures only appeared in 28.6%. Conclusions: The study characterizes the injury pattern of pedestrians involved in a collision with motorized two-wheeled vehicles in an urban area, Barcelona, which has been found to be different from that of other vehicle-pedestrian collisions, with a higher incidence of brain injuries and a lower frequency of lower extremity fractures of the pelvis, tibia and fibula. abstract_id: PUBMED:38175182 Pedestrian injuries in the United States: Shifting injury patterns with the introduction of pedestrian protection into the passenger vehicle fleet. Objective: Between 2010 and 2020, an annual average of more than 70,000 pedestrians were injured in U.S. motor vehicle crashes. Pedestrian fatalities increased steadily over that period, outpacing increases in vehicle occupant fatalities. Strategies for reducing pedestrian injuries include pedestrian crash prevention and improved vehicle design for protection of pedestrians in the crashes that cannot be prevented. This study focuses on understanding trends in injuries sustained in U.S. pedestrian crashes to inform continuing efforts to improve pedestrian crash protection in passenger vehicles. Methods: More than 160,000 adult pedestrians injured in motor vehicle crashes who were admitted to U.S. trauma centers between 2007 and 2016 were drawn from the National Trauma Data Bank (NTDB) Research Data Sets. The injuries in those cases were used to explore the shifting patterns of pedestrian injuries. Results: The proportion of pedestrians with thorax injuries increased 3.0 percentage points to 30.7% of trauma center-admitted NTDB pedestrian cases over the 10 years studied, and the proportion with pelvis/hip injuries increased to 21.2%.
The proportion of cases with head injuries fell to 48.6%, and the percentage of pedestrians with lower extremity injury (44%) did not change significantly over the 10 year period. Assessment of possible reasons for the shifts suggested that increasing numbers of sport utility vehicles, population increases among the oldest age groups, and improvements in pedestrian protection in U.S. passenger vehicles likely contributed to, but did not completely account for, the relative changes in injury frequency in each body region. Conclusions: More important than the reasons for the shifts in the relative frequency of injury to each body region are the conclusions that can be drawn regarding priorities for pedestrian protection research. Though head/face and lower extremity injuries remained the most frequently injured body regions in adult pedestrians admitted to NTDB trauma centers, the relative frequency of thorax and pelvis/hip injuries increased steadily, underlining the increasing importance of pedestrian protection research on these body regions. abstract_id: PUBMED:36706205 Older Adult Pedestrian Injury in Rhode Island, 2017-2020. Objective: To understand the epidemiology and clinical outcomes of older adult pedestrian injury in Rhode Island. Methods: Descriptive univariate analysis of data from Rhode Island Hospital's trauma registry on patients admitted for pedestrian-related injuries between 2017-2020. Results: The rate of pedestrian injury in older adults was 1.5 times the rate in adults age 18-49. Injured older adult pedestrians experienced a higher rate of serious adverse events during hospitalization (18.0%) than their younger counterparts (10.3%) and had almost twice the mortality rate (14.9% versus 7.6%). Across ages, pedestrian injury rates are higher in densely populated areas, and those injured disproportionately are male and have comorbid alcohol and substance use disorders. Conclusions: The increased risk of pedestrian injury in older adults is evident and necessitates intervention. Further research is warranted on the root causes of higher pedestrian injury and mortality rates among older adults. abstract_id: PUBMED:29192337 Pediatric emergency department visits for pedestrian and bicyclist injuries in the US. Background: Despite reductions in youth pedestrian and bicyclist deaths over the past two decades, these injuries remain a substantial cause of morbidity and mortality for children and adolescents. There is a need for additional information on non-fatal pediatric pedestrian injuries and the role of traumatic brain injury (TBI), a leading cause of acquired disability. Methods: Using a multi-year national sample of emergency department (ED) records, we estimated annual motorized-vehicle related pediatric pedestrian and bicyclist (i.e. pedalcyclist) injury rates by age and region. We modeled in-hospital fatality risk controlling for age, gender, injury severity, TBI, and trauma center status. Results: ED visits for pediatric pedestrian injuries declined 19.3% (95% CI 16.8, 21.8) from 2006 to 2012, with the largest decreases in 5-to-9 year olds and 10-to-14 year olds. Case fatality rates also declined 14.0%. There was no significant change in bicyclist injury rates. TBI was implicated in 6.7% (95% CI 6.3, 7.1) of all pedestrian and bicyclist injuries and 55.5% (95% CI 27.9, 83.1) of fatalities. Pedestrian ED visits were more likely to be fatal than bicyclist injuries (aOR = 2.4, 95% CI 2.3, 2.6), with significant additive interaction between pedestrian status and TBI. 
Conclusions: TBI in young pedestrian ED patients was associated with a higher risk of mortality compared to cyclists. There is a role for concurrent clinical focus on TBI recovery alongside ongoing efforts to mitigate and prevent motor vehicle crashes with pedestrians and bicyclists. Differences between youth pedestrian and cycling injury trends merit further exploration and localized analyses, with respect to behavior patterns and interventions. ED data captures a substantially larger number of pediatric pedestrian injuries compared to crash reports and can play a role in those analyses. abstract_id: PUBMED:35793173 Knee ligament injuries in U.S. pedestrian crashes. Objective: Projectile legform tests are used to evaluate pedestrian lower extremity injury risk, including risk of injury to the cruciate and collateral ligaments. However, it has been suggested that cruciate ligament injuries rarely occur without collateral ligament injuries, making a cruciate ligament injury requirement unnecessary in pedestrian test procedures. Therefore, the current study examines cruciate ligament injuries among U.S. pedestrians with and without other injuries that are evaluated in pedestrian test procedures. Methods: Injury data for pedestrians treated in U.S. trauma centers from 2007 to 2017 were drawn from the National Trauma Data Bank (NTDB) Research Data Set (RDS) and from its successor, the Trauma Quality Program (TQP) Participant User Files (PUF). Crash and demographic details for individual cases with documented knee ligament injuries were obtained from the Pedestrian Crash Data Study (PCDS). Results: Among pedestrians aged 16 and older with knee ligament injuries, 38% had only collateral injuries, 31% had only cruciate injuries and 31% were documented with injuries to both. Younger pedestrians also sustained cruciate injuries without collateral injuries, with 36% of the 0-15 year-old pedestrians diagnosed with knee ligament injuries having isolated cruciate injuries. Given that injuries to the left and right knee could not be distinguished in NTDB cases, these estimates of isolated ligament injuries are likely conservative, so that at least 31% of pedestrians aged 16 and older and at least 36% of younger pedestrians sustained cruciate ligament injuries without collateral ligament injuries in the same knee. A PCDS case study illustrated how cruciate injury can occur without collateral injury in a lateral bumper impact below the knee. Conclusions: Cruciate ligament injuries can occur in pedestrian crashes, with or without other injuries that are evaluated in pedestrian test procedures. Isolated cruciate injuries may be more likely in impacts above or below the knee and in impacts with a component of anterior-posterior loading. The frequency of cruciate injury in the absence of collateral injury in lateral and non-lateral impact supports inclusion of injury measures correlating to cruciate injury risk in pedestrian legform test procedures. abstract_id: PUBMED:33815635 Fatal Motor Vehicle-Pedestrian Collision Injury Patterns-A Systematic Literature Review. Introduction: Injury patterns in pedestrians struck by motor vehicles were described in medical literature first published almost a half century ago. "Classical" triads of injury distribution were described for adults (skull-pelvis-extremity) and subsequently applied to children (head-hip or pelvis-distal femur/knee joint). 
Notably, these classical triads were derived from two publications reporting clinical observations of only 11 patients, all of whom were adults. Methods: A systematic literature review was conducted using Medline, CINAHL, EMBASE, and Cochrane to determine the evidence-base for motor vehicle collision (MVC)-pedestrian injury "triads" and other trauma patterns described for pedestrians in the adult and pediatric age groups. Results: Of the 1540 full-text articles identified in the review, 56 articles published in English met the inclusion criteria, that is, motor vehicle-pedestrian collision resulting in specific, fatal injuries determined by postmortem examinations. There were variations in injury patterns that differed from the "classical" triads. These differences likely stem from advances in vehicle design and safety features which have affected the nature and distribution of injuries. Discussion: Further research on the correlation of specific injuries sustained by pedestrians of different ages with various types of vehicles and impacts are needed to assess the validity of previously observed injury patterns in relation to the current motor vehicle fleet. Delineation of injury patterns can assist health care teams in trauma management. Vehicle manufacturers and government regulators can better assess whether the introduction of advanced driver assistance features designed to protect pedestrians when struck will be effective in reducing severe injuries. In forensic pathology practice, knowledge of pedestrian injury patterns based on data representative of impacts involving modern vehicles can provide MVC death investigators the means to determine MVC dynamics and pedestrian kinematics. abstract_id: PUBMED:38301297 Database Integration Correlates Street Crossing Design Strategies With Pedestrian Injury. Introduction: Transportation databases have limited data regarding injury severity of pedestrian versus automobile patients. To identify opportunities to reduce injury severity, transportation and trauma databases were integrated to examine the differences in pedestrian injury severity at street crossings that were signalized crossings (SCs) versus nonsignalized crossings (NSCs). It was hypothesized that trauma database integration would enhance safety analysis and pedestrians struck at NSC would have greater injury severity. Methods: Single-center retrospective review of all pedestrian versus automobile patients treated at a level 1 trauma center from 2014 to 2018 was performed. Patients were matched to the transportation database by name, gender, and crash date. Google Earth Pro satellite imagery was used to identify SC versus NSC. Injury severity of pedestrians struck at SC was compared to NSC. Results: A total of 512 patients were matched (median age = 41 y [Q1 = 26, Q3 = 55], 74% male). Pedestrians struck at SC (n = 206) had a lower injury severity score (ISS) (median = 9 [4, 14] versus 17 [9, 26], P < 0.001), hospital length of stay (median = 3 [0, 7] versus 6 [1, 15] days, P < 0.001), and mortality (21 [10%] versus 52 [17%], P = 0.04), as compared to those struck at NSC (n = 306). The transportation database had a sensitivity of 63.4% (55.8%-70.4%) and specificity of 63.4% (57.7%-68.9%) for classifying severe injuries (ISS >15). Conclusions: Pedestrians struck at SC were correlated with a lower ISS and mortality compared to those at NSC. Linkage with the trauma database could increase the transportation database's accuracy of injury severity assessment for nonfatal injuries. 
Database integration can be used for evidence-based action plans to reduce pedestrian morbidity, such as increasing the number of SC. abstract_id: PUBMED:28454869 Hierarchical ordered model for injury severity of pedestrian crashes in South Korea. Introduction: The high percentage of fatalities in pedestrian-involved crashes is a critical social problem. The purpose of this study is to investigate factors influencing injury severity in pedestrian crashes by examining the demographic and socioeconomic characteristics of the regions where crashes occurred. Method: To understand the correlation between the unobserved characteristics of pedestrian crashes in a defined region, we apply a hierarchical ordered model, in which we set crash characteristics as lower-level variables and municipality characteristics as upper-level. Pedestrian crash data were collected and analyzed for a three-year period from 2011 to 2013. The estimation results show the statistically significant factors that increase injury severity of pedestrian crashes. Results: At the crash level, the factors associated with increased severity of pedestrian injury include intoxicated drivers, road-crossing pedestrians, elderly pedestrians, heavy vehicles, wide roads, darkness, and fog. At the municipality level, municipalities with low population density, lower level of financial independence, fewer doctors, and a higher percentage of elderly residents experience more severe pedestrian crashes. Municipalities ranked as having the top 10% pedestrian fatality rate (fatalities per 100,000 residents) have rates 7.4 times higher than municipalities with the lowest 10% rate of fatalities. Their demographic and socioeconomic characteristics also have significant differences. The proposed model accounts for a 7% unexplained variation in injury severity outcomes between the municipalities where crashes occurred. Conclusion: To enhance the safety of vulnerable pedestrians, considerable investments of time and effort in pedestrian safety facilities and zones should be made. More certain and severe punishments should be also given for the traffic violations that increase injury severity of pedestrian crashes. Furthermore, central and local governments should play a cooperative role to reduce pedestrian fatalities. Practical applications: Based on our study results, we suggest policy directions to enhance pedestrian safety. Answer: The pattern of injury in elderly pedestrian trauma does not entirely mirror that of younger pedestrians. Elderly pedestrians tend to sustain more severe injuries and have higher mortality rates after pedestrian trauma compared to their younger counterparts. Specifically, studies have shown that elderly pedestrians have higher rates of fractures and intracranial injuries (PUBMED:21109262). In the very elderly (ages 75-84 and 85-89), there are significantly higher rates of fractures of the pelvis, upper and lower extremities, and intracranial injuries compared to the youngest group (ages 15-24) (PUBMED:21109262). Additionally, elderly pedestrian neurotrauma patients often sustain more severe injuries, as evidenced by poorer Glasgow Coma Score (GCS) scores and CT scan findings, leading to a higher mortality rate (PUBMED:23546344). 
Furthermore, the injury pattern of pedestrians involved in collisions with motorized two-wheeled vehicles in urban areas like Barcelona has been found to be different from other vehicle-pedestrian collisions, with a higher incidence of brain injuries and a lower frequency of lower-extremity fractures (pelvis, tibia, and fibula) (PUBMED:27494040). This suggests that the type of vehicle involved in the collision can also influence the injury pattern. In the United States, the introduction of pedestrian protection into the passenger vehicle fleet has led to shifting injury patterns, with an increase in the proportion of pedestrians with thorax and pelvis/hip injuries and a decrease in the proportion of cases with head injuries (PUBMED:38175182). This indicates that improvements in vehicle design for pedestrian protection may have contributed to changes in injury patterns over time. Overall, while there are some similarities in the types of injuries sustained by pedestrians of different ages, the severity and specific patterns of injury can vary significantly between elderly and younger pedestrians.
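As an illustrative aside on the methods mentioned above: the "hierarchical ordered model" of crash severity described in PUBMED:28454869 can be sketched, in deliberately simplified single-level form, as an ordered logit. The snippet below is a hypothetical sketch only — the data frame, the column names, and the omission of the municipality-level (upper-level) random effects are assumptions made purely for illustration, not the study's actual analysis.

```python
# Hypothetical sketch: single-level ordered logit for pedestrian injury severity,
# loosely echoing the crash-level predictors named in PUBMED:28454869.
# The real study used a two-level (crash within municipality) hierarchical model,
# which this simplified example omits.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical crash-level data: severity coded 0 = minor, 1 = serious, 2 = fatal.
crashes = pd.DataFrame({
    "severity":              [0, 1, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2],
    "driver_intoxicated":    [0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1],
    "pedestrian_age_65plus": [0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "darkness":              [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1],
})

# Fit an ordered (proportional-odds) logit: a positive coefficient means higher
# odds of falling into a more severe outcome category.
model = OrderedModel(
    crashes["severity"],
    crashes[["driver_intoxicated", "pedestrian_age_65plus", "darkness"]],
    distr="logit",
)
result = model.fit(method="bfgs", disp=False)
print(result.params)
```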
Instruction: Is coronary angiography performed in the appropriate patients after acute myocardial infarction? Abstracts: abstract_id: PUBMED:30892387 Analysis of the Appropriate Use Criteria for Coronary Angiography in Two Cardiology Services of Southern Brazil. Background: Despite its great relevance, there are no studies in our country evaluating the application of the 2012 guidelines for the appropriate use of cardiac diagnostic catheterization. Objective: To analyze the adequacy of coronary angiography performed in two hospitals in the southern region of Brazil. Methods: This is a multicenter cross-sectional study, which analyzed indications, results and proposals for the treatment of 737 coronary angiograms performed in a tertiary hospital with multiple specialties (Hospital A) and a tertiary cardiology hospital (Hospital B). Elective or emergency coronary angiographies were included, except for cases of acute myocardial infarction with ST segment elevation. The level of statistical significance adopted was 5% (p < 0.05). Results: Of the 737 coronary angiograms, 63.9% were performed in male patients. The mean age was 61.6 years. The indication was acute coronary syndrome in 57.1%, and investigation of coronary artery disease in 42.9% of the cases. Regarding appropriateness, 80.6% were classified as appropriate, 15.1% occasionally appropriate, and 4.3% rarely appropriate. The proposed treatment was clinical for 62.7%, percutaneous coronary intervention for 24.6%, and myocardial revascularization surgery for 12.7% of the cases. Of the coronary angiographies classified as rarely appropriate, 56.2% were related to non-performance of previous functional tests, and 21.9% showed severe coronary lesions. However, regardless of the outcome of coronary angiography, all patients in this group were indicated for clinical treatment. Conclusion: We observed a low number of rarely appropriate coronary angiograms in our sample. The guideline recommendation in these cases was adequate, and no patient required revascularization treatment. Most of these cases are due to non-performance of functional tests. abstract_id: PUBMED:25751586 Validation of the appropriate use criteria for coronary angiography: a cohort study. Background: The use of invasive coronary angiography in stable ischemic heart disease (IHD) varies widely. Objective: To validate the 2012 appropriate use criteria for diagnostic catheterization by examining the relationship between the appropriateness of cardiac catheterization in patients with suspected stable IHD and the proportion of patients with obstructive coronary artery disease (CAD) and subsequent revascularization. Design: Population-based, observational, multicenter cohort study. Setting: The Cardiac Care Network, a registry of all patients having elective angiography at 18 hospitals in Ontario, Canada, between 1 October 2008 and 30 September 2011. Patients: Persons without prior coronary revascularization or myocardial infarction who had angiography for suspected stable CAD. Measurements: Appropriateness scores were ascertained by using data collected at the time of the index angiography and were categorized as appropriate, inappropriate, or uncertain. Results: Among the final cohort of 48 336 patients, 58.2% of angiographic studies were classified as appropriate, 10.8% were classified as inappropriate, and 31.0% were classified as uncertain. Overall, 45.5% of patients had obstructive CAD.
In patients with appropriate indications for angiography, 52.9% had obstructive CAD, with 40.0% undergoing revascularization. In those with inappropriate indications, 30.9% had obstructive CAD and 18.9% underwent revascularization; in those with uncertain indications, 36.7% had obstructive CAD and 25.9% had revascularization. Although more patients with appropriate indications had obstructive CAD and underwent revascularization (P < 0.001), a substantial proportion of those with inappropriate or uncertain indications had important coronary disease. Limitation: Data were not available on whether symptoms were atypical. Conclusion: Despite the association between appropriateness category and obstructive CAD, this study raises concerns about the ability of the appropriate use criteria to guide clinical decision making. Primary Funding Source: Canadian Institutes of Health Research. abstract_id: PUBMED:28082007 Prognostic performance of coronary computed tomography angiography in asymptomatic individuals as compared to symptomatic patients with an appropriate indication. Introduction: Current Appropriate Use Criteria exclude coronary computed tomography angiography (CTA) in asymptomatic individuals. We compared the prognostic value of coronary CTA in asymptomatic individuals to symptomatic patients with "definitely appropriate" indications for coronary CTA. Methods: Consecutive patients without previously known CAD referred for a CTA exam were divided into 2 groups. One group consisted of asymptomatic individuals, the other of symptomatic patients with a "definitely appropriate" indication for coronary CT (unable to exercise and/or with an uninterpretable electrocardiogram and at an intermediate pre-test probability of obstructive coronary artery disease). Patients who did not fit into either group were excluded. The segment stenosis score (SSS) was calculated based on coronary CTA and patients were followed for a composite endpoint of all-cause death, acute myocardial infarction and late revascularization. Results: A total of 1080 patients (60 ± 12 years, 65% male) were included in the study (674 "asymptomatic" and 406 "appropriate"). SSS >4 was more frequent in "asymptomatic" than in "appropriate" CT data sets (27% vs 20%, p = 0.02). After a mean follow-up of 4.4 ± 1.8 yrs, 49 patients reached the composite endpoint. On multivariable analysis adjusting for CAD risk factors and symptoms, only a high-risk CTA study and past smoking were independently predictive of events. Conclusions: Although currently not regarded as "definitely appropriate", the use of coronary CTA in a selected asymptomatic population had a higher yield for identifying high-risk individuals than appropriately indicated studies in symptomatic patients and provided equal prognostic information. abstract_id: PUBMED:9465750 Coronary angiography Coronary angiography continues to be the standard for assessing coronary artery obstructive disease. Coronary angiography is used not only in diagnosis, but also to assess the appropriateness and feasibility of various forms of therapy. In addition, information provided by coronary angiography is useful for assessing prognosis in patients with coronary artery disease. It helps to decide whether percutaneous coronary angioplasty, coronary bypass surgery or medical treatment should be performed. Bypass surgery has been shown to improve prognosis in patients with left main and three vessel disease.
However, in patients with single vessel disease, choice of therapy generally does not influence prognosis. Although the overall risk of coronary angiography is very low, it should be restricted to patients in whom the information obtained is expected to improve management. abstract_id: PUBMED:34454051 Coronary computed tomography angiography in patients with stable coronary artery disease. The treatment of coronary artery disease (CAD), which is defined by stable anatomical atherosclerotic and functional alterations of epicardial vessels or microcirculation, focuses on managing intermittent angina symptoms and preventing major adverse cardiovascular events with optimal medical therapy. When patients with known CAD present with angina and no acute coronary syndrome, they have historically been evaluated with a variety of noninvasive stress tests that utilize electrocardiography, radionuclide scintigraphy, echocardiography, or magnetic resonance imaging for determining the presence and extent of inducible myocardial ischemia. Patient event-free survival, however, is largely driven by the coronary atherosclerotic disease burden, which is not directly assessed by functional testing. Direct evaluation of coronary atherosclerotic disease by coronary computed tomography angiography (coronary CTA) has emerged as the first line noninvasive imaging modality as it improves diagnostic accuracy and positively influences clinical management. Compared to functional assessment of CAD, coronary CTA-guided management results in improved patient outcomes by facilitating prevention of myocardial infarction. Other strengths of coronary CTA include detailed atherosclerotic plaque characterization and the ability to assess functional significance of specific lesions, which may further improve risk assessment and prognosis and lead to more appropriate referrals for additional testing, such as invasive coronary angiography. abstract_id: PUBMED:31883979 Avoidance of Coronary Angiography in High-Risk Patients With Acute Coronary Syndromes: The ACSIS Registry Findings. Background/purpose: Patients with acute coronary syndrome (ACS) are at high-risk for recurrent coronary syndromes, heart failure and death. Early coronary intervention combined with medications reduces these risks. The ACS Israeli Survey (ACSIS) is conducted over a 2-month period, every 2-3 years. ACSIS includes all patients discharged with a diagnosis of ACS from the 24 coronary care units and cardiology departments in Israel. We compared clinical profiles and 1-year survival between ACS patients who did and did not undergo coronary angiography. Methods/materials: We reviewed ACSIS for the period 2002-2013. Results: The prognosis of patients who did not undergo coronary angiography during hospitalization (N = 2078) was significantly worse than for patients who underwent angiography (N = 9550). Avoidance of angiography was less common in ST-elevation myocardial infarction (STEMI) patients than in non-STEMI/unstable angina (NSTEMI/UAP) patients (13% vs. 22%, p < 0.001). Among NSTEMI/UAP patients, those who did not undergo angiography were older (mean: 71 vs. 64 years, p < 0.001), had higher incidences of diabetes (47% vs. 38%, p < 0.001), and renal (55% vs. 27%, p < 0.001) and heart failure (35% vs. 13%, p < 0.01) on admission, compared to those who underwent angiography. Even patients that underwent only diagnostic angiography had had a better prognosis than patients who did not undergo angiography. 
After propensity score matching for the major differences mentioned above, survival was still significantly better for patients who underwent angiography. Conclusion: ACS patients who did not undergo coronary angiography had higher-risk clinical profiles and worse 1-year survival than ACS patients who underwent angiography. After propensity score matching, the absence of angiography was independently associated with higher mortality. Summary: Data over 10 years were reviewed from a national registry of acute coronary syndrome. Patients who did not undergo coronary angiography during hospitalization were older and with more comorbidities than patients who underwent angiography. After propensity score matching, the absence of angiography remained independently associated with 1-year mortality. abstract_id: PUBMED:27142217 Computed tomography coronary angiography in patients with acute myocardial infarction and normal invasive coronary angiography. Background: Three to five percent of patients with acute myocardial infarction (AMI) have normal coronary arteries on invasive coronary angiography (ICA). The aim of this study was to assess the presence and characteristics of atherosclerotic plaques on computed tomography coronary angiography (CTCA) and describe the clinical characteristics of this group of patients. Methods: This was a multicentre, prospective, descriptive study on CTCA evaluation in thirty patients fulfilling criteria for AMI and without visible coronary plaques on ICA. CTCA evaluation was performed head to head in consensus by two experienced observers blinded to baseline patient characteristics and ICA results. Analysis of plaque characteristics and plaque effect on the arterial lumen was performed. Coronary segments were visually scored for the presence of plaque. Seventeen segments were differentiated, according to a modified American Heart Association classification. Echocardiography performed according to routine during the initial hospitalisation was retrieved for analysis of wall motion abnormalities and left ventricular systolic function in most patients. Results: Twenty-five patients presented with non ST-elevation myocardial infarction (NSTEMI) and five with ST-elevation myocardial infarction (STEMI). Mean age was 60.2 years and 23/30 were women. The prevalence of risk factors of coronary artery disease (CAD) was low. In total, 452 coronary segments were analysed. Eighty percent (24/30) had completely normal coronary arteries and twenty percent (6/30) had coronary atherosclerosis on CTCA. In patients with atherosclerotic plaques, the median number of segments with plaque per patient was one. Echocardiography was normal in 4/22 patients based on normal global longitudinal strain (GLS) and normal wall motion score index (WMSI); 4/22 patients had normal GLS with pathological WMSI; 3/22 patients had pathological GLS and normal WMSI; 11/22 patients had pathological GLS and WMSI and among them we could identify 5 patients with a Takotsubo pattern on echo. Conclusions: Despite a diagnosis of AMI, 80 % of patients with normal ICA showed no coronary plaques on CTCA. The remaining 20 % had only minimal non-obstructive atherosclerosis. Patients fulfilling clinical criteria for AMI but with completely normal ICA need further evaluation, suggestively with magnetic resonance imaging (MRI). abstract_id: PUBMED:19121248 Frequency and predictors of urgent coronary angiography in patients with acute pericarditis. 
Objectives: To determine the frequency of urgent coronary angiography in patients with acute pericarditis and to examine clinical characteristics associated with coronary angiography. Patients And Methods: This is a retrospective analysis of all incident cases of acute viral or idiopathic pericarditis evaluated at Mayo Clinic's site in Rochester, MN, between January 1, 2000, and December 31, 2006. The main outcome measures were use of urgent coronary angiography and rate of concomitant coronary artery disease in patients with pericarditis. Results: There were 238 patients with a final diagnosis of acute pericarditis (mean age, 47.7+/-17.9 years; 157 [66.0%] were male). On the initial electrocardiogram, 146 patients (61.3%) had ST-segment elevation, and 92 (38.7%) had no ST-segment elevation. Coronary angiography was performed in 40 patients (16.8% of all patients); the frequency was 5-fold higher among those with ST-segment elevation (24.7% vs 4.3%; P<.001). Additionally, 7 patients (4.8%) with ST-segment elevation received thrombolytics before transfer to our institution; no patients without ST-segment elevation received thrombolysis (P=.05). Characteristics associated with a higher likelihood of coronary angiography included typical anginal chest pain, ST-segment elevation, previous percutaneous coronary intervention, elevated troponin T values, diaphoresis, and male sex. Coronary angiography revealed concomitant mild to moderate coronary artery disease in 14 (35.0%) of the 40 patients who underwent this procedure. Conclusion: Urgent coronary angiography is commonly performed in patients with acute pericarditis, particularly those with ST-segment elevation, typical myocardial infarction symptoms, and elevated troponin T values. Coronary artery disease was present angiographically in one-third of patients undergoing the procedure. Although patients with ST-segment elevation myocardial infarction must receive prompt reperfusion, clinicians must also consider the diagnosis of pericarditis to avoid unneeded coronary angiography. abstract_id: PUBMED:29376398 Spontaneous coronary artery dissection: Acute findings on coronary computed tomography angiography. Background: The coronary computed tomography angiography features of acute spontaneous coronary artery dissection, an important cause of acute coronary syndrome in young women, have not been assessed. Methods: The "Virtual" Multicenter Mayo Clinic Spontaneous Coronary Artery Dissection Registry was established in 2010 and includes retrospective and prospective patient data. Retrospective assessment of acute coronary computed tomography angiography images was performed for 14 patients (16 vessels) who had images performed within two days of invasive coronary angiography diagnosis of acute spontaneous coronary artery dissection. Results: Four pertinent diagnostic coronary features of acute spontaneous coronary artery dissection were observed in order of prevalence: 1) abrupt luminal stenosis (64%); 2) intramural hematoma (50%); 3) tapered luminal stenosis (36%); and 4) dissection (14%). Additional findings include epicardial fat stranding (42%), coronary tortuosity (29%), and coronary bridge (14%). Fifty percent of patients had myocardial hypoperfusion in the myocardial distribution of the dissected coronary artery. Conclusions: We define key coronary computed tomography angiography features of acute spontaneous coronary artery dissection, the most common of which are abrupt luminal stenosis and intramural hematoma. 
Importantly, intramural hematoma appears similar to noncalcified atherosclerotic plaque, emphasizing the importance of invasive coronary angiography for acute diagnosis of spontaneous coronary artery dissection until the sensitivity and specificity of coronary computed tomography angiography is better understood. abstract_id: PUBMED:30922552 What is the Optimal Rate of Invasive Coronary Angiography After Acute Coronary Syndrome? (ANZACS-QI 22). Background: Invasive coronary angiography plays a pivotal role in the management of acute coronary syndromes (ACS). Wide variability in its use has been previously documented. Our aim was to investigate whether coronary angiography is being used appropriately prior to discharge after ACS, taking into account relative contraindications of the procedure. Methods: Patients presenting with ACS in 2015 to two large, demographically distinct New Zealand (NZ) District Health Boards (DHBs)-Counties Manukau (CMDHB) and Waitemata (WDHB)-were identified from the NZ Ministry of Health National Dataset using ICD-10-AM codes. Patients' clinical data were obtained from the electronic and paper clinical records. Pre-defined relative contraindications to coronary angiography were identified. Results: Of the 3,809 patient admissions coded with ACS, 600 patient admissions (300 from each DHB) were reviewed. Sixty-one (61) (10%) did not meet diagnostic criteria for ACS on review of clinical data and were excluded. Of the patients reviewed, 55% received coronary angiography, with a higher rate in WDHB than CMDHB (61% and 49%, respectively) and 37.5% had relative contraindications documented. The overall rate of angiography was appropriately high in those without a relative contraindication (90.3%) and low in those with one (7.4%). There were fewer patients with relative contraindications in WDHB than CMDHB (36.7% and 48.5%) but the rate of angiography in those with (6.9% and 7.8%) and without (92.5% and 87.5%) contraindications in the two DHBs was similar. Conclusions: The decision to offer coronary angiography after ACS appears to be appropriately influenced by the presence or absence of relative contraindications. Approximately 60% of patients had no documented relative contraindication suggesting that this may be an appropriate angiography rate in New Zealand practice. However, differences between the two DHBs of around 10% appear to be clinically appropriate due to variation in contraindication rates. Answer: Coronary angiography is a critical diagnostic tool used to assess coronary artery disease and guide treatment decisions after acute myocardial infarction (AMI). The appropriateness of its use in patients post-AMI has been the subject of various studies. In a study conducted in Southern Brazil, the majority of coronary angiograms were classified as appropriate (80.6%), with a smaller percentage deemed occasionally appropriate (15.1%) or rarely appropriate (4.3%) (PUBMED:30892387). This suggests that, in this sample, coronary angiography was generally performed in appropriate patients. Another study from Ontario, Canada, found that 58.2% of angiographic studies were classified as appropriate, with a substantial proportion of patients with inappropriate or uncertain indications still having significant coronary disease (PUBMED:25751586). This indicates that while there is an association between appropriateness category and obstructive coronary artery disease, the criteria may not fully guide clinical decision-making. 
In Israel, the ACS Israeli Survey (ACSIS) revealed that patients with ACS who did not undergo coronary angiography had higher-risk clinical profiles and worse 1-year survival than those who underwent angiography (PUBMED:31883979). This suggests that coronary angiography may be underutilized in high-risk patients, potentially impacting their prognosis. A study on patients with acute pericarditis found that urgent coronary angiography is commonly performed, especially in those with ST-segment elevation, typical myocardial infarction symptoms, and elevated troponin T values (PUBMED:19121248). This implies that in certain clinical scenarios, coronary angiography is frequently used to differentiate between AMI and other conditions like pericarditis. Lastly, a New Zealand study indicated that the decision to offer coronary angiography after ACS was appropriately influenced by the presence or absence of relative contraindications, with around 60% of patients without contraindications receiving angiography (PUBMED:30922552). This suggests that coronary angiography is generally performed in appropriate patients post-ACS in that setting. In summary, while there is evidence that coronary angiography is often performed in appropriate patients after AMI, there are also indications that the criteria for appropriateness may not capture all clinically significant cases, and that some high-risk patients may not be receiving angiography when it could be beneficial. Variability in practice and the need for individualized clinical judgment are also highlighted by these studies.
In conclusion, while there is evidence supporting the appropriate use of coronary angiography in many cases post-AMI, there is also an indication of variability in practice and potential areas where the criteria for appropriateness could be improved to ensure that all patients who could benefit from angiography receive it.
Instruction: Health behaviors: is there any distinction for teachers? Abstracts: abstract_id: PUBMED:33806812 Health Literacy as a Major Contributor to Health-Promoting Behaviors among Korean Teachers. Teachers are not only subjects of school health efforts but also role models for students' health behaviors; teachers' health-promoting behaviors can induce students' healthy behaviors with their positive health outcomes. This study was an examination of personal factors, situational factors, and health literacy as influences on teachers' health-promoting behaviors. A hierarchical multiple regression analysis was implemented based on an integrated model of health literacy. The study results showed that health literacy was the strongest predictor of teachers' health-promoting behaviors. In addition, school type and school culture were situational factors related to the interpersonal relations and stress management domains of the Health-Promoting Lifestyle Profile II scale. These findings could serve as foundational evidence for developing programs at the individual and organizational levels that enhance teachers' health-promoting behaviors. abstract_id: PUBMED:31736838 Effect of Teachers' Happiness on Teachers' Health. The Mediating Role of Happiness at Work. The present study aims to expand the understanding of the effects of dispositional happiness and self-esteem, as dispositional traits, on the health of teachers, as well as to understand the role played by the working environment in generating positive affection, thus mediating between the dispositional traits and teachers' health. Two hundred and eighty-two full-time in-service teachers (93.6% female) from Rome (Italy) took part in this study. Their ages ranged from 26 to 55 (M = 40.49 years, SD = 5.93). Participants' teaching experience ranged from 1 to 31 years (M = 9.95 years, SD = 5.65). 30.6% of participants taught in kindergarten (for children aged 0-5 years), 42.6% in primary schools (for children aged 6-11 years), 15.8% in middle schools and 10.9% in high schools. A questionnaire was administered, containing: the Subjective Happiness Scale (SHS); the Rosenberg Self-Esteem Scale (RSES); The adapted version for teachers of the School Children Happiness Inventory (Ivens, 2007); the Physical and Mental Health Scales (SF12). The data were analyzed using the MPLUS software, version 8. Our results showed that teacher happiness at work partially mediates the relationship between dispositional happiness and teacher health, and fully mediates the relationship between self-esteem and teacher health. To the best of our knowledge, the mediational role of teacher happiness has not been addressed before, concerning these dimensions. At the same time, our findings confirmed the role of self-esteem in endorsing health-related behaviors, thus promoting physical and mental health. Moreover, according to our study findings, when teachers acknowledge their workplace as a context in which they feel happy, the impact of dispositional happiness and self-esteem on health conditions is higher. Effective measures to promote teachers' well-being are discussed. abstract_id: PUBMED:34120934 Analysis on the difference of college teachers' professional pressure and strategies to improve teachers' mental health under the expectancy theory. Background: Professional pressure is one of the most concerned issues in society. Teachers are a group of people with greater professional pressure. The pressure sources include students, schools and society. 
Objective: This study aims to explore the professional pressure and mental health of college teachers. Method: Based on the expectancy theory, the professional pressure and mental health of different college teachers are investigated. The overall steps are as follows: determination of the topic, questionnaire design, questionnaire distribution and collection, analysis of the questionnaire data to obtain results, and countermeasure analysis based on those results. Results: The investigation suggests that the scores of college teachers' work pressure load, family life pressure, interpersonal pressure, physical and mental pressure, leadership and organizational factors pressure, career development pressure, scientific research, and professional title pressure are high. From senior to elementary, the pressure of teachers increases first and then decreases. The professional development pressure of liberal arts teachers is significantly higher than that of science teachers and engineering teachers (P < 0.05). Among science and engineering teachers, the professional development pressure of science teachers is relatively high. Men have better mental health than women (P < 0.05). Unmarried teachers have the best mental health status, followed by married and finally divorced (P < 0.05). The mental health of senior and elementary teachers is significantly better than that of sub-senior teachers and intermediate teachers (P < 0.05). Conclusion: The investigation on professional pressure and mental health of college teachers can contribute to solving related problems in China, as well as enrich the content of relevant fields in China. abstract_id: PUBMED:30175891 Mental health literacy among Nigerian teachers. Introduction: Teachers are frontline professionals who have daily contact with children and are therefore most likely to have the biggest impact on their students. Findings in this study should inform the development of teacher training programs, and more broadly, assist in the success of a strategic plan addressing mental health in classrooms. This study aims to assess mental health literacy among teachers with a focus on their knowledge of depression. Methods: The study was a cross-sectional descriptive survey conducted among teachers in five secondary schools (high school) in southeast Nigeria. All consenting teachers were recruited, making a total of 120 participants. The participants were presented with a questionnaire designed to elicit the participants' recognition of the disorder depicted in two vignettes and their recommendation about the appropriate source of help seeking. One vignette was of a clinically depressed case while the other vignette was about a girl undergoing a normal life crisis. Results: Of the 120 teachers recruited into the study, 104 adequately completed questionnaires were returned, indicating a response rate of 86.7%. A total of 16.3% (n = 17) of participants correctly identified and labeled the depression vignette. Only 14 teachers (13.5%) recommended professional help from a psychiatrist or psychologist. Diminished ability to concentrate was the most frequently identified symptom of depression (30.8%). Counsellors were the most recommended source of help. Discussion: Mental health literacy was poor among the teachers surveyed. There is an urgent need to improve mental health literacy among teachers in Nigeria. abstract_id: PUBMED:35210947 Mental Health of Teachers in Bosnia and Herzegovina in the Time of COVID-19 Pandemics.
Background: The pandemic of COVID-19 has affected all spheres of life, including education. Teachers at all levels were faced with numerous challenges during the pandemic. These challenges had an impact on their mental health. Objective: The goal of the present study was to examine the depression, anxiety, and stress levels in teachers in Bosnia and Herzegovina. Methods: The sample for this study consisted of 559 teachers (471 female teachers and 88 male teachers). We used the Depression, Anxiety, and Stress Scale (DASS 21) to measure teachers' emotional states of depression, anxiety, and stress. Results: The findings of this study clearly indicate the high levels of depression, anxiety, and stress in teachers. We also identified that levels of support provided by family members and school administration served as protective factors in the time of crisis. Conclusion: Teachers in Bosnia and Herzegovina have a high prevalence of elevated depression, anxiety, and stress levels. The article concludes with some recommendations on how to improve the mental health of teachers. abstract_id: PUBMED:32864296 MENTAL HEALTH IN KENYAN SCHOOLS: TEACHERS' PERSPECTIVES. Introduction: This qualitative study, conducted in public primary and secondary schools, sought teachers' perceptions of mental health concerns that are relevant in school settings. Based on the phenomenological theory, the study aimed to understand the teachers experiences of mental health problems in the schools and how they handled them. Method: The schools sampled represented rural, suburban and urban sections of Kiambu County in Kenya. Data were collected through Focus Group Discussions (FGDs). The researcher made summary notes from both audio taped interviews and notes made by the research assistants and summarized the major themes. Results: Teachers reported that they were aware that students suffered from mental health problems. They recognized learning difficulties, externalizing problems, internalizing problems, bizarre behavior, and problem substance use among students. Teachers reported that lack of skills and time were challenges in dealing with student mental health problems. Conclusion: Teachers perceive presence of mental health problems among the students. There is need for in- service training for identification and referral and that school psychologists be employed to deal with student mental health problems. abstract_id: PUBMED:27004061 Teachers' knowledge about oral health and their interest in oral health education in Hail, Saudi Arabia. Objectives: To assess the dental health knowledge and the interest of secondary school teachers in imparting oral health education in Hail, Saudi Arabia. Methods: It was a questionnaire based cross-sectional survey of secondary school teachers in Hail, Saudi Arabia, carried out from November 2014 to January 2015. A validated self-administered questionnaire was used to determine teachers' oral health knowledge and their interest in participating in oral health education of school children. Data analysis was performed using SPSS version 20 statistical software. Results: Two hundred and twenty three secondary school teachers responded to the survey. Results showed that about 80 to 90 % of teachers had sufficient knowledge of causes and prevention of dental caries and gingivitis. About 94% of teachers agreed that they can play an effective role in oral health promotion while 96% were found to be interested in performing additional duty as oral health promoter. 
A large majority (91.9 %) had the opinion that oral health education must be included in school curriculum. Conclusion: Teachers in Hail region had adequate amount of knowledge regarding oral health, and they were interested to play their role in promoting oral health education. Based on the findings of this study, it is recommended to include dental health education in curriculum at secondary school level and to provide sufficient training to teachers to enable them to participate actively in oral health promotion activities. abstract_id: PUBMED:32509906 Effectiveness of an oral health training program for school teachers in India: An interventional study. Introduction: Schools are a valuable platform for promoting oral health through oral health education as the children spend most of their active time in schools. Training school teachers on oral health promotion will help to inculcate healthy oral habits in children during their formative years of life. Objectives: The objective of this study was to assess the knowledge, attitude, approach, and action change of school teachers toward oral health and the impact of this training intervention in improving their knowledge. Materials And Methods: An interventional study was conducted among 50 primary school teachers across the country selected by the Ministry of Human Resource Development. A self-administered, 28-item questionnaire in Google document format was developed to evaluate the knowledge and practice of teachers toward oral hygiene before and after the teachers' training program. The training was done using a validated training manual on oral health promotion for school teachers developed by the Ministry of Health and Family Welfare. Needs assessment for training was conducted 1 week before this training program. Statistical Analysis: Wilcoxon signed-rank test and Mc Nemar tests were used to assess the difference between the scores before and after oral health education. Results: The needs assessment revealed that majority of the teachers felt the need to participate in oral health promotion training. A significant increase (P < 0.001) in mean knowledge scores of school teachers was seen after a 1-day training program. Conclusion: The training improved the knowledge of school teachers on oral health which indicates that the adopted method of oral health education was well received by the participants from all over the country. abstract_id: PUBMED:36141999 Teachers' Health: How General, Mental and Functional Health Indicators Compare to Other Employees? A Large French Population-Based Study. Teachers' health is a key factor of any successful education system, but available data are conflicting. To evaluate to what extent teachers' health could be at risk, we used pre-pandemic data from the CONSTANCES population-based French cohort (inclusion phase: 2012-2019) and compared teachers (n = 12,839) included in the cohort with a random subsample selected among all other employees (n = 32,837) on four self-reported health indicators: perceived general health, depressive symptoms (CES-D scale), functional limitations in the last six months, and persistent neck/back troubles (Nordic questionnaire). We further restricted our comparison group to the State employees (n = 3583), who share more occupational similarities with teachers. Lastly, we focused on teachers and evaluated how their health status might differ across teaching levels (primary, secondary, and higher education). 
As compared to non-teacher employees, and even after adjusting for important demographic, socioeconomic, lifestyle, and occupational confounders, teachers were less likely to report bad perceived health and depressive symptoms but were more likely to present functional limitations. Trends were similar in the analyses restricted to State employees. Within the teaching population, secondary school teachers were more likely to report depressive symptoms but less frequently declared persistent neck/back troubles than primary school teachers. Our descriptive cross-sectional study based on a probability sampling procedure (secondary use of CONSTANCES inclusion data) did not support the idea that teachers' health in France was particularly at risk in the pre-pandemic period. Both cross-cultural and longitudinal studies are needed to further gain information on the topic of teachers' health around the world and to monitor its evolution over time, particularly during crises impacting the education system such as the COVID-19 pandemic. abstract_id: PUBMED:37082418 Predictors of teachers' mental health - implications for practice. Purpose: To identify teachers' mental health predictors of stress, anxiety and depression within the sociodemographic, health-related, work-related and COVID-19-related factors. Methods: Between March 3 and April 11, 2021 the cross-sectional national online survey of Polish teachers of all educational levels was conducted. The semi-structured questionnaire used in the study included demographic data (place of residence, gender, age, marital status, education), exposure to people infected with COVID-19, critical life events and pandemic-related stressors, health variables, work-related characteristics (duration of remote teaching), as well as DASS-21. Results: In 2,757 completed responses, 86% of the respondents were female and the mean age was 46.14 (SD = 9.35); 21.3% had been previously diagnosed with COVID-19; 38.8% of the respondents (n = 1,069) were employed in primary schools (grades 4-8) and 34% worked in the secondary level education. The mean years of work experience in the sample was 20.94 (SD = 10.60). At the moment of completing the survey, most of the respondents (n = 1,488; 54%) were working on-site, 24% were still working remotely and 22.1% were engaged in the hybrid model. Sociodemographic variables, mental and physical illness, fear of the negative consequences of COVID-19, and the deaths of a family member or friend due to COVID-19 were significant predictors of stress, anxiety, and depression. Stress and depression were also predicted by remote working methods and working in a very large city. The cases of COVID-19 in groups of co-workers or students predicted only stress. COVID-19 deaths of co-workers/students/parents of students and COVID-19 diagnosis predicted anxiety. Conclusions: The most vulnerable group of teachers turned out to consist in young women, working in a big city, mostly teaching remotely. Sustainable support and care systems adapted to the teachers' needs should be developed. Also some essential predictors of mental health deterioration among teachers should be considered when designing long-term prevention strategies. Answer: Yes, there are distinctions in health behaviors for teachers, and various factors influence these behaviors. Health literacy is a major contributor to health-promoting behaviors among Korean teachers, with health literacy being the strongest predictor of such behaviors. 
Situational factors like school type and school culture also play a role, particularly in relation to interpersonal relations and stress management (PUBMED:33806812). Additionally, teachers' happiness and self-esteem significantly impact their health, with happiness at work partially mediating the relationship between dispositional happiness and teacher health, and fully mediating the relationship between self-esteem and teacher health (PUBMED:31736838). Professional pressure is another factor that affects college teachers' mental health, with different levels of pressure experienced based on factors such as academic discipline, gender, marital status, and seniority (PUBMED:34120934). Mental health literacy among Nigerian teachers is found to be poor, indicating a need for improved mental health literacy in the teaching profession (PUBMED:30175891). The COVID-19 pandemic has also had a significant impact on teachers' mental health, with high levels of depression, anxiety, and stress reported among teachers in Bosnia and Herzegovina (PUBMED:35210947). In Kenya, teachers perceive the presence of mental health problems among students but report a lack of skills and time to address these issues effectively (PUBMED:32864296). Teachers in Hail, Saudi Arabia, have shown adequate knowledge regarding oral health and expressed interest in promoting oral health education (PUBMED:27004061). An oral health training program for school teachers in India demonstrated that such interventions can significantly improve teachers' knowledge and practices toward oral hygiene (PUBMED:32509906). Comparative studies have shown that teachers' health may not be at greater risk than other employees, but there are differences in health indicators such as perceived general health, depressive symptoms, functional limitations, and persistent neck/back troubles (PUBMED:36141999). Lastly, predictors of teachers' mental health include sociodemographic, health-related, work-related, and COVID-19-related factors, with young female teachers working in big cities and teaching remotely being particularly vulnerable (PUBMED:37082418).
Instruction: Is knee laxity change after ACL injury and surgery related to open kinetic chain knee extensor training load? Abstracts: abstract_id: PUBMED:19620951 Is knee laxity change after ACL injury and surgery related to open kinetic chain knee extensor training load? Objective: The purpose of this study was to evaluate whether knee anterior laxity changes after anterior cruciate ligament injury and surgery are related to aspects of thigh muscle resistance training during rehabilitation. Design: Forty-nine subjects (13 females) diagnosed with an anterior cruciate ligament-deficient knee or who had undergone anterior cruciate ligament reconstructive surgery participated in this study. The subjects trained their knee extensors in the open kinetic chain during a 6-wk program, and the relationship of aspects of training (for example, absolute resistance load) and other factors to anterior laxity change during this period were analyzed using linear regression analysis. Results: The only factor found to be significantly related (r = -0.347) to anterior knee laxity change was average absolute load used in training the knee extensors. Conclusions: These results offer some early clinical support for increasing the strain on the anterior cruciate ligament graft (in patients treated with reconstruction) or other passive restraints to anterior tibial displacement, during rehabilitation after anterior cruciate ligament injury and reconstruction surgery to promote decreased knee anterior laxity. abstract_id: PUBMED:15678299 Effects of closed versus open kinetic chain knee extensor resistance training on knee laxity and leg function in patients during the 8- to 14-week post-operative period after anterior cruciate ligament reconstruction. Open kinetic chain (OKC) knee extensor resistance training has lost favour in ACLR rehabilitation due to concerns that this exercise is harmful to the graft and will be less effective in improving function. In this randomized, single-blind clinical trial OKC and closed kinetic chain (CKC) knee extensor training were compared for their effects on knee laxity and function in the middle period of ACLR rehabilitation. The study subjects were 49 patients recovering from ACLR surgery (37 M, 12 F; mean age=33 years). Tests were carried out at 8 and 14 weeks after ACLR with knee laxity measured using a ligament arthrometer and function with the Hughston Clinic knee self-assessment questionnaire and single leg, maximal effort jump testing (post-test only). Between tests, subjects trained using either OKC or CKC resistance of their knee and hip extensors as part of formal physical therapy sessions three times per week. No statistically significant (one-way ANOVA, p>0.05) differences were found between the treatment groups in knee laxity or leg function. OKC and CKC knee extensor training in the middle period of rehabilitation after ACLR surgery do not differ in their effects on knee laxity or leg function. Exercise dosages are described in this study and further research is required to assess whether the findings in this study are dosage specific. abstract_id: PUBMED:11147152 Effects of open versus closed kinetic chain training on knee laxity in the early period after anterior cruciate ligament reconstruction. 
Knee extensor resistance training using open kinetic chain (OKC) exercise for patients recovering from anterior cruciate ligament reconstruction (ACLR) surgery has lost favour mainly because of research indicating that OKC exercise causes greater ACL strain than closed kinetic chain (CKC) exercise. In this prospective, randomized clinical trial the effects of these two regimes on knee laxity were compared in the early period after ACLR surgery. Thirty-six patients recovering from ACLR surgery (29 males, 7 females; age mean = 30) were tested at 2 and 6 weeks after ACLR with knee laxity measured using the Knee Signature System arthrometer. Between tests, subjects trained using either OKC or CKC resistance of their knee and hip extensors in formal physical therapy sessions three times per week. Following adjustment for site of treatment, pretraining injured knee laxity, and untreated knee laxity at post-training, the use of OKC exercise, when compared to CKC exercise, was found to lead to a 9% increase in looseness with a 95% confidence interval of -8% to +29%. These results indicate that the great concern about the safety of OKC knee extensor training in the early period after ACLR surgery may not be well founded. abstract_id: PUBMED:35232546 In-vivo tibiofemoral kinematics of the normal knee during closed and open kinetic chain exercises: A comparative study of box squat and seated knee extension. A rehabilitation program after anterior cruciate ligament reconstruction is of great importance to obtain a satisfactory prognosis after surgery. However, there is still an ongoing debate over whether closed kinetic chain or open kinetic chain exercises should be chosen. Our study was designed to compare the in vivo tibiofemoral kinematics during closed kinetic chain and open kinetic chain exercises. Eighteen healthy volunteers were asked to perform a box squat and unloaded/10 kg-loaded seated knee extension. In vivo 3-dimensional tibiofemoral kinematics during the different motions were determined using a dual fluoroscopic imaging system. The study found significantly more tibial anterior displacement during loaded seated knee extension than during unloaded seated knee extension from 25°-50° of knee flexion (p ≤ 0.031). The knees exhibited significantly more internal tibial rotation and lateral tibial translation during the box squat than both seated knee extensions during mid-flexion. In addition, the knees showed less internal-external (IE) range of motion (ROM) from 20°-75° of flexion (p < 0.001) and medial-lateral (ML) ROM from 75° to full extension (p ≤ 0.006) during the box squat than during both extensions. This knowledge may help optimize rehabilitation plans for patients post ACL reconstruction. abstract_id: PUBMED:28144404 Knee Laxity Variations in the Menstrual Cycle in Female Athletes Referred to the Orthopedic Clinic. Background: Anterior cruciate ligament (ACL) rupture is the biggest concern for orthopedic surgeons who are involved in sports injuries, so most ACL reconstruction surgeries are sports related. ACL injuries in female athletes are 2-8 times more common than in male athletes in similar sport injuries. Objectives: The aim of this study was to compare knee laxity changes in the menstrual cycle in female athletes referred to the orthopedic clinic of Imam Khomeini hospital in the north of Iran, Sari, 2013. Patients And Methods: The present descriptive study was conducted on 40 female athletes who were referred to the orthopedic clinic.
Hormone levels, such as estrogen and progesterone, were assessed by one laboratory in 3 phases of the menstrual cycle. We used the Lachman test and anterior drawer test to assess knee laxity. Descriptive statistics were calculated as indices of central tendency and dispersion (mean ± SD), and relative frequency distribution was used for qualitative variables. Results: The results of the current study showed that there is no significant difference in ACL laxity in female athletes in three phases of the menstrual cycle, namely menstruation time, ovulation time and the mid-luteal phase. Conclusions: Despite numerous studies and research in the field of knee laxity and effects of female hormones, many researchers do not agree about the effect of female hormones on knee laxity. The current study also reported no relationship between female hormones and knee laxity, while statistics show a fundamental difference between male and female athletes. abstract_id: PUBMED:36368151 Bilateral changes in knee joint laxity during the first year after non-surgically treated anterior cruciate ligament injury. Objectives: To analyse changes in knee laxity between 3, 6, 12 and 24 months after non-surgically treated ACL injury and to analyse associations between knee laxity and knee function, self-reported knee stability, ACL-Return to Sport after Injury (ACL-RSI), fear and confidence at different timepoints during recovery. Design: Prospective cohort study. Participants: 125 patients (67 males, mean age 25.0 ± 7.0 years) with acute ACL injury. Main Outcome: Laxity was measured using the KT-1000 arthrometer. Self-reported knee function was assessed using the International Knee Documentation Committee Subjective Knee Form (IKDC-SKF). Confidence and fear were assessed with questions from the ACL-RSI scale. Subjective knee stability was assessed using SANE. Results: Knee laxity increased bilaterally from 3 to 12 months, and in the non-involved knee from 3 to 24 months (p < 0.05), although the mean change was below 1 mm. Side-to-side difference in knee laxity was correlated with IKDC-SKF (r = -0.283) and knee stability in rehabilitation/sport activities (r = -0.315) at 6 months, but not with confidence/fear. Conclusion: Knee laxity increased bilaterally during the first year after non-surgically treated ACL injury, though the mean change in knee laxity was below 1 mm and the clinical significance is unknown. Knee laxity was weakly associated with knee function and perceived knee stability. Level Of Evidence: Level II. Trial Registration: NCT02931084. abstract_id: PUBMED:28712025 Side-to-side asymmetries in landing mechanics from a drop vertical jump test are not related to asymmetries in knee joint laxity following anterior cruciate ligament reconstruction. Purpose: Asymmetries in knee joint biomechanics and increased knee joint laxity in patients following anterior cruciate ligament reconstruction (ACLR) are considered risk factors for re-tear or early onset of osteoarthritis. Nevertheless, the relationship between these factors has not been established. The aim of the study was to compare knee mechanics during landing from a bilateral drop vertical jump in patients following ACLR and control participants and to study the relationship between side-to-side asymmetries in landing mechanics and knee joint laxity. Methods: Seventeen patients following ACLR were evaluated and compared to 28 healthy controls.
Knee sagittal and frontal plane kinematics and kinetics were evaluated using three-dimensional motion capture (200 Hz) and two synchronized force platforms (1000 Hz). Static anterior and internal rotation knee laxities were measured for both groups and legs using dedicated arthrometers. Group and leg differences were investigated using a mixed model analysis of variance. The relationship between side-to-side differences in sagittal knee power/energy absorption and knee joint laxities was evaluated using univariate linear regression. Results: A significant group-by-leg interaction (p = 0.010) was found for knee sagittal plane energy absorption, with patients having 25% lower values in their involved compared to their non-involved leg (1.22 ± 0.39 vs. 1.62 ± 0.40 J kg-1). Furthermore, knee sagittal plane energy absorption was 18% lower at their involved leg compared to controls (p = 0.018). Concomitantly, patients demonstrated a 27% higher anterior laxity of the involved knee compared to the non-involved knee, with an average side-to-side difference of 1.2 mm (p < 0.001). Laxity of the involved knee was also 30% higher than that of controls (p < 0.001) (leg-by-group interaction: p = 0.002). No relationship was found between sagittal plane energy absorption and knee laxity. Conclusions: Nine months following surgery, ACLR patients were shown to employ a knee unloading strategy of their involved leg during bilateral landing. However, this strategy was unrelated to their increased anterior knee laxity. Side-to-side asymmetries during simple bilateral landing tasks may put ACLR patients at increased risk of second ACL injury or early-onset osteoarthritis development. Detecting and correcting asymmetric landing strategies is highly relevant in the framework of personalized rehabilitation, which calls for complex biomechanical analyses to be applied in clinical routine. Level Of Evidence: III. abstract_id: PUBMED:35333934 A high level of knee laxity after anterior cruciate ligament reconstruction results in high revision rates. Purpose: The literature indicates a lack of consensus on the correlation between knee laxity after anterior cruciate ligament reconstruction (ACLR) and subjective clinical outcomes and the need for revision surgery. Therefore, using high-volume registry data, this study aimed to describe the relationship between objective knee laxity after ACLR and subjective symptom and functional assessments and the need for revision surgery. The hypothesis was that greater postoperative knee laxity would correlate with inferior patient-reported outcomes and a higher risk for revision surgery. Methods: In this study, 17,114 patients in the Danish knee ligament reconstruction registry were placed into three groups on the basis of objective side-to-side differences in sagittal laxity one year after surgery: group A (≤ 2 mm), Group B (3-5 mm) and Group C (> 5 mm). The main outcome measure was revision rate within 2 years of primary surgery, further outcome measures were the knee injury and osteoarthritis outcome score (KOOS) as well as Tegner activity score. Results: The study found the risk for revision surgery was more than five times higher for Group C [hazard ratio (HR) = 5.51] than for Group A. The KOOS knee-related Quality of Life (QoL) sub-score exhibited lower values when comparing Groups B or C to Group A. In addition, the KOOS Function in Sport and Recreation (Sport/Rec) sub-score yielded lower values for groups B and C in comparison with Group A. 
Conclusion: These results indicate that increased post-operative sagittal laxity is correlated with an increased risk for revision surgery and might correlate with poorer knee-related QoL, as well as a decreased function in sports. The clinical relevance of the present study is that high knee laxity at 1-year follow-up is a predictor of the risk of revision surgery. Level Of Evidence: III. abstract_id: PUBMED:27480978 Effect of High-Grade Preoperative Knee Laxity on Anterior Cruciate Ligament Reconstruction Outcomes. Background: Knee laxity in the setting of suspected anterior cruciate ligament (ACL) injury is frequently assessed through physical examination using the Lachman, pivot-shift, and anterior drawer tests. The degree of laxity noted on these examinations may influence treatment decisions and prognosis. Hypothesis: Increased preoperative knee laxity would be associated with increased risk of subsequent revision ACL reconstruction and worse patient-reported outcomes 2 years postoperatively. Study Design: Cohort study; Level of evidence, 2. Methods: From an ongoing prospective cohort study, 2333 patients who underwent primary isolated ACL reconstruction without collateral or posterior cruciate ligament injury were identified. Patients reported by the operating surgeons as having an International Knee Documentation Committee (IKDC) grade D for Lachman, anterior drawer, or pivot-shift examination were classified as having high-grade laxity. Multiple logistic regression modeling was used to evaluate whether having high-grade preoperative laxity was associated with increased odds of undergoing revision ACL reconstruction within 2 years of the index procedure, controlling for patient age, sex, Marx activity level, level of competition, and graft type. Multiple linear regression modeling was used to evaluate whether having high-grade preoperative laxity was associated with worse IKDC score or Knee injury and Osteoarthritis Outcome Score Knee-Related Quality of Life subscale (KOOS-QOL) scores at a minimum 2 years postoperatively, controlling for baseline score, patient age, ethnicity, sex, body mass index, marital status, smoking status, sport participation, competition level, Marx activity rating score, graft type, and articular cartilage and meniscus status. Results: Pre-reconstruction laxity data were available for 2325 patients (99.7%). Two-year revision data were available for 2259 patients (96.8%), and patient-reported outcomes were available for 1979 patients (84.8%). High-grade preoperative laxity was noted in 743 patients (31.9%). The mean postoperative IKDC score was 81.8 ± 15.9, and the mean KOOS-QOL score was 72.0 ± 22.0. The presence of high-grade pre-reconstruction laxity was associated with significantly increased odds of ACL graft revision (odds ratio [OR] = 1.87 [95% CI, 1.19-2.95]; P = .007). The presence of high-grade pre-reconstruction laxity was not associated with any difference in postoperative IKDC (β = -0.56, P = .44) or KOOS-QOL (β = 0.04, P = .97). Conclusion: The presence of high-grade pre-reconstruction knee laxity as assessed by manual physical examination under anesthesia is associated with significantly increased odds of revision ACL surgery but has no association with patient-reported outcome scores at 2 years after ACL reconstruction. abstract_id: PUBMED:32646403 The outcomes of one-stage treatment for multiple knee ligament injuries combined with extensor apparatus rupture. 
Background: Multiple knee ligament injuries combined with extensor apparatus rupture are serious and complex knee injuries that are rare in clinical practice. The management is extremely challenging and controversial. The aim of this study is to describe a patient collective with multiple knee ligament injuries combined with extensor apparatus injuries in detail and to report the mid-term outcomes of a one-stage surgical treatment regarding subjective outcome scores, complications, knee instability, and ROM. Methods: Eleven of 425 patients with multiple knee ligament injuries combined with extensor apparatus injuries admitted to our hospital were reviewed from July 2008 to May 2017. All patients underwent one-stage repair and reconstruction of multiple knee ligaments and extensor apparatus. The Lysholm knee score and the International Knee Documentation Committee (IKDC) score were adopted to evaluate the surgical effect preoperatively and at a minimum of 2 years' follow-up. Clinical data, including range of motion and knee stability, were also recorded at the final follow-up. Results: Ten patients were followed up with a mean time of 40 (range, 24-60) months. At the last follow-up, 8 patients had joint flexion range of motion greater than or equal to120 degrees, 2 patients had joint flexion range of motion of 100-120 degrees, and 1 patient had active knee extension limitation of 5 degrees. Stress radiographs showed that the mean differences in posterior displacement were reduced from 10.8 ± 3.0 mm preoperatively to 2.0 ± 2.5 mm at the last follow-up. There were significant improvements in stress radiographs from pre- to postoperative states for all patients with multiple knee ligament injuries. The Lysholm score ranged from 85 to 96, with a mean of 92.1 (compared with 33 before surgery, P < 0.05). The final IKDC scores were A in 2 patients (20%), B in 7 (70%), and C in 1 (10%). Nine of the 10 patients (90%) returned to their former activity level. Conclusion: Multiple knee ligament injuries combined with extensor apparatus rupture are rare. Single-stage management of the repair and reconstruction of multiple knee ligaments and extensor apparatus with proper rehabilitation is an effective and reliable procedure to restore knee stability and function. Level Of Evidence: Level IV, therapeutic case series. Answer: Yes, knee laxity change after ACL injury and surgery is related to open kinetic chain knee extensor training load. A study found that the only factor significantly related to anterior knee laxity change was the average absolute load used in training the knee extensors (r = -0.347), suggesting that increasing the strain on the anterior cruciate ligament graft or other passive restraints to anterior tibial displacement during rehabilitation after ACL injury and reconstruction surgery could promote decreased knee anterior laxity (PUBMED:19620951).
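To illustrate the kind of analysis described in PUBMED:19620951, which related average knee-extensor training load to anterior laxity change by correlation and linear regression (r = -0.347), the following is a minimal Python sketch. The values are hypothetical placeholders, not data from any of the cited studies.

# Minimal sketch (hypothetical data): is average training load linearly related
# to anterior laxity change, as tested in the regression analysis above?
import numpy as np
from scipy import stats

# Hypothetical per-patient values; illustrative only.
training_load_kg = np.array([10, 15, 20, 25, 30, 35, 40, 45, 50, 55])           # average absolute load
laxity_change_mm = np.array([1.2, 1.0, 0.9, 0.7, 0.6, 0.4, 0.5, 0.2, 0.1, 0.0])  # anterior laxity change

r, p_value = stats.pearsonr(training_load_kg, laxity_change_mm)
slope, intercept, _, _, _ = stats.linregress(training_load_kg, laxity_change_mm)

print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")
print(f"Fitted line: laxity_change = {intercept:.2f} + {slope:.3f} * load")

A negative slope in such a fit would correspond to the reported finding that higher training load was associated with a decrease (or smaller increase) in anterior laxity.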
Instruction: Is overweight/obesity associated with short sleep duration in older women? Abstracts: abstract_id: PUBMED:34338629 Self-reported and actigraphic short sleep duration in older adults. Study Objectives: Persons > 65 years with short sleep duration (≤ 6 hours) are at risk for adverse outcomes, but the accuracy of self-reported sleep duration may be affected by reduced symptom awareness. We evaluated the performance characteristics of self-reported vs objectively measured sleep duration in this age group. Methods: In 2,980 men from the Osteoporotic Fractures in Men Sleep Study and 2,855 women from the Study of Osteoporotic Fractures, we examined the agreement and accuracy of self-reported vs actigraphy-measured short and normal (> 6 but < 9 hours) sleep duration. We evaluated associations of select factors (demographics; medical, physical, and neuropsychiatric conditions; medication and substance use; and sleep-related measures) with risk of false-negative (normal sleep duration by self-report but short sleep duration by actigraphy) and false-positive (short sleep duration by self-report and normal sleep duration by actigraphy) designations, respectively, using logistic regression. Results: Average ages were 76.3 ± 5.5 and 83.5 ± 3.7 years in men and women, respectively. There was poor agreement between self-reported and actigraphic sleep duration (kappa ≤ 0.24). False negatives occurred in nearly half and false positives in over a quarter of older persons. In multivariable models in men and women, false negatives were independently associated with obesity, daytime sleepiness, and napping, while false positives were significantly lower with obesity. Conclusions: Under- and overreporting of short sleep is common among older persons. Reliance on self-report may lead to missed opportunities to prevent adverse outcomes or unnecessary interventions. Self-reported sleep duration should be objectively confirmed when evaluating the effect of sleep duration on health outcomes. Citation: Miner B, Stone KL, Zeitzer JM, et al. Self-reported and actigraphic short sleep duration in older adults. J Clin Sleep Med. 2022;18(2):403-413. abstract_id: PUBMED:17726359 Is overweight/obesity associated with short sleep duration in older women? Background And Aim: No study to date has documented the association between short sleep duration and the risk for obesity in older people. Therefore, the aim of this study was to examine cross-sectional associations between short sleep duration and variations in body fat indices in older women. Methods: Anthropometric and body composition measurements, resting energy expenditure, daily energy expenditure, daily energy intake, plasma lipid-lipoprotein profile, and self-reported sleep duration were determined in a sample of 90 women of 50 years and above. Results: The odds ratios for overweight/obesity were comparable in subjects reporting <7 hours and ≥7 hours of sleep per day, with or without adjustment for age, daily energy expenditure and daily energy intake. The results did not show any significant difference between the two sleeper groups for any of the variables investigated. The correlations between sleep duration and adiposity indices were also not significant. Conclusions: Short sleep duration does not predict an increased risk of being overweight/obese in older women.
This observation, together with our previously reported results in younger subjects, suggests that the sleep-body fat relationship progressively becomes less detectable with increasing age. abstract_id: PUBMED:36130143 Sleep duration, plasma metabolites, and obesity and diabetes: a metabolome-wide association study in US women. Short and long sleep duration are associated with adverse metabolic outcomes, such as obesity and diabetes. We evaluated cross-sectional differences in metabolite levels between women with self-reported habitual short (<7 h), medium (7-8 h), and long (≥9 h) sleep duration to delineate potential underlying biological mechanisms. In total, 210 metabolites were measured via liquid chromatography-mass spectrometry in 9207 women from the Nurses' Health Study (NHS; N = 5027), the NHSII (N = 2368), and the Women's Health Initiative (WHI; N = 2287). Twenty metabolites were consistently (i.e., raw p < .05 in ≥2 cohorts) and/or strongly (FDR-adjusted p < .05 in at least one cohort) associated with short sleep duration after multivariable adjustment. Specifically, levels of two lysophosphatidylethanolamines, four lysophosphatidylcholines, hydroxyproline and phenylacetylglutamine were higher compared to medium sleep duration, while levels of one diacylglycerol and eleven triacylglycerols (TAGs; all with ≥3 double bonds) were lower. Moreover, enrichment analysis assessing associations of metabolites with short sleep based on biological categories demonstrated significantly increased acylcarnitine levels for short sleep. A metabolite score for short sleep duration based on 12 LASSO-regression selected metabolites was not significantly associated with prevalent and incident obesity and diabetes. Associations of single metabolites with long sleep duration were less robust. However, enrichment analysis demonstrated significant enrichment scores for four lipid classes, all of which (most markedly TAGs) were of opposite sign to the scores for short sleep. Habitual short sleep exhibits a signature on the human plasma metabolome which is different from medium and long sleep. However, we could not detect a direct link of this signature with obesity and diabetes risk. abstract_id: PUBMED:36524599 Insomnia with objective short sleep duration in community-living older persons: A multifactorial geriatric health condition. Background: Insomnia or poor sleep quality with objective short sleep duration (hereafter referred to as ISSD) has been identified as a high-risk phenotype among middle-aged persons. We evaluated the prevalence and clinical correlates of ISSD among community-living older persons. Methods: In 3053 men from the Osteoporotic Fractures in Men Sleep Study (MrOS; average age 76.4 ± 5.5 years) and 3044 women from the Study of Osteoporotic Fractures (SOF; average age 83.6 ± 3.8 years), we evaluated the prevalence of ISSD (trouble getting to sleep within 30 minutes, waking up in the middle of the night or early morning, and/or taking a medication to help with sleep ≥3 times per week and actigraphy-estimated sleep duration <6 h). Using separate logistic regression models in men and women, we evaluated the cross-sectional associations between predisposing, precipitating, and perpetuating factors for ISSD, as compared with normal sleep (no insomnia and actigraphy-estimated sleep duration of 6-9 h). Results: Overall, 20.6% of older men and 12.8% of older women had insomnia with short sleep duration.
Multiple predisposing, precipitating, and perpetuating factors were cross-sectionally associated with ISSD in both men and women. In multivariable models that adjusted for predisposing factors (demographics, multimorbidity, obesity), precipitating (depression, anxiety, central nervous system-active medication use, restless legs syndrome) and perpetuating (napping, falls) factors were significantly associated with ISSD in men and women (adjusted odds ratios ranging from 1.63 to 4.57). Conclusions: In this cross-sectional study of community-living older men and women, ISSD was common and associated with multiple predisposing, precipitating, and perpetuating factors, akin to a multifactorial geriatric health condition. Future work should examine causal pathways and determine whether the identified correlates represent modifiable risk factors. abstract_id: PUBMED:36755902 Habitual night sleep duration is associated with general obesity and visceral obesity among Chinese women, independent of sleep quality. Purpose: Research on the relationship between sleep duration and obesity defined using multiple anthropometric and bioelectrical indices in women remains scarce. We aimed to explore the association between sleep duration and body mass index (BMI), waist-hip ratio (WHR), body fat percentage (PBF) and visceral fat area (VFA) among females. Methods: We recruited women for medical examination using multistage cluster sampling. Sleep was assessed using the Pittsburgh Sleep Quality Index (PSQI), and sleep duration was categorized into short (<7 h), optimal (7 to <9 h) and long sleep (≥9 h). Weight and height were measured using a calibrated stadiometer. Waist circumference was manually measured. PBF and VFA were estimated by bioelectrical impedance analysis. Data on sociodemographic characteristics and lifestyle factors were also collected and included in the logistic regression models to explore the independent association between sleep duration and obesity defined by different indices. Results: A total of 7,763 women with a mean age of 42.6 ± 13.5 years were included. The percentage of women reporting short and long sleep was 10.3% and 13.4%, respectively. The mean BMI, WHR, PBF and VFA were 23.07 ± 3.30 kg/m2, 0.78 ± 0.06, 32.23 ± 6.08% and 91.64 ± 35.97 cm2, respectively. Short sleep was independently associated with 35% increased odds of general obesity (BMI ≥ 28 kg/m2; OR 1.35, 95% CI: 1.05-1.75), and long sleep was associated with 18% increased odds of visceral obesity (VFA > 100 cm2; OR 1.18, 95% CI: 1.01-1.37). No association was observed between sleep deprivation or excessive sleep and high WHR or high PBF. Conclusion: In women, short sleep was associated with an increased odds of general obesity, whereas long sleep was associated with an increased odds of visceral obesity. Longitudinal observations are needed to confirm this cross-sectional relationship. abstract_id: PUBMED:24127149 Short duration of sleep is associated with hyperleptinemia in Taiwanese adults. Background: Higher plasma levels of leptin have been associated with increased cardiometabolic risk. The aim of this study was to investigate the association between short duration of sleep and hyperleptinemia in Taiwanese adults. Methods: We examined the association between duration of sleep and hyperleptinemia in 254 men and women recruited from the physical examination center at a regional hospital in southern Taiwan. Hyperleptinemia was defined as a plasma leptin level of 8.13 ng/mL and above. Short sleep duration was defined as < 6.5 h/day.
Multiple logistic regression analysis was used to assess the association between short duration of sleep and hyperleptinemia. Results: In females, short duration of sleep (< 6.5 h/day; OR = 2.15, 95% CI = 0.99-4.78), greater hip circumference (OR = 3.00, CI = 1.13-8.78), higher percent body fat (OR = 1.75, CI = 1.07-2.95), and higher white blood cell counts (OR = 1.67, CI = 1.26-2.28) were associated with an increased risk of hyperleptinemia. In males, greater body weight was significantly associated with an increased risk of hyperleptinemia (OR = 3.55, 95% CI = 1.46-10.23). There was also a trend of association (p = 0.096) between short duration of sleep and an increased risk of hyperleptinemia (OR = 4.98, 95% CI = 0.80-42.40). Conclusions: In this study of healthy Taiwanese adults, short duration of sleep was significantly associated with hyperleptinemia in women, and the association was independent of adiposity. abstract_id: PUBMED:30621835 Short Sleep Duration Is Associated With Increased Serum Homocysteine: Insights From a National Survey. Study Objectives: Both short sleep duration and increased serum homocysteine levels are associated with cardiovascular events. However, research on the relationship between sleep duration and serum homocysteine levels is sparse. The aim of this study is to examine the association between sleep duration and serum homocysteine levels from a national database. Methods: In total, 4,480 eligible participants older than 20 years who had serum homocysteine data and reported sleep duration were enrolled from the US National Health and Nutrition Examination Survey of 2005 to 2006. The association between sleep duration and serum homocysteine levels was analyzed using multivariate regression models for covariate adjustment. Results: Serum homocysteine level was lowest in individuals with a sleep duration of 7 hours and increased in those with both shorter and longer self-reported total sleep time (groups were categorized into ≤ 5 hours, 6 hours, 7 hours, 8 hours, and ≥ 9 hours). After adjustment for covariates, those in the group sleeping ≤ 5 hours had significantly higher serum homocysteine levels than the reference group (sleep duration of 7 hours). In subgroup analyses by sex, body mass index (BMI), and ethnicity, the association between short sleep duration (≤ 5 hours) and higher serum homocysteine levels persisted in women, individuals with obesity (BMI ≥ 30 kg/m2), and non-Hispanic whites. Conclusions: This study highlighted that short sleep duration was associated with higher serum homocysteine levels in women, individuals with obesity (BMI ≥ 30 kg/m2), and non-Hispanic whites; this finding might suggest increased vulnerability to cardiovascular risk or other atherothrombotic events in these groups in the context of short sleep. abstract_id: PUBMED:37081433 Short sleep duration and interest in sleep improvement in a multi-ethnic cohort of diverse women participating in a community-based wellness intervention: an unmet need for improvement. Background: Disparities in sleep duration are a modifiable contributor to increased risk for cardiometabolic disorders in communities of color. We examined the prevalence of short sleep duration and interest in improving sleep among a multi-ethnic sample of women participating in a culturally tailored wellness coaching program and discussed steps to engage communities in sleep health interventions. Methods: Secondary analysis of data from a randomized trial were used. 
The wellness coaching trial utilized a Community-Based Participatory Research (CBPR) approach. Data were from the baseline survey and baseline wellness coaching notes. Short sleep duration was defined as < 7 h of self-reported sleep. Participants were prompted to set a goal related to healthy eating/physical activity and had the opportunity to set another goal on any topic of interest. Those who set a goal related to improving sleep or who discussed a desire to improve sleep during coaching were classified as having an interest in sleep improvement. Analyses utilized multivariable models to evaluate factors contributing to short sleep and interest in sleep improvement. We present our process of discussing results with community leaders and health workers. Results: A total of 485 women of color participated in the study. Among these, 199 (41%) reported short sleep duration. In adjusted models, Blacks/African Americans and Native Hawaiians/Pacific Islanders had higher odds of reporting < 7 h of sleep than Hispanics/Latinas. Depression symptoms and self-reported stress management scores were significantly associated with short sleep duration. Interest in sleep improvement was noted in the wellness coaching notes of 52 women (10.7%); sleep was the most common focus of goals not related to healthy eating/physical activity. African Immigrants/Refugees and African Americans were less likely to report interest in sleep improvement. Community leaders and health workers reported lack of awareness of the role of sleep in health and discussed challenges to obtaining adequate sleep in their communities. Conclusion: Despite the high prevalence of short sleep duration, interest in sleep improvement was generally low. This study highlights a discrepancy between need and interest, and our process of community engagement, which can inform intervention development for addressing sleep duration among diverse women. abstract_id: PUBMED:31236505 The role of sleep duration and sleep disordered breathing in gestational diabetes mellitus. Many women experience sleep problems during pregnancy. This includes difficulty initiating and maintaining sleep due to physiologic changes that occur as pregnancy progresses, as well as increased symptoms of sleep-disordered breathing (SDB). Growing evidence indicates that sleep deficiency alters glucose metabolism and increases risk of diabetes. Poor sleep may exacerbate the progressive increase in insulin resistance that normally occurs during pregnancy, thus contributing to the development of maternal hyperglycemia. Here, we critically review evidence that exposure to short sleep duration or SDB during pregnancy is associated with gestational diabetes mellitus (GDM). Several studies have found that the frequency of GDM is higher in women exposed to short sleep compared with longer sleep durations. Despite mixed evidence regarding whether symptoms of SDB (e.g., frequent snoring) are associated with GDM after adjusting for BMI or obesity, it has been shown that clinically-diagnosed SDB is prospectively associated with GDM. There are multiple mechanisms that may link sleep deprivation and SDB with insulin resistance, including increased levels of oxidative stress, inflammation, sympathetic activity, and cortisol. Despite emerging evidence that sleep deficiency and SDB are associated with increased risk of GDM, it has yet to be demonstrated that improving sleep in pregnant women (e.g., by extending sleep duration or treating SDB) protects against the development of hyperglycemia. 
If a causal relationship can be established, behavioral therapies for improving sleep can potentially be used to reduce the risk and burden of GDM. abstract_id: PUBMED:37308460 The gendered health consequences of persistent exposure to short sleep duration during adolescence. Introduction: Despite evidence that sleep duration is associated with adolescent health, there remain several gaps in the literature. Little is known about: (1) the extent to which persistent exposure to short sleep duration is associated with adolescent health and (2) whether this association varies by gender. Methods: Using six waves of longitudinal data from the 2011-2016 Korean Children and Youth Panel Survey (N = 6147), this study examined whether persistent exposure to short sleep duration is related to two adolescent health outcomes (overweight status and self-rated health). Fixed effects models were estimated to account for individual-level heterogeneity. Results: Short sleep duration was associated with being overweight and self-rated health in different ways for boys and girls. Gender-stratified analysis suggests that, for girls, the risk of being overweight increased for 5 years in a row as short sleep duration persisted. Prolonged short sleep duration also resulted in a continued decline in girls' self-rated health. For boys, persistent exposure to short sleep duration predicted a lower likelihood of being overweight up to the fourth year, but then began to recover. No association between persistent exposure to short sleep duration and self-rated health was observed for boys. Conclusion: Persistent exposure to short sleep duration was found to be more harmful to the health of girls than boys. Promoting longer sleep duration during adolescence may be an effective intervention to improve adolescent health, especially for girls. Answer: The association between overweight/obesity and short sleep duration in older women is not consistently supported by the available research. One study found that short sleep duration does not predict an increased risk of being overweight/obese in older women, suggesting that the relationship between sleep and body fat may become less detectable with age (PUBMED:17726359). However, other studies have identified various associations between short sleep duration and metabolic outcomes, such as hyperleptinemia in Taiwanese adults, which was significantly associated with short sleep duration in women, independent of adiposity (PUBMED:24127149). Additionally, short sleep duration was associated with increased serum homocysteine levels, particularly in women, individuals with obesity, and non-Hispanic whites, suggesting a potential increased cardiovascular risk in these groups (PUBMED:30621835). In contrast, another study found that short sleep duration was associated with a 35% increased odds of general obesity in women, while long sleep was associated with an 18% increased odds of visceral obesity (PUBMED:36755902). This indicates that both short and long sleep durations may be linked to different types of obesity in women. It is important to note that the accuracy of self-reported sleep duration may be affected by reduced symptom awareness in older adults, and there is often poor agreement between self-reported and actigraphic sleep duration (PUBMED:34338629). Therefore, objective measures of sleep duration should be used when evaluating the effect of sleep duration on health outcomes. 
Overall, the evidence suggests that there may be a complex relationship between sleep duration and obesity in older women, with some studies finding associations and others not. Further research, ideally using objective measures of sleep duration, is needed to clarify the nature of this relationship.
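Several of the abstracts above report adjusted odds ratios from multivariable logistic regression (e.g., 35% increased odds of general obesity with short sleep in PUBMED:36755902, or OR = 2.15 for hyperleptinemia in PUBMED:24127149). The following is a minimal Python sketch of how such an adjusted odds ratio and its confidence interval are derived from a logistic regression coefficient (OR = exp(β)); it uses simulated, hypothetical data, not the datasets of the cited studies.

# Minimal sketch (simulated data): adjusted odds ratio for obesity given short sleep.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "short_sleep": rng.integers(0, 2, n),   # 1 = <7 h/day, 0 = otherwise (hypothetical)
    "age": rng.normal(55, 8, n),            # hypothetical covariate
})
# Simulate an obesity outcome with modestly higher risk under short sleep.
linpred = -1.0 + 0.3 * df["short_sleep"] + 0.01 * (df["age"] - 55)
df["obese"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

model = smf.logit("obese ~ short_sleep + age", data=df).fit(disp=0)
or_short_sleep = np.exp(model.params["short_sleep"])          # OR = exp(beta)
ci_low, ci_high = np.exp(model.conf_int().loc["short_sleep"])  # exponentiated CI bounds
print(f"Adjusted OR for short sleep: {or_short_sleep:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

An OR above 1 with a confidence interval excluding 1 would correspond to the kind of statistically significant association reported in those abstracts.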
Instruction: Changing anesthesiologists' practice patterns. Can it be done? Abstracts: abstract_id: PUBMED:8712440 Changing anesthesiologists' practice patterns. Can it be done? Background: Because the ultimate purpose of new medical knowledge is to achieve improved health outcomes, physicians need to possess and use this knowledge in their practice. The authors introduced enhanced education and individualized feedback to reduce postoperative nausea and vomiting (PONV). The primary objective was to increase anesthesiologists' use of preventive measures to reduce PONV, and the secondary objective was to determine whether patient outcomes were improved. Methods: After obtaining hospital ethics committee approval, the effect of education and feedback on anesthesiologist performance and the rate of PONV in major surgery elective inpatients during a 2-yr period was assessed. After baseline data collection (6 months), anesthesiologists at the study hospital received enhanced education (8 months) and individualized feedback (10 months). Parallel data collection was performed at a control hospital at which practice was continued as usual. The education promoted preventive measures (antiemetic premedication, nasogastric tubes, droperidol, metoclopramide). Individualized feedback provided the number of patients receiving promoted measures and the rate of PONV. The mean percentage of anesthesiologists' patients receiving at least one promoted measure and the rate of PONV were compared with baseline levels. Results: At the study hospital, there was a significant increase in the mean percentage of the anesthesiologists' female patients receiving a preventive measure as well as a significant increase in the use of droperidol ≥ 1 mg (P < 0.05) for all patients. The use of other promoted measures was unaffected. Absolute rates of PONV were unaffected at the study hospital until the post-feedback period (decrease of 8.8% between baseline and post-feedback; P = 0.015). Conclusion: It was demonstrated that enhanced education and individualized feedback can change anesthesiologists' practice patterns. The actual benefit to patients from use of preventive measures was limited when used in the everyday clinical situation. Therefore, only modest decreases in PONV were achieved, despite the use of preventive measures. abstract_id: PUBMED:29509521 Practice Patterns of Dentist Anesthesiologists in North America. This study provides trends in the discipline of dental anesthesiology. A questionnaire-based survey was sent to 338 members of the American Society of Dentist Anesthesiologists to evaluate practice patterns. One focus of the study was modality of sedation/anesthesia used for dentistry in North America. Age, gender, years in practice, and geographic region of practice were also obtained. Data gathered from the returned questionnaires were entered into an Excel spreadsheet and then imported into JMP Statistical Discovery Software (v12.2 Pro) for descriptive analysis. A total of 112 surveys were completed electronically and 102 surveys were returned via post, for a total response rate of 63.3% (N = 214). Data from this survey suggested a wide variation of therapeutic practices among dentist anesthesiologists in North America. Of the surveyed dentist anesthesiologists, 58.7% (SE = 4.2%) practice as mobile providers, 32.2% (SE = 3.1%) provide care in an academic environment, and 27.7% (SE = 2.8%) function as operator/anesthetists.
The majority of anesthesia is provided for pediatric dentistry (47.0%, SE = 4.2%), oral and maxillofacial surgery (18.5%, SE = 3.9%), and special needs (16.7%, SE = 3.6%). Open-airway (58.7%, SE = 5.5%) sedation/anesthesia was the preferred modality of delivery, compared with the use of advanced airway (41.3%, SE = 4.6%). The demographics show diverse practice patterns of dentist anesthesiologists in multiple regions of the continent. Despite concerns regarding specialty recognition, reimbursement difficulties, and competition from alternative anesthesia providers, the overall perceptions of dentist anesthesiologists and the future of the field seem largely favorable. abstract_id: PUBMED:35950751 Nationwide Clinical Practice Patterns of Anesthesiology Critical Care Physicians: A Survey to Members of the Society of Critical Care Anesthesiologists. Background: Despite the growing contributions of critical care anesthesiologists to clinical practice, research, and administrative leadership of intensive care units (ICUs), relatively little is known about the subspecialty-specific clinical practice environment. An understanding of contemporary clinical practice is essential to recognize the opportunities and challenges facing critical care anesthesia, optimize staffing patterns, assess sustainability and satisfaction, and strategically plan for future activity, scope, and training. This study surveyed intensivists who are members of the Society of Critical Care Anesthesiologists (SOCCA) to evaluate practice patterns of critical care anesthesiologists, including compensation, types of ICUs covered, models of overnight ICU coverage, and relationships between these factors. We hypothesized that variability in compensation and practice patterns would be observed between individuals. Methods: Board-certified critical care anesthesiologists practicing in the United States were identified using the SOCCA membership distribution list and invited to take a voluntary online survey between May and June 2021. Multiple-choice questions with both single- and multiple-select options were used for answers with categorical data, and adaptive questioning was used to clarify stem-based responses. Respondents were asked to describe practice patterns at their respective institutions and provide information about their demographics, salaries, effort in ICUs, as well as other activities. Results: A total of 490 participants were invited to take this survey, and 157 (response rate 32%) surveys were completed and analyzed. The majority of respondents were White (73%), male (69%), and younger than 50 years of age (82%). The cardiothoracic/cardiovascular ICU was the most common practice setting, with 69.5% of respondents reporting time working in this unit. Significant variability was observed in ICU practice patterns. Respondents reported spending an equal proportion of their time in clinical practice in the operating rooms and ICUs (median, 40%; interquartile range [IQR], 20%-50%), whereas a smaller proportion-primarily those who completed their training before 2009-reported administrative or research activities. Female respondents reported salaries that were $36,739 less than male respondents; however, this difference was not statistically different, and after adjusting for age and practice type, these differences were less pronounced (-$27,479.79; 95% confidence interval [CI], -$57,232.61 to $2273.03; P = .07). 
Conclusions: These survey data provide a current snapshot of anesthesiology critical care clinical practice patterns in the United States. Our findings may inform decision-making around the initiation and expansion of critical care services and optimal staffing patterns, as well as provide a basis for further work that focuses on intensivist satisfaction and burnout. abstract_id: PUBMED:36879378 Practice Patterns Among Dentist Anesthesiologists for Pediatric Patients with Autism Spectrum Disorders. Purpose: The purpose of this study was to evaluate practice patterns among dentist anesthesiologists for pediatric patients with autism spectrum disorders (ASD) undergoing sedation for dental procedures. Methods: An electronic nationwide survey was delivered to all members of the American Society of Dentist Anesthesiologists. The survey assessed provider training and comfort in treating pediatric patients with ASD, perioperative procedures for children with and without ASD, and preferred educational resources for the perioperative management of pediatric patients with ASD. Results: Respondents were 114 dentist anesthesiologists and residents (33.3 percent response rate). Respondents indicated a high comfort level for managing pediatric patients with ASD for sedation (mean equals 91.9±14.74 [SD] percent). The average number of patients with ASD who respondents treat per week was 3.48±2.44). Providers reported making scheduling and staffing accommodations for patients with ASD. More than half of respondents reported no difference between patient groups in medication dosing for sedation and medication regimens used intraoperatively; however, only 43.9 percent of providers indicated using equivalent preoperative medication regimens for both patient groups, and providers reported increased usage of preoperative anxiolytic techniques with patients with ASD. Importantly, 87.7 percent of respondents reported the same incidence of adverse events during the perioperative period between groups. Conclusions: Findings from this survey suggest there are both similarities and differences in how dentist anesthesiologists practice with pediatric patients with and without autism spectrum disorders. Additional research is warranted to measure the clinical benefits of modified practices for patients with ASD and identify best practices for this vulnerable population. abstract_id: PUBMED:36811682 A Web-Based Reporting System for Reviewing Local Practice Patterns of Anesthesiologists Derived from the Electronic Medical Record. After completion of training, anesthesiologists may have fewer opportunities to see how colleagues practice, and their breadth of case experiences may also diminish due to specialization. We created a web-based reporting system based on data extracted from electronic anesthesia records that allows practitioners to see how other clinicians practice in similar cases. One year after implementation, the system continues to be utilized by clinicians. abstract_id: PUBMED:27714397 Comparing Graphical Formats for Feedback of Clinical Practice Data. A Multicenter Study among Anesthesiologists in France. Objectives: Although graphical formats used to feedback clinical practice data may have an important impact, the most effective formats remain unknown. Using prevention of postoperative nausea and vomiting by anesthesiologists as an application, the objective of this study was to assess which graphical formats for feedback of clinical practice data are the most incentive to change practice. 
Methods: We conducted a multicenter cross-sectional study among anesthesiologists randomized in two groups between March and June 2014. Each anesthesiologist assessed 15 graphical formats displaying an indicator of either prescription conformity or prescription effectiveness. Graphical formats varied by: type of graph (bar charts, linear sliders, or pictographs), presence or not of a target to reach, presence or not of a contrast between a hypothetical physician and his / her team, direction of the difference between the physician and his / her team, and restitution or not of the quality indicator evolution over the previous six months. The primary outcome was a numerical scale score expressing the anesthesiologists' motivation to change his / her practice (ranging from 1 to 10 points). A linear mixed model was fitted to explain variation in motivation. Results: Sixty-six anesthesiologists assessed the conformity indicator and 67 assessed the effectiveness indicator. Factors associated with an increased motivation to change practice were: (i) presence of a clearly defined target to reach (conformity: β = 0.24 points, p = 0.0046; effectiveness: β = 1.11 points, p < 0.0001); (ii) contrast between the physician and his / her team (conformity: β = 0.38 points, p < 0.0001; effectiveness: β = 0.33 points, p = 0.0021); (iii) better results for the team than for the physician (conformity: β = 0.65 points, p < 0.0001; effectiveness β = 1.16 points, p < 0.0001). For the effectiveness indicator, anesthesiologists were more motivated to change practice with bar charts (β = 0.24 points, p = 0.0447) and pictographs (β = 0.45 points, p = 0.0001) than with linear sliders. Conclusions: Graphs associated with a defined target to reach should be preferred to deliver feedback, especially bar graphs or pictographs for indicators which are more complex to represent such as effectiveness indicators. Anesthesiologists are also more motivated to change practice when graphs report contrasted data between the physician and his / her team and a lower conformity or effectiveness for the physician than for his / her team. abstract_id: PUBMED:9661565 Practice patterns in managing the difficult airway by anesthesiologists in the United States. Unlabelled: Despite the availability of several techniques and devices for the management of the difficult airway, little information has been published regarding the prevalence of their use by anesthesiologists in the United States. To determine current practice patterns, we surveyed clinicians using a questionnaire consisting of 14 difficult airway scenarios. Anesthesiologists were requested to indicate their likely approach to anesthetic induction (e.g., awake but sedated, general anesthesia with spontaneous ventilation, general anesthesia with apnea after assuring a patent airway, or general anesthesia with apnea) and the primary device they would use to intubate (e.g., direct laryngoscopy [DL], flexible fiberoptic bronchoscope [FOB], rigid fiberoptic device, surgical airway, retrograde intubation kit, laryngeal mask airway, gum elastic bougie, or Combitube). The availability of these devices was also determined (in room at all times, available "stat," available if arranged preoperatively, or not available). The survey was mailed to 1000 randomly chosen active members of the American Society of Anesthesiologists. Second and third surveys were mailed to non responders. Four hundred seventy-two completed surveys were returned. 
Responses by demographic groups were compared by using chi-square (χ2) analysis. DL and FOB-aided tracheal intubation techniques were chosen for most cases by most anesthesiologists (P < 0.05). Anesthesiologists with > 10 yr of clinical experience and those older than 55 yr of age preferred DL with apneic conditions (P < 0.05). Anesthesiologists who had attended workshops within the last 5 yr had greater availability of retrograde guidewire equipment and FOBs (P < 0.05). There was little use of newer alternative airway devices. Implications: Although the teaching of alternative methods of securing a difficult airway has become ubiquitous, most anesthesiologists rely on direct laryngoscopy and fiberoptic-aided intubation in most clinical circumstances. Although workshops in the management of the difficult airway may have resulted in increased use of the fiberoptic bronchoscope and the availability of retrograde guidewire intubation equipment, other devices have not enjoyed such an increase. abstract_id: PUBMED:11498316 Variation in practice patterns of anesthesiologists in California for prophylaxis of postoperative nausea and vomiting. Study Objective: To assess the responses to a survey asking anesthesiologists to report their clinical practice patterns for postoperative nausea and vomiting (PONV) prophylaxis. These practice patterns data may be useful for understanding how to optimize the decision to provide PONV prophylaxis. Design: A written questionnaire with three detailed clinical scenarios with differing levels of a priori risk of PONV (a low-risk patient, a medium-risk patient, and a high-risk patient) was mailed to 454 anesthesiologists. Setting: Survey was completed by anesthesiologists (n = 240) in 3 university and 3 community practices in California. Measurements: Type and number of pharmacological and nonpharmacological interventions for PONV prophylaxis were recorded. To assess the variability in the responses (by the a priori risk of patient), we counted the number of different regimens that would be necessary to account for 80% of the responses. Main Results: For the 240 respondents, we found that 1, 9, and 11 different pharmacological prophylaxis regimens were required to account for 80% of the variability in practice patterns for the low-, medium-, and high-risk patients, respectively. For the low-risk patient, 19% of practitioners would use pharmacological prophylaxis, and 37% would use nonpharmacological prophylaxis. For the medium-risk patient, 61% would use nonpharmacological prophylaxis and 67% of practitioners would use multidrug prophylaxis: 45% of patients would receive a 5-HT3 antagonist, 35% would receive metoclopramide, and 16% would receive droperidol. For the high-risk patient, 94% of practitioners would administer a 5-HT3 antagonist, whereas 84% would use multidrug prophylaxis. Conclusions: We found a wide range of PONV prophylaxis management patterns. This variation in clinical practice may reflect uncertainty about the efficacy of available interventions, or differences in practitioners' clinical judgment and beliefs about how to treat PONV. Some therapies with proven benefit for PONV may be underused. Our results may be useful for designing studies aimed at determining the impact on PONV rates when physicians develop and implement guidelines for PONV prophylaxis. abstract_id: PUBMED:30724774 The American Society of Anesthesiologists Practice Parameter Methodology.
The methodology used during the development of American Society of Anesthesiologists evidence-based practice parameters, from conceptualization through final adoption of the documents, is described. Features of the methodology include the literature search, review and analysis, survey development and application, and consolidation of the full body of evidence used for preparing clinical practice recommendations. Anticipated risks of bias, validation of the process, and the importance of the documents for clinical use are discussed. abstract_id: PUBMED:34762729 2022 American Society of Anesthesiologists Practice Guidelines for Management of the Difficult Airway. The American Society of Anesthesiologists; All India Difficult Airway Association; European Airway Management Society; European Society of Anaesthesiology and Intensive Care; Italian Society of Anesthesiology, Analgesia, Resuscitation and Intensive Care; Learning, Teaching and Investigation Difficult Airway Group; Society for Airway Management; Society for Ambulatory Anesthesia; Society for Head and Neck Anesthesia; Society for Pediatric Anesthesia; Society of Critical Care Anesthesiologists; and the Trauma Anesthesiology Society present an updated report of the Practice Guidelines for Management of the Difficult Airway. Answer: Yes, changing anesthesiologists' practice patterns can be done, as demonstrated by a study that introduced enhanced education and individualized feedback to reduce postoperative nausea and vomiting (PONV). The study aimed to increase anesthesiologists' use of preventive measures for PONV and to determine whether patient outcomes were improved. The intervention at the study hospital led to a significant increase in the mean percentage of female patients receiving a preventive measure and a significant increase in the use of droperidol for all patients. However, the use of other promoted measures was unaffected. There was a decrease in the absolute rates of PONV at the study hospital during the post-feedback period, indicating that the intervention had a positive effect on patient outcomes, albeit modest (PUBMED:8712440). This finding is consistent with the broader understanding that practice patterns among medical professionals can be influenced by various factors, including feedback mechanisms, education, and the availability of clinical guidelines. For instance, graphical formats for feedback of clinical practice data have been shown to impact anesthesiologists' motivation to change practice, with the presence of a clearly defined target, contrast between the physician and their team, and better results for the team than for the physician being associated with increased motivation (PUBMED:27714397). Moreover, practice patterns can vary widely among anesthesiologists in different subspecialties and regions, as seen in dentist anesthesiologists (PUBMED:29509521), critical care anesthesiologists (PUBMED:35950751), and those managing pediatric patients with autism spectrum disorders (PUBMED:36879378). The use of a web-based reporting system derived from electronic medical records has also been implemented to allow anesthesiologists to review local practice patterns, which may contribute to changes in practice by providing visibility into how colleagues manage similar cases (PUBMED:36811682). In summary, changing anesthesiologists' practice patterns is possible through targeted interventions such as education, feedback, and the use of technology to provide insights into current practices.
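The central result in PUBMED:8712440 is a change in the percentage of patients receiving a preventive measure between the baseline and feedback periods, together with an 8.8% drop in PONV. The abstract does not state which test was used, but a before/after difference in proportions of this kind is commonly assessed with a two-proportion test. The sketch below is a minimal, hypothetical Python illustration; the counts are invented for demonstration and are not the study's data.

# Minimal sketch (hypothetical counts): did the proportion of patients receiving
# a PONV preventive measure change between baseline and post-feedback periods?
from statsmodels.stats.proportion import proportions_ztest

received = [150, 210]   # patients given at least one preventive measure (baseline, post-feedback)
totals = [400, 420]     # patients anaesthetised in each period

z_stat, p_value = proportions_ztest(count=received, nobs=totals)
baseline_rate = received[0] / totals[0]
post_rate = received[1] / totals[1]
print(f"Baseline {baseline_rate:.1%} vs post-feedback {post_rate:.1%}: z = {z_stat:.2f}, p = {p_value:.4f}")

A significant p-value here would correspond to the kind of pre/post change in practice that the feedback intervention was designed to produce.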
Instruction: School nursing for children with special needs: does number of schools make a difference? Abstracts: abstract_id: PUBMED:19630867 School nursing for children with special needs: does number of schools make a difference? Background: Few recent studies have focused on the role of school nurses who predominantly care for children with special health care needs (CSHCN). The primary aim of this study was to explore differences related to (a) child health conditions covered, (b) direct care procedures, (c) care management functions, and (d) consultation sources used among nurses who spent the majority of their time caring for CSHCN compared to a mixed student population and among nurses who covered a single school versus multiple schools. Methods: A community-based interdisciplinary team developed a 28-item survey, which was completed by 50 nurses (48.5% response) employed by health departments and school districts. Descriptive and comparative statistics and thematic coding were used to analyze data. Results: Nurses who covered a single school (n = 23) or who were primarily assigned to CSHCN (n = 13) had a lower number of students, and more frequently (a) encountered complex child conditions, (b) performed direct care procedures, (c) participated in Individualized Education Plan (IEP) development, (d) collaborated with the Title V-CSHCN agency, and (e) communicated with physicians, compared to nurses who covered multiple schools or a general child population. Benefits centered on the children, scope of work, school environment, and family relationships. Challenges included high caseloads, school district priorities, and families who did not follow up. Conclusion: The number of schools that the nurses covered, percent of time caring for CSHCN, and employer type (school district or health department) affected the scope of school nurse practice. Recommendations are for lower student-to-nurse ratios, improved nursing supervision, and educational support. abstract_id: PUBMED:11470098 Nursing inputs to special schools in N. Ireland. Children attending special schools often have healthcare needs that require ongoing medical and nursing care. Two postal surveys were undertaken of 47 special schools in N. Ireland to determine the type of contact they had with nurses and the functions they fulfilled. Responses were received from 42 school principals and from the 11 Health and Social Service Trusts responsible for nursing services. It was found that nurses were based in nine of 42 schools while the remaining schools depended on a range of different visiting nurses. The nurses were involved in 'hands-on' tasks as well as giving advice and training to school personnel. Further research needs to define more closely the nursing needs of these pupils as well as evaluating the differential benefits of various nursing services to schools and how their inputs can be coordinated with those of other health professionals. abstract_id: PUBMED:14515749 A study of children with special needs in mainstream schools and the role of nursing teachers. Objective: Principals of elementary schools and classroom teachers of classes for children with special needs were surveyed by questionnaire to identify the features of children with special needs in mainstream schools and the role of nursing teachers.
Methods: Subjects in Y prefecture were asked to consider several items including the following: 1) presence in class of children with special needs; 2) presence of a child with special needs who was a target of a class or a school for children with special needs; 3) presence of education for special needs experience; 4) obstacles that such children face; 5) the relation with a nursing teacher from June to August in 1998. Results: 1. There was no class for children with special needs in 87 among 135 schools. Children with special needs were present in 27 out of 87 schools, and the frequency of children with special needs was about 0.3% in mainstream classes. 2. The total number of children in classes for special needs was 177 including 142 (79.8%) with intellectual disabilities, 77 (43.3%) with complex disabilities and 61 (34.3%) requiring medical care. 3. Ninety percent of teachers of special needs classes asked a school nurse about health care for special needs and how to cope with matters relating to disabilities. 4. Ninety percent of teachers of special needs classes were concerned about matters such as the method of teaching and coping with matters relating to disabilities. Conclusion: The survey found that there are children with special needs in mainstream classes. An appreciable number of these disabilities were complex, so teachers of special needs classes had many concerns. Those teachers are inclined to be isolated in school. To achieve good educational outcomes for children with special needs, it is important to develop a systematic network between school and experts such as medical doctors and educational professionals. Nursing teachers could play an important role in this relationship with medical specialists. abstract_id: PUBMED:25854694 School Health Services for Children With Special Health Care Needs in California. Children with special health care needs (CSHCN) are at risk for school failure when their health needs are not met. Current studies have identified a strong connection between school success and health. This study attempted to determine (a) how schools meet the direct service health needs of children and (b) who provides those services. The study used the following two methods: (a) analysis of administrative data from the California Basic Educational Data System and (b) a cross-sectional online survey of 446 practicing California school nurses. Only 43% of California's school districts employ school nurses. Unlicensed school personnel with a variety of unregulated training provide school health services. There is a lack of identification of CSHCN, and communication barriers impair the ability to deliver care. Study results indicate that California invests minimally in school health services. abstract_id: PUBMED:36395685 Health and social relationships of mothers of children in special education schools. Background And Aims: The number of children in special education schools has increased in Japan. This study aimed to examine the association between special education school enrollment and the health and social relationships of mothers with children in these schools using population-based samples in Japan. Methods And Procedures: This study used data from the Kochi Child Health Impact of Living Difficulty (K-CHILD) study in 2016. First, fifth, eighth, and eleventh-grade children in all schools in Kochi prefecture were included (n = 12,623).
Associations between school type (regular or special education school) and maternal physical and mental health and social relationships were investigated by multivariate regression models. Outcomes And Results: There were 134 children in special education schools (1.1 %) and 12,489 children in regular schools. Mothers of children in special education schools were more likely to have higher body mass index (BMI), poorer mental health and lower neighborhood relations score. Mothers of children in regular schools had higher BMI when their children had higher behavioral problems. Conclusion And Implications: Mothers of children in special education schools are at risk of obesity, poor mental health, and having fewer social networks. Services and support should be expanded for caregivers based on their child's behavioral problems and school system. abstract_id: PUBMED:25869812 The Mismatch Between Children's Health Needs and School Resources. There are increasing numbers of children with special health care needs (CSHCN) who require various levels of care each school day. The purpose of this study was to examine the role of public schools in supporting CSHCN through in-depth key informant interviews. For this qualitative study, the authors interviewed 17 key informants to identify key themes, provide recommendations, and generate hypotheses for further statewide survey of school nurse services. Key informants identified successful strategies and challenges that public schools face in meeting the needs of all CSHCN. Although schools are well intentioned, there is wide variation in the ability of schools to meet the needs of CSHCN. Increased funding, monitoring of school health services, integration of services, and interagency collaboration are strategies that could improve the delivery of health services to CSHCN in schools. abstract_id: PUBMED:16322130 The preparedness of schools to respond to emergencies in children: a national survey of school nurses. Objectives: Because children spend a significant proportion of their day in school, pediatric emergencies such as the exacerbation of medical conditions, behavioral crises, and accidental/intentional injuries are likely to occur. Recently, both the American Academy of Pediatrics and the American Heart Association have published guidelines stressing the need for school leaders to establish emergency-response plans to deal with life-threatening medical emergencies in children. The goals include developing an efficient and effective campus-wide communication system for each school with local emergency medical services (EMS); establishing and practicing a medical emergency-response plan (MERP) involving school nurses, physicians, athletic trainers, and the EMS system; identifying students at risk for life-threatening emergencies and ensuring the presence of individual emergency care plans; training staff and students in first aid and cardiopulmonary resuscitation (CPR); equipping the school for potential life-threatening emergencies; and implementing lay rescuer automated external defibrillator (AED) programs. The objective of this study was to use published guidelines by the American Academy of Pediatrics and the American Heart Association to examine the preparedness of schools to respond to pediatric emergencies, including those involving children with special care needs, and potential mass disasters. Methods: A 2-part questionnaire was mailed to 1000 randomly selected members of the National Association of School Nurses. 
The first part included 20 questions focusing on: (1) the clinical background of the school nurse (highest level of education, years practicing as a school health provider, CPR training); (2) demographic features of the school (student attendance, grades represented, inner-city or rural/suburban setting, private or public funding, presence of children with special needs); (3) self-reported frequency of medical and psychiatric emergencies (most common reported school emergencies encountered over the past school year, weekly number of visits to school nurses, annual number of "life-threatening" emergencies requiring activation of EMS); and (4) the preparedness of schools to manage life-threatening emergencies (presence of an MERP, presence of emergency care plans for asthmatics, diabetics, and children with special needs, presence of a school nurse during all school hours, CPR training of staff and students, availability of athletic trainers during all athletic events, presence of an MERP for potential mass disasters). The second part included 10 clinical scenarios measuring the availability of emergency equipment and the confidence level of the school nurse to manage potential life-threatening emergencies. Results: Of the 675 questionnaires returned, 573 were eligible for analysis. A majority of responses were from registered nurses who have been practicing for >5 years in a rural or suburban setting. The most common reported school emergencies were extremity sprains and shortness of breath. Sixty-eight percent (391 of 573 [95% confidence interval (CI): 64-72%]) of school nurses have managed a life-threatening emergency requiring EMS activation during the past school year. Eighty-six percent (95% CI: 84-90%) of schools have an MERP, although 35% (95% CI: 31-39%) of schools do not practice the plan. Thirteen percent (95% CI: 10-16%) of schools do not identify authorized personnel to make emergency medical decisions. When stratified by mean student attendance, school setting, and funding classification, schools with and without an MERP did not differ significantly. Of the 205 schools that do not have a school nurse present on campus during all school hours, 17% (95% CI: 12-23%) do not have an MERP, 17% (95% CI: 12-23%) do not identify an authorized person to make medical decisions when faced with a life-threatening emergency, and 72% (95% CI: 65-78%) do not have an effective campus-wide communication system. CPR training is offered to 76% (95% CI: 70-81%) of the teachers, 68% (95% CI: 61-74%) of the administrative staff, and 28% (95% CI: 22-35%) of the students. School nurses reported the availability of a bronchodilator meter-dosed inhaler (78% [95% CI: 74-81%]), AED (32% [95% CI: 28-36%]), and epinephrine autoinjector (76% [95% CI: 68-79%]) in their school. When stratified by inner-city and rural/suburban school setting, the availability of emergency equipment did not differ significantly except for the availability of an oxygen source, which was higher in rural/suburban schools (15% vs 5%). School-nurse responders self-reported more confidence in managing respiratory distress, airway obstruction, profuse bleeding/extremity fracture, anaphylaxis, and shock in a diabetic child and comparatively less confidence in managing cardiac arrest, overdose, seizure, heat illness, and head injury. 
When analyzing schools with at least 1 child with special care needs, 90% (95% CI: 86-93%) have an MERP, 64% (95% CI: 58-69%) have a nurse available during all school hours, and 32% (95% CI: 27-38%) have an efficient and effective campus-wide communication system linked with EMS. There are no identified authorized personnel to make medical decisions when the school nurse is not present on campus in 12% (95% CI: 9-16%) of the schools with children with special care needs. When analyzing the confidence level of school nurses to respond to common potential life-threatening emergencies in children with special care needs, 67% (95% CI: 61-72%) of school nurses felt confident in managing seizures, 88% (95% CI: 84-91%) felt confident in managing respiratory distress, and 83% (95% CI: 78-87%) felt confident in managing airway obstruction. School nurses reported having the following emergency equipment available in the event of an emergency in a child with special care needs: glucose source (94% [95% CI: 91-96%]), bronchodilator (79% [95% CI: 74-83%]), suction (22% [95% CI: 18-27%]), bag-valve-mask device (16% [95% CI: 12-21%]), and oxygen (12% [95% CI: 9-16%]). An MERP designed specifically for potential mass disasters was present in 418 (74%) of 573 schools (95% CI: 70-77%). When stratified by mean student attendance, school setting, and funding classification, schools with and without an MERP for mass disasters did not differ significantly. Conclusions: Although schools are in compliance with many of the recommendations for emergency preparedness, specific areas for improvement include practicing the MERP several times per year, linking all areas of the school directly with EMS, identifying authorized personnel to make emergency medical decisions, and increasing the availability of AED in schools. Efforts should be made to increase the education of school nurses in the assessment and management of life-threatening emergencies for which they have less confidence, particularly cardiac arrest, overdose, seizures, heat illness, and head injury. abstract_id: PUBMED:1290867 Public health nursing in schools: perceptions of public school administrators. In localities throughout the country, in lieu of school nurses, nursing services are provided to county schools by public health nurses. The multiple demands placed upon these nurses and counties' fund limitations suggested this study of school administrators' perceptions of nursing responsibility and nursing activities. Administrators of elementary, middle, and high schools were surveyed by questionnaire as to (1) area(s) of nursing responsibility they considered most important and (2) their rank ordering of importance of individual nursing activities. Supervision activities related to communicable disease and immunization garnered the highest ratings. The only significant difference in ranking between groups of administrators was that health counseling for students was ranked higher by high school administrators, whereas health education for parents was ranked higher at the elementary level. abstract_id: PUBMED:11826318 Providing a healthcare input to children in special schools. The education of children with special educational needs is often accompanied by a requirement for medical or healthcare provision. If this cannot be done safely then the child's access to education is limited. No standardized template for the delivery of a healthcare input to children in special schools is apparent. 
This study sought to explore, through the use of an in-depth needs assessment exercise and focus group interviews, what the most appropriate healthcare role was for delivering health care in a special school catering for children with a broad range of severe learning disabilities. While the overwhelming view among focus group participants was that a nurse was the only suitable person to undertake the role, the evidence gathered prompted the research steering group to suggest the contrary, i.e. that a healthcare worker with a national vocational qualification (NVQ) level 3 in care was the more appropriate person to maximize both the role of the nurse and the quality of care provided to these children. abstract_id: PUBMED:28144766 Care Coordination with Schools: The Role of Family-Centered Care for Children with Special Health Care Needs. Objectives: Family-centered care has been associated with positive outcomes for children with special health care needs. The purpose of the current study was to examine the relationship between family-centered care and both care coordination with schools and school absences (e.g., missed days), as reported by parents of children with special health care needs. Methods: The current study utilized data from the National Survey of Children with Special Health Care Needs 2009-2010 (N = 40,242) to achieve this purpose. The National Survey of Children with Special Health Care Needs may be considered a nationally-representative and community-based sample of parent responses for children with special health care needs across the United States. Results: Results from the current study indicate that family-centered care is associated with fewer absences and improved care coordination with schools when applicable. The variables of functional difficulties, poverty level, and the number of conditions were statistically controlled. Conclusions: We suggest that the positive influence of family-centered care when practiced extends beyond the family and interacts with educational outcomes. We also suggest that the role of schools appears to be under-studied given the role that schools can play in family-centered care. Answer: The number of schools covered by a school nurse does indeed make a difference in the care of children with special health care needs (CSHCN). Nurses who are assigned to a single school or who primarily care for CSHCN tend to encounter more complex child conditions, perform more direct care procedures, participate more frequently in Individualized Education Plan (IEP) development, collaborate more with Title V-CSHCN agencies, and communicate more with physicians compared to nurses who cover multiple schools or care for a general child population (PUBMED:19630867). This suggests that a lower student-to-nurse ratio and a more focused assignment can enhance the scope and quality of care provided to CSHCN. Moreover, the presence of nurses in schools, whether based in the school or visiting, is crucial for 'hands-on' tasks as well as for providing advice and training to school personnel (PUBMED:11470098). The role of nursing teachers is also significant, as they are often consulted by teachers of special needs classes about health care and disability-related issues, indicating the importance of a systematic network between schools and medical professionals (PUBMED:14515749).
However, there are challenges in meeting the health needs of CSHCN in schools, including high caseloads, school district priorities, and families not following up on care plans (PUBMED:19630867). Additionally, there is a lack of identification of CSHCN and communication barriers that impair the ability to deliver care, with only 43% of California's school districts employing school nurses and unlicensed personnel often providing health services (PUBMED:25854694). The mismatch between children's health needs and school resources is a recognized issue, with wide variation in the ability of schools to meet the needs of CSHCN. Recommendations for improvement include increased funding, monitoring of school health services, integration of services, and interagency collaboration (PUBMED:25869812). In summary, the number of schools a nurse covers does impact the care of children with special needs, with single-school assignments and a focus on CSHCN leading to more comprehensive and specialized care. However, systemic challenges and resource limitations need to be addressed to ensure all CSHCN receive the necessary support within the school environment.
Instruction: On the Hospital Volume and Outcome Relationship: Does Specialization Matter More Than Volume? Abstracts: abstract_id: PUBMED:25783775 On the Hospital Volume and Outcome Relationship: Does Specialization Matter More Than Volume? Objective: To evaluate the relationship between hospital volume and outcome by focusing on alternative measures of volume that capture specialization and overall throughput of hospitals. Data Sources/Study Setting: Hospital administrative data from the state of Victoria, Australia; data contain 1,798,474 admitted episodes reported by 135 public and private acute-care hospitals. Study Design: This study contrasts the volume-outcome relationship using regression models with different measures of volume; two-step and single-step risk-adjustment methods are used. Data Collection/Extraction Methods: The sample is restricted to ischemic heart disease (IHD) patients (ICD-10 codes: I20-I25) admitted during 2001/02 to 2004/05. Principal Findings: Overall hospital throughput and degree of specialization display more substantive implications for the volume-outcome relationship than the conventional caseload volume measure. Two-step estimation when corrected for heteroscedasticity produces comparable results to single-step methods. Conclusions: Different measures of volume could lead to vastly different conclusions about the volume-outcome relationship. Hospital specialization and throughput should both be included as measures of volume to capture the notion of size, focus, and possible congestion effects. abstract_id: PUBMED:24396230 Volume-outcome relationship in revision hip replacement - Results from a low volume hospital. Introduction: Mortality and morbidity are both increased during revision hip surgery. Higher hospital procedure volumes have been associated with lower rates of mortality and/or complications according to some reports - the "practice makes perfect" hypothesis. Aim: The aim of the study was to test the "practice makes perfect" hypothesis with regard to revision hip surgery at our low volume hospital. Methods: This is a retrospective study of all the patients who underwent revision hip arthroplasty under the care of the senior author between February 2002 and January 2006. Data were collected about the 30-day and one-year mortality, post-operative complications like deep vein thrombosis (DVT), pulmonary embolism (PE), superficial or deep wound infections, dislocations, and the Oxford hip score. Results: The rate of revision hip surgery carried out in our hospital was 6.25 per year. There was no 30-day mortality, stroke within 3 months, dislocations within one year, re-admission within one month, one-year mortality and deep infections within one year. The final outcome after revision hip surgery, based on the Oxford questionnaire, showed that 72% had an excellent outcome and 8% had poor outcome. Conclusion: Volume and outcome relationship may not contribute towards the final outcome when individual surgeons and hospitals are considered. Good general hospital care can greatly affect the health outcome for a particular procedure. Strategies aimed at improving the general hospital care may benefit the patients as much as volume-based regionalization. abstract_id: PUBMED:36806254 Selective referral or learning by doing? An analysis of hospital volume-outcome relationship of vascular procedures.
This paper analyzes the effects of hospital volume on outcomes of patients undergoing percutaneous transluminal angioplasty (PTA) with stent implant in Slovakia between 2014 and 2019. The volume-outcome relationship is estimated jointly using a discrete factor approach, where choice of hospital is correlated with durations until readmission or death, accounting for observed and unobserved characteristics. The results reveal the importance of controlling for between-hospital differences and selectivity in patient referral. Estimates without hospital fixed effects overstate the positive effect of volume on outcomes, but the results remain statistically significant. Once selectivity is accounted for in the joint correlated model, the positive volume-outcome relationship is not different from zero. Overall, the main driver of the volume-outcome relationship for PTA procedures appears to be related to selective referral and differences in quality of health care providers. abstract_id: PUBMED:28736640 Systematic review and a meta-analysis of hospital and surgeon volume/outcome relationships in colorectal cancer surgery. Background: Numerous hospitals worldwide are considering setting minimum volume standards for colorectal surgery. This study aims to examine the association of hospital and surgeon volume with outcomes for colorectal surgery. Methods: Two investigators independently reviewed six databases from inception to May 2016 for articles that reported outcomes according to hospital and/or surgeon volume. Eligible studies were those that assessed the association of hospital or surgeon volume with outcomes for the surgical treatment of colon and/or rectal cancer. Random effects models were used to pool the hazard ratios (HRs) for the association between hospital/surgeon volume and outcomes. Results: There were 47 articles pooled (1,122,303 patients, 9,877 hospitals and 9,649 surgeons). The meta-analysis demonstrated that there is a volume-outcome relationship that favours high volume facilities and high volume surgeons. Higher hospital and surgeon volume resulted in reduced 30-day mortality (HR: 0.83; 95% CI: 0.78-0.87, P<0.001 & HR: 0.84; 95% CI: 0.80-0.89, P<0.001 respectively) and intra-operative mortality (HR: 0.82; 95% CI: 0.76-0.86, P<0.001 & HR: 0.50; 95% CI: 0.40-0.62, P<0.001 respectively). Post-operative complication rates depended on hospital volume (HR: 0.89; 95% CI: 0.81-0.98, P<0.05), but not surgeon volume except with respect to anastomotic leak (HR: 0.59; 95% CI: 0.37-0.94, P<0.01). High volume surgeons are associated with greater 5-year survival and greater lymph node retrieval, whilst reducing recurrence rates, operative time, length of stay and cost. The best outcomes occur in high volume hospitals with high volume surgeons, followed by low volume hospitals with high volume surgeons. Conclusions: High volume by surgeon and high volume by hospital are associated with better outcomes for colorectal cancer surgery. However, this relationship is non-linear with no clear threshold of effect being identified and an apparent ceiling of effect. abstract_id: PUBMED:27083171 Measuring the Volume-Outcome Relation for Complex Hospital Surgery. Background: Prominent studies continue to measure the hospital volume-outcome relation using simple logistic or random-effects models.
These regression models may not appropriately account for unobserved differences across hospitals (such as differences in organizational effectiveness) which could be mistaken for a volume outcome relation. Objective: To explore alternative estimation methods for measuring the volume-outcome relation for six major cancer operations, and to determine which estimation method is most appropriate. Methods: We analyzed patient-level hospital discharge data from three USA states and data from the American Hospital Association Annual Survey of Hospitals from 2000 to 2011. We studied six major cancer operations using three regression frameworks (logistic, fixed-effects, and random-effects) to determine the correlation between patient outcome (mortality) and hospital volume. Results: For our data, logistic and random-effects models suggest a non-zero volume effect, whereas fixed-effects models do not. Model-specification tests support the fixed-effects or random-effects model, depending on the surgical procedure; the basic logistic model is always rejected. Esophagectomy and rectal resection do not exhibit significant volume effects, whereas colectomy, pancreatic resection, pneumonectomy, and pulmonary lobectomy do. Conclusions: The statistical significance of the hospital volume-outcome relation depends critically on the regression model. A simple logistic model cannot control for unobserved differences across hospitals that may be mistaken for a volume effect. Even when one applies panel-data methods, one must carefully choose between fixed- and random-effects models. abstract_id: PUBMED:30856682 Hospital volume versus outcome following oesophagectomy for cancer in Australia and New Zealand. Background: Volume-outcome relationships for mortality following oesophagectomy have been demonstrated in Europe and the USA, but not in Australia or New Zealand. We determined whether higher volume hospitals achieve better outcomes following oesophagectomy in Australia and New Zealand. Methods: Administrative data for hospitals contributing data to the Health Roundtable were analysed. Hospitals performing oesophagectomy for cancer from July 2008 to June 2015 were grouped according to mean annual caseload: low (1-5), medium (6-11) and high (12+) volume. Univariate and multivariable analyses determined the impact of volume on 30-day and in-hospital mortalities, length of hospital stay and mechanical ventilation following surgery. Results: A total of 2252 patients underwent oesophagectomy in 65 hospitals. Sixty-eight percent (n = 44) were low-, 26% (n = 17) were medium- and 6% (n = 4) were high-volume hospitals. Seven hundred and sixty-two (34%) procedures were performed in low-, 1042 (46%) in medium- and 448 (20%) in high-volume hospitals. Overall in-hospital mortality was 3.1% and 30-day mortality was 2.1%. In-hospital mortality was lowest in high-volume hospitals; 1.6% versus 2.6% and 4.1% for low- and medium-volume hospitals (P = 0.02). Surgery in high-volume hospitals was shorter (32 min, P = 0.001), and patients were less likely to require post-operative ventilation (16.7% versus 25.3% and 28.0%, P < 0.001), although patients requiring ventilation in high-volume hospitals were ventilated for longer. Conclusions: A volume-outcome relationship was demonstrated, with overall better performance in higher volume hospitals. Colocation of oesophagectomies to hospitals that can demonstrate appropriate caseload should be considered. 
abstract_id: PUBMED:34210298 Perspective of potential patients on the hospital volume-outcome relationship and the minimum volume threshold for total knee arthroplasty: a qualitative focus group and interview study. Background: Total knee arthroplasty (TKA) is performed to treat end-stage knee osteoarthritis. In Germany, a minimum volume threshold of 50 TKAs/hospital/year was implemented to ensure outcome quality. This study, embedded within a systematic review, aimed to investigate the perspectives of potential TKA patients on the hospital volume-outcome relationship for TKA (higher volumes associated with better outcomes). Methods: A convenience sample of adults with knee problems and heterogeneous demographic characteristics participated in the study. Qualitative data were collected during a focus group prior to the systematic review (n = 5) and during telephone interviews, in which preliminary results of the systematic review were discussed (n = 16). The data were synthesised using content analysis. Results: All participants (n = 21) believed that a hospital volume-outcome relationship exists for TKA while recognising that patient behaviour or the surgeon could also influence outcomes. All participants would be willing to travel longer for better outcomes. Most interviewees would choose a hospital for TKA depending on reputation, recommendations, and service quality. However, some would also choose a hospital based on the results of the systematic review that showed slightly lower mortality/revision rates at higher-volume hospitals. Half of the interviewees supported raising the minimum volume threshold even if this were to increase travel time to receive TKA. Conclusions: Potential patients believe that a hospital volume-outcome relationship exists for TKA. Hospital preference is based mainly on subjective factors, although some potential patients would consider scientific evidence when making their choice. Policy makers and physicians should consider the patient perspectives when deciding on minimum volume thresholds or recommending hospitals for TKA, respectively. abstract_id: PUBMED:35459541 Relationship between volume and outcome for gastroschisis: A systematic review. Background: Newborns with gastroschisis need surgery to reduce intestines into the abdominal cavity and to close the abdominal wall. Due to an existing volume-outcome relationship for other high-risk, low-volume procedures, we aimed at examining the relationship between hospital or surgeon volume and outcomes for gastroschisis. Methods: We conducted a systematic literature search in Medline, Embase, CENTRAL, CINAHL and Biosis Previews in June 2021 and searched for additional literature. We included (cluster-) randomized controlled trials (RCTs) and prospective or retrospective cohort studies analyzing the relationship between hospital or surgeon volume and mortality, morbidity or quality of life. We assessed risk of bias of included studies using ROBINS-I and performed a systematic synthesis without meta-analysis and used GRADE for assessing the certainty of the evidence. Results: We included 12 cohort studies on hospital volume. Higher hospital volume may reduce in-hospital mortality of neonates with gastroschisis, while the evidence is very uncertain for other outcomes. Findings are based on a low certainty of the evidence for in-hospital mortality and a very low certainty of the evidence for all other analyzed outcomes, mainly due to risk of bias and imprecision. We did not identify any study on surgeon volume. 
Conclusion: The evidence suggests that higher hospital volume reduces in-hospital mortality of newborns with gastroschisis. However, the magnitude of this effect seems to be heterogeneous and results should be interpreted with caution. There is no evidence on the relationship between surgeon volume and outcomes. abstract_id: PUBMED:34979289 Hospital Volume-Outcome Relationship in Severe Traumatic Brain Injury: A Nationwide Observational Study in Japan. Objective: The hospital volume-outcome relationship in patients with severe traumatic brain injury (TBI) remains unclear. This study investigated the association between the volume of patients with severe TBI and in-hospital mortality. Methods: This observational study identified patients with severe TBI (Glasgow Coma Scale score <9 and Abbreviated Injury Scale head score ≥3) from the Japan Trauma Databank (2010-2018). Hospitals were grouped on the basis of annual patient volume as follows: low-volume (4-19 patients/year); middle-volume (20-35 patients/year); and high-volume (36-51 patients/year) groups. The association between hospital volume categories and in-hospital mortality was examined using a multivariate mixed-effect logistic regression analysis. A subgroup analysis was performed based on the presence of severe extracranial injuries. Results: A total of 11,344 patients from 64 hospitals were included. The median age of the patients was 57 years (interquartile range, 40-77), and 7933 (70.0%) patients were men. A total of 4879 (43.1%) patients died in the hospital. The medium-volume (adjusted odds ratio [OR], 0.76; 95% confidence interval [CI], 0.62-0.93) and high-volume (adjusted OR, 0.69; 95% CI, 0.52-0.94) groups were significantly associated with lower in-hospital mortality. The subgroup analysis revealed that the medium-volume (adjusted OR, 0.70; 95% CI, 0.54-0.92) and high-volume (adjusted OR, 0.64; 95% CI, 0.42-0.96) groups were significantly associated with lower in-hospital mortality for isolated TBI patients. Conclusions: Higher hospital volumes were significantly associated with lower in-hospital mortality after severe TBI. Regionalization and referral to higher-volume hospitals are beneficial for severe TBI patients. abstract_id: PUBMED:24700615 The volume-outcome relationship and minimum volume standards--empirical evidence for Germany. For decades, there is an ongoing discussion about the quality of hospital care leading i.a. to the introduction of minimum volume standards in various countries. In this paper, we analyze the volume-outcome relationship for patients with intact abdominal aortic aneurysm and hip fracture. We define hypothetical minimum volume standards in both conditions and assess consequences for access to hospital services in Germany. The results show clearly that patients treated in hospitals with a higher case volume have on average a significant lower probability of death in both conditions. Furthermore, we show that the hypothetical minimum volume standards do not compromise overall access measured with changes in travel times. Answer: The relationship between hospital volume and outcomes has been a subject of research, and the question of whether specialization matters more than volume is complex. According to the study by PUBMED:25783775, both overall hospital throughput and the degree of specialization have significant implications for the volume-outcome relationship. 
This study suggests that different measures of volume, including specialization and throughput, should be included to capture the notion of size, focus, and possible congestion effects. This indicates that specialization could indeed play a crucial role alongside volume in determining outcomes. However, the study by PUBMED:24396230 challenges the notion that volume alone determines outcomes, suggesting that good general hospital care can greatly affect health outcomes for specific procedures. This implies that factors other than volume, such as the quality of care, can be equally important. The study by PUBMED:36806254 further complicates the volume-outcome relationship by highlighting the importance of controlling for between-hospital differences and selectivity in patient referral. Once these factors are accounted for, the positive volume-outcome relationship for percutaneous transluminal angioplasty (PTA) procedures was not different from zero, suggesting that selective referral and differences in healthcare provider quality are significant drivers of the relationship. In the context of colorectal cancer surgery, the study by PUBMED:28736640 found that higher hospital and surgeon volume resulted in reduced mortality and complications, with the best outcomes occurring in high volume hospitals with high volume surgeons. This supports the idea that volume is an important factor for outcomes, but it does not directly address the impact of specialization. The study by PUBMED:27083171 emphasizes the importance of the estimation method used to measure the volume-outcome relation, indicating that unobserved differences across hospitals could be mistaken for a volume effect. This suggests that the relationship is not straightforward and may depend on how it is measured. In summary, while volume is an important factor in determining hospital outcomes, the evidence suggests that specialization, quality of care, and other factors such as selective referral and hospital differences also play significant roles. Therefore, it is not solely a matter of volume, but a combination of factors, including specialization, that influences outcomes (PUBMED:25783775, PUBMED:24396230, PUBMED:36806254, PUBMED:28736640, PUBMED:27083171).
Instruction: Is red blood cell rheology preserved during routine blood bank storage? Abstracts: abstract_id: PUBMED:20003060 Is red blood cell rheology preserved during routine blood bank storage? Background: Red blood cell (RBC) units stored for more than 2 weeks at 4 degrees C are currently considered of impaired quality. This opinion has primarily been based on altered RBC rheologic properties (i.e., enhanced aggregability, reduced deformability, and elevated endothelial cell interaction), during prolonged storage of nonleukoreduced RBC units. In this study, the rheologic properties and cell variables of leukoreduced RBC units, during routine blood bank storage in saline-adenine-glucose-mannitol, were investigated. Study Design And Methods: Ten leukoreduced RBC units were stored at the blood bank for 7 weeks at 4 degrees C. RBCs were tested weekly for aggregability, deformability, and other relevant variables. Results: RBC aggregability was significantly reduced after the first week of storage but recovered during the following weeks. After 7 weeks aggregability was slightly, but significantly, reduced (46.9 ± 2.4 to 44.3 ± 2.2 aggregation index). During storage the osmotic fragility was not significantly enhanced (0.47 ± 0.01% phosphate-buffered saline) and the deformability at shear stress of 3.9 Pa was not significantly reduced (0.36 ± 0.01 elongation index [EI]). The deformability at 50 Pa was reduced (0.58 ± 0.01 to 0.54 ± 0.01 EI) but remained within reference values (0.53 ± 0.04). During 5 weeks of storage, adenosine triphosphate was reduced by 54% whereas mean cell volume, pH, and mean cell hemoglobin concentration were minimally affected. Conclusions: RBC biochemical and physical alterations during storage minimally affected the RBC ability to aggregate and deform, even after prolonged storage. The rheologic properties of leukoreduced RBC units were well preserved during 7 weeks of routine blood bank storage. abstract_id: PUBMED:26969770 Erythropoietin reduces storage lesions and decreases apoptosis indices in blood bank red blood cells. Background: Recent evidence shows a selective destruction of the youngest circulating red blood cells (neocytolysis) triggered by a drop in erythropoietin levels. Objective: The aim of this study was to evaluate the effect of recombinant human erythropoietin beta on the red blood cell storage lesion and apoptosis indices under blood bank conditions. Methods: Each one of ten red blood cell units preserved in additive solution 5 was divided into two volumes of 100 mL and assigned to one of two groups: erythropoietin (addition of 665 IU of recombinant human erythropoietin) and control (isotonic buffer solution was added). The pharmacokinetic parameters of erythropoietin were estimated and the following parameters were measured weekly, for six weeks: Immunoreactive erythropoietin, hemolysis, percentage of non-discocytes, adenosine triphosphate, glucose, lactate, lactate dehydrogenase, and annexin-V/esterase activity. The t-test or Wilcoxon's test was used for statistical analysis with significance being set for a p-value <0.05. Results: Erythropoietin, when added to red blood cell units, has a half-life >6 weeks under blood bank conditions, with persistent supernatant concentrations of erythropoietin during the entire storage period. Adenosine triphosphate was higher in the Erythropoietin Group in Week 6 (4.19±0.05μmol/L vs. 3.53±0.02μmol/L; p-value=0.009).
The number of viable cells in the Erythropoietin Group was higher than in the Control Group (77%±3.8% vs. 71%±2.3%; p-value <0.05), while the number of apoptotic cells was lower (9.4%±0.3% vs. 22%±0.8%; p-value <0.05). Conclusions: Under standard blood bank conditions, an important proportion of red blood cells satisfy the criteria of apoptosis. Recombinant human erythropoietin beta seems to improve storage lesion parameters and mitigate apoptosis. abstract_id: PUBMED:23818106 Effect of blood bank storage on the rheological properties of male and female donor red blood cells. It was previously demonstrated that red blood cell (RBC) deformability progressively decreases during storage along with other changes in RBC mechanical properties. Recently, we reported that the magnitude of changes in RBC mechanical fragility associated with blood bank storage in a variety of additive solutions was strongly dependent on the donor gender [15]. Yet, the potential dependence of changes in the deformability and relaxation time of stored blood bank RBCs on donor gender is not known. The objective of this study was to determine the effects of donor gender and blood bank storage on RBC deformability and relaxation time through the measurement of RBC suspension viscoelasticity. Packed RBC units preserved in AS-5 solution from 12 male and 12 female donors (three from each ABO group) were obtained from the local blood center and tested at 1, 4 and 7 weeks of storage at 1-6°C. At each time point, samples were aseptically removed from RBC units and hematocrit was adjusted to 40% before assessment of cell suspension viscoelasticity. RBC suspensions from both genders demonstrated progressive increases (p < 0.05) in viscosity, elasticity and relaxation time at equivalent shear rates over seven weeks of storage, indicating a decrease in RBC deformability. No statistically significant differences in RBC deformability or relaxation time were observed between male and female RBCs at any storage time. The decrease in RBC deformability during blood bank storage may reduce tissue perfusion and RBC lifespan in patients receiving blood bank RBCs. abstract_id: PUBMED:36694766 Effects of storage duration of suspended red blood cells before intraoperative infusion on coagulation indexes, routine blood examination and immune function in patients with gastrointestinal tumors. Objective: To investigate the effect of storage duration of suspended red blood cells (SRBC) before intraoperative infusion on coagulation indexes, routine blood examination and immune function in patients with gastrointestinal (GI) tumors. Methods: We divided clinical data of one hundred patients with GI tumors who underwent surgical treatment in our hospital into two different groups according to the storage duration of the SRBC used for intraoperative infusion. The short-term group (n=50) had patients with SRBC storage durations shorter than two weeks, and the long-term group (n=50) had patients with storage durations longer than two weeks. We compared coagulation, immune function, routine blood profile, electrolyte levels and adverse reaction assessment results between the two groups. Results: The levels of fibrinogen (FIB) and the activated partial thromboplastin time (APTT) were higher after blood transfusion than before transfusion (P<0.05). The levels of hemoglobin (Hb) and hematocrit (HCT) in the two groups after blood transfusions were also higher than those before transfusion (P<0.05).
However, the levels of CD4+ decreased and those of CD8+ increased in both groups after the blood transfusions. In addition, the levels of CD4+ and CD4+/CD8+ in the short-term group were higher than those of the long-term group (P<0.05) while the CD8+ levels were lower than that of the long-term group (P<0.05). After the blood transfusions, the potassium ion (K+) levels in the two groups increased, and those in the long-term group were higher than in the short-term group (P<0.05). The sodium ion (Na+) levels in the two groups increased after the transfusions, and the short-term group had higher levels than the long-term group (P<0.05). Finally, the incidence of adverse reactions in the short-term group (4.00%) was lower than that in the long-term group (18.00%) (P<0.05). Conclusion: Intraoperative infusion of SRBC with storage duration longer than two weeks increases the risk of perioperative adverse transfusion reactions, which implies that the storage duration of SRBC should be strictly controlled in clinical practice to reduce the risk of blood transfusion. abstract_id: PUBMED:25800014 Targeted quantitative phosphoproteomic analysis of erythrocyte membranes during blood bank storage. One of the hallmarks of blood bank stored red blood cells (RBCs) is the irreversible transition from a discoid to a spherocyte-like morphology with membrane perturbation and cytoskeleton disorders. Therefore, identification of the storage-associated modifications in the protein-protein interactions between the cytoskeleton and the lipid bilayer may contribute to enlighten the molecular mechanisms involved in the alterations of mechanical properties of stored RBCs. Here we report the results obtained analyzing RBCs after 0, 21 and 35 days of storage under standard blood banking conditions by label free mass spectrometry (MS)-based experiments. We could quantitatively measure changes in the phosphorylation level of crucial phosphopeptides belonging to β-spectrin, ankyrin-1, α-adducin, dematin, glycophorin A and glycophorin C proteins. Data have been validated by both western blotting and pseudo-Multiple Reaction Monitoring (MRM). Although each phosphopeptide showed a distinctive trend, a sharp increase in the phosphorylation level during the storage duration was observed. Phosphopeptide mapping and structural modeling analysis indicated that the phosphorylated residues localize in protein functional domains fundamental for the maintenance of membrane structural integrity. Along with previous morphological evidence acquired by electron microscopy, our results seem to indicate that 21-day storage may represent a key point for the molecular processes leading to the erythrocyte deformability reduction observed during blood storage. These findings could therefore be helpful in understanding and preventing the morphology-linked mechanisms responsible for the post-transfusion survival of preserved RBCs. abstract_id: PUBMED:1403236 Blood rheology in arterial hypertension. DETERMINANTS OF BLOOD RHEOLOGY: Blood flow depends on driving pressure and a resistance factor, the latter being related to geometrical hindrance and to the intrinsic viscosity of the blood. Since whole blood is non-Newtonian in nature, blood viscosity is strongly dependent on shear conditions. Low-shear areas occur in cardiovascular disease, and therefore the interaction between blood viscosity and flow conditions may affect vascular disorders. 
Increased shear stress secondary to increased viscosity may produce endothelial activation and release of endothelium-derived relaxing factors, leading to flow-dependent vasodilation. All the determinants of blood rheology, including plasma protein and erythrocyte factors may be altered in patients with arterial hypertension. BLOOD RHEOLOGY IN HYPERTENSION: A hyperviscosity state is created which is associated with an unfavourable prognosis, since it is correlated with blood pressure levels and the severity and complications of the disease including left ventricular hypertrophy. The mechanisms of haemorheological abnormalities in hypertension are still unclear. It is not known whether blood rheology is an independent variable in patients with hypertension or whether it is a covariable with other established indices of heterogeneity. However, many aetiopathological changes identified in hypertensive disease may contribute to the observed changes in blood rheology. Haemorheological changes in hypertension, through complex interactions with platelet activation and endothelial function, may contribute to the development of thrombosis and atherosclerosis. Moreover, in acute and chronic ischaemia and other conditions where compensatory mechanisms such as collateral formation and vasodilation are limited, rheological factors may become important determinants of blood flow and tissue oxygenation. TREATMENT EFFECTS: Many antihypertensive agents have direct or indirect potential effects on haemorheological variables. However, to date, most studies that have investigated the effects of therapy on rheological variables have not been performed in clinically relevant situations. Controlled studies that monitor both the acute and longterm effects of antihypertensive drugs on relevant haemorheological variables are required to show whether specific therapeutic approaches can correct abnormalities in blood rheology. abstract_id: PUBMED:3676231 Blood rheology in vegetarians. 1. Blood rheology has been quantified by measuring blood and plasma viscosity, packed cell volume (PCV), erythrocyte filterability and erythrocyte aggregation in forty-eight voluntary vegetarians and compared with matched controls. 2. Results show that in vegetarians, values for PCV were lower than those in controls, leading to reduced native blood viscosity. In addition PCV-standardized blood viscosity was also decreased. This was brought about mostly by lower plasma viscosity. Erythrocyte rheology seemed to be unaltered. Stricter avoidance of animal products was associated with even lower values for these indices. 3. These observations are in agreement with the fact that other low-cardiovascular-risk groups show better than average blood fluidity. They are consistent with the hypothesis that in vitro measurements of blood rheology may provide signs of early atherosclerotic changes in vivo. abstract_id: PUBMED:3396231 Blood rheology in IgA nephropathy. Blood rheology was measured in 35 patients with IgA nephropathy (IgAN) and compared with age and sex-matched normal controls. Whole blood viscosity, red blood cell deformability and plasma viscosity were significantly altered in patients with IgAN. A correlation between determinants of blood rheology and clinical indices of IgAN was found. The factor(s) causing the rheological abnormalities was (were) not defined. Increased blood viscosity may cause intraglomerular pressure to rise and filtration to increase and may contribute to the development of mesangial lesions. 
It is proposed that abnormal blood rheology may be a causal factor in the pathogenesis of IgAN. abstract_id: PUBMED:27148539 A Comparative Study of the Effect of Leukoreduction and Pre-storage Leukodepletion on Red Blood Cells during Storage. Blood transfusion is a fundamental therapy in numerous pathological conditions. Regrettably, many clinical reports describe adverse transfusion outcomes attributable to red blood cell alterations during storage. Thus, the ability of a blood bank to improve the quality of erythrocyte concentrate units is crucial to improving clinical results and reducing adverse transfusion events. Leukodepletion is a pre-storage treatment recognized to preserve the quality of red blood cells better than leukoreduction. The aim of this work is to unravel the biochemical and biophysical basis that sustains the good clinical outcomes associated with the use of leukodepleted erythrocyte units. Erythrocyte concentrates were prepared as leukoreduced (n = 8) and pre-storage leukodepleted (n = 8) and then studied during 6 weeks in blood bank conditions. Overall, the data indicate that leukodepletion not only provides red blood cells with an appropriate amount of nutrients for a longer time but also selects red blood cells characterized by a more resilient plasma membrane, fit to prolong their viability. We believe these results will stimulate new ideas to further optimize the current storage protocols. abstract_id: PUBMED:2238568 Erythrocyte endogenous proteinase activity during blood bank storage. We studied proteolytic alterations of membrane proteins in ghosts derived from human red blood cells, preserved up to 35 days in the liquid state either as whole blood or with additive solution. The study was carried out by performing sodium dodecyl sulfate polyacrylamide gel electrophoresis of stromal proteins from erythrocytes, either previously treated with proteinase inhibitors or previously incubated in conditions promoting proteolysis. To differentiate the effect of erythrocyte from granulocyte proteinases, the investigation was also carried out in leukocyte-free red cell preparations. The results show: (1) the effects of endogenous proteinases on membrane proteins derived from red cells stored under blood bank conditions; (2) a decrease of proteolytic effects in ghosts derived from red cells which had been subjected to longer storage; (3) a relevant influence of the red cell resuspending medium before lysis on the time-dependent onset and exhaustion of proteolysis in ghosts. The presence of increased proteolysis in ghosts could be regarded as a marker of molecular lesions induced in red cells by storage under blood bank conditions. Answer: Red blood cell (RBC) rheology appears to be relatively well preserved during routine blood bank storage, although some changes do occur. A study investigating the rheologic properties and cell variables of leukoreduced RBC units stored in saline-adenine-glucose-mannitol for 7 weeks at 4 degrees C found that RBC aggregability was significantly reduced after the first week of storage but recovered during the following weeks. After 7 weeks, aggregability was slightly but significantly reduced. Osmotic fragility was not significantly enhanced, and deformability at a shear stress of 3.9 Pa was not significantly reduced. However, deformability at 50 Pa was reduced but remained within reference values.
Biochemical alterations during storage minimally affected the RBC ability to aggregate and deform, even after prolonged storage (PUBMED:20003060). Another study showed that the addition of recombinant human erythropoietin beta to RBC units under blood bank conditions resulted in higher adenosine triphosphate levels and a higher number of viable cells, as well as lower numbers of apoptotic cells after 6 weeks of storage, suggesting that erythropoietin may improve storage lesion parameters and mitigate apoptosis (PUBMED:26969770). Research on the effects of donor gender and blood bank storage on RBC deformability and relaxation time indicated that RBC suspensions from both genders demonstrated progressive increases in viscosity, elasticity, and relaxation time at equivalent shear rates over seven weeks of storage, indicating a decrease in RBC deformability. No statistically significant differences in RBC deformability or relaxation time were observed between male and female RBCs at any storage time (PUBMED:23818106). In summary, while there are some alterations in RBC rheology during blood bank storage, such as reduced deformability and changes in aggregability, these changes are generally minimal, and the rheologic properties of leukoreduced RBC units are preserved well enough to maintain their functionality for transfusion purposes. However, the addition of erythropoietin may further improve the quality of stored RBCs by reducing storage lesions and apoptosis.
Instruction: The minimum data set pressure ulcer indicator: does it reflect differences in care processes related to pressure ulcer prevention and treatment in nursing homes? Abstracts: abstract_id: PUBMED:12919231 The minimum data set pressure ulcer indicator: does it reflect differences in care processes related to pressure ulcer prevention and treatment in nursing homes? Objectives: To determine whether nursing homes (NHs) that score in the extreme quartiles of pressure ulcer (PU) prevalence as reported on the Minimum Data Set (MDS) PU quality indicator provide different PU care. Design: Descriptive, cohort. Setting: Sixteen NHs. Participants: Three hundred twenty-nine NH residents at risk for PU development as determined by the PU Resident Assessment Protocol of the MDS. Measurements: Sixteen care process quality indicators (10 specific to PU care processes, five related to nutrition, and one related to incontinence management) were scored using medical record data, direct human observation, interviews, and data from wireless thigh movement monitors. Results: There were no differences between homes with low- and high-PU prevalence rates reported on the MDS PU quality indicator on most care processes. NHs with high PU prevalence rates used pressure-reduction surfaces more frequently and were better at documentation of four wound characteristics when PUs were present. No measure of PU care processes was better in low-PU NHs. Neither low- nor high-PU prevalence NHs routinely repositioned residents every 2 hours, even though 2-hour repositioning was documented in the medical record for nearly all residents. Conclusion: The assumption that homes with fewer PUs and thus low PU prevalence according to the MDS PU quality indicator are providing better PU care was not supported in this sample. NHs that scored low on the MDS PU quality indicator did not provide significantly better care than NHs that scored high. All NHs could improve PU prevention, as evidenced by the poor performance on prevention care processes by low- and high-PU NHs. The MDS PU quality indicator is not a useful measure of the quality of PU care in NHs and can be misleading if not presented with an explanation of the meaning of the indicator. abstract_id: PUBMED:25862410 The cost of pressure ulcer prevention and treatment in hospitals and nursing homes in Flanders: A cost-of-illness study. Introduction: The economic impact of pressure ulcer prevention and treatment is high. The results of cost-of-illness studies can assist the planning, allocation, and priority setting of healthcare expenditures to improve the implementation of preventive measures. Data on the cost of current practice of pressure ulcer prevention or treatment in Flanders, a region of Belgium, is lacking. Aim: To examine the cost of pressure ulcer prevention and treatment in an adult population in hospitals and nursing homes from the healthcare payer perspective. Design: A cost-of-illness study was performed using a bottom-up approach. Settings: Hospitals and nursing homes in Flanders, a region of Belgium. Methods: Data were collected in a series of prospective multicentre cross-sectional studies between 2008 and 2013. Data collection included data on risk assessment, pressure ulcer prevalence, preventive measures, unit cost of materials for prevention and treatment, nursing time measurements for activities related to pressure ulcer prevention and treatment, and nursing wages.
The cost of pressure ulcer prevention and treatment in hospitals and nursing homes was calculated as annual cost for Flanders, per patient, and per patient per day. Results: The mean (SD) cost for pressure ulcer prevention was €7.88 (8.21) per hospitalised patient at risk per day and €2.15 (3.10) per nursing home resident at risk per day. The mean (SD) cost of pressure ulcer prevention for patients and residents identified as not at risk for pressure ulcer development was €1.44 (4.26) per day in hospitals and €0.50 (1.61) per day in nursing homes. The main cost driver was the cost of labour, responsible for 79-85% of the cost of prevention. The mean (SD) cost of local treatment per patient per day varied between €2.34 (1.14) and €77.36 (35.95) in hospitals, and between €2.42 (1.15) and €16.18 (4.93) in nursing homes. Conclusions: Related to methodological differences between studies, the cost of pressure ulcer prevention and treatment in hospitals and nursing homes in Flanders was found to be low compared to other international studies. Recommendations specific to pressure ulcer prevention are needed as part of methodological guidelines to conduct cost-of-illness studies. abstract_id: PUBMED:30389338 Minimum Data Set for Incontinence-Associated Dermatitis (MDS-IAD) in adults: Design and pilot study in nursing home residents. Study Aim: The aim of this study was to develop a Minimum Data Set for Incontinence-Associated Dermatitis (MDS-IAD), to psychometrically evaluate and pilot test the instrument in nursing homes. Comparable to the MDS for pressure ulcers, the MDS-IAD aims to collect epidemiological data and evaluate the quality of care. Materials And Methods: After designing and content/face validation by experts and clinicians, staff nurses assessed 108 residents (75.9% female, 77.8% double incontinent) in a convenience sample of five wards. A second nurse independently assessed fifteen residents to calculate inter-rater agreement (p0) and reliability [Cohen's Kappa (κ)]. Results: The κ-value for 'urinary incontinence' was 0.68 [95% confidence interval (CI) 0.37-0.99] and 0.55 (95% CI 0.27-0.82) for 'faecal incontinence'. The p0 for severity categorisation according to the Ghent Global IAD Categorisation Tool (GLOBIAD) was 0.60. IAD was diagnosed in 21.3% of the residents. IAD management mainly involved the application of a leave-on product (66.7%), no-rinse foams (49.1%), toilet paper (47.9%), and water and soap (38.8%). Fully adequate prevention or treatment was provided to 3.6% and 8.7% of the residents, respectively. Conclusion: This instrument provides valuable insights into IAD prevalence at organisational level, will allow benchmarking between organisations, and will support policy makers. Future testing in other healthcare settings is recommended. abstract_id: PUBMED:32285486 The effectiveness of a pressure injury prevention program for nursing assistants in private for-profit nursing homes: A cluster randomized controlled trial. Aim: To examine the effectiveness of a pressure injury prevention program for private for-profit nursing homes. Design: This study was a two-arm cluster randomized controlled trial. Ten private for-profit nursing homes made up the clusters. Methods: The participants were nursing home residents aged 60 or above, regardless of whether or not they had pre-existing pressure injuries, and three types of nursing home assistants who provided direct care to the residents in 10 private for-profit nursing homes.
These 10 nursing homes were randomly assigned to either the experimental or the control group. There were 477 and 536 resident participants and 51 and 62 nursing assistant participants in the experimental and control groups, respectively. The residents were the study participants and the nursing assistant participants were the interveners. The experimental group had the pressure injury prevention program implemented, while the control group received the usual care. The primary study outcome, which was the pressure injury incidence, was analysed by generalized estimating equations (GEE). Significance was set at a p-value of ≤.05. The data were collected between September 2017 and March 2018. Result: There were significant interactive effects of time and group on the incidence of pressure injuries (p = .0015) and on the skill performance of the nursing assistant participants (p < .0001). Conclusions: An evidence-based pressure injury prevention program reduced the development of pressure injuries and improved the skill performance of the nursing assistant participants. It is highly recommended that private for-profit nursing homes with a high proportion of non-professional nursing assistants and insufficient nurses adopt this program for improving the prevention care of pressure injuries. Impact: This research has an impact on pressure injury prevention care in private for-profit nursing homes with a high proportion of non-professional nursing assistants, which share similar characteristics with the nursing homes studied, in various regions and countries. Trial Registration: The Controlled Trial registration ID is NCT02270385. abstract_id: PUBMED:20367815 Impact of prevention structures and processes on pressure ulcer prevalence in nursing homes and acute-care hospitals. Aim: The study aimed to give a descriptive analysis of pressure ulcer-related structures, processes and outcomes in nursing homes and acute-care hospitals that participated once, twice and thrice in prevalence surveys. Design: Repeated nationwide, multicentre, cross-sectional surveys have been conducted. Methods: A total of 7377 residents in 60 nursing homes and 28,102 patients in 82 acute-care hospitals in Germany participated in annual point prevalence surveys. Because of their strong differences in sampling and in order to be able to display the differences occurring during the repeated (first, second and third) participation of the institutions, they were arranged according to their frequency and order of participation. The percentage of used guidelines and risk assessment scales (structures), prevention devices and measures (processes), and prevalence and nosocomial prevalence (outcomes) among all persons at risk was calculated. Results: The samples within the arranged groups showed no clinically relevant demographical differences. Nosocomial prevalence rates in hospitals dropped from 26.3% in the first year to 11.3% in the last year (nursing homes from 13.7% to 6.4%). The use of pressure ulcer-related structures remarkably increased during each repetition to more than 90%. Regarding the use of preventive measures and devices as an indicator for pressure ulcer-related processes, results were more incoherent.
Conclusion: Repeated participation in pressure ulcer surveys led to a decrease in outcomes (lower pressure ulcer prevalence rates), to high opposite effects regarding pressure ulcer-related structures (increased use of all guidelines/risk assessment scales) and to moderate adverse effects regarding pressure ulcer-related processes (increased use of most preventive measures and devices). abstract_id: PUBMED:27059825 CNA Training Requirements and Resident Care Outcomes in Nursing Homes. Purpose Of The Study: To examine the relationship between certified nursing assistant (CNA) training requirements and resident outcomes in U.S. nursing homes (NHs). The number and type of training hours vary by state since many U.S. states have chosen to require additional hours over the federal minimums, presumably to keep pace with the increasing complexity of care. Yet little is known about the impact of the type and amount of training CNAs are required to have on resident outcomes. Design And Methods: Compiled data on 2010 state regulatory requirements for CNA training (clinical, total initial training, in-service, ratio of clinical to didactic hours) were linked to 2010 resident outcomes data from 15,508 NHs. Outcomes included the following NH Compare Quality Indicators (QIs) (Minimum Data Set 3.0): pain, antipsychotic use, falls with injury, depression, weight loss and pressure ulcers. Facility-level QIs were regressed on training indicators using generalized linear models with the Huber-White correction, to account for clustering of NHs within states. Models were stratified by facility size and adjusted for case-mix, ownership status, percentage of Medicaid-certified beds and urban-rural status. Results: A higher ratio of clinical to didactic hours was related to better resident outcomes. NHs in states requiring clinical training hours above federal minimums (i.e., >16 hr) had significantly lower odds of adverse outcomes, particularly pain, falls with injury, and depression. Total and in-service training hours also were related to outcomes. Implications: Additional training providing clinical experiences may aid in identifying residents at risk. This study provides empirical evidence supporting the importance of increased requirements for CNA training to improve quality of care. abstract_id: PUBMED:32471633 Sex-specific differences in prevention and treatment of institutional-acquired pressure ulcers in hospitals and nursing homes. Introduction: Gender and/or sex have a major impact on staying healthy, becoming ill, or becoming care dependent. Differences between men and women have been described for socioeconomic positions, health behaviors, courses and severities of diseases and mortality rates. Consequently, sex and/or gender need to be adequately taken into account while developing and implementing evidence-based healthcare. Evidence regarding differences between men and women in pressure ulcer care is limited. Our research aim was to measure possible differences between male and female hospital patients and nursing home residents in prevention and treatment of institutional-acquired pressure ulcers. Methods: A secondary data analysis was conducted including data sets collected in nursing homes and hospitals in Germany annually from 2001 to 2016. Relevant variables were compared according to biological sex (men/women). Results: The study included 38,655 nursing home residents (mean age 85.4 years women, 77.3 years men) and 58,760 hospital patients (mean age 66.7 years women, 63.4 years men).
More women were underweight and at pressure ulcer risk in both settings. The proportion of institutional-acquired pressure ulcers was higher for men in hospitals. Slightly more men had a PU of category 2 to 4 (OR 0.87, 95% CI 0.76 to 0.99) in nursing homes or developed an institutional-acquired pressure ulcer of category 2 to 4 in both settings (OR 0.85, 95% CI 0.76 to 0.95). Special mattresses were more often used by women at PU risk. More men with an institutional-acquired pressure ulcer in hospitals received counseling of relatives (OR 0.53, 95% CI 0.39 to 0.72). Conclusion: Although slightly more men had institutional-acquired pressure ulcers than women, overall differences regarding pressure ulcer occurrence were minor. Gender and/or sex can therefore hardly be considered an independent risk factor for pressure ulcer development, and differences regarding pressure ulcer prevention interventions seem to be minor. abstract_id: PUBMED:15894247 Quality improvement in nursing homes in Texas: results from a pressure ulcer prevention project. Background: Pressure ulcer prevalence, cost, associated mortality, and potential for litigation are major clinical problems in nursing homes despite guidelines for prevention and treatment. Objective: To improve the use of pressure ulcer prevention procedures at nursing homes in Texas through implementation of process of care system changes in collaboration with a state quality improvement organization (QIO). Design: Preintervention and postintervention measurement of performance for process of care quality indicators and of pressure ulcer incidence rates. Setting: Twenty nursing homes in Texas. Participants: Quality improvement teams at participating nursing homes. Measurement: Data were abstracted from medical records on performance measures (quality indicators) and pressure ulcer incidence rates between November 2000 and August 2002. Descriptive and inferential statistics were used. Interventions: Process of care system changes consisting of tools and education to prevent pressure ulcers were introduced to participating nursing homes. Results: Participating nursing homes showed statistically significant improvement in 8 out of 12 quality indicators. Pressure ulcer incidence rates also decreased, although not quite significantly. Furthermore, facilities with the greatest improvement in quality indicator scores had significantly lower pressure ulcer incidence rates than the facilities with the least improvement in quality indicator scores (S = 131.0, P = .03). This suggests that the interventions not only positively affected the process of care but also led to a decrease in pressure ulcer incidence. Conclusions: These results show that nursing homes in a collaborative effort with a QIO were able to improve their processes of care. Although significant improvement was noted on most of the quality indicators, opportunity remains for further improvement. Furthermore, these results suggest that implementation of process of care system changes by nursing homes in a collaborative relationship with a QIO may yield improvements in measures of patient outcome (e.g., pressure ulcer incidence). abstract_id: PUBMED:19024039 Improving pressure ulcer prevention in nursing care homes. This article describes a tissue viability program in nursing care homes in Coventry, using a variety of methods to improve standards in pressure ulcer prevention, including clinical practice, benchmarking, supervision, training, education and clinical assessments.
abstract_id: PUBMED:25851826 Bodily pain intensity in nursing home residents with pressure ulcers: analysis of national minimum data set 3.0. Clinical reports suggest that superficial pressure ulcers produce pain, but that pain decreases as the wound advances in stage. This study of the relationship between pressure ulcer stage and bodily pain intensity in nursing home residents was a secondary analysis of the national Minimum Data Set 3.0 assessment data in long-term care facilities, collected from nursing home residents at least 65 years of age. Data were examined from residents with pressure ulcers who completed a bodily pain intensity interview between January and March 2012 (N = 41,680) as part of the MDS comprehensive assessment. After adjusting for other variables (e.g., cognition, functional impairment, presence of comorbidities, use of scheduled pain medication, and sociodemographic variables), bodily pain intensity for those with more severe pressure ulcers in comparison to those with Stage I ulcers was higher by 11% (Stage II), 14% (Stage III), 24% (Stage IV), and 22% (suspected deep tissue injury). Because multivariate analysis showed that greater bodily pain intensity was associated with an advanced stage of pressure ulcer, health care providers should assess bodily pain intensity and order appropriate pain management for nursing home residents with pressure ulcers, particularly for those with advanced pressure ulcers who are vulnerable to greater bodily pain intensity. Answer: The Minimum Data Set (MDS) pressure ulcer indicator does not necessarily reflect differences in care processes related to pressure ulcer prevention and treatment in nursing homes. A study examining whether nursing homes with extreme quartiles of pressure ulcer (PU) prevalence reported on the MDS provided different PU care found no significant differences in most care processes between homes with low and high PU prevalence rates. The study concluded that the assumption that homes with fewer PUs and thus low PU prevalence according to the MDS are providing better PU care was not supported. It was noted that neither low- nor high-PU prevalence nursing homes routinely repositioned residents every 2 hours, despite this being documented in medical records for nearly all residents. The study suggested that all nursing homes could improve PU prevention and that the MDS PU quality indicator is not a useful measure of the quality of PU care in nursing homes, as it can be misleading if not presented with an explanation of its meaning (PUBMED:12919231).
Instruction: Are patients with bipolar affective disorder socially disadvantaged? Abstracts: abstract_id: PUBMED:12190713 Are patients with bipolar affective disorder socially disadvantaged? A comparison with a control group. Background: Bipolar affective disorder (BPAD) has considerable implications for personal and social functioning. However a tendency to be over-represented in high socio-economic classes has been reported in earlier studies, suggesting that social disadvantage accompanying the illness is not severe. In addition, an association between affective disorders in general and increased residential mobility has been suggested, but it is unclear if such an association exists with BPAD. Objectives: (1) To investigate the suggestion made in previous studies that patients with bipolar disorder are advantaged socially. (2) To test the hypothesis that patients with bipolar disorder show greater residential mobility compared with other patients with psychiatric disorders. Methods: Ninety patients with DSM IV diagnosis of bipolar disorder admitted to the acute in-patient unit of a public-financed district psychiatric service in Dublin were compared with a control group of 91 randomly selected patients with other psychiatric diagnoses, excluding schizophrenia. Socio-economic, educational and employment ratings were compared, and also duration of illness, frequency of admission and residential mobility. The data were collected retrospectively from case notes and through semistructured interviews with patients or their relatives. The bipolar group was compared with the control group and to the unipolar depression subgroup. Results: The bipolar and control groups were found to have similar demographic and socio-economic features, although the bipolar group had more years of education compared with the whole control group but not when compared with the unipolar depression group. The bipolar group showed longer duration of psychiatric disorder, more frequent hospital admissions and more frequent residential moves since the onset of the illness. Conclusion: Bipolar patients requiring in-patient care in this service experience severe disruption to their lives over prolonged periods. abstract_id: PUBMED:38314526 Social cognition and social motivation in schizophrenia and bipolar disorder: are impairments linked to the disorder or to being socially isolated? Background: People with schizophrenia on average are more socially isolated, lonelier, have more social cognitive impairment, and are less socially motivated than healthy individuals. People with bipolar disorder also have social isolation, though typically less than that seen in schizophrenia. We aimed to disentangle whether the social cognitive and social motivation impairments observed in schizophrenia are a specific feature of the clinical condition v. social isolation generally. Methods: We compared four groups (clinically stable patients with schizophrenia or bipolar disorder, individuals drawn from the community with self-described social isolation, and a socially connected community control group) on loneliness, social cognition, and approach and avoidance social motivation. Results: Individuals with schizophrenia (n = 72) showed intermediate levels of social isolation, loneliness, and social approach motivation between the isolated (n = 96) and connected control (n = 55) groups. However, they showed significant deficits in social cognition compared to both community groups. 
Individuals with bipolar disorder (n = 48) were intermediate between isolated and control groups for loneliness and social approach. They did not show deficits on social cognition tasks. Both clinical groups had higher social avoidance than both community groups. Conclusions: The results suggest that social cognitive deficits in schizophrenia, and high social avoidance motivation in both schizophrenia and bipolar disorder, are distinct features of the clinical conditions and not byproducts of social isolation. In contrast, differences between clinical and control groups on levels of loneliness and social approach motivation were congruent with the groups' degree of social isolation. abstract_id: PUBMED:36609729 Bipolar disorders in Nigeria: a mixed-methods study of patients, family caregivers, clinicians, and the community members' perspectives. Background: Bipolar Disorders (BDs) are chronic mental health disorders that often result in functional impairment and contribute significantly to the disability-adjusted life years (DALY). BDs are historically under-researched compared to other mental health disorders, especially in Sub-Saharan Africa and Nigeria. Design: We adopted a mixed-methods design. Study 1 examined the public knowledge of BDs in relation to sociodemographic outcomes using quantitative data whilst Study 2 qualitatively assessed the lived experiences of patients with BDs, clinicians, and family caregivers. Methods: In Study 1, a non-clinical sample of n = 575 participants responded to a compact questionnaire that examined their knowledge of BDs and how they relate to certain sociodemographic variables. One-way ANOVA was used to analyse quantitative data. Study 2 interviewed N = 15 participants (n = 5 patients with BDs; n = 7 clinicians; n = 3 family caregivers). These semi-structured interviews were audio-recorded, transcribed, and thematically analysed. Results: In Study 1, findings showed no statistically significant differences, suggesting low awareness of BDs, especially among vulnerable populations such as young people and older adults. However, there was a trajectory in increased knowledge of BDs among participants between the ages of 25-44 years and part-time workers compared to other ages and employment statuses. In Study 2, qualitative findings showed that BDs are perceived to be genetically and psycho-socially induced by specific lived experiences of patients and their family caregivers. Although psychotropic medications and psychotherapy are available treatment options in Nigeria, cultural and religious beliefs were significant barriers to treatment uptake. Conclusions: This study provides insight into knowledge and beliefs about BDs, including the lived experiences of patients with BDs, their caregivers and clinicians in Nigeria. It highlights the need for further studies assessing Nigeria's feasibility and acceptability of culturally adapted psychosocial interventions for patients with BDs. abstract_id: PUBMED:26890335 Living with bipolar disorder: the impact on patients, spouses, and their marital relationship. Objectives: Patients with bipolar disorder are characterized by an unusually high divorce rate. As such, the purpose of the present study was to uncover information relating specifically to the impact of bipolar disorder on patients and spouses individually, and on the marital relationship from the perspectives of both patients and spouses. 
Methods: Eleven patients with bipolar disorder and ten spouses were interviewed separately about the impact of bipolar disorder on their lives and on their marital relationship. Data were analyzed using the grounded theory method. Results: The impact of bipolar disorder for spouses included self-sacrifice, caregiving burden, emotional impact, and a sense of personal evolution. The impact of bipolar disorder on patients included an emotional impact, responsibility for self-care, and struggling socially and developmentally. When comparing patient and spouse perspectives on the impact of the disorder, neither the patient nor the spouse was able to accurately assess the impact of the disorder on their partner's lives. The impact of bipolar disorder on the relationship included volatility in the relationship, strengthening the relationship, weakening the relationship, and family planning. Conclusions: The research indicated that patients and partners alike struggle with the tremendous impact of bipolar disorder on their lives and on their relationships. Given the high rates of divorce and volatility in these relationships, healthcare professionals can provide (or refer to) emotional and practical support both to patients and spouses on their own, and as a couple in their clinics. abstract_id: PUBMED:38298008 An ERP Study of Face Processing in Schizophrenia, Bipolar Disorder, and Socially Isolated Individuals from the Community. People with schizophrenia (SCZ) and bipolar disorder (BD) have impairments in processing social information, including faces. The neural correlates of face processing are widely studied with the N170 ERP component. However, it is unclear whether N170 deficits reflect neural abnormalities associated with these clinical conditions or differences in social environments. The goal of this study was to determine whether N170 deficits would still be present in SCZ and BD when compared with socially isolated community members. Participants included 66 people with SCZ, 37 with BD, and 125 community members (76 "Community-Isolated"; 49 "Community-Connected"). Electroencephalography was recorded during a face processing task in which participants identified the gender of a face, the emotion of a face (angry, happy, neutral), or the number of stories in a building. We examined group differences in the N170 face effect (greater amplitudes for faces vs buildings) and the N170 emotion effect (greater amplitudes for emotional vs neutral expressions). Groups significantly differed in levels of social isolation (Community-Isolated > SCZ > BD = Community-Connected). SCZ participants had significantly reduced N170 amplitudes to faces compared with both community groups, which did not differ from each other. The BD group was intermediate and did not differ from any group. There were no significant group differences in the processing of specific emotional facial expressions. The N170 is abnormal in SCZ even when compared to socially isolated community members. Hence, the N170 seems to reflect a social processing impairment in SCZ that is separate from level of social isolation. abstract_id: PUBMED:15780673 Validating affective temperaments in their subaffective and socially positive attributes: psychometric, clinical and familial data from a French national study. Background: One of the major objectives of the French National EPIDEP Study was to show the feasibility of systematic assessment of bipolar II (BP-II) disorder and beyond. 
In this report we focus on the utility of the affective temperament scales (ATS) in delineating this spectrum in its clinical as well as socially desirable expressions. Methods: Forty-two psychiatrists working in 15 sites in four regions of France made semi-structured diagnoses based on DSM IV criteria in a sample of 452 consecutive major depressive episode (MDE) patients (from which bipolar I had been removed). At least 1 month after entry into the study (when the acute depressive phase had abated), they assessed affective temperaments by using a French version of the precursor of the Temperament Evaluation of Memphis, Pisa, Paris and San Diego (TEMPS). Principal component analyses (PCA) were conducted on hyperthymic (HYP-T), depressive (DEP-T) and cyclothymic (CYC-T) temperament subscales as assessed by clinicians, and on a self-rated cyclothymic temperament (CYC-TSR). Scores on each of the temperament subscales were compared in unipolar (UP) major depressive disorder versus BP-II patients, and in the entire sample subdivided on the basis of family history of bipolarity. Results: PCAs showed the presence of a global major factor for each clinician-rated subscale with respective eigenvalues of the correlation matrices as follows: 7.1 for HYP-T, 6.0 for DEP-T, and 4.7 for CYC-T. Likewise, on the self-rated CYC-TSR, the PCA revealed one global factor (with an eigenvalue of 6.6). Each of these factors represented a melange of both affect-laden and adaptive traits. The scores obtained on clinician and self-ratings of CYC-T were highly correlated (r=0.71). The scores of HYP-T and CYC-T were significantly higher in the BP-II group, and DEP-T in the UP group (P<0.001). Finally, CYC-T scores were significantly higher in patients with a family history of bipolarity. Conclusion: These data uphold the validity of the affective temperaments under investigation in terms of face, construct, clinical and family history validity. Despite uniformity of depressive severity at entry into the EPIDEP study, significant differences on ATS assessment were observed between UP and BP-II patients in this large national cohort. Self-rating of cyclothymia proved reliable. Adding the affective temperaments-in particular, the cyclothymic-to conventional assessment methods of depression, a more enriched portrait of mood disorders emerges. More provocatively, our data reveal socially positive traits in clinically recovering patients with mood disorders. abstract_id: PUBMED:34232514 Characterization and interrelationships of theory of mind, socially competitive emotions and affective empathy in bipolar disorder. Objective: Evidence shows impaired theory of mind (ToM) in patients with bipolar disorder (BD), yet research examining its cognitive and affective components simultaneously is sparse. Moreover, recognition of socially competitive 'fortune of others' emotions (e.g. envy/gloat) may be related to ToM, but has not been assessed in BD. Finally, if and how ToM and 'fortune of others' emotions relate to affective empathy in BD is currently unclear. This study aimed to address these points. Methods: 64 BD patients and 34 healthy controls completed the Yoni task, a visual task assessing first- and second-order cognitive and affective ToM as well as 'fortune of others' emotions. The Toronto Empathy Questionnaire was used to assess self-reported affective empathy. Results: Patients with BD showed no deficits in cognitive and affective ToM or recognition of 'fortune of others' emotions. 
The ability to infer 'fortune of others' emotions correlated with several ToM measures, indicating that these functions are part of the same system. Patients with BD reported similar levels of affective empathy to healthy controls, and this was not related to ToM or 'fortune of others' emotions, suggesting that affective empathy represents a separate social domain. Conclusions: These findings highlight areas of spared social functioning in BD, which may be utilized in therapeutic strategies. Practitioner Points: Our results suggest theory of mind and empathy may represent areas of potentially spared cognitive functioning in BD. As many BD patients have experienced adversity during developmental periods in which theory of mind and empathy develop, our findings suggest that these abilities may be markers of resilience in the disorder. Our findings are important for the formulation of therapeutic interventions for BD, which may include considering practical ways that a patients' knowledge of intact ToM and empathy could be utilized to reduce self-stigma and promote self-efficacy, improved well-being and functioning. abstract_id: PUBMED:36204253 Bio-behavioural changes in treatment-resistant socially isolated FSL rats show variable or improved response to combined fluoxetine-olanzapine versus olanzapine treatment. Background: Exposure of Flinders Sensitive Line (FSL) rats to post-weaning social isolation rearing (SIR) causes depressive- and social anxiety-like symptoms resistant to, or worsened by, fluoxetine. SIR typically presents with psychotic-like symptoms, while the paradoxical response to fluoxetine suggests unaddressed psychotic-like manifestations. Psychotic depression (MDpsy) is invariably treatment resistant. To further explore the mood-psychosis continuum in fluoxetine resistant FSL-SIR rats (Mncube et al., 2021), mood-, psychotic-, anxiety-, and social-related behaviour and biomarker response to antidepressant/antipsychotic treatment was studied in FSL-SIR rats. Methods: Sprague Dawley (SD) and FSL pups were subjected to social rearing or SIR from postnatal day (PND) 21. Thereafter FSL-SIR rats received olanzapine (5 mg/kg x 14 days) or olanzapine+fluoxetine (OLZ+FLX; 5 mg/kg + 10 mg/kg for 14 days) from PND 63. Psychotic-like, depressive, anxiety, and social behaviour were assessed from PND 72, versus saline-treated FSL-SIR rats, using the prepulse inhibition (PPI), forced swim, open field and social interaction tests. Post-mortem cortico-hippocampal norepinephrine (NE), serotonin (5-HT), and dopamine (DA), as well as plasma corticosterone and dopamine-beta-hydroxylase levels were evaluated. Results: SD-SIR and FSL-SIR rats present with significant depressive-like behaviour (p < 0.01) as well as significantly reduced sensorimotor gating (p < 0.01), although exacerbation versus SIR alone was not observed. Anxiety was significant in FSL-SIR (p < 0.01) but not SD-SIR rats. No deficit in social behaviour was evident. Cortico-hippocampal monoamines (NE, 5-HT, DA; p < .05) and dopamine beta hydroxylase (d = 1.13) were reduced in FSL-SIR rats, less so in SD-SIR rats. Except for dopamine-beta-hydroxylase, these deficits were reversed by both olanzapine and OLZ+FLX (p < 0.01). OLZ+FLX was superior to reverse hippocampal NE and DA changes (p < 0.01). However, OLZ (p < .05) and OLZ+FLX (p < .01) worsened depressive-like behaviour and failed to reverse PPI deficits in FSL-SIR rats. 
Conclusion: SIR-exposed FSL rats display worsened anxiety, as well as depressive and psychotic-like symptoms, variably responsive to olanzapine or OLZ+FLX. Depleted monoamines are reversed by OLZ+FLX, less so by olanzapine. FSL-SIR rats show promising face and construct but limited predictive validity for MDpsy, perhaps more relevant for bipolar disorder. abstract_id: PUBMED:22840614 The dominance behavioral system and manic temperament: motivation for dominance, self-perceptions of power, and socially dominant behaviors. Unlabelled: The dominance behavioral system has been conceptualized as a biologically based system comprising motivation to achieve social power and self-perceptions of power. Biological, behavioral, and social correlates of dominance motivation and self-perceived power have been related to a range of psychopathological tendencies. Preliminary evidence suggests that mania and risk for mania (manic temperament) relate to the dominance system. Method: Four studies examine whether manic temperament, measured with the Hypomanic Personality Scale (HPS), is related to elevations in dominance motivation, self-perceptions of power, and engagement in socially dominant behavior across multiple measures. In Study 1, the HPS correlated with measures of dominance motivation and the pursuit of extrinsically-oriented ambitions for fame and wealth among 454 undergraduates. In Study 2, the HPS correlated with perceptions of power and extrinsically-oriented lifetime ambitions among 780 undergraduates. In Study 3, the HPS was related to trait-like tendencies to experience hubristic (dominance-related) pride, as well as dominance motivation and pursuit of extrinsically-oriented ambitions. In Study 4, we developed the Socially Dominant Behavior Scale to capture behaviors reflecting high power. The scale correlated highly with the HPS among 514 undergraduates. Limitations: The studies rely on self-ratings of manic temperament and dominance constructs, and findings have not yet been generalized to a clinical sample. Conclusions: Taken together, results support the hypothesis that manic temperament is related to a focus on achieving social dominance, ambitions related to achieving social recognition, perceptions of having achieved power, tendencies to experience dominance-related pride, and engagement in social behaviors consistent with this elevated sense of power. abstract_id: PUBMED:21234161 Insight and its relationship with stigma in psychiatric patients. Background: The literature on insight has paid insufficient attention to the social experiences that are associated with receiving and endorsing a diagnosis of mental illness. The psychological and behavioral commitments associated with insight extend beyond agreeing with a diagnosis and accepting treatment to include taking on the identity of an individual diagnosed with mental illness. This study sought to examine the relationship between insight and stigma in psychiatric patients. Materials And Methods: Cross-sectional assessment of insight and stigma was done using the system adopted by Kaplan and Sadock in their comprehensive textbook of psychiatry and Felt Stigma Scale in 100 psychiatric patients (40 patients suffering from Bipolar affective disorder, 30 Schizophrenics, 20 Substance dependents and 10 with Obsessive Compulsive disorder). Results: It was found that the level of stigma felt by patients with insight was significantly higher than that felt by patients without insight. 
Conclusion: Though there is a certain extent of stigma present in patients without insight, as is expected, the level of stigma increases as the patients develop insight. Answer: Patients with bipolar affective disorder (BPAD) do experience significant social disadvantages. The study comparing 90 patients with BPAD to a control group of 91 patients with other psychiatric diagnoses, excluding schizophrenia, found that while the demographic and socio-economic features were similar between the groups, the bipolar group had a longer duration of psychiatric disorder, more frequent hospital admissions, and more frequent residential moves since the onset of the illness (PUBMED:12190713). This suggests that bipolar patients requiring in-patient care experience severe disruption to their lives over prolonged periods, which can be indicative of social disadvantage. Additionally, individuals with bipolar disorder have been found to have higher social avoidance than community groups, which could contribute to social disadvantage (PUBMED:38314526). While they did not show deficits on social cognition tasks, the presence of high social avoidance motivation in bipolar disorder is a distinct feature of the clinical condition and not a byproduct of social isolation. This could impact their ability to engage in social interactions and maintain social connections, further contributing to social disadvantage. In Nigeria, a mixed-methods study showed that bipolar disorders are perceived to be genetically and psycho-socially induced, and cultural and religious beliefs were significant barriers to treatment uptake (PUBMED:36609729). This indicates that patients with bipolar disorder may face social disadvantages due to stigma and lack of understanding about the condition, which can hinder their ability to seek and receive appropriate treatment. Moreover, bipolar disorder has been associated with a high divorce rate, indicating that the disorder can have a significant impact on patients' relationships and marital stability (PUBMED:26890335). This can lead to social isolation and a lack of support systems, which are crucial for managing the disorder and maintaining social standing. In summary, patients with bipolar affective disorder are socially disadvantaged due to the severe disruption to their lives caused by the illness, higher social avoidance, barriers to treatment uptake due to cultural beliefs, and the impact on personal relationships such as marriage.
Instruction: Does the association between depressive symptoms and cardiovascular mortality risk vary by race? Abstracts: abstract_id: PUBMED:23530295 Does the association between depressive symptoms and cardiovascular mortality risk vary by race? Evidence from the Health and Retirement Study. Objective: To test whether the association between depressive symptoms and cardiovascular disease (CVD) mortality is stronger among Blacks than Whites. Design, Setting And Participants: 2,638 Black and 15,132 White participants from a prospective, observational study of community-dwelling Health and Retirement Study participants (a nationally representative sample of U.S. adults aged ≥50). Average follow-up was 9.2 years. Outcome Measure: Cause of death (per ICD codes) and month of death were identified from National Death Index linkages. Methods: The associations between elevated depressive symptoms and mortality from stroke, ischemic heart disease (IHD), or total CVD were assessed using Cox proportional hazards models to estimate adjusted hazard ratios (HRs). We used interaction terms for race by depressive symptoms to assess effect modification (multiplicative scale). Results: For both Whites and Blacks, depressive symptoms were associated with a significantly elevated hazard of total CVD mortality (Whites: HR=1.46; 95% CI: 1.33, 1.61; Blacks: HR=1.42, 95% CI: 1.10, 1.83). Adjusting for health and socioeconomic covariates, Whites with elevated depressive symptoms had a 13% excess hazard of CVD mortality (HR=1.13, 95% CI: 1.03, 1.25) compared to Whites without elevated depressive symptoms. The HR in Blacks was similar, although the confidence interval included the null (HR=1.12, 95% CI: .86, 1.46). The hazard associated with elevated depressive symptoms did not differ significantly by race (P>.15 for all comparisons). Patterns were similar in analyses restricted to respondents aged ≥65. Conclusion: Clinicians should consider the depressive state of either Black or White patients as a potential CVD mortality risk factor. abstract_id: PUBMED:21505153 Depressive symptoms and cardiovascular mortality in older black and white adults: evidence for a differential association by race. Background: An emerging body of research suggests that depressive symptoms may confer an "accelerated risk" for cardiovascular disease (CVD) in blacks compared with whites. Research in this area has been limited to cardiovascular risk factors and early markers; less is known about black-white differences in associations with important clinical end points. Methods And Results: The authors examined the association between depressive symptoms and overall CVD mortality, ischemic heart disease (IHD) mortality, and stroke mortality in a sample of 6158 (62% black; 61% female) community-dwelling older adults. Cox proportional hazards models were used to model time-to-CVD, IHD, and stroke death over a 9- to 12-year follow-up. In race-stratified models adjusted for age and sex, elevated depressive symptoms were associated with CVD mortality in blacks (hazard ratio [HR], 1.95; 95% confidence interval [CI], 1.61 to 2.36; P<0.001) but were not significantly associated with CVD mortality in whites (HR, 1.26; 95% CI, 0.95 to 1.68; P=0.11; race by depressive symptoms interaction, P=0.03).
Similar findings were observed for IHD mortality (black: HR, 1.99; 95% CI, 1.49 to 2.64; P<0.001; white: HR, 1.28; 95% CI, 0.86 to 1.89; P=0.23) and stroke mortality (black: HR, 2.08; 95% CI, 1.32 to 3.27; P=0.002; white: HR, 1.32; 95% CI, 0.69 to 2.52; P=0.40). Findings for total CVD mortality and IHD mortality were attenuated but remained significant after adjusting for standard risk factors. Findings for stroke were reduced to marginal significance. Conclusions: Elevated depressive symptoms were associated with multiple indicators of CVD mortality in older blacks but not in whites. Findings were not completely explained by standard risk factors. Efforts aimed at reducing depressive symptoms in blacks may ultimately prove beneficial for their cardiovascular health. abstract_id: PUBMED:36958666 Association between depressive symptoms and the risk of all-cause and cardiovascular mortality among US adults. Depression is a preventable and treatable mental health condition. Therefore, there are important clinical implications for identifying people with the highest mortality risk in a nationally representative sample. This study included 26,207 participants aged ≥18 years from the 2005-2014 National Health and Nutrition Examination Survey in USA. We investigated the association between depressive symptoms (defined as Patient Health Questionnaire 9 scores ≥10) and all-cause and cardiovascular disease (CVD) mortalities, adjusted for multiple factors (sociodemographic in Model 1, behavioral added in Model 2, and metabolic syndrome added in Model 3) and stratified by age and sex. During an average follow-up of 69.15 months (standard deviation [SD] 34.45), 1872 (7.3%) participants had died (person-years in the non-depressive and depressive groups, 12.12/1000 and 16.43/1000, respectively). Depressive symptoms increased all-cause (crude hazard ratio [HR] 1.37, 95% confidence interval [CI], 1.33-1.58) and CVD mortalities (crude HR 1.64, 95% CI, 1.20-2.24). Although the significance of all-cause mortality and CVD mortality was maintained in Models 1 (HR 1.58 and 2.08) and 2 (HR 1.48 and 1.79), it was not maintained in Model 3. Current smoking and lower physical activity were associated with reduced strength of the association between depression and all-cause mortality risk. The effect of depression on mortality risk was particularly pronounced in middle-aged men and older women. Our findings suggest that depressive symptoms increase mortality risk, even after adjusting for behavioral factors. Depression-induced mortality risk is particularly high among middle-aged men and older women. abstract_id: PUBMED:31276018 Do Race and Everyday Discrimination Predict Mortality Risk? Evidence From the Health and Retirement Study. Everyday discrimination is a potent source of stress for racial minorities, and is associated with a wide range of negative health outcomes, spanning both mental and physical health. Few studies have examined the relationships linking race and discrimination to mortality in later life. We examined the longitudinal association among race, everyday discrimination, and all-cause mortality in 12,081 respondents participating in the Health and Retirement Study. 
Cox proportional hazards models showed that everyday discrimination, but not race, was positively associated with mortality; depressive symptoms and lifestyle factors partially accounted for the relationship between everyday discrimination and mortality; and race did not moderate the association between everyday discrimination and mortality. These findings contribute to a growing body of evidence on the role that discrimination plays in shaping the life chances, resources, and health of people, and, in particular, minority members, who are continuously exposed to unfair treatment in their everyday lives. abstract_id: PUBMED:32981424 Time-Varying Depressive Symptoms and Cardiovascular and All-Cause Mortality: Does the Risk Vary by Age or Sex? Background Depressive symptoms are associated with mortality. Data regarding moderation of this effect by age and sex are inconsistent, however. We aimed to identify whether age and sex modify the association between depressive symptoms and all-cause and cardiovascular disease (CVD) mortality. Methods and Results The REGARDS (Reasons for Geographic and Racial Differences in Stroke) study is a prospective cohort of Black and White individuals recruited between 2003 and 2007. Associations between time-varying depressive symptoms (Center for Epidemiologic Studies Depression scale score ≥4 versus <4) and all-cause and CVD mortality were measured using Cox proportional hazard models adjusting for demographic and clinical risk factors. All results were stratified by age or sex and by self-reported health status. Of 29 491 participants, 3253 (11%) had baseline elevated depressive symptoms. Mean age was 65 (9.4) years, with 55.1% of participants female, 41.1% Black, and 46.4% had excellent/very good health. Depressive symptoms were measured at baseline, on average 4.9 (SD, 1.5), then 2.1 (SD, 0.4) years later. Neither age nor sex moderated the association between elevated time-varying depressive symptoms and all-cause or CVD mortality (all-cause: age 45-64 years adjusted hazard ratio [aHR], 1.38; 95% CI, 1.18-1.61 versus age ≥65 years aHR,1.36; 95% CI, 1.23-1.50; P=0.05; CVD: age 45-64 years aHR, 1.17; 95% CI, 0.90-1.53 versus age ≥65 years aHR, 1.26; 95% CI, 1.06-1.50; P=0.54; all-cause: males aHR, 1.46; 95% CI, 1.29-1.64 versus female aHR, 1.34; 95% CI, 1.19-1.50; P=0.35; CVD: male aHR, 1.32; 95% CI, 1.08-1.62 versus female aHR, 1.22; 95% CI, 1.00-1.47; P=0.64). Similar results were observed when stratified by self-reported health status. Conclusions Depressive symptoms confer mortality risk regardless of age and sex, including individuals who report excellent/very good health. abstract_id: PUBMED:29997533 Association of Depression and Anxiety With the 10-Year Risk of Cardiovascular Mortality in a Primary Care Population of Latvia Using the SCORE System. Background: Depression and anxiety have been recognized as independent risk factors for both the development and prognosis of cardiovascular (CV) diseases (CVD). The Systematic Coronary Risk Evaluation (SCORE) function measures the 10-year risk of a fatal CVD and is a crucial tool for guiding CV patient management. This study is the first in Latvia to investigate the association of depression and anxiety with the 10-year CV mortality risk in a primary care population. Methods: This cross-sectional study was conducted at 24 primary care facilities. 
During a 1-week period in 2015, all consecutive adult patients were invited to complete a nine-item Patient Health Questionnaire (PHQ-9) and a seven-item Generalized Anxiety Disorder scale (GAD-7) followed by sociodemographic questionnaire and physical measurements. The diagnostic Mini International Neuropsychiatric Interview (M.I.N.I.) was administered by telephone in the period of 2 weeks after the first contact at the primary care facility. A hierarchical multivariate analysis was performed. Results: The study population consisted of 1,569 subjects. Depressive symptoms (PHQ-9 ≥10) were associated with a 1.57 (95% confidence interval (CI): 1.06-2.33) times higher odds of a very high CV mortality risk (SCORE ≥10%), but current anxiety disorder (M.I.N.I.) reduced the CV mortality risk with an odds ratio of 0.58 (95% CI: 0.38-0.90). Conclusions: Our findings suggest that individuals with SCORE ≥10% should be screened and treated for depression to potentially delay the development and improve the prognosis of CVD. Anxiety could possibly have a protective influence on CV prognosis. abstract_id: PUBMED:22347196 Latent constructs in psychosocial factors associated with cardiovascular disease: an examination by race and sex. This study examines race and sex differences in the latent structure of 10 psychosocial measures and the association of identified factors with self-reported history of coronary heart disease (CHD). Participants were 4,128 older adults from the Chicago Health and Aging Project. Exploratory factor analysis (EFA) with oblique geomin rotation was used to identify latent factors among the psychosocial measures. Multi-group comparisons of the EFA model were conducted using exploratory structural equation modeling to test for measurement invariance across race and sex subgroups. A factor-based scale score was created for invariant factor(s). Logistic regression was used to test the relationship between the factor score(s) and CHD adjusting for relevant confounders. Effect modification of the relationship by race-sex subgroup was tested. A two-factor model fit the data well (comparative fit index = 0.986; Tucker-Lewis index = 0.969; root mean square error of approximation = 0.039). Depressive symptoms, neuroticism, perceived stress, and low life satisfaction loaded on Factor I. Social engagement, spirituality, social networks, and extraversion loaded on Factor II. Only Factor I, re-named distress, showed measurement invariance across subgroups. Distress was associated with a 37% increased odds of self-reported CHD (odds ratio: 1.37; 95% confidence intervals: 1.25, 1.50; p-value < 0.0001). This effect did not differ by race or sex (interaction p-value = 0.43). This study identified two underlying latent constructs among a large range of psychosocial variables; only one, distress, was validly measured across race-sex subgroups. This construct was robustly related to prevalent CHD, highlighting the potential importance of latent constructs as predictors of cardiovascular disease. abstract_id: PUBMED:23898807 Sleep duration and sleep disturbances partly explain the association between depressive symptoms and cardiovascular mortality: the Whitehall II cohort study. Depressive symptoms are associated with an increased risk of death, but most of this association remains unexplained. Our aim was to explore the contribution of sleep duration and disturbances to the association between depressive symptoms, all-cause and cardiovascular disease mortality. 
A total of 5813 participants (4220 men and 1593 women) of the British Whitehall II prospective cohort study, aged 50-74 years at baseline, were included. Depressive symptoms, sleep duration and disturbances were assessed in 2003-04. Mortality was ascertained through linkage to the national mortality register until August 2012, with a mean follow-up of 8.8 years. Depressive symptoms were associated with an increased risk of mortality from all causes [hazard ratio (HR) = 1.51; 95% confidence interval (CI): 1.16-1.97] and cardiovascular diseases (HR = 1.63; 95% CI: 1.01-2.64) after adjustment for sociodemographic characteristics. Further adjustment for sleep duration and disturbances reduced the association between depressive symptoms and cardiovascular mortality by 21% (HR = 1.53; 95% CI: 0.91-2.57). Sleep seems to have a role, as a mediator or confounder, in explaining the association between depressive symptoms and cardiovascular mortality. These findings need replication in larger studies with longer follow-up. abstract_id: PUBMED:27822616 Perceived Neighborhood Safety Better Predicts Risk of Mortality for Whites than Blacks. Aim: The current study had two aims: (1) to investigate whether single-item measures of subjective evaluation of neighborhood (i.e., perceived neighborhood safety and quality) predict long-term risk of mortality and (2) to test whether these associations depend on race and gender. Methods: The data came from the Americans' Changing Lives Study (ACL), 1986-2011, a nationally representative longitudinal cohort of 3361 Black and White adults in the USA. The main predictors of interest were perceived neighborhood safety and perceived neighborhood quality, as measured in 1986 using single items and treated as dichotomous variables. Mortality due to all internal and external causes was the main outcome. Confounders included baseline age, socioeconomic status (education, income), health behaviors (smoking, drinking, and exercise), and health (chronic medical conditions, self-rated health, and depressive symptoms). Race and gender were focal effect modifiers. Cox proportional hazard models were run in the pooled sample and stratified by race and gender. Results: In the pooled sample, low perceived neighborhood safety and quality predicted increased risk of mortality due to all causes as well as internal causes, net of all covariates. Significant interaction was found between race and perceived neighborhood safety on all-cause mortality, indicating a stronger association for Whites compared to Blacks. Race did not interact with perceived neighborhood quality on mortality. Gender also did not interact with perceived neighborhood safety or quality on mortality. Perceived neighborhood safety and quality were not associated with mortality due to external causes. Conclusion: Findings suggest that single items are appropriate for the measurement of perceived neighborhood safety and quality. Our results also suggest that perceived neighborhood safety better predicts increased risk of mortality over the course of 25 years among Whites than Blacks.
We sought to determine whether race (white/black) and income (</≥$35,000) moderate the association between TVDS and mortality in a large cohort. Methods: The REGARDS study is a prospective cohort study among community-dwelling U.S. adults aged 45 years or older. Cox proportional hazard models were constructed to separately analyze the association between mortality (all cause, cardiovascular death, noncardiovascular death, and cancer death) and TVDS in race and income stratified models. Results: Point estimates were similar and statistically significant for white (aHR = 1.24 [95% CI: 1.10, 1.41]), black (aHR = 1.26 [95% CI: 1.11, 1.42]), and low-income participants (aHR = 1.28 [95% CI: 1.16, 1.43]) for the association between TVDS and mortality. High-income participants had a lower hazard (aHR = 1.19 [95% CI: 1.02, 1.38]). Baseline depressive symptoms predicted mortality in blacks only (aHR = 1.17, 95% CI: [1.00, 1.35]). Conclusions: We found that TVDS significantly increased the immediate hazard of mortality similarly across race and income strata. TVDS may provide more robust evaluations of depression impact compared with the baseline measures, making apparent racial disparities cited in the extant literature a reflection of the imperfection of using baseline measures. Answer: The association between depressive symptoms and cardiovascular mortality risk does not consistently vary by race according to the provided abstracts. Some studies found no significant difference in the association by race, while others reported a differential association. For instance, one study found that depressive symptoms were associated with a significantly elevated hazard of total cardiovascular disease (CVD) mortality for both Whites and Blacks, with no significant difference in the hazard associated with elevated depressive symptoms by race (PUBMED:23530295). This suggests that the association between depressive symptoms and CVD mortality does not vary significantly between these racial groups. Conversely, another study reported that elevated depressive symptoms were associated with multiple indicators of CVD mortality in older Blacks but not in Whites, even after adjusting for standard risk factors (PUBMED:21505153). This finding suggests a differential association by race, with depressive symptoms conferring an "accelerated risk" for CVD in Blacks compared to Whites. Another study that included a nationally representative sample of U.S. adults found that depressive symptoms increased all-cause and CVD mortalities, with the effect of depression on mortality risk being particularly pronounced in middle-aged men and older women (PUBMED:36958666). However, this study did not specifically address the variation of this association by race. Additionally, a study from the Health and Retirement Study found that everyday discrimination, but not race, was positively associated with mortality, and race did not moderate the association between everyday discrimination and mortality (PUBMED:31276018). This suggests that factors such as discrimination may play a role in health outcomes, but the study did not directly address the interaction between depressive symptoms and race in relation to cardiovascular mortality. In summary, the evidence from the provided abstracts is mixed, with some studies suggesting that the association between depressive symptoms and cardiovascular mortality risk does not vary significantly by race, while others indicate a stronger association for Blacks compared to Whites.
Instruction: Can Decision Support Help Patients With Spinal Stenosis Make a Treatment Choice? Abstracts: abstract_id: PUBMED:27018897 Can Decision Support Help Patients With Spinal Stenosis Make a Treatment Choice?: A Prospective Study Assessing the Impact of a Patient Decision Aid and Health Coaching. Study Design: A prospective, randomized study on patients with lumbar spinal stenosis who received a decision support intervention to facilitate their treatment choice. Objective: The aim of this study was to assess the impact of telephone health coaching (HC) in addition to a video decision aid (DA) compared with a DA alone for patients with spinal stenosis. Summary Of Background Data: Treatment options for lumbar spinal stenosis include surgical and nonsurgical approaches. Patient DAs and HC have been shown to help patients make an informed treatment choice consistent with personal preferences. Methods: Eligible patients with spinal stenosis were identified by an orthopedic surgeon or a nonsurgical spine specialist. Consenting participants were randomly assigned to either a video DA or a video DA along with HC (DA + HC). Patients completed baseline and follow-up questionnaires at 2 weeks, and 6 months after the decision support intervention(s). Results: Ninety-eight patients were randomized to the DA + HC group and 101 to the DA-only group; 168 of 199 (84%) patients completed responses at all time points. Both groups showed improved understanding of spinal stenosis treatments and progress in decision making after watching the DA (P < 0.001). At 2 weeks, more patients in the coaching group had made a treatment decision (DA + HC 74% vs. DA only 52%, P < 0.01). At 6-month follow-up, the uptake of surgery was similar for both groups (DA + HC 21% had surgery vs. DA only 17%); satisfaction with the treatments received was similar for both groups (DA + HC, 84% satisfied vs. DA only, 85%). Conclusion: These results suggest that watching the video DA improved patient knowledge and reduced decisional uncertainty about their spinal stenosis treatment choice. The addition of telephone coaching helped some patients choose a treatment more quickly; 6-month decisional outcomes were similar for both groups. Level Of Evidence: 3. abstract_id: PUBMED:33965775 How people with lumbar spinal stenosis make decisions about treatment: A qualitative study using the Health Belief Model. Objective: Surgery rates for lumbar spinal stenosis (LSS) have increased despite inherent risks, high reoperation rates, and a lack of evidence for benefit over conservative treatment. Scant research has investigated how people make decisions about treatment, which may help clinicians better support patients during the course of care. The purpose of the present study was to explore the beliefs of people with LSS and how they make decisions about treatment. Design: Cross-sectional qualitative study. Methods: Semi-structured individual interviews were conducted with participants who had LSS (based on diagnostic imaging and recent symptoms). Transcribed interview data was analyzed using directed content analysis informed by the Health Belief Model. Results: Twelve patients (mean age 75.3 years, range 63-87 years, 9 female, 6 with previous LSS surgery) participated. The Health Belief Model appeared useful for explaining decisions about treatment. Perceived threat of LSS was higher in those who had surgery. Patients who decided on surgery perceived themselves as more susceptible to surgery, often because of pathoanatomical beliefs. 
These patients had lower perceived control over symptoms and the treatment decision itself. Although patients saw benefit in conservative treatment because of its lower risk and ability to foster self-management, many had no or poor education and reported previous experiences with ineffective conservative treatment. Conclusion: Patients with LSS make decisions about treatment by weighing the perceived threat of LSS against the perceived barriers and benefits of conservative treatment. Consistent and nonthreatening educational messages from clinicians may help these patients during their decision-making process. abstract_id: PUBMED:28314703 Factors and concerns of patients that influence the decision for spinal surgery and implications for practice: A review of literature. Study Design: A literature review. Objectives: To identify the factors and concerns that influence the decision of patients to undergo spinal surgery. Methods: Electronic databases MEDLINE, PsycINFO, CINAHL plus, and Embase were searched for relevant studies published from 2000 to 2015. The keywords for the search included: spine surgery OR spinal stenosis AND decision making OR consideration OR preference OR willingness OR concern. Seven quantitative studies met the criteria for inclusion and were included in this review. Results: The findings showed that patients were more likely to decide on surgery when they were suffering from severe bodily pain, poor physical function, poor psychosocial health and a higher level of functional disability. Concerns that affected the patients' decision on whether or not to opt for surgery were: the benefits weighed against the perceived risks of different modalities of treatment, the effectiveness of medical treatments, their level of satisfaction with their symptoms and a preference for autonomy or a reliance on the opinion of medical professionals. The findings relating to patient characteristics and preference for surgery were inconsistent. Conclusion: Patients go through a complex and a multi-factorial process in making the decision whether or not to undergo surgery, which calls for decision support interventions that will help them to make the decision. abstract_id: PUBMED:22426453 Predictors of treatment choice in lumbar spinal stenosis: a spine patient outcomes research trial study. Study Design: A retrospective cohort study. Objective: In this article, we examined the Spine Patient Outcomes Research Trial lumbar stenosis observational cohort to determine baseline patient characteristics that are predictive of the treatment patients chose. We also evaluated cutoff points on validated patient questionnaires that differentiate patients who chose surgery from those who chose nonsurgical management. Summary Of Background Data: Although the evidence from current studies suggests that surgical intervention is effective for lumbar spinal stenosis, the same studies show that nonoperative patients also improve. Thus, the reasons for patients choosing surgery versus nonoperative care are of continuing interest. Methods: Baseline patient and clinical characteristics between those who received operative intervention and those who received nonoperative care were compared to determine baseline predictors of lumbar spinal stenosis management. 
Also, an evaluation of responses to the 36-Item Short Form Health Survey Bodily Pain (BP), 36-Item Short Form Health Survey Physical Function (PF), and the modified Oswestry Disability Index (ODI) questionnaires was performed to determine the percentage of patients choosing surgical versus nonoperative care relative to their initial questionnaire values. Results: This analysis looked at the 356 patients in the observational spinal stenosis cohort of Spine Patient Outcomes Research Trial who completed at least 1 follow-up visit. Patients choosing surgery were younger (P = 0.022), had worse BP (P < 0.001), worse PF (P < 0.001), worse ODI (P < 0.001), worse Stenosis Bothersomeness Index (P < 0.001), were dissatisfied with their symptoms (P = 0.001), and had a worse self-assessed health trend (P < 0.001). Patients tended to choose surgery if they had lateral recess stenosis (P = 0.022). Kaplan-Meier curves demonstrate that patients with a BP score of 32 or less, PF score of 30 or less, and ODI greater than 29 chose surgery 75% of the time. Conclusion: A greater understanding of baseline characteristics that influence patient choices in the treatment of lumbar spinal stenosis can aid the patient and the surgeon during the shared decision-making process. abstract_id: PUBMED:21358485 Effects of viewing an evidence-based video decision aid on patients' treatment preferences for spine surgery. Study Design: Secondary analysis within a large clinical trial. Objective: To evaluate the changes in treatment preference before and after watching a video decision aid as part of an informed consent process. Summary Of Background Data: A randomized trial with a similar decision aid in herniated disc patients had shown decreased rate of surgery in the video group, but the effect of the video on expressed preferences is not known. Methods: Subjects enrolling in the Spine Patient Outcomes Research Trial (SPORT) with intervertebral disc herniation, spinal stenosis, or degenerative spondylolisthesis at 13 multidisciplinary spine centers across the United States were given an evidence-based videotape decision aid viewed prior to enrollment as part of informed consent. Results: Of the 2505 patients, 86% (n = 2151) watched the video and 14% (n = 354) did not. Watchers shifted their preference more often than nonwatchers (37.9% vs. 20.8%, P < 0.0001) and more often demonstrated a strengthened preference (26.2% vs. 11.1%, P < 0.0001). Among the 806 patients whose preference shifted after watching the video, 55% shifted toward surgery (P = 0.003). Among the 617 who started with no preference, after the video 27% preferred nonoperative care, 22% preferred surgery, and 51% remained uncertain. Conclusion: After watching the evidence-based patient decision aid (video) used in SPORT, patients with specific lumbar spine disorders formed and/or strengthened their treatment preferences in a balanced way that did not appear biased toward or away from surgery. abstract_id: PUBMED:23025864 Clinical decision support and acute low back pain: evidence-based order sets. Low back pain is one of the most common reasons for visits to physicians in the ambulatory care setting. Estimated medical expenditures related to low back pain have increased disproportionately relative to the more modest increase in the prevalence of self-reported low back pain in the past decade. The increase in spine care expenditures has not been associated with improved patient outcomes. 
Evidence-based order templates presented in this article are designed to assist practitioners through the process of managing patients with acute low back pain. A logical method of choosing, developing, and implementing clinical decision support interventions is presented that is based on the best available scientific evidence. These templates may be reasonably expected to improve patient care, decrease inappropriate imaging utilization, reduce the inappropriate use of steroids and narcotics, and potentially decrease the number of inappropriate invasive procedures. abstract_id: PUBMED:29877995 Considering Spine Surgery: A Web-Based Calculator for Communicating Estimates of Personalized Treatment Outcomes. Study Design: Prospective evaluation of an informational web-based calculator for communicating estimates of personalized treatment outcomes. Objective: To evaluate the usability, effectiveness in communicating benefits and risks, and impact on decision quality of a calculator tool for patients with intervertebral disc herniations, spinal stenosis, and degenerative spondylolisthesis who are deciding between surgical and nonsurgical treatments. Summary Of Background Data: The decision to have back surgery is preference-sensitive and warrants shared decision making. However, more patient-specific, individualized tools for presenting clinical evidence on treatment outcomes are needed. Methods: Using Spine Patient Outcomes Research Trial data, prediction models were designed and integrated into a web-based calculator tool: http://spinesurgerycalc.dartmouth.edu/calc/. Consumer Reports subscribers with back-related pain were invited to use the calculator via email, and patient participants were recruited to use the calculator in a prospective manner following an initial appointment at participating spine centers. Participants completed questionnaires before and after using the calculator. We randomly assigned previously validated questions that tested knowledge about the treatment options to be asked either before or after viewing the calculator. Results: A total of 1256 consumer reports subscribers and 68 patient participants completed the calculator and questionnaires. Knowledge scores were higher in the postcalculator group compared to the precalculator group, indicating that calculator usage successfully informed users. Decisional conflict was lower when measured following calculator use, suggesting the calculator was beneficial in the decision-making process. Participants generally found the tool helpful and easy to use. Conclusion: Although the calculator is not a comprehensive decision aid, it does focus on communicating individualized risks and benefits for treatment options. Moreover, it appears to be helpful in achieving the goals of more traditional shared decision-making tools. It not only improved knowledge scores but also improved other aspects of decision quality. Level Of Evidence: 2. abstract_id: PUBMED:28763411 Patient Decision Aids Improve Decision Quality and Patient Experience and Reduce Surgical Rates in Routine Orthopaedic Care: A Prospective Cohort Study. Background: Patient decision aids are effective in randomized controlled trials, yet little is known about their impact in routine care. The purpose of this study was to examine whether decision aids increase shared decision-making when used in routine care. 
Methods: A prospective study was designed to evaluate the impact of a quality improvement project to increase the use of decision aids for patients with hip or knee osteoarthritis, lumbar disc herniation, or lumbar spinal stenosis. A usual care cohort was enrolled before the quality improvement project and an intervention cohort was enrolled after the project. Participants were surveyed 1 week after a specialist visit, and surgical status was collected at 6 months. Regression analyses adjusted for clustering of patients within clinicians and examined the impact on knowledge, patient reports of shared decision-making in the visit, and surgical rates. With 550 surveys, the study had 80% to 90% power to detect a difference in these key outcomes. Results: The response rates to the 1-week survey were 70.6% (324 of 459) for the usual care cohort and 70.2% (328 of 467) for the intervention cohort. There was no significant difference (p > 0.05) in any patient characteristic between the 2 cohorts. More patients received decision aids in the intervention cohort at 63.6% compared with the usual care cohort at 27.3% (p = 0.007). Decision aid use was associated with higher knowledge scores, with a mean difference of 18.7 points (95% confidence interval [CI], 11.4 to 26.1 points; p < 0.001) for the usual care cohort and 15.3 points (95% CI, 7.5 to 23.0 points; p = 0.002) for the intervention cohort. Patients reported more shared decision-making (p = 0.009) in the visit with their surgeon in the intervention cohort, with a mean Shared Decision-Making Process score (and standard deviation) of 66.9 ± 27.5 points, compared with the usual care cohort at 62.5 ± 28.6 points. The majority of patients received their preferred treatment, and this did not differ by cohort or decision aid use. Surgical rates were lower in the intervention cohort for those who received the decision aids at 42.3% compared with 58.8% for those who did not receive decision aids (p = 0.023) and in the usual care cohort at 44.3% for those who received decision aids compared with 55.7% for those who did not receive them (p = 0.45). Conclusions: The quality improvement project successfully integrated patient decision aids into a busy orthopaedic clinic. When used in routine care, decision aids are associated with increased knowledge, more shared decision-making, and lower surgical rates. Clinical Relevance: There is increasing pressure to design systems of care that inform and involve patients in decisions about elective surgery. In this study, the authors found that patient decision aids, when used as part of routine orthopaedic care, were associated with increased knowledge, more shared decision-making, higher patient experience ratings, and lower surgical rates. abstract_id: PUBMED:31226911 Comparison of Three Measures of Shared Decision Making: SDM Process_4, CollaboRATE, and SURE Scales. Objective. If shared decision making (SDM) is to be part of quality assessment, it is necessary to have good measures of SDM. The purpose of this study is to compare the psychometric performance of 3 short patient-reported measures of SDM. Methods. Patients who met with a specialist to discuss possible surgery for hip or knee osteoarthritis (hips/knees), lumbar herniated disc, or lumbar spinal stenosis (backs) were surveyed shortly after the visit and again 6 months later. Some of the patients saw a patient decision aid (PDA) prior to the meeting. The 3 SDM measures were the SDM Process_4 (SDMP) survey, CollaboRATE, and SURE scale. 
The follow-up survey included measures of decision regret, satisfaction, and decision quality. Results. Patients in the sample (N = 649) had a mean age of 63.3 years, 51% were female, 60% were college educated, and there were more hip/knee patients than back patients (69% v. 31%). Forty-nine percent had surgery. For hips/knees, the SDMP and SURE scores were significantly associated with viewing all of the PDA compared with those who did not (P < 0.001), but not for CollaboRATE (P = 0.35). For backs, none of the scores were significantly associated with viewing all the PDA. All 3 scores were significantly associated with less regret and higher satisfaction (P < 0.001) for hips/knees. For backs, only SURE and CollaboRATE were significantly associated with less regret, and only SDMP was significantly associated with higher satisfaction. For hips/knees and backs, the SDMP and SURE scales were significantly associated with an informed patient-centered decision (P < 0.001), but this relationship was not significant for CollaboRATE (hips/knees: P = 0.24; backs: P = 0.25). Discussion. Each measure has some evidence of validity. SURE and SDMP better discriminate the use of PDAs and have higher decision quality. abstract_id: PUBMED:34837113 Prediction of outcome after spinal surgery-using The Dialogue Support based on the Swedish national quality register. Purpose: To evaluate the predictive precision of the Dialogue Support, a tool for additional help in shared decision-making before surgery of the degenerative spine. Methods: Data in Swespine (Swedish national quality registry) of patients operated between 2007 and 2019 found the development of prediction algorithms based on logistic regression analyses, where socio-demographic and baseline variables were included. The algorithms were tested in four diagnostic groups: lumbar disc herniation, lumbar spinal stenosis, degenerative disc disease and cervical radiculopathy. By random selection, 80% of the study population was used for the prediction of outcome and then tested against the actual outcome of the remaining 20%. Outcome measures were global assessment of pain (GA), and satisfaction with outcome. Results: Calibration plots demonstrated a high degree of concordance on a group level. On an individual level, ROC curves showed moderate predictive capacity with AUC (area under the curve) values 0.67-0.68 for global assessment and 0.6-0.67 for satisfaction. Conclusion: The Dialogue Support can serve as an aid to both patient and surgeon when discussing and deciding on surgical treatment of degenerative conditions in the lumbar and cervical spine. Level Of Evidence: I. Answer: Yes, decision support can help patients with spinal stenosis make a treatment choice. A prospective, randomized study assessing the impact of a patient decision aid (DA) and health coaching (HC) found that both interventions improved patient knowledge and reduced decisional uncertainty about their spinal stenosis treatment choice. The addition of telephone coaching helped some patients choose a treatment more quickly, although 6-month decisional outcomes were similar for both groups (PUBMED:27018897). A qualitative study using the Health Belief Model indicated that patients with lumbar spinal stenosis (LSS) make decisions about treatment by weighing the perceived threat of LSS against the perceived barriers and benefits of conservative treatment. 
Consistent and nonthreatening educational messages from clinicians may help these patients during their decision-making process (PUBMED:33965775). A literature review highlighted that patients go through a complex and multi-factorial process in making the decision whether or not to undergo surgery, which calls for decision support interventions to help them make the decision (PUBMED:28314703). A retrospective cohort study found that understanding baseline characteristics that influence patient choices in the treatment of lumbar spinal stenosis can aid the patient and the surgeon during the shared decision-making process (PUBMED:22426453). A secondary analysis within a large clinical trial showed that after watching an evidence-based patient decision aid (video), patients with specific lumbar spine disorders formed and/or strengthened their treatment preferences in a balanced way that did not appear biased toward or away from surgery (PUBMED:21358485). A web-based calculator designed to communicate estimates of personalized treatment outcomes was found to be helpful in achieving the goals of more traditional shared decision-making tools, improving knowledge scores and other aspects of decision quality (PUBMED:29877995). A prospective cohort study demonstrated that patient decision aids, when used as part of routine orthopaedic care, were associated with increased knowledge, more shared decision-making, higher patient experience ratings, and lower surgical rates (PUBMED:28763411). In summary, decision support tools, including patient decision aids, health coaching, and web-based calculators, have been shown to facilitate informed treatment choices for patients with spinal stenosis, leading to improved knowledge, reduced decisional conflict, and in some cases, lower surgical rates.
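Several of the trial results cited in this answer are comparisons of proportions between randomized arms, for example the 74% of the decision aid plus health coaching (DA + HC) arm versus 52% of the DA-only arm who had reached a treatment decision at 2 weeks. A minimal pooled two-proportion z-test reproduces that kind of comparison; note that the counts below are reconstructed from the reported percentages and group sizes (98 and 101 patients), so they are approximations rather than the trial's exact figures.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    # Pooled two-proportion z-test for H0: p1 == p2.
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return p1, p2, (p1 - p2) / se

# Approximate counts: 74% of 98 (DA + HC) vs. 52% of 101 (DA only).
p1, p2, z = two_proportion_ztest(round(0.74 * 98), 98, round(0.52 * 101), 101)
print(f"decided at 2 weeks: DA+HC {p1:.0%} vs DA-only {p2:.0%}, z = {z:.2f}")
```

The resulting z of roughly 3.2 corresponds to a two-sided p-value well below 0.01, in line with the reported P < 0.01 for the 2-week comparison.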
Instruction: Can sedentary behavior be made more active? Abstracts: abstract_id: PUBMED:31649369 "Active Guide" Brochure Reduces Sedentary Behavior of Elderly People: A Randomized Controlled Trial. The aim of this study was to examine in a randomized controlled trial how much the sedentary behavior (sitting time) of community-dwelling elderly Japanese subjects decreased as a result of using the "Active Guide" brochure published by the Ministry of Health, Labour and Welfare (2013) and additional documents related to the benefits of reducing sedentary behavior. A total of 86 elderly people who participated in health-club activities for one year were randomly allocated to two groups. Subjects in the intervention group received explanations of the importance of physical activity using the "Active Guide" brochure (n=42) and additional documents, while subjects in the control group did not (n=44). Physical activity was measured using a triaxial accelerometer for two weeks at baseline and again after one year. After one year of intervention, the difference in the sedentary behavior rate from baseline was -2.2% for the intervention group (n=40) and +2.5% for controls (n=40) (Welch's t-test, p=0.007). Use of the "Active Guide" brochure and additional documents may reduce the sedentary behavior of community dwelling elderly people in Japan. abstract_id: PUBMED:37593656 Linking sedentary behavior and mental distress in higher education: a cross-sectional study. Background: Sedentary behavior among university students could negatively affect their mental health. Objective: The aim of this study was to examine the relationship of mental health (anxiety and depression) and sedentary behavior between gender in Health Degrees at the University of Zaragoza. Design: Cross-sectional descriptive study. Participants: Sample of 257 University students who completed an online questionnaire. Methods: Sedentary behavior was assessed with the SBQ questionnaire. Anxiety and depression were assessed with the GADS questionnaire. The Mann-Whitney U test and multiple linear regression models were used. Results: In comparison to men, female students with symptoms of anxiety spend more time in total engaged in sedentary behaviors (10.56 ± 4.83) vs. (7.8 ± 3.28; p < 0.001) and mentally-passive sedentary activities [2.24 (1.57) vs. 1.15 (0.90; p < 0.005)]. Female students at risk of depression also spend more hours engaged in mentally-passive sedentary behaviors in comparison to men (8.28 ± 50.70 vs. 1.27 ± 1.02; p = 0.009). Conclusion: Female students at risk of anxiety and/or depression spend more time engaged in sedentary activities in comparison to male students. The risk of anxiety and depression is associated with the total number of hours a day spent engaged in sedentary behaviors and with mentally passive behaviors, but not mentally active behaviors. abstract_id: PUBMED:36753072 Sedentary Behavior, Dietary Habits, and Cardiometabolic Risk in Physically Active Children and Adolescents. Background: Sedentary behavior has been associated with several cardiometabolic risk factors during childhood. However, little is known about the impact of sedentary behavior on the health and eating habits of physically active children and adolescents. Objective: To evaluate the association between sedentary behavior and cardiometabolic risk factors and eating habits in physically active children and adolescents. 
Methods: In this cross-sectional study, 516 physically active children and adolescents (10 to 18 years old; both sexes) enrolled in the social project "Estação Conhecimento-Vale" were evaluated. Biochemical and lifestyle variables (questionnaire) were collected. Sedentary behavior was determined indirectly (questionnaire), by using sitting time ≥ 3 hours per day as a cutoff point. A p-value < 0.05 was considered statistically significant for all tests. Results: Sedentary behavior was not associated with overweight/obesity (odds ratio = 0.72 [95% confidence interval (CI): 0.325-1.389]), hypertriglyceridemia (odds ratio = 0.63 [95% CI: 0.306-1.297]), low HDL cholesterol (odds ratio = 0.57 [95% CI: 0.323-1.019]), or high non-HDL cholesterol (odds ratio = 0.63 [95% CI: 0.283-1.389]). However, children and adolescents with sedentary behavior were more likely to regularly consume food in front of the television (odds ratio = 1.96 [95% CI: 1.114-3.456]) and to consume at least one ultra-processed food per day (odds ratio = 2.42 [95% CI: 1.381-4.241]). In addition, they were less likely to consume fruit regularly (odds ratio = 0.52 [95% CI: 0.278-0.967]). Conclusion: There was no association between sedentary behavior and cardiometabolic risk factors in physically active children and adolescents. However, sedentary behavior was associated with inadequate eating habits. Thus, we may suggest that regular engagement in physical activity may attenuate the deleterious effects of sedentary behavior on the cardiometabolic parameters of children and adolescents. abstract_id: PUBMED:34120583 The effectiveness of two levels of active office interventions to reduce sedentary behavior in office workers: a mixed-method approach. Sedentary behavior (SB) rates are rising globally, especially during working hours. This research focused on the effectiveness of two levels of active office interventions to reduce SB in office workers. Participants were 78 nonacademic university employees divided into a control (CON) group and an intervention (INT) group. At the organizational level, it was found that the organizational health culture, the physical and social environment, and the organizational health behavior were dramatically changed. At the individual level, compared with the CON group, the INT group was significantly higher in the METs rate; light-intensity physical activity (LPA); and moderate-to-vigorous-intensity physical activity, and was lower in SB (CON, 397.30 ± 39.33 minutes vs. INT, 389.09 ± 37.59 minutes), all p < .05. The intervention was effective in changing health behavior related to SB of office workers at both the organizational and individual levels. abstract_id: PUBMED:27351099 Experimentally increasing sedentary behavior results in increased anxiety in an active young adult population. Introduction: Knowledge regarding the effects of sedentary behavior on anxiety has resulted mainly from observational studies. The purpose of this study was to examine the effects of a free-living, sedentary behavior-inducing randomized controlled intervention on anxiety symptoms. Methods: Participants confirmed to be active (i.e., acquiring 150 min/week of physical activity) via self-report and accelerometry were randomly assigned into a sedentary behavior intervention group (n=26) or a control group (n=13). For one week, the intervention group eliminated exercise and minimized steps to ≤5000 steps/day whereas the control group continued their normal physical activity levels.
Both groups completed the Overall Anxiety Severity Impairment Scale (OASIS) pre- and post-intervention, with higher OASIS scores indicating worse overall anxiety. The intervention group resumed normal physical activity levels for one week post-intervention and then completed the survey once more. Results: A significant group x time interaction effect was observed (F(1,37)=11.13; P=.002), with post-hoc contrast tests indicating increased OASIS scores in the intervention group in Visit 2 compared with Visit 1. That is, we observed an increase in anxiety levels when participants increased their sedentary behavior. OASIS scores significantly decreased from Visit 2 to Visit 3 (P=.001) in the intervention group. Conclusion: A one-week sedentary behavior-inducing intervention has deleterious effects on anxiety in an active, young adult population. To prevent elevated anxiety levels among active individuals, consistent regular physical activity may be necessary. Clinicians treating inactive patients who have anxiety may recommend a physical activity program in addition to any other prescribed treatment. abstract_id: PUBMED:34280406 Association between sedentary time and cognitive function: A focus on different domains of sedentary behavior. Studies which examined the association between sedentary behavior (SB) and cognitive function have presented equivocal findings. Mentally active/inactive sedentary domains may relate differently to cognitive function. We examined associations between SB and cognitive function, specifically focusing on different domains. Participants were recruited from the Nijmegen Exercise Study 2018 in the Netherlands. SB (h/day) was measured with the Sedentary Behavior Questionnaire. Cognitive function was assessed with a validated computer self-test (COST-A), and a z-score calculated for global cognitive function. Multivariate linear regression assessed associations between tertiles of sedentary time and cognitive function. Cognition tests were available from 2821 participants, complete data from 2237 participants (43% female), with a median age of 61 [IQR 52-67] and a mean sedentary time of 8.3 ± 3.2 h/day. In fully adjusted models, cognitive function was significantly better in participants with the highest total sedentary time (0.07 [95% CI 0.02-0.12], P = 0.01), work-related sedentary time (0.13 [95% CI 0.07-0.19], P < 0.001), and non-occupational computer time (0.07 [95% CI 0.02-0.12], P = 0.01), compared to the least sedentary. Leisure sedentary time and time spent sedentary in the domains TV, reading or creative time showed no association with cognitive function in final models (all P > 0.05). We found a strong, independent positive association between total SB and cognitive function in a heterogenous population. This relation was not consistent across different domains, with especially work- and computer-related SB being positively associated with cognitive function. This highlights the importance of assessing the various sedentary domains in understanding the relation between sedentary time and cognitive function. abstract_id: PUBMED:32290586 Are all Sedentary Behaviors Equal? An Examination of Sedentary Behavior and Associations with Indicators of Disease Risk Factors in Women. Sedentary behavior increases risk for non-communicable diseases; associations may differ within different contexts (e.g., leisure time, occupational). 
This study examined associations between different types of sedentary behavior and disease risk factors in women, using objectively measured accelerometer-derived sedentary data. A validation study (n = 20 women) classified sedentary behavior into four categories: lying down; sitting (non-active); sitting (active); standing. A cross-sectional study (n = 348 women) examined associations between these classifications and disease risk factors (body composition, metabolic, inflammatory, blood lipid variables). Participants spent an average of 7 h 42 min per day in sedentary behavior; 58% of that time was classified as non-active sitting and 26% as active sitting. Non-active sitting showed significant (p ≤ 0.001) positive correlations with BMI (r = 0.244), body fat percent (r = 0.216), body mass (r = 0.236), fat mass (r = 0.241), leptin (r = 0.237), and negative correlations with HDL-cholesterol (r = -0.117, p = 0.031). Conversely, active sitting was significantly (p ≤ 0.001) negatively correlated with BMI (r = -0.300), body fat percent (r = -0.249), body mass (r = -0.305), fat mass (r = -0.320), leptin (r = -0.259), and positively correlated with HDL-cholesterol (r = 0.115, p = 0.035). In summary, sedentary behavior can be stratified using objectively measured accelerometer-derived activity data. Subsequently, different types of sedentary behaviors may differentially influence disease risk factors. Public health initiatives should account for sedentary classifications when developing sedentary behavior recommendations. abstract_id: PUBMED:22820077 Longitudinal change in active and sedentary behavior during the after-school hour. Background: Relatively little is known regarding after-school behavior. This study examined after-school active and sedentary behaviors among youth participating in the Study of Early Child Care and Youth Development. Methods: An interview guided time-use approach was used to obtain detailed longitudinal information about after-school (3-6 PM) behavior of a mixed gender cohort (n = 886) at ages 9 and 11 yrs. Responses obtained in 15-min intervals were coded into 29 exclusive behaviors and separated into 3 main categories [moderate and vigorous-intensity physical activity (MVPA), light-intensity physical activity, and sedentary]. Sedentary category was further divided into screen and nonscreen categories. A mixed ANOVA design was used to examine gender and age-related differences in MVPA, light-intensity physical activity, sedentary, screen, and nonscreen. Results: MVPA was higher among boys compared with girls (P < .001) and decreased from 9 to 11 yrs (P < .001). Overall, total sedentary time was comparable between boys and girls despite a difference in reported screen time (boys > girls; P < .001) and nonscreen time (boys < girls; P < .001). Total sedentary time increased from 9 to 11 yrs (P < .001). Conclusion: Engagement in after-school behavior appears to change during preadolescence. Additional research is needed to understand factors associated with the selection of active and sedentary behavior over time. abstract_id: PUBMED:28601100 Patterns of Sedentary Behavior in Older Adults. Objectives: We measured the volume and patterns of sedentary behavior (including breaks from sedentary behavior) in a sample of older adults via accelerometry. Methods: Inactive, older adults (≥50 years of age) were eligible to participate. 
A cut point of <100 counts/minute was used to estimate: (1) total volume; (2) > 10-, > 30-, and > 60-minute bouts; and (3) patterns of sedentary behavior according to time of day and day of the week. Total breaks in sedentary time also were calculated. Results: Participants (N = 67) were sedentary 62% of the day, engaging in 73.3 total bouts of daily sedentary behavior, and each bout lasted, on average, 7.8 minutes. All participants engaged in >1 daily bout of sedentary behavior > 10 and > 30 minutes. Sedentary time was slightly greater during the evening and on weekdays. Participants averaged 72.9 daily breaks from sedentary behavior, with each break lasting approximately 4.4 minutes. Conclusions: Our findings add valuable insights into the patterns of sedentary behaviors among older adults. Concentrated research efforts should be made to gain a clearer understanding of patterns and enablers of sedentary behaviors, which will allow for development of targeted interventions for this high-risk population. abstract_id: PUBMED:33802133 Effects of Classroom Active Desks on Children and Adolescents' Physical Activity, Sedentary Behavior, Academic Achievements and Overall Health: A Systematic Review. The purpose of this systematic review was to examine the effects of active desks in the school setting on sedentary behavior, physical activity, academic achievements and overall health among children and adolescents aged 5-17 years. A systematic literature search was conducted using five databases until October 2020. Twenty-three studies were included. Studies reported an increase of around 36% in energy expenditure for cycling desks and between 15% and 27.7% for upright active desks. Children increased inhibitory control and selective attention capacity while using cycling desks. Heterogeneous study designs and results were observed, limiting comparisons and conclusions for each active desk. Despite the lack of strong methodology for the included studies, active desks appear to be a promising intervention in classrooms to improve health-related outcomes in children aged 5-17 years. Due to weak methodology, future studies with stronger study designs and methodology are needed to better inform policy and practice about the role of classroom active desks on health-related outcomes in children and adolescents. Answer: Yes, sedentary behavior can be made more active through various interventions and strategies. The research suggests that providing education and resources, such as the "Active Guide" brochure, can effectively reduce sedentary behavior in elderly populations (PUBMED:31649369). Additionally, active office interventions, which may include changes to the physical and social environment, as well as organizational health culture, have been shown to reduce sedentary behavior in office workers (PUBMED:34120583). Furthermore, the use of active desks in classrooms has been reported to increase energy expenditure and improve certain cognitive functions in children and adolescents, indicating that sedentary behavior in educational settings can also be made more active (PUBMED:33802133). The research also highlights that not all sedentary behaviors are equal, and different types of sedentary activities may have different associations with health outcomes. For instance, active sitting, characterized by some level of physical engagement while seated, has been associated with better disease risk factor profiles compared to non-active sitting in women (PUBMED:32290586).
This suggests that incorporating more active forms of sedentary behavior, such as standing desks or pedal desks, could mitigate some of the negative health impacts associated with prolonged sitting. Moreover, the context of sedentary behavior is important to consider. For example, mentally active sedentary domains, such as work-related sedentary time and non-occupational computer time, have been positively associated with cognitive function, whereas leisure sedentary time and time spent watching TV or reading showed no such association (PUBMED:34280406). In summary, sedentary behavior can be made more active through targeted interventions, changes to the environment, and by encouraging mentally active forms of sedentary activities. These strategies can help to attenuate the negative health impacts associated with sedentary lifestyles.
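The accelerometer-based abstracts above rest on a simple algorithmic idea: minutes with activity counts below a threshold (<100 counts/minute in the older-adult study) are flagged as sedentary, runs of consecutive flagged minutes form bouts, and a transition out of a flagged run counts as a break. The sketch below illustrates that logic on a toy minute-level trace; the function, its name, and the example data are assumptions for illustration and deliberately omit refinements such as tolerance windows within bouts or non-wear detection.

```python
from itertools import groupby

SEDENTARY_CUTPOINT = 100  # counts per minute, as in the accelerometer study above

def summarize_sedentary(counts_per_minute, min_bout=10):
    # Flag each minute, collapse consecutive flags into bouts, then report
    # total sedentary minutes, bouts at or above `min_bout` minutes, and the
    # number of breaks (sedentary -> non-sedentary transitions).
    flags = [c < SEDENTARY_CUTPOINT for c in counts_per_minute]
    runs = [(sed, sum(1 for _ in grp)) for sed, grp in groupby(flags)]
    bouts = [length for sed, length in runs if sed]
    breaks = sum(1 for i in range(1, len(flags)) if flags[i - 1] and not flags[i])
    return {
        "total_sedentary_min": sum(bouts),
        "bouts_over_threshold": [b for b in bouts if b >= min_bout],
        "breaks": breaks,
    }

# Toy 30-minute trace: 12 sedentary minutes, a 3-minute walk, 15 more sedentary minutes.
trace = [50] * 12 + [900] * 3 + [40] * 15
print(summarize_sedentary(trace, min_bout=10))
# {'total_sedentary_min': 27, 'bouts_over_threshold': [12, 15], 'breaks': 1}
```

Outcomes of exactly this kind (total sedentary time, bouts longer than 10, 30, or 60 minutes, and break counts) are what the older-adult accelerometer study reports.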
Instruction: Alvarado score: can it reduce unnecessary CT scans for evaluation of acute appendicitis? Abstracts: abstract_id: PUBMED:25542452 Alvarado score: can it reduce unnecessary CT scans for evaluation of acute appendicitis? Objective: The objective of the study is to assess the utility of Alvarado score in the diagnosis of acute appendicitis and the utility of computed tomographic (CT) scan for evaluation of acute appendicitis when stratified by Alvarado scores. Materials And Methods: This retrospective cohort study comprised adult patients who underwent abdominal CT for suspected acute appendicitis between January 2006 and December 2009. Two abdominal radiologists independently reviewed the CT scans; any discrepancies were resolved by a consensus review. Alvarado scores were calculated and categorized as low (0-3), equivocal (4-6), or high (7-10) probability for appendicitis. The diagnostic utility of CT scans and Alvarado score for acute appendicitis was compared with the criterion standard of combined medical chart review and pathology findings. Results: In a cohort of 158 subjects, 73 (46.2%) had clinical diagnoses of acute appendicitis. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of CT scan in the diagnosis of acute appendicitis were 97.5%, 98.6%, 96.5%, 96.0%, and 98.8%, respectively. The mean Alvarado score for subjects with complicated appendicitis was significantly higher (7.95) than subjects with uncomplicated appendicitis (6.67) and those with other diagnoses (5.95). Acute appendicitis was confirmed in 2 (13.3%) of 15 subjects with low probability Alvarado scores, 16 (30.8%) of 52 subjects with equivocal scores, and 55 (60.4%) of 91 subjects with high probability scores. Conclusion: The CT scan had high diagnostic utility for acute appendicitis. The Alvarado score was not a reliable independent predictive tool for acute appendicitis and could not replace CT scan. abstract_id: PUBMED:37113930 Clinical scores (Alvarado and AIR scores) versus imaging (ultrasound and CT scan) in the diagnosis of equivocal cases of acute appendicitis: a randomized controlled study. About 50% of acute appendicitis cases are atypical in their presentation. The objective of this study was to assess and compare the feasibility of clinical scores [Alvarado and Appendicitis Inflammatory Response (AIR)] and imaging [ultrasound and abdominopelvic computed tomography (CT) scan] in the evaluation of equivocal cases of acute appendicitis in a clinical trial to identify that subset of patients who really need and will benefit from imaging, mainly CT scan. Methods: A total of 286 consecutive adult patients with suspected acute appendicitis were included. The clinical scores, including Alvarado and AIR scores, and ultrasound were done for all patients. Abdominal and pelvic CT scans were done for 192 patients to resolve the diagnosis of acute appendicitis. The sensitivity, specificity, positive and negative predictive values, and accuracy rate of both clinical scores and imaging (ultrasound and CT scan) were compared. The final histopathology was used as the gold standard against which the diagnostic feasibility of the clinical scores and imaging was compared. Results: Out of 286 total patients who presented with right lower quadrant abdominal pain, a presumptive diagnosis of acute appendicitis was made in 211 patients (123 males and 88 females) after thorough clinical evaluation, clinical scores, and imaging, and they were submitted to appendicectomy.
The overall prevalence of acute appendicitis proved by histopathology as a gold standard was 89.1% (188 patients) with a negative appendectomy rate of 10.9%. Simple acute appendicitis was reported in 165 (78.2%) patients and perforated appendicitis in 23 (10.9%) patients. For patients with equivocal clinical scores (≥4 to ≤6), the sensitivity, specificity, predictive values, and accuracy rate of CT scan were significantly higher than those of Alvarado and AIR scores. For patients with low clinical scores (≤4) and high clinical scores (≥7), the sensitivity, specificity, predictive values, and accuracy rate of clinical scores and imaging were comparable. The diagnostic feasibility of AIR scores was significantly higher than that of the Alvarado score, and the clinical scores were associated with significantly higher diagnostic accuracy than ultrasound. CT scan is unlikely to be needed and will add little to the diagnosis of acute appendicitis for patients with high clinical scores (≥7). The sensitivity of the CT scan for perforated appendicitis was lower than that for nonperforated appendicitis. The use of CT scans for query cases did not change the negative appendectomy rate. Conclusion: CT scan evaluation is beneficial only for patients with equivocal clinical scores. For patients with high clinical scores, surgery is recommended. AIR score was superior to the Alvarado score in terms of sensitivity, specificity, and predictive values. A CT scan is usually not required for patients with low scores since acute appendicitis is unlikely; in such cases, ultrasound could be of help to exclude other diagnoses. abstract_id: PUBMED:27843179 The Alvarado score versus computed tomography in the diagnosis of acute appendicitis: A prospective study. Background: To assess and compare the performance of the Alvarado score and computed tomography scan in the diagnosis of acute appendicitis at King Hussein Medical Center. Methods: A total of 320 patients with suspected acute appendicitis were included in this study over a period of 2 years. The Alvarado score was calculated for all of these patients and 112 CT scans were requested selectively by surgeons caring for the patients. The histopathology diagnosis was used as the gold standard against which diagnostic performance of Alvarado score and CT scan were compared. Results: The complete data of 196 males and 124 females were analyzed at the end of the study period. The mean age was 26.1 ± 11.3 years. Appendectomy was performed in 263 patients with a negative appendectomy rate of 14.83% overall (12.28% in males and 19.56% in females). The remaining 57 patients were assumed to have no appendicitis. The diagnostic performance of CT scan was superior to that of Alvarado score with sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of 94.2 versus 85.4%, 90 versus 65%, 9.42 versus 2.44, and 0.065 versus 0.224, respectively (p-value < 0.05). The overall diagnostic accuracy of CT scan was 92.6% compared to 77.5% for Alvarado score. Conclusion: The Alvarado score was far from good and CT scan is more accurate in the diagnosis of acute appendicitis in our patient population. abstract_id: PUBMED:32357897 Randomized control trial comparing an Alvarado Score-based management algorithm and current best practice in the evaluation of suspected appendicitis. Background: An objective algorithm for the management of suspected appendicitis guided by the Alvarado Score had previously been proposed.
This algorithm was expected to reduce computed tomography (CT) utilization without compromising the negative appendectomy rate. This study attempts to validate the proposed algorithm in a randomized control trial. Methods: A randomized control trial compared the management of suspected acute appendicitis using the proposed algorithm with current best practice, with the rate of CT utilization as the primary outcome of interest. Secondary outcomes included the percentage of missed diagnoses, negative appendectomies, length of stay in days, and overall cost of stay in dollars. Results: One hundred sixty patients were randomized. Characteristics such as age, ethnic group, American Society of Anesthesiologists score, white cell count, and symptom duration were similar between the two groups. The overall CT utilization rates of the intervention arm and the usual care arm were similar (93.7% vs 92.5%, p = 0.999). There were no differences in terms of negative appendectomy rate, length of stay, and cost of stay between the intervention arm as compared to the usual care arm (p = 0.926, p = 0.705, and p = 0.886, respectively). Among patients evaluated with CT, 75% (112 out of 149) revealed diagnoses for the presenting symptoms. Conclusion: The proposed AS-based management algorithm did not reduce the CT utilization rate. Outcomes such as missed diagnoses, negative appendectomy rates, length of stay, and cost of stay were also largely similar. CT utilization was prevalent, as 93% of the study cohort was evaluated by CT scan. Trial Registration: The study has been registered at ClinicalTrials.gov (NCT03324165, Registered October 27 2017). abstract_id: PUBMED:29809191 Comparative analysis of diagnostic scales of acute appendicitis: Alvarado, RIPASA and AIR Introduction: Acute appendicitis is the most common surgical disease in emergency surgery; however, it remains a diagnostic problem and represents a challenge despite the experience and the different clinical and paraclinical diagnostic methods. Objective: To comparatively evaluate the Alvarado, AIR and RIPASA scales to determine which one is best as a diagnostic test of acute appendicitis in our population, in order to arrive at an accurate diagnosis in the shortest possible time and at the lowest cost. Method: An observational, prospective, cross-sectional and comparative study of 137 patients to whom the Alvarado, AIR and RIPASA scales were applied, who entered the emergency service of the Civil Hospital of Culiacán (México) with abdominal pain syndrome suggestive of acute appendicitis. Results: The Alvarado scale presented a sensitivity of 97.2% and a specificity of 27.6%. AIR presented a sensitivity of 81.9% and a specificity of 89.5%. RIPASA showed the same results as Alvarado. All tests showed diagnostic accuracy above 80. Conclusions: Alvarado and RIPASA presented good sensitivity; however, AIR is more specific and has better accuracy for the diagnosis of acute appendicitis, making it a better screening tool and thus reducing unnecessary surgeries. Therefore, AIR is recommended over Alvarado and RIPASA. abstract_id: PUBMED:31762884 Evaluation of Alvarado score in diagnosing acute appendicitis. Using a practical scoring system for diagnosing acute appendicitis can help reduce the rate of unnecessary surgery. This prospective study was carried out to evaluate the Alvarado scoring system for the diagnosis of acute appendicitis in our setting.
Out of a total of 100 patients, appendicitis was confirmed in 80 patients, thus giving a negative appendectomy rate of 20% (male 6%, female 14%). Perforation rate was 4%. Positive predictive value was 89%. The sensitivity was 54% and specificity 75%. The Alvarado score is not a sensitive tool for aiding the diagnosis of acute appendicitis. abstract_id: PUBMED:27436935 The comparison of the effectiveness of tomography and Alvarado scoring system in patients who underwent surgery with the diagnosis of appendicitis. Objective: The aim of this study is to compare the effectiveness of computed tomography and Alvarado scoring system in the diagnosis of acute appendicitis in patients who underwent appendectomy with the preliminary diagnosis of acute appendicitis. Material And Methods: One hundred and one patients who underwent appendectomy with the diagnosis of acute appendicitis between January and December 2011 were included in the study. Alvarado scores were calculated, and abdominal tomography scans were obtained for each patient before surgery. Patients with Alvarado score ≥7 were considered to have appendicitis while patients with a score <7 were considered not to have appendicitis. Patients were classified into two groups based on the presence of appendicitis findings on abdominal tomography. Histopathological examination of the appendices was performed following appendectomy. All patients were classified into groups according to pathology results, Alvarado score and tomography findings. The effectiveness of Alvarado score and tomography was compared using the McNemar test. Results: Sixty patients (59.4%) were male and 41 (40.6%) were female, with a mean age of 32 years (5-85 years). The rate of negative appendectomy was 3.9%. In 78 patients (77.3%) the Alvarado score was ≥7, while 23 patients (22.7%) had Alvarado scores <7. The presence of appendicitis was determined by histopathology in 22 out of 23 patients whose Alvarado score was <7. Tomography indicated appendicitis in 97 patients (95.9%) whereas four patients (4.1%) exhibited no signs of appendicitis by tomography. However, histopathological evaluation indicated the presence of appendicitis in those four patients as well. Conclusion: The study results imply that tomography is a more effective means of diagnosing acute appendicitis as compared to the Alvarado scoring system. abstract_id: PUBMED:28944334 Evaluation of the Alvarado scoring system in the management of acute appendicitis. Objective: In this study, we aimed to show the effectiveness of Alvarado score and its components to predict the correct diagnosis of acute appendicitis and to find an optimum cut-off value for Alvarado score. Material And Methods: The patients who underwent surgical operation between January 2011 and January 2012 with the suspicion of acute appendicitis were included in the study. Their demographic and clinical features and histopathological results were retrieved from the medical records. They were divided into three groups according to their Alvarado scores. With the use of "receiver operating characteristic" curve analysis, the optimum cut-off value needed to make a correct diagnosis of acute appendicitis was determined. Results: In all, 156 patients were included in the study. The mean age was 31.41±13.27 years. Histopathologically, acute appendicitis was detected in 125 (80.1%) patients, and negative appendectomy was found in 31 patients (19.8%). Mean Alvarado score was 6.44±1.49.
There was a significant correlation between negative appendectomy and low Alvarado score (p<0.001). The main component of Alvarado score that makes the difference was rebound. Fever higher than 37.3°C, rebound, loss of appetite, and existence of shifting pain were statistically differential components (p=0.042, p<0.001, p=0.045, p<0.001, respectively). The rate of correct diagnosis of acute appendicitis was maximum in group 3 (100%) and minimum in group 1 (21.7%). Optimum cut-off value for Alvarado score was 7. Conclusion: Patients with an Alvarado score of over 7 can be taken into surgical operation without the need of imaging methods. abstract_id: PUBMED:29426650 The RIPASA score for the diagnosis of acute appendicitis: A comparison with the modified Alvarado score. Introduction And Objectives: Acute appendicitis is the first cause of surgical emergencies. It is still a difficult diagnosis to make, especially in young persons, the elderly, and in reproductive-age women, in whom a series of inflammatory conditions can have signs and symptoms similar to those of acute appendicitis. Different scoring systems have been created to increase diagnostic accuracy, and they are inexpensive, noninvasive, and easy to use and reproduce. The modified Alvarado score is probably the most widely used and accepted in emergency services worldwide. On the other hand, the RIPASA score was formulated in 2010 and has greater sensitivity and specificity. There are very few studies conducted in Mexico that compare the different scoring systems for appendicitis. The aim of our article was to compare the modified Alvarado score and the RIPASA score in the diagnosis of patients with abdominal pain and suspected acute appendicitis. Material And Methods: An observational, analytic, and prolective study was conducted within the time frame of July 2002 and February 2014 at the Hospital Universitario de Puebla. The questionnaires used for the evaluation process were applied to the patients suspected of having appendicitis. Results: The RIPASA score with 8.5 as the optimal cutoff value: ROC curve (area .595), sensitivity (93.3%), specificity (8.3%), PPV (91.8%), NPV (10.1%). Modified Alvarado score with 6 as the optimal cutoff value: ROC curve (area .719), sensitivity (75%), specificity (41.6%), PPV (93.7%), NPV (12.5%). Conclusions: The RIPASA score showed no advantages over the modified Alvarado score when applied to patients presenting with suspected acute appendicitis. abstract_id: PUBMED:29106084 Ultrasonography and the Alvarado score in the diagnosis of acute appendicitis: impact on the negative appendectomy rate Objectives: To establish the negative appendectomy rate (NAR) after patients with acute abdomen were evaluated with the Alvarado score and compare it to the NAR in patients evaluated with abdominal ultrasound. Material And Methods: Cross-sectional, retrospective, descriptive study in patients who underwent emergency surgery for suspected acute appendicitis in a tertiary-care hospital over a period of 1 year. Results: A total of 225 patients were included. The NAR was 7.11% for the series. An Alvarado score of 5 or more had an odds ratio (OR) of 7.46 (P=.0002) for acute appendicitis; sensitivity was 94.2%. Ultrasound findings consistent with acute appendicitis had an OR of 3.58 (P=.0125) for the diagnosis; the NAR was 5%. 
Conclusion: The high sensitivity of an Alvarado score of 5 or more supports using this tool to evaluate all patients who come to the emergency department with pain in the right iliac fossa. With scores of 7 or more the NAR falls to 3%. Abdominal ultrasound is a rapid, inexpensive diagnostic procedure associated with a low NAR. Answer: The Alvarado score, a clinical scoring system for diagnosing acute appendicitis, has been evaluated for its utility in potentially reducing unnecessary CT scans. However, the evidence from the provided abstracts suggests that the Alvarado score alone may not be a reliable independent predictive tool for acute appendicitis and cannot replace CT scans (PUBMED:25542452). A study comparing clinical scores (Alvarado and AIR scores) with imaging (ultrasound and CT scan) found that CT scan evaluation is beneficial only for patients with equivocal clinical scores (PUBMED:37113930). For patients with high clinical scores, surgery is recommended without the need for a CT scan. The AIR score was found to be superior to the Alvarado score in terms of sensitivity, specificity, and predictive values. Another study concluded that the Alvarado score was far from good and that CT scans are more accurate in the diagnosis of acute appendicitis (PUBMED:27843179). Similarly, a randomized control trial found that an Alvarado Score-based management algorithm did not reduce CT utilization rates and outcomes such as missed diagnoses, negative appendectomy rates, length of stay, and cost of stay were also largely similar (PUBMED:32357897). A comparative analysis of diagnostic scales for acute appendicitis found that the Alvarado and RIPASA presented good sensitivity, but the AIR score was more specific and had better accuracy, suggesting that AIR might be a better screening tool to reduce unnecessary surgeries (PUBMED:29809191). However, another study evaluating the Alvarado score concluded that it is not a sensitive tool for aiding the diagnosis of acute appendicitis (PUBMED:31762884). In summary, while the Alvarado score can be a useful tool in the initial assessment of patients with suspected acute appendicitis, it does not consistently reduce the need for CT scans, especially in cases with equivocal scores. CT scans remain a valuable diagnostic tool with high sensitivity and specificity for acute appendicitis, and the decision to use CT imaging should be based on the clinical context and the patient's Alvarado score (PUBMED:25542452; PUBMED:37113930; PUBMED:27843179; PUBMED:32357897; PUBMED:29809191; PUBMED:31762884).
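For reference, the accuracy figures quoted throughout this record (sensitivity, specificity, PPV, NPV, negative appendectomy rate) all derive from a 2x2 comparison of the score or imaging result against the histopathological reference standard. The short Python sketch below illustrates that arithmetic; the counts are hypothetical, chosen only so that the proportions roughly match the 100-patient series quoted above (80 confirmed cases, sensitivity about 54%, specificity 75%), and are not any study's actual 2x2 table.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics for a test judged against histopathology."""
    return {
        "sensitivity": tp / (tp + fn),  # true cases correctly flagged by the score
        "specificity": tn / (tn + fp),  # non-cases correctly cleared by the score
        "ppv": tp / (tp + fp),          # probability of appendicitis given a positive result
        "npv": tn / (tn + fn),          # probability of no appendicitis given a negative result
    }

# Hypothetical counts: 80 true cases and 20 negative appendectomies, with a cut-off that
# flags 43 of the 80 true cases (sensitivity 0.54) and 5 of the 20 non-cases (specificity 0.75).
metrics = diagnostic_metrics(tp=43, fp=5, fn=37, tn=15)
print({name: round(value, 2) for name, value in metrics.items()})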
Instruction: Can Westgard Quality Control Rules determine the suitability of frozen sperm pellets as a control material for computer assisted semen analyzers? Abstracts: abstract_id: PUBMED:12645867 Can Westgard Quality Control Rules determine the suitability of frozen sperm pellets as a control material for computer assisted semen analyzers? Purpose: To evaluate the drop-to-drop and pellet-to-pellet repeatability and stability of frozen sperm pellets. Methods: Ten pellets were thawed per batch (low and normal concentration) and evaluated by two investigators to establish a quality control chart. Then low and normal concentration pellets were thawed and evaluated daily for 10 days by both investigators. The values for both investigators were averaged and plotted on the chart. Results: The low sperm concentration specimen had a systematic error while the normal sperm concentration specimen had a random error as well as a systematic error. The low sperm concentration specimen violated the warning rule for motility whereas the normal concentration violated the warning rule, the random error rule, and the systematic error rule when applied to motility. Conclusions: Frozen sperm pellets are not acceptable as a daily-use quality control material for semen analysis when using a computer assisted semen analyzer. abstract_id: PUBMED:10689026 Using videotaped specimens to test quality control in a computer-assisted semen analysis system. Objective: To determine the feasibility of using semen samples previously recorded on videotape for intralaboratory and interlaboratory quality control of computer-assisted semen analysis (CASA) systems. Design: Blinded, controlled study. Setting: Pooled semen specimens from two normal human volunteers in an academic research environment. Patient(s): None. Intervention(s): None. Main Outcome Measure(s): Semen parameters from a videotape analyzed internally and by four external laboratories. Result(s): Preliminary experiments designed to examine intralaboratory variation by repeated analysis of semen samples recorded on videotape revealed some significant differences for every variable examined. When these data were analyzed by using the larger biologic error caused by subsampling, no significant differences were found for any of the variables examined. When either a standard set or the specific laboratories' sets of parameters were used to analyze the same videotaped semen specimen, no statistically significant differences were detected for sperm concentration for motility among the five laboratories after the biological error caused by subsampling was applied to results. Conclusion(s): These data strongly suggest that videotaped semen specimens can serve as quality control for intralaboratory and interlaboratory testing of CASA equipment as long as the biologic error caused by subsampling is used to compare results. abstract_id: PUBMED:33382220 Internal quality control products for computer-assisted sperm analysis: Research and application Objective: To investigate the application of the self-made semen quality control (QC) product in internal QC of computer-assisted sperm analysis (CASA). 
Methods: CASA was calibrated with high- and low-concentration commercially available semen QC product and meanwhile 15 samples of self-made mixed semen QC product were placed in 75 cryotubes containing liquid nitrogen, followed by CASA of the concentration, motility, curvilinear velocity (VCL), straight line velocity (VSL), average path velocity (VAP), linearity (LIN), wobble (WOB) and straightness (STR) of the sperm using standard procedures and 50 days of continuous monitoring. The Makler counting plate was used to measure the concentration and motility of the self-made sperm. Results: The coefficients of variation (CV) of the commercially available semen QC product at high and low concentrations were 6.18% and 7.85%, respectively. CASA showed that the concentration of the self-made QC product was (25.97 ± 1.41) ×10⁶/ml, with a CV of 5.42%, and the sperm motility, VCL, VSL, VAP, LIN, WOB and STR were (22.15 ± 1.75)% (CV = 7.9%), (59.18 ± 2.05) μm/s (CV = 3.46%), (26.79 ± 1.2) μm/s (CV = 4.48%), (34.98 ± 1.4) μm/s (CV = 4.01%), 46.81 ± 1.55 (CV = 3.3%), 60.52 ± 1.3 (CV = 2.15%) and 76.46 ± 1.98 (CV = 2.59%), respectively. The concentration and motility of the self-made sperm detected with the Makler counting plate were (34.39 ± 2.37) ×10⁶/ml (CV = 6.89%) and (38.04 ± 1.69)% (CV = 4.44%), respectively. Levey-Jennings QC charts were plotted for the indicators using the means and standard deviation. Conclusions: The self-made internal QC product by liquid nitrogen cryopreservation is feasible and effective for monitoring the accuracy and precision of CASA-derived sperm concentration and motion parameters, and it has a smaller CV than the commercially available QC product in measuring sperm concentration. abstract_id: PUBMED:16331534 External quality control program for semen analysis: Spanish experience. Purpose: Results from an external quality control programme for semen analysis carried out in Spain are analysed. Methods: Quality control materials were distributed and the following seminal parameters were determined: concentration, total motility, progressive motility, rapid progressive motility, morphology and sperm vitality. The between-laboratories coefficients of variation were assessed on different types of quality control material. Results: The majority of participating laboratories utilised manual versus computer-assisted semen analysis methods. Some between-laboratories coefficients of variation ranges were: 20.8-33.8% for concentration (semen pool suspension); 13.9-19.2% for total motility (videotapes); 54.2-70.2% for sperm morphology (strict criteria using stained smears); and 9.8-41.1% for sperm vitality (stained smears). There was an inverse relation between mean percentage of sperm and coefficients of variation between laboratories for sperm motility, morphology and vitality. Conclusions: These data highlight the urgent need for improvement in the overall quality of andrology testing. abstract_id: PUBMED:10363115 Quality assessment of computer-assisted semen analysis (CASA) in the andrology laboratory. If quality is assessed with regard to computer-assisted semen analysis (CASA), the evaluation of seminal fluid in the andrological laboratory has to be considered. Three levels of quality assessment are generally accepted: structure, process and results. Quality of structure mainly concerns the quality of laboratory assessment, in particular the skill of the staff and the equipment used. The quality of the CASA system itself is difficult to assess. 
Process quality concerns the quality of performing a diagnosis. When the parameter settings of the CASA system and the handling of the sample are defined, the reproducibility of the CASA values is clearly better than that of the visual estimation of motility. CASA systems are also superior to other methods regarding the documentation of laboratory values, as all the values are obtained directly online. Result quality comprises the precision, reliability and reproducibility of measurement as well as the significance of values with respect to their biological relevance. Concluding from the definitions as quoted above and from reports of the literature it may be stated that: (i) in the dimensions of structure and process quality, CASA is superior to other methods of measuring sperm motility; (ii) the evaluation of results and quality of results, however, is highly problematic; (iii) CASA systems do not appear to be superior to the visual estimation of sperm motility with respect to the fertilizing capacity of spermatozoa; (iv) the guidelines of the WHO task force form a basis for sufficient process quality; (v) further efforts should actually focus not on the improvement of investigation technology, but on the improvement in the qualification of investigators. abstract_id: PUBMED:34430409 The validity and reliability of computer-aided semen analyzers in performing semen analysis: a systematic review. Background: Computer-aided sperm analyzers (CASA) are currently used worldwide for semen analysis. However, there are doubts about their reliability to fully substitute the human operator. Therefore, this study aimed to systematically review the current literature comparing results from semen evaluation by both CASA-based and manual approaches. Methods: A systematic screening of the literature was performed based on the PRISMA guidelines and by searching on PubMed, Scopus, and Embase databases. Results: A total of 14 studies were included. Our results showed a high degree of correlation for sperm concentration and motility when analysis was performed either manually or by using a CASA system. However, CASA results showed increased variability in low (<15 million/mL) and high (>60 million/mL) concentration specimens, while sperm motility assessment was inaccurate in samples with higher concentration or in the presence of non-sperm cells and debris. Morphology results showed the highest level of difference, due to the high amount of heterogeneity seen between the shapes of the spermatozoa either in one sample or across multiple samples from the same subject. Conclusions: Overall, our study suggests CASA systems as a valid alternative for the evaluation of semen parameters in clinical practice, especially for sperm concentration and motility. However, further technological improvements are required before these devices can one day completely replace the human operator. Artificial intelligence-based CASA devices promise to offer higher efficiency of the analysis and improve the reliability of results. abstract_id: PUBMED:8567848 Implementing comprehensive quality control in the andrology laboratory. Comprehensive quality control procedures were integrated into the routine semen analysis workload of a large university-based andrology laboratory. Methods were chosen to match as far as possible those which have been used successfully for many years in disciplines such as clinical chemistry. 
Levey-Jennings and cusum charts were plotted in order to monitor the immunobead-binding test for antisperm antibodies and a video-taped control sample for computerized semen analysis. A cryopreserved semen control was also charted. Daily manual sperm counts were plotted against the corresponding computer-assisted semen analysis (CASA) value. Multiple readings of 30 slides were used to monitor morphology assessments. Monthly means for morphology were also calculated regularly. Coefficients of variation were calculated for all variables and were found to be more appropriate for some aspects, such as CASA, than for others, such as morphology, when difference from the previous reading of the same slide was found to be more useful. These integrated quality control procedures had a direct influence on the production of results from the laboratory. Together with a high standard of technician training, comprehensive routine quality control based on repeated analyses of control samples is an effective way of assuring the validity of semen analysis results. abstract_id: PUBMED:26296522 Effect of seminal plasma vesicular structures in canine frozen-thawed semen. Membrane vesicles (MVs) in the ejaculate have been identified in various species and are considered to affect membrane fluidity due to their characteristic molecular composition. Addition of MV to human frozen semen has been shown to improve post-thaw motility. Similarly, a beneficial effect has been suggested for frozen equine semen. As post-thaw canine semen quality varies widely between dogs, the aim of our study was to test for the effect of addition of canine MV on post-thaw semen quality in dogs. Semen samples from 10 male dogs were purified from MV and prepared for freezing. In experiment 1, three groups were compared: sperm frozen (1) with MV (S1); (2) without MV, but MV added immediately after thawing (S2); and (3) without MV (C). Semen analysis included computer-assisted sperm analysis of motility parameters immediately after thawing (t0), after 10 (t10) and 30 minutes (t30), % living sperm, % membrane intact, % morphologically normal sperm (all t0 and t30). Computer-assisted sperm analysis motility distance and velocity parameters (all P < 0.05) and % living sperm (P < 0.001) were significantly affected by treatment with a temporary increase of distance and velocity parameters at t0 to t10, but a significant decrease of the aforementioned parameters at t30 in samples with MV. In experiment 2, different MV protein concentrations added after thawing were compared: 0.05 mg, 0.1 mg, and 0.2 mg/mL. Computer-assisted sperm motility analysis was performed at t0, t10, and t30. No differences between MV concentrations were identified, only a significant interaction between effect of treatment and time for progressive motility (P < 0.01). Our study identified a short-term beneficial effect of canine MV on post-thaw distance and velocity parameters, whereas at t30 progressive motility, motility parameters and % living sperm were reduced in samples with MV compared to C. The results point to species-specific differences regarding the MV effect on frozen semen and indicate the need for further studies using different semen and MV purification protocols and more frequent analyses. At the moment, addition of MV is not an option to improve post-thaw semen quality in dogs. abstract_id: PUBMED:11775967 Computer assisted semen analyzers in andrology research and veterinary practice. 
The evaluation of sperm cell motility and morphology is an essential parameter in the examination of sperm quality and in the establishment of correlations between sperm quality and fertility. Computer-assisted sperm analysis (CASA) allows an objective assessment of different cell characteristics: motion, velocity, and morphology. The development and problems related to this technology are raised in this review, paying particular attention to the biases and standardization requirements absolutely needed to obtain useful results. Although some interesting results, mainly in humans, have already been obtained, many questions remain, which have to be answered to allow for further development of this technology in veterinary medicine, clinical fertility settings, physiological, and toxicology research activities. The main problem is related to the standardization and optimization of the equipment and procedures. The different CASA instruments have all demonstrated high levels of precision and reliability using different sperm classification methodology. Their availability gives us a great tool to objectively compare sperm motility and morphology and to improve our knowledge and ability to manipulate spermatozoa. abstract_id: PUBMED:22474989 Internal quality control of sperm concentrations in semen analysis: establishment of a method Objective: To establish a method for internal quality control (IQC) of sperm concentration test in the laboratory. Methods: We set the concentrations of frozen semen at 20 x 10(6) and 80 x 10(6) as low and high concentrations of putative IQC products, with QC-BEADSTM quality control beads (QCBs) as the control. Using the double-blind method, four technicians determined the sperm concentrations of the IQC products and QCBs by computer-assisted sperm analysis, and drew a quality control chart (Xbar chart and Sbar chart) for each product. Through a month of continuous detection, we calculated and compared the intra- and inter-batch coefficients of variation (CV%) of the quality control products of high and low concentrations. Results: The intra-batch coefficients of variation of the assumed IQC products of high and low concentrations were CV3.5% and CV2.4%, and their inter-batch coefficients of variation were CV10.2% and CV9.6%. The intra-batch coefficients of variation of the QCBs of high and low concentrations were CV5.1% and CV7.1%, and their inter-batch coefficients of variation were CV7.1% and CV8%. The intra-batch coefficients of variation of both IQC products and QCBs of high and low concentrations were <10%, and their inter-batch coefficients of variation were <15%, which conformed to Levey-Jennings quality control principles and achieved IQC purposes. No significant differences were found in either intra- or inter-batch coefficients of variation between the IQC products and QCBs of high and low concentrations (P>0.05), indicating that assumed IQC products can replace QCBs for internal quality control in the laboratory. Conclusion: The IQC method we established for determining sperm concentration is simple, feasible and reliable. Answer: No, Westgard Quality Control Rules indicate that frozen sperm pellets are not acceptable as a daily-use quality control material for semen analysis when using a computer-assisted semen analyzer (CASA). The study described in abstract PUBMED:12645867 found that the low sperm concentration specimen had a systematic error, while the normal sperm concentration specimen had both a random and a systematic error. 
The low sperm concentration specimen violated the warning rule for motility, and the normal concentration violated the warning rule, the random error rule, and the systematic error rule when applied to motility. These findings suggest that frozen sperm pellets do not provide the necessary consistency and reliability needed for quality control in CASA.
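The warning, random-error and systematic-error rules referred to in this answer belong to the Westgard multirule scheme, which is applied to control values plotted on a Levey-Jennings chart. As a rough illustration of how such rules are evaluated, the Python sketch below checks a run of daily control values against a simplified subset of the rules (1-2s as a warning, 1-3s for random error, 2-2s for systematic error). The abstracts do not state which specific Westgard rules their random-error and systematic-error rules correspond to, so this mapping, the function, and the example numbers are illustrative assumptions rather than the procedure actually used in PUBMED:12645867; in routine use the chart mean and standard deviation would come from an initial characterization run of the control material.

def westgard_flags(values, mean, sd):
    """Check daily QC values against a simplified subset of Westgard rules.

    1-2s: one value beyond +/-2 SD                  -> warning
    1-3s: one value beyond +/-3 SD                  -> random error
    2-2s: two consecutive values beyond +/-2 SD,
          both on the same side of the mean         -> systematic error
    """
    z = [(v - mean) / sd for v in values]
    return {
        "warning_1_2s": any(abs(x) > 2 for x in z),
        "random_error_1_3s": any(abs(x) > 3 for x in z),
        "systematic_error_2_2s": any(
            (z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2)
            for i in range(len(z) - 1)
        ),
    }

# Invented example: ten daily motility readings (%) against chart limits of mean 60, SD 3.
daily_motility = [61, 58, 63, 67, 67.5, 59, 62, 60, 55, 57]
print(westgard_flags(daily_motility, mean=60.0, sd=3.0))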
Instruction: Does adenosine pharmacologically precondition human myocardium during coronary bypass surgery? Abstracts: abstract_id: PUBMED:15956927 Does adenosine pharmacologically precondition human myocardium during coronary bypass surgery? Aim: Adenosine (Ado) triggers ischemic preconditioning. We investigated whether Ado provides additional myocardial protection in patients during intermittent aortic cross-clamping (IAC) bypass surgery. Methods: The placebo group consisted of 15 men aged 66+/-8 years, while the Ado group consisted of 19 men aged 65+/-10 years. The patients of the Ado group had 3-vessel heart disease and underwent elective surgery. With the aortic cross-clamping, Ado or vehicle was infused over 10 min at systemic pressure together with sufficient blood via the aortic root. Blood samples were taken before anaesthesia and the onset of ECC, 1 hour after the end of surgery, and on days 1 and 2 post-surgery to assess CK-MB and troponin I. Hemodynamic measures (heart rate, left ventricular pressure, max/min pressure rise, central venous pressure) were recorded before installation and 15 min after completion of the coronary artery bypass. ECGs were also recorded for electrophysiological analyses. Results: Hemodynamic and laboratory measures revealed no significant advantages of either protocol. Mortality rate was zero in both groups. Conclusions: The comparable outcome is likely due to cardioprotection provided by both IAC bypass surgery and hypothermia, which might obscure beneficial effects of pharmacological preconditioning in patients with good left ventricular function (ejection fraction >50%). As the benefit might have been marginal, it may well become apparent in a larger study on patients with more severe left ventricular dysfunction. abstract_id: PUBMED:8452257 Vasodilation with adenosine or sodium nitroprusside after coronary artery bypass surgery: a comparative study on myocardial blood flow and metabolism. The effects of adenosine and sodium nitroprusside (SNP) on central hemodynamics and myocardial blood flow and metabolism were investigated postoperatively after elective coronary artery bypass (CABG) surgery in ten sedated and mechanically ventilated patients in the intensive care unit. During three consecutive 15-min periods, SNP (0.8 +/- 0.1 micrograms.kg-1 x min-1), adenosine (88.9 +/- 13.3 micrograms.kg-1 x min-1), and then again SNP (0.7 +/- 0.1 micrograms.kg-1 x min-1) were infused to control postoperative hypertension at a mean arterial pressure of approximately 80 mm Hg. Systemic and pulmonary hemodynamics and global (coronary sinus flow, CSF) as well as regional (great cardiac vein flow, GCVF) myocardial blood flow and metabolic variables were measured. During adenosine infusion, in comparison to SNP, heart rate was unchanged, stroke volume index and cardiac index increased (24% and 32%, respectively), and the systemic vascular resistance index decreased (-26%). Mean pulmonary arterial pressure (24%) as well as pulmonary capillary wedge pressure (27%) and central venous pressure (18%) were higher with adenosine compared to SNP. Adenosine also increased CSF and GCVF (108% and 103%, respectively) without altering the CSF/GCVF flow ratio compared to SNP. Furthermore, adenosine increased the coronary oxygen content (51%) and decreased the arterio-great cardiac vein oxygen content difference (-48%) without changing regional myocardial oxygen consumption, indicating a more pronounced hyperkinetic myocardial circulation compared to SNP.
In addition, adenosine infusion decreased arterial PO2 (-11%) and increased the intrapulmonary shunt fraction (57%). The PR interval time of the electrocardiogram was prolonged (12%) and the ST segment was more depressed during adenosine infusion compared to SNP. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:11555521 Cardioprotective effect of adenosine pretreatment in coronary artery bypass grafting. Objective: There are several reports of the use of adenosine as a cardioprotective agent during cardiac surgery. Adenosine treatment might affect neutrophils and inflammatory mediators. The present prospective randomized study was designed to investigate the effect of adenosine pretreatment on myocardial recovery and inflammatory response in patients undergoing elective coronary artery bypass surgery. Design: A prospective, randomized, controlled study. Setting: Operative unit and ICU in a university hospital in Finland. Patients: Thirty male patients undergoing primary, elective coronary revascularization. Interventions: Patients in the adenosine group received a 7-min infusion of adenosine (total, 650 microg/kg) before the initiation of cardiopulmonary bypass. Measurements: Postoperative creatine kinase (CK)-MB release and hemodynamics were recorded. Perioperative leukocyte and cytokine release were measured. Results: Adenosine pretreatment resulted in less CK-MB release and an improved postbypass cardiac index. Similar leukocyte counts and cytokine responses were seen in both groups perioperatively. Neutrophil counts were similar between the groups before and after myocardial ischemia when measured simultaneously in arterial and coronary sinus blood. Conclusions: The present results support the hypothesis that adenosine pretreatment is cardioprotective in humans, but the present dose failed to regulate the inflammatory responses after coronary artery bypass grafting. abstract_id: PUBMED:2617242 Adenosine-induced increase in graft flow during coronary bypass surgery. The influence of systemic adenosine infusion (30-50 micrograms/kg/min) on peroperative coronary graft flow was investigated in 16 patients undergoing bypass surgery. The central hemodynamic and graft flow (electromagnetic flow determination) responses were studied after a 5-min infusion, and in nine patients also after a 30-min infusion. The low-dose adenosine infusion had little effect on the central hemodynamic parameters, while the graft flow increased in all patients (mean 84 +/- 12%, total 22 grafts). The adenosine-induced increase in graft flow was maintained when the infusion was prolonged. It is concluded that adenosine can produce marked coronary vasodilation in man at infusion rates that exert only minor systemic hemodynamic effects. abstract_id: PUBMED:29749315 Irisin in Coronary Bypass Surgery. Introduction: In coronary bypass surgery, after cardiopulmonary bypass is initiated by arterial cannulation in the ascending aorta and venous cannulation through a single vein generally in the right atrium, the process of cooling the patient is started. Objective: There is a relation between cooling the patient and irisin, which is responsible for releasing heat. Therefore, the main objective of the present study was to explore how irisin concentrations and a panel of other myocardial injury markers change in patients undergoing coronary artery bypass surgery.
Methods: Blood samples were collected before induction (T1), before bypass (T2), before (T3) and after (T4) removing the cross-clamp, upon admission to intensive care (T5), and at 24 (T6) and 72 (T7) hours postoperatively, and it was examined whether these concentrations correlated with the lactate levels classically used in monitoring this surgery. Biological samples (23 from control individuals and 105 from bypass patients; 14-16 samples for each time point) were analyzed to determine irisin, CK-MB, TnT and BNP levels by ELISA and lactate levels by a lactate assay kit. Both lactate and irisin were seen to increase gradually from the time of induction to the removal of the cross-clamp. After the cross-clamp was removed and rewarming of the patient was started, both parameters began to decrease gradually and were restored to normal levels on the second and third post-operative days. The increase and decrease in irisin were found to be correlated with lactate levels. The alterations in CK-MB, TnT and BNP were similar to each other. Results: Based on these results, it is estimated that measurement of irisin along with lactate may prove to be a useful parameter in monitoring coronary bypass surgery and that irisin may be a significant marker of hypothermia. Besides CK-MB, TnT and BNP, measurement of irisin concentration in open heart surgery may also be a useful addition to the panel of myocardial injury markers. abstract_id: PUBMED:32059928 Recovery of hibernating myocardium using stem cell patch with coronary bypass surgery. Objective: This study aims to investigate the utility of mesenchymal stem cells (MSCs) applied as an epicardial patch during coronary artery bypass graft (CABG) to target hibernating myocardium; that is, tissue with persistently decreased myocardial function, in a large animal model. Methods: Hibernating myocardium was induced in juvenile swine (n = 12) using a surgically placed constrictor on the left anterior descending artery, causing stenosis without infarction. After 12 weeks, single-vessel CABG was performed using left internal thoracic artery to left anterior descending artery graft. During CABG, an epicardial patch was applied to the hibernating myocardium region consisting either of MSCs grown onto a polyglactin mesh (n = 6), or sham polyglactin mesh without MSCs (n = 6). Four weeks after CABG and patch placement, cardiac magnetic resonance imaging was performed and cardiac tissue was examined by gross inspection, including coronary dilators for vessel stenosis and patency, electron microscopy, protein assays, and proteomic analysis. Results: CABG + MSC myocardium showed improvement in contractile function (78.24% ± 19.6%) compared with sham patch (39.17% ± 5.57%) during inotropic stimulation (P < .05). Compared with sham patch control, electron microscopy of CABG + MSC myocardium showed improvement in mitochondrial size, number, and morphology; protein analysis similarly showed increases in expression of the mitochondrial biogenesis marker peroxisome proliferator-activated receptor gamma coactivator 1-alpha (0.0022 ± 0.0009 vs 0.023 ± 0.009) (P < .01) along with key components of the electron transport chain, including succinate dehydrogenase (complex II) (0.06 ± 0.02 vs 0.14 ± 0.03) (P < .05) and adenosine triphosphate synthase (complex V) (2.7 ± 0.4 vs 4.2 ± 0.26) (P < .05). Conclusions: In hibernating myocardium, placement of a stem cell patch during CABG shows promise in improving myocardial function by improving mitochondrial morphology and function.
abstract_id: PUBMED:8026011 Myocardial release of malondialdehyde and purine compounds during coronary bypass surgery. Background: Free radicals and lipid peroxidation have been suggested to play an important role in the pathophysiology of myocardial reperfusion injury. The purpose of the present study was to monitor myocardial malondialdehyde (MDA) production as an index of lipid peroxidation during ischemia-reperfusion sequences in patients undergoing elective coronary bypass grafting. There has been considerable debate on the role of xanthine oxidase as a potential superoxide anion generator and thus a source of lipid peroxidation in human myocardium. To evaluate the activity of the xanthine oxidase pathway, we measured the changes in the transcardiac concentration differences in adenosine, inosine, hypoxanthine, xanthine, and uric acid. Methods And Results: The coronary sinus-aortic root differences (CS-Ao) of MDA, oxypurines, and nucleosides were measured by a recently developed ion-pairing high-performance liquid chromatographic (HPLC) method. Fifteen patients were included in the study, and 13 of them demonstrated a more than 10-fold increase in net myocardial production of MDA on intermittent reperfusion during the aortic cross-clamp period. In 2 patients, MDA was not detectable in any of the CS or Ao samples. Before aortic cross-clamping, the CS-Ao concentration differences in adenosine, inosine, hypoxanthine, xanthine, and uric acid were 0.59 +/- 0.19, 0.23 +/- 0.05, 0.89 +/- 0.36, 0.58 +/- 0.32, and 11.4 +/- 4.9 mumol/L, respectively. After aortic cross-clamping, the sum of the transcardiac differences of these compounds increased up to 2.8-fold and then gradually decreased after declamping of the aorta. There was a weak positive correlation between transcardiac concentration differences of MDA and xanthine plus uric acid (r = .48, P < .01). The postoperative functional recovery or leakage of cardiac enzymes was not affected by the level of MDA net release during the aortic cross-clamp period, however. Conclusions: We conclude that myocardial lipid peroxidation, estimated as MDA formation, is common during intermittent ischemia-reperfusion sequences in coronary bypass surgery, although some patients may be better protected. Xanthine oxidase appears to be operative in human myocardium, and free radicals generated in this reaction might also be involved in the observed lipid peroxidation process. Increased degradation of myocardial adenine nucleotides and concomitant lipid peroxidation may play a specific role in the development of reperfusion injury. In this study, however, more extensive lipid peroxidation was not associated with impaired functional recovery. abstract_id: PUBMED:306806 Pathology of coronary artery bypass graft surgery. Coronary artery bypass graft surgery has been available and widely successful for the symptomatic treatment of ischemic heart disease. Despite its widespread use, there is little information available on the pathological consequences of this procedure on the human heart. In this article, the morphological consequences of coronary artery bypass graft surgery are reviewed. Intimal changes occurring within the vein graft itself consist predominantly of fibrous intimal proliferation, which in some patients may progress to form an occlusive plaque. Most occlusions, however, occur at the coronary artery bypass graft anastomosis site, and the mechanisms of occlusion include compression of the vascular lumen, thrombosis, and dissection of the coronary artery.
Most graft failure occurs in the setting of too small a native coronary artery lumen. The myocardium is also at risk for alterations as a result of the bypass operation. Contraction band or reperfusion necrosis is the type of injury most commonly seen, and it appears to occur most often in the distribution of patent grafts. Accelerated atherosclerosis in vein grafts and the myocardial injury associated with revascularization require further detailed morphological studies, but these are important areas for pathological exploration since they bear on important and yet unanswered questions about coronary bypass surgery: can it in the long run preserve myocardium and prolong life? abstract_id: PUBMED:10969685 Is adenosine preconditioning truly cardioprotective in coronary artery bypass surgery? Background: The large number of experimental studies showing that adenosine "turns on" the protein kinase C (PKC)-mediated pathway that accounts for the cardioprotection conferred by ischemic preconditioning contrasts with the scarcity of clinical data documenting the preconditioning-like protective effect of adenosine during cardiac operations on humans. Methods: Forty-five patients undergoing coronary artery bypass were randomized to receive, after the onset of cardiopulmonary bypass, a 5-minute infusion of adenosine (140 microg x kg(-1) x min(-1)) followed by 10 minutes of washout before cardioplegic arrest (n = 23) or an equivalent period (15 minutes) of prearrest drug-free bypass (controls, n = 22). Outcome measurements included troponin I release over the first 48 postoperative hours and activity of ecto-5'-nucleotidase, an admitted reporter of PKC activation, as assessed on right atrial biopsies taken before bypass and at the end of the preconditioning protocol (or after 15 minutes of bypass in control patients). Results: Aortic cross-clamping times were not different between the two groups. Likewise, prebypass values of ecto-5'-nucleotidase (nanomoles/mg protein per minute) were similar in control (3.14+/-1.02) and adenosine-treated (2.66+/-1.08) patients. They subsequently remained unchanged in control patients (3.87+/-1.65) whereas they significantly increased after adenosine preconditioning (4.47+/-1.96, p<0.001 versus baseline values). However, peak postoperative values of troponin I (microg/L) were not significantly different between control (4.8+/-2.8) and adenosine-preconditioned patients (5.9+/-6.6) nor were the areas under the curve. There were no adverse effects related to adenosine. Conclusions: Adenosine, given at a clinically safe dose, can turn on the PKC-mediated signaling pathway involved in preconditioning but this biochemical event does not translate into reduced cell necrosis after coronary artery surgery, suggesting that a preconditioning-like protocol may not be the best suited for exploiting the otherwise well-documented cardioprotective effects of adenosine. abstract_id: PUBMED:8574025 Pretreatment of human myocardium with adenosine during open heart surgery. Background: Depressed myocardial performance after cardiac surgery can be attributed to ischemic reperfusion injury (IRI) incurred during and following the cardiopulmonary bypass (CPB). Myocardial preconditioning (PC) achieved by brief ischemia and subsequent reperfusion appears to be a clinically useful method of improving cardiac protection during surgery involving CPB by retarding IRI.
Based on animal studies, activation of cardiac adenosine (ADO) receptors prior to the prolonged ischemic period appears to mimic this PC phenomenon. Aims And Methods: We investigated whether the human myocardial PC can be mimicked with ADO in the setting of the coronary artery bypass graft (CABG). The specific proposed objective of this study was to determine whether ADO infusion just prior to starting the CPB can improve post-CPB myocardial hemodynamics. Patients undergoing elective CABG with poor ventricular function (ejection fraction approximately 30%), and with at least three-vessel disease were selected for this study (n = 7 ADO, and n = 7 control). Results: Our results show that ADO infusion (250-350 micrograms/kg X 10 min) just prior to CPB resulted in an immediately improved postbypass cardiac index (CI) in the OR (CI increase of 41.5% +/- 11.1% for ADO vs 9.7% +/- 6.0% for control, p < 0.05). Forty hours postoperatively in the intensive care unit, ADO patients had improved CI (3.3 +/- 0.2 L/min per m2 for ADO, vs 2.6 +/- 4 L/min per m2 for control, p < 0.05). ADO patients maintained lowered resting heart rate (90 +/- 6 for ADO, vs 108 +/- 4 for control, p < 0.05) 40 hours after surgery. ADO patients also released significantly less CPK during the first 24 hours of the postoperative period. Conclusion: Based on these measurements, ADO pretreated patients had improved ventricular performance postoperatively. It also appears that ADO pretreatment results in lowered postoperative myocardial energy demand and less myocellular injury during CPB. To our knowledge, this is the first study to demonstrate that human myocardium can be hemodynamically improved with ADO pretreatment, and may be protected against IRI incurred during and following the CPB. We believe that a cardiac surgeon may now have the unique opportunity to confer myocardial protection during and after a cardiac surgical procedure. Answer: The use of adenosine in pharmacologically preconditioning the human myocardium during coronary bypass surgery has been investigated in several studies, with varying results. One study found that adenosine pretreatment resulted in less creatine kinase (CK)-MB release and an improved postbypass cardiac index, suggesting that adenosine is cardioprotective in humans. However, this dose of adenosine failed to regulate the inflammatory responses after coronary artery bypass grafting (CABG) (PUBMED:11555521). Another study showed that adenosine infusion prior to cardiopulmonary bypass improved postbypass cardiac index in the operating room and 40 hours postoperatively in the intensive care unit, indicating improved ventricular performance and potential protection against ischemic reperfusion injury (PUBMED:8574025). Conversely, another study found no significant advantages in hemodynamic and laboratory measures when adenosine was used, suggesting that the cardioprotection provided by intermittent aortic cross-clamping (IAC) bypass surgery and hypothermia might obscure the beneficial effects of pharmacological preconditioning in patients with good left ventricular function (PUBMED:15956927). Similarly, another study reported that while adenosine could activate the protein kinase C-mediated signaling pathway involved in preconditioning, this did not translate into reduced cell necrosis after coronary artery surgery (PUBMED:10969685). 
Additionally, adenosine has been shown to increase graft flow during coronary bypass surgery with little effect on central hemodynamic parameters (PUBMED:2617242), and to increase myocardial blood flow and metabolism more pronouncedly than sodium nitroprusside (PUBMED:8452257). In summary, while there is evidence that adenosine can have cardioprotective effects and improve myocardial blood flow during coronary bypass surgery, the extent of its benefits and its ability to precondition the myocardium may vary among patients and depend on factors such as the presence of good left ventricular function and the specific surgical and patient conditions. Further research may be needed to fully understand the role of adenosine in pharmacological preconditioning during coronary bypass surgery.
Instruction: Evaluation of the surveillance of hemolytic uremic syndrome in British Columbia: should it remain reportable? Abstracts: abstract_id: PUBMED:18767272 Evaluation of the surveillance of hemolytic uremic syndrome in British Columbia: should it remain reportable? Background: Hemolytic Uremic Syndrome (HUS) was made reportable in British Columbia (BC) in 1998 to detect, control and prevent verotoxigenic Escherichia coli (VTEC) cases. Concerns about under-reporting of HUS cases triggered the assessment of the sensitivity and timeliness of the reporting process in order to guide recommendations around reportability of this syndrome in BC. Methods: The BC hospitalization database was used to estimate the total number of HUS cases from April 30, 1998 to December 31,2005. HUS and VTEC cases reported in the integrated Public Health Information System (iPHIS), and HUS cases reported by a surveillance form were linked to hospitalized cases. The proportion of HUS cases detected by each of the surveillance processes was assessed. The time interval between onset of diarrhea and reporting of HUS and VTEC cases to the BC Centre for Disease Control was compared. Results: 57 HUS cases were hospitalized. Sensitivity of reporting through the surveillance form and through iPHIS was 7.0% and 19.3%, respectively. The median time interval between onset of diarrhea and reporting of both HUS and VTEC cases to iPHIS was seven days. The median time interval for reporting HUS cases via the surveillance form was 25 days. Conclusions: HUS cases were severely under-reported, the timeliness of reporting of these cases had no advantage when compared to the reporting of VTEC cases, and no public health action aimed at reducing the transmission of VTEC infections resulted from this surveillance system. The reportability of HUS in BC needs to be reconsidered, or its surveillance considerably improved. abstract_id: PUBMED:23512362 Hospital surveillance during major outbreaks of community-acquired diseases. Pandemic Influenza Hospital Surveillance (PIKS) 2009/2010 and Surveillance of Bloody Diarrhea (SBD) 2011 Background And Objective: During the influenza pandemic 2009/2010 and the outbreak of entero-haemorrhagic Escherichia coli (EHEC)/hemolytic-uremic syndrome (HUS) 2011, the statutory reporting system in Germany was complemented by additional event-related surveillance systems in hospitals. The Pandemic Influenza Hospital Surveillance (PIKS) and the Surveillance of Bloody Diarrhea (SBD) were evaluated, to make experiences available for similar future situations. Methods: The description and evaluation of our surveillance systems is based on the "Updated Guidelines for Evaluating Public Health Surveillance Systems" published by the U.S. Centers for Disease Control and Prevention in 2001. Results: PIKS and SBD could be implemented quickly and were able to capture resilient data in a timely manner both on the severity and course of the influenza pandemic 2009/2010 and the outbreak of EHEC and HUS 2011. Although lacking in representativeness, sensitive and useful data were generated. Conclusion: In large outbreaks of severe diseases, the establishment of specific hospital surveillance should be considered as early as possible. In Germany, the participating hospitals were able to rapidly implement the required measures. abstract_id: PUBMED:31556405 Shiga toxin-producing Escherichia coli in British Columbia, 2011-2017: Analysis to inform exclusion guidelines. 
Background: Shiga toxin-producing Escherichia coli (STEC) can cause severe illness including bloody diarrhea and hemolytic-uremic syndrome (HUS) through the production of Shiga toxins 1 (Stx1) and 2 (Stx2). E. coli O157:H7 was the most common serotype detected in the 1980s to 1990s, but improvements in laboratory methods have led to increased detection of non-O157 STEC. Non-O157 STEC producing only Stx1 tend to cause milder clinical illness. Exclusion guidelines restrict return to high-risk work or settings for STEC cases, but most do not differentiate between STEC serogroups and Stx type. Objective: To analyze British Columbia (BC) laboratory and surveillance data to inform the BC STEC exclusion guideline. Methods: For all STEC cases reported in BC in 2011-2017, laboratory and epidemiological data were obtained through provincial laboratory and reportable disease electronic systems, respectively. Incidence was measured for all STEC combined as well as by serogroup. Associations were measured between serogroups, Stx types and clinical outcomes. Results: Over the seven year period, 984 cases of STEC were reported. A decrease in O157 incidence was observed, while non-O157 rates increased. The O157 serogroup was significantly associated with Stx2. Significant associations were observed between Stx2 and bloody diarrhea, hospitalization and HUS. Conclusion: The epidemiology of STEC has changed in BC as laboratories increasingly distinguish between O157 and non-O157 cases and identify Stx type. It appears that non-O157 cases with Stx1 are less severe than O157 cases with Stx2. The BC STEC exclusion guidelines were updated as a result of this analysis. abstract_id: PUBMED:26292181 Foodborne Diseases Active Surveillance Network-2 Decades of Achievements, 1996-2015. The Foodborne Diseases Active Surveillance Network (FoodNet) provides a foundation for food safety policy and illness prevention in the United States. FoodNet conducts active, population-based surveillance at 10 US sites for laboratory-confirmed infections of 9 bacterial and parasitic pathogens transmitted commonly through food and for hemolytic uremic syndrome. Through FoodNet, state and federal scientists collaborate to monitor trends in enteric illnesses, identify their sources, and implement special studies. FoodNet's major contributions include establishment of reliable, active population-based surveillance of enteric diseases; development and implementation of epidemiologic studies to determine risk and protective factors for sporadic enteric infections; population and laboratory surveys that describe the features of gastrointestinal illnesses, medical care-seeking behavior, frequency of eating various foods, and laboratory practices; and development of a surveillance and research platform that can be adapted to address emerging issues. The importance of FoodNet's ongoing contributions probably will grow as clinical, laboratory, and informatics technologies continue changing rapidly. abstract_id: PUBMED:9458574 British Paediatric Surveillance Unit annual report. N/A abstract_id: PUBMED:22572665 Strategies for surveillance of pediatric hemolytic uremic syndrome: Foodborne Diseases Active Surveillance Network (FoodNet), 2000-2007. Background: Postdiarrheal hemolytic uremic syndrome (HUS) is the most common cause of acute kidney failure among US children. 
The Foodborne Diseases Active Surveillance Network (FoodNet) conducts population-based surveillance of pediatric HUS to measure the incidence of disease and to validate surveillance trends in associated Shiga toxin-producing Escherichia coli (STEC) O157 infection. Methods: We report the incidence of pediatric HUS, which is defined as HUS in children <18 years. We compare the results from provider-based surveillance and hospital discharge data review and examine the impact of different case definitions on the findings of the surveillance system. Results: During 2000-2007, 627 pediatric HUS cases were reported. Fifty-two percent of cases were classified as confirmed (diarrhea, anemia, microangiopathic changes, low platelet count, and acute renal impairment). The average annual crude incidence rate for all reported cases of pediatric HUS was 0.78 per 100,000 children <18 years. Regardless of the case definition used, the year-to-year pattern of incidence appeared similar. More cases were captured by provider-based surveillance (76%) than by hospital discharge data review (68%); only 49% were identified by both methods. Conclusions: The overall incidence of pediatric HUS was affected by key characteristics of the surveillance system, including the method of ascertainment and the case definitions. However, year-to-year patterns were similar for all methods examined, suggesting that several approaches to HUS surveillance can be used to track trends. abstract_id: PUBMED:30128150 Haemolytic uremic syndrome surveillance in children less than 15 years in Belgium, 2009-2015. Background: The Haemolytic Uremic Syndrome (HUS) is the most severe manifestation of infection with Shiga toxin-producing Escherichia coli (STEC). In Belgium, the surveillance of paediatric HUS cases is conducted by a sentinel surveillance network of paediatricians called Pedisurv. In this article, we present the main findings of this surveillance from 2009 to 2015 and we describe an annual incidence of HUS. Methods: For each case of HUS < 15 years notified by the paediatricians, clinical, microbiological and epidemiological data were collected by a questionnaire. National hospital discharge data with ICD-9 code 283.11 were used to calculate the incidence of HUS in children < 15 years. Results: From 2009 to 2015, 110 cases were notified to the Pedisurv network with a mean annual notification rate of 0.8/100,000 in children < 15 years. Death occurred in 2.5% of all patients and the median number of days of hospitalization was 10 days. One third (35.4%) of the HUS cases were confirmed positive STEC, with a majority of STEC O157. The mean annual incidence based on the hospital discharge data was 3.2/100,000 in children < 15 years and 4.5/100,000 in children < 5 years. Conclusion: The incidence of paediatric HUS in Belgium is high compared to other European countries. Its surveillance in Belgium is quite comprehensive and, although less effective than monitoring all STEC infections to detect the emergence of outbreaks, is important to better monitor circulation of the most pathogenic STEC strains. In this context, efforts are still needed to send samples and STEC strains from HUS cases to the National Reference Centre. abstract_id: PUBMED:21699769 Enhanced surveillance during a large outbreak of bloody diarrhoea and haemolytic uraemic syndrome caused by Shiga toxin/verotoxin-producing Escherichia coli in Germany, May to June 2011. Germany has a well established broad statutory surveillance system for infectious diseases. 
In the context of the current outbreak of bloody diarrhoea and haemolytic uraemic syndrome caused by Shiga toxin/verotoxin-producing Escherichia coli in Germany, it became clear that the provisions of the routine surveillance system were not sufficient for an adequate response. This article describes the timeline and concepts of the enhanced surveillance implemented during this public health emergency. abstract_id: PUBMED:38310676 Genomic surveillance of STEC/EHEC infections in Germany 2020 to 2022 permits insight into virulence gene profiles and novel O-antigen gene clusters. Shiga toxin-producing E. coli (STEC), including the subgroup of enterohemorrhagic E. coli (EHEC), are important bacterial pathogens which cause diarrhea and the severe clinical manifestation hemolytic uremic syndrome (HUS). Genomic surveillance of STEC/EHEC is a state-of-the-art tool to identify infection clusters and to extract markers of circulating clinical strains, such as their virulence and resistance profiles, for risk assessment and implementation of infection prevention measures. The aim of the study was characterization of the clinical STEC population in Germany for establishment of a reference data set. To that end, 1257 STEC isolates from 2020 to 2022, including 39 of known HUS association, were analyzed, and 30.4% of them were assigned to 129 infection clusters. Major serogroups in all clinical STEC analyzed were O26, O146, O91, O157, O103, and O145; and in HUS-associated strains were O26, O145, O157, O111, and O80. stx1 was found less frequently, and stx2 or a combination of stx, eaeA and ehxA more frequently, in HUS-associated strains. Predominant stx gene subtypes in all STEC strains were stx1a (24%) and stx2a (21%), and in HUS-associated strains were mainly stx2a (69%) and the combination of stx1a and stx2a (12.8%). Furthermore, two novel O-antigen gene clusters (RKI6 and RKI7) and strains of serovars O45:H2 and O80:H2 showing multidrug resistance were detected. In conclusion, the implemented surveillance tools now make it possible to comprehensively define the population of clinical STEC strains, including those associated with the severe disease manifestation HUS, reaching a new surveillance level in Germany. abstract_id: PUBMED:9241128 Surveillance for Escherichia coli O157:H7 infections in Minnesota by molecular subtyping. Background: Escherichia coli O157:H7 is a leading cause of diarrhea and the hemolytic-uremic syndrome. Current public health surveillance for E. coli O157:H7 requires considerable resources; traditional methods lack the sensitivity and specificity to detect outbreaks effectively. Methods: During 1994 and 1995, the Minnesota Department of Health requested that all clinical isolates of E. coli O157:H7 be submitted to our laboratory. Isolates were subtyped by pulsed-field gel electrophoresis (PFGE), and patients were interviewed about potential sources of infection. Results: In 1994 and 1995, 344 cases of E. coli O157:H7 infection were reported to the Minnesota Department of Health; 317 (92 percent) were subtyped by PFGE, and 143 distinct PFGE patterns were identified. Ten outbreaks of E. coli O157:H7 were identified; these accounted for 56 (18 percent) of the 317 subtyped cases. Four outbreaks were detected solely as a result of subtype-specific surveillance. In 11 two-week periods, the number of reported cases of E. coli O157:H7 doubled from the previous two weeks. In eight of these instances, the patterns identified were dissimilar and there were no outbreaks.
Two of the remaining three increases resulted from multiple simultaneous outbreaks. Conclusions: Subtype-specific surveillance for E. coli O157:H7 can identify outbreaks that are not detected by traditional methods and can ascertain whether sudden increases in reported cases are due to sporadic isolated cases or to one or more outbreaks. Answer: The evaluation of the surveillance of hemolytic uremic syndrome (HUS) in British Columbia (BC) indicates that there are significant issues with the current reporting system that need to be addressed. The study found that HUS cases were severely under-reported, with the sensitivity of reporting through the surveillance form and through the integrated Public Health Information System (iPHIS) being only 7.0% and 19.3%, respectively (PUBMED:18767272). Additionally, the timeliness of reporting HUS cases had no advantage when compared to the reporting of verotoxigenic Escherichia coli (VTEC) cases, and no public health action aimed at reducing the transmission of VTEC infections resulted from this surveillance system (PUBMED:18767272). Given these findings, the reportability of HUS in BC needs to be reconsidered, or its surveillance considerably improved. The current system's lack of sensitivity and timeliness undermines its effectiveness in detecting, controlling, and preventing VTEC cases, which is the primary purpose of making HUS reportable. To improve the surveillance of HUS and similar diseases, the establishment of specific hospital surveillance during major outbreaks should be considered as early as possible, as demonstrated by the successful implementation of the Pandemic Influenza Hospital Surveillance (PIKS) and the Surveillance of Bloody Diarrhea (SBD) during the influenza pandemic and EHEC/HUS outbreak in Germany (PUBMED:23512362). These systems were able to capture resilient data in a timely manner, although they lacked representativeness. In conclusion, the surveillance of HUS in BC as it currently stands is not sufficiently effective, and significant improvements or a reevaluation of its reportability are necessary to ensure that the surveillance system fulfills its intended public health functions.
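The reporting sensitivities (7.0% and 19.3%) and median delays quoted in this answer come from linking the cases captured by each reporting stream to the reference list of hospitalized HUS cases, and from the interval between onset of diarrhea and report. The minimal Python sketch below shows that style of calculation; the case identifiers and dates are invented and serve only to illustrate the arithmetic, not to reproduce the data of PUBMED:18767272.

from datetime import date
from statistics import median

# Invented reference list of hospitalized HUS cases and the subset captured by one reporting stream.
hospitalized = {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J"}
reported = {"B", "E"}  # cases found in the reporting system and linkable to a hospitalization

sensitivity = len(reported & hospitalized) / len(hospitalized)
print(f"reporting sensitivity: {sensitivity:.1%}")  # 2/10 -> 20.0%

# Timeliness: days between onset of diarrhea and report for each captured case (invented dates).
onset_and_report = [
    (date(2005, 6, 1), date(2005, 6, 8)),
    (date(2005, 7, 10), date(2005, 7, 17)),
    (date(2005, 8, 2), date(2005, 8, 30)),
]
delays = [(reported_on - onset).days for onset, reported_on in onset_and_report]
print(f"median reporting delay: {median(delays)} days")  # median of [7, 7, 28] -> 7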
Instruction: Is impaired set-shifting an endophenotype of anorexia nervosa? Abstracts: abstract_id: PUBMED:25046823 Cognitive deficits as an endophenotype for anorexia nervosa: an accepted fact or a need for re-examination? Objective: To investigate whether impaired set shifting and weak central coherence represent state or trait characteristics and, therefore, candidate endophenotypes of anorexia nervosa (AN). Method: Forty-nine individuals with lifetime AN (24 acutely unwell, 10 weight recovered, and 15 fully recovered) and 43 healthy controls completed the Wisconsin Card Sorting Test (WCST), the Matching Familiar Figures Test, and the Rey Complex Figure Task measuring cognitive flexibility, local processing, and global processing, respectively. Participants also completed questionnaires assessing eating disorder, anxiety and depressive symptoms, obsessional traits, interpersonal functioning, and quality of life. Body mass index was calculated from height and weight measurements. Results: Participants with lifetime AN demonstrated poorer set shifting ability than healthy controls as evidenced by a greater number of perseverative errors on the WCST. When participants were grouped according to illness status, only those in the two recovered groups demonstrated poorer set shifting ability than healthy controls while patients with acute AN performed comparably to all other groups. There were no significant differences between groups on measures of local and global processing. No relationship was found between specific clinical features of AN and cognitive performance. Discussion: The results of this study are consistent with a global trend toward set shifting difficulties in patients with AN but do not support weak central coherence as a candidate endophenotype for AN. These findings have clinical implications in terms of treatment selection and planning, particularly in relation to the use of cognitive remediation therapy with patients with AN. abstract_id: PUBMED:16330590 Is impaired set-shifting an endophenotype of anorexia nervosa? Objective: Set-shifting difficulties have been reported in subjects with anorexia nervosa and appear to persist after recovery; therefore, they may be endophenotypic traits. The goals of this study were to investigate whether set-shifting difficulties are familial by examining discordant sister-pairs in comparison with healthy unrelated women and to replicate, with a broader battery, the lack of influence of an acute illness state on neuropsychological performance. Method: Forty-seven pairs of sisters discordant for anorexia nervosa and 47 healthy unrelated women who were comparable in age and IQ completed neuropsychological tasks selected to assess set-shifting ability. Analyses of variance with standard errors that are robust against correlations within family clusters were used to compare the groups. Results were adjusted for obsessive-compulsive, anxiety, and depression symptoms. Subjects with acute (N=24) and fully remitted (N=23) anorexia nervosa were compared to assess state versus trait effects. Results: Sisters with and without anorexia nervosa took significantly longer than unrelated healthy women to shift their cognitive set (CatBat task) and demonstrated greater perceptual rigidity (Haptic Illusion task) but did not differ significantly from each other. Women with anorexia nervosa were slower than other groups on Trail Making tasks. 
Women who had fully recovered from anorexia nervosa made significantly fewer errors than those with acute anorexia nervosa on the Trail Making alphabet task, but these subgroups did not differ on other measures. Conclusions: Both affected and unaffected sisters had more set-shifting difficulties than unrelated healthy women. This finding, together with the replicated finding that set-shifting difficulties persist after recovery, suggests that set-shifting difficulties are trait characteristics and may inform the search for the endophenotype in anorexia nervosa. abstract_id: PUBMED:21253415 Impaired Set-Shifting Ability in Patients with Eating Disorders, Which Is Not Moderated by Their Catechol-O-Methyltransferase Val158Met Genotype. The aim of this study was to examine the set-shifting ability in women with both anorexia nervosa (AN) and bulimia nervosa (BN) and to investigate whether it is moderated by the catechol-O-methyltransferase (COMT) Val158Met genotype. A total of 102 Korean participants (40 women with lifetime AN, 28 women with lifetime BN, and 34 healthy women of comparable age and intelligence quotient) were examined. A neuropsychological battery of tests was applied and blood samples were obtained for COMT Val158Met genotyping. Set-shifting impairments on the Trail Making Test (TMT, Part B) were found in patients with AN and BN. Furthermore, the eating disorders were also linked to deficits in attentional mechanisms (TMT, Part A) and motor skills (Finger Tapping Test). Finally, set-shifting and its link to eating disorders were not moderated by COMT Val158Met genotype. abstract_id: PUBMED:25897402 Are poor set-shifting abilities associated with a higher frequency of body checking in anorexia nervosa? Background: The rigid and obsessional features of anorexia nervosa (AN) have led researchers to explore possible underlying neuropsychological difficulties. Numerous studies have demonstrated poorer set-shifting in patients with AN. However, due to a paucity of research on the connection between neuropsychological difficulties and the clinical features of AN, the link remains hypothetical. The main objective of this study was to explore the association between set-shifting and body checking. Methods: The sample consisted of 30 females diagnosed with AN and 45 healthy females. Set-shifting was assessed using the Wisconsin Card Sorting Test (WCST) and frequency of body checking was assessed using the Body Checking Questionnaire (BCQ). Results: The analysis showed no significant correlations between any of the WCST scores and the BCQ. Conclusion: The results suggest that there is no association between set-shifting difficulties and frequency of body checking among patients with AN. An alternative explanation could be that the neuropsychological measure included in this study is not sensitive to the set-shifting difficulties observed in clinical settings. We recommend that future studies include more ecologically valid measures of set-shifting in addition to standard neuropsychological tests. abstract_id: PUBMED:30856379 Set-shifting in adolescents with weight-restored anorexia nervosa and their unaffected family members. Set-shifting difficulties have been suggested to underlie rigid and inflexible thinking in patients with anorexia nervosa (AN). Studies reported set-shifting deficiencies in adults with AN and also in their unaffected family members, suggesting that set-shifting deficits are heritable in AN.
Surprisingly, studies failed to show set-shifting difficulties in adolescents with AN. If set-shifting difficulties are heritable, it is not clear why they are absent in adolescents with AN. The current study aimed to elucidate this discrepancy by assessing several components of set-shifting in adolescents with weight-restored AN (WR-AN) and their unaffected parents and siblings. Twenty-one families that include an adolescent who was diagnosed with AN prior to weight restoration (N = 19), an unaffected parent (N = 18), and an unaffected sibling (N = 20) were recruited. Additionally, 28 healthy control families were recruited and included an age-matched adolescent (N = 27), a parent (N = 26), and a sibling (N = 17). Visual-motor set-shifting, verbal set-shifting, and set-shifting clean of inhibition were assessed using the Delis-Kaplan Executive Function System. The results revealed intact set-shifting in parents and siblings of adolescents with WR-AN. Surprisingly, the results revealed superior visual-motor and verbal set-shifting in adolescents with WR-AN compared to age-matched controls. However, when controlling for inhibition abilities, poorer set-shifting was revealed in adolescents with WR-AN. The results suggest that superior inhibition abilities in adolescents with WR-AN may compensate for their set-shifting deficiencies. The study emphasizes the importance of controlling for inhibition abilities when assessing neurocognitive functioning in adolescents with AN. Furthermore, the study does not support the notion that set-shifting deficits are heritable in adolescent AN. abstract_id: PUBMED:24802417 Set-shifting and its relation to clinical and personality variables in full recovery of anorexia nervosa. Objective: First, this study aimed to explore whether set-shifting is inefficient after full recovery of anorexia nervosa (recAN). Second, this study wanted to explore the relation of set-shifting to clinical and personality variables. Method: A total of 100 recAN women were compared with 100 healthy women. Set-shifting was assessed with Berg's Card Sorting Test. Expert interviews yielded assessments for the inclusion/exclusion criteria, self-ratings for clinical and personality variables. Results: Compared with the healthy control group, the recAN participants achieved fewer categories, showed more perseverations and spent less time for shifting set. Perfectionism is correlated with set-shifting but in converse directions in the two groups. Discussion: Our study supports the findings of inefficiencies in set-shifting after full recovery from AN. Higher perfectionism in the recAN group is associated with better set-shifting ability, whereas higher perfectionism in the healthy control group is related to worse set-shifting ability. abstract_id: PUBMED:22748187 Is impaired set-shifting a feature of "pure" anorexia nervosa? Investigating the role of depression in set-shifting ability in anorexia nervosa and unipolar depression. Impaired set-shifting has been reported in patients with anorexia nervosa (AN) and in patients with affective disorders, including major depression. Due to the prevalent comorbidity of major depression in AN, this study aimed to examine the role of depression in set-shifting ability. Fifteen patients with AN without a current comorbid depression, 20 patients with unipolar depression (UD) and 35 healthy control participants were assessed using the Trail Making Test (TMT), the Wisconsin Card Sorting Test (WCST) and a Parametric Go/No-Go Test (PGNG). 
Set-shifting ability was intact in patients with AN without a comorbid depression. However, patients with UD performed significantly worse in all three tasks compared to AN patients and in the TMT compared to healthy control participants. In both patient groups, set-shifting ability was moderately negatively correlated with severity of depressive symptoms, but was unrelated to BMI and severity of eating disorder symptoms in AN patients. Our results suggest a pivotal role of comorbidity for neuropsychological functioning in AN. Impairments of set-shifting ability in AN patients may have been overrated and may partly be due to comorbid depressive disorders in investigated patients. abstract_id: PUBMED:20398910 Exploring the neurocognitive signature of poor set-shifting in anorexia and bulimia nervosa. Poor set-shifting has been implicated as a risk marker, maintenance factor and candidate endophenotype of eating disorders (ED). This study aimed to add clarity to the cognitive profile of set-shifting by examining the trait across ED subtypes, assessing whether it is a state or trait marker, and whether it runs in families. A battery of neuropsychological tasks was administered to 270 women with current anorexia (AN) and bulimia nervosa (BN), women recovered from AN, unaffected sisters of AN and BN probands, and healthy control women. Set-shifting was examined using both individual task scores and a composite variable (poor/intact/superior shifting) calculated from four neuropsychological tasks. Poor set-shifting was found at a higher rate in those with an ED, particularly binge/purging subtypes. Some evidence for poor set-shifting was also present in those recovered from AN and in unaffected sisters of AN and BN. Clinically, poor set-shifting was associated with a longer duration of illness and more severe ED rituals but not body mass index. In sum, poor set-shifting is a transdiagnostic feature related to aspects of the illness but not to malnutrition. In part it is a familial trait, and is likely involved in the maintenance of the illness. abstract_id: PUBMED:35060623 Is set-shifting a risk factor for anorexia nervosa or a broader range of disordered eating? Steegers et al., using data from the Generation R cohort study, examined whether set-shifting difficulties at age 4 predict body dissatisfaction, weight status, and restrictive eating at age 9 as a way to assess whether cognitive rigidity is a risk factor for an eating disorder. The authors concluded that set-shifting difficulties were predictive of features of anorexia nervosa. An alternative interpretation is that set-shifting is predictive of large weight changes and eating behaviors more generally. It is not clear that set-shifting predicts becoming underweight. Set-shifting appears protective against larger weight gains. It is possible that set-shifting is related to the degree to which the children resist the myriad of cues to eat for reasons other than hunger. Whether this is due to a higher degree of awareness, genetics, or other reasons should be investigated. The results demonstrate that the cognitive processes that relate to restrictive behaviors are complex. To better understand those relationships, it will be essential for the field to adopt a conceptualization of eating disorders that includes the earliest stages (disordered eating) of the full range of eating disorders. abstract_id: PUBMED:31145908 Functional MRI with a set-shifting task in adolescent anorexia nervosa: A cross-sectional and follow-up study.
Studies of set-shifting in adolescent AN present conflicting results, since not all have found differences with regard to controls. To date, no functional Magnetic Resonance Imaging (fMRI) studies have been carried out in adolescent patients, nor have patients been assessed after weight recovery. In this study, 30 female AN patients aged 12-17 and 16 matched control subjects were assessed both at baseline and after six months and renutrition using a structured diagnostic interview, clinical and neurocognitive scales, and fMRI during a set-shifting task. Adolescent AN patients presented similar performance on different neurocognitive tests and also on a set-shifting task during fMRI, but they showed a lower activation in the inferior and middle occipital and lingual gyri, fusiform gyri and cerebellum during the set-shifting task. No correlations were found between decreased activation and clinical variables such as body mass index, eating or depressive symptoms. After six months of treatment and renutrition in AN patients, there were no differences between patients and controls. These results show that adolescent AN patients have lower activation in relevant brain areas during a set-shifting task, and support the use of fMRI with set-shifting paradigms as a biomarker in future studies. Answer: Impaired set-shifting has been suggested as a potential endophenotype for anorexia nervosa (AN). Several studies have investigated this cognitive feature in individuals with AN, their recovered counterparts, and family members to determine whether it represents a state or trait characteristic of the disorder. Research has shown that individuals with lifetime AN, including those who are acutely unwell, weight recovered, and fully recovered, demonstrate poorer set-shifting ability than healthy controls, as evidenced by a greater number of perseverative errors on the Wisconsin Card Sorting Test (WCST) (PUBMED:25046823). This finding is consistent with the hypothesis that set-shifting difficulties may be a trait characteristic of AN. Further supporting this notion, a study found that both affected and unaffected sisters of individuals with AN had more set-shifting difficulties than unrelated healthy women, suggesting that these difficulties may be trait characteristics and could inform the search for an endophenotype in AN (PUBMED:16330590). Additionally, set-shifting impairments were found in patients with AN and bulimia nervosa (BN), and these deficits were not moderated by the catechol-O-methyltransferase (COMT) Val158Met genotype, indicating a potential cognitive marker for eating disorders (PUBMED:21253415). However, the relationship between set-shifting and clinical features of AN is not straightforward. One study found no significant correlation between set-shifting difficulties and the frequency of body checking among patients with AN (PUBMED:25897402). Another study revealed that while set-shifting deficits were present in adults with AN and their unaffected family members, they were not observed in adolescents with weight-restored AN, suggesting that superior inhibition abilities in adolescents may compensate for their set-shifting deficiencies (PUBMED:30856379). Moreover, set-shifting inefficiencies were observed after full recovery from AN, and these were associated with higher perfectionism in the recovered AN group, indicating a possible link between set-shifting and personality variables (PUBMED:24802417). 
However, another study suggested that the role of depression in set-shifting ability is significant, as patients with AN without comorbid depression did not show impaired set-shifting, while those with unipolar depression did (PUBMED:22748187).
Instruction: Is clinical breast examination effective in Japan? Abstracts: abstract_id: PUBMED:24925524 Is clinical breast examination effective in Japan? Consideration from the age-specific performance of breast cancer screening combining mammography with clinical breast examination. Background: There is controversy about the value of clinical breast examination (CBE) in breast cancer screening programs that include mammography. Methods: In Fukui Prefecture, a screening combining mammography with CBE was employed on 62,447 women from 2004 to 2009. We examined the sensitivity and specificity of mammography alone, and mammography and CBE together for each age group (40-49, 50-59, 60-69, and 70-79). Results: 167 breast cancers and 49 false-negative cancers were detected during 5 years. For the combined screening, the sensitivities were 73.1, 74.1, 78.3, and 86.5 %, and the specificities were 83.8, 87.5, 89.8, and 90.9 % in the groups of 40-49, 50-59, 60-69, and 70-79 years, respectively. In the mammography-specific analysis, sensitivity decreased to 69.8 % (-3.3 %), 66.7 % (-7.7 %), 77.3 % (-1.0 %), and 83.8 % (-2.7 %) in the groups of 40-49, 50-59, 60-69, and 70-79 years, respectively. There were greater reductions in the groups of 40-49 and 50-59 years than in those of 60-69 and 70-79 years, but there was no statistically significant decrease. Specificity generally increased with increasing age and there was a significant improvement in specificity among all age groups, except that of 70-79 years. Conclusions: Our findings suggest that there is a trade-off between sensitivity and specificity associated with CBE added to mammography. This tendency is greater in those 40-50 years of age than in those 60-70 years of age. We consider that CBE may be omitted from breast cancer screening among women aged 60 and 70 years. Furthermore, another modality to complement mammography screening in younger Japanese women is expected. abstract_id: PUBMED:10429651 A case control study on the effectiveness of breast cancer screening by clinical breast examination in Japan. A case-control study was conducted in Miyagi and Gunma prefectures, Japan, to evaluate the effectiveness of breast cancer screening by clinical breast examination (CBE) alone in reducing breast cancer mortality. Case subjects, who were female and had died of breast cancer, were collected from residential registry files and medical records. Control subjects matched in sex, age and residence were randomly selected from residential registry files. The screening histories during 5 years prior to the cases having been diagnosed as breast cancer were surveyed using the examinee files of the screening facilities. Finally, the data of 93 cases and 375 controls were analyzed. The odds ratio (OR) of breast cancer death for participating in screening at least once during 5 years was 0.93 (95% confidence interval (95% CI) 0.48-1.79). The cases were more symptomatic than the controls when screened. If the participants who had had symptoms in their breasts were classified as not screened, the OR decreased to 0.56 (95% CI 0.27-1.18). The case control study suggests that the current screening modality (CBE) lacks effectiveness (OR = 0.93), although it might be effective for an asymptomatic population (OR = 0.56). The number of cases was small, and a larger case-control study is desirable to define whether CBE is effective or not. However, it is necessary to consider the introduction of mammographic screening to reduce breast cancer mortality in Japan. 
abstract_id: PUBMED:9458312 The effect of mass screening by physical examination combined with regular breast self-examination on clinical stage and course of Japanese women with breast cancer. A mass screening program for breast cancer in Japan consists of physical examination (PE) and education on regular breast self-examination (BSE). The effects of PE with BSE on the clinical stages and courses of breast cancer patients were retrospectively analyzed. Clinical stages and courses were compared between: i) patients who were examined in outpatient clinics (OPC, n=587), ii) patients who were detected by mass screening with regular BSE [BSE(+), n=68], and iii) those detected by mass screening without BSE [BSE(-), n=178]. Clinical stage in BSE(+) was significantly earlier than that in BSE(-) or OPC. As early stage cancer was most common in BSE(+), conservative surgery was mostly selected. Survival curves in BSE(+) were significantly better than those in BSE(-) or OPC. BSE complements the role of mass screening by PE for early detection and a more favorable clinical course. abstract_id: PUBMED:21596057 The role of clinical breast examination and breast self-examination. The efficacy of screening by clinical breast examination (CBE) and/or breast self-examination (BSE) is reviewed using indirect evidence from randomized breast screening trials and that from observational studies. In countries where breast cancer is diagnosed at an advanced stage, screening by CBE with the teaching of BSE as an integral component will probably be effective in reducing breast cancer mortality. However, in technically advanced countries where adequate treatment is given, no screening modality is likely to be sufficiently beneficial to outweigh the harms of screening, especially false positives and over-diagnosis. abstract_id: PUBMED:1544105 Clinical breast examination and breast self-examination. Past and present effect on breast cancer survival. Increasing attention to self-detection of breast masses and clinical breast examination during this century has contributed to a progressive reduction in the size of breast cancers at detection and a progressive improvement in survival. Mammography is more sensitive than breast palpation for the detection of breast cancer; however, mammography does not detect all palpable cancers and additional interval cancers become palpable between screenings. Breast self-examination, clinical breast examination, and mammography are complementary screening modalities. In populations where mammography is not available or is not appropriate as a screening modality, clinical breast examination and breast self-examination are particularly important. abstract_id: PUBMED:28764277 Evaluation of the Efficacy of Clinical Breast Examination Gloves in the Diagnosis of Breast Lumps. Introduction: Recent studies have questioned the efficacy of mammography in reducing breast cancer-related mortality. Additionally, the efficacies of commercially available gloves marketed as aiding the detection of breast lumps have not been independently verified. Aim: To evaluate the efficacy of clinical breast examination gloves in the detection of breast lumps. Materials And Methods: During the period from October 2011 to June 2012, patients underwent clinical examination with and without gloves. This prospective study involved 202 patients who underwent conventional clinical breast examination (test 1) or clinical breast examination with Sensifemme® gloves (test 2).
All patients underwent subsequent bilateral ultrasonography (test 3) to confirm the findings of the physical examinations. The Chi-square test was used to compare values, while the kappa concordance index was used to determine the concordance between the diagnostic tests. Results: The mean age of the patients was 43 years; 298 breast lumps were detected. In the clinical examination group (test 1), sensitivity was 54%, specificity was 78%, and accuracy was 57%. These rates for clinical breast examinations with gloves (test 2) were 68%, 58%, and 66%, respectively. The glove increased the diagnosis of breast nodules by 14%; the rate of false-positives was also higher (42% for test 2 compared to 22% for test 1). The accuracy of the glove was found to be superior to clinical examination after 100 patients had been examined. The kappa indices for test 1 vs. test 3 and for test 2 vs. test 3 were 0.15 and 0.16, respectively. Conclusion: Clinical examination using the glove was more effective than clinical examination with bare hands for the diagnosis of breast lumps, as it increased the sensitivity and accuracy of lump detection. However, this was at the expense of a higher false-positive rate, which can lead to further tests, unnecessary biopsies, and patient anxiety. The concordance of clinical examination results (whether performed with or without the glove) with those of ultrasonography is weak. Moreover, the glove has a steep learning curve that may discourage its use in certain circumstances. abstract_id: PUBMED:8402601 Relationships of barriers and facilitators to breast self-examination, mammography, and clinical breast examination in a worksite population. The American Cancer Society recommends a regimen for breast cancer screening that includes mammograms, clinical breast examination, and breast self-examination. Compliance with breast cancer screening guidelines has been linked to a number of barriers and facilitators. These barriers and facilitators seem to lie within the cognitive framework and generalized beliefs of women, and in the situational contexts in which they lead their lives. A comprehensive study was designed to investigate variables related to breast cancer screening behaviors (breast self-examination, mammography, and clinical breast examination) of working women > or = 35 years of age at their worksite environments. A factor analysis identified similar sets of composite variables related to each of the screening modalities, and a discriminant analysis was performed for each screening technique to identify those variables that were most significant in predicting compliance with screening guidelines. The variables discomfort, perceived efficacy, and desire for control over health were significant for all three screening behaviors. Perceived importance was identified as a fourth variable for mammography and clinical breast examination, and lack of knowledge was a fourth variable for breast self-examination. Effective breast cancer screening programs involve all three screening techniques. In the design of education and intervention programs at worksites, it is critical to emphasize the commonalities of the variables that emerged in this study as important for each screening technique. Health-care professionals who implement such intervention programs need to explore and bring into the open these common barriers and facilitators to maximize working women's compliance with breast screening guidelines. 
abstract_id: PUBMED:36119195 Factors related to clinical breast examination: A cross-sectional study. Background And Aim: Breast cancer is one of the most common types of cancer among women as well as one of the most serious and important public health issues in developing countries. The aim of the present study was to evaluate the factors related to clinical breast examination in women in Tehran. Method: This cross-sectional study was conducted on 859 women in Tehran, Iran in 2020. Logistic regression was applied to identify determinant factors that related to clinical breast examination. Result: The prevalence of clinical breast examination was 52.6%. Results indicated significant differences between those who underwent clinical breast examination and those who had a nonclinical breast examination in terms of age, housing conditions, marital status, problem in the breast, perceived susceptibility, perceived barriers, fatalism, and self-care. Conclusion: It is essential to inform and educate women about breast cancer and associated complications and problems after being diagnosed with breast cancer as well as about the screening and diagnostic methods, including the need for clinical breast examination by a specialist. abstract_id: PUBMED:11029764 Effectiveness of mass screening for breast cancer in Japan. Background: Breast cancer screening has been conducted in Japan mainly by physical examination, the standard method for breast cancer screening according to the Law of Health Services for the Elderly. The purpose of this study was to evaluate the effectiveness of mass screening for breast cancer in Japan. Methods: We calculated the average coverage-rates for breast cancer screening per year from 1986 to 1995 for women aged 30-69 years for all of the 3255 municipalities in Japan, selecting "high coverage-rate" municipalities with average coverage-rates of 20%, 30%, 40% or more. Two municipalities were selected as "controls" for each high coverage-rate municipality, and were matched for population, National Health Insurance rate, and the age-adjusted death rate from cancer of the female breast in the period 1986-90. We compared the change in the age-adjusted death rate from 1986-90 to 1991-95 of the high coverage-rate municipalities and the comparable controls. Results: The percent reduction in the age-adjusted death rate from cancer of the breast in the high coverage-rate municipalities was statistically significantly greater than those in the controls. Conclusions: The results suggest that mass screening for female breast cancer, mainly by physical examination, contributed to the reduction of mortality from breast cancer. abstract_id: PUBMED:16286905 Mammographic screening for non-palpable breast cancer in Japan. In Japan, mammography was endorsed for breast cancer screening in women aged 40 and over by the Ministry of Health, Labor and Welfare in 2004. The spread of mammographic screening has caused an increase in the incidence of non-palpable breast cancer. Precision reading for mammography is necessary to detect non-palpable breast cancer. When mass and focal asymmetric density is noted, it is important to analyze the density, density gradient, internal structure, margin and associated findings. Calcifications are classified by morphology and distribution mainly. It is necessary to distinguish secretor-type calcifications from necrotic-type calcifications. 
The Iwate Cancer Association performed a population-based screening program for breast cancer using mammography combined with clinical breast examination of 42,065 women in Iwate Prefecture from 1999 to 2003. A total of 2,329 (5.7%) women were recalled and the 112 cases (0.27%) of cancer were detected in 114 breasts. Of 114 breast cancers, 40 (35%) were non-palpable and 74 (65%) were palpable. The early breast cancer (stage 0 and I) rate of the non-palpable group was significantly higher than that of the palpable group (90% and 47%, p = 0.0003). The node-negative rate of the non-palpable group was significantly higher than that of the palpable group (92% and 68%, p = 0.011). It is expected that mammographic screening will be expanded and that the mortality rate from breast cancer in Japan will decrease, as in Europe and the United States. Answer: The effectiveness of clinical breast examination (CBE) in Japan has been evaluated through various studies, with mixed results. In Fukui Prefecture, a screening program combining mammography with CBE was employed, and the findings suggested that there is a trade-off between sensitivity and specificity associated with adding CBE to mammography. The study indicated that CBE may be omitted from breast cancer screening among women aged 60 and 70 years, and another modality to complement mammography screening in younger Japanese women is expected (PUBMED:24925524). A case-control study in Miyagi and Gunma prefectures evaluated the effectiveness of CBE alone in reducing breast cancer mortality. The results suggested that the current screening modality (CBE) lacks effectiveness (OR = 0.93), although it might be effective for an asymptomatic population (OR = 0.56). The study concluded that it is necessary to consider the introduction of mammographic screening to reduce breast cancer mortality in Japan (PUBMED:10429651).
Instruction: Solitary Hepatic Nodule Adjacent to the Right Portal Vein: A Common Finding of Alagille Syndrome? Abstracts: abstract_id: PUBMED:26284540 Solitary Hepatic Nodule Adjacent to the Right Portal Vein: A Common Finding of Alagille Syndrome? Background: Hepatic lesions have been described in Alagille syndrome (ALGS) in isolated case reports, and most of these have been reported to be hepatocellular carcinoma. Objectives: The aim of the present study was to determine the frequency, imaging, and histopathologic characteristics of hepatic lesions in children with ALGS. Methods: Available abdominal imaging of children with ALGS was retrospectively reviewed to note the presence of any focal liver lesion, its location, and imaging characteristics. Other findings including signs of portal hypertension, portal lymph nodes, and splenic and renal abnormalities were also noted. Findings were correlated with pathology in available cases and with clinical follow-up. Results: Of 55 children with clinically and/or genetically confirmed ALGS followed in the liver clinic, 39 (19 boys, 20 girls; mean age 8.9 years) with imaging available on picture archival and communication system were included in the study. Focal hepatic lesions were seen in 12 of the 39 (30%) children, solitary in 11 and multiple in 1. Ten of these children had a large nodule adjacent to the right portal vein. The median diameter of the lesions was 8.1 cm (range 5.6-9.8 cm). Magnetic resonance imaging features and pathology in available cases were suggestive of a regenerative nodule. α-fetoprotein levels were normal in all except 1 child who had mild elevation. Conclusions: Combining our series and previous case reports, the presence of a large nodule adjacent to the right portal vein appears to be a common finding in ALGS. The typical location, normal α-fetoprotein levels, and magnetic resonance imaging features with vessels coursing through the lesion can reliably differentiate this benign nodule from hepatocellular carcinoma. abstract_id: PUBMED:36062284 Giant Hepatic Regenerative Nodule in a Patient With Hepatitis B Virus-related Cirrhosis. Hepatic regenerative nodules are reactive hepatocellular proliferations that develop in response to liver injury. Giant hepatic regenerative nodules of 10 cm or more are extremely rare and have only been reported in patients with biliary atresia or Alagille syndrome. A 50-year-old man presented with a pathologically confirmed giant 11.3×9.4×11.2 cm hepatic regenerative nodule and hepatitis B virus-related cirrhosis. Imaging of intrahepatic nodule included mild hyperenhancement in the portal phase of contrast-enhanced CT and the hepatobiliary phase in the gadoxetic acid-enhanced MRI scan, as well as the portal vein crossing through sign in the setting of liver cirrhosis. This case highlights the imaging characteristics of giant hepatic regenerative nodules in hepatitis cirrhosis. abstract_id: PUBMED:12732101 Presinusoidal portal hypertension due to portal thrombosis in a patient with Alagille's syndrome We present the case of a 16-year old woman with Alagille's syndrome, who had upper gastrointestinal bleeding due to rupture of esophageal varices secondary to presinusoidal portal hypertension without liver fibrosis. Portal thrombosis is a manifestation previously unreported in association to this syndrome. abstract_id: PUBMED:16549187 Venous complications after orthotopic liver transplantation. 
We report venous complications, including portal vein and hepatic vein stenoses, that required interventional radiological treatment in three pediatric and two adult living related liver transplant recipients. Between April 2001 and April 2005, 81 liver transplantations were performed at our hospital. Sixty-two grafts were from living donors. During follow-up, three portal vein stenoses were identified in three pediatric recipients, and two hepatic vein stenoses in two adult patients. In the children, two had received left lateral segment grafts, and one had received a right lobe graft from two mothers and one father, respectively. The etiologies of liver failure were Alagille syndrome, biliary atresia, and fulminant Wilson's disease. Portal vein stenoses were identified at 8, 11, and 12 months after transplantation; all three patients underwent percutaneous transhepatic portal venous angioplasty with a success rate of 100%. The mean follow-up was 102 days; no recurrence has occurred. In contrast, hepatic venous stenoses were diagnosed in two adult recipients. One of them was a 24-year-old woman with autoimmune hepatitis and the other a 43-year-old man with cryptogenic cirrhosis. Hepatic vein stenoses were diagnosed at 3 and 4 months after transplantation. Both hepatic vein stenoses were dilated with balloon angioplasties via the transjugular route. Venous complications identified by Doppler ultrasonography were confirmed by computerized tomographic angiography. Angioplasty represents an effective and safe alternative to reconstructive surgery in the treatment of venous complications after liver transplantation. abstract_id: PUBMED:35317173 Development of the nervous system in mouse liver. Background: The role of the hepatic nervous system in liver development remains unclear. We previously created functional human micro-hepatic tissue in mice by co-culturing human hepatic endodermal cells with endothelial and mesenchymal cells. However, they lacked Glisson's sheath [the portal tract (PT)]. The PT consists of branches of the hepatic artery (HA), portal vein, and intrahepatic bile duct (IHBD), collectively called the portal triad, together with autonomic nerves. Aim: To evaluate the development of the mouse hepatic nervous network in the PT using immunohistochemistry. Methods: Liver samples from C57BL/6J mice were harvested at different developmental time periods, from embryonic day (E) 10.5 to postnatal day (P) 56. Thin sections of the surface cut through the hepatic hilus were examined using protein gene product 9.5 (PGP9.5) and cytokeratin 19 (CK19) antibodies, markers of nerve fibers (NFs), and biliary epithelial cells (BECs), respectively. The numbers of NFs and IHBDs were separately counted in a PT around the hepatic hilus (center) and the peripheral area (periphery) of the liver, comparing the average values between the center and the periphery at each developmental stage. NF-IHBD and NF-HA contacts in a PT were counted, and their relationship was quantified. SRY-related high mobility group-box gene 9 (SOX9), another BEC marker; hepatocyte nuclear factor 4α (HNF4α), a marker of hepatocytes; and Jagged-1, a Notch ligand, were also immunostained to observe the PT development. Results: HNF4α was expressed in the nucleus, and Jagged-1 was diffusely positive in the primitive liver at E10.5; however, the PGP9.5 and CK19 were negative in the fetal liver. SOX9-positive cells were scattered in the periportal area in the liver at E12.5. 
Jagged-1 was mainly expressed in the periportal tissue, and the number of SOX9-positive cells increased at E16.5. SOX9-positive cells constructed the ductal plate and primitive IHBDs mainly at the center, and SOX9-positive IHBDs partly acquired CK19 positivity during the same period. PGP9.5-positive bodies were first found at E16.5 and HAs were first found at P0 in the periportal tissue of the center. Therefore, primitive PT structures were first constructed at P0 in the center. Along with remodeling of the periportal tissue, the number of CK19-positive IHBDs and PGP9.5-positive NFs gradually increased, and PTs were also formed in the periphery until P5. The numbers of NFs and IHBDs were significantly higher in the center than in the periphery from E16.5 to P5. The numbers of NFs and IHBDs reached the adult level at P28, with decreased differences between the center and periphery. NFs associated more frequently with HAs than IHBDs in PTs at the early phase after birth, after which the number of NF-IHBD contacts gradually increased. Conclusion: Mouse hepatic NFs first emerge at the center just before birth and extend toward the periphery. The interaction between NFs and IHBDs or HAs plays important roles in the morphogenesis of PT structure. abstract_id: PUBMED:27796468 Giant hepatic regenerative nodules in Alagille syndrome. Background: Children with Alagille syndrome undergo surveillance radiologic examinations as they are at risk for developing cirrhosis and hepatocellular carcinoma. There is limited literature on the imaging of liver masses in Alagille syndrome. We report the ultrasound (US) and magnetic resonance imaging (MRI) appearances of incidental benign giant hepatic regenerative nodules in this population. Objective: To describe the imaging findings of giant regenerative nodules in patients with Alagille syndrome. Materials And Methods: A retrospective search of the hospital database was performed to find all cases of hepatic masses in patients with Alagille syndrome during a 10-year period. Imaging, clinical charts, laboratory data and available pathology were reviewed, analyzed and summarized for each patient. Results: Twenty of 45 patients with confirmed Alagille syndrome had imaging studies. Of those, we identified six with giant focal liver masses. All six patients had large central hepatic masses that were remarkably similar on US and MRI, in addition to having features of cirrhosis. In each case, the mass was located in hepatic segment VIII and imaging showed the mass splaying the main portal venous branches at the hepatic hilum, as well as smaller portal and hepatic venous branches coursing through them. On MRI, signal intensity of the mass was isointense to liver on T1-weighted sequences in four of six patients, but hyperintense on T1 in two of six patients. In all six cases, the mass was hypointense on T2-weighted sequences. The mass post-contrast was isointense to adjacent liver in all phases in five of the cases. Five out of six patients had pathological correlation demonstrating preserved ductal architecture confirming the final diagnosis of a regenerative nodule. Conclusion: Giant hepatic regenerative nodules with characteristic US and MR features can occur in patients with Alagille syndrome with underlying cirrhosis. Recognizing these lesions as benign giant hepatic regenerative nodules should thereby mitigate any need for intervention.
abstract_id: PUBMED:18515089 p75 Neurotrophin receptor is a marker for precursors of stellate cells and portal fibroblasts in mouse fetal liver. Background & Aims: Hepatic stellate cells (HSCs) and portal fibroblasts (PFs) are 2 distinct mesenchymal cells in adult liver. HSCs in sinusoids accumulate lipids and express p75 neurotrophin receptor (p75NTR). HSCs and PFs play pivotal roles in liver regeneration and fibrosis. However, the roles of mesenchymal cells in fetal liver remain poorly understood. In this study, we aimed to characterize mesenchymal cells in mouse fetal liver. Methods: We prepared an anti-p75NTR monoclonal antibody applicable for flow cytometry and immunohistochemistry. p75NTR(+) cells isolated from fetal liver by flow cytometry were characterized by reverse-transcription polymerase chain reaction, immunohistochemistry, and cell cultivation. Lipid-containing cells were visualized by Oil-red O staining. Results: p75NTR(+) cells in fetal liver were clearly distinct from endothelial cells and showed characteristics of mesenchymal cells. At embryonic day (E) 10.5, p75NTR(+) cells were present at the periphery of the liver bud in close contact with endothelial cells, and spread over the liver at E11.5. With the formation of the liver architecture, they began to localize to 2 distinct areas, parenchymal and portal areas, and lipid-containing p75NTR(+) cells increased accordingly. p75NTR(+) cells around portal veins were adjacent to cholangiocytes and expressed Jagged1, a crucial factor for the commitment of hepatoblasts to cholangiocytes. By cultivation, p75NTR(+) cells showed features of adult HSCs with markedly increased expression of glial fibrillary acidic protein and alpha-smooth muscle actin. Conclusions: p75NTR(+) mesenchymal cells in fetal liver include progenitors for HSCs and PFs, and the anti-p75NTR monoclonal antibody is useful for their isolation. abstract_id: PUBMED:19175516 Reverse portal flow after liver transplantation--ominous or acceptable? Reversal of portal flow or hepatofugal flow after liver transplantation is a rare complication after liver transplantation. The available reports in the literature suggest that it is an ominous condition that requires immediate operative intervention, failing which prognosis would be grim. We report two children from two different centers who developed hepatofugal flow in the immediate post-operative period after liver transplantation. The possible etiologies in these patients were acute rejection in one and absence of an MHV causing inadequate hepatic venous outflow in the other. Both patients were treated non-operatively with steroids and immunosuppression. Spontaneous reversal to a normal hepatopetal flow occurred in both and the patients continue to be well six months after the transplant. Our experience contradicts the viewpoint that hepatofugal flow equates to mortality in the absence of surgical intervention. It remains to be defined as to which patients with hepatofugal flow will benefit from surgical intervention. abstract_id: PUBMED:12234275 Repeated detection of gas in the portal vein after liver transplantation: A sign of EBV-associated post-transplant lymphoproliferation? A 1-yr-old child presented with intractable right sided pleural effusion and progressive clinical deterioration 3 weeks after liver transplantation for Alagille Syndrome. He had been treated successfully for severe acute rejection before. Ultrasound and Doppler mode studies repeatedly demonstrated air in the portal vein. 
Intra-abdominal and intra-thoracic lymphoproliferation was detected, and EBV virus load and serology were suggestive of primary EBV infection. Liver biopsy revealed blast-like infiltrates of B-cells, considered diagnostic for post-transplant lymphoproliferative disease. The disease resolved upon reduction of immunosuppression. We suggest that the detection of portal vein gas in pediatric liver transplant recipients beyond the early post-operative period may be a sign of intra-abdominal post-transplant lymphoproliferative disease. abstract_id: PUBMED:21062863 Jagged1 in the portal vein mesenchyme regulates intrahepatic bile duct development: insights into Alagille syndrome. Mutations in the human Notch ligand jagged 1 (JAG1) result in a multi-system disorder called Alagille syndrome (AGS). AGS is chiefly characterized by a paucity of intrahepatic bile ducts (IHBD), but also includes cardiac, ocular, skeletal, craniofacial and renal defects. The disease penetration and severity of the affected organs can vary significantly and the molecular basis for this broad spectrum of pathology is unclear. Here, we report that Jag1 inactivation in the portal vein mesenchyme (PVM), but not in the endothelium of mice, leads to the hepatic defects associated with AGS. Loss of Jag1 expression in SM22α-positive cells of the PVM leads to defective bile duct development beyond the initial formation of the ductal plate. Cytokeratin 19-positive cells are detected surrounding the portal vein, yet they are unable to form biliary tubes, revealing an instructive role of the vasculature in liver development. These findings uncover the cellular basis for the defining feature of AGS, identify mesenchymal Jag1-dependent and -independent stages of duct development, and provide mechanistic information for the role of Jag1 in IHBD formation. Answer: Yes, the presence of a solitary hepatic nodule adjacent to the right portal vein appears to be a common finding in Alagille Syndrome (ALGS). A study that included 55 children with clinically and/or genetically confirmed ALGS found that 30% of the 39 children with available imaging had focal hepatic lesions, with 11 of these children having a large nodule adjacent to the right portal vein. The median diameter of the lesions was 8.1 cm, and the magnetic resonance imaging features and pathology in available cases were suggestive of a regenerative nodule. The typical location, normal α-fetoprotein levels, and imaging features with vessels coursing through the lesion can reliably differentiate this benign nodule from hepatocellular carcinoma (PUBMED:26284540).
Instruction: Does pulmonary rehabilitation give additional benefit over tiotropium therapy in primary care management of chronic obstructive pulmonary disease? Abstracts: abstract_id: PUBMED:16336289 Does pulmonary rehabilitation give additional benefit over tiotropium therapy in primary care management of chronic obstructive pulmonary disease? Randomized controlled clinical trial in Hong Kong Chinese. Objective: To evaluate whether a multidisciplinary pulmonary rehabilitation programme (PRP) provides additional benefit over tiotropium therapy in managing chronic obstructive pulmonary disease (COPD) in primary care. Design: A randomized controlled trial to analyse the difference in outcomes of COPD patients receiving tiotropium plus PRP vs. tiotropium treatment alone. Setting: Two primary care teaching clinics affiliated with a university which serves a population of 600,000. Participants: Fifty primary care COPD patients. Methods: Fifty subjects underwent spirometry and their status of COPD was confirmed by using the Vitalograph Gold Standard. They were then assessed by the 6-min walking distance (6MWD), Peak Visual Analogue Scale (Peak VAS) and Chronic Respiratory Disease Questionnaire (CRQ). All subjects were given tiotropium to optimize their treatment. After a 6-week period, half were randomized to the intervention group (i.e. receiving PRP), whereas the rest were randomized to the control group, which received only medication. Spirometry, 6MWD, Peak VAS and CRQ were performed in both groups at 6 weeks, 12 weeks and 3 months. Outcomes: Spirometry, 6MWD, Peak VAS and CRQ. Results: Significant improvement (P < 0.05) was seen in 6MWD, symptoms of dyspnoea measured by Peak VAS and CRQ. The improvement was sustained at 3-month follow-up. However, no additional significant improvement was seen in the intervention group when compared with control. Conclusion: Tiotropium therapy has improved health outcomes in COPD patients in primary care settings. A 6-week PRP did not give any additional benefits in patients already given tiotropium. abstract_id: PUBMED:20625276 Primary care management of chronic obstructive pulmonary disease to reduce exacerbations and their consequences. Exacerbations of chronic obstructive pulmonary disease (COPD), defined as acute worsenings of dyspnea, cough and/or sputum production beyond daily symptom variations that necessitate a change in treatment, account for most COPD-related morbidity, care burden and direct costs. Frequent exacerbations (especially those requiring emergency, inpatient or intensive care) reduce physical activity, accelerate lung function decline and increase mortality. This review profiles exacerbation diagnosis, treatment and reduction measures for primary care physicians. Chronic maintenance pharmacotherapy is important to reduce exacerbations. Tiotropium, a long-acting anticholinergic, and salmeterol/fluticasone, a long-acting β-agonist/inhaled corticosteroid combination, are Food and Drug Administration-approved maintenance therapies to reduce exacerbations of COPD. Influenza and pneumonia vaccinations reduce infectious triggers; pulmonary rehabilitation reduces exacerbation recurrence. Acute exacerbation treatment (short-acting bronchodilators, systemic corticosteroids and/or antibiotics) should be complemented by long-term COPD maintenance therapy to reduce future exacerbations. Recognition of a COPD exacerbation signals primary care physicians to establish long-term COPD management to reduce morbidity, disability and mortality.
abstract_id: PUBMED:28276136 Initial diagnosis and management of chronic obstructive pulmonary disease in Australia: views from the coal face. Background: Early diagnosis and management can mitigate the long-term morbidity and mortality of chronic obstructive pulmonary disease (COPD). Aims: To gain insights into the initial diagnostic process and early management of COPD by Australian general practitioners (GPs). Methods: A random sample of Australian GPs was invited to complete a postal survey, which assessed familiarity with and use of contemporary practice guidelines, diagnostic criteria and management preferences for COPD. Results: A total of 233 GPs completed the survey. While most GPs based a COPD diagnosis on smoking history (94.4%), symptoms (91.0%) and spirometry (88.8%), only 39.9% of respondents recorded a formal diagnosis of COPD after the patient's first symptomatic presentation. Tiotropium was the preferred treatment in 77.3% of GPs for the initial management of COPD, while only 27.5% routinely recommended pulmonary rehabilitation. GPs routinely recorded patients' smoking status and offered smoking cessation advice, but the timing of this advice varied. Less than half of the respondents routinely used COPD management guidelines or tools and resources provided by the Australian Lung Foundation. Conclusion: There is scope for major improvement in GP familiarity with and use of COPD management guidelines and readily available tools and resources. Some systematic issues were highlighted in the Australian primary care setting, such as a reactive and relatively passive and delayed approach to diagnosis, potentially delayed smoking cessation advice and underutilisation of pulmonary rehabilitation. There is an urgent need to devise strategies for improving patient outcomes in COPD using resources that are readily available. abstract_id: PUBMED:20861599 Innovations to achieve excellence in COPD diagnosis and treatment in primary care. Recognition of chronic obstructive pulmonary disease (COPD) is often missed or delayed in primary care. Once recognized, COPD is often undertreated or episodically treated, focusing on acute exacerbations without establishing maintenance treatment to control ongoing disease. Diagnostic and therapeutic pessimism result in missed opportunities to reduce exacerbations, maintain physical functioning, and reduce emergent health care requirements. Proactive diagnosis and evidence-based management can alleviate the impact of COPD on patients' lives. Smoking cessation has been proven to slow the rate of lung function decline. Maintenance pharmacotherapy and immunizations reduce exacerbations. Pulmonary rehabilitation improves respiratory symptoms and physical functioning and reduces rehospitalizations after exacerbations. Self-management education improves health-related quality of life and reduces inpatient and emergency care usage. Maintenance treatment with long-acting inhaled bronchodilators is appropriate beginning in moderate COPD to maintain airway patency and reduce exacerbations. Tiotropium is US Food and Drug Administration (FDA) approved to treat bronchospasm and reduce exacerbations in patients with COPD; salmeterol/fluticasone is FDA approved to treat airflow obstruction in COPD and reduce exacerbations in patients with a history of exacerbations. Other maintenance long-acting bronchodilators (salmeterol, formoterol, and budesonide/formoterol) are FDA approved to treat airway obstruction in COPD but lack an approved indication against exacerbations.
FDA warnings on the use of long-acting beta-adrenergic agents (LABAs) in asthma specifically exempt COPD and do not apply to LABA/inhaled corticosteroid combinations used in COPD. The actual effectiveness achieved in practice with any COPD therapies depends on patients' inhaler technique, adherence, and persistence. Medication usage rates and inhaler proficiency may be improved by concordance, in which the health care provider and patient collaborate to make treatment plans sustainable in the patient's daily life. Practice redesign for whole-patient primary care provides additional tools for comprehensive COPD management. Innovations such as group visits and the patient-centered medical home provide newer ways to interact with COPD patients and their families. Patient-focused and evidence-based options enable primary care practices to manage COPD longitudinally and improve patient outcomes through the course of the disease. abstract_id: PUBMED:36908830 Treatment Patterns, Healthcare Utilization and Clinical Outcomes of Patients with Chronic Obstructive Pulmonary Disease Initiating Single-Inhaler Long-Acting β2-Agonist/Long-Acting Muscarinic Antagonist Dual Therapy in Primary Care in England. Purpose: Selection of treatments for patients with chronic obstructive pulmonary disease (COPD) may impact clinical outcomes, healthcare resource use (HCRU) and direct healthcare costs. We aimed to characterize these outcomes along with treatment patterns, for patients with COPD following initiation of single-inhaler long-acting muscarinic antagonist/long-acting β2-agonist (LAMA/LABA) dual therapy in the primary care setting in England. Patients And Methods: This retrospective cohort study used linked primary care electronic medical record data (Clinical Practice Research Datalink-Aurum) and secondary care administrative data (Hospital Episode Statistics) in England to assess outcomes for patients with COPD who had a prescription for one of four single-inhaler LAMA/LABA dual therapies between 1st June 2015-31st December 2018 (indexing period). Outcomes were assessed during a 12-month follow-up period from the index date (date of earliest prescription of a single-inhaler LAMA/LABA within the indexing period). Incident users were those without previous LAMA/LABA dual therapy prescriptions prior to index; this manuscript focuses on a subset of incident users: non-triple therapy users (patients without concomitant inhaled corticosteroid use at index). Results: Of 10,991 incident users included, 9888 (90.0%) were non-triple therapy users, indexed on umeclidinium/vilanterol (n=4805), aclidinium/formoterol (n=2109), indacaterol/glycopyrronium (n=1785) and tiotropium/olodaterol (n=1189). At 3 months post-index, 63.3% of non-triple therapy users remained on a single-inhaler LAMA/LABA, and 22.1% had discontinued inhaled therapy. Most patients (86.9%) required general practitioner consultations in the first 3 months post-index. Inpatient stays were the biggest contributor to healthcare costs. Acute exacerbations of COPD (AECOPDs), adherence, time-to-triple therapy, time-to-first on-treatment moderate-to-severe AECOPD, time-to-index treatment discontinuation, HCRU and healthcare costs were similar across indexed therapies. Conclusion: Patients initiating treatment with single-inhaler LAMA/LABA in primary care in England were unlikely to switch treatments in the first three months following initiation, but some may discontinue respiratory medication. Outcomes were similar across indexed treatments. 
abstract_id: PUBMED:14733626 Translating new understanding into better care for the patient with chronic obstructive pulmonary disease. Despite an enormous amount of research and many official statements, the definition, diagnosis, and staging of chronic obstructive pulmonary disease (COPD) remain inconsistent, and we have yet to agree on who should be tested with spirometry or on where and how to do it. We know that inflammation, not just airflow limitation, is important in determining the course of COPD, especially with respect to exacerbations. We can detect and treat alpha-1 antitrypsin deficiency, an under-recognized condition, but whether alpha-1 antitrypsin augmentation therapy affects the disease's clinical course remains unclear. Smoking cessation is the most important of all interventions for COPD, with proven techniques and adjuncts, but implementation remains difficult and success rates are disappointingly low. Similarly, pulmonary rehabilitation has well-documented benefits but is grossly underutilized because it is difficult to pay for and is not made available to most patients. Symptoms, costs, and other outcomes can be improved through comprehensive disease management, including the use of practice guidelines, yet multiple barriers prevent the potential benefits of these interventions to patients from being realized. Many patients who do not meet threshold oxygenation criteria for oxygen therapy during the daytime desaturate during sleep, but evidence that nocturnal oxygen administration benefits these patients is lacking. However, other sleep-related breathing disorders are common in COPD patients. Lung volume reduction surgery has recently been shown to improve function and survival for certain COPD patients, but lung transplantation has generally been disappointing. New pharmaceutical agents are being developed for treating COPD, and at least one of them (tiotropium) should soon be available in the United States. Noninvasive ventilation is effective in treating acute decompensations of COPD and should be the standard of care in that setting; evidence supporting its use in stable patients with end-stage disease is scant. Appropriate palliative care can greatly benefit patients and their families in the terminal phase of COPD and needs to be more widely applied. abstract_id: PUBMED:35983168 Characterization of Patients with Chronic Obstructive Pulmonary Disease Initiating Single-Inhaler Long-Acting Muscarinic Antagonist/Long-Acting β2-Agonist Dual Therapy in a Primary Care Setting in England. Purpose: Treatment pathways of patients with chronic obstructive pulmonary disease (COPD) receiving single-inhaler dual therapies remain unclear. We aimed to describe characteristics, prescribed treatments, healthcare resource use (HCRU) and costs of patients with COPD who initiated single-inhaler long-acting muscarinic antagonist/long-acting β2-agonist (LAMA/LABA) dual therapy in primary care in England. Patients And Methods: Retrospective study using linked data from Clinical Practice Research Datalink Aurum and Hospital Episode Statistics datasets. Patients with COPD with ≥1 single-inhaler LAMA/LABA prescription between June 2015 and December 2018 (index) were included. Demographic and clinical characteristics, prescribed treatments, HCRU and costs were evaluated in the 12 months pre-index. Data are presented for patients not receiving concomitant inhaled corticosteroids at index (non-triple users). 
Results: Of 10,991 patients initiating LAMA/LABA, 9888 were non-triple users, of whom 21.3% (n=2109) received aclidinium bromide/formoterol, 18.1% (n=1785) received indacaterol/glycopyrronium, 12.0% (n=1189) received tiotropium bromide/olodaterol and 48.6% (n=4805) received umeclidinium/vilanterol. Demographic and clinical characteristics were similar across indexed therapies. LAMA monotherapy was the most frequently prescribed respiratory therapy at 12 (18.4-25.8% of patients) and 3 months (23.9-33.7% of patients) pre-index across indexed therapies; 42.5-59.0% of patients were prescribed no respiratory therapy at these time points. COPD-related HCRU during the 12 months pre-index was similar across indexed therapies (general practitioner consultations: 62.0-68.6% patients; inpatient stays: 19.3-26.1% patients). Pre-index COPD-related costs were similar across indexed therapies, with inpatient stays representing the highest contribution. Mean total direct annual COPD-related costs ranged from £805-£1187. Conclusion: Characteristics of patients newly initiating single-inhaler LAMA/LABA dual therapy were highly consistent across indexed therapies. As half of non-triple users were not receiving respiratory therapy one year prior to LAMA/LABA initiation, there may be an opportunity for early optimization of treatment to relieve clinical burden versus current prescribing patterns in primary care in England. abstract_id: PUBMED:28507288 Identifying individuals with physician-diagnosed chronic obstructive pulmonary disease in primary care electronic medical records: a retrospective chart abstraction study. Little is known about using electronic medical records to identify patients with chronic obstructive pulmonary disease to improve quality of care. Our objective was to develop electronic medical record algorithms that can accurately identify patients with obstructive pulmonary disease. A retrospective chart abstraction study was conducted on data from the Electronic Medical Record Administrative data Linked Database (EMRALD®) housed at the Institute for Clinical Evaluative Sciences. Abstracted charts provided the reference standard based on available physician-diagnoses, chronic obstructive pulmonary disease-specific medications, smoking history and pulmonary function testing. Chronic obstructive pulmonary disease electronic medical record algorithms using combinations of terminology in the cumulative patient profile (CPP; problem list/past medical history), physician billing codes (chronic bronchitis/emphysema/other chronic obstructive pulmonary disease), and prescriptions, were tested against the reference standard. Sensitivity, specificity, and positive/negative predictive values (PPV/NPV) were calculated. There were 364 patients with chronic obstructive pulmonary disease identified in a 5889 randomly sampled cohort aged ≥ 35 years (prevalence = 6.2%). The electronic medical record algorithm consisting of ≥ 3 physician billing codes for chronic obstructive pulmonary disease per year; documentation in the CPP; tiotropium prescription; or ipratropium (or its formulations) prescription and a chronic obstructive pulmonary disease billing code had sensitivity of 76.9% (95% CI:72.2-81.2), specificity of 99.7% (99.5-99.8), PPV of 93.6% (90.3-96.1), and NPV of 98.5% (98.1-98.8). Electronic medical record algorithms can accurately identify patients with chronic obstructive pulmonary disease in primary care records. 
They can be used to enable further studies in practice patterns and chronic obstructive pulmonary disease management in primary care. Chronic Lung Disease: NOVEL ALGORITHM SEARCH TECHNIQUE: Researchers develop an algorithm that can accurately search through electronic health records to find patients with chronic lung disease. Mining population-wide data for information on patients diagnosed and treated with chronic obstructive pulmonary disease (COPD) in primary care could help inform future healthcare and spending practices. Theresa Lee at the University of Toronto, Canada, and colleagues used an algorithm to search electronic medical records and identify patients with COPD from doctors' notes, prescriptions and symptom histories. They carefully adjusted the algorithm to improve sensitivity and predictive value by adding details such as specific medications, physician codes related to COPD, and different combinations of terminology in doctors' notes. The team accurately identified 364 patients with COPD in a randomly-selected cohort of 5889 people. Their results suggest opportunities for broader, informative studies of COPD in wider populations. abstract_id: PUBMED:19892540 Combining triple therapy and pulmonary rehabilitation in patients with advanced COPD: a pilot study. Background: Evidence of synergistic interactions between pharmacotherapy and pulmonary rehabilitation has been provided, but it remains to be established whether this may also apply to more severe patients. Objectives: We have examined whether tiotropium enhances the effects of exercise training in patients with advanced COPD (FEV1 ≤60% predicted, hypoxemia at rest corrected with oxygen supplementation, and limitations of physical activity). Methods: We enrolled 22 patients who were randomised to tiotropium 18 μg or placebo inhalation capsules taken once daily. Both groups (11 patients in each group) underwent an inpatient pulmonary rehabilitation program and were under regular treatment with salmeterol/fluticasone twice daily. Each rehabilitation session was held 5 days per week (3h/day) for a total of 4 weeks. Results: Compared to placebo, tiotropium had a larger impact on pulmonary function (FEV1 +0.164L, FVC +0.112L, RV -0.544L after tiotropium, FEV1 +0.084L, FVC -0.039L, RV -0.036L after placebo). The addition of tiotropium allowed a longer distance walked in 6min (82.3m vs. 67.7m after placebo) and reduced dyspnoea (Borg score) (-0.4 vs. +0.18 after placebo) when compared with baseline (pre pulmonary rehabilitation program). The changes in SGRQ from baseline to the end of treatment were: total score -28.3U, activity -27.8U, impact -14.5U, and symptoms -33.4U in the placebo group; and total score -19.1U, activity -18.9U, impact -16.4U, and symptoms -33.8U in the tiotropium group. Conclusions: Our study clearly indicates that there is an advantage in combining pulmonary rehabilitation with an aggressive drug therapy in more severe patients. abstract_id: PUBMED:14600189 Contemporary management of chronic obstructive pulmonary disease: scientific review. Context: The care of patients with chronic obstructive pulmonary disease (COPD) has changed radically over the past 2 decades, and novel therapies can not only improve the health status of patients with COPD but also modify its natural course.
Objective: To systematically review the impact of long-acting bronchodilators, inhaled corticosteroids, nocturnal noninvasive mechanical ventilation, pulmonary rehabilitation, domiciliary oxygen therapy, and disease management programs on clinical outcomes in patients with COPD. Data Sources: MEDLINE and Cochrane databases were searched to identify all randomized controlled trials and systematic reviews from 1980 to May 2002 evaluating interventions in patients with COPD. We also hand searched bibliographies of relevant articles and contacted experts in the field. Study Selection And Data Extraction: We included randomized controlled trials that had follow-up of at least 3 months and contained data on at least 1 of these clinical outcomes: health-related quality of life, exacerbations associated with COPD, or death. For pulmonary rehabilitation, we included studies that had a follow-up of at least 6 weeks. Using standard meta-analytic techniques, the effects of interventions were compared with placebo or with usual care. In secondary analyses, the effects of interventions were compared against each other, where possible. Data Synthesis: Long-acting beta2-agonists and anticholinergics (tiotropium) reduced exacerbation rates by approximately 20% to 25% (relative risk [RR] for long-acting beta2-agonists, 0.79; 95% CI, 0.69-0.90; RR for tiotropium, 0.74; 95% CI, 0.62-0.89) in patients with moderate to severe COPD. Inhaled corticosteroids also reduced exacerbation rates by a similar amount (RR, 0.76; 95% CI, 0.72-0.80). The beneficial effects were most pronounced in trials enrolling patients with FEV1 between 1 L and 2 L. Combining a long-acting beta2-agonist with an inhaled corticosteroid resulted in an approximate 30% (RR, 0.70; 95% CI, 0.62-0.78) reduction in exacerbations. Pulmonary rehabilitation improved the health status of patients with moderate to severe disease, but no material effect was observed on long-term survival or hospitalization rates. Domiciliary oxygen therapy improved survival by approximately 40% in patients with PaO2 lower than 60 mm Hg, but not in those without hypoxia at rest. The data on disease management programs were heterogeneous, but overall no effect was observed on survival or risk of hospitalization. Noninvasive mechanical ventilation was not associated with improved outcomes. Conclusions: A significant body of evidence supports the use of long-acting bronchodilators and inhaled corticosteroids in reducing exacerbations in patients with moderate to severe COPD. Domiciliary oxygen therapy is the only intervention that has been demonstrated to prolong survival, but only in patients with resting hypoxia. Answer: According to the randomized controlled clinical trial conducted in Hong Kong Chinese primary care settings, pulmonary rehabilitation programme (PRP) did not provide additional benefits over tiotropium therapy in managing chronic obstructive pulmonary disease (COPD) when both groups were assessed using spirometry, 6-min walking distance (6MWD), Peak Visual Analogue Scale (Peak VAS), and Chronic Respiratory Disease Questionnaire (CRQ). The study found significant improvement in 6MWD, symptoms of dyspnoea measured by Peak VAS, and CRQ in COPD patients given tiotropium therapy, but no additional significant improvement was observed in the intervention group receiving PRP compared to the control group receiving medication alone (PUBMED:16336289).
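The pooled relative risks quoted in the scientific review above (PUBMED:14600189) map directly onto the "approximately 20% to 25%" exacerbation reductions it reports. The short Python sketch below walks through that arithmetic; the event counts are hypothetical numbers chosen only to reproduce a relative risk of about 0.76 and do not come from any of the cited trials.

# Minimal sketch of relative-risk arithmetic. The event counts are
# hypothetical and chosen only to land near the pooled RR of 0.74-0.79
# reported above; they are not data from any cited study.

def relative_risk(events_treated, n_treated, events_control, n_control):
    """Return the risk in each arm, the relative risk, and the relative risk reduction."""
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    rr = risk_treated / risk_control
    rrr = 1.0 - rr  # proportional reduction in exacerbation risk
    return risk_treated, risk_control, rr, rrr

# Hypothetical arms: 190/500 patients with an exacerbation on a long-acting
# bronchodilator versus 250/500 on placebo.
risk_t, risk_c, rr, rrr = relative_risk(190, 500, 250, 500)
print(f"RR = {rr:.2f}, relative risk reduction = {rrr:.0%}")
# -> RR = 0.76, relative risk reduction = 24%, i.e. within the
#    "approximately 20% to 25%" range quoted in the review.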
Instruction: Are there modifiable risk factors to prevent a cerebrospinal fluid leak following vestibular schwannoma surgery? Abstracts: abstract_id: PUBMED:25415063 Are there modifiable risk factors to prevent a cerebrospinal fluid leak following vestibular schwannoma surgery? Object: The following study was conducted to identify risk factors for a postoperative CSF leak after vestibular schwannoma (VS) surgery. Methods: The authors reviewed a prospectively maintained database of all patients who had undergone resection of a VS at the Mayo Clinic between September 1999 and May 2013. Patients who developed a postoperative CSF leak within 30 days of surgery were compared with those who did not. Data collected included patient age, sex, body mass index (BMI), tumor size, tumor side, history of prior tumor treatment, operative time, surgical approach, and extent of resection. Both univariate and multivariate regression analyses were performed to evaluate all variables as risk factors of a postoperative CSF leak. Results: A total of 457 patients were included in the study, with 45 patients (9.8%) developing a postoperative CSF leak. A significant association existed between increasing BMI and a CSF leak, with those classified as overweight (BMI 25-29.9), obese (BMI 30-39.9), or morbidly obese (BMI≥40) having a 2.5-, 3-, and 6-fold increased risk, respectively. Patients undergoing a translabyrinthine (TL) approach experienced a higher rate of CSF leaks (OR 2.5, 95% CI 1.3-4.6; p=0.005), as did those who had longer operative times (OR 1.04, 95% CI 1.02-1.07; p=0.0006). The BMI, a TL approach, and operative time remained independent risk factors on multivariate modeling. Conclusions: Elevated BMI is a risk factor for the development of a postoperative CSF leak following VS surgery. Recognizing this preoperatively can allow surgeons to better counsel patients regarding the risks of surgery as well as perhaps to alter perioperative management in an attempt to decrease the likelihood of a leak. Patients undergoing a TL approach or having longer operative times are also at increased risk of developing a postoperative CSF leak. abstract_id: PUBMED:28633408 Risk Factors for Readmission with Cerebrospinal Fluid Leakage Within 30 Days of Vestibular Schwannoma Surgery. Background: Cerebrospinal fluid (CSF) leak is a well-recognized complication after surgical resection of vestibular schwannomas and is associated with a number of secondary complications, including readmission and meningitis. Objective: To identify risk factors for and timing of 30-d readmission with CSF leak. Methods: Patients who had undergone surgical resection of a vestibular schwannoma from 1995 to 2010 were identified in the California Office of Statewide Health Planning and Development database. The most common admission diagnoses were identified by International Classification of Disease, ninth Revision, diagnosis codes, and predictors of readmission with CSF leak were determined using logistic regression. Results: A total of 6820 patients were identified. CSF leak, though a relatively uncommon cause of admission after discharge (3.52% of all patients), was implicated in nearly half of 490 readmissions (48.98%). 
Significant independent predictors of readmission with CSF leak were male sex (odds ratio [OR] 1.72, 95% confidence interval [CI] 1.32-2.25), first admission at a teaching hospital (OR 3.32, 95% CI 1.06-10.39), CSF leak during first admission (OR 1.84, 95% CI 1.33-2.55), obesity during first admission (OR 2.10, 95% CI 1.20-3.66), and case volume of first admission hospital (OR of log case volume 0.82, 95% CI 0.70-0.95). Median time to readmission was 6 d from hospital discharge. Conclusion: This study has quantified CSF leak as an important contributor to nearly half of all readmissions following vestibular schwannoma surgery. We propose that surgeons should focus on technical factors that may reduce CSF leakage and take advantage of potential screening strategies for the detection of CSF leakage prior to first admission discharge. abstract_id: PUBMED:21142748 Hydrocephalus associated with vestibular schwannomas: management options and factors predicting the outcome. Object: The current, generally accepted optimal management for hydrocephalus related to vestibular schwannomas (VSs) is primary tumor removal, with further treatment reserved only for patients who remain symptomatic. Previous studies have shown, however, that this management can lead to an increase in surgery-related complications. In this study, the authors evaluated their experience with the treatment of such patients, with the aim of identifying the following: 1) the parameters correlating to the need for specific hydrocephalus treatment following VS surgery; and 2) patients at risk for developing hydrocephalus-related complications. Methods: This was a retrospective study of a 400-patient series. The complication rates and outcomes following primary hydrocephalus treatment versus primary VS removal were compared. Patients undergoing primary tumor removal were further subdivided on the basis of the need for subsequent hydrocephalus treatment. The 3 categories of parameters tested for correlation with the need for such subsequent treatment as well as with heightened risk for developing complications were patient-, tumor-, and hydrocephalus-related. Results: Of the entire series, 53 patients presented with hydrocephalus. Forty-eight of 53 patients underwent primary VS surgery, of whom 42 (87.5%) did not require additional hydrocephalus treatment. Of the 6 patients who did require additional hydrocephalus treatment, only 3 ultimately required a VP shunt. Factors correlating to the need of hydrocephalus treatment after VS removal were large tumor size, irregular tumor surface, and severe preoperative hydrocephalus. Patients with a longer symptom duration prior to surgery, those with polycyclic tumors, or with inhomogeneous VS, were at heightened risk for the development of CSF leaks. The general and functional outcome of surgery showed no correlation to the presence of preoperative hydrocephalus. Conclusions: Primary tumor removal is the optimum management of disease in patients with VS with associated hydrocephalus; it leads to resolution of the hydrocephalus in the majority of cases, and the outcome is similar to that of patients without hydrocephalus. Certain factors may aid in identifying patients at risk for developing persistent hydrocephalus as well as those at risk for CSF leaks. abstract_id: PUBMED:21099580 Petrous bone pneumatization is a risk factor for cerebrospinal fluid fistula following vestibular schwannoma surgery. 
Background: For the prevention of postoperative CSF fistula, a better understanding of its origins and risk factors is necessary. Objective: To identify the petrous bone air cell volume as a risk factor for developing CSF fistula, we performed a retrospective analysis. Methods: From 2000 to 2007, 519 patients underwent retrosigmoidal surgical removal of a vestibular schwannoma. The 22 who had a postoperative CSF fistula were chosen for evaluation in addition to 78 patients who were randomly selected in 4 equally sized cohorts: male/female with small/large tumors. Preoperative CT scans were analyzed regarding petrous bone air cell volume, area of visible pneumatization at the level of the internal auditory canal (IAC), tumor grade, and sex. Results: Women developed nearly half as many CSF fistulas (2.7%) as men (5.2%). The mean volume of the petrous bone air cells was 10.97 mL (SD, 4.9; range, 1.38-27.25). It was significantly lower for women (mean, 9.23 mL; SD, 3.8) than for men (mean, 12.5 mL; SD, 5.28; P = .0008). The mean air cell volume of CSF-fistula patients was 13.72 mL (SD, 5.22). The difference in air cell volume between patients who developed CSF fistulas and patients from the control group was significant (P = .0042). There was a significant positive correlation between the air cell volume and the area of pneumatization in one CT slide at the level of the IAC. Conclusion: The higher incidence of CSF fistulas in men compared with women can be explained by differences in petrous bone pneumatization. A high degree of petrous bone pneumatization has to be considered a risk factor for the development of postoperative CSF fistula after vestibular schwannoma surgery. abstract_id: PUBMED:19326990 Factors affecting postoperative cerebrospinal fluid leaks after retrosigmoidal craniotomy for vestibular schwannomas. Object: The aim of this study was to identify patients likely to develop CSF leaks after vestibular schwannoma surgery using a retrospective analysis for the identification of risk factors. Methods: Between January 2001 and December 2006, 420 patients underwent retrosigmoidal microsurgical tumor removal in a standardized procedure. Of these 420 patients, 363 underwent treatment for the first time, and 27 suffered from recurrent tumors. Twenty-six patients had bilateral tumors due to neurofibromatosis Type 2, and 4 patients had previously undergone radiosurgical treatment. An analysis was performed to examine the incidence of postoperative CSF fistulas in all 4 groups. Results: The incidence of CSF leakage was higher in the tumor recurrence group (11.1%) than in patients undergoing surgery for the first time (4.4%). There were no CSF fistulas in the neurofibromatosis Type 2 group or in patients with preoperative radiosurgical treatment. Tumor size was identified as a possible risk factor in a previous study. Conclusions: Surgery for recurrent tumors is a significant risk factor for the development of CSF leaks. abstract_id: PUBMED:27107262 Surgical salvage of recurrent vestibular schwannoma following prior stereotactic radiosurgery. Objectives/hypothesis: To evaluate outcomes of salvage surgery for vestibular schwannoma (VS) that failed primary stereotactic radiosurgery (SRS). Methods: Case-control study of 37 patients who underwent surgical resection of sporadic VS following prior SRS at two tertiary academic referral centers between 2003 and 2015.
A cohort of nonirradiated control subjects, matched according to tumor size, age, and treatment center, was used as a comparison. Results: Thirty-seven patients were included. The median time from radiation to surgical salvage was 36 months (range 9.6-153 months). Following tumor progression after SRS, 18 (49%) patients underwent gross total resection, 10 (27%) underwent near-total resection, and nine (24%) underwent subtotal resection. Postoperative complications following salvage surgery included one (3%) case of stroke, four (11%) cases of cerebrospinal fluid leak, and two (5%) cases of meningitis. Twenty-seven (73%) patients had good postoperative facial nerve outcome (House-Brackmann Score I-II) at long-term follow-up. There were no cases of tumor recurrence or regrowth after a median length of 26 months following microsurgical salvage (range 3-114 months). The rate of satisfactory postoperative facial nerve function was not different between study and control subjects (73% vs. 76%; P = 0.8); however, less-than-complete resection was utilized more frequently among previously radiated patients (P = 0.01). Conclusion: Microsurgical salvage of VS following primary radiation therapy is challenging. Less-than-complete resection is required in a greater percentage of patients to preserve facial nerve integrity and prevent neurological complications. Long-term follow-up is needed to determine the risk of delayed progression following incomplete tumor removal. Level Of Evidence: 3b. Laryngoscope, 126:2580-2586, 2016. abstract_id: PUBMED:30885097 Risk Recall of Complications Associated with Vestibular Schwannoma Treatment. Objective: To assess the risk recall of complications among patients who underwent different vestibular schwannoma (VS) treatments. Study Design: Patients with VS completed a voluntary and anonymous survey. Setting: Survey links were distributed via the Acoustic Neuroma Association (ANA) website, Facebook, and email list. Subjects And Methods: Surveys were distributed to ANA members from January to March 2017. Of the 3200 ANA members with a VS diagnosis at the time of survey distribution, 789 (25%) completed the survey. Results: Subjects reported the following incidence of posttreatment complications: imbalance (60%), hearing issues (51%), dry eyes (30%), headache (29%), and facial weakness (27%). Overall, 188 (25%) recalled remembering all the risks associated with their treatment. Among those in the surgical cohort (52%) who experienced balance issues, facial weakness, cerebrospinal fluid leak, meningitis, and stroke, 73%, 91%, 77%, 67%, and 33% claimed recall of these associated risks. Among those in the radiosurgery cohort (28%) who experienced balance issues, facial weakness, and hydrocephalus, 56%, 52%, and 60% recalled discussions of those risks. Patients with higher-level education (P = .026) and those who underwent surgery (P = .001) had a significantly higher risk recall ratio, while sex, age, and tumor size were not significant contributing factors. Conclusion: Not all patients with VS who experienced treatment complications recalled remembering those risks being discussed with them. Patients with higher education and those who underwent surgery had a better recall of risks associated with different treatment modalities. The risk recall ratio of patients experiencing complications ranged from 33% to 91%, suggesting an opportunity for decision-making and discussion improvement.
abstract_id: PUBMED:8361315 Factors affecting the development of cerebrospinal fluid leak and meningitis after translabyrinthine acoustic tumor surgery. Meningitis and cerebrospinal fluid (CSF) leak are serious complications of acoustic tumor surgery. Previous reports have varied in the incidence of and the predisposing factors to these complications. This study reviews a series of 723 acoustic tumors removed via the translabyrinthine approach at the House Ear Clinic in Los Angeles. The incidences of CSF leak and meningitis were 6.8% and 2.9%, respectively. The patients who developed these problems were compared to the remainder of the study population for differences in age at surgery, tumor size, operative time, and length of hospital stay. Meningitis occurred more frequently in larger tumors, and patients with either complication had a longer hospital stay. The presence of CSF leak did not predispose to meningitis. It is concluded that technical factors account for postoperative CSF leak and meningitis after translabyrinthine acoustic tumor removal. abstract_id: PUBMED:31530434 National 30-day readmission and prolonged length of stay after vestibular schwannoma surgery: Analysis of the Nationwide Readmissions Database. Purpose: To determine the risk factors for unanticipated readmission, prolonged index admission, and discharge to a facility after vestibular schwannoma surgery. Materials And Methods: Retrospective cohort study of those undergoing surgery for vestibular schwannoma in the Nationwide Readmissions Database (2013-2014). Main outcome measures included readmission rate, length of stay, and discharge destination. Results: There were 4585 cases identified. The overall unanticipated readmission rate was 8.1%, and 9.1% had a prolonged length of stay (PLOS) of ≥7 days. Mean and median LOS were 4.63 and 4.00 days, respectively, and >90% of patients were discharged within 7 days. Disposition to a facility occurred in 6.7% of cases. Teaching hospitals were protective against unintended readmission (odds ratio [OR] 0.44, p < .001). Major functional loss was associated with PLOS (OR 12.55, p < .001). High volume centers were associated with decreased risk of PLOS (OR 0.46, p < .001) and facility discharge (OR 0.68, p < .001). The most common readmission diagnoses included "other nervous system complications" (n = 128), cerebrospinal fluid leak (n = 71), "other postoperative infection" (n = 61), and meningitis (n = 59). Conclusions: Unanticipated readmission and prolonged LOS following vestibular schwannoma surgery are common, with varied sociodemographic, hospital, and patient factors independently associated with each. Further studies are needed to investigate targeted interventions aimed at minimizing readmission and prolonged LOS using the factors outlined above. abstract_id: PUBMED:34221589 Factors associated with cerebrospinal fluid leak after a retrosigmoid approach for cerebellopontine angle surgery. Background: The retrosigmoid approach represents a crucial surgical route to address different lesions in the cerebellopontine angle but cerebrospinal fluid (CSF) leak still remains the most frequent complication after this approach. Here, we analyzed the impact of different factors on CSF leak development after a retrosigmoid approach. Identifying risk factors related to a specific approach may help the surgeon to tailor the perioperative management and to appropriately counsel patients regarding their risk profile.
Methods: We retrospectively reviewed the clinical, surgical, and outcome data of 103 consecutive patients (M/F, 47/56; mean follow-up 35.6 ± 23.9 months) who underwent a retrosigmoid approach for different cerebellopontine angle pathologies and studied the impact of different factors on the occurrence of a CSF leak with univariate and multivariate analyses. Results: Seventy-nine patients (76.7%) were operated on for tumors growing in the cerebellopontine angle. Twenty-four patients (23.2%) underwent microvascular decompression to treat a drug-resistant trigeminal neuralgia. Sixteen patients (15.5%) developed a CSF leak in the postoperative course, of which six underwent surgical revision. Performing a craniectomy as the surgical procedure (P = 0.0450) and performing a reopening procedure (second surgery; P = 0.0079) were significantly associated with a higher risk of developing a CSF leak. Moreover, performing a reopening procedure emerged as an independent factor for CSF leak development on multivariate analysis (P = 0.0156). Conclusion: Patients who underwent craniectomy and patients who underwent a second surgery showed a higher CSF leak rate. Ongoing improvement of biomaterial technology may help neurosurgeons to prevent this potentially life-threatening complication. Answer: Yes, there are modifiable risk factors that can help prevent a cerebrospinal fluid (CSF) leak following vestibular schwannoma surgery. One significant modifiable risk factor is the patient's body mass index (BMI). Patients classified as overweight (BMI 25-29.9), obese (BMI 30-39.9), or morbidly obese (BMI≥40) have a 2.5-, 3-, and 6-fold increased risk, respectively, of developing a postoperative CSF leak (PUBMED:25415063). Therefore, weight management and reduction in BMI could potentially decrease the likelihood of a CSF leak. Additionally, the choice of surgical approach is another modifiable factor. Patients undergoing a translabyrinthine (TL) approach experienced a higher rate of CSF leaks (PUBMED:25415063). Surgeons might consider alternative surgical approaches when feasible to reduce the risk of CSF leaks. Operative time is also a modifiable risk factor, with longer operative times being associated with an increased risk of CSF leaks (PUBMED:25415063). Efforts to optimize surgical efficiency and reduce operative time may help in lowering the risk of CSF leaks. Furthermore, the volume of petrous bone pneumatization has been identified as a risk factor, with a higher incidence of CSF fistulas in patients with greater petrous bone pneumatization (PUBMED:21099580). While this factor is not directly modifiable, it can inform preoperative planning and risk counseling. Lastly, the management of patients who have undergone previous interventions, such as stereotactic radiosurgery, may need to be tailored, as these patients may require less-than-complete resection to preserve facial nerve integrity and prevent neurological complications (PUBMED:27107262). In summary, modifiable risk factors such as BMI, surgical approach, operative time, and tailored management based on patient history can be addressed to reduce the risk of CSF leaks following vestibular schwannoma surgery.
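To give a rough sense of what the BMI-related odds ratios above mean in absolute terms, the Python sketch below converts them into approximate leak probabilities. Treating the cohort's overall 9.8% leak rate (45 of 457 patients) as the reference risk is a simplifying assumption made only for this illustration; the original study reports odds ratios from regression modelling, not absolute risks by BMI class.

# Illustrative only: apply the BMI odds ratios reported above to a
# reference probability. Using the cohort's overall 9.8% leak rate as the
# baseline is an assumption for this sketch, not a clinical estimate.

def risk_from_odds_ratio(reference_risk, odds_ratio):
    """Apply an odds ratio to a reference probability and return the new probability."""
    reference_odds = reference_risk / (1.0 - reference_risk)
    new_odds = reference_odds * odds_ratio
    return new_odds / (1.0 + new_odds)

reference = 0.098  # overall postoperative CSF-leak rate in the cohort (45/457 patients)
for label, odds_ratio in [("overweight, OR ~2.5", 2.5),
                          ("obese, OR ~3", 3.0),
                          ("morbidly obese, OR ~6", 6.0)]:
    print(f"{label}: ~{risk_from_odds_ratio(reference, odds_ratio):.0%} predicted leak risk")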
Instruction: Does warfarin help prevent ischemic stroke in patients presenting with post coronary bypass paroxysmal atrial fibrillation? Abstracts: abstract_id: PUBMED:22971719 Does warfarin help prevent ischemic stroke in patients presenting with post coronary bypass paroxysmal atrial fibrillation? Purpose: This study examines the efficacy of warfarin in preventing ischemic stroke due to paroxysmal atrial fibrillation (PAF) after coronary artery bypass grafting (CABG). Methods: Postoperative PAF occurred in 151(33.5%) of 447 patients undergoing conventional CABG. The patients were divided into two groups: group I consisting of 93 patients administered two types of antiplatelet agents and group II consisting of 58 patients treated with a single antiplatelet agent and warfarin. We compared the two groups in terms of CHADS2 score, incidence of ischemic stroke, and independent risk for stroke associated with post-CABG PAF. Results: The group I CHADS2 score (2.24 ±1.67) was significantly lower than the group II score (2.64 ± 1.22), p = 0.0452. However, 12 patients in group I (12.9%) suffered postoperative ischemic stroke, a rate significantly higher than that of group II (1 patient, 1.7%; p = 0.0173). Any recurrence of PAF or atrial fibrillation with bradycardia was assessed at the time of stroke onset. Logistic regression analysis showed that the absence of warfarin therapy constituted a risk factor for post-CABG stroke associated with PAF (Odds 13.04, p = 0.027). Conclusion: Warfarin therapy administered concomitantly with an antiplatelet agent dramatically reduced the incidence of ischemic stroke associated with postoperative PAF. abstract_id: PUBMED:33251914 New-Onset Atrial Fibrillation After Coronary Artery Bypass Grafting and Long-Term Outcome: A Population-Based Nationwide Study From the SWEDEHEART Registry. Background The long-term impact of new-onset postoperative atrial fibrillation (POAF) after coronary artery bypass grafting and the benefit of early-initiated oral anticoagulation (OAC) in patients with POAF are uncertain. Methods and Results All patients who underwent coronary artery bypass grafting without preoperative atrial fibrillation in Sweden from 2007 to 2015 were included in a population-based study using data from 4 national registries: SWEDEHEART (Swedish Web System for Enhancement and Development of Evidence-based Care in Heart Disease Evaluated According to Recommended Therapies), National Patient Registry, Dispensed Drug Registry, and Cause of Death Registry. POAF was defined as any new-onset atrial fibrillation during the first 30 postoperative days. Cox regression models (adjusted for age, sex, comorbidity, and medication) were used to assess long-term outcome in patients with and without POAF, and potential associations between early-initiated OAC and outcome. In a cohort of 24 523 patients with coronary artery bypass grafting, POAF occurred in 7368 patients (30.0%), and 1770 (24.0%) of them were prescribed OAC within 30 days after surgery. During follow-up (median 4.5 years, range 0‒9 years), POAF was associated with increased risk of ischemic stroke (adjusted hazard ratio [aHR] 1.18 [95% CI, 1.05‒1.32]), any thromboembolism (ischemic stroke, transient ischemic attack, or peripheral arterial embolism) (aHR 1.16, 1.05‒1.28), heart failure hospitalization (aHR 1.35, 1.21‒1.51), and recurrent atrial fibrillation (aHR 4.16, 3.76‒4.60), but not with all-cause mortality (aHR 1.08, 0.98‒1.18). 
Early initiation of OAC was not associated with reduced risk of ischemic stroke or any thromboembolism but with increased risk for major bleeding (aHR 1.40, 1.08‒1.82). Conclusions POAF after coronary artery bypass grafting is associated with negative prognostic impact. The role of early OAC therapy remains unclear. Studies aiming at reducing the occurrence of POAF and its consequences are warranted. abstract_id: PUBMED:25776468 Anticoagulation for stroke prevention in new atrial fibrillation after coronary artery bypass graft surgery. Background: The benefit of early anticoagulation for stroke prophylaxis in atrial fibrillation after coronary artery bypass graft (CABG) surgery is uncertain. We therefore studied what proportion of ischemic strokes in patients with atrial fibrillation early after CABG surgery were potentially preventable by anticoagulation with warfarin. Methods: We reviewed medical records from 2264 patients with isolated CABG performed during a period when our institution had no policy on anticoagulation for postoperative atrial fibrillation. The outcome was ischemic stroke within 30 days postoperatively and verified with computed tomography (CT) in patients with new postoperative atrial fibrillation for more than 48 h. Results: New, postoperative atrial fibrillation occurred in 403 (17.8%) of the patients and 191 of those (47.4%) were not started on warfarin at 48 hours. Eight patients developed CT-verified ischemic stroke, which occurred on postoperative day 1-3 in 4 patients and in 3 patients was of the lacunar type. In two patients (stroke day 25 and day 30) warfarin could have been preventive. In another patient with onset of neurological symptoms on postoperative day 8 (4 days from onset of the arrhythmia), systemic anticoagulation might have limited the severity of the stroke but warfarin therapy would not likely have reached therapeutic levels within 2 days. Conclusion: The preventive effect of warfarin on early stroke associated with new atrial fibrillation after CABG seems limited. Treatment with warfarin during the hospitalization has to take the risk of bleeding, particularly into the pericardium, as reported in the literature, into account. abstract_id: PUBMED:30710988 Purpose: To assess the prevalence of atrial fibrillation (AF) and use of antithrombotic agents in adult patients with acute coronary syndrome (ACS). Materials And Methods: We consecutively enrolled all ACS patients (n=1155) who were hospitalized in two Moscow-based percutaneous coronary intervention centers (each center performs over 500 PCIs a year) between October 2017 and February 2018. AF was diagnosed in 204 patients (17.7%). The risk of thromboembolic complications was assessed using the CHA2DS2-VASc Score. The risk of hemorrhagic complications was assessed using the HAS-BLED Score. The data were processed using StatSoft Statistica 10.0 and IBM SPSS Statistics v.23 software. Results: The prevalence of diagnosed AF was 13.6%, while the prevalence of undiagnosed AF was 4.1%. Of the 179 discharged patients with AF, only 2 had a low risk of ischemic stroke (IS). One hundred and fifty patients (83.8%) eligible for oral anticoagulant therapy received oral anticoagulants. Patients with diagnosed AF were administered oral anticoagulants (OACs) significantly more often than patients with undiagnosed AF [125 (91.9%) vs. 25 (58.1%), p<0.001]. Novel oral anticoagulants (NOACs) were administered four times more often than vitamin K antagonists [120 (80.0%) vs. 29 (19.3%), p<0.001].
Rivaroxaban was used in 51.3% of cases. Of the 29 patients treated with warfarin, only 3 (10.3%) achieved the target international normalized ratio (INR) at discharge. Of the 107 patients who underwent percutaneous coronary intervention (PCI), 77 patients (80%) received an OAC and two antiplatelet agents (with 74% receiving this three-agent therapy for one month), 11 patients (10.3%) received an OAC and an antiplatelet agent, and 18 patients (16.8%) received two antiplatelet agents. The only antiplatelet agent used as part of the three-agent therapy was clopidogrel. The three-agent therapy without PCI was administered in 43.1% of cases. Conclusion: We found that the prevalence of AF in patients with ACS was high. The fact that doctors administered NOACs suggests that they are aware of the need to use these agents to prevent thromboembolic complications in AF patients. abstract_id: PUBMED:23089528 Antithrombotic therapy after coronary stenting in patients with nonvalvular atrial fibrillation. Background: The safety and efficacy of triple therapy (TT; warfarin with dual antiplatelet therapy [DAPT]) in post-percutaneous coronary intervention (PCI) patients with atrial fibrillation (AF) are unclear. We aimed to determine whether TT is associated with a decreased stroke rate and an acceptable bleeding rate in this population. Methods: This was a single-centre, retrospective study. Primary composite outcome was death, ischemic stroke, or transient ischemic attack. Secondary outcomes included components of primary outcome, bleeding, and blood transfusion rates. Results: Of 602 post-PCI patients with AF between 2000 and 2009, 382 received TT, 220 DAPT. Mean follow-up post PCI was 5.9 ± 5.0 months. The TT group had a higher CHADS2 score (2.6 vs 2.1, P < 0.001), older age (72.9 vs 70.5 years, P = 0.039), more heart failure (72.3% vs 36.9%, P = 0.010), and more strokes (14.4% vs 6.4%, P = 0.010). Neither primary outcome, major bleeding, nor blood transfusion rates differed between treatment groups, but more gastrointestinal bleeding occurred with TT use (2.6% vs 0.5%, P = 0.045). Net clinical benefit was -5.2 (CHADS2 ≤ 2), 0.9 (CHADS2 > 2), and -3.2 (overall) per 100 patient-years. Conclusions: Although we found no association between TT usage and a reduction in cerebrovascular ischemic or major bleeding events in post-PCI patients with AF regardless of CHADS2 score vs DAPT, the study was likely underpowered to demonstrate a clinically relevant reduction. TT was associated with a 5-fold increase in gastrointestinal bleeding vs DAPT. Net clinical benefit calculations suggest benefits of TT in patients with CHADS2 > 2. Stratification with CHADS2 might be useful to determine the optimal antithrombotic therapy post PCI. abstract_id: PUBMED:26003433 Rationale and design of the RT-AF study: Combination of rivaroxaban and ticagrelor in patients with atrial fibrillation and coronary artery disease undergoing percutaneous coronary intervention. Objective: Optimal antithrombotic strategy for patients with concomitant coronary artery disease and atrial fibrillation (AF) undergoing percutaneous coronary intervention (PCI) is still controversial, and the role of novel antithrombotic agents has never been tested. Therefore, the aim of this study is to evaluate the overall safety and efficacy profile of the combination of rivaroxaban and ticagrelor in this particular population.
Design: The RT-AF study is an open-label, randomized, active-controlled, multicenter clinical trial with up to 420 subjects enrolled in 5 centers. Eligible patients with a history of, or new-onset, paroxysmal, persistent, or permanent non-valvular AF who are referred to the study centers with indications for PCI will be randomly assigned to receive triple therapy (including warfarin, clopidogrel and aspirin) or dual therapy (rivaroxaban and ticagrelor). All subjects will have clinical follow-up at discharge, at 30 days, 6 months and 12 months. The primary end point is major or clinically relevant non-major bleeding events at 12 months. The major secondary end point is the composite efficacy outcome of death, myocardial infarction, stent thrombosis and ischemic stroke. Conclusion: The study will be sufficiently powered to provide data primarily regarding the safety of dual therapy with rivaroxaban and ticagrelor over the traditional triple therapy in patients with AF undergoing PCI at 12 months. It will also provide important information regarding the efficacy of the two different antithrombotic regimens. (ClinicalTrials.gov identifier: NCT02334254). abstract_id: PUBMED:21519218 Antithrombotic regimens in patients with indication for long-term anticoagulation undergoing coronary interventions-systematic analysis, review of literature, and implications on management. There is a lack of consensus regarding the use of antithrombotic therapy (AT) in patients with indications for long-term anticoagulation who undergo percutaneous coronary intervention. We sought to evaluate the safety and efficacy of various antithrombotic regimens in this patient population. We conducted a Medline search for all English language, full-text articles from January 2000 to June 2009 that evaluated major cardiovascular outcomes in patients with indications for anticoagulation who undergo percutaneous coronary intervention. Data were analyzed from these studies to calculate annual incidence of major bleeding, stroke, and stent thrombosis with various antithrombotic regimens. Major bleeding events were calculated at 30 days and at 1 year. Ten retrospective studies, 1 post hoc analysis of a major registry, and 2 prospective studies qualified for our analysis. Atrial fibrillation was the most common indication for anticoagulation. Risk of major bleeding was 1.5% at 30 days and 5.2% at 1 year with triple AT (aspirin + warfarin + clopidogrel/ticlopidine). Dual antiplatelet therapy (aspirin + clopidogrel/ticlopidine) was associated with 2.4% annual risk of major bleeding. The annual incidence of both ischemic stroke and stent thrombosis was 1% with triple antithrombotic regimen. Risk of major bleeding increases proportionately with incremental duration of triple AT. Triple AT is effective in the prevention of ischemic stroke and stent thrombosis. Dual antiplatelet regimen is effective in patients with low annual risk of ischemic stroke (<4%; CHADS2 score <2) due to lower annual risk of bleeding associated with this regimen (2.4%). abstract_id: PUBMED:28648031 Impact of different antithrombotic therapy strategies on prognosis in patients with coronary heart disease combined with atrial fibrillation: a meta-analysis Objective: To evaluate the impact of various anticoagulant and antiplatelet therapy strategies on the prognosis of patients with coronary heart disease combined with atrial fibrillation.
Methods: The present meta-analysis was based on computerized searches of the English-language EMBASE, PubMed, Cochrane Central Register of Controlled Trials and Medline databases, the Chinese CBM, CNKI, Wan Fang and China Science and Technology Papers Online databases, and manual retrieval of important international conference proceedings up to 30 April 2016. Trials published in English or Chinese that met the Cochrane systematic review requirements were included, and the inclusion and exclusion criteria were defined accordingly. The end points were the incidence of major adverse cardiac events (MACE), ischemic stroke and major bleeding events. The patients had been randomly assigned to a triple antithrombotic therapy (aspirin + clopidogrel + warfarin) group or a dual antiplatelet therapy (aspirin + clopidogrel) group. The collected full-text articles underwent further quality assessment of the risk of bias using RevMan 5.3 software. The impact of the various antithrombotic strategies on the outcomes of patients with coronary heart disease and atrial fibrillation was then evaluated. Results: In this meta-analysis, 12 randomized controlled trials with 11 353 patients were included. Among these patients, 3 486 received triple antithrombotic therapy and 7 867 received dual antiplatelet therapy. There was no significant difference between the two patient groups in the incidence of MACE (OR=0.93, 95%CI 0.74-1.18, P>0.05) or in the incidence of ischemic stroke (OR=0.88, 95%CI 0.70-1.10, P=0.27). However, the incidence of major bleeding events in the triple antithrombotic therapy group was nearly twice that in the dual antiplatelet therapy group (OR=1.94, 95%CI 1.33-2.82, P=0.0006). Conclusion: Compared with a dual antiplatelet strategy, patients with coronary heart disease and atrial fibrillation treated with a triple antithrombotic strategy have a similar risk of ischemic stroke but a higher risk of major bleeding events. abstract_id: PUBMED:23689944 Antithrombotic therapy for patients with nonvalvular atrial fibrillation undergoing percutaneous coronary intervention: a review. Patients with atrial fibrillation who have risk factors for thromboembolism benefit from chronic oral anticoagulation therapy, and antiplatelet therapy alone is of relatively little benefit for prevention of ischemic stroke and systemic embolism. Patients undergoing percutaneous coronary intervention with drug-eluting stents require dual antiplatelet therapy with aspirin and a thienopyridine for 3 to 12 months or more for prevention of stent thrombosis and recurrent ischemic events. When patients with atrial fibrillation undergo percutaneous coronary intervention, the need to combine dual antiplatelet therapy and warfarin raises the risk of major bleeding complications considerably. Recent trials have explored the option of omitting aspirin with promising results. The introduction of novel oral anticoagulants that specifically inhibit factor IIa (dabigatran) or factor Xa (rivaroxaban, apixaban, and edoxaban) and antiplatelet agents that inhibit the P2Y12 receptor (prasugrel and ticagrelor) makes management of these patients even more challenging, but future trials addressing myriad alternative regimens may identify better tolerated strategies. abstract_id: PUBMED:21950643 New and emerging anticoagulant therapy for atrial fibrillation and acute coronary syndrome.
Thrombosis is an underlying cause of many cardiovascular disorders, and generation of thrombi in the arterial circulation can lead to unstable angina, myocardial infarction, or ischemic stroke. Antithrombotic therapy is widely used, with proven benefit to prevent ischemic stroke and thromboembolic events in patients with atrial fibrillation (AF) or to prevent further ischemic complications in patients with acute coronary syndrome (ACS). Traditional anticoagulants (including unfractionated heparin, low-molecular-weight heparin, and warfarin) and antiplatelet agents (including aspirin, clopidogrel, and prasugrel) are typically used for these indications. Limitations to their use include variable pharmacokinetic and pharmacodynamic profiles, inability to inhibit fibrin-bound thrombin, risk of heparin-induced thrombocytopenia, delayed onset of action, numerous drug interactions, need for substantial laboratory monitoring and dosage titrations, hyporesponsiveness or resistance, hypersensitivity, adverse events, and bleeding. To overcome some of the limitations of traditional agents, new antithrombotic agents under development are highly selective for specific coagulation factors blocking the synthesis of thrombin. Clinicians must have an understanding of the new anticoagulants to aid in the selection of appropriate therapies for patients. We describe the most relevant phase II and III clinical trials that evaluated several recent emerging anticoagulant drugs for use in patients with AF or ACS. The advantages of many new agents include predictable pharmacodynamic response and pharmacokinetic parameters, allowing for fixed oral dosing with no need for laboratory monitoring. For patients with AF, dabigatran is already approved for the prevention of stroke and systemic embolism, rivaroxaban appears to be an effective alternative to warfarin in high-risk patients, and apixaban may also be an effective alternative to aspirin in patients unable to take warfarin. Otamixaban shows promise as an intravenous alternative for patients with ACS in the acute care setting. Likewise, rivaroxaban, dabigatran, and darexaban with or without dual antiplatelet therapy may be beneficial for secondary prevention of ischemic events in patients with ACS. Answer: Yes, warfarin does help prevent ischemic stroke in patients presenting with post coronary bypass paroxysmal atrial fibrillation (PAF). A study examining the efficacy of warfarin in preventing ischemic stroke due to PAF after coronary artery bypass grafting (CABG) found that warfarin therapy, when administered concomitantly with an antiplatelet agent, dramatically reduced the incidence of ischemic stroke associated with postoperative PAF. In this study, patients treated with a single antiplatelet agent and warfarin had a significantly lower incidence of postoperative ischemic stroke (1.7%) compared to those administered two types of antiplatelet agents without warfarin (12.9%). Logistic regression analysis indicated that the absence of warfarin therapy was a risk factor for post-CABG stroke associated with PAF (Odds 13.04, p = 0.027) (PUBMED:22971719).
While POAF is associated with an increased risk of ischemic stroke, heart failure hospitalization, and recurrent atrial fibrillation, early initiation of OAC was not associated with a reduced risk of ischemic stroke or any thromboembolism but was associated with an increased risk for major bleeding (PUBMED:33251914). In another study, the preventive effect of warfarin on early stroke associated with new atrial fibrillation after CABG appeared limited, and the treatment with warfarin had to be balanced against the risk of bleeding, particularly into the pericardium (PUBMED:25776468). Overall, while warfarin can be effective in reducing the risk of ischemic stroke in patients with PAF post-CABG, the decision to initiate anticoagulation therapy must consider individual patient risk factors, including the potential for bleeding complications.
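The meta-analysis abstract above reports each comparison as an odds ratio (OR) with a 95% confidence interval, pooled in RevMan 5.3. As a purely illustrative aside (not the authors' analysis, and with made-up counts), the sketch below shows how a single odds ratio and its Wald-type 95% confidence interval are computed from a 2x2 table of event counts; a full meta-analysis would additionally weight and pool per-trial estimates, for example with a Mantel-Haenszel or inverse-variance method.

```python
import math

def odds_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Odds ratio of group A vs. group B with a Wald-type 95% CI."""
    a = events_a               # group A, event (e.g. major bleeding on triple therapy)
    b = total_a - events_a     # group A, no event
    c = events_b               # group B, event (e.g. major bleeding on dual therapy)
    d = total_b - events_b     # group B, no event
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical counts, purely for illustration:
or_, (lower, upper) = odds_ratio_ci(60, 1000, 32, 1000)
print(f"OR = {or_:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

An OR whose confidence interval excludes 1, as for the bleeding comparison reported above (1.94, 95%CI 1.33-2.82), is conventionally read as statistically significant.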
Instruction: Does hyperbaric oxygen administration decrease side effect and improve quality of life after pelvic radiation? Abstracts: abstract_id: PUBMED:18046062 Does hyperbaric oxygen administration decrease side effect and improve quality of life after pelvic radiation? Aim: To evaluate the influence of HBOT on side effects and quality of life after pelvic radiation. Methods: This was an open, randomized, parallel, prospective study conducted in the Department of Obstetrics and Gynecology, Oncology Division, and the Department of Radiotherapy. The endoscopy procedure was performed in the Department of Internal Medicine and tissue biopsy in the Department of Pathology Anatomy. Hyperbaric oxygen therapy (HBOT) was carried out at the Dr. Mintohardjo Navy Hospital, Jakarta. Side effects were measured using the LENT-SOMA scale ratio, and quality of life was assessed with the Karnofsky score. Differences between the two group means were analyzed using Student's t test. Results: Of the 32 patients undergoing HBOT and 33 control patients, the ratio of acute side effects (ASE) in the control group was 44.1+/-28.2% versus 0.7+/-30.1% in the HBOT group (p<0.001); the ratio of late side effects (LSE) in the control group was 33.6+/-57.6% versus -19.6+/-69.4% in the HBOT group (p=0.008). Quality of life in the control group after the intervention was 4.5+/-10.7% versus 19.7+/-9.6% in the HBOT group (p<0.001). Six months after the intervention, quality of life was 2.5+/-16.1% in the control group and 15.2+/-14.7% in the HBOT group (p=0.007). Conclusion: The study showed that HBOT decreased acute and late side effects and also improved the quality of life of patients with radiation proctitis. abstract_id: PUBMED:35320424 Symptom burden and health-related quality of life six months after hyperbaric oxygen therapy in cancer survivors with pelvic radiation injuries. Purpose: Late radiation tissue injuries (LRTIs) after treatment for pelvic cancer may impair health-related quality of life (HRQoL). Hyperbaric oxygen therapy is an adjuvant therapy for LRTIs but has been studied only to a limited extent. The aim of this study was to explore the development of, and association between, symptoms of LRTI and HRQoL following hyperbaric oxygen treatment. Methods: A pretest-posttest design was used to evaluate the changes in pelvic LRTIs and HRQoL from baseline (T1), immediately after treatment (T2), and at six-month follow-up (T3). EPIC and EORTC-QLQ-C30 were used to assess LRTIs and HRQoL. Changes were analysed with t-tests, and associations with Pearson's correlation and multiple regression analyses. Results: Ninety-five participants (mean age 65 years, 52.6% men) were included. Scores for urinary and bowel symptoms, overall HRQoL, all function scales, and the symptom scales sleep, diarrhoea, pain, and fatigue were significantly improved six months after treatment (P-range = 0.00-0.04). Changes were present already at T2 and were maintained or further improved to T3. Only a weak significant correlation between changes in symptoms and overall HRQoL was found (Pearson r-range 0.20-0.27). Conclusion: The results indicate improvement of pelvic LRTIs and HRQoL following hyperbaric oxygen therapy, corresponding to minimal or moderate important changes. Cancer survivors with pelvic LRTIs and impaired HRQoL may benefit from undergoing hyperbaric oxygen therapy. In particular, the reduced symptom severity and improved social and role function can positively influence daily living. Trial Registration: ClinicalTrials.gov: NCT03570229. Released 2 May 2018. abstract_id: PUBMED:35976581 The role of hyperbaric oxygen therapy in the treatment of radiation lesions.
Introduction: Cancer remains one of the leading causes of death worldwide, with 50-60% of patients requiring radiotherapy during the course of treatment. Patients' survival rates have increased significantly, with an inevitable increase in the number of patients experiencing side effects from cancer therapy. One such effect is late radiation tissue injury, for which hyperbaric oxygen therapy has emerged as a complementary treatment. With this work we report the results of applying hyperbaric oxygen therapy in patients presenting with radiation lesions at our Hyperbaric Medicine Unit. Materials And Methods: We performed a retrospective analysis of the clinical records of patients with radiation lesions treated at the Hyperbaric Medicine Unit between October 2014 and September 2019 who were assessed with the Late Effects of Normal Tissues-Subjective, Objective, Management, Analytical (LENT-SOMA) scale before and after treatment. Demographic characteristics, primary tumor site, and the subjective assessment of the LENT-SOMA scale before and after treatment were collected, and a comparative analysis (Student's t test) was performed. Results: 88 patients were included: 33 with radiation cystitis, 20 with radiation proctitis, 13 with osteoradionecrosis of the mandible, and 22 with radiation enteritis. In all groups, there was a significant decrease (p < 0.005) in the subjective parameter of the LENT-SOMA scale. Discussion: Late radiation lesions have a major influence on patients' quality of life. In our study, hyperbaric oxygen therapy proved to be an effective therapy after the failure of conventional treatments. Conclusion: Hyperbaric oxygen therapy is an effective complementary therapy in the treatment of refractory radiation lesions. abstract_id: PUBMED:34279734 The impact of hyperbaric oxygen therapy on late radiation toxicity and quality of life in breast cancer patients. Purpose: To evaluate symptoms of late radiation toxicity, side effects, and quality of life in breast cancer patients treated with hyperbaric oxygen therapy (HBOT). Methods: For this cohort study, breast cancer patients treated with HBOT in 5 Dutch facilities were eligible for inclusion. Breast cancer patients with late radiation toxicity treated with ≥ 20 HBOT sessions from 2015 to 2019 were included. Breast and arm symptoms, pain, and quality of life were assessed by means of the EORTC QLQ-C30 and -BR23 before, immediately after, and 3 months after HBOT on a scale of 0-100. Determinants associated with persistent breast pain after HBOT were assessed. Results: 1005/1280 patients were included for analysis. Pain scores decreased significantly from 43.4 before HBOT to 29.7 after 3 months (p < 0.001). Breast symptoms decreased significantly from 44.6 at baseline to 28.9 at 3 months of follow-up (p < 0.001), and arm symptoms decreased significantly from 38.2 at baseline to 27.4 at 3 months of follow-up (p < 0.001). All quality of life domains improved at the end of HBOT and after 3 months of follow-up in comparison to baseline scores. The most prevalent side effects of HBOT were myopia (any grade, n = 576, 57.3%) and mild barotrauma (n = 179, 17.8%). Moderate/severe side effects were reported in 3.2% (n = 32) of the patients. Active smoking during HBOT and a shorter time since radiotherapy (median 17.5 vs. 22.0 months) were associated with persistent breast pain after HBOT. Conclusion: Breast cancer patients with late radiation toxicity reported reduced pain, breast and arm symptoms, and improved quality of life following treatment with HBOT.
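The first abstract for this question (PUBMED:18046062) compares mean side-effect ratios and quality-of-life changes between an HBOT group and a control group using Student's t test. The sketch below illustrates that kind of two-group comparison on simulated data loosely shaped like the reported means and standard deviations; it uses Welch's variant (scipy's equal_var=False), a common default when group variances may differ, and the numbers are not the study's patient-level data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-patient acute side-effect ratios (%) for two independent groups,
# loosely matching the summary statistics reported in the abstract.
control = rng.normal(loc=44.1, scale=28.2, size=33)
hbot = rng.normal(loc=0.7, scale=30.1, size=32)

t_stat, p_value = stats.ttest_ind(control, hbot, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```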
abstract_id: PUBMED:18222656 Improved quality of life with hyperbaric oxygen therapy in patients with persistent pelvic radiation-induced toxicity. Aims: We report the results of hyperbaric oxygen therapy (HBOT) used in the treatment of radiation-induced persistent side-effects after the irradiation of pelvic tumours. Materials And Methods: Between January 2001 and December 2005, 13 women (median age 60.3 years) with radiation combined proctitis/cystitis (n=6), longstanding vaginal ulcers and fistulas (n=5) and longstanding skin injuries (n=2) underwent HBOT in a multiplace chamber for a median of 27 sessions (range 16-40). The treatment schedule was HBOT 100% oxygen, at 2 absolute atmospheres, for 90 min, once a day. For radiation-induced toxicity grading we used the National Cancer Institute Common Toxicity Criteria (CTC) grading system, before and after HBOT. Results: Thirteen patients underwent an adequate number of HBOT sessions. The mean CTC grading score before HBOT was 3.3+/-0.75, whereas the mean CTC grading score after HBOT was 0.3+/-0.63. The scores showed a significant improvement after HBOT (P=0.001; exact Wilcoxon signed-rank test). Rectal bleeding ceased in five of six patients with proctitis and dysuria resolved in six of seven cystitis patients. Macroscopic haematuria stopped in seven of seven patients. Scar complications resolved in two of two patients. None reported HBOT-associated side-effects. Conclusion: HBOT is apparently safe and effective in managing radiation-induced late side-effects, such as soft tissue necrosis (skin and vagina), cystitis, proctitis and fistulas. abstract_id: PUBMED:24377190 Quality of life--the effect of hyperbaric oxygen treatment on radiation injury. The purpose of the present study was to assess changes in health-related quality of life (HRQL) among patients with radiation injury one year after hyperbaric oxygen (HBO2 therapy). HBO2 therapy was given once daily, five times a week in monoplace hyperbaric chambers for at least 19 days. HRQL was measured by SF-36 (Short Form with 36 questions). The study population was 101 patients, and among these 53.5% had radiation injury to the head and neck region, 35.6% to the intestine and 10.9% to the bladder. Testing for differences before and one year after HBO2 therapy showed significant improvement for the following SF-36 scales: Physical Function an increase of 4.54 (p = 0.01). Role Performance an increase of 8.79 (p = 0.04). Vitality an increase of 6.88 (p = 0.001). Social Function an increase of 8.04 (p = 0.002). Time since radiation at HBO2 therapy was 1-39 years. A total of 82% received radiation more than one year ago, and 33% more than seven years ago. Changes in physical and mental sum scores were not associated with time since radiation. Patients below the age of 70 seemed to have the best effect of HBO2 therapy measured by HRQL. abstract_id: PUBMED:28838774 The role of hyperbaric oxygen therapy in the prevention and management of radiation-induced complications of the head and neck - a systematic review of literature. Radiation therapy for the treatment of head and neck cancer can injure normal tissues and have devastating side effects. Hyperbaric oxygen (HBO) is known to reduce the severity of radiation-induced injury by promoting wound healing. While most of the research in literature has focused on its efficacy in osteonecrosis, HBO has other proven benefits as well. 
The aim of this review was to identify the various benefits of hyperbaric oxygen therapy in patients who have undergone radiation for head and neck cancer. An electronic database search was carried out to identify relevant articles and selected articles were reviewed in detail. The quality of evidence for each benefit, including preserving salivary gland function, preventing osteonecrosis, dental implant success, and overall quality of life, was evaluated. Evidence showed that HBO was effective in improving subjective symptoms of xerostomia, swallowing, speech and overall quality of life. There was no conclusive evidence to show that HBO improved implant survival, prevented osteonecrosis, or improved salivary gland function. The high costs and accessibility of HBO therapy must be weighed against the potential benefits to each patient. abstract_id: PUBMED:12084193 Hyperbaric oxygen for the treatment of radiation cystitis and proctitis. Chronic radiation cystitis and radiation proctitis are known complications of the use of radiotherapy in the treatment of pelvic malignancies. These complications are, in part, due to endothelial damage and decreased vascularity and oxygenation to pelvic tissues. Hyperbaric oxygen therapy may be able to improve oxygenation and induce angiogenesis in damaged organs, resulting in recovery from radiation injury. Several studies have shown significant rates of response to hyperbaric oxygen treatment, however, no randomized trial exists to definitively demonstrate its effectiveness for cystitis and proctitis. In addition, concerns exist regarding the durability of the beneficial effect. abstract_id: PUBMED:36940182 Advances in the management of radiation-induced cystitis in patients with pelvic malignancies. Objective: Radiotherapy plays a vital role as a treatment for malignant pelvic tumors, in which the bladder represents a significant organ at risk involved during tumor radiotherapy. Exposing the bladder wall to high doses of ionizing radiation is unavoidable and will lead to radiation cystitis (RC) because of its central position in the pelvic cavity. Radiation cystitis will result in several complications (e.g. frequent micturition, urgent urination, and nocturia) that can significantly reduce the patient's quality of life and in very severe cases become life-threatening. Methods: Existing studies on the pathophysiology, prevention, and management of radiation-induced cystitis from January 1990 to December 2021 were reviewed. PubMed was used as the main search engine. Besides the reviewed studies, citations to those studies were also included. Results And Discussions: In this review, the symptoms of radiation cystitis and the mainstream grading scales employed in clinical situations are presented. Next, preclinical and clinical research on preventing and treating radiation cystitis are summarized, and an overview of currently available prevention and treatment strategies as guidelines for clinicians is provided. Treatment options involve symptomatic treatment, vascular interventional therapy, surgery, hyperbaric oxygen therapy (HBOT), bladder irrigation, and electrocoagulation. Prevention includes filling up the bladder to remove it from the radiation field and delivering radiation based on helical tomotherapy and CT-guided 3D intracavitary brachytherapy techniques. abstract_id: PUBMED:26677440 Pelvic radiation disease: Updates on treatment options. 
Pelvic cancers are among the most frequently diagnosed neoplasms, and radiotherapy represents one of the main treatment options. The irradiation field usually encompasses healthy intestinal tissue, especially the distal large bowel, thus inducing gastrointestinal (GI) radiation-induced toxicity. Indeed, up to half of radiation-treated patients say that their quality of life is affected by GI symptoms (e.g., rectal bleeding, diarrhoea). The constellation of GI symptoms - from transient to long-term, from mild to very severe - experienced by patients who underwent radiation treatment for a pelvic tumor has been grouped under the definition of pelvic radiation disease (PRD). A correct and evidence-based therapeutic approach to patients experiencing GI radiation-induced toxicity is mandatory. Therapeutic non-surgical strategies for PRD can be summarized in two broad categories, i.e., medical and endoscopic. Of note, most of the studies have investigated the management of radiation-induced rectal bleeding. Patients with clinically significant bleeding (i.e., causing chronic anemia) should first be considered for medical management (i.e., sucralfate enemas, metronidazole, and hyperbaric oxygen); in case of failure, endoscopic treatment should be implemented. The latter should be considered the first choice in cases of acute, transfusion-requiring bleeding. More well-designed, high-quality studies are needed; in particular, the role of medical treatments should be better investigated, as should comparisons between endoscopic and hyperbaric oxygen treatments. Answer: Yes, hyperbaric oxygen administration does decrease side effects and improve quality of life after pelvic radiation. Studies have shown that hyperbaric oxygen therapy (HBOT) decreased acute and late side effects and also improved the quality of life of patients with radiation proctitis (PUBMED:18046062). Additionally, improvements in health-related quality of life (HRQoL) were observed following HBOT, with significant improvements in urinary and bowel symptoms, overall HRQoL, function scales, and symptom scales such as sleep, diarrhea, pain, and fatigue (PUBMED:35320424). Furthermore, retrospective analysis indicated that HBOT presents as an effective therapy after the failure of conventional treatments for late radiation lesions, significantly decreasing the subjective parameter of the LENT-SOMA scale and thereby influencing patients' quality of life positively (PUBMED:35976581). In breast cancer patients with late radiation toxicity, HBOT led to reduced pain, breast and arm symptoms, and improved quality of life (PUBMED:34279734). Another study reported significant improvements in CTC grading scores after HBOT for patients with radiation-induced late side-effects such as soft tissue necrosis, cystitis, proctitis, and fistulas (PUBMED:18222656). Moreover, one year after HBOT, patients showed significant improvement in various scales of HRQL, including Physical Function, Role Performance, Vitality, and Social Function (PUBMED:24377190). While the evidence supports the benefits of HBOT in improving subjective symptoms and overall quality of life, it is important to note that the high costs and accessibility of HBOT must be considered, and more high-quality comparative studies are needed to further investigate the role of medical treatments and the comparative effectiveness of endoscopic versus hyperbaric oxygen treatments (PUBMED:26677440).
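Several of the abstracts above analyse the same patients before and after HBOT: PUBMED:35976581 compares LENT-SOMA subjective scores with a t test, and PUBMED:18222656 applies an exact Wilcoxon signed-rank test to CTC toxicity grades. The sketch below runs both paired tests on a small set of invented before/after scores; it only demonstrates the procedure and is not a re-analysis of either study.

```python
import numpy as np
from scipy import stats

# Invented before/after toxicity scores for the same ten patients.
before = np.array([3, 4, 3, 2, 4, 3, 3, 4, 2, 3], dtype=float)
after = np.array([1, 2, 1, 1, 2, 1, 2, 2, 1, 1], dtype=float)

t_stat, p_paired = stats.ttest_rel(before, after)   # paired t-test
w_stat, p_wilcoxon = stats.wilcoxon(before, after)  # Wilcoxon signed-rank test
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_wilcoxon:.4f}")
```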
Instruction: Are standard intra-abdominal pressure values different during pregnancy? Abstracts: abstract_id: PUBMED:24204808 Are standard intra-abdominal pressure values different during pregnancy? Background: Measurement of intra-abdominal pressure (IAP) is an important parameter in the surveillance of intensive care unit patients. Standard values of IAP during pregnancy have not been well defined. The aim of this study was to assess IAP values in pregnant women before and after cesarean delivery. Methods: This prospective study, carried out from January to December 2011 in a French tertiary care centre, included women with an uneventful pregnancy undergoing elective cesarean delivery at term. IAP was measured through a Foley catheter inserted in the bladder under spinal anaesthesia before cesarean delivery, and every 30 minutes during the first two hours in the immediate postoperative period. Results: The study included 70 women. Mean IAP before cesarean delivery was 14.2 mmHg (95%CI: 6.3-23). This value was significantly higher than in the postoperative period: 11.5 mmHg (95%CI: 5-19.7) for the first measurement (p = 0.002). IAP did not significantly change during the following two postoperative hours (p = 0.2). Obese patients (n = 25) had a preoperative IAP value significantly higher than non-obese patients: 15.7 vs. 12.4; p = 0.02. Conclusion: In term pregnancies, IAP values are significantly higher before delivery than in the post-partum period, where IAP values remain elevated for at least two hours at the level of postoperative classical abdominal surgery. The knowledge of these physiological changes in IAP values may help prevent organ dysfunction/failure when abdominal compartment syndrome occurs after cesarean delivery. abstract_id: PUBMED:28619279 Intra-abdominal pressure and intra-abdominal hypertension in critically ill obstetric patients: a prospective cohort study. Background: Critically ill obstetric patients may have risk factors for intra-abdominal hypertension. This study evaluated the intra-abdominal pressure and its effect on organ function and the epidemiology of intra-abdominal hypertension. Methods: Obstetric patients admitted to an Intensive Care Unit, with an anticipated stay greater than 24hours, were included. Intra-abdominal pressure was measured daily via a Foley catheter, based on intravesical pressure. Results: One-hundred-and-one patients were enrolled. The intra-abdominal pressure was 5-7mmHg in 34%; 7-12mmHg in 60%; and ≥12mmHg (intra-abdominal hypertension) in 6%. All six patients with intra-abdominal hypertension were pregnant at the time of admission. The intra-abdominal pressure in four patients normalized to <12mmHg following delivery, but in the remaining two it persisted ≥12mmHg and both these patients died. Correlation between intra-abdominal pressure and organ dysfunction was weak (r=0.211). Statistical comparison between patients with and without intra-abdominal hypertension for risk factors, daily intra-abdominal pressures, and Sequential Organ Failure Assessment score could not be done due to the disproportionately small number of patients with intra-abdominal hypertension as opposed to those without (6 versus 95). Intra-abdominal pressure did not significantly differ between survivors and non-survivors (8.5±1.1 vs 7.9±1.7mmHg, P=0.079). 
Conclusions: The incidence of intra-abdominal hypertension in critically ill obstetric patients was lower than previously defined for mixed Intensive Care Unit populations, with an association with the pregnant state. Normalization of intra-abdominal pressure after delivery was associated with better survival. There was no correlation between intra-abdominal pressure and organ function or mortality. abstract_id: PUBMED:23118052 Effects of ovariohysterectomy on intra-abdominal pressure and abdominal perfusion pressure in cats. Intra-abdominal pressure (IAP) and abdominal perfusion pressure (APP) have shown clinical relevance in monitoring critically ill human beings undergoing abdominal surgery. Only a few studies have been performed in veterinary medicine. The aim of this study was to assess how pregnancy and abdominal surgery may affect IAP and APP in healthy cats. For this purpose, pregnant (n=10) and non-pregnant (n=11) queens undergoing elective spaying, and tomcats (n=20, used as controls) presented for neutering by scrotal orchidectomy, were included in the study. IAP, mean arterial blood pressure (MAP), APP, heart rate, and rectal temperature (RT) were determined before, immediately after, and four hours after surgery. IAP increased significantly immediately after abdominal surgery in both female groups when compared with baseline (P<0.05) and male (P<0.05) values, and returned to initial perioperative readings four hours after surgery. Tomcats and pregnant females (P<0.05) showed an increase in MAP and APP immediately after surgery, which decreased back to initial perioperative values four hours later. A significant decrease in RT was observed immediately after laparotomy in both pregnant and non-pregnant queens. IAP was affected by abdominal surgery in this study, likely due to factors such as postoperative pain and hypothermia. Pregnancy did not seem to affect IAP in this population of cats, possibly because the subjects were in early stages of pregnancy. abstract_id: PUBMED:22326198 Measurement of intra-abdominal pressure in term pregnancy: a pilot study. Background: This study was conducted to assess the feasibility of measuring intra-abdominal pressure in term parturients under spinal anesthesia. Methods: Intra-abdominal pressure was measured in 20 term parturients after spinal anesthesia for elective caesarean section. Pressure was measured in the supine and 10° left lateral tilt positions with a constant reference point throughout. Results: Intra-abdominal pressure measurement was feasible and safe to perform. Pressure was significantly lower in the left lateral tilt position than supine (10.9 ± 4.67 vs. 8.9 ± 4.87 mmHg, P=0.0004). The range of intra-abdominal pressure in pregnancy was wide, from 2 to 20 mmHg, with >25% of patients resting with pressures above 12 mmHg in both positions. Conclusions: Under spinal anesthesia, intra-abdominal pressure in >25% of healthy term parturients was >12 mmHg, which has conventionally been defined as intra-abdominal hypertension. Measurement of intra-abdominal pressure in term pregnancy should be performed in the left lateral tilt position to avoid falsely elevated pressure measurements. abstract_id: PUBMED:33251897 Physiology of intra-abdominal volume during pregnancy. A total of 580 pregnant and 50 puerperal women were included in this cross-sectional study to assess the physiological changes that allow women to adapt to a chronic increase in intra-abdominal pressure during pregnancy.
The volume of the uterus, intra-abdominal volume (IAV), and visceral and subcutaneous fat were calculated. During pregnancy, the IAV increases up to 1.5 times. Changes in IAV until 24 weeks present a linear relationship (5.2%); thereafter, changes become exponential and, at 40 weeks, IAV increases by 61%. This fact is exclusively related to the progressive growth of the foetus and to the increase in uterine size. At term, the IAV reserve is exhausted, with the anteroposterior and transverse diameters of the abdomen becoming equal. In conclusion, the adaptive capabilities of IAV related to foetal growth are limited by the IAV reserve. The reserve capacity of the IAV and the tensile properties of the abdominal wall can be estimated from the dynamics of the anteroposterior and transverse abdominal diameters. IMPACT STATEMENT: What is already known on this subject? A causal relationship between intra-abdominal hypertension and the development of adverse obstetric and perinatal outcomes has been suggested. Nevertheless, the role of this condition as a leading cause of systemic dysfunction during pregnancy remains unrecognised and underestimated. What do the results of this study add? This study assesses the dynamics of IAV in uncomplicated singleton pregnancies. What are the implications of these findings for clinical practice and/or further research? The study of abdominal pressure indicators such as intra-abdominal volume and compliance will help to better understand the aetiology, pathophysiology, prognosis, and treatment strategies for pregnant women with intra-abdominal hypertension. abstract_id: PUBMED:31070780 Management of peripartum intra-abdominal hypertension and abdominal compartment syndrome. Normal pregnancy leads to a state of chronically increased intra-abdominal pressure. Obstetric and non-obstetric conditions may increase intra-abdominal pressure further, causing intra-abdominal hypertension and abdominal compartment syndrome, which leads to maternal organ dysfunction and a compromised fetal state. Limited medical literature exists to guide treatment of pregnant women with these conditions. In this state-of-the-art review, we propose a diagnostic and treatment algorithm for the management of peripartum intra-abdominal hypertension and abdominal compartment syndrome, informed by newly available studies. abstract_id: PUBMED:25499016 The effect of intra-abdominal pressure on sensory block level of single-shot spinal anesthesia for cesarean section: an observational study. Background: Increased intra-abdominal pressure in pregnancy is thought to affect intrathecal drug spread. However, this assumption remains largely untested. The aim of this prospective study was to evaluate the association between intra-abdominal pressure and maximum sensory block level in parturients receiving spinal anesthesia for cesarean section. Methods: Parturients having elective cesarean section with single-shot spinal anesthesia using hyperbaric bupivacaine 12.5 mg were included. Intra-abdominal pressure was measured via a bladder catheter after establishing a T4 sensory block and at the end of surgery, in the supine position with 10° left lateral tilt. We recorded demographic data, descriptive characteristics of pregnancy, self-reported weight gain, and weight of the newborn.
As secondary outcomes, we evaluated onset of sensory block, maximum sensory block, motor block, number of hypotensive episodes, fluid and ephedrine requirements, time to first analgesic request, time to one-point recovery of motor block, and side effects. Results: The median value of the maximum sensory block level was T2 in 117 parturients. Median [interquartile range] pre-incision and postoperative intra-abdominal pressures were 13 [11-16] and 9 [6-10] mmHg, respectively. No association was observed between maximum sensory block level and pre-incision intra-abdominal pressure (P=0.83). Weight was associated with pre-incision intra-abdominal pressure, with an estimated odds ratio of 1.04 per kg (99.4% CI: 1.00-1.08). There was a moderate correlation between pre-incision and postoperative intra-abdominal pressure, with a Spearman correlation coefficient of 0.67 (99.5% CI: 0.5-0.79). There was no association between pre-incision intra-abdominal pressure and secondary outcomes. Conclusions: In parturients, intra-abdominal pressure was not associated with spinal block spread, block onset time, recovery, or side effects. abstract_id: PUBMED:37851647 The association between maternal intra-abdominal pressure and hypertension in pregnancy. Introduction: Pregnancy leads to a state of chronically increased intra-abdominal pressure (IAP) caused by a growing fetus, fluid, and tissue. Increased intra-abdominal pressure can lead to a state of intra-abdominal hypertension (IAH) and abdominal compartment syndrome. The clinical features and risk factors of preeclampsia are comparable to those of abdominal compartment syndrome. IAP may be associated with hypertension in pregnancy (HIP). Objectives: The study aimed to determine antepartum and postpartum IAP levels in women undergoing caesarean delivery (CD) and the association between hypertension in pregnancy and antepartum and postpartum IAP levels in these women. Method: Seventy pregnant women (55 normotensive, 15 HIP) undergoing antepartum, non-emergency CD had their intravesical pressure measured before and after the CD; the intravesical pressure measurements obtained with the patient in the supine position were considered to correspond to the IAP. Multivariable linear regression models were used to study associations between intra-abdominal pressure and baseline characteristics in normotensive and hypertensive pregnancies. Results: In normotensive pregnancies at a mean gestational age of 38.2 weeks (95%CI 37.9 to 38.6), mean antepartum IAP was 12.7 mmHg (95%CI 11.6 to 13.8) and mean postpartum IAP was 7.3 mmHg (95% CI 11.6 to 13.8). Multivariable linear regression showed that, compared with the normotensive pregnancy group, HIP was positively associated with antepartum IAP (coefficient 1.617, p = 0.268) and with postpartum IAP (coefficient 2.519, p = 0.018). The antepartum-to-postpartum IAP difference was negatively associated with HIP (coefficient -1.013, p = 0.179). Conclusion: In normotensive pregnancies at term, the IAP was in the IAH range of the non-pregnant population. Higher antepartum and postpartum IAP are associated with HIP. The reduction of IAP from the antepartum to the postpartum period was smaller with HIP. abstract_id: PUBMED:25117778 Intra-abdominal pressure measurements in term pregnancy and postpartum: an observational study.
Objective: To determine intra-abdominal pressure (IAP) and to evaluate the reproducibility of IAP measurements using the Foley Manometer Low Volume (FMLV) in term uncomplicated pregnancies before and after caesarean section (CS), relative to two different reference points and to non-pregnant values. Design: Observational cohort study. Setting: Secondary level referral center for feto-maternal medicine. Population: Term uncomplicated pregnant women as the case group and non-pregnant patients undergoing a laparoscopic assisted vaginal hysterectomy (LAVH) as the control group. Methods: IAP was measured in 23 term pregnant patients, before and after CS, and in 27 women immediately after and 1 day after LAVH. The midaxillary line was used as the zero-reference (IAPMAL) in all patients, and in 13 CS and 13 LAVH patients the symphysis pubis (IAPSP) was evaluated as an additional zero-reference. The intraobserver correlation (ICC) was calculated for each zero-reference. Paired Student's t-tests were performed to compare IAP values, and Pearson's correlation was used to assess correlations between IAP and gestational variables. Main Outcome Measures: ICC before and after surgery, IAP before and after CS, IAP after CS and LAVH. Results: The ICC for IAPMAL before CS was lower than after (0.71 versus 0.87). Both mean IAPMAL and IAPSP were significantly higher before CS than after: 14.0±2.6 mmHg versus 9.8±3.0 mmHg (p<0.0001) and 8.2±2.5 mmHg versus 3.5±1.9 mmHg (p = 0.010), respectively. After CS, IAP was not different from the values measured in the LAVH group. Conclusion: IAP measurement using FMLV is reproducible in pregnant women. Before CS, IAP is increased into the range of intra-abdominal hypertension for non-pregnant individuals. IAP significantly decreases to normal values after delivery. abstract_id: PUBMED:21335951 Reference values and early determinants of intra-abdominal fat mass in primary school children. Background: Intra-abdominal fat (IAF) is a valuable predictor of cardiovascular morbidity. However, neither reference values nor determinants are known in children. Methods: IAF was assessed as sonographically measured intra-abdominal depth in 1,046 children [median age 7.6 years, interquartile range (IQR) 7.2-7.9; 54% boys] of the URMEL-ICE study. Results: The intraclass correlation coefficient for intraobserver agreement was 0.93. The median IAF showed a significant gender difference (boys: 54.6 mm, IQR 50.1-59.3, vs. girls: 51.7 mm, IQR 46.3-56.4; p < 0.001). Age- and gender-specific centiles were generated. IAF showed a positive correlation to systolic blood pressure [regression coefficient (β) = 0.24 mm Hg/mm; p < 0.001] and a negative correlation to HDL cholesterol (β = -0.01 mmol/l/mm; p < 0.001). IAF showed a positive association with increased paternal and maternal BMI (β = 0.28 mm/kg/m² and 0.27 mm/kg/m²; p < 0.001), increased weight gain in the first 2 years of life (β = 3.04 mm; p < 0.001), and maternal smoking during pregnancy (β = 2.4 mm; p = 0.001). Increased parental education was negatively associated with IAF (maternal: β = -0.65 mm/degree; p = 0.004, and paternal: β = -0.61 mm/degree; p = 0.002). Conclusion: Sonography was a reliable tool to estimate IAF. Factors influencing IAF included rapid infant weight gain, smoking during pregnancy, and parental BMI and education. Since IAF showed an association with cardiovascular risk factors even in prepubertal children, it might become a valuable predictor of cardiovascular vulnerability.
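Both abstracts directly above quantify measurement reproducibility with an intraclass correlation coefficient (ICC): 0.71-0.87 for repeated IAP readings (PUBMED:25117778) and 0.93 for intraobserver agreement on sonographic intra-abdominal depth (PUBMED:21335951). The sketch below computes a simple one-way random-effects ICC(1,1) on invented repeated readings; published studies often use two-way models, so this is only meant to convey the general idea of an agreement coefficient, not to reproduce either study's exact method.

```python
import numpy as np

def icc_oneway(measurements):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_repeats) array."""
    x = np.asarray(measurements, dtype=float)
    n, k = x.shape
    grand_mean = x.mean()
    subject_means = x.mean(axis=1)
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((x - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Invented repeated IAP readings (mmHg): 8 subjects, 2 readings each.
repeats = np.array([
    [14.1, 13.8], [12.5, 12.9], [15.0, 14.6], [11.2, 11.5],
    [13.4, 13.1], [16.0, 15.4], [10.8, 11.0], [14.7, 14.9],
])
print(f"ICC(1,1) = {icc_oneway(repeats):.2f}")
```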
Answer: Yes, standard intra-abdominal pressure (IAP) values are different during pregnancy. Studies have shown that IAP values are significantly higher before delivery than in the post-partum period (PUBMED:24204808). In term pregnancies, IAP values remain elevated for at least two hours at the level of postoperative classical abdominal surgery (PUBMED:24204808). Additionally, the incidence of intra-abdominal hypertension in critically ill obstetric patients was found to be lower than previously defined for mixed Intensive Care Unit populations, with an association with the pregnant state (PUBMED:28619279). A study on cats indicated that pregnancy did not seem to affect IAP, possibly due to subjects being in early stages of pregnancy (PUBMED:23118052). However, in humans, the physiological changes that allow women to adapt to a chronic increase in IAP during pregnancy have been documented, with the IAV increasing up to 1.5 times by 40 weeks of gestation (PUBMED:33251897). Furthermore, under spinal anesthesia, more than 25% of healthy term parturients had IAP values greater than 12 mmHg, which has conventionally been defined as intra-abdominal hypertension (PUBMED:22326198). The study of intra-abdominal pressure in term pregnancy should be performed in the left lateral tilt position to avoid falsely elevated pressure measurements (PUBMED:22326198). Management of peripartum intra-abdominal hypertension and abdominal compartment syndrome is also influenced by the chronically increased IAP during normal pregnancy (PUBMED:31070780). Lastly, an observational study found that before cesarean section, IAP is increased in the range of intra-abdominal hypertension for non-pregnant individuals and significantly decreases to normal values after delivery (PUBMED:25117778). In summary, standard IAP values are indeed different during pregnancy, with increased values observed before delivery and a decrease postpartum, reflecting the physiological adaptations and changes that occur during this period.
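One of the abstracts for this question (PUBMED:37851647) relates IAP to baseline characteristics with multivariable linear regression, reporting coefficients such as 1.617 for antepartum IAP in the hypertensive group. The sketch below shows the mechanics of fitting such a model by ordinary least squares on simulated data; the choice of predictors, the effect sizes, and the noise level are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 70  # hypothetical cohort size, echoing the abstract

# Simulated baseline characteristics.
hip = rng.integers(0, 2, size=n)      # hypertension in pregnancy (0/1)
bmi = rng.normal(28, 4, size=n)       # body mass index
ga = rng.normal(38, 1, size=n)        # gestational age (weeks)

# Simulated outcome: antepartum IAP (mmHg) with an assumed HIP effect.
iap = 12 + 1.6 * hip + 0.1 * (bmi - 28) + rng.normal(0, 2, size=n)

# Ordinary least squares via a design matrix with an intercept column.
X = np.column_stack([np.ones(n), hip, bmi, ga])
coef, *_ = np.linalg.lstsq(X, iap, rcond=None)
for name, b in zip(["intercept", "HIP", "BMI", "gestational age"], coef):
    print(f"{name:>16}: {b:+.3f}")
```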
Instruction: Do American oncologists know how to use prognostic variables for patients with newly diagnosed primary breast cancer? Abstracts: abstract_id: PUBMED:8021733 Do American oncologists know how to use prognostic variables for patients with newly diagnosed primary breast cancer? Purpose: This project was designed to investigate how American medical oncologists actually use prognostic information to treat primary breast cancer patients, and to study their difficulties in combining complex and sometimes contradictory information. Methods: A simple 2-page questionnaire was faxed in May and June 1993 to a sample of American medical oncologists who were members of the American Society of Clinical Oncology (ASCO). Results: When presented with simple case histories of patients with newly diagnosed invasive breast cancer and asked to assess prognosis on the basis of tumor size, number of involved axillary nodes, patient age, estrogen receptor level, and progesterone receptor level, there was a wide divergence of opinions about the probability of disease-free survival at 10 years (both for cases in which the patient received no adjuvant therapy and for those in which the patient did receive such therapy). The use of additional prognostic data (such as S-phase, tumor histologic and nuclear grading, and cathepsin D status) did not refine the estimates, but led to an equal or greater dispersion of estimates of prognosis. Conclusion: There is a clear need for tools to help oncologists integrate prognostic information for primary breast cancer patients. Such tools might lead to greater accuracy and uniformity of prognostic estimates. Such tools might also help make clear what prognostic tests are worth using for routine clinical practice. abstract_id: PUBMED:23472936 Measuring reliability and validity of a newly developed stress instrument: Newly Diagnosed Breast Cancer Stress Scale. Aims And Objectives: To assess the reliability and validity of a developed instrument entitled Newly Diagnosed Breast Cancer Stress Scale. Background: Distress, clinical anxiety and depression are evident in patients with cancer, leading to poor psychosocial and quality-of-life outcomes. Design: Instrument development study with norm-referenced measurements. Methods: Content validity was determined by expert review. Cronbach's α was used to assess internal consistency reliability and product-moment correlations were conducted. Exploratory factor analysis measured validity of items using varimax rotation method. Criterion-related validity testing used the Perceived Stress Scale and the convergent validity test of construct validity used the Hospital Anxiety and Depression Scale. A total of 125 women pathologically diagnosed with breast cancer were interviewed on the day prior to initial breast surgery. Results: After testing, the Newly Diagnosed Breast Cancer Stress Scale consisted of four main factors with 17 items with acceptable reliability and good validity, and its length and time to complete the questionnaire were appropriate. Internal consistency reliability of the scale was shown by Cronbach's α = 0·84, the criterion validity of Perceived Stress Scale-10 was r = 0·46 (p < 0·001), the convergent validity of Hospital Anxiety and Depression Scale-14 was r = 0·57 (p < 0·001) for anxiety and r = 0·35 (p < 0·001) for depression. Conclusions: The Newly Diagnosed Breast Cancer Stress Scale has acceptable reliability and good validity to measure stress in newly diagnosed patients with breast cancer. 
Relevance To Clinical Practice: The Newly Diagnosed Breast Cancer Stress Scale can provide healthcare workers with an instrument to better identify stress levels in newly diagnosed breast cancer patients and provide valuable information when defining psychosocial care interventions. abstract_id: PUBMED:30482726 Management of Localized Breast Angiosarcoma by North American Radiation and Medical Oncologists. Introduction: Primary breast angiosarcoma is a rare malignancy with no clinical trials to guide management. The current use of surgery, chemotherapy, and radiotherapy among North American oncologists is unknown. Patients And Methods: An institutional review board-approved anonymous electronic survey was distributed to 9660 practicing North American radiation and medical oncologists. Questions pertained to treatment recommendations for localized nonmetastatic primary breast angiosarcoma, as well as knowledge/use of β-blockers in angiosarcoma. The Fisher exact test was used to compare responses of medical and radiation oncologists. Results: Surgery was recommended by 95% of all respondents. Chemotherapy was recommended by over half of medical and radiation oncologists. Radiotherapy was recommended by 92% of radiation and 56% of medical oncologists. The most common treatment recommendation was a trimodal treatment, with up-front surgery followed by adjuvant chemotherapy, then by adjuvant radiotherapy. Twenty-two percent of respondents were aware of clinical data pertaining to the use of β-blockers in management of angiosarcoma, and among these respondents 69% were comfortable incorporating this treatment into standard practice. Conclusion: Trimodal management of primary localized breast angiosarcoma is supported by North American radiation and medical oncologists, with the majority recommending up-front surgery followed by adjuvant chemotherapy and radiation. The recently published reports of successful use of β-blockers are not yet known among North American clinicians, but there is a great enthusiasm to incorporate these commonly prescribed medications into standard practice. These findings may greatly influence the standard of care for breast angiosarcoma treatment, particularly given the absence of Level I-supported evidence. abstract_id: PUBMED:26244115 Indicators of distress in newly diagnosed breast cancer patients. Background. The diagnosis, treatment, and long-term management of cancer can present individuals with a multitude of stressors at various points in that trajectory. Psychosocial distress may appear early in the diagnostic process and have negative effects on compliance with treatment and subsequent quality of life. Purpose. The aim of the study was to determine early-phase predictors of distress before any medical treatment. Method. Consistent with the goals of the study, 123 newly diagnosed breast cancer patients (20 to 74 years old) completed multiple indicators of knowledge about breast cancer management and treatment, attitudes toward cancer, social support, coping efficacy, and distress. Results. SEM analysis confirmed the hypothesized model. Age was negatively associated with the patient's knowledge (β = - 0.22), which, in turn, was positively associated with both attitudes toward breast cancer (β = 0.39) and coping self-efficacy (β = 0.36). Self-efficacy was then directly related to psychological distress (β = - 0.68). Conclusions. These findings establish indicators of distress in patients early in the cancer trajectory. 
From a practical perspective, our results have implications for screening for distress and for the development of early interventions that may be followed by healthcare professionals to reduce psychological distress. abstract_id: PUBMED:27698895 African American Race is an Independent Risk Factor in Survival from Initially Diagnosed Localized Breast Cancer. BACKGROUND: African American race negatively impacts survival from localized breast cancer, but co-variable factors confound the impact. METHODS: Data sets from the Surveillance, Epidemiology and End Results (SEER) directories from 1973 to 2011 were analyzed, consisting of patients with a designated diagnosis of breast adenocarcinoma; race recorded as White or Caucasian, Black or African American, Asian, American Indian or Alaskan Native, or Native Hawaiian or Pacific Islander; age; stage I, II or III; grade 1, 2 or 3; estrogen receptor or progesterone receptor positive or negative; marital status as single, married, separated, divorced or widowed; and laterality as right or left. The Cox Proportional Hazards Regression model was used to determine hazard ratios for survival. The chi-square test was applied to determine the interdependence of variables found significant in the multivariable Cox Proportional Hazards Regression analysis. Cells with stratified data of patients with identical characteristics except African American or Caucasian race were compared. RESULTS: Age, stage, grade, ER and PR status, and marital status significantly co-varied with race and with each other. Stratifications by single co-variables demonstrated worse hazard ratios for survival for African Americans. Stratification by three and four co-variables demonstrated worse hazard ratios for survival for African Americans in most subgroupings with sufficient numbers of values. Differences in some subgroupings containing poor prognostic co-variables did not reach significance, suggesting that race effects may be partly overcome by additional poor prognostic indicators. CONCLUSIONS: African American race is a poor prognostic indicator for survival from breast cancer independent of 6 associated co-variables with prognostic significance. abstract_id: PUBMED:37461757 A Cross-Sectional Study on the Epidemiology of Newly Diagnosed Breast Cancer Patients Attending Tertiary Care Hospitals in a Tribal Preponderant State of India: Regression Analysis. Introduction: Breast cancer (BC) is globally prevalent and the leading cause of death due to cancer in females. Due to changes in risk factor profiles, improved cancer registration, and improved cancer detection, its incidence and death rates have risen over the past three decades. Both modifiable and non-modifiable risk factors make up a sizable portion of the total risk factors for BC. Methodology: This was a hospital-based cross-sectional study carried out in the Department of Surgery, Rajendra Institute of Medical Sciences (RIMS), Ranchi. Consecutive sampling was done, with complete enumeration of all newly diagnosed breast cancer cases in patients older than 15 years. Those who did not consent to participate and those who were extremely ill, deaf, or dumb were excluded from the study. Results: A total of 88 patients were included. Most patients diagnosed with breast cancer belonged to the age group of 40-50 years (37.5%) and were Hindu by religion (76.1%), non-tribal (80.68%), illiterate (89.8%), married (98.9%), housewives (92%), and of class IV socio-economic status (SES) (65.9%).
Conclusion: Regular training of Sahiya (the local name for the Accredited Social Health Activist (ASHA) in Jharkhand), empowerment of cancer screening clinics, and upgraded diagnostic facilities for timely referral should be emphasized. abstract_id: PUBMED:26906129 Beliefs in Chemotherapy and Knowledge of Cancer and Treatment Among African American Women With Newly Diagnosed Breast Cancer. Purpose/objectives: To examine beliefs regarding the necessity of chemotherapy and knowledge of breast cancer and its treatment in African American women with newly diagnosed breast cancer, and to explore factors associated with women's beliefs and knowledge. Design: Descriptive, cross-sectional study. Setting: Six urban cancer centers in Western Pennsylvania and Eastern Ohio. Sample: 101 African American women with newly diagnosed breast cancer. Methods: Secondary analysis using baseline data collected from participants in a randomized, controlled trial at their first medical oncology visit before the first cycle of chemotherapy. Main Research Variables: Belief in chemotherapy, knowledge of cancer and recommended treatment, self-efficacy, healthcare system distrust, interpersonal processes of care, symptom distress, and quality of life. Findings: African American women endorsed the necessity of chemotherapy. Most women did not know their tumor size, hormone receptors, specific therapy, or why chemotherapy was recommended to them. Women who perceived better interpersonal communication with physicians, had less self-efficacy, or were less involved in their own treatment decision making held stronger beliefs about the necessity of chemotherapy. Women without financial difficulty or with stronger social functioning had more knowledge of their cancer and the recommended chemotherapy. Conclusions: African American women with newly diagnosed breast cancer generally agreed with the necessity of chemotherapy. Knowledge of breast cancer, treatment, and risk reduction through adjuvant therapy was limited. Implications For Nursing: Oncology nurses could help advocate for tailored educational programs to support informed decision making regarding chemotherapy acceptance for African American women. abstract_id: PUBMED:31337657 18F-FES PET/CT Influences the Staging and Management of Patients with Newly Diagnosed Estrogen Receptor-Positive Breast Cancer: A Retrospective Comparative Study with 18F-FDG PET/CT. Purpose: We compared the clinical value of 16α-18F-fluoro-17β-estradiol (18F-FES) positron emission tomography (PET)/computed tomography (CT) and 18F-fluoro-2-deoxy-D-glucose (18F-FDG) PET/CT and investigated whether and how 18F-FES PET/CT affects the implemented management of newly diagnosed estrogen receptor-positive breast cancer patients. Materials And Methods: We retrospectively analyzed 19 female patients newly diagnosed with immunohistochemistry-confirmed estrogen receptor (ER)-positive breast cancer who underwent 18F-FES and 18F-FDG PET/CT within 1 week in our center. The sensitivities of 18F-FES and 18F-FDG for diagnosed lesions were compared. To investigate the definite clinical impact of 18F-FES on managing patients with newly diagnosed ER-positive breast cancer, we designed two kinds of questionnaires. Referring physicians completed the first questionnaire based on the 18F-FDG report to propose the treatment regimen, and the second was completed immediately after reviewing the imaging report of 18F-FES to indicate intended management changes.
Results: In total, 238 lesions were analyzed in 19 patients with newly diagnosed ER-positive breast cancer. Lesion detection was achieved in 216 sites with 18F-FES PET and in 197 sites with 18F-FDG PET/CT. These results corresponded to sensitivities of 90.8% for 18F-FES versus 82.8% for 18F-FDG PET/CT in diagnosed lesions. Thirty-five physicians were given the questionnaires referring to the treatment strategy, with 27 of them completing both questionnaires. The application of 18F-FES in addition to 18F-FDG PET/CT changed the management in 26.3% of the 19 patients with newly diagnosed ER-positive breast cancer. Conclusion: Performing 18F-FES PET/CT in newly diagnosed ER-positive breast cancer patients adds value in the diagnosis of equivocal lesions and in treatment management compared with 18F-FDG PET/CT. Implications For Practice: This study investigated whether 16α-18F-fluoro-17β-estradiol (18F-FES) positron emission tomography (PET)/computed tomography (CT) affects the clinical management of patients with newly diagnosed estrogen receptor (ER)-positive breast cancer. Physicians who completed two questionnaires comparing the clinical impact of 18F-FES and 18F-FDG on individual management plans confirmed that 18F-FES scans led to a change in management in 26.3% of the 19 patients with newly diagnosed ER-positive breast cancer. This retrospective study indicates the potential impact of 18F-FES PET/CT on the intended management of patients with newly diagnosed estrogen receptor-positive breast cancer in comparison to 18F-fluoro-2-deoxy-D-glucose PET/CT. abstract_id: PUBMED:34662201 Genetic Counseling and Testing in African American Patients With Breast Cancer: A Nationwide Survey of US Breast Oncologists. Purpose: To determine if physicians' self-reported knowledge, attitudes, and practices regarding genetic counseling and testing (GCT) vary by patients' race. Methods: We conducted a nationwide 49-item survey among breast oncology physicians in the United States. We queried respondents about their own demographics, clinical characteristics, knowledge, attitudes, practices, and perceived barriers in providing GCT to patients with breast cancer. Results: Our survey included responses from 277 physicians (females, 58.8%; medical oncologists, 75.1%; academic physicians, 61.7%; and Whites, 67.1%). Only 1.8% indicated that they were more likely to refer a White patient than an African American patient for GCT, and 66.9% believed that African American women with breast cancer have lower rates of GCT than White women. Regarding perceived barriers to GCT, 63.4% of respondents indicated that African American women face more barriers than White women do, and 21% felt that African American women require more information and guidance during the GCT decision-making process than White women. Although 32% of respondents indicated that lack of trust was a barrier to GCT in all patients, 58.1% felt that this was a greater barrier for African American women (P < .0001). Only 13.9% believed that noncompliance with GCT is a barrier for all patients, whereas 30.6% believed that African American women are more likely than White women to be noncompliant (P < .0001). Conclusion: We demonstrated that racial differences exist in oncology physicians' perceived barriers to GCT for patients with breast cancer.
This nationwide survey will serve as a basis for understanding physicians' determinants of GCT for African American women and highlights the necessity of education and interventions to address bias among physicians. Awareness of such physician biases can enable further work to address inequities, ultimately leading to improved GCT equity for African American women with breast cancer. abstract_id: PUBMED:32043896 "I'd Want to Know, Because a Year's Not a Long Time to Prepare for a Death": Role of Prognostic Information in Shared Decision Making among Women with Metastatic Breast Cancer. Context: Increasing emphasis on patient-centered care has led to highlighted importance of shared decision making, which better aligns medical decisions with patient care preferences. Effective shared decision making in metastatic breast cancer (MBC) treatment requires prognostic understanding, without which patients may receive treatment inconsistent with personal preferences. Objectives: To assess MBC patient and provider perspectives on the role of prognostic information in treatment decision making. Methods: We conducted semi-structured interviews with MBC patients and community oncologists and separate focus groups involving lay navigators, nurses, and academic oncologists. Qualitative analysis utilized a content analysis approach that included a constant comparative method to generate themes. Results: Of 20 interviewed patients with MBC, 30% were African American. Academic oncologists were mostly women (60%), community oncologists were all Caucasian, and nurses were all women and 28% African American. Lay navigators were all African American and predominately women (86%). Five emergent themes were identified. (1) Most patients wanted prognostic information but differed in when they wanted to have this conversation, (2) Emotional distress and discomfort was a critical reason for not discussing prognosis, (3) Religious beliefs shaped preferences for prognostic information, (4) Health care professionals differed on prognostic information delivery timing, and (5) Providers acknowledged that an individualized approach taking into account patient values and preferences would be beneficial. Conclusion: Most MBC patients wanted prognostic information, yet varied in when they wanted this information. Understanding why patients want limited or unrestricted prognostic information can inform oncologists' efforts toward shared decision making. Answer: The available abstracts suggest that there is a wide divergence of opinions among American oncologists regarding the use of prognostic variables for patients with newly diagnosed primary breast cancer. A study conducted in 1993 found that when presented with simple case histories and asked to assess prognosis based on tumor size, number of involved axillary nodes, patient age, estrogen receptor level, and progesterone receptor level, oncologists provided a broad range of estimates for the probability of disease-free survival at 10 years. The introduction of additional prognostic data, such as S-phase, tumor histologic and nuclear grading, and cathepsin D status, did not refine the estimates but led to equal or greater dispersion of estimates of prognosis (PUBMED:8021733). This indicates that there may be a lack of consensus or clarity among oncologists on how to integrate complex prognostic information, highlighting the need for tools to help oncologists more accurately and uniformly estimate prognosis for primary breast cancer patients.
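One abstract for this question (PUBMED:23472936) reports the internal consistency of the Newly Diagnosed Breast Cancer Stress Scale as Cronbach's α = 0.84. The sketch below shows how Cronbach's alpha is computed from a respondent-by-item score matrix; the responses are invented solely to demonstrate the formula and do not come from the study.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    x = np.asarray(item_scores, dtype=float)
    n_items = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)        # variance of each item
    total_variance = x.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Invented 5-point responses from six respondents to four items.
responses = np.array([
    [4, 4, 3, 4],
    [2, 3, 2, 2],
    [5, 4, 4, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```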
Instruction: A comprehensive assessment protocol including patient reported outcomes, physical tests, and biological sampling in newly diagnosed patients with head and neck cancer: is it feasible? Abstracts: abstract_id: PUBMED:25110298 A comprehensive assessment protocol including patient reported outcomes, physical tests, and biological sampling in newly diagnosed patients with head and neck cancer: is it feasible? Purpose: Large cohort studies are needed taking into account cancer-related, personal, biological, psychobehavioral, and lifestyle-related factors, to guide future research to improve treatment and supportive care. We aimed to evaluate the feasibility of a comprehensive baseline assessment of a cohort study evaluating the course of quality of life (QoL). Methods: Newly diagnosed head and neck cancer (HNC) patients were asked to participate. Assessments consisted of questionnaires (635 items), a home visit (including a psychiatric interview, physical tests, and blood and saliva collection), and tissue collection. Representativeness of the study sample was evaluated by comparing demographics, clinical factors, depression, anxiety, and QoL between responders and non-responders. Feasibility was evaluated covering the number of questions, time investment, intimacy, and physical burden. Results: During the inclusion period (4 months), 15 out of 26 (60 %) patients agreed to participate. Fewer women participated: 13 % in the responders group versus 63 % in the non-responders group (p = 0.008). No other differences were found between responders and non-responders. Responders completed more than 95 % of the questionnaires' items and rated the number of questions, time investment and intimacy as feasible, and the physical and psychological burden as low. It took on average 3 h to complete the questionnaires and 1.5 h for the home visit. Conclusions: This study reveals that a comprehensive assessment including various questionnaires, physical measurements, and biological assessments is feasible according to patients with newly diagnosed HNC. A large prospective cohort study has started, aiming to include 739 HNC patients and their informal caregivers in the Netherlands. abstract_id: PUBMED:32862317 Health-Related Quality of Life and Patient-Reported Outcomes in Radiation Oncology Clinical Trials. Opinion Statement: The importance of assessing health-related quality of life (HRQoL) and patient-reported outcomes (PROs) is now well recognized as an essential measure when evaluating the effectiveness of new cancer therapies. Quality of life measures provide for a multi-dimensional understanding of the impact of cancer treatment on measures ranging from functional, psychological, and social aspects of a patient's health. Patient-reported outcomes provide for an assessment of physical and functional symptoms that are directly elicited from patients. Collection of PROs and HRQoL data has been shown to not only be feasible but also provide for reliable measures that correlate with established outcomes measures better than clinician-scored toxicities. The importance of HRQoL measures has been emphasized by both patients and clinicians, as well as policy makers and regulatory bodies. Given the benefits associated with measuring HRQoL and PROs in oncology clinical trials, it is increasingly important to establish methods to effectively incorporate PROs and HRQoL measures into routine clinical practice.
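The representativeness check in the feasibility study above (PUBMED:25110298) compares responders and non-responders on demographics; the only significant difference was the proportion of women (13 % vs. 63 %, p = 0.008). As a hedged illustration of how such a comparison can be run, the sketch below applies Fisher's exact test to a 2x2 table. The cell counts are approximations back-calculated from the reported percentages (15 responders, 11 non-responders), the original authors may have used a different test, and the resulting p-value is not expected to match the published one exactly.

from scipy.stats import fisher_exact

# 2x2 table: rows = responders / non-responders, columns = women / men.
# Counts are approximate, back-calculated from PUBMED:25110298 percentages.
table = [[2, 13],   # responders: ~13 % of 15 were women
         [7, 4]]    # non-responders: ~63 % of 11 were women

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")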
abstract_id: PUBMED:30100773 Patient-reported outcomes in head and neck cancer: prospective multi-institutional patient-reported toxicity. Purpose: Head and neck cancer is occurring in an increasingly younger patient population, with treatment toxicity that can cause significant morbidity. Using a patient guided, Internet-based survivorship care plan program, we obtained and looked at patterns of patient-reported outcomes data from survivors seeking information after treatment for head and neck cancer. Methods: The Internet-based OncoLife and LIVESTRONG Care Plan programs were employed, which design unique survivorship care plans based on patient-reported data. Care plans created for survivors of head and neck cancer were used in this evaluation. Demographics, treatment modality, and toxicity were included in this evaluation. Toxicity was further analyzed, grouped into system-based subsets. Results: A total of 602 care plans were created from self-identified head and neck cancer survivors, from which patient-reported outcome data were attained. A majority of patients were Caucasian (96.2%) with median age at diagnosis of 55 years, living in suburban locations (39.9%), with ~50% receiving care within 20 miles of their residence. There was an equal distribution of education levels from high school only to graduate school. The majority of patients received care through cancer centers (96.7%), with a split between academic and non-academic centers. Ninety-three percent of patients had radiation therapy as part of their treatment modality, with 70.3% having chemotherapy and 60.1% having surgery. The most common system toxicities affected the oropharynx, followed by epithelium (skin/hair/nail), and then general global health. Specifically, the most common side effects were difficulty swallowing (61.5%) and changes in skin color/texture (49.7%). One third of patients experienced hearing/tinnitus/vertigo, xerostomia, loss of tissue flexibility, or fatigue. Conclusion: The current work demonstrates the ability to obtain patient-reported outcomes of head and neck cancer survivors through an Internet-based survivorship care plan program. For this group dysphagia and dermatitis were the most commonly reported toxicities, as was expected; however, global effects of therapy, such as fatigue, were also significant and should be addressed in future survivorship planning. abstract_id: PUBMED:33901767 Effectiveness of physical activity interventions in improving objective and patient-reported outcomes in head and neck cancer survivors: A systematic review. Objective: To assess the effectiveness of physical activity interventions in improving objective and patient-reported outcomes in HNC survivors. Introduction: Multiple guidelines recommend that head and neck cancer (HNC) survivors participate in regular physical activity. Physical activity is associated with improved outcomes and mortality in healthy individuals as well as in certain cancer populations. However, the effectiveness of physical activity interventions in HNC survivors is inadequately understood. Methods And Results: Our literature search through December 2018 identified 2,392 articles. After de-duplication, title and abstract review, full-text review and bibliographic search, 20 studies met all inclusion criteria. Inclusion criteria included any full-body physical activity intervention in HNC survivors that did not target discrete organ sites or functions (e.g. swallowing). 
Study cohorts included 749 predominantly male participants with a mean age range of 48-63 years. At their conclusion, physical activity interventions were associated with at least one significant improvement in an objective or patient-reported outcome in 75% of studies. Aerobic capacity and fatigue were the most commonly improved outcomes. None of the included studies evaluated associations with survival or recurrence. Although traditional aerobic and resistance interventions were more common, a greater proportion of alternative physical activity (yoga and Tai Chi) interventions demonstrated improved objective and patient-reported outcomes. Conclusion: Physical activity interventions in HNC survivors often conferred some improvement in objective and patient-reported outcomes. Additional highly-powered, randomized controlled studies are needed to establish the optimal type, intensity, and timing of physical activity interventions as well as their impact on oncologic outcomes. abstract_id: PUBMED:30125799 Patient-reported outcomes with nivolumab in advanced solid cancers. Patients with recurrent or metastatic cancer commonly suffer from debilitating toxicity associated with conventional treatment modalities, as well as disease-related symptoms, often with a concomitant negative impact on health-related quality of life (HRQoL). Patient-reported outcomes (PROs) provide important insights into the patient experience in clinical trials. Nivolumab is a programmed death-1 receptor inhibitor that extends survival in patients with recurrent or metastatic disease in multiple tumor types. In this review, we summarize published PRO analyses from eight phase II-IV clinical trials with nivolumab for the treatment of melanoma, non-small cell lung cancer, renal cell carcinoma (RCC), and squamous cell carcinoma of the head and neck (SCCHN). Symptom burden, physical functioning, and HRQoL were measured using generic, cancer-specific, and tumor type-specific validated PRO instruments. Nivolumab showed sustained stabilization across all tumor types and, in some cases, clinically meaningful improvement in HRQoL, whereas standard of care therapies often led to deteriorations. Exploratory analyses found a positive correlation between baseline HRQoL scores and overall survival in RCC, and between baseline HRQoL scores and healthcare resource utilization in SCCHN, suggesting that patient-reported symptoms at treatment initiation may have clinical value. In the era of value-based oncology care, stakeholders are increasingly interested in PRO findings to guide clinical, regulatory, and reimbursement decisions. However, missing data remain a significant challenge in PRO analyses, including in nivolumab trials. Future clinical trials in immuno-oncology should incorporate PRO data collection, including beyond treatment discontinuation or trial completion to assess the long-term effects of treatment on HRQoL. abstract_id: PUBMED:27178143 A controlled study of use of patient-reported outcomes to improve assessment of late effects after treatment for head-and-neck cancer. Background And Purpose: To test the effect of longitudinal feedback on late effects reported by survivors of head-and-neck cancer (HNC) to clinicians during regular follow-up. Material And Methods: A total of 266 participants were sequentially assigned to either control or intervention group and filled in electronic versions of the EORTC QLQ C-30, H&N35, HADS and a study-specific list of symptoms at up to two consecutive follow-up visits. 
Participants' symptoms displayed according to severity were provided to the clinician for the intervention group but not for the control group. Linear mixed-effects models were used to examine the number of symptoms assessed by clinicians (primary outcome). Multivariate linear regression models examined participants' long-term symptom control and QoL (secondary outcome). Results: More symptoms were assessed by clinicians in the intervention group at all three visits (P<0.001, <0.001, and P=0.04). No effect was observed on most patient outcomes. When prompted by patient-reported outcomes at consultations, clinicians and patients were in better agreement about the occurrence of severe symptoms at all three visits. Conclusion: Timely delivery of patient-reported outcomes to clinicians in routine follow-up of HNC survivors enhanced clinicians' rates of assessment of late symptoms. Giving reports of patient-reported outcomes to clinicians had limited impact on participants' QoL or symptom burden. abstract_id: PUBMED:30276480 Head and Neck Cancer: Improving Patient-Reported Outcome Measures for Clinical Practice. Opinion Statement: Head and neck cancer includes a wide range of tumors that occur in several areas of the upper aerodigestive tract. Most head and neck cancer patients report treatment-related late effects (both physical and psycho-social). High-quality and patient-centered care in head and neck cancer depends on understanding the continuum of the patient's experience, the disease pathway. Healthcare has been improved by involving patients more actively in the disease process, and a few reports support that patient-reported outcomes, built around the patient's experience and given in a timely manner to oncologists, are extremely valuable in oncology clinical care. Implementation and clinical use of patient-reported outcomes requires some procedures involving head and neck cancer patients, clinicians, researchers, and institutional leaders. A unified and integrated vision is still absent, and some current concerns are being discussed to optimize the benefits of patient-reported outcomes use in clinical practice. The inclusion of all first-line caregivers, team formation and training, continuous monitoring and improvement, and analysis are critical success factors to consider. Our team developed a broader and inclusive understanding of patient-reported outcomes. Patient-reported outcome (Health-Related Quality of Life) assessment is implemented as a systematic and routine process in the Head and Neck Unit. Head and neck cancer patients consider the questionnaire administration as part of the clinical approach. We are currently working on a program (PROimp) using mathematical models to identify common head and neck cancer patterns and to build prognostic predictive models, to predict future outcomes, to appraise the risk/benefit of treatments (standard or new), and to estimate a patient's risk of future disease development. It is our aim to better comprehend singular and unexpected perceptions in order to provide directed and personalized cancer care that defines the patient pathway. The future holds promise for PROs, which are ascending as a core outcome in head and neck oncology. abstract_id: PUBMED:24719292 PROMIS evaluation for head and neck cancer patients: a comprehensive quality-of-life outcomes assessment tool.
Objectives/hypothesis: The objective of this study was to evaluate the Patient-Reported Outcomes Measure Information System (PROMIS) in a head and neck cancer patient cohort by assessing the associations of the PROMIS instruments with the responses to the European Organisation for Research and Treatment of Cancer (EORTC) general measures, EORTC head and neck (H&N) measures, and Voice Handicap Index (VHI-10). We hypothesized that PROMIS scores are related to the other measures and may be used as assessment tools to help determine quality-of-life outcomes in head and neck cancer patients. Study Design: Prospective baseline assessment of quality-of-life outcomes. Methods: Thirty-nine head and neck cancer patients were included in the study. PROMIS (domains of fatigue, physical functioning, sleep disturbance, sleep-related impairment, and negative perceived cognitive function), EORTC (general), EORTC H&N, and the VHI-10 were given to all patients at the onset of their cancer diagnosis. Spearman correlation coefficients were computed to assess relationships between the measures. Correlations with corresponding P values <.0083 (Bonferroni adjustment) were considered statistically significant. Descriptive statistics of means, standard deviations, medians, and ranges were computed for all the instruments and measures. Results: Significant correlations between the PROMIS instruments and EORTC functional scales were observed. The PROMIS instruments were also associated with some of the EORTC symptom scales, as well as some of the EORTC H&N symptom measures. The PROMIS fatigue instrument was significantly correlated with the VHI-10 measure. Conclusions: PROMIS instruments are reasonable measures to determine quality-of-life outcomes in head and neck cancer patients. Computerized adaptive testing devices can be effectively utilized in this patient population. Level Of Evidence: 2c. abstract_id: PUBMED:34737963 Patient-Reported Outcomes-Guided Adaptive Radiation Therapy for Head and Neck Cancer. Purpose: To identify which patient-reported outcomes (PROs) may be most improved through adaptive radiation therapy (ART) with the goal of reducing toxicity incidence among head and neck cancer patients. Methods: One hundred fifty-five head and neck cancer patients receiving radical VMAT (chemo)radiotherapy (66-70 Gy in 30-35 fractions) completed the MD Anderson Symptom Inventory, MD Anderson Dysphagia Inventory (MDADI), and Xerostomia Questionnaire while attending routine follow-up clinics between June-October 2019. Hierarchical clustering characterized symptom endorsement. Conventional statistical approaches indicated associations between dose and commonly reported symptoms. These associations, and the potential benefit of interfractional dose corrections, were further explored via logistic regression. Results: Radiotherapy-related symptoms were commonly reported (dry mouth, difficulty swallowing/chewing). Clustering identified three patient subgroups reporting: none/mild symptoms for most items (60.6% of patients); moderate/severe symptoms affecting some aspects of general well-being (32.9%); and moderate/severe symptom reporting for most items (6.5%). Clusters of PRO items broadly consisted of acute toxicities, general well-being, and head and neck-specific symptoms (xerostomia, dysphagia). Dose-PRO relationships were strongest between delivered pharyngeal constrictor Dmean and patient-reported dysphagia, with MDADI composite scores (mean ± SD) of 25.7 ± 18.9 for patients with Dmean <50 Gy vs.
32.4 ± 17.1 with Dmean ≥50 Gy. Based on logistic regression models, during-treatment dose corrections back to planned values may confer ≥5% decrease in the absolute risk of self-reported physical dysphagia symptoms ≥1 year post-treatment in 1.2% of patients, with a ≥5% decrease in relative risk in 23.3% of patients. Conclusions: Patient-reported dysphagia symptoms are strongly associated with delivered dose to the pharyngeal constrictor. Dysphagia-focused ART may provide the greatest toxicity benefit to head and neck cancer patients, and represent a potential new direction for ART, given that the existing ART literature has focused almost exclusively on xerostomia reduction. abstract_id: PUBMED:33107986 Case study of the integration of electronic patient-reported outcomes as standard of care in a head and neck oncology practice: Obstacles and opportunities. Background: Patient-reported outcomes (PROs) allow for the direct measurement of functional and psychosocial effects related to treatment. However, technological barriers, survey fatigue, and clinician adoption have hindered the meaningful integration of PROs into clinical care. The objective of the authors was to develop an electronic PROs (ePROs) program that meets a range of clinical needs across a head and neck multidisciplinary disease management team. Methods: The authors developed the ePROs module using literature review and stakeholder input in collaboration with health informatics. They designed an ePROs platform that was integrated as the standard of care for personalized survey delivery by diagnosis across the disease management team. Tableau software was used to create dashboards for data visualization and monitoring at the clinical enterprise, disease subsite, and patient levels. All patients who were treated for head and neck cancer were eligible for ePROs assessment as part of the standard of care. A descriptive analysis of ePROs program implementation is presented herein. Results: The Head and Neck Service at Memorial Sloan Kettering Cancer Center has integrated ePROs into clinical care. Surveys are delivered via the patient portal at the time of diagnosis and longitudinally through care. From August 1, 2018, to February 1, 2020, a total of 4154 patients completed ePROs surveys. The average patient participation rate was 69%, with a median time for completion of 5 minutes. Conclusions: Integration of the head and neck ePROs program as part of clinical care is feasible and could be used to assess value and counsel patients in the future. Continued qualitative assessments of stakeholders and workflow will refine content and enhance the health informatics platform. Lay Summary: Patients with head and neck cancer experience significant changes in their quality of life after treatment. Measuring and integrating patient-reported outcomes as a part of clinical care have been challenging given the multimodal treatment options, vast subsites, and unique domains affected. The authors present a case study of the successful integration of electronic patient-reported outcomes into a high-volume head and neck cancer practice. Answer: Yes, a comprehensive assessment protocol that includes patient-reported outcomes (PROs), physical tests, and biological sampling in newly diagnosed patients with head and neck cancer is feasible. This is supported by a study that evaluated the feasibility of such a comprehensive baseline assessment as part of a cohort study aimed at evaluating the course of quality of life (QoL) in these patients. 
The study found that during the inclusion period, 60% of patients agreed to participate. The responders completed more than 95% of the questionnaires' items and rated the number of questions, time investment, and intimacy as feasible, with the physical and psychological burden rated as low. It took on average 3 hours to complete the questionnaires and 1.5 hours for the home visit. These findings suggest that patients with newly diagnosed head and neck cancer found the comprehensive assessment protocol to be feasible (PUBMED:25110298).
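Several abstracts in this entry rely on the same basic statistical steps; for example, the PROMIS study (PUBMED:24719292) computed Spearman correlations between instruments and treated P < .0083 as significant, which is a Bonferroni-adjusted threshold (0.05 divided by the number of comparisons, apparently six here). The sketch below illustrates that calculation on made-up score vectors; the data, the number of comparisons, and the variable names are assumptions, and only the procedure is being demonstrated.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_patients = 39                      # cohort size reported in PUBMED:24719292
promis_fatigue = rng.normal(50, 10, n_patients)              # hypothetical T-scores
vhi10 = 0.5 * promis_fatigue + rng.normal(0, 8, n_patients)  # hypothetical VHI-10 scores

n_comparisons = 6                    # assumption; 0.05 / 6 ~= .0083 as in the abstract
alpha_bonferroni = 0.05 / n_comparisons

rho, p = spearmanr(promis_fatigue, vhi10)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}, "
      f"significant at Bonferroni alpha {alpha_bonferroni:.4f}: {p < alpha_bonferroni}")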
Instruction: Does rate matter? Abstracts: abstract_id: PUBMED:29774517 The Acute Effects of Age and Particulate Matter Exposure on Heart Rate and Heart Rate Variability in Mice. Exposure to ambient particulate matter (PM) is associated with increased cardiac morbidity and mortality, with the elderly considered to be the most susceptible. The purpose of this study was to determine if exposure to PM would cause a greater impact on heart regulation in older DBA/2 (D2) male mice as determined by changes in heart rate (HR) and heart rate variability (HRV). D2 mice at the ages of 4, 12, and 19 months were instilled with 100 µg of PM or saline by aspiration. Before and after the aspiration, 3-min electrocardiogram (ECG) samples for HR and HRV were recorded at 15-min intervals for 3 h along with corresponding measurements of homeostasis, such as temperature, metabolism, and ventilation. PM exposure resulted in an increase in HRV, declines in HR, and altered measures of homeostasis for a subset of the 12-mo mice. The PM aspiration did not affect cardiac or homeostasis parameters in the 4- or 19-mo mice. Our results suggest that a select group of middle-aged mice are more susceptible to alterations in their heart rhythm after PM exposure and highlight that there are acute age-related differences in heart rhythm following PM exposure. abstract_id: PUBMED:32678728 Temporal association between particulate matter pollution and case fatality rate of COVID-19 in Wuhan. The coronavirus (COVID-19) epidemic, first reported in Wuhan, China at the end of 2019, had caused 4648 deaths in China as of July 10, 2020. This study explored the temporal correlation between the case fatality rate (CFR) of COVID-19 and particulate matter (PM) in Wuhan. We conducted a time series analysis to examine the temporal day-by-day associations. We observed a higher CFR of COVID-19 with increasing concentrations of inhalable particulate matter (PM) with an aerodynamic diameter of 10 μm or less (PM10) and fine PM with an aerodynamic diameter of 2.5 μm or less (PM2.5) on the temporal scale. This association may affect patients with mild to severe disease progression and affect their prognosis. abstract_id: PUBMED:36708501 The mechanical behavior of bovine spinal cord white matter under various strain rate conditions: tensile testing and visco-hyperelastic constitutive modeling. The mechanical behavior of the white matter is important for estimating the damage of the spinal cord during accidents. In this study, we conducted uniaxial tension testing in vitro of bovine spinal cord white matter under extremely high strain rate conditions (up to 100 s⁻¹). A visco-hyperelastic constitutive law for modeling the strain rate-dependent behavior of the bovine spinal cord white matter was developed. A set of material constants was obtained using a Levenberg-Marquardt fitting algorithm to match the uniaxial tension experimental data with various strain rates. Our experimental data confirmed that the modulus and tensile strength increased when the strain rate was higher. For the extremely high strain rate condition (100 s⁻¹), we found that both the modulus and failure stress significantly increased compared with the low strain rate case. These new data in terms of mechanical response at high strain rate provide insight into the spine injury mechanism caused by high-speed impact.
Moreover, the developed constitutive model will allow researchers to perform more realistic finite element modeling and simulation of spinal cord injury damage under various complicated conditions. abstract_id: PUBMED:31960823 Compression analysis of the gray and white matter of the spinal cord. The spinal cord is composed of gray matter and white matter. It is well known that the properties of these two tissues differ considerably. Spinal diseases often present with symptoms that are caused by spinal cord compression. Understanding the mechanical properties of gray and white matter would allow us to gain a deep understanding of the injuries caused to the spinal cord and provide information on the pathological changes to these distinct tissues in several disorders. Previous studies have reported on the physical properties of gray and white matter; however, these were focused on longitudinal tension tests. Little is known about the differences between gray and white matter in terms of their response to compression. We therefore performed mechanical compression tests of the gray and white matter of spinal cords harvested from cows and analyzed the differences between them in response to compression. We conducted compression testing of gray matter and white matter to detect possible differences in the collapse rate. We found that increased compression (especially more than 50% compression) resulted in more severe injuries to both the gray and white matter. The present results on the mechanical differences between gray and white matter in response to compression will be useful when interpreting findings from medical imaging in patients with spinal conditions. abstract_id: PUBMED:34923129 The direction-dependence of apparent water exchange rate in human white matter. Transmembrane water exchange is a potential biomarker in the diagnosis and understanding of cancers, brain disorders, and other diseases. Filter-exchange imaging (FEXI), a special case of diffusion exchange spectroscopy adapted for clinical applications, has the potential to reveal different physiological water exchange processes. However, it is still controversial whether modulating the diffusion encoding gradient direction can affect the apparent exchange rate (AXR) measurements of FEXI in white matter (WM), where water diffusion shows strong anisotropy. In this study, we explored the diffusion-encoding direction dependence of FEXI in human brain white matter by performing FEXI with 20 diffusion-encoding directions on a clinical 3T scanner in vivo. The results show that the AXR values measured when the gradients are perpendicular to the fiber orientation (0.77 ± 0.13 s⁻¹, mean ± standard deviation of all the subjects) are significantly larger than the AXR estimates when the gradients are parallel to the fiber orientation (0.33 ± 0.14 s⁻¹, p < 0.001) in WM voxels with coherently orientated fibers. In addition, no significant correlation is found between AXRs measured along these two directions, indicating that they are measuring different water exchange processes. Moreover, only the perpendicular AXR, rather than the parallel AXR, shows dependence on axonal diameter, indicating that the perpendicular AXR might reflect transmembrane water exchange between intra-axonal and extra-cellular spaces.
Further finite difference (FD) simulations with three water compartments (intra-axonal, intra-glial, and extra-cellular spaces) to mimic WM micro-environments also suggest that the perpendicular AXR is more sensitive to axonal transmembrane water exchange than the parallel AXR. Taken together, our results show that AXR measured along different directions could be utilized to probe different water exchange processes in WM. abstract_id: PUBMED:20507901 Effects of personal exposure to particulate matter and ozone on arterial stiffness and heart rate variability in healthy adults. The effects on heart rate variability (HRV) and arterial stiffness from exposure to ambient particulate matter and ozone have not been studied simultaneously. The aim of this study was to analyze these effects with refined exposure estimates from personal measurements of ozone and size-resolved particulate matter mass concentrations. The authors recruited 17 mail carriers in a panel study in Taipei County, Taiwan, during February-March, 2007, and each subject was followed for 5-6 days. Personal ozone and size-fractionated particulate matter exposures were monitored during working hours while carriers delivered mail outdoors. Cardiovascular effects were evaluated with heart rate variability (HRV) indices and an arterial stiffness index, the cardio-ankle vascular index (CAVI). The authors used linear mixed models to examine the association between personal exposure data and the HRV index and CAVI. They found that an interquartile range increase in personal exposure to ozone and to particulate matter between 1.0 and 2.5 μm was associated with a 4.8% and 2.5% increase in CAVI, respectively, in the single-pollutant models. In contrast, the personal exposure data showed no significant effects on HRV. In 2-pollutant models, personal ozone exposure remained significantly associated with the CAVI measurements. The study results indicate that vascular function may be more sensitive to air pollutants than the autonomic balance. abstract_id: PUBMED:32446047 Acute particulate matter exposure is associated with disturbances in heart rate complexity in patients with prior myocardial infarction. Background: Ambient air pollutants can increase cardiovascular mortality. One possible mechanism is the effect on the autonomic balance of the cardiovascular system. Studies on acute effects of particulate matter (PM) exposure on heart rate variability (HRV), a surrogate marker for autonomic balance, in patients with prior myocardial infarction (MI) revealed inconsistent results. Method: We prospectively enrolled participants with acute MI. These participants received a 24-hour Holter electrocardiography examination and echocardiography six months after the index MI. Linear parameters [the standard deviation of all normal-to-normal (NN) intervals (SDNN) and the low-frequency to high-frequency ratio (LF/HF)] and non-linear parameters of heart rate variability [multiscale entropy (MSE)] were calculated to characterize autonomic balance. Data for PM2.5, PM2.5-10, and PM10 were obtained from a fixed-site station in Taiwan. Linear mixed effect models were used to estimate acute effects (within 0-3 days) of PM exposure (per 10 μg/m3) on heart rate variability. Results: A total of 90 participants were enrolled in this study, with a mean (SD) age of 58.7 (13.3) years; 83 (92.2%) were male.
Traditional HRV parameters, SDNN and LF/HF, were positively correlated with two-day lagged PM2.5-10 and PM10 [adjusted beta coefficient: SDNN: 130.3 and 58.5; LF/HF: 0.32 and 0.21 (all p ≤ 0.01)]. MSE slopes 1-5 were negatively correlated with same-day PM2.5-10 and PM10 (adjusted beta coefficient -0.011 (p = 0.01) and -0.005 (p = 0.02), respectively). The left ventricular ejection fraction was negatively correlated with one-day lagged PM2.5-10 and PM10 (adjusted beta coefficient -0.49 and -0.4, respectively; both p < 0.05), after adjusting for MI size. Conclusion: Our results suggest that coarse PM may acutely affect cardiac autonomic balance. MSE is a sensitive marker for detecting changes in autonomic imbalance in patients with prior MI following acute PM exposure. abstract_id: PUBMED:35636602 Low-energy high-rate flotation technology for reduction of organic matter and disinfection by-products formation potential: A pilot-scale study. Despite the operating complexity and high energy costs associated with its operation and maintenance, dissolved air flotation (DAF) is widely used in drinking water treatment processes. Recently, the focus has shifted to designing and developing DAF with high surface loading rates. This research compares the performance of pilot-scale high-rate DAF and low-energy high-rate flash-pressurized flotation (FPF) based on the removal behavior of natural organic matter, different molecular weight size fractions, and the formation potential of disinfection by-products. For a surface-loading rate of 30 m/h, the residual dissolved organic carbon (DOC) concentrations in treated samples from high-rate DAF and FPF were 1.35 ± 0.02 (30.25 ± 0.15% removal) mg/L and 1.37 ± 0.03 (29.12 ± 1.72% removal) mg/L, respectively. In contrast, the removal of high-molecular-weight fractions, i.e., biopolymers and humic substances, showed similar removal performance for both treatment processes, but not for building blocks. The removal rates were 27.10% and 6.64% for high-rate DAF and FPF, respectively. The formation potential of trihalomethanes/DOC for high-rate DAF with reaction times of 1, 3, 6, and 9 days was 14.12 ± 0.18, 17.84 ± 0.22, 23.04 ± 0.29, and 29.73 ± 0.37 μg/mg C, respectively, versus 16.83 ± 0.34, 22.69 ± 0.46, 27.08 ± 0.55, and 28.54 ± 0.58 μg/mg C for high-rate FPF. In the case of haloacetonitriles/dissolved organic nitrogen-humic substances and chloral hydrate/DOC, there were no significant differences. Thus, low-energy high-rate FPF, with a 55% reduction in energy use, provides an alternative to high-rate DAF. abstract_id: PUBMED:21485786 Individual exposure to particulate matter and heart rate variability (HRV) in patients with previous myocardial infarction. Exposure to particulate matter (PM) from urban air correlates with a variety of cardiovascular diseases, including myocardial infarction. Protection from the effects of particles on HRV by beta-blockers was observed. abstract_id: PUBMED:34039024 Regional White Matter Diffusion Changes Associated with the Cumulative Tensile Strain and Strain Rate in Nonconcussed Youth Football Players. The purpose of this study is to assess the relationship between regional white matter diffusion imaging changes and finite element strain measures in nonconcussed youth football players. Pre- and post-season diffusion-weighted imaging was performed in 102 youth football subject-seasons, in which no concussions were diagnosed. The diffusion data were normalized to the IXI template.
Percent change in fractional anisotropy (%ΔFA) images were generated. Using data from the head impact telemetry system, the cumulative maximum principal strain one times strain rate (CMPS1 × SR), a measure of the cumulative tensile brain strain and strain rate for one season, was calculated for each subject. Two linear regression analyses were performed to identify significant positive or inverse relationships between CMPS1 × SR and %ΔFA within the International Consortium for Brain Mapping white matter mask. Age, body mass index, days between pre- and post-season imaging, previous brain injury, attention disorder diagnosis, and imaging protocol were included as covariates. False discovery rate correction was used with corrected alphas of 0.025 and voxel thresholds of zero. Controlling for all covariates, a significant, positive linear relationship between %ΔFA and CMPS1 × SR was identified in the bilateral cingulum, fornix, internal capsule, external capsule, corpus callosum, corona radiata, corticospinal tract, cerebral and middle cerebellar peduncle, superior longitudinal fasciculus, and right superior fronto-occipital fasciculus. Post hoc analyses further demonstrated significant %ΔFA differences between high-strain football subjects and noncollision control athletes, no significant %ΔFA differences between low-strain subjects and noncollision control athletes, and that CMPS1 × SR explained significantly more %ΔFA variance than the number of head impacts alone. Answer: Yes, rate matters in various contexts related to the studies provided. In the context of particulate matter (PM) exposure, acute exposure can influence heart rate (HR) and heart rate variability (HRV). For instance, a study found that exposure to PM resulted in an increase in HRV and declines in HR only in a subset of middle-aged mice, pointing to acute age-related differences in the heart-rhythm response to PM exposure (PUBMED:29774517). Additionally, a temporal association was observed between PM pollution and the case fatality rate (CFR) of COVID-19, indicating that PM exposure may affect disease progression and prognosis (PUBMED:32678728). In the mechanical behavior of biological tissues, the rate of strain (strain rate) is crucial. For bovine spinal cord white matter, the modulus and tensile strength increased with higher strain rates, which is important for understanding spinal cord injury mechanisms (PUBMED:36708501). Similarly, compression testing of spinal cord gray and white matter showed that greater degrees of compression (especially beyond 50%) resulted in more severe injuries (PUBMED:31960823). In the study of water exchange in human white matter, the direction of the diffusion-encoding gradient affected the apparent water exchange rate (AXR), with significant differences observed between gradients perpendicular and parallel to the fiber orientation, indicating that measurements along different directions probe different water exchange processes (PUBMED:34923129). Furthermore, personal exposure to ozone and particulate matter was associated with an increase in arterial stiffness, as measured by the cardio-ankle vascular index (CAVI), suggesting that vascular function may be sensitive to air pollutant exposure (PUBMED:20507901). Acute PM exposure was also associated with disturbances in heart rate complexity in patients with prior myocardial infarction, with coarse PM acutely affecting cardiac autonomic balance (PUBMED:32446047).
Lastly, in the context of water treatment, the surface loading rate in high-rate dissolved air flotation (DAF) and flash-pressurized flotation (FPF) influenced the removal behavior of natural organic matter and the formation potential of disinfection by-products (PUBMED:35636602). In sports-related research, the cumulative tensile strain and strain rate were associated with changes in white matter diffusion in nonconcussed youth football players, indicating that cumulative brain strain and strain rate over a season can affect brain structure (PUBMED:34039024). Overall, these studies demonstrate that the rate of exposure, strain, compression, diffusion, and other processes can significantly impact biological and environmental outcomes.
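The tensile-testing abstract cited in this entry (PUBMED:36708501) obtains material constants for a visco-hyperelastic law by Levenberg-Marquardt fitting against stress-strain data at several strain rates. The exact constitutive law is not given in the abstract, so the sketch below fits a deliberately simple, hypothetical rate-dependent model with scipy's Levenberg-Marquardt backend purely to illustrate the fitting step; the functional form, parameter names, reference rate, and synthetic data are all assumptions, not the authors' model.

import numpy as np
from scipy.optimize import curve_fit

def stress_model(X, E0, n, k):
    # Hypothetical rate-dependent stress: sigma = E0 * (rate / r0)**n * strain**k
    strain, rate = X
    r0 = 0.5  # reference strain rate in 1/s (assumed)
    return E0 * (rate / r0) ** n * strain ** k

# Synthetic "experimental" curves at three strain rates (illustrative only).
strain = np.tile(np.linspace(0.01, 0.3, 30), 3)
rate = np.repeat([0.5, 10.0, 100.0], 30)
true = stress_model((strain, rate), 40.0, 0.15, 1.2)
stress = true + np.random.default_rng(1).normal(0, 0.5, true.size)

# method="lm" selects the Levenberg-Marquardt algorithm (the default when no bounds are given).
popt, _ = curve_fit(stress_model, (strain, rate), stress, p0=[10.0, 0.1, 1.0], method="lm")
print("fitted E0, n, k:", np.round(popt, 3))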
Instruction: Single HCC in cirrhotic patients: liver resection or liver transplantation? Abstracts: abstract_id: PUBMED:26605279 Laparoscopic liver resection for hepatocellular carcinoma in cirrhotic patients: single center experience of 90 cases. Background: Hepatocellular carcinoma (HCC) with or without underlying liver disease can be treated by surgical resection. The aim of this study was to evaluate the feasibility, morbidity and mortality of a laparoscopic approach in cirrhotic patients with HCC. Methods: From 2004 to September 2014, 90 patients underwent a laparoscopic liver resection (LLR) for HCC. Data were collected in a prospectively maintained database since 2001. Preoperative patient evaluation was based on a multidisciplinary team meeting assessment. Results: Median age was 63 years; 67 (74.4%) patients were male. Median body mass index (BMI) was 26.7. Underlying liver disease was known in 68 patients: hepatitis C virus (HCV)-related in 46 patients, hepatitis B virus (HBV)-related in 15 patients, and alcohol-related in 5 patients. Child-Pugh score was grade A in 85 patients and grade B in 5 patients; 63 patients had a Model for End-stage Liver Disease (MELD) <10 and 27 patients MELD >10. A total of 18 left lateral sectionectomies, 1 left hepatectomy and 71 wedge resections or segmentectomies were performed. Conversion to laparotomy was necessary in 7 (7.7%) patients (five cases for bleeding and two cases for oncological reasons). In 90 patients, 98 HCC nodules were resected: 79 patients had one nodule, 8 patients had two nodules and 1 patient had three nodules. The mean HCC nodule diameter was 29 mm (range, 4-100 mm), with a median of 25 mm. The tumor margin distance was 16 mm (range, 0-35 mm), with a median of 5 mm. Seventy nodules were located within the anterior sectors and 28 nodules within the posterior sectors. Conclusions: LLR for HCC can be performed with acceptable morbidity in patients with underlying liver disease. The use of laparoscopic surgery in cirrhotic patients may be proposed as the first-line treatment for HCC or as bridge treatment before liver transplantation. abstract_id: PUBMED:37124677 Prognostication algorithm for non-cirrhotic non-B non-C hepatocellular carcinoma-a multicenter study under the aegis of the French Association of Hepato-Biliary Surgery and liver Transplantation. Background: Liver resection and local ablation are the only curative treatments for non-cirrhotic hepatocellular carcinoma (HCC). Few data exist concerning the prognosis of patients resected for non-cirrhotic HCC. The objectives of this study were to determine the prognostic factors of recurrence-free survival (RFS) and overall survival (OS) and to develop a prognostication algorithm for non-cirrhotic HCC. Methods: French multicenter retrospective study including HCC patients with non-cirrhotic liver without underlying viral hepatitis: F0, F1 or F2 fibrosis. Results: A total of 467 patients were included in 11 centers from 2010 to 2018. Non-cirrhotic liver had a fibrosis score of F0 (n=237, 50.7%), F1 (n=127, 27.2%) or F2 (n=103, 22.1%). OS and RFS at 5 years were 59.2% and 34.5%, respectively. In multivariate analysis, microvascular invasion and HCC differentiation were prognostic factors of OS and RFS and the number and size were prognostic factors of RFS (P<0.005).
Stratification based on RFS provided an algorithm based on size (P=0.013) and number (P<0.001): ≤2 HCC with the largest nodule ≤10 cm (n=271, Group 1); ≤2 HCC with a nodule >10 cm (n=176, Group 2); >2 HCC regardless of size (n=20, Group 3). The 5-year RFS rates were 52.7% (Group 1), 30.1% (Group 2) and 5% (Group 3). Conclusions: We developed a prognostication algorithm based on the number (≤ or >2) and size (≤ or >10 cm), which could be used as treatment decision support concerning the need for perioperative therapy. In the case of bifocal HCC, surgery should not be regarded as contraindicated. abstract_id: PUBMED:30363804 The current role of laparoscopic resection for HCC: a systematic review of past ten years. The use of laparoscopic liver resection (LLR) has progressively spread in the last 10 years. Several studies have shown the superiority of LLR to open liver resection (OLR) in terms of perioperative outcomes. With this review, we aim to systematically assess short-term and long-term major outcomes in patients who underwent LLR for hepatocellular carcinoma (HCC) in order to illustrate the advantages of minimally invasive liver surgery. Through an advanced PubMed search, we selected all retrospective, prospective, and comparative clinical trials reporting short-term and long-term outcomes of any series of patients with diagnosis of HCC who underwent laparoscopic or robotic resection. Reviews, meta-analyses, or case reports were excluded. None of the patients included in this review had received previous locoregional treatment for the same tumor or had undergone a laparoscopic-assisted procedure. We considered morbidity and mortality for evaluation of major short-term outcomes, and overall survival (OS) and disease-free survival (DFS) for evaluation of long-term outcomes. A total of 1,501 patients from 17 retrospective studies were included; 15 studies compared LLR with OLR. Propensity-score matching (PSM) analysis was used in 11 studies (975 patients). The majority of the studies included patients with good liver function and a single HCC. Cirrhosis at pathology ranged from 33% to 100%. Overall mortality and morbidity ranges were 0-2.4% and 4.9-44% respectively, with most of the complications being Clavien-Dindo grade I or II (range: 3.9-23.3% vs. 0-9.52% for Clavien I-II and ≥ III respectively). The median blood loss ranged from 150 to 389 mL; the range of the median duration of surgery was 134-343 minutes. The maximum rate of conversion was 18.2%. The median duration of hospitalization ranged from 4 to 13 days. The ranges of overall survival rates at 1, 3, and 5 years were 72.8-100%, 60.7-93.5% and 38-89.7% respectively. The ranges of disease-free survival rates at 1, 3, and 5 years were 45.5-91.5%, 20-72.2% and 19-67.8% respectively. The benefits of LLR in terms of complication rate, blood loss, and duration of hospital stay make this procedure an advantageous alternative to OLR, especially for cirrhotic patients in whom the use of LLR reduces the risk of post-hepatectomy liver failure. The limits of LLR can be overcome by robotic surgery, which could therefore be preferred. Further benefits of minimally invasive surgery derive from its ability to reduce the formation of adhesions in view of a salvage liver transplant. In conclusion, the results of this review seem to confirm the safety and feasibility of LLR for HCC as well as its superiority to OLR according to perioperative outcomes.
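The prognostication algorithm for non-cirrhotic non-B non-C HCC described above (PUBMED:37124677) can be written as a small lookup over nodule number and largest-nodule size. The sketch below is a didactic illustration, not a clinical decision tool: the thresholds and the 5-year recurrence-free survival figures are copied from the abstract, and the reading of the first two groups as "up to two nodules" follows the conclusion's "≤ or >2" wording.

def rfs_group(n_nodules: int, largest_cm: float):
    """Return (group label, reported 5-year RFS) per PUBMED:37124677.

    Figures are the published group-level rates, not individual predictions.
    """
    if n_nodules > 2:
        return "Group 3 (>2 HCC, any size)", 0.05
    if largest_cm <= 10.0:
        return "Group 1 (<=2 HCC, largest <=10 cm)", 0.527
    return "Group 2 (<=2 HCC, a nodule >10 cm)", 0.301

# Example: a solitary 6 cm tumour falls into Group 1.
print(rfs_group(1, 6.0))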
abstract_id: PUBMED:26734630 Laparoscopic right hepatectomy for hepatocellular carcinoma in cirrhotic patient. Hepatocellular carcinoma (HCC) is the sixth most common malignant tumor worldwide and the most common primary liver cancer. Liver resection and liver transplantation are the therapeutic gold standards in patients with HCC, with or without underlying liver disease. We present a video case of a 68-year-old woman admitted to our surgical and liver transplantation unit for HCC in liver segment VII. The patient had HCV cirrhosis and had previously undergone right portal vein embolization. The Model for End-Stage Liver Disease (MELD) score was 7. Body mass index (BMI) was 26.3 and ASA score was 2. Alpha-fetoprotein was 768. In accordance with our multidisciplinary group, we proposed a laparoscopic right hepatectomy for the patient. Operation time was 343 min and estimated blood loss was 200 mL. No transfusion was required. The post-operative course was uneventful (grade 0 of the Clavien-Dindo classification). The patient was discharged on day 7. The pathology report described a 17 mm × 15 mm grade 4 HCC, pT2N0. Laparoscopic liver resection (LLR) for HCC should be performed by surgical teams dedicated to hepatobiliary and laparoscopic surgery. In many centers, the use of LLR in cirrhotic patients is proposed as the first-line treatment for HCC or as bridge treatment before liver transplantation. abstract_id: PUBMED:22965574 Single HCC in cirrhotic patients: liver resection or liver transplantation? Long-term outcome according to an intention-to-treat basis. Background: Compensated cirrhotic patients with single hepatocellular carcinoma (HCC) ≤5 cm may benefit from both liver resection (LR) and liver transplantation (LT); however, which of the two treatments offers better 10-year actuarial survival remains unclear. We aimed to assess the long-term outcome of cirrhotic patients with single HCC ≤5 cm treated either with LR or LT on an intention-to-treat basis. Methods: A total of 217 cirrhotic patients with single HCC ≤5 cm were evaluated at our department: 95 were treated with LR (LR group), and 122 were included on the waiting list for LT (LT group). Patients in the LR group were divided into very early HCC (tumor size ≤2 cm) and early HCC (tumor size >2 cm). Median follow-up was 5.3 (range 0.1-18) years. Results: Tumor recurrence was 72 % in the LR group versus 16 % in the LT group (p < 0.001). The 1-, 5-, and 10-year cumulative risk of recurrence was 18, 69, and 83 % in the LR group versus 4, 18, and 20 % in the LT group (p < 0.001). Ten-year actuarial survival was 33 % in the LR group versus 49 % in the LT group (p = 0.002). At HCC recurrence, 27.3 % were included on the waiting list for salvage transplantation (very early HCC group) versus 15.1 % (early HCC group) (p = 0.2). After salvage transplantation, HCC recurrence was 0 % (very early HCC group) versus 40 % (early HCC group) (p = 0.2). No significant differences were observed in 1-, 5-, and 10-year actuarial survival between the very early HCC group and the LT group (95, 55, and 50 % vs. 82, 62, and 50 %). Conclusions: LR should be the treatment of choice for cirrhotic patients with very early HCC. abstract_id: PUBMED:31231700 Transplantation versus liver resection in patients with hepatocellular carcinoma. Hepatocellular carcinoma (HCC) is one of the most common solid cancers in the world. Its treatment strategies have evolved significantly over the past few decades but the best treatment outcomes remain in the surgical arena. Especially for early HCCs, the options are abundant.
However, surgical resection and liver transplantation provide the best long-term survival. In addition, there is evidence that ablative therapy, such as radiofrequency ablation, could provide outcomes equivalent to resection. However, HCC is a unique malignancy, as the majority of patients develop this cancer in the background of cirrhotic livers. As such, treatment considerations should take into account not only the oncological perspective but also the functional status of the liver parenchyma, i.e., the state of cirrhosis and the presence of portal hypertension. Even the most widely adopted staging systems for HCC, such as the Barcelona Clinic Liver Cancer (BCLC) staging system, and many other staging systems do not ideally capture the various considerations relevant to patients with HCC. In this article, the key issues in choosing between surgical resection and liver transplantation are discussed. A comprehensive review of the current surgical options is outlined in order to explore the pros and cons of each option. abstract_id: PUBMED:36831521 Outcomes and Patient Selection in Laparoscopic vs. Open Liver Resection for HCC and Colorectal Cancer Liver Metastasis. Hepatocellular carcinoma (HCC) and colorectal liver metastasis (CRLM) are the two most common malignant tumors that require liver resection. While liver transplantation is the best treatment for HCC, organ shortages and high costs limit the availability of this option for many patients and make resection the mainstay of treatment. For patients with CRLM, surgical resection with negative margins is the only potentially curative option. Over the last two decades, laparoscopic liver resection (LLR) has been increasingly adopted for the resection of a variety of tumors and was found to have similar long-term outcomes compared to open liver resection (OLR) while offering the benefits of improved short-term outcomes. In this review, we discuss the current literature on the outcomes of LLR vs. OLR for patients with HCC and CRLM. Although the use of LLR for HCC and CRLM is increasing, it is not appropriate for all patients. We describe an approach to selecting patients best-suited for LLR. The four common difficulty-scoring systems for LLR are summarized. Additionally, we review the current evidence behind the emerging robotically assisted liver resection technology. abstract_id: PUBMED:25561775 Laparoscopic left liver lobectomy for hepatocellular carcinoma in a cirrhotic patient: a video report. We present a video case of a 51-year-old man admitted to our surgical and liver transplantation unit for hepatocellular cancer (HCC). The patient had HCV cirrhosis with portal hypertension and grade F1 esophageal varices. The Child-Pugh score was B7 and the Model for End-Stage Liver Disease (MELD) score was 11. Body mass index (BMI) was 26.7 and ASA score was 2. There was no previous abdominal surgery. In accordance with our multidisciplinary group, we proposed a laparoscopic left lobectomy for the patient. The Pringle manoeuvre was not performed. Operation time was 193 min and estimated blood loss was 100 mL. No transfusion was required. The post-operative course was uneventful (grade I of the Clavien-Dindo classification). The patient was discharged on day 8. In our experience, laparoscopic resection in the cirrhotic liver should be performed in selected patients and by an experienced team. abstract_id: PUBMED:25610292 Predictors of recurrence in hepatitis C virus related hepatocellular carcinoma after hepatic resection: a retrospective cohort study.
Objective: Egypt is one of the hot spots on the international map of hepatocellular carcinoma (HCC), where hepatitis C virus (HCV) infection is the major risk factor for HCC development (80%). Due to low organ donation rates and the lack of deceased-donor liver transplantation, hepatic resection is the main line of treatment for HCC patients with sufficient liver reserve. We present our experience with patients with HCV-related HCC who underwent hepatic resection, in order to determine predictors of tumour recurrence in this group. This is the first study to come from a country where chronic HCV hepatitis is endemic. Materials And Methods: This is a retrospective cohort study of 208 cases of HCC in hepatitis C virus-positive patients with cirrhotic livers who underwent first-time liver resection at the Gastroenterology Surgical Centre, Mansoura University, Egypt, during the period from January 2002 to December 2011. The Shapiro-Wilk test was used to assess the normality of the data. Predictors of HCC recurrence were assessed by bivariate correlation tests, univariate analysis using the chi-square test and t-test, and binary logistic regression analysis. A P value <0.05 was considered statistically significant. Results: Tumour recurrence occurred in 88 patients (42.3%). Most of the recurrences occurred within the first year (55 patients, 62.5%). The most common site for recurrence was the liver (n=68, 77.3%). Based on the univariate analysis, significant variables predicting tumour recurrence were alpha-fetoprotein (AFP), blood transfusion, multifocality, cut margin, microvascular invasion, lack of capsule, and tumour grade and stage. Based on multivariate analysis, the main variables predicting tumour recurrence were blood transfusion, cut margin, tumour capsule and microvascular invasion. Conclusion: Although the predictors of recurrence are the same for both HBV- and HCV-related HCC, the rate and aggressiveness of recurrence are higher in HCV-related HCC. abstract_id: PUBMED:30109374 Treatment of hepatocellular carcinoma in the cirrhotic and non-cirrhotic liver. Background And Objectives: The treatment of hepatocellular carcinoma (HCC) includes surgical, interventional and systemic approaches. This review highlights the importance of surgical treatment approaches in the multimodal treatment of HCC, based on the currently available literature, corresponding German guidelines as well as current developments in organ donation in Germany. Results: Curative treatment options for HCC include liver transplantation, partial liver resection, and local ablative procedures depending on the lesion size. In the case of an early HCC in a cirrhotic liver fulfilling the Milan criteria, liver transplantation is the treatment of choice. In view of the organ shortage in Germany and improved results after partial liver resection, surgery can alternatively be performed in patients with a sufficient liver function. In selected patients with multinodular HCC, regional lymph node metastases or macrovascular invasion, the possibility of liver resection must be decided on an individual basis; however, the latter two criteria are contraindications for transplantation. Local ablative procedures can be considered as an alternative to resection in selected patients with early solitary HCC. Surgery of HCC in the non-cirrhotic liver with a curative intent is guided by the general principles of oncological liver surgery.
Discussion: Curative treatment options for HCC include liver transplantation, partial liver resection and local ablative procedures. Current developments in the area of organ donation and technical advances in minimally invasive liver surgery should be included in decision-making in tumor boards. Answer: The decision between liver resection (LR) and liver transplantation (LT) for single hepatocellular carcinoma (HCC) in cirrhotic patients is complex and depends on various factors, including the size of the tumor, the patient's liver function, and the availability of organs for transplantation. For cirrhotic patients with single HCC ≤5 cm, both LR and LT are potential treatment options. A study assessing the long-term outcome of such patients treated either with LR or LT on an intention-to-treat basis found that tumor recurrence was significantly higher in the LR group (72%) compared to the LT group (16%). The 10-year actuarial survival was also better in the LT group (49%) versus the LR group (33%). However, for very early HCC (tumor size ≤2 cm), LR should be the treatment of choice, as the long-term survival rates were comparable to those of the LT group (PUBMED:22965574). In cases where liver transplantation is the best treatment for HCC, organ shortages and high costs can limit its availability, making resection the mainstay of treatment. For patients with colorectal liver metastasis (CRLM), surgical resection with negative margins is the only potentially curative option (PUBMED:36831521). Laparoscopic liver resection (LLR) has been increasingly adopted and offers similar long-term outcomes to open liver resection (OLR) while providing improved short-term outcomes. LLR can be particularly advantageous for cirrhotic patients, as it may reduce the risk of post-hepatectomy liver failure (PUBMED:30363804). Ultimately, the choice between LR and LT should be made on an individual basis, considering the patient's specific circumstances and the expertise of the surgical team. The decision should be guided by multidisciplinary tumor boards that take into account the current developments in organ donation and advances in minimally invasive liver surgery (PUBMED:30109374).
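Several abstracts and the answer above quote actuarial (Kaplan-Meier) survival at fixed horizons, for example 10-year survival of 33% after resection versus 49% after transplantation in PUBMED:22965574. As a hedged illustration of how such figures are typically derived from time-to-event data, the sketch below uses a hand-rolled Kaplan-Meier estimator on entirely hypothetical follow-up times and event flags; it only demonstrates the computation and does not reproduce the published cohorts.

import numpy as np

def km_survival(time_months, event, horizons):
    # Kaplan-Meier survival probability at the requested horizons.
    # time_months: follow-up time per patient; event: 1 = death observed, 0 = censored.
    time_months = np.asarray(time_months, dtype=float)
    event = np.asarray(event, dtype=int)
    order = np.argsort(time_months)
    t, e = time_months[order], event[order]
    n_at_risk = len(t)
    surv, curve = 1.0, []
    for ti, ei in zip(t, e):
        if ei == 1:
            surv *= 1.0 - 1.0 / n_at_risk   # step down at each observed death
        curve.append((ti, surv))
        n_at_risk -= 1                       # subject leaves the risk set
    out = []
    for h in horizons:
        s = 1.0
        for ti, si in curve:
            if ti <= h:
                s = si
            else:
                break
        out.append(s)
    return out

# Hypothetical cohort: follow-up in months and death indicator.
rng = np.random.default_rng(42)
times = rng.exponential(90, 60).round(1)
deaths = rng.integers(0, 2, 60)
print([round(s, 2) for s in km_survival(times, deaths, horizons=[12, 60, 120])])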
Instruction: Does diabetes mellitus alter the onset and clinical course of vascular dementia? Abstracts: abstract_id: PUBMED:21098968 Does diabetes mellitus alter the onset and clinical course of vascular dementia? Background: Vascular dementia (VaD) is the second most common dementing illness. Multiple risk factors are associated with VaD, but the individual contribution of each to disease onset and progression is unclear. We examined the relationship between diabetes mellitus type 2 (DM) and the clinical variables of VaD. Methods: Data from 593 patients evaluated between June, 2003 and June, 2008 for cognitive impairment were prospectively entered into a database. We retrospectively reviewed the charts of 63 patients who fit the NINDS-AIREN criteria for VaD. The patients were divided into those with DM (VaD-DM, n=29) and those without DM (VaD, n=34). The groups were compared with regard to multiple variables. Results: Patients with DM had a significantly earlier onset of VaD (71.9 ± 6.54 vs. 77.2 ± 6.03, p< 0.001), a faster rate of decline per year on the mini mental state examination (MMSE; 3.60 ± 1.82 vs. 2.54 ± 1.60 points, p= 0.02), and a greater prevalence of neuropsychiatric symptoms at the time of diagnosis (62% vs. 21%, p=0.02). Conclusions: A history of pre-morbid DM was associated with an earlier onset and faster cognitive deterioration in VaD. Moreover, DM was associated with neuropsychiatric symptoms in patients with VaD. A larger study is needed to verify these associations. It will be important to investigate whether better glycemic control will mitigate the potential effects of DM on VaD. abstract_id: PUBMED:36868384 Onset age of diabetes and incident dementia: A prospective cohort study. Background: Relationship between age at diagnosis of diabetes and dementia is lacking. The aim of the study was to investigate whether diabetes onset at a younger age was associated with a higher incidence of dementia. Methods: 466,207 participants free of dementia in the UK biobank (UKB) were included in the analysis. Propensity score matching (PSM) was adopted to match diabetic and non-diabetic participants in different onset age of diabetes groups to evaluate onset age of diabetes and incident dementia. Results: Compared with non-diabetic participants, diabetes participants had an adjusted hazard ratio (HR) of 1.87 (95 % confidence interval [CI]: 1.73-2.03) for all-cause dementia, 1.85 (95 % CI: 1.60-2.04) for Alzheimer's disease (AD), and 2.86 (95 % CI: 2.47-3.32) for vascular dementia (VD). Among diabetic participants who reported onset age, the adjusted HRs for incident all-cause dementia, AD, and VD were 1.20 (95 % CI: 1.14-1.25), 1.19 (95 % CI: 1.10-1.29), and 1.19 (95 % CI: 1.10-1.28), respectively, per 10 years decrease in age at diabetes onset. After PSM, strength of association between diabetes and all-cause dementia increased with decreasing onset age of diabetes (≥60 years: HR = 1.47, 95 % CI: 1.25-1.74; 45-59 years: HR = 1.66, 95 % CI: 1.40-1.96; <45 years: HR = 2.92, 95 % CI: 2.13-4.01) after multivariable adjustment. Similarly, diabetic participants with onset age <45 years had greatest HRs for incident AD and VD, compared with their matched controls. Limitations: Our results only reflect the characteristics of UKB participants. Conclusions: Younger age at diabetes onset was significantly associated with a higher risk of dementia in this longitudinal cohort study. 
abstract_id: PUBMED:18977819 Alzheimer disease with cerebrovascular disease and vascular dementia: clinical features and course compared with Alzheimer disease. Objective: Vascular dementia (VaD) and Alzheimer disease with cerebrovascular disease (AD+CVD) are the leading causes of dementia after Alzheimer disease alone (AD). Little is known about the progression of either VaD or AD+CVD. The aim of this study was to compare demographic features, cognitive decline and survival of patients with VaD, AD+CVD and AD alone attending a memory clinic. Methods: This study included 970 patients who were followed at the Lille-Bailleul memory clinic, France. Cognitive functions were measured with the Mini Mental State Examination (MMSE) and the Dementia Rating Scale (DRS). Survival rate was analysed with a left-truncated Cox model. Analyses were adjusted for age, sex, education, hypertension, diabetes and baseline MMSE and DRS. Results: Of 970 patients, 141 had VaD, 663 AD alone and 166 AD+CVD. The latter were significantly older than AD or VaD patients at onset (71 (SD 7) vs 69 (9) and 68 (9) years, p = 0.01) and at first visit (75 (6) vs 73 (8) and 72 (8) years, p = 0.0002). Baseline MMSE and DRS evaluations were highest for VaD compared with AD alone or AD+CVD patients (p<0.006). Cognitive decline during follow-up was slowest for VaD, intermediate for AD+CVD and fastest for AD alone (p = 0.03). After adjustment, compared with AD patients, mortality risk was similar for those with VaD (relative mortality risk (RR) = 0.7 (0.5 to 1.1)) and tended to be lower for AD+CVD (RR = 0.7 (0.5 to 1.0)). The shorter the delay between first symptoms and first visit, the longer patients survived. Conclusion: This clinical cohort study shows that patients with VaD, AD+CVD and AD present different characteristics at baseline and during follow-up, and underlines the need to distinguish between them. abstract_id: PUBMED:36804018 Sex-specific associations between diabetes and dementia: the role of age at onset of disease, insulin use and complications. Background: Whether the association of type 2 diabetes (T2DM) with dementia differs by sex remains unclear, and the roles of age at onset of disease, insulin use and diabetes' complications in their association are unknown. Methods: This study analyzed data from 447 931 participants from the UK Biobank. We used Cox proportional hazards models to estimate sex-specific hazard ratios (HRs) and 95% confidence intervals (CI), and women-to-men ratio of HRs (RHR) for the association between T2DM and incident dementia [all-cause dementia, Alzheimer's disease (AD), and vascular dementia (VD)]. The roles of age at onset of disease, insulin use and diabetes' complications in their association were also analyzed. Results: Compared to people with no diabetes at all, people with T2DM had an increased risk of all-cause dementia (HR 2.85, 95% CI 2.56-3.17). The HRs between T2DM and AD were higher in women than men, with an RHR (95%CI) of 1.56 (1.20, 2.02). There was a trend that people who experienced T2DM before age 55 had higher risk of VD than those who had T2DM after age 55. In addition, there was a trend that T2DM had a higher effect on VD that occurred before age 75 years than events that occurred after age 75. Patients with T2DM using insulin had a higher risk of all-cause dementia than those without insulin, with an RHR (95%CI) of 1.54 (1.00-2.37). People with complications had double the risk of all-cause dementia, AD and VD.
Conclusions: Adopting a sex-sensitive strategy to address the risk of dementia in patients with T2DM is instrumental for a precision medicine approach. Meanwhile, it is warranted to consider patients' age at onset of T2DM, insulin use status and complications conditions. abstract_id: PUBMED:28503814 Diabetes mellitus and risk of early-onset Alzheimer's disease: a population-based case-control study. Background And Purpose: Previous studies have reported that diabetes is a risk factor for both all-cause and vascular dementia; however, diabetes as a risk factor for Alzheimer's disease (AD) remains controversial. Therefore, the aim was to elucidate the association between diabetes and early-onset AD. Methods: A case-control study was conducted using a population-based database that included medical and pharmacy claims and insurance eligibility data, from beneficiaries of corporate employees and their dependent family members. Cases were aged 40-64 years and were first prescribed medications for AD between 2005 and 2016. Up to four controls matched for age, sex and hospital type were included for each case. Data were analyzed using conditional logistic regression and compared between the sexes. Results: Data from 371 patients with AD (mean age 56.3 ± 5.3 years; 48% female) and 1484 controls were analyzed. Use of antidepressants, antipsychotics and antithrombotics during the index month was higher amongst patients with AD (19.4%, 34.5% and 11.3%, respectively) than amongst controls (2.9%, 10.3% and 7.3%, respectively). Our findings suggest no evidence for an association between diabetes and risk of early-onset AD (adjusted odds ratio 1.31; 95% confidence interval 0.90-1.92). In the subgroup analyses, adjusted odds ratios in patients with diabetes were 0.73 (95% confidence interval 0.38-1.39) and 1.68 (95% confidence interval 1.06-2.67) for female and male patients, respectively. Conclusions: There is no apparent association between diabetes and risk of early-onset AD in the total study population, although a weak association was observed amongst male patients. abstract_id: PUBMED:38217604 Comorbidity of Dementia: A Cross-Sectional Study of PUMCH Dementia Cohort. Background: Comorbidities reduce quality of life for people with dementia and caregivers. Some comorbidities share a genetic basis with dementia. Objective: The objective of this study is to assess comorbidity in patients with different dementia subtypes in order to better understand the pathogenesis of dementias. Methods: A total of 298 patients with dementia were included. We collected some common comorbidities. We analyzed the differences in comorbidities among patients with dementia according to clinical diagnosis, age of onset (early-onset: < 65 and late-onset: ≥65 years old) and apolipoprotein (APOE) genotypes by using the univariate and multivariate approaches. Results: Among 298 participants, there were 183 Alzheimer's disease (AD), 40 vascular dementia (VaD), 37 frontotemporal dementia (FTLD), 20 Lewy body dementia (LBD), and 18 other types of dementia. Based on age of onset, 156 cases had early-onset dementia and 142 cases had late-onset dementia. The most common comorbidities observed in all dementia patients were hyperlipidemia (68.1%), hypertension (39.9%), insomnia (21.1%), diabetes mellitus (19.5%), and hearing impairment (18.1%). 
The prevalence of hypertension and cerebrovascular disease was found to be higher in patients with VaD compared to those with AD (p = 0.002, p < 0.001, respectively) and FTLD (p = 0.028, p = 0.004, respectively). Additionally, patients with late-onset dementia had a higher burden of comorbidities compared to those with early-onset dementia. It was observed that APOE ɛ4/ɛ4 carriers were less likely to have insomnia (p = 0.031). Conclusions: Comorbidities are prevalent in patients with dementia, with hyperlipidemia, hypertension, insomnia, diabetes, and hearing impairment being the most commonly observed. Comorbidity differences existed among different dementia subtypes. abstract_id: PUBMED:24478258 Younger age of dementia diagnosis in a Hispanic population in southern California. Objective: Prior studies of US Hispanics, largely performed on the East Coast, have found a younger age of dementia onset than in White non-Hispanics. We performed a cross-sectional study to examine clinical and sociodemographic variables associated with age of dementia diagnosis in older Hispanics and White, non-Hispanics in southern California. Methods: Two hundred ninety (110 Hispanic and 180 White non-Hispanic) community dwelling, cognitively symptomatic subjects, aged 50 years and older, were assessed and diagnosed with probable Alzheimer's disease or probable vascular dementia. Apolipoprotein E (APOE) genotype was assessed in a subset of cases. Analysis of variance and multiple stepwise linear regression were used to assess main effects and interactions of ethnicity with dementia severity (indexed by mini mental state examination scores) and other sociodemographic and clinical variables on age of dementia diagnosis. Results: Hispanics were younger by an average of 4 years at the time of diagnosis, regardless of dementia subtype, despite a similar prevalence of the APOE ε4 genotype. The earlier age at diagnosis for Hispanics was not explained by gender, dementia severity, years of education, history of hypercholesterolemia, hypertension, or diabetes. Only ethnicity was significantly associated with age of onset. Conclusions: These findings confirm that US Hispanics living in the southwestern USA tend to be younger at the time of dementia diagnosis than their White non-Hispanic counterparts. As this is not explained by the presence of the APOE ε4 genotype, further studies should explore other cultural, medical, or genetic risk factors influencing the age of dementia onset in this population. abstract_id: PUBMED:31177182 BMI, Weight Change, and Dementia Risk in Patients With New-Onset Type 2 Diabetes: A Nationwide Cohort Study. Objective: This study examined the association between baseline BMI, percentage weight change, and the risk of dementia in patients newly diagnosed with type 2 diabetes. Research Design And Methods: Using the South Korean National Health Insurance Service-National Health Screening Cohort database, we identified 167,876 subjects aged ≥40 years diagnosed with new-onset type 2 diabetes between 2007 and 2012. Their weight changes were monitored for ∼2 years after diagnosis, with follow-up assessments occurring for an average of 3.5 years. The hazard ratios (HRs) and Bonferroni-adjusted 95% CIs of all-cause dementia, Alzheimer disease (AD), and vascular dementia were estimated using multivariable Cox proportional hazards regression models. Results: We identified 2,563 incident dementia cases during follow-up. 
Baseline BMI among patients with new-onset type 2 diabetes was inversely associated with the risk of all-cause dementia and AD, independent of confounding variables (P for trend <0.001). The percentage weight change during the 2 years after a diagnosis of type 2 diabetes showed significant U-shaped associations with the risk of all-cause dementia development (P < 0.001); the HRs of the disease increased significantly when weight loss or gain was >10% (1.34 [95% CI 1.11-1.63] and 1.38 [1.08-1.76], respectively). Additionally, weight loss >10% was associated with an increased risk of AD (HR 1.26 [95% CI 1.01-1.59]). Conclusions: A lower baseline BMI was associated with increased risks of all-cause dementia and AD in patients with new-onset type 2 diabetes. Weight loss or weight gain after the diagnosis of diabetes was associated with an increased risk of all-cause dementia. Weight loss was associated with an increased risk of AD. abstract_id: PUBMED:22119909 Prevention of vascular dementia. Evidence and practice During recent years, increasing knowledge has been obtained from clinical studies about the impact that vascular factors have on cognitive function and dementia. Due to demographic reasons and still insufficient control of all vascular risk factors, dementia and associated problems are of increasing importance and will have impact on economical and social development in most countries. The incidence of cognitive impairment and dementia will increase exponentially. As long as no causal therapy for dementia exists, diagnosis and control of risk factors for dementia will need much more attention. Hypertension is not only the most important risk factor for stroke that often leads to dementia but also for silent brain infarcts, which are also associated with onset of dementia. Uncontrolled hypertension is associated with cognitive impairment and sufficient control of hypertension in middle-aged patients can reduce the risk of dementia in older ages. Nevertheless, treatment of all other risk factors (e.g., diabetes mellitus, hyperlipidemia, atrial fibrillation) is important to reduce the onset of not only vascular but also Alzheimer dementia. abstract_id: PUBMED:35299656 Association of a wide range of chronic diseases and apolipoprotein E4 genotype with subsequent risk of dementia in community-dwelling adults: A retrospective cohort study. Background: Identifying independent and interactive associations of a wide range of diseases and multimorbidity and apolipoprotein E4 (APOE4) with dementia may help promote cognitive health. The main aim of the present study was to investigate associations of such diseases and their multimorbidity with incident dementia. Methods: In this retrospective cohort study, we included 471,485 individuals of European ancestry from the UK Biobank, aged 38-73 years at baseline (2006-10). Dementia was identified using inpatient records and death registers. The follow-up period was between March 16, 2006, and Jan 31, 2021. Findings: During a median follow-up of 11·9 years, 6189 cases of incident all-cause dementia (503 young-onset cases, 5686 late-onset cases) were documented. In multivariable-adjusted analysis, 33 out of 63 major diseases were associated with an increased risk of dementia. The hazard ratio (HR [95% CI]) ranged from 1·12 (1·06-1·19) for obesity to 14·22 (12·33-16·18) for Parkinson's disease. 
In addition to conventional diseases, respiratory disorders, musculoskeletal disorders, digestive disorders, painful conditions, and chronic kidney disease were associated with increased dementia risk. A larger HR for dementia was observed for a larger number of diseases (3·97 [3·51-4·48] for ≥6 diseases versus no disease). These individual diseases and multimorbidity were more predictive of young-onset dementia than of late-onset dementia. Dementia risk score incorporating multimorbidity, age, and APOE4 status had strong prediction performance (area under the curve [95% CI]: 82·2% [81·7-82·7%]). APOE4 was more predictive of late-onset dementia (HR [95% CI]: 2·90 [2·75-3·06]) than of young-onset dementia (1·26 [1·03-1·54]). Associations of painful conditions, depression, obesity, diabetes, stroke, Parkinson's disease, high cholesterol, and their multimorbidity with incident dementia were stronger among non-APOE4 carriers. Interpretation: Besides conventional diseases, numerous diseases are associated with an increased risk of dementia. These individual diseases and multimorbidity are more predictive of young-onset dementia, whereas APOE4 is more predictive of late-onset dementia. Individual diseases and multimorbidity are stronger predictors of dementia in non-APOE4 carriers. Although multiple risk factors have been adjusted for in the analysis, potential confounding from unknown factors may have biased the associations. Funding: The Fundamental Research Funds of the State Key Laboratory of Ophthalmology, Project of Investigation on Health Status of Employees in Financial Industry in Guangzhou, China (Z012014075), Science and Technology Program of Guangzhou, China (202,002,020,049). Answer: Yes, diabetes mellitus (DM) does appear to alter the onset and clinical course of vascular dementia (VaD). According to a study examining the relationship between type 2 diabetes mellitus (DM) and the clinical variables of VaD, patients with a history of pre-morbid DM had a significantly earlier onset of VaD and a faster rate of cognitive decline per year on the Mini Mental State Examination (MMSE). Additionally, DM was associated with a greater prevalence of neuropsychiatric symptoms at the time of diagnosis of VaD (PUBMED:21098968). Another study investigating the onset age of diabetes and incident dementia found that younger age at diabetes onset was significantly associated with a higher risk of dementia, including vascular dementia (VD). The strength of the association between diabetes and all-cause dementia increased with decreasing onset age of diabetes (PUBMED:36868384). Furthermore, a study comparing the clinical features and course of Alzheimer disease with cerebrovascular disease (AD+CVD) and vascular dementia (VaD) found that cognitive decline during follow-up was slowest for VaD, suggesting that the progression of cognitive impairment may differ between these conditions (PUBMED:18977819). Sex-specific associations between diabetes and dementia also indicate that the association of type 2 diabetes (T2DM) with dementia differed by sex, and the roles of age at onset of disease, insulin use, and diabetes' complications in their association are notable. For instance, people who experienced T2DM before age 55 had a higher risk of VD than those who had T2DM after age 55 (PUBMED:36804018). 
In summary, diabetes mellitus, particularly when onset occurs at a younger age, is associated with an earlier onset and a different clinical course of vascular dementia, including a faster cognitive decline and a higher prevalence of neuropsychiatric symptoms. These findings suggest that diabetes mellitus is an important factor in the development and progression of vascular dementia.
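Note: several of the abstracts above (e.g., PUBMED:36868384 and PUBMED:35299656) report Cox proportional hazards models with adjusted hazard ratios for incident dementia. The following is a minimal sketch of how such a model might be fitted, assuming the lifelines Python library and an entirely hypothetical data frame; the column names and the hazard-ratio arithmetic at the end are illustrative only and are not taken from UK Biobank data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000

df = pd.DataFrame({
    "follow_up_years": rng.uniform(1, 15, n),   # time to dementia or censoring
    "dementia": rng.integers(0, 2, n),          # 1 = incident dementia
    "diabetes": rng.integers(0, 2, n),
    "baseline_age": rng.uniform(40, 70, n),
    "sex": rng.integers(0, 2, n),
})

# Fit a Cox proportional hazards model; all non-duration, non-event columns
# are treated as covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="dementia")
cph.print_summary()  # the exp(coef) column gives adjusted hazard ratios with 95% CIs

# Reading an HR reported "per 10-year decrease in onset age":
hr_per_decade = 1.20
print(hr_per_decade ** 2)  # implied HR for onset 20 years earlier (~1.44), assuming log-linearity
```

The last two lines simply illustrate how a per-decade hazard ratio compounds over larger differences in onset age under a log-linear assumption, which is how results such as "HR 1.20 per 10 years earlier onset" are usually interpreted.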
Instruction: Can reading accuracy and comprehension be separated in the Neale Analysis of Reading Ability? Abstracts: abstract_id: PUBMED:15130187 Can reading accuracy and comprehension be separated in the Neale Analysis of Reading Ability? Background: The Neale Analysis of Reading Ability (NARA) (Neale, 1997) is widely used in education and research. It provides measures of reading accuracy (decoding) and comprehension, which are frequently interpreted separately. Aims: Three studies were conducted to investigate the degree to which the NARA measures could be separated. Samples: British 7- and 8-year-olds participated in Study 1 (N=114) and Study 2 (N=212). In Study 3, 16 skilled and less-skilled comprehenders were identified from the Study 2 sample. Methods: Study 1: By investigating their contribution to silent reading comprehension, the independence of NARA decoding and comprehension scores was determined. Study 2: Decoding groups matched for listening comprehension were compared on the NARA comprehension measure, and population performance was compared across listening comprehension and NARA reading comprehension. Study 3: Comprehension groups were compared on ability to answer open-ended and forced-choice questions. Results: Firstly, NARA comprehension performance depended on decoding, to the extent that children with high listening comprehension ability but low decoding ability attained low NARA comprehension scores. Secondly, 32% of children who attained low NARA comprehension scores exhibited high listening comprehension. Thirdly, comprehension groups differed when assessed with open-ended questions but not when assessed with forced-choice questions. Conclusions: The NARA can underestimate the comprehension ability of children with weak decoding skills and children who have some difficulty with open-ended questions. The decoding and comprehension measures of the NARA cannot be separated. These findings have important implications for the interpretation of the measures provided by the NARA, in education and research. abstract_id: PUBMED:35645943 Reading Ability in Patients With Tuberous Sclerosis Complex: Results of Chinese Character Reading and Reading Comprehension Tests. Background: Most tuberous sclerosis complex (TSC) patients have neurological disorders and are at high risk of academic difficulties. Among academic skills, reading ability is the most important. The study applied the Chinese character fluency test to measure the word recognition and reading comprehension of TSC children to observe whether they have the characteristics of reading disability, as an indicator of the spectrum of reading ability in TSC patients. Methods: The patients were assessed using the Chinese character fluency test and reading comprehension test to explore the differences in reading ability in terms of gender, age, epilepsy history, genotype, and intelligence level. Results: Of the 27 patients, the assessment of reading accuracy showed statistical differences between intellectual level > 80, PR (p = 0.024), and pass numbers (p = 0.018). For the fluency assessment, there was a difference between different intellectual levels (p = 0.050). In the reading comprehension test, there were differences for intellectual level in positivity (p = 0.07) and pass numbers (p = 0.06).
All individuals with TSC, especially those with below-average intellectual ability, should be considered for potential academic difficulties. abstract_id: PUBMED:17094881 Assessment matters: issues in the measurement of reading comprehension. Background: The Neale Analysis of Reading Ability (NARA; Neale, 1997) is a widely used assessment of reading comprehension and word reading accuracy. Spooner, Baddeley, and Gathercole (2004) questioned the suitability of the NARA for identifying children with specific reading comprehension deficits. Aims And Methods: An evaluation of the NARA measurement of word reading and reading comprehension level was undertaken in relation to models of reading ability. Appropriate control measures were considered. The strengths and weaknesses of different forms of reading comprehension were also evaluated. Results: Previous research into reading comprehension difficulties using the NARA has adopted satisfactory control measures in relation to word reading ability. There are limitations associated with all the considered forms of reading comprehension assessment. Conclusions: If administered and interpreted appropriately, the NARA is an effective instrument for researchers and practitioners who need to assess both word reading accuracy and reading comprehension and to identify children with a dissociation between these two aspects of reading. abstract_id: PUBMED:29556444 Sentence Reading Comprehension by Means of Training in Segment-Unit Reading for Japanese Children with Intellectual Disabilities. Children with intellectual disabilities (ID) often have difficulty in sentence reading and comprehension. Previous studies have shown that training in segment-unit reading (SUR) facilitates the acquisition of sentence reading comprehension skills for Japanese students with ID. However, it remains unknown whether SUR training is also effective for individuals unable to read sentences and can generalize to untrained sentences. In this study, we examined the improvement and generalization of sentence reading accuracy and comprehension for two children with ID through SUR training with listening-comprehensible sentences. During training, the segments were sequentially presented in their correct spatial locations, and participants read them aloud. After the training, participants' reading accuracy and comprehension improved for both trained and untrained sentences. The results suggest that presenting the components of stimuli sequentially in their correct spatial locations is key to facilitating the development of sentence reading accuracy and comprehension for individuals with ID. abstract_id: PUBMED:12495566 General cognitive ability in children with reading comprehension difficulties. Background: Children with specific reading comprehension difficulties read accurately and fluently but are poor at understanding what they read. Aims: This study investigated cognitive ability in children with poor reading comprehension with a view to determining the relationship between general cognitive ability and specific reading comprehension difficulty. Sample: Twenty-five poor comprehenders and 24 control children, matched for chronological age and word reading ability, participated in this study. Methods: General conceptual ability (GCA) was assessed using the British Ability Scales (2nd edition; BAS-II); good and poor comprehenders' performance on different subscales was compared and related to underlying skills in reading accuracy, reading comprehension and number.
Results: There was a general tendency for poor comprehenders to achieve lower scores on verbal tasks than on non-verbal and spatial tasks. Although the poor comprehenders scored significantly below the control children across most subtests, most obtained GCA scores within the normal range. For these children, reading comprehension was significantly below GCA-expected levels. A subset of poor comprehenders with below-average GCA showed a clear hyperlexic profile in which comprehension was not unexpectedly poor but rather reading accuracy was surprisingly good. Conclusions: These findings highlight the heterogeneity of children presenting with poor reading comprehension. Although most poor comprehenders have weaknesses that appear to be restricted to the verbal domain, a minority have more general cognitive impairments. abstract_id: PUBMED:29660589 Predicting reading ability in teenagers who are deaf or hard of hearing: A longitudinal analysis of language and reading. Background: Deaf and hard of hearing (D/HH) children and young people are known to show group-level deficits in spoken language and reading abilities relative to their hearing peers. However, there is little evidence on the longitudinal predictive relationships between language and reading in this population. Aims: To determine the extent to which differences in spoken language ability in childhood predict reading ability in D/HH adolescents. Methods and procedures: Participants were drawn from a population-based cohort study and comprised 53 D/HH teenagers, who used spoken language, and a comparison group of 38 normally hearing teenagers. All had completed standardised measures of spoken language (expression and comprehension) and reading (accuracy and comprehension) at 6-10 and 13-19 years of age. Outcomes and results: Forced entry stepwise regression showed that, after taking reading ability at age 8 years into account, language scores at age 8 years did not add significantly to the prediction of Reading Accuracy z-scores at age 17 years (change in R2 = 0.01, p = .459) but did make a significant contribution to the prediction of Reading Comprehension z-scores at age 17 years (change in R2 = 0.17, p < .001). Conclusions and implications: In D/HH individuals who are spoken language users, expressive and receptive language skills in middle childhood predict reading comprehension ability in adolescence. Continued intervention to support language development beyond primary school has the potential to benefit reading comprehension and hence educational access for D/HH adolescents. abstract_id: PUBMED:37711329 Reading fluency as the bridge between decoding and reading comprehension in Chinese children. Purpose: Reading fluency has been considered an essential component of reading comprehension, but it is yet to be examined in a reading model in a non-alphabetic writing system. This study investigated whether reading fluency could be identified as a separate construct from decoding and examined the unique role of reading fluency in the Simple View of Reading (SVR). Method: A total of 342 Cantonese-speaking Chinese children in grades 3-5 were recruited to participate in the study. They were assessed on word reading accuracy and fluency, morphological awareness, vocabulary knowledge, and reading comprehension. Results: The confirmatory factor analysis results confirmed that reading fluency is a separate factor from decoding, linguistic comprehension, and reading comprehension.
Furthermore, the structural equation modeling results revealed that reading fluency is a significant predictor of reading comprehension and a mediator between decoding and reading comprehension in the extended SVR model. Conclusion: The findings extended previous research in alphabetic languages and supported reading fluency as the bridge between decoding and reading comprehension. The present study highlighted the importance of reading fluency in Chinese reading acquisition in a theoretical framework. abstract_id: PUBMED:28395303 Prologue: Reading Comprehension Is Not a Single Ability. Purpose: In this initial article of the clinical forum on reading comprehension, we argue that reading comprehension is not a single ability that can be assessed by one or more general reading measures or taught by a small set of strategies or approaches. Method: We present evidence for a multidimensional view of reading comprehension that demonstrates how it varies as a function of reader ability, text, and task. The implications of this view for instruction of reading comprehension are considered. Conclusion: Reading comprehension is best conceptualized with a multidimensional model. The multidimensionality of reading comprehension means that instruction will be more effective when tailored to student performance with specific texts and tasks. abstract_id: PUBMED:32870705 Relationship Between Single Word Reading, Connected Text Reading, and Reading Comprehension in Persons With Aphasia. Purpose: This study examined the relationship between single word reading, connected text reading, and comprehension in persons with aphasia. Method: Thirteen persons with aphasia read orally from the Arizona Battery for Reading and Spelling real-word and nonword lists and the Gray Oral Reading Tests-Fifth Edition. The comprehension questions following each paragraph of the Gray Oral Reading Tests-Fifth Edition were answered and scored. The Reading Comprehension Battery for Aphasia-Second Edition provided a measure of silent reading comprehension. Descriptive statistics and Spearman correlation were used to examine associations among reading measures. Results: Persons with aphasia showed associations between single word reading and connected text reading accuracy; however, single word reading ability was not associated with oral or silent reading comprehension. Conclusions: Although preliminary, the findings provide support for word-level reading abilities underlying connected text reading accuracy but suggest additional cognitive mechanisms are involved in text-level reading comprehension that are not explained by single word reading alone. The findings indicate clinicians should use caution when inferring comprehension abilities from single word reading performance, as reading comprehension abilities are likely best assessed using text-level comprehension assessments. abstract_id: PUBMED:28364575 Morphological awareness and reading comprehension: Examining mediating factors. The relation between morphological awareness (defined as the awareness of and ability to manipulate the smallest units of meaning in language) and reading comprehension remains in need of specification. In this study, we evaluated four potential intervening variables through which morphological awareness may contribute indirectly to reading comprehension. We assessed word reading and vocabulary as well as children's ability to read and analyze the meaning of morphologically complex words (morphological decoding and morphological analysis, respectively).
Controls of phonological awareness and nonverbal ability were included in the model. Participants were 221 English-speaking children in Grade 3. Multivariate path analyses revealed evidence of two indirect relations and one direct relation between morphological awareness and reading comprehension. In the first indirect path, morphological awareness contributed to morphological decoding, which then influenced word reading and finally reading comprehension. In a second indirect path, morphological awareness contributed to morphological analysis, which contributed to reading comprehension. Finally, in a direct path, morphological awareness contributed to reading comprehension beyond all other variables. These findings inform as to the potential mechanisms underlying the relation between morphological awareness and reading comprehension in children. Answer: No, reading accuracy and comprehension cannot be separated in the Neale Analysis of Reading Ability (NARA). Studies have shown that the NARA comprehension performance is dependent on decoding to the extent that children with high listening comprehension ability but low decoding ability attain low NARA comprehension scores. Additionally, a significant portion of children who scored low on NARA comprehension exhibited high listening comprehension. When comprehension groups were compared on their ability to answer open-ended and forced-choice questions, differences were observed with open-ended questions but not with forced-choice questions. This suggests that the NARA can underestimate the comprehension ability of children with weak decoding skills and those who have difficulty with open-ended questions. Therefore, the decoding and comprehension measures of the NARA cannot be considered separate entities (PUBMED:15130187). Moreover, other studies have highlighted the complexity of reading comprehension and its multidimensional nature, suggesting that it varies as a function of reader ability, text, and task (PUBMED:28395303). This further supports the idea that reading comprehension is not a single ability and cannot be easily disentangled from other reading skills such as decoding or reading accuracy.
Instruction: Does laparoscopy lower the threshold for the surgical treatment of gastroesophageal reflux disease in children? Abstracts: abstract_id: PUBMED:20706151 Does laparoscopy lower the threshold for the surgical treatment of gastroesophageal reflux disease in children? Objective: To assess whether laparoscopic surgery lowers the threshold for surgical intervention, we examined whether the introduction of the laparoscopic technique at our institution in 1997 has resulted in an increase in antireflux surgery in children at our clinic. Patients And Methods: The number of annual fundoplications between 1997 and 2008 at a single institution was assessed in children younger than 18 years. The number of fundoplications was compared with the number of pyloromyotomies and appendicectomies per year in the same period of time to prove or exclude a general increase in the referral of children. Results: Since 1997, the proportion of laparoscopic fundoplications increased from 60% in 1997 to 100% in 2008. During this period, 109 laparoscopic fundoplications were performed: 31 in the period from 1997 to 2002 and 78 from 2003 to 2008. Regression analysis shows a significant increase in the number of performed fundoplications (slope: 1.03 ± 0.28, P = 0.0043), whereas both the number of pyloromyotomies and appendicectomies remained stable (slopes: -0.14 ± 0.40, P = 0.73, and -0.75 ± 0.47, P = 0.14, respectively). Conclusions: Since the introduction of minimally invasive surgery at our tertiary referral center in 1997, the number of patients referred for an antireflux operation has increased. This cannot be explained by an increase of referrals from outside the region or a change in the indication for surgery. We conclude that laparoscopy lowers the threshold for the surgical treatment of gastroesophageal reflux disease in children. abstract_id: PUBMED:27926352 3D Laparoscopy in Neonates and Infants. Background: This study focuses on the successful application of three-dimensional (3D) laparoscopic surgeries in the treatment of congenital anomalies and acquired diseases in the young pediatric population. The purpose of this scientific work consists in highlighting the spectrum, indications, applicability, and effectiveness of 3D endosurgery in children. Methods: Our experience is based on 110 endosurgical procedures performed in neonates and infants in the 3D format between January 2014 and May 2015. Depending on the type of operations, all patients were divided into the following groups: (1) inguinal herniorrhaphy (IH)-63 patients; (2) Nissen fundoplication (NF)-22 patients; (3) pyeloureteral anastomosis (PUA)-15 patients; (4) nephrectomy (NE)-5 patients; and (5) ovarian cystectomy (OC)-5 patients. The patients of the first three groups were compared with babies who underwent standard laparoscopic surgery, performed in the two-dimensional (2D) format during the same time period. The groups were organized according to patient demographics, operative report, and postoperative parameters. Results: The patients were similar in terms of demographics and other preoperative parameters. There were significant differences in mean operative time between 3D and 2D procedures in the groups of patients with hydronephrosis and gastroesophageal reflux, which used manipulation with internal sutures (NF-37.95 minutes versus 48.42 minutes, P = .014; PUA-61.31 minutes versus 78.75 minutes, P = .019), but not in group after IH (15.88 minutes versus 15.57 minutes, P = .681). 
Postoperative parameters such as length of hospital stay and the number of complications were equivalent between groups. Conclusion: In this study, we demonstrated the success of 3D laparoscopy in small babies with inguinal hernia, gastroesophageal reflux, hydronephrosis, ovarian cyst, and multicystic kidney. Laparoscopy in 3D format lessens the duration of complex procedures, which utilize the use of the suture technique into the abdominal cavity. The perception of depth and the presence of tactile feedback make 3D laparoscopic surgery more acceptable when compared to traditional laparoscopy. abstract_id: PUBMED:22892343 Use of laparoscopy in general surgical operations at academic centers. Background: Laparoscopy is commonly being used in many different types of general surgical procedures. The aim of the present study was to examine the use of laparoscopy and perioperative outcomes in 7 general surgical operations commonly performed at U.S. academic medical centers. Methods: The clinical data of patients who underwent 1 of the 7 general surgical operations from 2008 to 2012 were obtained from the University HealthSystem Consortium database. The University HealthSystem Consortium database contains data from all major teaching hospitals in the United States. The 7 analyzed operations included only elective, inpatient procedures (except for appendectomy): open and laparoscopic antireflux surgery for gastroesophageal reflux, colectomy for colon cancer or diverticulitis, bariatric surgery for morbid obesity, ventral hernia repair for incisional hernia, appendectomy for acute appendicitis, rectal resection for rectal cancer, and cholecystectomy for cholelithiasis. The outcome measures included the number of procedures, rate of laparoscopy, rate of conversion to laparotomy, and in-hospital mortality. Results: During the 3.5-year period, 53,958 patients underwent bariatric surgery, 13,918 patients underwent antireflux surgery, 8654 patients underwent appendectomy, 8512 patients underwent cholecystectomy, 29,934 patients underwent colectomy, 17,746 patients underwent ventral hernia repair, and 4729 patients underwent rectal resection. The present rate of laparoscopic use was 94.0% for bariatric surgery, 83.7% for antireflux surgery, 79.2% for appendectomy, 77.1% for cholecystectomy, 52.4% for colectomy, 28.1% for ventral hernia repair, and 18.3% for rectal resection. In-hospital mortality was greatest for colorectal resection (.38%-.58%). In-hospital mortality for bariatric surgery (.06%) was comparable to that for appendectomy (.01%), cholecystectomy (.27%), antireflux surgery (.15%), and ventral hernia repair (.20%). The rate of laparoscopic conversion to open surgery was lowest for bariatric surgery (.89%) and greatest for rectal resection (16.4%). Conclusion: Within the context of academic centers and elective, inpatient procedures, bariatric surgery had the greatest use of laparoscopy and the lowest rate of laparoscopic conversion to open surgery. The mortality for laparoscopic bariatric surgery is now comparable to that of laparoscopic cholecystectomy, ventral hernia repair, appendectomy, and antireflux surgery. abstract_id: PUBMED:23566581 Video-assisted surgery in children: current progress and future perspectives This review presents the evidence of video-assisted surgery in the pediatric population and discusses future progress in this field. Videosurgery minimizes the cosmetic impact and the pain induced by open procedures and has been in constant development in adults and children. 
Earlier training of surgeons and residents combined with advances in anesthetics and technology have expanded the use of videosurgery for more complex interventions. Although most feasible surgical procedures have been performed by laparoscopy, the literature has not yet defined it as the gold standard for most interventions, especially because of the lack of evidence for many of them. However, laparoscopy for cholecystectomy is now the preferred approach with excellent postoperative outcomes and few complications. Although no evidence has been demonstrated in children, laparoscopy has been shown to be superior in adults for gastroesophageal reflux disease and splenectomy. Laparoscopic appendectomy remains controversial. Nevertheless, meta-analyses have concluded in moderate but significant advantages in terms of pain, cosmetic considerations, and recovery for the laparoscopic approach. Laparoscopy is now adopted for undescended testes and allows both localization and surgical treatment if necessary. For benign conditions, videosurgery can be an excellent tool for nephrectomy and adrenalectomy. However, laparoscopy remains controversial in pediatric surgical oncology. abstract_id: PUBMED:23065041 Lower esophageal sphincter-preserving laparoscopy-assisted proximal gastrectomy in patients with early gastric cancer: a method for the prevention of reflux esophagitis. Although laparoscopy-assisted proximal gastrectomy (LAPG) with esophagogastrostomy for early gastric cancer (EGC) is technically feasible and oncologically safe, it has not been popularized because of the frequent occurrence of reflux esophagitis associated with loss of the lower esophageal sphincter (LES). Herein, we present surgical outcomes in patients with LES-preserving LAPG (LES-p LAPG), which may contribute to protecting against postoperative gastroesophageal reflux or stricture in the treatment of proximal EGC. From November 2009 to May 2010, LES-p LAPG was performed in nine patients with clinical EGC, located at the proximal one-third of the stomach with the upper margin of the tumor 3-4 cm from the esophagogastric junction. After the resection of the proximal stomach with D1 + β lymph node dissection, gastrogastrostomy was performed using a 25-mm circular stapler through a mini-laparotomy wound at the epigastrium. The median operating time was 137.5 min (range 120-180). The median number of retrieved lymph nodes and length of the proximal resection margin were 27 (range 7-49) and 2.4 cm (range 0.7-5), respectively. The postoperative complications included one gastrogastrostomy stricture and one case of leakage, which were managed by endoscopic balloon dilation and conservative treatment, respectively. None of the patients suffered from symptoms of reflux esophagitis during the follow-up period (median 15 months; range 8-28 months). This technique of LES-p LAPG for the treatment of proximal EGC could be a simple, safe, and useful technique to prevent esophageal reflux or stricture. This technique requires prospective validation. abstract_id: PUBMED:31376132 A Video Case Report of Gastric Perforation Following Endoscopic Sleeve Gastroplasty and its Surgical Treatment. Introduction: Endoscopic sleeve gastroplasty (ESG) is a novel weight loss procedure that reduces the size of the stomach using an endoscopic suturing device. There are severe adverse events that have been reported following ESG (Brethauer et al. Surg Obes Relat Dis. 6:689-94, 2010; Abu Dayyeh et al. Gastrointest Endosc. 78:530-5, 2013; Nava et al. Endoscopy. 
47:449-52, 2015; Nava et al. Endosc Int Open. 4(2):E222-7, 2016). However, complications like gastric perforation following ESG have not been reported. This video presents a case with gastric perforation following ESG and its surgical treatment. Methods: A 44-year-old female patient with an initial body mass index (BMI) of 38 kg/m2 underwent an ESG. Her comorbidities include gastroesophageal reflux disease (GERD) and polycystic ovary syndrome (PCOS). On postoperative day six, the patient presented with lower abdominal pain. The patient refused to get an esophagogastroduodenoscopy (EGD) or laparoscopy done. An upper gastrointestinal series (UGI) was performed, and a large ileus was noted with no evidence of leak or free air. On postoperative day seven, a computed tomography (CAT) scan showed a large amount of free air and fluid throughout the abdomen and pelvis. The patient was taken to the operating room (OR) for an exploratory laparoscopy. Results: Upon entering the abdomen, a large amount of pus and free fluid was noted. This was irrigated free from the abdominal cavity until it came back clear. We noted six sutures that went intraluminally to extraluminally and entered the anterior abdominal wall. These sutures were taken down until we found the perforation. A GIA stapler was placed over the perforation, and the defect was closed. The staple line was then imbricated. Once done with the imbrication, we spent a significant amount of time laparoscopically irrigating the abdomen with 12 L of fluid. In total, three drains were placed to assist with draining the abdomen. Conclusion: ESG is a feasible endobariatric option, but complications like gastric perforation can occur. For such complication, immediate surgical treatment is indicated. abstract_id: PUBMED:11052187 Laparoscopy or thoracoscopy for achalasia. Achalasia is an esophageal motor disorder of unknown etiology. Typical manometric findings include aperistalsis of the esophageal body coupled with elevated pressure and incomplete relaxation of the lower esophageal sphincter during swallowing. Medical treatments consist of pneumatic dilatation or injections of botulinum toxin. Surgical treatment consists of Heller's myotomy with or without an antireflux procedure. Relief of dysphagia symptoms can be achieved in 85% to 94% of patients undergoing surgical treatment. In the past decade, the minimally invasive approach for the treatment of achalasia has been proven feasible, safe, and effective. We review the role of thoracoscopy and laparoscopy and address controversies in the management of patients with achalasia. abstract_id: PUBMED:8948034 Gastroesophageal reflux: conventional surgical treatment versus laparoscopy. A prospective study of 61 cases. Sixty-one patients with gastroesophageal reflux who did not respond to conventional medical treatment were treated in a prospective study, 29 by conventional surgery and 32 by laparoscopic methods. All underwent manometry and pH measurement preoperatively and at a follow-up of four months. There was no mortality, and the morbidity of the two groups was not significantly different at 3% and 5%. Hospital stay was significantly reduced (5.4 versus 8.9 days; p = 0.02) following laparoscopic treatment, and time off from work was 21.3 days versus 38.2 days (p = 0.02). The satisfaction index expressed by the patients was 65% at 1 month and 95% at 3 months. Dysphagia was observed in 30% of the patients at 1 month and in 3% at 4 months in both groups. 
The results of manometry and pH measurements at 4 months are comparable between open surgery and laparoscopy. There was one failure (3%) in the laparoscopic group caused by disruption of the valve. The mean pressure in the esophageal segment (expressed in mm Hg) changed in the two groups from 3.6 to 18.1 (p = 0.001). The results of this series show laparoscopic management of gastroesophageal reflux to be justified. abstract_id: PUBMED:12042909 The efficacy of laparoscopy in detecting and treating associated congenital malformations in children. One of the main advantages of laparoscopy in children is the fact that it enables a magnified view and the possibility to explore the whole abdominal cavity. This case report clearly shows these advantages. We report the case of a 3-yr-old girl, suffering from severe GERD and right inguinal hernia, who had already been operated on at birth for esophageal atresia. We performed a laparoscopic fundoplication according to Nissen and, at the end of the procedure, we decided to turn the optic down to check the right inguinal region to confirm the presence of an inguinal hernia. To our great surprise we found a right oblique external hernia as well as a direct inguinal hernia on the same side. Both hernias were treated successfully by laparoscopy. At a 1-year follow-up, the patient presented no reflux and no recurrence of the inguinal hernias. Laparoscopy in this case permitted operation on two different pathologies involving the upper and lower parts of the abdominal cavity using the same ports and without enlarging the incision, as would happen in laparotomy. The main relevance of this case is that laparoscopy allowed the detection of an associated pathology like a direct inguinal hernia that would have been certainly overlooked in open surgery and could have caused a recurrent hernia if operated via open surgery. abstract_id: PUBMED:30591490 Lower Esophageal Sphincter-preserving Anastomosis Is an Acceptable Reconstruction Method Following Laparoscopy-assisted Proximal Gastrectomy. Background/aims: Our study aimed to compare 3-year results following laparoscopy-assisted proximal gastrectomy (LAPG)-double tract anastomosis (DTA) and lower esophageal sphincter-preserving LAPG (LES-LAPG). Patients And Methods: Nine patients who underwent LES-LAPG and 23 LAPG-DTA were compared regarding 3-year follow-up complications, anemia-related parameters and albumin levels. Results: During the 3-year follow-up period, no patient (0%) in the LES-LAPG group and six patients (6/23, 26.1%) in the LAPG-DTA group developed reflux symptoms. Reflux symptoms were well controlled with PPI in all patients, except for one in the LAPG-DTA group. In the laboratory study, there were no differences in serum ferritin and vitamin B12 levels between the two groups. However, mean hemoglobin and albumin levels were significantly higher in the LES-LAPG group from 18 months postoperatively. Conclusion: LES-LAPG provided acceptable results for the prevention of reflux esophagitis. Moreover, LES-LAPG may be advantageous for anemia prevention and nutritional status improvement compared to LAPG-DTA. Answer: Yes, laparoscopy does lower the threshold for the surgical treatment of gastroesophageal reflux disease (GERD) in children.
The introduction of laparoscopic techniques at a tertiary referral center in 1997 led to a significant increase in the number of patients referred for antireflux operations, which could not be explained by an increase of referrals from outside the region or a change in the indication for surgery. This suggests that the minimally invasive nature of laparoscopy may make the option of surgery more acceptable to both healthcare providers and patients' families, thereby lowering the threshold for choosing surgical intervention for GERD in children (PUBMED:20706151).
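Note: the key study behind this answer (PUBMED:20706151) models the annual number of fundoplications as a linear trend over calendar years and reports the slope with its standard error and p value. A minimal sketch of that kind of trend test is shown below; the yearly counts are invented for illustration and do not reproduce the study data.

```python
from scipy import stats

years = list(range(1997, 2009))                               # 1997-2008 inclusive
fundoplications = [3, 4, 5, 5, 6, 8, 9, 11, 12, 14, 15, 17]   # hypothetical annual counts

res = stats.linregress(years, fundoplications)
print(f"slope = {res.slope:.2f} +/- {res.stderr:.2f} per year, p = {res.pvalue:.4f}")
# A positive slope with p < 0.05 indicates a significant yearly increase, analogous to the
# reported slope of 1.03 +/- 0.28 (P = 0.0043) for fundoplications; flat slopes for
# pyloromyotomies and appendicectomies would argue against a general rise in referrals.
```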
Instruction: Quality of life at baseline: is assessment after randomization valid? Abstracts: abstract_id: PUBMED:9794344 Quality of life at baseline: is assessment after randomization valid? The AVID Investigators. The Antiarrhythmics Versus Implantable Defibrillators. Objectives: The purpose of this report is to examine whether differences existed between patients who completed a baseline quality of life (QoL) form before being informed about their randomized assignments versus those who completed it after knowing their randomization assignments. Methods: In the pilot phase of the Antiarrhythmics Versus Implantable Defibrillators (AVID) study (n = 200), 113 patients completed a baseline QoL battery prior to randomization (drug versus defibrillator), 49 additional patients completed this battery after randomization, and 38 patients did not complete this battery. Baseline demographic, clinical and QoL data were compared for these groups. Results: Although the two groups with QoL data were not significantly different regarding various clinical and demographic characteristics, they did have significantly different QoL profiles. Patients with QoL collected before randomization had better overall QoL scores and mental health scores. Conclusions: These data suggest that patients with worse QoL may be less willing to complete a baseline QoL form in a timely manner or that knowledge of the randomization assignment may have an effect on QoL. abstract_id: PUBMED:31572268 Restricted Speech Recognition in Noise and Quality of Life of Hearing-Impaired Children and Adolescents With Cochlear Implants - Need for Studies Addressing This Topic With Valid Pediatric Quality of Life Instruments. Cochlear implants (CI) support the development of oral language in hearing-impaired children. However, even with CI, speech recognition in noise (SRiN) is limited. This raised the question, whether these restrictions are related to the quality of life (QoL) of children and adolescents with CI and how SRiN and QoL are related to each other. As a result of a systematic literature research only three studies were found, indicating positive moderating effects between SRiN and QoL of young CI users. Thirty studies addressed the quality of life of children and adolescents with CI. Following the criteria of the World Health Organization (WHO) for pediatric health related quality of life HRQoL (1994) only a minority used validated child centered and age appropriate QoL instruments. Moreover, despite the consensus that usually children and adolescents are the most prominent informants of their own QoL (parent-reports complement the information of the children) only a minority of investigators used self-reports. Restricted SRiN may be a burden for the QoL of children and adolescents with CI. Up to now the CI community does not seem to have focused on a possible impairment of QoL in young CI users. Further studies addressing this topic are urgently needed, which is also relevant for parents, clinicians, therapists, teachers, and policy makers. Additionally investigators should use valid pediatric QoL instruments. Most of the young CI users are able to inform about their quality of life themselves. abstract_id: PUBMED:30175605 Sarcopenia and quality of life: the validated Hungarian translation of the Sarcopenia Quality of Life (SarQoL) questionnaire Introduction: Sarcopenia, or age-related muscle loss, is emerging as a serious public health concern. 
Due to the impaired physical performance associated with sarcopenia, a reduced quality of life (QoL) has been evidenced in the affected individuals. Generic instruments, such as the Rand Corporation Short Form 36 (SF-36) or the European Quality of Life (EuroQoL-5D) questionnaires, do not accurately assess the impact of sarcopenia on QoL. The SarQoL (Sarcopenia Quality of Life) questionnaire was the first disease-specific questionnaire addressing quality of life in patients with sarcopenia; it was recently designed to provide a global assessment of quality of life in community-dwelling elderly subjects aged 65 years and older. Aim: Our aim was the development of a valid Hungarian version of the original SarQoL, through the translation, cultural adaptation and content validation of the original questionnaire. Method: We followed the recommended process, the international protocol of translation in five steps: two initial translations, synthesis of the two translations, backward translation, an expert committee to compare translations with the original questionnaire, and pretest. The pretest process involved 20 subjects (10 clinically sarcopenic and 10 non-sarcopenic, with different educational and socioeconomic backgrounds) who were asked to complete the questionnaire. Feedback was requested from all subjects regarding the comprehensibility of questions or difficulties in completing the test. Results: Using the recommended best-practice protocol for translation, the pre-final version is comparable with the original instrument in terms of content and accuracy. Conclusion: After content validation with clinically sarcopenic persons, it should be a useful tool to assess the quality of life of people with sarcopenia among elderly Hungarian patients. Orv Hetil. 2018; 159(36): 1483-1486. abstract_id: PUBMED:25664715 Quality-of-life measures in fecal incontinence: is validation valid? Background: Multiple health measurement scales have been used to study patients with fecal incontinence, but none have met the needs for clinical use and research perfectly. These include severity scales and generic and condition-specific quality-of-life scales. Several different approaches have been used to develop and evaluate the internal and external validity of these scales. Objective: As a step toward an improved quality-of-life instrument for fecal incontinence, the present study aimed to provide a critical review of the psychometric methodology of existing generic and condition-specific quality-of-life scales by using a standard measurement model. Design: This study is a retrospective review. Settings: Two investigators experienced in psychometric methodology reviewed source articles from frequently used fecal incontinence quality-of-life scales. Patients: Patients with fecal incontinence were identified. Main Outcome Measures: The primary outcome measured was the demonstration of at least 1 reliability criterion, content validity, construct validity, and either criterion validity or discriminative validity. Results: A total of 12 scales were identified. The reported methodology varied considerably. Most scales demonstrated convergent validity and test-retest reliability, whereas very few scales demonstrated internal consistency or predictive validity. Generic scales were found to be reliable and valid, but not responsive to condition severity. There was a wide range of methodology used in scale development and a wide diversity in the psychometric rigor.
Limitations: Variations in scale construction, data reporting, and validity testing made the evaluation of fecal incontinence quality-of-life scales by using a standardized measurement model difficult. Conclusions: Identifying deficiencies in validity testing and reporting of existing scales is vital for future creation of a useful validated instrument to measure quality of life in patients with fecal incontinence. abstract_id: PUBMED:19617757 A valid and reliable measure of constipation-related quality of life. Purpose: Few existing measures assess constipation-specific quality of life. This study sought to develop a valid and reliable quality-of-life measure for constipation. Methods: First, we created a preliminary instrument that assessed quality-of-life domains affected by constipation: body image, eating, mood, and relationships with others. We conducted focus groups both with patients with constipation seeking treatment and the health care providers who treat them. Next, a 59-item questionnaire was given to 240 subjects with constipation (83% female) and 103 healthy volunteers (63% female). Test-retest reliability and discriminant, convergent, and divergent validity were assessed. Results: Exploratory factor analysis revealed four domains: Social Impairment (five items), Distress (six items), Eating Habits (three items), and Bathroom Attitudes (four items). Internal consistency and test-retest reliability for all subscales were high (Cronbach's alpha = 0.89; intraclass correlation coefficient = 0.87). All domains discriminated well between subjects with constipation and healthy volunteers (P < 0.001). Convergent validity was excellent: all subscales correlated highly with the Irritable Bowel Syndrome Quality of Life Scale total score (P < 0.001) and the Medical Outcomes Study Short Form-36 physical component and mental component summary scores (P < 0.001). Scores from our Constipation-Related Quality of Life measure were not significantly correlated with the Social Desirability Scale, demonstrating divergent validity. Conclusions: Our findings support the reliability and validity of the Constipation-Related Quality of Life measure. Future validation of the Constipation-Related Quality of Life measure for assessing changes in quality of life in response to treatments for constipation is needed. abstract_id: PUBMED:19365262 Quality of life in food allergy: valid scales for children and adults. Purpose Of Review: The purpose of this review is to give an overview of how health-related quality of life (HRQL) can be measured in food allergy and to explore recent findings on how food allergy might impact HRQL. Recent Findings: In addition to the more familiar burdens of having a food allergy, the psychosocial impact of food allergy and information gaps concerning food allergy have received much attention in the recent literature. Recently, reliable and valid disease-specific HRQL questionnaires have become available to measure the impact of food allergy on HRQL in food allergic patients of all ages. Summary: Assessment of HRQL could be used by clinicians to get insight into the specific problems patients have to face. In addition, HRQL measurements may be used to measure the effects of an intervention on the patient's quality of life. Finally, HRQL is the only available measure reflecting the ongoing severity of food allergy, as no objective disease parameters are available.
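The internal-consistency figures quoted in the abstracts above (e.g., Cronbach's alpha = 0.89 for the constipation measure, PUBMED:19617757) can be computed directly from raw item scores. The following is a minimal illustrative sketch in Python; the Likert responses are invented for the example and are not data from any of the cited studies:

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scale scores
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert answers: 6 respondents x 4 items (not study data)
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 2))  # values close to 1 indicate strong internal consistency

Test-retest reliability, reported alongside alpha in several of these abstracts, would instead be estimated with an intraclass correlation across two administrations of the same scale.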
abstract_id: PUBMED:31726229 Development of a valid and reliable instrument for the assessment of quality of life in parents of children with clefts. Purpose: Orofacial clefts are the most common congenital malformations that affect craniofacial structures. Studies show that they have a major influence on the psychological development of the patients and on their families. A review of the literature showed a lack of specific questionnaires for children and their parents. This study investigated the impact of orofacial clefts in children on the quality of life of their parents. In addition, the results of the treatment and the quality of work of the health team members involved in this process were evaluated. Materials And Methods: For the purpose of this study, an original questionnaire was developed to analyse the effect of orofacial clefts in children who had undergone surgery on the quality of life of 73 of their parents. The questionnaire consisted of 28 simple statements, which were evaluated with a 5-degree Likert scale (from 1-fully disagree to 5-fully agree), did not require any specific additional clarification, and were easy to complete. Results: Analysis of areas of the questionnaire that applied to the parents resulted in two subscales, parental social health and child social health, which had satisfactory Cronbach's coefficients (0.907 and 0.897, respectively). However, some items had a relatively poor coefficient of internal consistency, which justified their exclusion from the final model of the parent questionnaire. Conclusion: The questionnaire developed for this study comprised two subscales concerned with the social health of parents/respondents and the social health of adolescents, as perceived by the parents. It was a valid and reliable instrument, and it showed satisfactory quality of life for parents of adolescents with clefts. abstract_id: PUBMED:36856375 Quality of life in caregivers of melanoma patients Background: The impact of melanoma on quality of life (QoL) is not limited to the patient but may also affect caregivers. Objectives: To investigate the impact of melanoma on caregivers' QoL. Materials & Methods: Caregivers of melanoma patients were recruited at the melanoma unit of our hospital. The impact on caregivers' QoL was measured using the Family Dermatology Life Quality Index (FDLQI). Results: Data were collected for 120 caregivers, of whom 51.7% were men and the mean age was 56.9 years. Breslow thickness of melanoma was <0.8 mm in 70.8% of cases. Mean FDLQI score was 5.7 (SD: 2.4). Among the single items of the FDLQI, the highest mean score corresponded to emotional distress. The impact on QoL was greater when the caregiver was a son/daughter, and increased relative to the age of the patient and number of years since diagnosis. Conclusion: To our knowledge, this is the first study to quantitatively evaluate the impact of melanoma on caregivers. Such impact was not negligible and mostly concerned emotional aspects. Caregivers need to be supported by structured educational and psychological interventions. abstract_id: PUBMED:37338962 Two-stage multivariate Mendelian randomization on multiple outcomes with mixed distributions. In clinical research, it is important to study whether certain clinical factors or exposures have causal effects on clinical and patient-reported outcomes such as toxicities, quality of life, and self-reported symptoms, which can help improve patient care.
Usually, such outcomes are recorded as multiple variables with different distributions. Mendelian randomization (MR) is a commonly used technique for causal inference with the help of genetic instrumental variables to deal with observed and unobserved confounders. Nevertheless, the current methodology of MR for multiple outcomes only focuses on one outcome at a time, meaning that it does not consider the correlation structure of multiple outcomes, which may lead to a loss of statistical power. In situations with multiple outcomes of interest, especially when there are mixed correlated outcomes with different distributions, it is much more desirable to jointly analyze them with a multivariate approach. Some multivariate methods have been proposed to model mixed outcomes; however, they do not incorporate instrumental variables and cannot handle unmeasured confounders. To overcome the above challenges, we propose a two-stage multivariate Mendelian randomization method (MRMO) that can perform multivariate analysis of mixed outcomes using genetic instrumental variables. We demonstrate that our proposed MRMO algorithm can gain power over the existing univariate MR method through simulation studies and a clinical application on a randomized Phase III clinical trial study on colorectal cancer patients. abstract_id: PUBMED:32097714 Two valid and reliable short forms of the Singapore caregiver quality of life scale were developed: SCQOLS-10 and SCQOLS-15. Objectives: The Singapore Caregiver Quality of Life Scale (SCQOLS) comprises five domains and 51 items in total. This study aimed to develop and evaluate short forms of SCQOLS. Study Design And Setting: Data were collected from 612 family caregivers of patients with advanced cancers in Singapore. Exploratory factor analysis and best subset regression were used to identify candidate items for each domain. The short forms were evaluated for measurement properties. Results: Inclusion of at least two items per domain in the short form gave correlation coefficients of at least 0.8 with the corresponding domain scores in the full-length version. The short forms and full-length version had similar correlation coefficients with Negative Personal Impact and Positive Personal Impact measures. Two of the five domains of the 10-item version had Cronbach's alpha ≤0.50 and test-retest reliability ≤0.65. A 15-item version of the scale with two to four items per domain performed satisfactorily in all aspects evaluated. Conclusion: A 15-item short form of the SCQOLS (SCQOLS-15) is valid and reliable for the assessment of the overall and domain-specific quality of life of family caregivers. A 10-item short form (SCQOLS-10) may serve as a quick, valid and reliable assessment of the overall level of quality of life. Answer: The assessment of quality of life (QoL) at baseline after randomization may not be valid. This is suggested by the findings from the pilot phase of the Antiarrhythmics Versus Implantable Defibrillators (AVID) study, where significant differences in QoL profiles were observed between patients who completed a baseline QoL form before being informed about their randomized assignments and those who completed it after knowing their randomization assignments. Specifically, patients who completed the QoL assessment before randomization had better overall QoL scores and mental health scores (PUBMED:9794344). 
This indicates that knowledge of the randomization assignment may have an effect on QoL, or that patients with worse QoL may be less willing to complete a baseline QoL form in a timely manner. Therefore, to ensure the validity of baseline QoL assessments in clinical trials, it is preferable to collect this data before patients are informed of their randomization assignments.
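PUBMED:37338962 above proposes a two-stage multivariate Mendelian randomization method (MRMO). The abstract does not spell out the algorithm, so the sketch below only illustrates the generic two-stage (2SLS) instrumental-variable idea that such methods build on, with one simulated exposure, one outcome, and a handful of genetic instruments; it is not an implementation of MRMO itself, and all data and effect sizes are fabricated for illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data (purely illustrative): G = genetic instruments (allele counts),
# U = unmeasured confounder, X = exposure, Y = outcome with true causal effect 0.5
G = rng.binomial(2, 0.3, size=(n, 3)).astype(float)
U = rng.normal(size=n)
X = G @ np.array([0.4, 0.3, 0.2]) + U + rng.normal(size=n)
Y = 0.5 * X + U + rng.normal(size=n)

def ols(design, response):
    # least-squares coefficients with an intercept column prepended
    A = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(A, response, rcond=None)
    return coef

# Stage 1: regress the exposure on the instruments and keep the fitted values
stage1_coef = ols(G, X)
X_hat = np.column_stack([np.ones(n), G]) @ stage1_coef

# Stage 2: regress the outcome on the fitted exposure; confounding by U is
# removed to the extent that the instruments affect Y only through X
naive = ols(X.reshape(-1, 1), Y)[1]       # biased upward by the confounder
causal = ols(X_hat.reshape(-1, 1), Y)[1]  # should land close to the true 0.5
print(round(naive, 3), round(causal, 3))

The MRMO method described in the abstract additionally models several outcomes with mixed distributions jointly; that joint step is not reproduced here.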
Instruction: Are risk factors for breast cancer associated with follow-up procedures in diverse women with abnormal mammography? Abstracts: abstract_id: PUBMED:15947876 Are risk factors for breast cancer associated with follow-up procedures in diverse women with abnormal mammography? Objective: We evaluated the association of risk factors for breast cancer with reported follow-up procedures after abnormal mammography among diverse women. Methods: Women ages 40-80 years were recruited from four clinical sites after receiving a screening mammography result that was classified as abnormal but probably benign, suspicious or highly suspicious, or indeterminate using standard criteria. A telephone-administered survey asked about breast cancer risk factors (family history, estrogen use, physical inactivity, age of menarche, age at birth of first child, parity, alcohol use), and self-reported use of diagnostic tests (follow-up mammogram, breast ultrasound, or biopsy). Results: Nine hundred and seventy women completed the interview; mean age was 56, 42% were White, 19% Latina, 25% African American, and 15% Asian. White women were more likely to have a positive family history (20%), use estrogen (32%), be nulliparous (17%) and drink alcohol (62%). Latinas were more likely to be physically inactive (93%), African Americans to have early onset of menarche (53%) and Asians to have a first child after age 30 (21%). White women were more likely to have suspicious mammograms (40%) and to undergo biopsy (45%). In multivariate models, Latinas were more likely to report breast ultrasound, physically inactive women reported fewer follow-up mammograms, and care outside the academic health center was associated with fewer biopsies. Indeterminate and suspicious mammography interpretations were significantly associated with more biopsy procedures (OR=8.4; 95% CI=3.8-18.5 and OR=59; 95% CI=35-100, respectively). Conclusions: Demographic profile and breast cancer risk factors have little effect on self-reported use of diagnostic procedures following an abnormal mammography examination. Level of mammography abnormality determines diagnostic evaluation, but variance by site of care was observed. abstract_id: PUBMED:19558307 Obesity, gynecological factors, and abnormal mammography follow-up in minority and medically underserved women. Background: The relationship between obesity and screening mammography adherence has been examined previously, yet few studies have investigated obesity as a potential mediator of timely follow-up of abnormal (Breast Imaging Reporting and Data System [BIRADS-0]) mammography results in minority and medically underserved patients. Methods: We conducted a retrospective cohort study of 35 women who did not return for follow-up >6 months from index abnormal mammography and 41 who returned for follow-up ≤6 months in Nashville, Tennessee. Patients with a BIRADS-0 mammography event in 2003-2004 were identified by chart review. Breast cancer risk factors were collected by telephone interview. Multivariate logistic regression was performed on selected factors with return for diagnostic follow-up. Results: Obesity and gynecological history were significant predictors of abnormal mammography resolution. A significantly higher frequency of obese women delayed return for mammography resolution compared with nonobese women (64.7% vs. 35.3%). A greater number of hysterectomized women returned for diagnostic follow-up compared with their counterparts without a hysterectomy (77.8% vs. 22.2%).
Obese patients were more likely to delay follow-up >6 months (adjusted OR 4.09, p = 0.02). Conversely, hysterectomized women were significantly more likely to return for timely mammography follow-up ≤6 months (adjusted OR 7.95, p = 0.007). Conclusions: Study results suggest that weight status and gynecological history influence patients' decisions to participate in mammography follow-up studies. Strategies are necessary to reduce weight-related barriers to mammography follow-up in the healthcare system, including provider training related to mammography screening of obese women. abstract_id: PUBMED:8888152 Timeliness of follow-up after abnormal screening mammography. Little information has been published concerning the timeliness of follow-up after abnormal mammography. This article presents data on follow-up after abnormal mammography, including differences in follow-up by age, race, mammographic interpretation, and type of tracking system. From unpublished data, the rate of timely follow-up 8 to 12 weeks after index abnormal mammography ranges from 69% to 99%. Women aged 65 and older, those of lower socioeconomic status, and those who are instructed to have repeat evaluations in four to six months have the highest proportion of untimely follow-up. With use of computer-based tracking systems, timely follow-up ranges from 89% to 99%. Computer-based tracking systems should be encouraged to promote timely follow-up of abnormal mammography. Further research is needed to better delineate those at risk for untimely follow-up after abnormal mammography, causes of untimely follow-up, the impact of untimely follow-up on breast cancer stage and mortality, and interventions that maximize timely follow-up. abstract_id: PUBMED:20173286 Psychosocial determinants of mammography follow-up after receipt of abnormal mammography results in medically underserved women. This article targets the relationship between psychosocial determinants and abnormal screening mammography follow-up in a medically underserved population. Health belief scales were modified to refer to diagnostic follow-up versus annual screening. A retrospective cohort study design was used. Statistical analyses were performed examining relationships among sociodemographic factors, psychosocial determinants, and abnormal mammography follow-up. Women with lower mean internal health locus of control scores (3.14) were two times more likely than women with higher mean internal health locus of control scores (3.98) to have inadequate follow-up (OR=2.53, 95% CI=1.12-5.36). Women with less than a high school education had lower cancer fatalism scores than women who had completed high school (47.5 vs. 55.2, p-value=.02) and lower mean external health locus of control scores (3.0 vs. 5.3) (p-value<.01). These constructs have implications for understanding mammography follow-up among minority and medically underserved women. Further comprehensive study of these concepts is warranted. abstract_id: PUBMED:32601926 Organization Communication Factors and Abnormal Mammogram Follow-up: a Qualitative Study Among Ethnically Diverse Women Across Three Healthcare Systems. Background: Regular mammogram screening for eligible average risk women has been associated with early detection and reduction of cancer morbidity and mortality. Delayed follow-up and resolution of abnormal mammograms limit early detection efforts and can cause psychological distress and anxiety.
Objective: The goal of this study was to gain insight from women's narratives into how organizational factors related to communication and coordination of care facilitate or hinder timely follow-up for abnormal mammogram results. Design: We conducted 61 qualitative in-person interviews with women from four race-ethnic groups (African American, Chinese, Latina, and White) in three different healthcare settings (academic, community, and safety-net). Participants: Eligible participants had an abnormal mammogram result requiring breast biopsy documented in the San Francisco Mammography Registry in the previous year. Approach: Interview narratives included reflections on experience and suggested improvements to communication and follow-up processes. A grounded theory approach was used to identify themes across interviews. Key Results: Participants' experiences of follow-up and diagnosis depended largely on communication processes. Twenty-one participants experienced a follow-up delay (> 30 days between index mammogram and biopsy). Organizational factors, which varied across different institutions, played key roles in effective communication which included (a) direct verbal communication with the ability to ask questions, (b) explanation of medical processes and terminology avoiding jargon, and (c) use of interpretation services for women with limited English proficiency. Conclusion: Health organizations varied in their processes for abnormal results communication and availability of support staff and interpretation services. Women who received care from institutions with more robust support staff, such as bilingual navigators, more often than not reported understanding their results and timely abnormal mammogram follow-up. These reports were consistent across women from diverse ethnic groups and suggest the value of organizational support services between an abnormal mammogram and resolution for improving follow-up times and minimizing patient distress. abstract_id: PUBMED:36791753 Factors Associated With False-Positive Recalls in Mammography Screening. Background: We aimed to identify factors associated with false-positive recalls in mammography screening compared with women who were not recalled and those who received true-positive recalls. Methods: We included 29,129 women, aged 40 to 74 years, who participated in the Karolinska Mammography Project for Risk Prediction of Breast Cancer (KARMA) between 2011 and 2013 with follow-up until the end of 2017. Nonmammographic factors were collected from questionnaires, mammographic factors were generated from mammograms, and genotypes were determined using the OncoArray or an Illumina custom array. By the use of conditional and regular logistic regression models, we investigated the association between breast cancer risk factors and risk models and false-positive recalls. Results: Women with a history of benign breast disease, high breast density, masses, microcalcifications, high Tyrer-Cuzick 10-year risk scores, KARMA 2-year risk scores, and polygenic risk scores were more likely to have mammography recalls, including both false-positive and true-positive recalls. 
Further analyses restricted to women who were recalled found that women with a history of benign breast disease and dense breasts had a similar risk of having false-positive and true-positive recalls, whereas women with masses, microcalcifications, high Tyrer-Cuzick 10-year risk scores, KARMA 2-year risk scores, and polygenic risk scores were more likely to have true-positive recalls than false-positive recalls. Conclusions: We found that risk factors associated with false-positive recalls were also likely, or even more likely, to be associated with true-positive recalls in mammography screening. abstract_id: PUBMED:34196901 Variations in pathways and resource use in follow-up after abnormal mammography screening: a nationwide register-based study. Purpose: Mammography screening reduces breast cancer mortality, but a successful screening programme depends on both high participation and a sufficient follow-up of abnormalities. This study investigated patterns of follow-up after abnormal screening mammography in Denmark, and whether the variation was associated with health care resource use. Methods: We included 19,458 women aged 50-69 years with an abnormal screening mammography during a 3-year period of 2014-2016. Women were followed until the end of 2018. Their follow-up pathway was categorized in terms of the timeliness, appropriateness (i.e. whether all recommended diagnostic tests were utilized), and the ratio of benign vs. malignant surgeries. Further, we estimated health care resource use including post-diagnostic imaging and surgery procedures. Results: Ninety-seven percent of women had a diagnostic follow-up test within 6 months and 94% of those had diagnostic procedures in accordance with the recommendations. The proportion with timely follow-up (i.e. within 1 month) was 83%, but varied significantly between administrative regions (p < 0.001), and also between women with a screen-detected cancer and those with a false-positive mammogram (87% vs. 81%, p < 0.001). The ratio between having a benign versus a malignant surgery was 1:8, but it varied depending on which tests were used for diagnosis. The average number of procedures was, generally, in accordance with the recommendations. Conclusion: In most cases, follow-up after abnormal screening mammography followed national recommendations. We nevertheless found that this was not always the case in certain subgroups and administrative regions. abstract_id: PUBMED:8839544 Racial differences in timeliness of follow-up after abnormal screening mammography. Background: To determine whether patient race was associated with timeliness of follow-up after abnormal screening mammography, a retrospective record review of diagnostic tests for women with abnormal screening mammography from a Northern California mobile van program was conducted. Methods: The study included 317 women between the ages of 33 and 85 who were reported to have abnormal screening mammography between July 1993 and May 1994. Measurements included patient demographics, screening mammography interpretation, follow-up diagnostic tests, and dates of diagnostic evaluation. Results: Women with abnormal screening mammography underwent a wide variety of diagnostic evaluations. Nonwhite women had significantly longer time (median time, 19 days) from date of index abnormal screening mammography to final disposition compared with white women (median time, 12 days). 
This racial difference was primarily due to the longer interval between index abnormal screening mammography and first diagnostic test (median time, 15 days for nonwhite women versus 7 days for white women, P < 0.001). The difference persisted when adjusting for patient age, family history of breast cancer, report of palpable mass, and income. The racial difference was similarly significant for each nonwhite subgroup (African American, Latina, and Asian) when compared with white women (P < 0.01). Conclusions: Reasons for less timely follow-up of abnormal mammography among nonwhite women need to be identified. Delays that may be instigated by the patient or be due to her physician or system of care need to be explored further. abstract_id: PUBMED:16132791 Inadequate follow-up of abnormal screening mammograms: findings from the race differences in screening mammography process study (United States). Objective: Despite relatively high mammography screening rates, there are reports of inadequate follow-up of abnormal results. Our objective was to identify factors associated with inadequate follow-up, and specifically, to determine if this outcome differed by race/ethnicity. Methods: We studied 176 subjects with abnormal or inconclusive mammograms identified from a prospective cohort study of African-American (n = 635) and White (n = 816) women who underwent screening in five hospital-based facilities in Connecticut, October 1996 through January 1998. Using multivariate logistic regression, we identified independent predictors of inadequate follow-up of an abnormal mammogram. Results: Over 28% of women requiring immediate or short-term follow-up did not receive this care within three months of the recommended return date. African-American race/ethnicity, pain during the mammogram, and lack of a usual provider were significant independent predictors of inadequate follow-up. Although many factors were examined, the observed race difference was unexplained. Conclusions: While inadequate follow-up of abnormal exams undermines the potential benefits of mammography screening for all women, the observed race difference in this study may have implications for the persistent race difference in breast cancer stage at diagnosis and survival. More research is needed to identify factors that contribute to poor follow-up among African-American women. abstract_id: PUBMED:12965983 Evaluation of abnormal mammography results and palpable breast abnormalities. Background: Because approximately 1 in 10 women with a breast lump or abnormal mammography result will have breast cancer, a series of decisions must be taken by a primary care practitioner to exclude or establish a diagnosis of breast cancer among these women. Purpose: To determine the most accurate and least invasive means to evaluate an abnormal mammography result and a palpable breast abnormality. Data Source: MEDLINE search (January 1966 to March 2003) for articles and reviews describing the accuracy of clinical examination, biopsy procedures, and radiographic examination for patients with abnormal mammography results or palpable breast abnormalities. Study Selection: The authors reviewed abstracts and selected articles that provided relevant primary data. 
Studies were included if 1) mammography, fine-needle aspiration biopsy, or core-needle biopsy was performed before a definitive diagnosis was obtained; 2) the study sample included 100 or more women; and 3) breast cancer status was determined from histopathology review of excisional biopsy specimens, from linkage with a state cancer registry or the Surveillance, Epidemiology, and End Results program, or from clinical follow-up of 95% or more of the study sample. Data Extraction: One investigator abstracted results. Methods were evaluated for major potential biases, but methodologic scoring was not performed. Data Synthesis: Likelihood ratios for first screening mammography were 0.1 for the Breast Imaging Reporting and Data System (BI-RADS) assessment category "negative or benign finding," 1.2 for "probably benign finding," 7 for "need additional imaging evaluation," 125 for "suspicious abnormality," and 2200 for "highly suggestive of malignancy." For fine-needle aspiration biopsy of a palpable lump performed by formally trained physicians, the likelihood ratio was infinity for an assessment of "malignant," 2.6 for "atypical/suspicious," and 0.02 for "benign." When diagnostic mammography was used to evaluate a palpable lump or nonpalpable breast abnormality, the positive likelihood ratios were 5.6 and 9.4, and the negative likelihood ratios were 0.15 and 0.19, respectively. Conclusions: Women whose screening mammography results are interpreted as "suspicious abnormality" or "highly suggestive of malignancy" have a high risk for breast cancer and should undergo core-needle biopsy or needle localization with surgical biopsy. Women whose screening mammography results are interpreted as "need additional imaging evaluation" have a moderate risk for breast cancer and should undergo diagnostic mammography or ultrasonography to decide whether a nonpalpable breast lesion should be biopsied. Women whose screening mammography results are interpreted as "probably benign finding" have a low risk for breast cancer and can undergo follow-up mammography in 6 months. Either fine-needle aspiration biopsy or ultrasonography is recommended as the first diagnostic test of a palpable breast abnormality to distinguish simple cysts from solid masses. Fine-needle aspiration biopsy also allows characterization of a solid mass. Diagnostic mammography does not help determine whether a palpable breast mass should be biopsied and should not affect the decision to perform a biopsy. Answer: Risk factors for breast cancer do not appear to have a strong association with the use of follow-up procedures after an abnormal mammography examination in diverse women. The study by PUBMED:15947876 found that demographic profile and breast cancer risk factors had little effect on self-reported use of diagnostic procedures following an abnormal mammography. Instead, the level of mammography abnormality was a significant determinant of diagnostic evaluation, although some variance by site of care was observed. For instance, indeterminate and suspicious mammography interpretations were significantly associated with more biopsy procedures. However, other studies have identified certain factors that may influence follow-up actions. For example, PUBMED:19558307 reported that obesity and gynecological history, such as having a hysterectomy, were significant predictors of whether women returned for timely follow-up after an abnormal mammogram. 
Obese women were more likely to delay follow-up, while hysterectomized women were more likely to return for timely follow-up. Additionally, PUBMED:20173286 found that psychosocial determinants, such as internal health locus of control and education level, were related to abnormal mammography follow-up in medically underserved women. Women with lower internal health locus of control scores were more likely to have inadequate follow-up. Furthermore, organizational communication factors, including the ability to ask questions, explanation of medical processes, and use of interpretation services, were found to facilitate or hinder timely follow-up for abnormal mammogram results (PUBMED:32601926). In summary, while traditional breast cancer risk factors may not directly influence the likelihood of follow-up procedures after abnormal mammography, other factors such as obesity, gynecological history, psychosocial determinants, and organizational communication factors can play a role in follow-up actions among diverse women (PUBMED:15947876, PUBMED:19558307, PUBMED:20173286, PUBMED:32601926).
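The likelihood ratios reported in PUBMED:12965983 above (e.g., 0.1 for a "negative or benign finding", 7 for "need additional imaging evaluation", 125 for a "suspicious abnormality") translate a pre-test probability of breast cancer into a post-test probability via Bayes' rule in odds form. The short sketch below illustrates that arithmetic; the 1% pre-test probability is an arbitrary illustrative value, not a figure taken from the abstract:

def post_test_probability(pre_test_prob, likelihood_ratio):
    # Bayes' rule in odds form: post-test odds = pre-test odds * likelihood ratio
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

pre = 0.01  # assumed pre-test probability, for illustration only
for label, lr in [("negative or benign finding", 0.1),
                  ("need additional imaging evaluation", 7),
                  ("suspicious abnormality", 125)]:
    print(label, round(post_test_probability(pre, lr), 3))

With these numbers, a "suspicious abnormality" reading raises a 1% pre-test probability to roughly 56%, which is consistent with the abstract's recommendation that such findings proceed directly to biopsy.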
Instruction: Lack of insight 3 years after first-episode psychosis: an unchangeable illness trait determined from first presentation? Abstracts: abstract_id: PUBMED:24934905 Lack of insight 3 years after first-episode psychosis: an unchangeable illness trait determined from first presentation? Background: Lack of insight is recognized as a symptom that predisposes the individuals with psychosis to noncompliance with the treatment, leading to poorer course of illness. This study aimed to explore baseline predictors of disturbances on insight at follow-up. Methods: Three insight dimensions (insight of: 'mental illness', 'need for treatment' and 'the social consequences of the disorder') were measured with the Scale to Assess Unawareness of Mental Disorder (SUMD) in a cohort of 224 first-episode psychosis (FEP) patients at 3-year follow-up. Subgroups, good vs. poor insight, were compared on baseline clinical, neuropsychological, premorbid and sociodemographic characteristics. Regression models tested baseline predictors for each insight dimension. Results: At 3-year follow-up a high percentage of patients, 45%, 36% and 33% for each dimension, were found to remain lacking insight. Poor insight into having an illness was predicted by a diagnosis of schizophrenia and poor baseline insight of the social consequences; insight into the need for treatment was predicted by adolescent adjustment and depression at baseline; and insight into the social consequences of the disorder was determined by late adolescent adjustment and baseline insight of mental illness. Conclusions: Our findings support the hypothesis that long-term insight in psychosis seems to be, to some extent, determined from first presentation, showing trait-like properties. A subgroup of 'lacking insight' patients, which is characterized by a diagnosis of schizophrenia, lower levels of premorbid adjustment and less severe depressive symptoms at baseline might benefit from special interventions targeted at enhancing insight from their first contact with psychiatric services. abstract_id: PUBMED:34230974 Dynamic Interplay Between Insight and Persistent Negative Symptoms in First Episode of Psychosis: A Longitudinal Study. Persistent negative symptoms (PNS) are an important factor of first episode of psychosis (FEP) that present early on in the course of illness and have a major impact on long-term functional outcome. Lack of clinical insight is consistently associated with negative symptoms during the course of schizophrenia, yet only a few studies have explored its evolution in FEP. We sought to explore clinical insight change over a 24-month time period in relation to PNS in a large sample of FEP patients. Clinical insight was assessed in 515 FEP patients using the Scale to assess Unawareness of Mental Disorder. Data on awareness of illness, belief in response to medication, and belief in need for medication were analyzed. Patients were divided into 3 groups based on the presence of negative symptoms: idiopathic (PNS; n = 135), secondary (sPNS; n = 98), or absence (non-PNS; n = 282). Secondary PNS were those with PNS but also had clinically relevant levels of positive, depressive, or extrapyramidal symptoms. Our results revealed that insight improved during the first 2 months for all groups. Patients with PNS and sPNS displayed poorer insight across the 24-month period compared to the non-PNS group, but these 2 groups did not significantly differ. 
This large longitudinal study supported the strong relationship known to exist between poor insight and negative symptoms early in the course of the disorder and probed into potential factors that transcend the distinction between idiopathic and secondary negative symptoms. abstract_id: PUBMED:35569423 Effect of cognitive insight on clinical insight from pre-morbid to early psychosis stages. Poor cognitive insight, including low self-reflectiveness and high self-certainty, contributes to poor clinical insight, which includes awareness of illness, relabelling of specific symptoms, and treatment compliance. However, inconsistent results regarding cognitive insight among individuals at clinical high risk of psychosis (CHR) have been reported. This study investigated the difference in cognitive insight among groups with different severity of positive symptoms and analysed the effect of cognitive insight on clinical insight in each group. All participants, including CHR individuals with 3 or 4 points (L-Pitem, n = 85) and 5 points (H-Pitem, n = 37) on any positive-symptom item of the Scale of Prodromal Syndromes, and patients with first-episode psychosis (FEP, n = 59), were assessed for cognitive and clinical insight using the Beck Cognitive Insight Scale and the Schedule of Assessment of Insight, respectively. The self-reflectiveness of cognitive insight was highest in the L-Pitem group and lowest in the FEP group. Self-reflectiveness was positively associated with awareness of illness in the L-Pitem and FEP groups; both self-reflectiveness and self-certainty were positively associated with treatment compliance in the L-Pitem group. Improving the self-reflectiveness of cognitive insight may be conducive to good clinical insight. Self-certainty may have different implications for individuals with mild prodromal symptoms. abstract_id: PUBMED:38353751 The influence of gender in cognitive insight and cognitive bias in people with first-episode psychosis: an uncontrolled exploratory analysis. Purpose: Previous studies have investigated the role of gender in clinical symptoms, social functioning, and neuropsychological performance in people with first-episode psychosis (FEP). However, the evidence of gender differences for metacognition in subjects with FEP is still limited and controversial. The aim of the present study was to explore gender differences in cognitive insight and cognitive biases in this population. Methods: A cross-sectional study was carried out in a sample of 104 patients with FEP (35 females and 69 males) recruited from mental health services. Symptoms were assessed with the Positive and Negative Syndrome Scale, cognitive insight with the Beck Cognitive Insight Scale, and cognitive biases with the Cognitive Biases Questionnaire for Psychosis. The assessment also included clinical and sociodemographic characteristics. Results: After controlling for potential confounders (level of education, marital status, and duration of psychotic illness), analysis of covariance revealed that males presented greater self-reflectiveness (p = 0.004) when compared to females. However, no significant differences were found in self-certainty or the composite index of the cognitive insight scale, nor in the cognitive biases assessed. Conclusions: Gender was an independent influencing factor for self-reflectiveness, which was greater in males.
Self-reflectiveness, if shown to be relatively lacking in women, could contribute to the design of more gender-sensitive and effective psychotherapeutic treatments, as being able to self-reflect predicts better treatment response in psychosis. abstract_id: PUBMED:37438084 The role of insight, social rank, mindfulness and self-compassion in depression following first episode psychosis. Gaining awareness of psychosis (i.e., insight) is linked to depression, particularly in the post-acute phase of psychosis. Informed by social rank theory, we examined whether the insight-depression relationship is explained by reduced social rank related to psychosis and whether self-compassion (including uncompassionate self-responding [UCS] and compassionate self-responding [CSR]) and mindfulness buffered the relationship between social rank and depression in individuals with first episode psychosis during the post-acute phase. Participants were 145 young people (Mage = 20.81; female = 66) with first episode psychosis approaching discharge from an early psychosis intervention centre. Questionnaires and interviews assessed insight, depressive symptoms, perceived social rank, self-compassion, mindfulness and illness severity. Results showed that insight was not significantly associated with depression and thus no mediation analysis was conducted. However, lower perceived social rank was related to higher depression, and this relationship was moderated by self-compassion and, more specifically, UCS. Mindfulness was related to depression but had no moderating effect on social rank and depression. Results supported previous findings that depressive symptoms are common during the post-acute phase. The role of insight in depression for this sample is unclear and may be less important during the post-acute phase than previously considered. Supporting social rank theory, the results suggest that low perceived social rank contributes to depression, and reducing UCS may ameliorate this effect. UCS, social rank and possibly mindfulness may be valuable intervention targets for depression intervention and prevention efforts in the recovery of psychosis. abstract_id: PUBMED:33493778 Suicidal ideation in first-episode psychosis: Considerations for depression, positive symptoms, clinical insight, and cognition. Background: Suicide is a leading cause of death for individuals with psychosis. Although factors influencing suicide risk have been studied in schizophrenia, far less is known about factors that protect against or trigger increased risk during early-stage and first episode of psychosis. This study examined whether depression, psychotic symptoms, clinical insight, and cognition were associated with suicide ideation among individuals with first-episode psychosis. Methods: Data were obtained from the Recovery After an Initial Schizophrenia Episode (RAISE) project. Participants (n = 404) included adults between ages 15 and 40 in a first episode of psychosis. Measurement included the Positive and Negative Syndrome Scale, Brief Assessment of Cognition in Schizophrenia, and Calgary Depression Scale for Schizophrenia. A logistic regression model evaluated clinical and cognitive variables as predictors of suicidal ideation. Results: Greater positive symptoms (OR = 1.085, p < .01) and depression (OR = 1.258, p < .001) were associated with increased likelihood of experiencing suicidal ideation during the RAISE project.
Meanwhile, stronger working memory (OR = 0.922, p < .05) and impaired clinical insight (OR = 0.734, p < .05) were associated with a decreased likelihood of experiencing suicidal ideation. Conclusion: The likelihood of experiencing suicidal ideation was significantly increased when positive and depressive symptoms were present, and significantly decreased when clinical insight was poorer and working memory stronger. These findings have important implications for the role of cognition and insight in risk for suicide ideation in early-stage psychosis, which may aid in improving the prediction of suicide behaviors and inform clinical decision-making over the course of the illness. abstract_id: PUBMED:25907250 Differential effects of antipsychotic drugs on insight in first episode schizophrenia: Data from the European First-Episode Schizophrenia Trial (EUFEST). Although antipsychotics are widely prescribed, their effect of on improving poor illness insight in schizophrenia has seldom been investigated and therefore remains uncertain. This paper examines the effects of low dose haloperidol, amisulpride, olanzapine, quetiapine, and ziprasidone on insight in first-episode schizophrenia, schizoaffective disorder, or schizophreniform disorder. The effects of five antipsychotic drugs in first episode psychosis on insight were compared in a large scale open randomized controlled trial conducted in 14 European countries: the European First-Episode Schizophrenia Trial (EUFEST). Patients with at least minimal impairments in insight were included in the present study (n=455). Insight was assessed with item G12 of the Positive and Negative Syndrome Scale (PANSS), administered at baseline and at 1, 3, 6, 9, and 12 months after randomization. The use of antipsychotics was associated with clear improvements in insight over and above improvements in other symptoms. This effect was most pronounced in the first three months of treatment, with quetiapine being significantly less effective than other drugs. Effects of spontaneous improvement cannot be ruled out due to the lack of a placebo control group, although such a large spontaneous improvement of insight would seem unlikely. abstract_id: PUBMED:36922459 Differences in clinical presentation at first hospitalization and the impact on involuntary admissions among first-generation migrant groups with non-affective psychotic disorders. Background: Some migrant and ethnic minority groups have a higher risk of coercive pathways to care; however, it is unclear whether differences in clinical presentation contribute to this risk. We sought to assess: (i) whether there were differences in clinician-rated symptoms and behaviours across first-generation immigrant and refugee groups at the first psychiatric hospitalization after psychosis diagnosis, and (ii) whether these differences accounted for disparities in involuntary admission. Methods: Using population-based health administrative data from Ontario, Canada, we constructed a sample (2009-2013) of incident cases of non-affective psychotic disorder followed for two years to identify first psychiatric hospitalization. We compared clinician-rated symptoms and behaviours at admission between first-generation immigrants and refugees and the general population, and adjusted for these variables to ascertain whether the elevated prevalence of involuntary admission persisted. 
Results: Immigrants and refugee groups tended to have lower ratings for affective symptoms, self-harm behaviours, and substance use, as well as higher levels of medication nonadherence and poor insight. Immigrant groups were more likely to be perceived as aggressive and a risk of harm to others, and both groups were perceived as having self-care issues. Adjustment for perceived differences in clinical presentation at admission did not attenuate the higher prevalence of involuntary admission for immigrant and refugee groups. Conclusions: First-generation migrant groups may differ in clinical presentation during the early course of psychotic illness, although these perceived differences did not explain the elevated rates of involuntary admission. Further research using outpatient samples and tools with established cross-cultural validity is warranted. abstract_id: PUBMED:27216590 Insight and psychiatric dangerousness: A review of the literature Introduction: Violence committed by individuals with severe mental illness has become an increasing focus of concern among clinicians, policy makers, and the general public, often as the result of tragic events. Research over the past two decades has shown an increased risk of violence among patients with mental disorders. Nevertheless, of those suffering from mental illness, perpetrators of other-directed violence form a minority subgroup. The mechanism underlying this association between mental illness and violence has remained controversial. Factors such as positive psychotic symptoms, medication non-adherence, alcohol or psychoactive substance abuse and antisocial personality were found to be predictive of violence. Overall, the literature supports the assertion that violent behavior of mentally ill patients is a heterogeneous phenomenon that is driven by multiple inter-related and independent factors. Furthermore, psychiatrists are often asked to predict an individual's future dangerousness, in a medical or a legal context. In the process of risk assessment of dangerousness, more focus has been placed on dynamic risk factors. In this context, lack of insight has established itself both as a part of violence risk models and as a clinical item in structured approaches to measure dangerousness. However, few studies have tested these associations. The main purpose of this paper is to review the literature concerning the relationship between insight and dangerousness and to discuss the contribution of insight to the assessment of dangerousness in patients with mental illness. We included twenty studies that evaluated the association between insight and variables such as physical or verbal violence, aggressiveness, hostility or sexual aggression. Results: According to the findings of this review, the strength and specific nature of this relationship remain unclear due to considerable methodological and conceptual shortcomings, including heterogeneity in the definition and assessment of violence, a minority of prospective studies and the lack of systematic consideration of possible confounding variables. However, the ability of the patient to perceive their illness is an important element to be considered in assessing dangerousness both medically and legally. Higher belief flexibility and lower confidence in individual judgment, which reflect greater cognitive insight, may be associated with a lower incidence of violence, in particular in schizophrenia, by decreasing the degree of confidence related to psychotic symptoms.
Conclusion: In the growing efforts to reduce stigma associated with mental illness, it is important to identify a subgroup of patients at risk of violence and provide them with targeted treatment. In this sense, it seems important in the future to continue in this field of research to determine if the lack of insight is a covariate of a worsened condition or a specific violence risk factor per se. abstract_id: PUBMED:30385130 Changes in insight over the first 24 months of treatment in schizophrenia spectrum disorders. Background: While insight in schizophrenia improves with treatment, significant impairments often persist. The degree of persistence is not well characterised. Aims: We assessed patient and clinician-rated changes in insight in acutely ill, minimally treated first-episode schizophrenia spectrum disorder patients over 24 months of standardised treatment with a depot antipsychotic. Method: This single arm open label longitudinal cohort study included 105 participants with first-episode schizophrenia, schizophreniform or schizoaffective disorder. Insight was assessed at months 0, 6, 12 and 24 using the patient-rated Birchwood Insight Scale (BIS) and clinician-rated global insight item of the Positive and Negative Syndrome Scale (PANSS). Changes in insight over time were assessed using linear mixed-effect models for continuous repeated measures. Relationships between insight and psychopathology, functionality, cognition and quality of life were assessed with regression models. Results: There was significant improvement over time for the PANSS insight item (p < 0.0001). However, the only significant improvement for the BIS was with the Need for Treatment subscale (p = 0.01). There were no significant improvements noted for the Symptom Attribution (p = 0.7) and Illness Awareness (p = 0.2) subscales, as well as the BIS Total score (p = 0.6). Apart from depressive symptoms at baseline, there were no significant predictors of patient-rated insight. Conclusions: Clinicians should note that, even when treatment is assured and response is favourable, fundamental impairments in patient-rated insight persist. Answer: The study by PUBMED:24934905 suggests that lack of insight in psychosis may have trait-like properties that are determined from the first presentation. The study found that a high percentage of first-episode psychosis (FEP) patients (45%, 36%, and 33% for each dimension of insight) remained lacking insight at a 3-year follow-up. Poor insight into having an illness was predicted by a diagnosis of schizophrenia and poor baseline insight of the social consequences. Insight into the need for treatment was predicted by adolescent adjustment and depression at baseline, while insight into the social consequences of the disorder was determined by late adolescent adjustment and baseline insight of mental illness. These findings support the hypothesis that long-term insight in psychosis seems to be, to some extent, determined from the first presentation. Additionally, the study by PUBMED:34230974 found that patients with persistent negative symptoms (PNS) displayed poorer insight across a 24-month period compared to the non-PNS group. This suggests that poor insight is strongly related to negative symptoms early in the course of the disorder. Furthermore, PUBMED:35569423 indicates that cognitive insight, including self-reflectiveness and self-certainty, affects clinical insight, which includes awareness of illness and treatment compliance. 
The study found that self-reflectiveness was positively associated with awareness of illness in certain groups, suggesting that improving self-reflectiveness of cognitive insight may lead to better clinical insight. In contrast, PUBMED:30385130 reports that while insight in schizophrenia improves with treatment, significant impairments often persist over 24 months, even with standardized treatment. This indicates that lack of insight may not be entirely unchangeable but can be resistant to change even with treatment. In summary, the evidence suggests that lack of insight after a first episode of psychosis may be influenced by factors present at the first presentation and can persist over time, indicating that it may have trait-like properties. However, there is also evidence that insight can improve with treatment, although significant impairments may remain (PUBMED:24934905; PUBMED:34230974; PUBMED:35569423; PUBMED:30385130).
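PUBMED:30385130, cited in the answer above, analysed repeated insight ratings at months 0, 6, 12 and 24 with linear mixed-effects models. The sketch below shows what such a random-intercept model could look like in Python with statsmodels, applied to simulated data; the variable names, effect sizes, and data are invented for illustration and are not taken from the study:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(40):                      # 40 hypothetical patients
    baseline = rng.normal(4.0, 1.0)        # patient-specific starting level
    for month in (0, 6, 12, 24):           # assessment points used in the study
        rows.append({"patient_id": pid,
                     "month": month,
                     "insight": baseline + 0.03 * month + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Random intercept per patient, fixed effect of time on the insight rating
model = smf.mixedlm("insight ~ month", data=df, groups=df["patient_id"])
result = model.fit()
print(result.params)  # the 'month' coefficient estimates the average change per month

A clearly positive coefficient on month would correspond to the clinician-rated improvement reported in the abstract, while a coefficient near zero would mirror the largely flat patient-rated scores.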
Instruction: Does preoperative steroid treatment affect the histology in giant cell (cranial) arteritis? Abstracts: abstract_id: PUBMED:22844066 Does preoperative steroid treatment affect the histology in giant cell (cranial) arteritis? Introduction: Giant cell arteritis (GCA) has been successfully treated with steroids for many years and temporal artery biopsy (TAB) is regarded as the gold standard diagnostic test. The primary aim of this study was to determine whether steroid pretreatment abrogates histological features of GCA, reducing diagnostic return, as suspected on the basis of anecdotal evidence. This impacts upon patients suspected of having GCA, in whom the need for prompt treatment must be balanced against the diagnostic need for TAB. Methods: A 6-year single-centre retrospective study of biopsies (2005-2011) was performed with interrogation of the medical notes for information regarding steroid use. The null hypothesis was that there was no association between steroid use and biopsy outcome. Results: No significant difference was found between steroid use and biopsy outcome, with biopsies still producing positive results after weeks of steroid treatment. Conclusions: TAB is still useful in the diagnosis of GCA, even after commencing steroid treatment. abstract_id: PUBMED:29221884 Efficacy and tolerance of tocilizumab for corticosteroid sparing in giant cell arteritis and aortitis: experience of Nîmes University Hospital with eleven patients Introduction: Giant cell arteritis is a large-vessel vasculitis whose treatment consists of slowly tapered steroid therapy. Immunosuppressive agents are sometimes used in cases of steroid dependence. We conducted an observational retrospective study including patients treated with tocilizumab for giant cell arteritis or aortitis in the internal medicine department at the Nîmes University Hospital. Results: Eleven patients were included between 2011 and 2016, who had previously been treated only with prednisone. Tocilizumab was used because of steroid dependence in nine patients, steroid-induced delirium in one patient and inefficacy of steroids in another patient. Infusions of tocilizumab, administered monthly at 8 mg/kg, led to clinical and biological remission in all patients. Consequently, prednisone was tapered below 10 mg/d in ten patients after six months of treatment with tocilizumab. Eight cases of non-severe infection were reported, as well as two cases of dyslipidemia, one case of pruritus and one case of moderate neutropenia. Two relapses were observed after the end of treatment, in patients treated with fewer than twelve infusions. Conclusion: Tocilizumab could be effective and well tolerated in steroid-dependent giant cell arteritis and aortitis. The modalities of its use remain to be defined. abstract_id: PUBMED:32399619 Treatment of giant cell arteritis: what is in the pipeline? Glucocorticoids (GC) represent the standard treatment for remission induction and maintenance in giant cell arteritis (GCA). Additive immunosuppressants are currently only recommended in special situations, such as refractory or relapsing disease or in cases of glucocorticoid-induced side effects. Methotrexate has been the standard steroid-sparing agent for many years. Meanwhile, tocilizumab, the first biological to be licensed for the treatment of GCA, has become the first choice for steroid reduction; however, long-term data over more than 3 years are lacking.
A number of promising bDMARDs and tsDMARDs are currently being investigated in randomized controlled trials (RCTs), which could contribute to additional effective steroid-sparing options in the treatment of GCA and help to establish an additive GC-sparing medication as the standard treatment in the future. This article gives an overview of current treatment studies for GCA. abstract_id: PUBMED:31471629 Large-vessel vasculitis: giant cell and Takayasu arteritis Large-vessel vasculitis includes giant cell arteritis (GCA) and Takayasu arteritis (TA). GCA can affect persons from the age of 50 years and is more frequent among women. The disease course generally begins with an acute phase, with patients feeling very unwell and experiencing temporal headaches. Rapid diagnosis and treatment are necessary to reduce the risk of blindness. A suspected diagnosis must be confirmed by imaging; histology is optional. Initial treatment comprises oral prednisone. Recent studies have demonstrated inhibition of interleukin-6 with tocilizumab (TCZ) to be highly effective. Alternatively, methotrexate can be administered in a steroid-sparing approach. In contrast, TA onset is generally during childhood or adolescence, and begins with moderate systemic inflammation. The aorta and its main branches are affected. Treatment comprises steroids, disease-modifying antirheumatic drugs, and the tumor necrosis factor inhibitor infliximab or TCZ. abstract_id: PUBMED:30846454 Visual loss in giant cell arteritis 3 weeks after steroid initiation. Giant cell arteritis (GCA) is the most common vasculitis in adults and blindness is a common complication if left untreated. Oral glucocorticoids are the mainstay of treatment and, if started promptly, loss of vision can usually be prevented. We present the case of a 77-year-old man who developed irreversible bilateral blindness after a confirmed diagnosis of GCA and oral steroid treatment. The roles of diagnostic delay, steroid dosing, significance of visual symptoms at diagnosis and after commencing oral glucocorticoids, and interpretation of ophthalmological signs are reviewed. abstract_id: PUBMED:36907732 Unmet need in the treatment of polymyalgia rheumatica and giant cell arteritis. For decades, aside from prednisone and the occasional use of immunosuppressive drugs such as methotrexate, there was little to offer patients with polymyalgia rheumatica (PMR) and giant cell arteritis (GCA). However, there is great interest in various steroid-sparing treatments in both these conditions. This paper aims to provide an overview of our current knowledge of PMR and GCA, examining their similarities and distinctions in terms of clinical presentation, diagnosis, and treatment, with emphasis placed on reviewing recent and ongoing research efforts on emerging treatments. Multiple recent and ongoing clinical trials are demonstrating new therapeutics that will provide benefit and contribute to the evolution of clinical guidelines and standard of care for patients with GCA and/or PMR. abstract_id: PUBMED:28485026 Giant cell arteritis: beyond temporal artery biopsy and steroids. Giant cell arteritis is the most common primary vasculitis of the elderly. The acute complications of untreated giant cell arteritis, such as vision loss or occasionally stroke, can be devastating. The diagnosis is, however, not altogether straightforward due to variable sensitivities of the temporal artery biopsy as a reference diagnostic test.
In this review, we discuss the increasing role of imaging in the diagnosis of giant cell arteritis. Glucocorticoid treatment is the backbone of therapy, but it is associated with significant adverse effects. A less toxic alternative is required. Conventional and novel immunosuppressive agents have only demonstrated modest effects in a subgroup of steroid-refractory giant cell arteritis due to the different arms of the immune system at play. However, a recent study of interleukin-6 blockade demonstrated benefit in giant cell arteritis. The current status of these immunosuppressive agents and novel therapies is also discussed in this review. abstract_id: PUBMED:7305475 Successful treatment of dissecting aortic aneurysm due to giant cell arteritis. A 72-year-old woman with polymyalgia rheumatica clinically controlled on maintenance steroid therapy presented with symptoms of chest pain and numbness in the right arm. A diagnosis of dissecting aortic aneurysm was confirmed at thoracotomy and the aorta was successfully resected. Histology revealed active giant cell aortitis. We suggest that a normal erythrocyte sedimentation rate in patients with treated temporal arteritis does not preclude large vessel involvement. abstract_id: PUBMED:35348297 Efficacy of leflunomide as a steroid-sparing agent in treatment of Indian giant cell arteritis patients: A 2-year follow-up study. Objectives: To evaluate the effectiveness of leflunomide as a steroid-sparing agent among Indian patients with giant cell arteritis (GCA) and to assess the changes of the "halo sign" within affected arteries, detected ultrasonographically, after remission. Methods: In this prospective observational study, patients fulfilling American College of Rheumatology criteria for GCA and having a halo sign on temporal artery ultrasound were treated with leflunomide and a predefined tapering dose of prednisolone. Ultrasounds of the temporal and axillary arteries were done at baseline and after remission was achieved. Results: Twenty-two GCA patients were followed up for a median duration of 24 months (interquartile range, IQR: 18-33). All patients showed clinical improvement, and steroids could be stopped in 17 out of 22 patients. Median time to achieve remission (symptom-free with normal inflammatory markers) was 3 (95% confidence interval [CI]: 2.4-3.6) months. Median time to achieve reduction to a prednisolone dose <5 mg/d was 9 months (95% CI: 7-11). The prednisolone dose could be reduced in all patients while on leflunomide, suggesting a steroid-sparing effect, and a steroid-free remission could be achieved after a median of 14 months (95% CI: 9.4-18.6). Ultrasonographically, all patients showed improvement of halo signs after 8 weeks (IQR 7.25-12). Seven patients experienced a clinical relapse after 12 months (IQR: 5-21) of initial remission. The predictors of relapse were the duration of symptoms before initiation of immunosuppressive therapy and delayed achievement of remission by strict criteria. Conclusion: This study showed the efficacy and safety of leflunomide as a steroid-sparing agent in Indian GCA patients. abstract_id: PUBMED:22965672 Tocilizumab for giant cell arteritis: an amazing result. Giant cell arteritis (GCA), previously Horton's disease, is a systemic vasculitis affecting the middle-sized or large arteries in patients older than 50 years of age. The mainstay of the treatment of GCA is glucocorticoid therapy.
Herein, we present a case of giant cell arteritis resistant to oral and intravenous steroids, intravenous cyclophosphamide and mycophenolate mofetil, but successfully treated with tocilizumab. Answer: Preoperative steroid treatment does not appear to significantly affect the histology in giant cell (cranial) arteritis. A retrospective study aimed to determine whether steroid pretreatment could reduce the diagnostic yield of temporal artery biopsy (TAB) in patients suspected of having giant cell arteritis. The results showed no significant association between steroid use and biopsy outcome, with biopsies still producing positive results after weeks of steroid treatment, indicating that TAB remains useful in diagnosing GCA even after commencing steroid treatment (PUBMED:22844066).
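For readers who want to see the shape of the association test at the core of PUBMED:22844066 (steroid pretreatment versus biopsy outcome), the following sketch compares positivity rates between two groups with a plain two-proportion z-test. The counts are hypothetical placeholders, not the study's data, and the z-test is a generic choice rather than the authors' reported method.

    from math import sqrt, erf

    def two_proportion_z(pos_a, n_a, pos_b, n_b):
        # Pooled two-proportion z-test; returns the z statistic and a two-sided p-value.
        p_a, p_b = pos_a / n_a, pos_b / n_b
        pooled = (pos_a + pos_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-approximation tail
        return z, p_value

    # Hypothetical example: 18/60 positive biopsies after steroid pretreatment vs 16/55 without.
    z, p = two_proportion_z(18, 60, 16, 55)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05 would be consistent with 'no association'

Small biopsy series of this kind are often analysed with Fisher's exact test instead, which avoids the normal approximation used here.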
Instruction: Documentation of Pregnancy Status, Gynaecological History, Date of Last Menstrual Period and Contraception Use in Emergency Surgical Admissions: Time for a Change in Practice? Abstracts: abstract_id: PUBMED:26296839 Documentation of Pregnancy Status, Gynaecological History, Date of Last Menstrual Period and Contraception Use in Emergency Surgical Admissions: Time for a Change in Practice? Objective: To determine whether pregnancy status, gynaecological history, date of last menstrual period and contraceptive use are documented in emergency female admissions of reproductive age admitted to general surgery. Design: This is a retrospective study. Setting: This study was conducted in the United Kingdom. Population: Females of reproductive age (12-50 years) admitted as an emergency to general surgery with abdominal pain were considered in this study. Methods: Retrospective analysis of medical notes of emergency female admissions with abdominal pain between January and September 2012. We recorded whether a pregnancy test result was documented (cycle 1). Results were analysed and a prompt added to the medical clerk-in document. We re-audited (cycle 2) between January and June 2013 looking for improvement. Main Outcome Measures: Documented pregnancy status within 24 h of admission and prior to any surgical intervention. Results: 100 case notes were reviewed in stage 1. 30 patients (30 %) had a documented pregnancy status. 32 (32 %), 25 (25 %) and 29 (29 %) had a documented gynaecological history, contraceptive use and date of last menstrual period (LMP), respectively. 24 patients underwent emergency surgery; 6 (25 %) had a documented pregnancy status prior to surgery. Of 50 patients reviewed in stage 2, 37 (75.0 %) had a documented pregnancy status (p < 0.001), with 41 (82 %) having both gynaecological history (p < 0.0001) and contraceptive use (p < 0.0001) documented. 40 patients (80 %) had a documented LMP (p < 0.0001). 7 patients required surgery, of whom 6 (85.7 %) had a documented pregnancy test prior to surgery (p = 0.001). All pregnancy tests were negative. Conclusions: A simple prompt in the surgical admission document has significantly improved the documentation of pregnancy status and gynaecological history in our female patients, particularly in those who require surgical intervention. A number of patient safety concerns were addressed locally, but require a coordinated, interdisciplinary discussion and a national guideline. A minimum standard of care, in females of reproductive age, should include mandatory objective documentation of pregnancy status, whether or not they require surgical intervention. abstract_id: PUBMED:28799272 Results of a national multicenter audit assessment of gynecologic history in surgical patients. Objective: To determine the adequacy of assessing gynecologic history for females of reproductive age (FRA) admitted to a general surgery department. Methods: The present prospective multicenter audit included FRA who were admitted for elective or emergency procedures to general surgery departments in Scotland between May 11 and May 25, 2015. Data were compared between patients who were admitted for elective and emergency treatment. Results: There were 530 FRA included from 18 centers, including 169 (31.9%) and 361 (68.1%) elective and emergency admissions, respectively.
The date of last menstrual period was documented for 203 (38.3%) patients, use of contraception for 149 (28.1%), sexual activity for 83 (15.7%), pregnancy status for 274 (51.7%), and the possibility of pregnancy for 237 (44.7%). A higher incidence of documented date of last menstrual period (P=0.002) and pregnancy status (P<0.001) was identified among emergency admissions, and the possibility of pregnancy was documented more commonly among elective admissions (P<0.001). Conclusions: Key factors required for gynecologic assessment were often not documented for FRA admitted to general surgery both as elective and emergency admissions. Surgical teams and medical undergraduates require education regarding the importance of obtaining gynecologic history for all FRA. abstract_id: PUBMED:30253702 A multispecialty study of determining the possibility of pregnancy and the documentation of pregnancy status in surgical patients: a cause for concern? Background: Determining the possibility of pregnancy and the documentation of pregnancy status are important considerations in the assessment of females of reproductive age when admitted to hospital. Objectives: Our aim was to determine the adequacy of the documentation of pregnancy status and possibility of pregnancy across multiple surgical specialties. Materials And Methods: A prospective audit of surgical specialties (general, orthopaedics, urology, vascular, maxillofacial, ENT, gynaecology and neurosurgery) within NHS Tayside, in May 2015. Results: A total of 129 females of reproductive age were admitted; 69 (53.5%) elective and 60 (46.5%) emergencies. Eighty-four patients (65%) were asked 'Is there any possibility of pregnancy?' Pregnancy status was documented in 74% of patients. Eleven (8.5%) patients were not asked about possibility of pregnancy and did not have a documented pregnancy status. Documentation of the use of contraception, sexual activity and date of last menstrual period was noted in 53 (41.1%), 31 (24.0%) and 66 (51.2%) patients, respectively. Conclusions: There is a wide variation in the documentation of pregnancy status and possibility of pregnancy amongst surgical specialties. This was not an issue in gynaecology but is an issue in ENT, maxillofacial, neurosurgery, vascular and general surgery. The reasons are unclear. Documentation of pregnancy status using βhCG assays should be the gold standard, and national guidelines are required. abstract_id: PUBMED:29043709 Estimation of Day-Specific Probabilities of Conception during Natural Cycle in Women from Babylon. Background: Identifying predictors of the probabilities of conception related to the timing and frequency of intercourse in the menstrual cycle is essential for couples attempting pregnancy, users of natural family planning methods, and clinicians investigating possible causes of infertility. The aim of this study was to estimate the days on which conception was most likely to have occurred, using first-trimester ultrasound fetal biometry in natural cycles and spontaneous pregnancies, and to explore some factors that may affect them. Materials And Methods: This is a retrospective cohort study with random sampling. It involved 60 pregnant women in the first trimester; the date of conception was estimated using: i. Crown-rump length biometry (routine ultrasound examinations were performed at a median of 70 days following the last menstrual period, or equivalently 10 weeks), ii. Date of last menstrual cycle.
Only women with previous infertility and now conceiving naturally with a certain date of last menstrual period were selected. Results: The distribution of conception showed a sharp rise from day 8 onwards, reaching its maximum at day 13 and decreasing to zero by day 30 after the last menstrual period. Older and obese women conceived earlier than younger women, but the difference between the two groups was not significant (P>0.05). According to the type of infertility, the women with secondary infertility had conceived earlier than those with primary infertility. There was a significant difference between the two groups (P<0.05). Conclusion: The day-specific probability of conception may be affected by factors such as age, BMI, and type of infertility. This should be confirmed with a larger sample size in a multicentric study. abstract_id: PUBMED:36293599 Sexual History Documentation and Screening in Adolescent Females with Suicidal Ideation in the Emergency Department. Adolescents with mental illness often seek care in the emergency department (ED) and are more likely to engage in risky behaviors such as substance abuse and unprotected sex, increasing their risk of sexually transmitted infections (STI), unintended pregnancy, and non-consensual sex. This was a retrospective study of 312 females, aged 13-17 years, presenting to the pediatric ED with the chief complaint of suicidal ideation from February to May 2018. Electronic medical records were reviewed for demographics, psychiatric history, sexual history, and testing for pregnancy or STI. The primary outcome was the documentation of the presence or absence of prior sexual activity. Secondary outcomes included documented aspects of sexual history and pregnancy or STI testing performed in the ED.
This study assessed Adama University female students' knowledge, attitude and practice of emergency contraceptives. Method: A cross-sectional study design was employed from February 1 to 30, 2009, on 660 regular undergraduate female students of Adama University. Data were entered and analyzed using SPSS for Windows version 16.0. Logistic regression was used to identify the association between variables and emergency contraceptive knowledge, attitude and practice. A P-value less than 0.05 at 95% CI was taken as statistically significant. Results: Of the total 660 respondents, 194 (29.4%) were sexually active, 63 (9.4%) had a history of pregnancy and 49 (7.4%) had a history of abortion. About 309 (46.8%) of the students had heard of emergency contraceptives, and of those, 27.2% had good knowledge. The majority, 415 (62.9%), of the students had a positive attitude towards it. However, only 31 (4.7%) had used emergency contraceptive methods. Conclusion: This study demonstrated lack of awareness, knowledge and utilization of emergency contraceptives among Adama University female students. Hence, behavioral change strategies should be considered by responsible bodies to improve knowledge and bring about attitudinal change regarding the use of emergency contraception. abstract_id: PUBMED:23945595 Copper T380 intrauterine device for emergency contraception: highly effective at any time in the menstrual cycle. Study Question: Does the efficacy of placing a copper intrauterine device (IUD) for emergency contraception (EC) to prevent pregnancy depend on menstrual cycle timing and timing of unprotected intercourse (UPI)? Summary Answer: If the urine pregnancy test is negative prior to IUD placement, the copper IUD is highly effective for EC at any point in the menstrual cycle. What Is Known Already: The use of the Copper T380A for EC has been encouraged by the failure of oral EC methods to decrease rates of unintended pregnancy and the documented success of the IUD in reducing unintended pregnancies. However, scant data exist regarding the efficacy and safety of IUD insertion for EC when accounting for menstrual cycle timing and time since UPI. Study Design, Size, Duration: This is a secondary analysis of data obtained from a previously published prospective cohort study of women who received the Copper T380A IUD for EC between July 1997 and January 2000. We included 1840 participants according to the study inclusion criteria of a known last menstrual period (LMP) and cycle lengths of 25-35 days. Participants/materials, Setting, Methods: The original study included women aged between 18 and 44 years who presented for EC at 18 sites throughout China and who had regular menstrual cycles between 24 and 42 days, a known LMP, UPI within 120 h (5 days) and a negative urine pregnancy test (cutoff <25 IU/ml). Women with uncertain LMP dates were excluded. This study included only participants with cycle lengths of 25-35 days. Main Results And The Role Of Chance: Among the 1840 participants with usual cycle lengths of 25-35 days, 850 (46.2%) had their IUD inserted following UPI in the expected fertile window, 84 (4.6%) had the insertion >5 days after the predicted ovulation day and 52 (2.8%) had the insertion >5 days after UPI. There were no pregnancies in the first month among the 1771 women who had information available regarding their 1-month follow-up pregnancy test.
Limitations, Reasons For Caution: This was a secondary analysis of an observational study, and thus participants were not randomized to an alternative postcoital method. There were a small number of women who had UPI >5 days after their predicted ovulation day, thus limiting the confidence with which a low risk of pregnancy can be assured in this situation. The ovulation day was calculated based on the LMP prior to IUD insertion and not on the subsequent first day of menses following IUD insertion. Wider Implications Of The Findings: If the urine pregnancy test is negative prior to IUD placement, the copper IUD is likely to be effective for EC at almost any point in the menstrual cycle. Study Funding/competing Interest(s): The original study was funded by the UNDP/UNPFA/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction. The donors and sponsors of the study had no role in the study design, data collection, data analysis, data interpretation, writing of the report or the decision to submit the paper for publication. abstract_id: PUBMED:16631987 Nonprescription availability of emergency contraception in the United States: current status, controversies, and impact on emergency medicine practice. In October 2004, the American College of Emergency Physicians Council joined more than 60 other health professional organizations in supporting the nonprescription availability of emergency contraception. This article reviews the history, efficacy, and safety of emergency contraception; the efforts toward making emergency contraception available without a prescription in the United States; the arguments for and against nonprescription availability of emergency contraception; and the potential impact nonprescription availability could have on the practice of emergency medicine in the United States. abstract_id: PUBMED:38192856 Motivators for emergency contraception: Previous pregnancy and condom rupture. Objectives: Little is known about the motivations to apply for emergency contraception (EC). Our first aim was to explore the circumstances that motivate women to seek EC as quickly as possible. Our second aim was to explore the contraceptive method of the population seeking EC. Study Design: This retrospective observational study, conducted between July 2021 and September 2021, is embedded in the MEEC (Motivation and Epidemiology of Emergency Contraceptive Pill) study cohort, a Hungarian data bank containing follow-up data on 455 women who applied for an EC telemedicine consultation. Variables assessed were: age, gynecological history (pregnancies, abortions, miscarriages), data on the intercourse (elapsed time, contraceptive method), data on the menstrual cycle, and relationship status. Results: Of all patients, 59.3 % reported condom rupture, 29.5 % no protection, and 11.2 % other. Patients using a condom applied for EC significantly sooner than those using no protection or other protective methods. A significantly shorter elapsed time was observed in patients with a history of a previous pregnancy. No significant relationship was seen between the method of protection, previous pregnancies, and, surprisingly, the time of ovulation, despite the obvious intention of avoiding pregnancy. Conclusions: This is the first study to examine the potential role of epidemiologic factors as motivators for EC on the basis of a large patient cohort.
Our study demonstrates that condom rupture/use and a history of previous pregnancy are the strongest motivators for EC. abstract_id: PUBMED:31481043 "A good little tool to get to know yourself a bit better": a qualitative study on users' experiences of app-supported menstrual tracking in Europe. Background: Menstrual apps facilitate observation and analysis of menstrual cycles and associated factors through the collection and interpretation of data entered by users. As a subgroup of health-related apps, menstrual apps form part of one of the most dynamic and rapidly growing developments in biomedicine and health care. However, despite their popularity, qualitative research on how people engaging in period-tracking use and experience these apps remains scarce. Methods: Between June 2016 and March 2017, we conducted 26 qualitative interviews with menstrual app users living in Austria and Spain. The participants were asked about their practices and experiences regarding app-supported menstrual tracking. The interviews were audio recorded, transcribed verbatim, and coded using the software NVivo. Results: An inductive content analysis was performed and eight characteristics of app-supported menstrual tracking were identified: 1) tracking menstrual cycle dates and regularities, 2) preparing for upcoming periods, 3) getting to know menstrual cycles and bodies, 4) verifying menstrual experiences and sensations, 5) informing healthcare professionals, 6) tracking health, 7) contraception and seeking pregnancy, and 8) changes in tracking. Our study finds that period-tracking via apps has the potential to be an empowering practice as it helps users to be more aware of their menstrual cycles and health and to gain new knowledge. However, we also show that menstrual tracking can have negative consequences as it leads to distress in some cases, to privacy issues, and the work it requires can result in cessation. Finally, we present practical implications for healthcare providers and app developers. Conclusions: This qualitative study gives insight into users' practices and experiences of app-supported menstrual tracking. The results provide information for researchers, health care providers and app designers about the implications of app-supported period-tracking and describe opportunities for patient-doctor interactions as well as for further development of menstrual apps. Answer: The documentation of pregnancy status, gynecological history, date of last menstrual period (LMP), and contraception use in emergency surgical admissions is an area that has been identified as needing improvement. A retrospective study conducted in the United Kingdom found that only 30% of female patients of reproductive age admitted to general surgery with abdominal pain had a documented pregnancy status. Similarly, documentation of gynecological history, contraceptive use, and LMP was poor, with only 32%, 25%, and 29%, respectively. However, after adding a prompt to the medical clerk-in document, there was a significant improvement in documentation, with 75% of patients having a documented pregnancy status and over 80% having documented gynecological history and contraceptive use (PUBMED:26296839).
Key factors required for gynecologic assessment were often not documented, with only 51.7% having documented pregnancy status and even lower percentages for other factors such as date of last menstrual period and use of contraception. The study concluded that surgical teams and medical undergraduates require education on the importance of obtaining gynecologic history for all females of reproductive age (PUBMED:28799272). Another multispecialty study across various surgical specialties within NHS Tayside found wide variation in the documentation of pregnancy status and the possibility of pregnancy. The study highlighted that this was not an issue in gynecology but was a concern in other specialties such as ENT, maxillofacial, neurosurgery, vascular, and general surgery. The study suggested that documentation of pregnancy status using βhCG assays should be the gold standard and called for national guidelines (PUBMED:30253702). These findings indicate that there is a need for a change in practice to ensure that pregnancy status, gynecological history, LMP, and contraception use are consistently and accurately documented in emergency surgical admissions for females of reproductive age. Implementing prompts and educational interventions, as well as establishing national guidelines, could help address patient safety concerns and improve the standard of care in this area.
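To make the audit figures above easier to compare, the following sketch attaches approximate 95% confidence intervals to the documentation rates reported in PUBMED:26296839 (30/100 case notes before the prompt and 37/50 after). The Wald interval used here is a generic textbook construction, not part of the cited studies' methods.

    from math import sqrt

    def wald_ci(successes, n, z=1.96):
        # Normal-approximation (Wald) confidence interval for a proportion.
        p = successes / n
        half = z * sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)

    for label, k, n in [("cycle 1 (pre-prompt)", 30, 100), ("cycle 2 (post-prompt)", 37, 50)]:
        p, lo, hi = wald_ci(k, n)
        print(f"{label}: {p:.0%} documented (95% CI {lo:.0%}-{hi:.0%})")

The clearly separated intervals are consistent with the highly significant improvement (p < 0.001) reported after the clerk-in prompt was introduced.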
Instruction: Is male gender a risk factor for conversion of laparoscopic into open cholecystectomy? Abstracts: abstract_id: PUBMED:35990867 The Association of Preoperative Risk Factors for Laparoscopic Conversion to Open Surgery in Elective Cholecystectomy. Background: Laparoscopic cholecystectomy is a common operation worldwide, with low mortality (0.01%) and morbidity (2-8%). It has been reported that 2.9 to 3.2% of elective laparoscopic cholecystectomies are converted to open surgery. Converted cases are associated with increased complication rates. Method: Two thousand and seventy-five patients (82.8% female and 17.2% male) who underwent elective laparoscopic cholecystectomy in our hospital between March 1, 2016, and February 28, 2018, were prospectively collected in a database. Pearson's Chi-squared and Fisher's exact tests were used to determine significance, with p <0.05 deemed statistically significant. We analyzed seven risk factors associated with conversion to open surgery: age, gender, body mass index (BMI), previous abdominal surgeries, the presence of contracted gallbladder, Mirizzi syndrome, or choledocholithiasis. Laparoscopic cholecystectomy was performed using a 3-port technique (73%) and a 4-port technique (27%). Results: Factors strongly associated with conversion were male sex, age >60 years, previous upper abdominal surgery, contracted gallbladder, Mirizzi syndrome and choledocholithiasis. The presence of a higher or lower BMI did not influence the rate of conversion. The strongest associations were for males over 60 years and males with previous upper abdominal surgery. Conclusion: Laparoscopic cholecystectomy is the gold standard for gallstones and gallbladder disease; however, inflammation, adhesions, and anatomic difficulty continue to challenge the use and safety of this approach in a small number of patients. This study identifies predictors of conversion to open cholecystectomy. In view of the raised morbidity and mortality associated with open cholecystectomy, distinguishing these predictors will serve to decrease the rate of conversion and address these factors preoperatively. How To Cite This Article: Hanson-Viana E, Ayala-Moreno EA, Ortega-Leon LH, et al. The Association of Preoperative Risk Factors for Laparoscopic Conversion to Open Surgery in Elective Cholecystectomy. Euroasian J Hepato-Gastroenterol 2022;12(1):6-9. abstract_id: PUBMED:28739121 Risk factors for conversion of laparoscopic cholecystectomy to open surgery - A systematic literature review of 30 studies. Background: The study aims to evaluate the methodological quality of publications relating to predicting the need for conversion from laparoscopic to open cholecystectomy and to describe identified prognostic factors. Method: Only English full-text articles with their own unique observations from more than 300 patients were included. Only data using multivariate analysis of risk factors were selected. Quality assessment criteria stratifying the risk of bias were constructed and applied. Results: The methodological quality of the studies was mostly heterogeneous. Most studies performed well in half of the quality criteria and considered similar risk factors, such as male gender and old age, as significant. Several studies developed prediction models for risk of conversion. Independent risk factors appeared to have additive effects.
Conclusion: A detailed critical review of studies of prediction models and risk stratification for conversion from laparoscopic to open cholecystectomy is presented. One high-quality study is identified with the potential to be used in clinical practice, and external validation of this model is recommended. abstract_id: PUBMED:8703145 Is male gender a risk factor for conversion of laparoscopic into open cholecystectomy? Background: Based on a clinical observation that the conversion rate of laparoscopic cholecystectomy (LC) to open cholecystectomy (OC) is higher in males, we decided to review our records and to verify whether a significant difference in conversion rates exists between sexes. Methods: A retrospective study on conversion rates of elective laparoscopic cholecystectomy (LC) into open cholecystectomy (OC) in relation to gender was carried out in 329 patients: 267 females and 62 males. Results: Our data revealed that the probability of conversion is fivefold greater in males than females, 21% vs 4.5%, respectively (p = 0.0001). We attribute this striking difference to significantly more adhesions (p = 0.0002) and anatomical difficulties (p = 0.003) in males during LC, leading to conversion. Conclusions: We conclude that conversion of LC to OC is more prevalent among males and is probably attributable to a greater incidence of anatomical difficulties. abstract_id: PUBMED:33080991 The Analysis of Risk Factors in the Conversion from Laparoscopic to Open Cholecystectomy. Laparoscopic cholecystectomy is a standard treatment for cholelithiasis. In situations where laparoscopic cholecystectomy is dangerous, a surgeon may be forced to change from laparoscopy to an open procedure. Data from the literature show that 2 to 15% of laparoscopic cholecystectomies are converted to open surgery during surgery for various reasons. The aim of this study was to identify the risk factors for the conversion of laparoscopic cholecystectomy to open surgery. A retrospective analysis of medical records and operation protocols was performed. The study group consisted of 263 patients who were converted to open surgery during laparoscopic surgery, and 264 randomly selected patients in the control group. Conversion risk factors were assessed using logistic regression analysis that modeled the probability of a certain event as a function of independent factors. Statistically significant factors in the regression model with all explanatory variables were age, emergency treatment, acute cholecystitis, peritoneal adhesions, chronic cholecystitis, and inflammatory infiltration. The use of predictive risk assessments or nomograms can be the most helpful tool for risk stratification in a clinical scenario. With such predictive tools, clinicians can optimize care based on the known risk factors for the conversion, and patients can be better informed about the risks of their surgery. abstract_id: PUBMED:34384723 Conversion from laparoscopic to open cholecystectomy: Risk factor analysis based on clinical, laboratory, and ultrasound parameters. Introduction And Aims: The standard of care for gallbladder disease is laparoscopic cholecystectomy. Difficult dissection of the hepatocytic triangle and bleeding can result in conversion to open cholecystectomy, which is associated with increased morbidity. Identifying risk factors for conversion in the context of acute cholecystitis will allow patient care to be individualized and improve outcomes.
Materials And Methods: A retrospective case-control study included all patients diagnosed with acute cholecystitis, according to the 2018 Tokyo Guidelines, admitted to a tertiary care academic center, from January 1991 to January 2012. Using logistic regression, we analyzed variables to identify risk factors for conversion. Variables that were found to be significant predictors of conversion in the univariate analysis were included in a multivariate model. We then performed an exploratory analysis to identify the risk factor summation pathway with the highest sensitivity for conversion. Results: The study included 321 patients with acute cholecystitis. Their mean age was 49 years (±16.8 SD), 65% were females, and 35% were males. Thirty-nine cases (12.14%) were converted to open surgery. In the univariate analysis, older age, male sex, gallbladder wall thickness, and pericholecystic fluid were associated with a higher risk for conversion. In the multivariate analysis all of the variables, except pericholecystic fluid, were associated with conversion. Our risk factor summation model had a sensitivity of 84%. Conclusions: Preoperative clinical data can be utilized to identify patients with a higher risk of conversion to open cholecystectomy. Being aware of such risk factors can help improve perioperative planning and preparedness in challenging cases. abstract_id: PUBMED:27160289 Preoperative Risk Factors for Conversion of Laparoscopic Cholecystectomy to Open Surgery - A Systematic Review and Meta-Analysis of Observational Studies. Background: Preoperative risk factors for the conversion of laparoscopic cholecystectomy to open surgery have been identified, but never been explored systematically. Our objective was to systematically present the evidence of preoperative risk factors for conversion of laparoscopic cholecystectomy to open surgery. Methods: PubMed and Embase were searched systematically in March 2014. Observational studies evaluating preoperative risk factors for conversion of laparoscopic cholecystectomy to open surgery in patients with gallstone disease were included. The outcome variables extracted were patient demographics, medical history, severity of gallstone disease, and preoperative laboratory values. Results: A total of 1,393 studies were screened for eligibility. We found 32 studies, including 460,995 patients operated with laparoscopic cholecystectomy, eligible for the systematic review. Of these, 10 studies were suitable for 7 meta-analyses on age, gender, body mass index, previous abdominal surgery, severity of disease, white blood cell count, and gallbladder wall thickness. Conclusions: A gallbladder wall thicker than 4-5 mm, a contracted gallbladder, age above 60 or 65, male gender, and acute cholecystitis were risk factors for the conversion of laparoscopic cholecystectomy to open surgery. Furthermore, there was no association between diabetes mellitus or white blood cell count and conversion to open surgery. abstract_id: PUBMED:36612732 Preoperative Risk Factors for Conversion from Laparoscopic to Open Cholecystectomy: A Systematic Review and Meta-Analysis. Laparoscopic cholecystectomy is a standard treatment for patients with gallstones in the gallbladder. However, multiple risk factors affect the probability of conversion from laparoscopic cholecystectomy to open surgery. A greater understanding of the preoperative factors related to conversion is crucial to improve patient safety. 
In the present systematic review, we summarized the current knowledge about the main factors associated with conversion. Next, we carried out several meta-analyses to evaluate the impact of independent clinical risk factors on the conversion rate. Male gender (OR = 1.907; 95% CI = 1.254-2.901), age > 60 years (OR = 4.324; 95% CI = 3.396-5.506), acute cholecystitis (OR = 5.475; 95% CI = 2.959-10.130), diabetes (OR = 2.576; 95% CI = 1.687-3.934), hypertension (OR = 1.931; 95% CI = 1.018-3.662), heart diseases (OR = 2.947; 95% CI = 1.047-8.296), obesity (OR = 2.228; 95% CI = 1.162-4.271), and previous upper abdominal surgery (OR = 3.301; 95% CI = 1.965-5.543) increased the probability of conversion. Our analysis of clinical factors suggested the presence of different preoperative conditions, which are non-modifiable but could be useful for planning the surgical scenario and improving the postoperative phase. abstract_id: PUBMED:28709978 The severity grading of acute cholecystitis following the Tokyo Guidelines is the most powerful predictive factor for conversion from laparoscopic cholecystectomy to open cholecystectomy. Background: The relationship between the severity assessment of acute cholecystitis based on the Tokyo Guidelines and the risk for conversion from laparoscopic surgery to open surgery has been assessed in few previous reports, with conflicting results. Methods: A retrospective review of patients with acute cholecystitis within a single system from 2010 to 2013 was performed. The diagnosis and severity of acute cholecystitis were assigned by the Tokyo Guidelines 2013 (TG13). The primary outcome measure was conversion to open cholecystectomy. Results: During the period of study, 493 patients were operated on laparoscopically for acute cholecystitis. Laparoscopic cholecystectomy was intraoperatively converted to open surgery in 56 cases (11.4%). The multivariate analysis showed that the risk factors for conversion to open surgery included male gender (OR: 2.15; 95% CI [1.18-3.9]), diabetes (OR: 2.22; 95% CI [1.13-4.33]), total bilirubin levels (OR: 1.02; 95% CI [1-1.05]), and the TG13 severity classification (OR: 4.44; 95% CI [2.25-8.75]). Conclusions: The independent risk factors for conversion to open surgery included male sex, diabetes mellitus, total bilirubin level, and TG13 grade. TG13 grade was found to be the most powerful predictive factor for conversion as it had the highest OR. abstract_id: PUBMED:26011206 Is the male gender an independent risk factor for complication in patients undergoing laparoscopic cholecystectomy for acute cholecystitis? This paper was designed to investigate the gender-dependent risk of complication in patients undergoing laparoscopic cholecystectomy for acute cholecystitis.
However, the rate of conversion was significantly higher in male patients > 65 years (P = 0.006). The length of postoperative hospital stay was significantly longer in the male group (P = 0.007), in the group > 65 years (P = 0.001) and following conversion to open surgery (P = 0.001). The male gender was identified as an independent risk factor for prolonged laparoscopic cholecystectomy on multivariate analysis. The male gender could be an independent risk factor for complicated or challenging surgery in patients undergoing laparoscopic cholecystectomy for acute cholecystitis. abstract_id: PUBMED:25579475 Role of preoperative sonography in predicting conversion from laparoscopic cholecystectomy to open surgery. Background: Laparoscopic cholecystectomy is the first-line treatment for cholelithiasis. The purpose of this study was to establish a radiologic basis for predicting conversion from laparoscopic cholecystectomy to open surgery. Methods: This study included 176 patients who had undergone laparoscopic cholecystectomy. Preoperative ultrasonographic findings were assessed, and points were assigned to each finding according to the results of a correlation analysis. After scoring, we investigated the relationship between ultrasonographic findings and conversion from laparoscopic cholecystectomy to open surgery. Results: The score significantly predicted failure of the laparoscopic approach (AUC = 0.758, P = 0.003). The optimal cut-off score was found to be 1.95, with 67% sensitivity and 78% specificity. A score >1.95 was a risk factor for failure of the laparoscopic approach (odds ratio = 7.1; 95% CI, 2-24.9; P = 0.002). Eight of 36 subjects (22%) with a high score underwent open surgery, while 4 of 128 subjects (3%) with a low score needed open surgery (p = 0.002), giving a negative predictive value of 128/132 = 97%. The mean score of the whole study population was 1.28 (range 0-8.8); the mean score of subjects who underwent open surgery was 3.6, versus 1.1 in the successful laparoscopic group (p < 0.001). Mean age and BMI were similar between groups (p > 0.05). The sex of subjects did not affect the success of surgery (p > 0.05). Conclusion: The contribution of preoperative ultrasonography is emphasized in many studies. Our study provides quantitative results on predicting conversion from laparoscopic cholecystectomy to open surgery. We believe that radiologists should indicate the risk of conversion in their ultrasonography reports.
These findings are consistent across various studies, suggesting that male gender is indeed a significant risk factor for conversion from laparoscopic to open cholecystectomy.
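As a rough, hedged check on the "fivefold" figure cited in this answer, the sketch below recomputes an odds ratio and Wald 95% confidence interval from 2x2 counts reconstructed approximately from the percentages in PUBMED:8703145 (about 13 of 62 males and 12 of 267 females converted). The reconstructed counts are an assumption, and the interval is a generic calculation rather than the authors' own analysis.

    from math import exp, log, sqrt

    def odds_ratio_ci(a, b, c, d, z=1.96):
        # a, b = converted / not converted in group 1; c, d = the same in group 2.
        or_ = (a / b) / (c / d)
        se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        return or_, exp(log(or_) - z * se_log_or), exp(log(or_) + z * se_log_or)

    # Approximate counts: 13/62 male and 12/267 female conversions (reconstructed from 21% vs 4.5%).
    or_, lo, hi = odds_ratio_ci(13, 62 - 13, 12, 267 - 12)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # roughly fivefold higher odds in males

Single-centre estimates of this size are typically larger than pooled meta-analytic values (for example, the OR of about 1.9 for male gender in PUBMED:36612732), which average over many heterogeneous cohorts.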
Instruction: Does COPD risk vary by ethnicity? Abstracts: abstract_id: PUBMED:27103797 Does COPD risk vary by ethnicity? A retrospective cross-sectional study. Background: Lower risk of COPD has been reported in black and Asian people, raising questions of poorer recognition or reduced susceptibility. We assessed prevalence and severity of COPD in ethnic groups, controlling for smoking. Method: A retrospective cross-sectional study using routinely collected primary care data in London. COPD prevalence, severity (% predicted forced expiratory volume in 1 second [FEV1]), smoking status, and treatment were compared between ethnic groups, adjusting for age, sex, smoking, deprivation, and practice clustering. Results: Among 358,614 patients in 47 general practices, 47.6% were white, 20% black, and 5% Asian. Prevalence of COPD was 1.01% overall, 1.55% in whites, 0.58% in blacks, and 0.78% in Asians. COPD was less likely in blacks (adjusted odds ratio [OR], 0.44; 95% confidence interval [CI] 0.39-0.51) and Asians (0.82; CI, 0.68-0.98) than whites. Black COPD patients were less likely to be current smokers (OR, 0.56; CI, 0.44-0.71) and more likely to be never-smokers (OR, 4.9; CI, 3.4-7.1). Treatment of patients with similar disease severity was similar irrespective of ethnic origin, except that long-acting muscarinic antagonists were prescribed less in black COPD patients (OR, 0.53; CI, 0.42-0.68). Black ethnicity was a predictor of poorer lung function (% predicted FEV1: B coefficient, -7.6; P<0.0001), an effect not seen when ethnic-specific predicted FEV1 values were used. Conclusion: Black people in London were half as likely as whites to have COPD after adjusting for lower smoking rates in blacks. It seems likely that the differences observed were due either to ethnic differences in the way cigarettes were smoked or to ethnic differences in susceptibility to COPD. abstract_id: PUBMED:35591829 Ethnicity and other COVID-19 death risk factors in Mexico. Introduction: Patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection may develop coronavirus disease 2019 (COVID-19). Risk factors associated with death vary among countries with different ethnic backgrounds. We aimed to describe the factors associated with death in Mexicans with confirmed COVID-19. Material And Methods: We analysed the Mexican Ministry of Health's official database on people tested for SARS-CoV-2 infection by real-time reverse transcriptase-polymerase chain reaction (rtRT-PCR) of nasopharyngeal fluids. Bivariate analyses were performed to select characteristics potentially associated with death, to integrate a Cox-proportional hazards model. Results: As of May 18, 2020, a total of 177,133 persons (90,586 men and 86,551 women) in Mexico received rtRT-PCR testing for SARS-CoV-2. There were 5332 deaths among the 51,633 rtRT-PCR-confirmed cases (10.33%, 95% CI: 10.07-10.59%). The median time (interquartile range, IQR) from symptoms onset to death was 9 days (5-13 days), and from hospital admission to death 4 days (2-8 days). The analysis by age groups revealed that the significant risk of death started gradually at the age of 40 years. Independent death risk factors were obesity, hypertension, male sex, indigenous ethnicity, diabetes, chronic kidney disease, immunosuppression, chronic obstructive pulmonary disease, age > 40 years, and the need for invasive mechanical ventilation (IMV). 
Only 1959 (3.8%) cases received IMV, of whom 1893 were admitted to the intensive care unit (96.6% of those who received IMV). Conclusions: In Mexico, highly prevalent chronic diseases are risk factors for death among persons with COVID-19. Indigenous ethnicity is a poorly studied factor that needs more investigation. abstract_id: PUBMED:32490145 Impact of traditional risk factors for the outcomes of atrial fibrillation across race and ethnicity and sex groups. Background: Although traditional risk factors for atrial fibrillation (AF) and its outcomes are established in whites, their role in the pathogenesis of AF across race-ethnicity and both sexes remains unclear. Cohort studies have consistently shown worse AF-related outcomes in these groups. The objective of this study was to determine the role played by race- and sex-specific risk factors in AF outcomes in non-Hispanic blacks (NHBs), Hispanics/Latinos (H/Ls), and non-Hispanic whites (NHWs). Methods: Using electronic health records (EHR), 3607 patients with an ICD-9 code for AF were identified over a 7-year period. Risk factors were identified from ICD-9-CM claims data: hypertension (HTN), type 2 diabetes mellitus (T2DM), stroke/transient ischemic attack (TIA), smoking, chronic obstructive pulmonary disease (COPD), coronary artery disease (CAD), peripheral arterial disease (PAD) and obstructive sleep apnea (OSA). Multivariate analysis of variance was used to compare the incidence of AF risk factors. Results: NHBs and H/Ls with AF experienced more strokes than NHWs (27% and 24% vs. 19%, P < 0.01). Females had less HTN (48.4% vs 51.6% [males], P = 0.0002), CAD (47.4% vs 55.7% [males], P = 0.02), and smoking rates (38.2% vs 61.8% [males], P < 0.0001) but higher stroke rates (25.9% [female] vs 21.8% [males], P < 0.0001). Age-adjusted risk factors for stroke varied markedly across race-ethnicity and sex. Conclusions: We identified differences in risk factors for AF and stroke across race-ethnicity and sex. The findings of our study are hypothesis-generating and should be used to direct future studies.
In the logistic regression analysis, the COPD prevalence was 5.07% (147/2901) in men and 5.08% (139/2736) in women, respectively, with odds ratio (OR) 1.003, 95% CI 0.790-1.272 and P > 0.05, suggesting that the sex did not affect the COPD prevalence in the investigated samples, but age (OR = 1.096), expectoration (OR = 87.917), locomotor activity limitation (OR = 3.908) and frequency of respiration (OR = 2.512) were risk factors and associated with the development of COPD. Notably, although the tobacco smoker in male and female COPD patients were 48.6% (54/111) and 4.0% (4/101), respectively, passive smokers in female with COPD were 45.6% (46/101). Conclusion: In the Hlai population aged ≥40 years, the COPD prevalence was 5.07%. Smoking, age, expectoration, locomotor activity limitation and frequency of respiration were risk factors of COPD in Hlai ethnicity. abstract_id: PUBMED:31787547 Racial and Ethnic Disparities in Lung Adenocarcinoma Survival: A Competing-Risk Model. Background: Race/ethnicity-specific disparities in lung cancer survival have been investigated extensively. However, more studies concentrating on lung adenocarcinoma (ADC), especially those using a competing-risk model, are needed. We examined race/ethnicity-specific differences in lung ADC survival. Patients And Methods: Patients with ADC diagnosed from 2004 to 2015 were identified from the Surveillance, Epidemiology, and End Results program. Race/ethnicity was categorized into 4 groups: non-Hispanic white (NHW), non-Hispanic black (NHB), non-Hispanic Asian/Pacific Islander (NHAPI), and Hispanic. Lung cancer-specific mortality (LCSM) and other cause-specific mortality (OCSM) were evaluated using a competing-risk model. Results: On multivariate analysis, NHB patients experienced slightly lower LCSM (subdistribution hazard ratio, 0.96; 95% confidence interval, 0.94-0.98) and higher OCSM (subdistribution hazard ratio, 1.16; 95% confidence interval, 1.11-1.22) compared with NHW patients in the stage IV group. No significant differences were found in LCSM and OCSM between the NHB and NHW patients with early-stage ADC (stage I or II). Both NHAPI and Hispanic patients experienced lower OCSM and LCSM compared with the NHW patients. Additionally, NHB patients with stage IV tumors had a greater mortality risk of cardiovascular disease and a lower risk of chronic obstructive pulmonary disease than NHW patients. Conclusions: The source of racial/ethnic survival disparities that exist between NHB and NHW patients was mainly found in patients with stage IV ADC. Reducing the greater mortality rate of cardiovascular disease among NHB patients and chronic obstructive pulmonary disease among NHW patients would be conducive to narrowing the racial/ethnic gaps. Further research is warranted to determine additional influencing factors, especially among patients with stage IV ADC. abstract_id: PUBMED:32226910 Race, Ethnicity, Socioeconomic Status, and Chronic Lung Disease in the U.S. Background: Higher socioeconomic status (SES) indicators such as educational attainment and income reduce the risk of chronic lung diseases (CLDs) such as Chronic Obstructive Pulmonary Disease (COPD), emphysema, chronic bronchitis, and asthma. Marginalization-related Diminished Returns (MDRs) refer to smaller health benefits of high SES for marginalized populations such as racial and ethnic minorities compared to the socially privileged groups such as non-Hispanic Whites. 
It is still unknown, however, if MDRs also apply to the effects of education and income on CLDs. Purpose: Using a nationally representative sample, the current study explored racial and ethnic variation in the associations between educational attainment and income and CLDs among American adults. Methods: In this study, we analyzed data (n = 25,659) from a nationally representative survey of American adults in 2013 and 2014. Wave one of the Population Assessment of Tobacco and Health (PATH)-Adult study was used. The independent variables were educational attainment (less than high school = 1, high school graduate = 2, and college graduate =3) and income (living out of poverty =1, living in poverty = 0). The dependent variable was any CLDs (i.e., COPD, emphysema, chronic bronchitis, and asthma). Age, gender, employment, and region were the covariates. Race and ethnicity were the moderators. Logistic regressions were fitted to analyze the data. Results: Individuals with higher educational attainment and those with higher income (who lived out of poverty) had lower odds of CLDs. Race and ethnicity showed statistically significant interactions with educational attainment and income, suggesting that the protective effects of high education and income on reducing odds of CLDs were smaller for Blacks and Hispanics than for non-Hispanic Whites. Conclusions: Education and income better reduce the risk of CLDs among Whites than Hispanics and Blacks. That means we should expect disproportionately higher than expected risk of CLDs in Hispanics and Blacks with high SES. Future research should test if high levels of environmental risk factors contribute to the high risk of CLDs in high income and highly educated Black and Hispanic Americans. Policy makers should not reduce health inequalities to SES gaps because disparities sustain across SES levels, with high SES Blacks and Hispanics remaining at risk of health problems. abstract_id: PUBMED:31725658 The burden of health conditions across race and ethnicity for aging Americans: Disability-adjusted life years. Despite evidence suggesting race and ethnicity are important factors in responses to environmental exposures, drug therapies, and disease risk, few studies focus on the health needs of racially- and ethnically-diverse aging adults.The objective of this study was to determine the burden of 10 health conditions across race and ethnicity for a nationally-representative sample of aging Americans.Data from the 1998 to 2014 waves of the Health and Retirement Study, an ongoing longitudinal-panel study, were analyzed.Those aged over 50 years who identified as Black, Hispanic, or White were included. There were 5510 Blacks, 3423 Hispanics, and 21,168 Whites in the study.At each wave, participants reported if they had cancer, chronic obstructive pulmonary disease, congestive heart failure, diabetes, back pain, hypertension, a fractured hip, myocardial infarction, rheumatism or arthritis, and a stroke. Disability-adjusted life years (DALYs) were calculated for each health condition by race and ethnicity. Ranked DALYs determined how race and ethnicity was differentially impacted by the burden of each health condition. Sample weights were utilized to make DALY estimates nationally-representative.Weighted DALY estimates (in thousands) ranged from 1405 to 55,631 for Blacks, 931 to 28,442 for Hispanics, and 15,313 to 295,623 for Whites. 
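For readers unfamiliar with the metric, a disability-adjusted life year combines years of life lost to premature death (YLL) with years lived with disability (YLD). A minimal sketch with hypothetical numbers (none of these figures come from the study above):

def dalys(deaths, years_lost_per_death, prevalent_cases, disability_weight, years_with_condition):
    yll = deaths * years_lost_per_death                                 # years of life lost
    yld = prevalent_cases * disability_weight * years_with_condition    # years lived with disability
    return yll + yld

print(dalys(1_000, 12, 50_000, 0.2, 5))   # 62000 person-years for this made-up condition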
Although the health conditions affected each race and ethnicity differently, hypertension had the largest number of DALYs, and hip fractures had the fewest across race and ethnicity. In total, there were an estimated 198,621, 101,462, and 1,187,725 DALYs for older Black, Hispanic, and White aging adults.Our findings indicate that race and ethnicity may be influential on health and disease for aging adults in the United States. Monitoring DALYs may help guide the flow of health-related expenditures, improve the impact of health interventions, advance inclusive health care for diverse aging adult populations, and prepare healthcare providers for serving the health needs of aging adults. abstract_id: PUBMED:33539330 Sexual Orientation Disparities in Risk Factors for Adverse COVID-19-Related Outcomes, by Race/Ethnicity - Behavioral Risk Factor Surveillance System, United States, 2017-2019. Sexual minority persons experience health disparities associated with sexual stigma and discrimination and have a high prevalence of several health conditions that have been associated with severe coronavirus disease 2019 (COVID-19) (1,2). Current COVID-19 surveillance systems do not capture information about sexual orientation. To begin bridging the gap in knowledge about COVID-19 risk among sexual minority adults, CDC examined disparities between sexual minority and heterosexual adults in the prevalence of underlying conditions with strong or mixed evidence of associations with severe COVID-19-related illness (3), by using data from the 2017-2019 Behavioral Risk Factor Surveillance System (BRFSS).* When age, sex, and survey year are adjusted, sexual minority persons have higher prevalences than do heterosexual persons of self-reported cancer, kidney disease, chronic obstructive pulmonary disease (COPD), heart disease (including myocardial infarction, angina, or coronary heart disease), obesity, smoking, diabetes, asthma, hypertension, and stroke. Sexual minority adults who are members of racial/ethnic minority groups disproportionately affected by the pandemic also have higher prevalences of several of these health conditions than do racial/ethnic minority adults who are heterosexual. Collecting data on sexual orientation in COVID-19 surveillance and other studies would improve knowledge about disparities in infection and adverse outcomes by sexual orientation, thereby informing more equitable responses to the pandemic. abstract_id: PUBMED:27440974 Heart Failure Hospitalization by Race/Ethnicity, Gender and Age in California: Implications for Prevention. Objective: We examined variation in rates of hospitalization, risk factors, and costs by race/ethnicity, gender and age among heart failure (HF) patients. Methods: We analyzed California hospital discharge data for patients in 2007 (n=58,544) and 2010 (n=57,219) with a primary diagnosis of HF (ICD-9 codes: 402, 404, 428). HF cases included African Americans (Blacks; 14%), Hispanic/Latinos (21%), and non-Hispanic Whites (65%). Age-adjusted prevalence rates per 100,000 US population were computed per CDC methodology. 
Results: Four major trends emerged: 1) Overall HF rates declined by 7.7% from 284.7 in 2007 to 262.8 in 2010; despite the decline, the rates for males and Blacks remained higher compared with others in both years; 2) while rates for Blacks (aged ≤54) were 6 times higher compared with same age Whites, rates for Hispanics were higher than Whites in the middle age category; 3) risk factors for HF included hypertension, chronic heart disease, chronic kidney disease, atrial fibrillation, and chronic obstructive pulmonary disease; and 4) submitted hospitalization costs were higher for males, Blacks, and younger patients compared with other groups. Conclusions: Health inequality in HF persists as hospitalization rates for Blacks remain higher compared with Whites and Hispanics. These findings reinforce the need to determine whether increased access to providers, or implementing proven hypertension and diabetes preventive programs among minorities might reduce subsequent hospitalization for HF in these populations. abstract_id: PUBMED:30914490 Race/Ethnicity and 30-Day Readmission Rates in Medicare Beneficiaries With COPD. Background: COPD is now included in Medicare's hospital readmission reduction program. Hospitals with excessive risk-adjusted 30-d readmission rates receive financial penalties. Race/ethnicity is not included in the risk-adjustment models. We examined whether race/ethnicity was independently associated with readmission after controlling for clinical factors and other demographic variables. Methods: We used the 100% Medicare in-patient (Part A) files to identify patients hospitalized with COPD (MS-DRG codes 190, 191, 192) who were discharged between January 1, 2013, and September 13, 2014. The outcome measure was an unplanned readmission within 30 d of hospital discharge. We used generalized linear mixed models to test the independent effects of race/ethnicity on 30-d readmission. Results: The sample included 298,706 Medicare beneficiaries hospitalized for COPD: 87% white, 8% African-American, and 5% Hispanic. Mean age was 77.7 ± 7.7 y. Overall, 17.3% of subjects experienced an unplanned readmission. Whites (17.4%) and African-Americans (17.7%) had significantly higher unadjusted rates than Hispanics, and Hispanics demonstrated the lowest readmission rate (16.3%). The minority groups generally displayed higher-risk clinical profiles. After controlling for those differences, the multivariable model suggested a benefit for both minority groups in terms of readmission risk. The adjusted readmission rates for whites, African-Americans, and Hispanics were 16.6%, 15.9%, and 14.6%, respectively. Conclusions: Racial/ethnic disparities in observed readmission rates may be largely explained by the more severe clinical profiles of minority populations. Controlling for known clinical risk factors effectively mediates the relationship between race/ethnicity and readmission. Answer: Yes, COPD risk does vary by ethnicity. A retrospective cross-sectional study using primary care data in London found that the prevalence of COPD was lower in black (0.58%) and Asian (0.78%) populations compared to whites (1.55%). After adjusting for factors such as age, sex, smoking, deprivation, and practice clustering, blacks were found to be half as likely as whites to have COPD. This difference was attributed to either ethnic differences in the way cigarettes were smoked or to ethnic differences in susceptibility to COPD (PUBMED:27103797). 
Another study in a Hlai community in Hainan Island of China reported a COPD prevalence of 5.07% among residents aged 40 years or older. The study identified smoking, age, expectoration, locomotor activity limitation, and frequency of respiration as risk factors for COPD in this Hlai population (PUBMED:27216214). In the United States, racial and ethnic disparities in lung disease, including COPD, have also been documented. Higher socioeconomic status indicators such as educational attainment and income reduce the risk of chronic lung diseases, but these protective effects were smaller for Blacks and Hispanics than for non-Hispanic Whites (PUBMED:32226910). Additionally, a study examining Medicare beneficiaries with COPD found that racial/ethnic disparities in observed readmission rates may be largely explained by the more severe clinical profiles of minority populations; after controlling for known clinical risk factors, adjusted readmission rates were in fact lower for African-Americans and Hispanics than for whites (PUBMED:30914490). Overall, these studies indicate that COPD risk and outcomes do vary by ethnicity, with some ethnic groups having lower prevalence and others experiencing different risk profiles and outcomes.
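The "half as likely" reading rests on an adjusted odds ratio from multivariable logistic regression. A minimal sketch of how such an estimate is produced, on synthetic data (variable names, sample size, and effect sizes are illustrative, not the study's):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "black": rng.integers(0, 2, n),    # stand-in ethnicity indicator
    "age": rng.normal(55, 12, n),
    "smoker": rng.integers(0, 2, n),
})
# Simulate COPD with a true log-odds of -0.8 for "black" (odds ratio of roughly 0.45).
logit_p = -4.0 + 0.05 * df["age"] + 1.2 * df["smoker"] - 0.8 * df["black"]
df["copd"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("copd ~ black + age + smoker", data=df).fit(disp=0)
print(np.exp(fit.params["black"]))           # adjusted odds ratio, close to the simulated 0.45
print(np.exp(fit.conf_int().loc["black"]))   # its 95% confidence interval

The same machinery underlies the reported adjusted OR of 0.44 (95% CI 0.39-0.51), which is what licenses the "half as likely" interpretation.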
Instruction: Is there a "July effect" in surgery for adolescent idiopathic scoliosis? Abstracts: abstract_id: PUBMED:24695932 Is there a "July effect" in surgery for adolescent idiopathic scoliosis? Background: Prior studies in various medical and surgical specialties have suggested that the changeover of medical trainees in the United States at the end of the academic year, or so-called "July effect," negatively impacts the quality of patient care, including increasing morbidity and decreasing efficiency. We analyzed whether the outcomes of surgery for adolescent idiopathic scoliosis involving physicians-in-training as first assistants were affected by the time of year the surgery was performed. Methods: We performed a multicenter retrospective study with use of a prospectively collected database to examine outcomes following instrumented posterior spinal fusion in patients with adolescent idiopathic scoliosis. The minimum duration of follow-up was two years. The outcomes of procedures performed by twelve surgeons whose first assistants were all surgeons-in-training were analyzed on the basis of the month of year that the surgery was performed. Variables assessed included blood loss, operative time, length of hospitalization, radiographic outcomes, Scoliosis Research Society (SRS-22) scores, and complications. Results: Five hundred and seventy-five surgical procedures for adolescent idiopathic scoliosis were performed, most in June (14%) and July (13%) (p ≤ 0.001). Preoperative radiographic characteristics were similar across all months as were postoperative radiographic outcomes. Preoperative and two-year SRS-22 scores were also similar across all months, with the exception of scores in the preoperative pain domain, which showed worse pain for patients who underwent surgery in February. No significant differences in blood loss, operative time, or length of hospital stay were observed when these variables were analyzed on the basis of the month in which the surgery was performed. The rate of patients experiencing any complication (23.5% overall) was not associated with the month of surgery, nor were the rates for the specific subcategories of neurologic, pulmonary, gastrointestinal, instrumentation, or surgical site-related complications. With the exception of three gastrointestinal complications that were observed in July, the odds of a patient having a complication from surgery in July/August were unchanged from other months. Conclusions: Overall, the data did not provide evidence to support a July effect. Our results suggest that surgery for adolescent idiopathic scoliosis during July and August yields safety and outcomes equal to that of other months. abstract_id: PUBMED:23873226 Effect of spine fellow training on operative outcomes, affirming graduated responsibility. Study Design: Retrospective review of prospectively collected surgical data. Objective: This study sought to determine the effect of fellow education during the course of the academic year (August-July) on surgical outcomes in adolescent idiopathic scoliosis. One surgeon and one type of surgery were chosen to minimize confounding factors. Summary Of Background Data: Educating and training the next generation of physicians and surgeons is necessary for the survival and continuation of medical care. There has been recent momentum to document scientifically that medical education is safe. 
Spine surgery is complex and demanding, with a steep learning curve, making it an ideal model to detect any potential negative impact of medical education. Methods: Subjects: adolescent patients undergoing posterior spinal surgery, between August 2007 and July 2010, by a single senior surgeon at one institution with a fellow as the only surgical assistant. Demographic and perioperative data were collected and then segmented by surgical date into quarters according to the rotations of the academic year. One fellow was included in each quarter during the 4 years, resulting in 16 fellows across the 4 quarters. An analysis of variance model was used to assess differences in operative time, blood loss, length of stay, and complications between the quarters of the year. Results: There were no significant differences between the groups regarding age, sex, or Lenke curve type. No statistically significant differences were found between the 4 quarters of the fellowship year for estimated blood loss, use of cell saver, length of stay, operative time, and complication rate. Conclusion: This study is the first to show that fellow education during the course of the academic year did not impact the patient outcomes studied. It is clear that while there is significant academic benefit for the fellows as they complete their spine fellowship, there is no negative impact for patients. Level Of Evidence: 4. abstract_id: PUBMED:29802465 The effect of deformity correction on psychiatric condition of the adolescent with adolescent idiopathic scoliosis. Purpose: The purpose of this prospective study was to evaluate the effects of deformity correction on body image, quality of life, self-esteem, depression and anxiety in patients with adolescent idiopathic scoliosis (AIS) who underwent surgery. Methods: Between June 2014 and July 2015, 41 consecutive patients who underwent surgery for AIS were compared with the control group of 52 healthy patients regarding the changes in the pre- and postoperative quality of life and psychiatric status of patients with deformity correction. Body Cathexis Scale (BCS), Pediatric Quality of Life Inventory (PedsQL), Children's Depression Inventory (CDI), Piers-Harris self-esteem questionnaire (PH-SEQ) and state-trait Anxiety Inventory for Children were used to evaluate the patients. Results: There was a significant decrease in postoperative first-year Cobb angle and trunkal shift imbalance compared with the preoperative values (p = 0.0001 and p = 0.0001). Postoperative first-year thoracic kyphosis angle and body height showed a significant increase according to preoperative values (p = 0.0001 and p = 0.0001). Postoperative PH-SEQ score and PedsQL total score showed a significant increase in the study group compared to the preoperative level, but no significant difference was found between the control group. Postoperative CDI score, BCS score, STAI-state and STAI-trait scores decreased significantly in the study group compared with preoperative scores. Conclusions: Surgical correction of deformity in AIS provided significant improvements regarding quality of life and psychiatric condition. Spinal surgeons should be aware of the possible psychological problems of AIS patients and should keep in mind that deformity correction not only improves physical health but also improves mental health. These slides can be retrieved under Electronic Supplementary Material. 
abstract_id: PUBMED:27927573 Effect of Surgical Fusion on Volitional Weight-Shifting in Individuals With Adolescent Idiopathic Scoliosis. Study Design: Prospective. Objectives: The goals of this study were to (1) evaluate the differences in weightbearing symmetry between individuals with adolescent idiopathic scoliosis (AIS) and typically developing controls; (2) observe the effect of posterior spinal fusion and instrumentation (PSFI) on volitional weight-shifting at 1 and 2 years postoperatively; and (3) evaluate whether lowest instrumented fusion level (ie, lowest instrumented vertebra [LIV]) in PSFI has an effect on volitional weight-shifting. Summary Of Background Data: Previous studies have conflicting findings with regard to the effect of scoliosis on postural control tasks as well as the effect of surgery. They have also noted an inconsistent effect of PSFI at different LIVs, with more distal LIVs exhibiting greater reductions in postoperative range of motion. Methods: The study was designed with an AIS group of 41 patients (8 males and 33 females) with AIS who underwent PSFI, along with a Control Group of 24 age-matched typically developing participants (12 male and 12 female). Both groups performed postural control tasks (static balance and volitional weight-shifting), with the AIS group repeating the tasks at 1 and 2 years postoperatively. Results: At baseline, the AIS group showed increased weightbearing asymmetry than the Control Group (p = .01). The AIS group showed improvements in volitional weight-shifting at 2 years over baseline (p < .01). There was no effect of LIV on volitional weight-shifting by the second postoperative year. Conclusions: Individuals with AIS have greater weightbearing asymmetry but improved volitional weight-shifting over typically developing controls. PSFI improves volitional weight-shifting beyond preoperative baseline but does not differ significantly by LIV. abstract_id: PUBMED:27549661 The effect of sublaminar wires on the rib hump deformity during scoliosis correction manoeuvres. Introduction: During thoracic curve correction, the tightening of the sublaminar wires through concavity creates a medial and a dorsal translation of the spine. However, little is known about the effect of the sublaminar wires on the axial plane. Methods: This is prospective case series analysis of 30 consecutive surgical patients with main thoracic adolescent idiopathic scoliosis. All of the patients were fused with hybrid instrumentation (apical concavity-sublaminar wires) and differential rod contouring (over-kyphosis concavity/under-kyphosis convexity). The degrees of the rib hump were measured with a scoliometer placed at the apex of the deformity at five different times: (1) preoperatively through the Adam's test, and during surgery (sterilised scoliometer), (2) with the patient lying prone, (3) after the Ponte osteotomies, (4) after the apical sublaminar tightening, and (5) after convexity apical derotation and compression manoeuvres. Results: (1) Preoperatively, the Adam's test was 16.3° ± 4.6. (2) Lying prone and under general anaesthesia, it decreased to 11.4° ± 3.9. (3) After exposure and Ponte osteotomies, it was 7.1° ± 4. (4) After the wire tightening, it was 10.8° ± 4.7. (5) After the convexity manoeuvres, it was 4.8° ± 3.7. The degrees of the rib hump final correction were 11.6° ± 4 (70 % correction). The tightening of the sublaminar wires increased the rib hump by 3.5°. 
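A quick check of the scoliometer arithmetic reported above, using the stage means (the small discrepancies with the published 11.6° and 3.5° presumably reflect averaging per-patient differences rather than differencing the group means):

stage = {"adams_preop": 16.3, "prone": 11.4, "after_osteotomies": 7.1,
         "after_wire_tightening": 10.8, "after_convexity_manoeuvres": 4.8}
total_correction = stage["adams_preop"] - stage["after_convexity_manoeuvres"]   # 11.5 deg (reported 11.6)
pct_correction = 100 * total_correction / stage["adams_preop"]                  # ~71% (reported 70%)
wire_effect = stage["after_wire_tightening"] - stage["after_osteotomies"]       # +3.7 deg (reported +3.5)
print(round(total_correction, 1), round(pct_correction), round(wire_effect, 1))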
Conclusions: The sublaminar wire tightening towards the concave rod seemed to create an effect opposite of the desired effect, increasing the apical rotation and the thoracic rib hump deformity. Convexity manoeuvres (apical screw derotation and compression) are necessary and must be coupled with an under-bending of the convex rod to neutralise this effect. abstract_id: PUBMED:33372960 Single- versus Dual-Attending Surgeon Approach for Spine Deformity: A Systematic Review and Meta-Analysis. Background: Surgical management of spine deformity is associated with significant morbidity. Recent literature has inconsistently demonstrated better outcomes after utilizing 2 attending surgeons for spine deformity. Objective: To conduct a systematic review and meta-analysis on studies reporting outcomes following single- vs dual-attending surgeons for spine deformity. Methods: MEDLINE, Embase, Web of science, and Cochrane databases were last searched on July 16, 2020. A total of 1013 records were identified excluding duplicates. After screening, 10 studies (4 cohort, 6 case series) were included in the meta-analysis. Random-effect models were used to pool the effect estimates by study design. When feasible, further subgroup analysis by deformity type was conducted. Results: A total of 953 patients were analyzed. Pooled results from propensity score-matched cohort studies revealed that the single-surgeon approach was unfavorably associated with a nonstatistically significant higher blood loss (mean difference = 421.0 mL; 95% CI: -28.2, 870.2), a statistically significant higher operative time (mean difference = 94.3 min; 95% CI: 54.9, 133), length of stay (mean difference = 0.84 d; 95% CI: 0.46, 1.22), and an increased risk of complications (Mantel-Haenszel risk ratio = 2.93; 95% CI: 1.12, 7.66). Data from pooled case series demonstrated similar results for all outcomes. Moreover, these results did not differ significantly between deformity types (adolescent idiopathic scoliosis and adult spinal deformity). Conclusion: Dual-attending surgeon approach appeared to be associated with reduced operative time, shorter hospital stays, and reduced risk of complications. These findings may potentially improve outcomes in surgical treatment of spine deformity. abstract_id: PUBMED:27054455 Effect of Surgical Approach on Pulmonary Function in Adolescent Idiopathic Scoliosis Patients: A Systemic Review and Meta-analysis. Study Design: Systemic review and meta-analysis. Objective: To analyze the effect of spinal fusion and instrumentation for adolescent idiopathic scoliosis (AIS) on absolute pulmonary function test (PFTs). Summary Of Background Data: Pulmonary function is correlated with severity of deformity in AIS patients and studies that have analyzed the effect of spinal fusion and instrumentation on PFTs for AIS have reported inconsistent results. There is a need to analyze the effect of spinal fusion on PFTs with stratification by surgical approach. Methods: Our analysis included 22 studies. Cohen's d effect sizes were calculated for absolute PFT outcome measures with 95% confidence intervals (CI). 
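Because the review above pools Cohen's d values for pre- versus post-operative pulmonary function, a minimal sketch of the statistic may help (the numbers are invented, and the abstract does not state which paired-design variant of d was used):

import numpy as np

def cohens_d(pre, post):
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

pre_fvc = [2.8, 3.1, 2.5, 3.0, 2.9]    # litres, before surgery (toy values)
post_fvc = [3.0, 3.3, 2.7, 3.2, 3.0]   # litres, two years after surgery
print(round(cohens_d(pre_fvc, post_fvc), 2))   # ~0.78 for these toy data

An effect size whose confidence interval excludes zero, as reported for posterior fusion (0.35-0.65), indicates a genuine postoperative change.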
Meta-analyses were performed at each postoperative time frame for six homogeneous surgical approaches: (i) combined anterior release and posterior fusion with instrumentation; (ii) combined video assisted anterior release and posterior fusion with instrumentation without thoracoplasty; (iii) posterior fusion with instrumentation without thoracoplasty; (iv) anterior fusion with instrumentation and without thoracoplasty; (v) video assisted anterior fusion with instrumentation without thoracoplasty; and (vi) any scoliosis surgery with additional thoracoplasty. Results: Anterior spinal fusion with instrumentation, any scoliosis surgery with concomitant thoracoplasty, or video-assisted anterior fusion with instrumentation for AIS had similar absolute PFTs at their 2 year postoperative follow up compared with their preoperative PFTs (effect sizes ranging from -0.2-0.2 with all CI crossing "0"). Posterior spinal fusion with instrumentation (with or without an anterior release) demonstrated small to moderate increases in PFTs 2 years postoperatively (effect sizes ranging from 0.35-0.65 with all CI not crossing "0"). Conclusion: Anterior fusion with instrumentation, regardless of the approach, and any scoliosis surgery with concomitant thoracoplasty do not lead to significant change in pulmonary functions 2 year after surgery. Posterior spinal fusion with instrumentation (with or without an anterior release) resulted in small to moderate increases in PFTs. Level Of Evidence: N/A. abstract_id: PUBMED:36922012 The Collateral Effect of Enhanced Recovery After Surgery Protocols on Spine Patients With Neuromuscular Scoliosis. Introduction: Enhanced recovery after surgery (ERAS) protocols are often specific to a specific type of surgery without assessing the overall effect on the ward. Previous studies have demonstrated reduced length of stay (LOS) with ERAS protocols in patients with adolescent idiopathic scoliosis (AIS), although the patients are often healthy and with few or no comorbidities. In 2018, we used ERAS principles for patients undergoing AIS surgery with a subsequent 40% reduced LOS. The current study aims to assess the potential collateral effect of LOS in patients surgically treated for neuromuscular scoliosis admitted to the same ward and treated by the same staff but without a standardized ERAS protocol. Methods: All patients undergoing neuromuscular surgery 2 years before and after ERAS introduction (AIS patients) with a gross motor function classification score of 4 to 5 were included. LOS, intensive care stay, and postoperative complications were recorded. After discharge, all complications leading to readmission and mortality were noted with a minimum of 2 years of follow-up using a nationwide registry. Results: Forty-six patients were included; 20 pre-ERAS and 26 post-ERAS. Cross groups, there were no differences in diagnosis, preoperative curve size, pulmonary or cardiac comorbidities, weight, sex, or age. Postoperative care in the intensive care unit was unchanged between the two groups (1.2 vs 1.1; P = 0.298). When comparing LOS, we found a 41% reduction in the post-ERAS group (11 vs 6.5; P < 0.001) whereas the 90-day readmission rates were without any significant difference (45% vs 34% P = 0.22) We found no difference in the 2-year mortality in either group. Conclusion: The employment of ERAS principles in a relatively uncomplicated patient group had a positive, collateral effect on more complex patients treated in the same ward. 
We believe that training involving the caregiving staff is equally important as pharmacological protocols. abstract_id: PUBMED:30684354 Effect of Preoperative SpineCor® Treatment on Surgical Outcome in Idiopathic Scoliosis: An Observational Study. BACKGROUND Idiopathic scoliosis is a three-dimensional deformity of the spine. We investigated the effect of preoperative treatment with SpineCor® dynamic brace on the efficiency of surgical correction from a posterior approach in adolescent idiopathic scoliosis. MATERIAL AND METHODS This was a retrospective observational study. Participants were 53 girls who underwent surgery from posterior approach due to idiopathic adolescent scoliosis, divided into a study group (Group A, 27 girls) and a control group (Group B, 26 girls). Girls in the study group had previously undergone treatment with the SpineCor® brace. Outcome measures were amount of correction and coronal balance based on anteroposterior plain radiographs obtained prior to surgery, at 1 week after surgery, and at 12 months after surgery. RESULTS In both groups, satisfactory deformity correction was achieved after surgery (Group A, 73%±12 vs. Group B, 68%±16) and at 12-month follow-up (75%±12 vs. 68%±12, respectively), with no statistically significant differences identified. Directly after surgery, patients preoperatively treated with the SpineCor® brace displayed smaller coronal balance deviation compared with the preoperative measurement, with significant differences in the outcome achieved at 1 week after surgery in Group B. At 12-month follow-up, both groups had significant coronal balance improvement. CONCLUSIONS This is the first study assessing the effect of dynamic brace treatment on scoliosis surgery. The study shows that a history of preoperative treatment with the SpineCor® dynamic brace does not affect the amount of the achieved correction of AIS directly after surgery or at 12-month follow-up, but it does facilitate faster restoration of normal coronal balance. abstract_id: PUBMED:15247577 The effect of intraoperative traction during posterior spinal instrumentation and fusion for adolescent idiopathic scoliosis. Study Design: A retrospective study comparing patients having traction and a control group not having traction during posterior spinal instrumentation and fusion (PSIF) for adolescent idiopathic scoliosis (AIS). Objective: To evaluate the effect of intraoperative traction on surgical correction of AIS. Summary Of Background Data: When the Cotrel-Dubousset instrumentation system was introduced, the use of intraoperative traction was advocated. However, there is no specific report documenting the effect of intraoperative traction on the correction of AIS. Methods: The medical and radiologic records of 140 AIS patients treated by PSIF were reviewed. Forty of these patients had intraoperative traction using a head halter associated with lower extremity skin traction. The radiologic outcome was compared between the two groups intraoperatively (before instrumentation with the first rod) and after surgery using Student t tests (level of significance = 0.05). Results: The intraoperative and postoperative corrections of the coronal primary Cobb angle were similar for both groups, although the patients in the traction group had smaller preoperative Cobb angles and more flexible curves and were instrumented with more screws. The postoperative thoracic kyphosis was significantly increased in both groups. 
The lumbar lordosis at the 1-year follow-up was maintained in the control group, but it was significantly decreased in the traction group. Conclusion: The authors do not recommend the routine use of intraoperative traction using a head halter combined with skin traction for all AIS patients undergoing PSIF. However, it could be helpful in selected cases, such as in patients having pelvic obliquity and requiring instrumentation of the pelvis. Answer: Based on the information provided in the abstracts, there does not appear to be evidence supporting the existence of a "July effect" in surgery for adolescent idiopathic scoliosis. The "July effect" refers to the supposed negative impact on patient care quality due to the changeover of medical trainees at the end of the academic year. However, a multicenter retrospective study analyzed outcomes following instrumented posterior spinal fusion in patients with adolescent idiopathic scoliosis and found no significant differences in blood loss, operative time, length of hospital stay, radiographic outcomes, Scoliosis Research Society (SRS-22) scores, or complications when analyzed based on the month in which the surgery was performed. The rate of patients experiencing any complication was not associated with the month of surgery, including the months of July and August, suggesting that surgery for adolescent idiopathic scoliosis during these months yields safety and outcomes equal to that of other months (PUBMED:24695932). Additionally, another study that focused on the effect of fellow education during the academic year on surgical outcomes in adolescent idiopathic scoliosis also found no impact on patient outcomes studied, including estimated blood loss, use of cell saver, length of stay, operative time, and complication rate (PUBMED:23873226). Therefore, the data from these studies do not provide evidence to support a July effect in surgeries for adolescent idiopathic scoliosis.
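To make the month-of-surgery comparison concrete, the core question is whether the odds of a complication differ when surgery falls in July/August. A sketch with hypothetical counts, chosen only so that they sum to the study's 575 procedures at roughly its overall 23.5% complication rate (the actual monthly counts are not given in the abstract):

jul_aug = {"complication": 36, "no_complication": 118}
other_months = {"complication": 99, "no_complication": 322}
odds_ratio = (jul_aug["complication"] * other_months["no_complication"]) / (
    jul_aug["no_complication"] * other_months["complication"])
print(round(odds_ratio, 2))   # ~0.99: essentially identical odds, i.e. no July effect in these toy counts

In practice the comparison would be accompanied by a chi-square or Fisher exact test for a p-value.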
Instruction: Can the neural basis of repression be studied in the MRI scanner? Abstracts: abstract_id: PUBMED:23638050 Can the neural basis of repression be studied in the MRI scanner? New insights from two free association paradigms. Background: The psychodynamic theory of repression suggests that experiences which are related to internal conflicts become unconscious. Previous attempts to investigate repression experimentally were based on voluntary, intentional suppression of stimulus material. Unconscious repression of conflict-related material is arguably due to different processes, but has never been studied with neuroimaging methods. Methods: We used functional magnetic resonance imaging (fMRI) in addition with skin conductance recordings during two free association paradigms to identify the neural mechanisms underlying forgetting of freely associated words according to repression theory. Results: In the first experiment, free association to subsequently forgotten words was accompanied by increases in skin conductance responses (SCRs) and reaction times (RTs), indicating autonomic arousal, and by activation of the anterior cingulate cortex. These findings are consistent with the hypothesis that these associations were repressed because they elicited internal conflicts. To test this idea more directly, we conducted a second experiment in which participants freely associated to conflict-related sentences. Indeed, these associations were more likely to be forgotten than associations to not conflict-related sentences and were accompanied by increases in SCRs and RTs. Furthermore, we observed enhanced activation of the anterior cingulate cortex and deactivation of hippocampus and parahippocampal cortex during association to conflict-related sentences. Conclusions: These two experiments demonstrate that high autonomic arousal during free association predicts subsequent memory failure, accompanied by increased activation of conflict-related and deactivation of memory-related brain regions. These results are consistent with the hypothesis that during repression, explicit memory systems are down-regulated by the anterior cingulate cortex. abstract_id: PUBMED:30319179 Effects of MRI scanner parameters on breast cancer radiomics. Purpose: To assess the impact of varying magnetic resonance imaging (MRI) scanner parameters on the extraction of algorithmic features in breast MRI radiomics studies. Methods: In this retrospective study, breast imaging data for 272 patients were analyzed with magnetic resonance (MR) images. From the MR images, we assembled and implemented 529 algorithmic features of breast tumors and fibrograndular tissue (FGT). We divided the features into 10 groups based on the type of data used for the feature extraction and the nature of the extracted information. Three scanner parameters were considered: scanner manufacturer, scanner magnetic field strength, and slice thickness. We assessed the impact of each of the scanner parameters on each of the feature by testing whether the feature values are systematically diverse for different values of these scanner parameters. A two-sample t-test has been used to establish whether the impact of a scanner parameter on values of a feature is significant and receiver operating characteristics have been used for to establish the extent of that effect. Results: On average, higher proportion (69% FGT versus 20% tumor) of FGT related features were affected by the three scanner parameters. 
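The two-sample t-test plus ROC analysis described in the radiomics abstract can be sketched as follows (synthetic feature values; a feature untouched by the scanner would give an AUC near 0.5, while values around the reported 0.81 indicate a strong vendor effect):

import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
vendor_a = rng.normal(1.00, 0.10, 120)   # one radiomic feature, scans from vendor A
vendor_b = rng.normal(1.12, 0.10, 150)   # same feature from vendor B, with a systematic offset
t_stat, p_value = ttest_ind(vendor_a, vendor_b)
auc = roc_auc_score(np.r_[np.zeros(120), np.ones(150)], np.r_[vendor_a, vendor_b])
print(f"p = {p_value:.1e}, AUC = {auc:.2f}")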
Of all feature groups and scanner parameters, the feature group related to the variation in FGT enhancement was found to be the most sensitive to the scanner manufacturer (AUC = 0.81 ± 0.14). Conclusions: Features involving calculations from FGT are particularly sensitive to the scanner parameters. abstract_id: PUBMED:34203896 Impact of Preprocessing and Harmonization Methods on the Removal of Scanner Effects in Brain MRI Radiomic Features. In brain MRI radiomics studies, the non-biological variations introduced by different image acquisition settings, namely scanner effects, affect the reliability and reproducibility of the radiomics results. This paper assesses how the preprocessing methods (including N4 bias field correction and image resampling) and the harmonization methods (either the six intensity normalization methods working on brain MRI images or the ComBat method working on radiomic features) help to remove the scanner effects and improve the radiomic feature reproducibility in brain MRI radiomics. The analyses were based on in vitro datasets (homogeneous and heterogeneous phantom data) and in vivo datasets (brain MRI images collected from healthy volunteers and clinical patients with brain tumors). The results show that the ComBat method is essential and vital to remove scanner effects in brain MRI radiomic studies. Moreover, the intensity normalization methods, while not able to remove scanner effects at the radiomic feature level, still yield more comparable MRI images and improve the robustness of the harmonized features to the choice among ComBat implementations. abstract_id: PUBMED:25792858 Acute vertigo in an anesthesia provider during exposure to a 3T MRI scanner. Vertigo induced by exposure to the magnetic field of a magnetic resonance imaging (MRI) scanner is a well-known phenomenon within the radiology community but is not widely appreciated by other clinical specialists. Here, we describe a case of an anesthetist experiencing acute vertigo while providing sedation to a patient undergoing a 3 Tesla MRI scan. After discussing previous reports, and the evidence surrounding MRI-induced vertigo, we review potential etiologies that include the effects of both static and time-varying magnetic fields on the vestibular apparatus. We conclude our review by discussing the occupational standards that exist for MRI exposure and methods to minimize the risks of MRI-induced vertigo for clinicians working in the MRI environment. abstract_id: PUBMED:12191952 A neural correlate of consciousness related to repression. In previous research Libet (1966) discovered that a critical time period for neural activation is necessary in order for a stimulus to become conscious. This necessary time period varies from subject to subject. In this current study, six subjects for whom the time for neural activation of consciousness had been previously determined were administered a battery of psychological tests on the basis of which ratings were made of degree of repressiveness. As hypothesized, repressive subjects had a longer critical time period for neural activation of consciousness, suggesting the possibility that this neurophysiological time factor is a necessary condition for the development of repression. abstract_id: PUBMED:33530350 Silicon Carbide and MRI: Towards Developing a MRI Safe Neural Interface. An essential method to investigate neuromodulation effects of an invasive neural interface (INI) is magnetic resonance imaging (MRI). 
Presently, MRI imaging of patients with neural implants is highly restricted in high field MRI (e.g., 3 T and higher) due to patient safety concerns. This results in lower resolution MRI images and, consequently, degrades the efficacy of MRI imaging for diagnostic purposes in these patients. Cubic silicon carbide (3C-SiC) is a biocompatible wide-band-gap semiconductor with a high thermal conductivity and magnetic susceptibility compatible with brain tissue. It also has modifiable electrical conductivity through doping level control. These properties can improve the MRI compliance of 3C-SiC INIs, specifically in high field MRI scanning. In this work, the MRI compliance of epitaxial SiC films grown on various Si wafers, used to implement a monolithic neural implant (all-SiC), was studied. Via finite element method (FEM) and Fourier-based simulations, the specific absorption rate (SAR), induced heating, and image artifacts caused by the portion of the implant within a brain tissue phantom located in a 7 T small animal MRI machine were estimated and measured. The specific goal was to compare implant materials; thus, the effect of leads outside the tissue was not considered. The results of the simulations were validated via phantom experiments in the same 7 T MRI system. The simulation and experimental results revealed that free-standing 3C-SiC films had little to no image artifacts compared to silicon and platinum reference materials inside the MRI at 7 T. In addition, FEM simulations predicted an ~30% SAR reduction for 3C-SiC compared to Pt. These initial simulations and experiments indicate an all-SiC INI may effectively reduce MRI induced heating and image artifacts in high field MRI. In order to evaluate the MRI safety of a closed-loop, fully functional all-SiC INI as per ISO/TS 10974:2018 standard, additional research and development is being conducted and will be reported at a later date. abstract_id: PUBMED:34456460 Inter-Scanner Harmonization of High Angular Resolution DW-MRI using Null Space Deep Learning. Diffusion-weighted magnetic resonance imaging (DW-MRI) allows for non-invasive imaging of the local fiber architecture of the human brain at a millimetric scale. Multiple classical approaches have been proposed to detect both single (e.g., tensors) and multiple (e.g., constrained spherical deconvolution, CSD) fiber population orientations per voxel. However, existing techniques generally exhibit low reproducibility across MRI scanners. Herein, we propose a data-driven technique using a neural network design which exploits two categories of data. First, training data were acquired on three squirrel monkey brains using ex-vivo DW-MRI and histology of the brain. Second, repeated scans of human subjects were acquired on two different scanners to augment the learning of the network proposed. To use these data, we propose a new network architecture, the null space deep network (NSDN), to simultaneously learn on traditional observed/truth pairs (e.g., MRI-histology voxels) along with repeated observations without a known truth (e.g., scan-rescan MRI). The NSDN was tested on twenty percent of the histology voxels that were kept completely blind to the network. NSDN significantly improved absolute performance relative to histology by 3.87% over CSD and 1.42% over a recently proposed deep neural network approach. Moreover, it improved reproducibility on the paired data by 21.19% over CSD and 10.09% over a recently proposed deep approach. 
Finally, NSDN improved generalizability of the model to a third in vivo human scanner (which was not used in training) by 16.08% over CSD and 10.41% over a recently proposed deep learning approach. This work suggests that data-driven approaches for local fiber reconstruction are more reproducible, informative and precise and offers a novel, practical method for determining these models. abstract_id: PUBMED:34147591 Estimation of in-scanner head pose changes during structural MRI using a convolutional neural network trained on eye tracker video. Introduction: In-scanner head motion is a common cause of reduced image quality in neuroimaging, and causes systematic brain-wide changes in cortical thickness and volumetric estimates derived from structural MRI scans. There are few widely available methods for measuring head motion during structural MRI. Here, we train a deep learning predictive model to estimate changes in head pose using video obtained from an in-scanner eye tracker during an EPI-BOLD acquisition with participants undertaking deliberate in-scanner head movements. The predictive model was used to estimate head pose changes during structural MRI scans, and correlated with cortical thickness and subcortical volume estimates. Methods: 21 healthy controls (age 32 ± 13 years, 11 female) were studied. Participants carried out a series of stereotyped prompted in-scanner head motions during acquisition of an EPI-BOLD sequence with simultaneous recording of eye tracker video. Motion-affected and motion-free whole brain T1-weighted MRI were also obtained. Image coregistration was used to estimate changes in head pose over the duration of the EPI-BOLD scan, and used to train a predictive model to estimate head pose changes from the video data. Model performance was quantified by assessing the coefficient of determination (R2). We evaluated the utility of our technique by assessing the relationship between video-based head pose changes during structural MRI and (i) vertex-wise cortical thickness and (ii) subcortical volume estimates. Results: Video-based head pose estimates were significantly correlated with ground truth head pose changes estimated from EPI-BOLD imaging in a hold-out dataset. We observed a general brain-wide overall reduction in cortical thickness with increased head motion, with some isolated regions showing increased cortical thickness estimates with increased motion. Subcortical volumes were generally reduced in motion affected scans. Conclusions: We trained a predictive model to estimate changes in head pose during structural MRI scans using in-scanner eye tracker video. The method is independent of individual image acquisition parameters and does not require markers to be to be fixed to the patient, suggesting it may be well suited to clinical imaging and research environments. Head pose changes estimated using our approach can be used as covariates for morphometric image analyses to improve the neurobiological validity of structural imaging studies of brain development and disease. abstract_id: PUBMED:36007437 Brain atrophy measurement over a MRI scanner change in multiple sclerosis. Background: A change in MRI hardware impacts brain volume measurements. The aim of this study was to use MRI data from multiple sclerosis (MS) patients and healthy control subjects (HCs) to statistically model how to adjust brain atrophy measures in MS patients after a major scanner upgrade. 
Methods: We scanned 20 MS patients and 26 HCs before and three months after a major scanner upgrade (1.5 T Siemens Healthineers Magnetom Avanto to 3 T Siemens Healthineers Skyra Fit). The patient group also underwent standardized serial MRIs before and after the scanner change. Percentage whole brain volume changes (PBVC) measured by Structural Image Evaluation using Normalization of Atrophy (SIENA) in the HCs was used to estimate a corrective term based on a linear model. The factor was internally validated in HCs, and then applied to the MS group. Results: Mean PBVC during the scanner change was higher in MS than HCs (-4.1 ± 0.8 % versus -3.4 ± 0.6 %). A fixed corrective term of 3.4 (95% confidence interval: 3.13-3.67)% was estimated based on the observed average changes in HCs. Age and gender did not have a significant influence on this corrective term. After adjustment, a linear mixed effects model showed that the brain atrophy measures in MS during the scanner upgrade were not anymore associated with the scanner type (old vs new scanner; p = 0.29). Conclusion: A scanner change affects brain atrophy measures in longitudinal cohorts. The inclusion of a corrective term based on changes observed in HCs helps to adjust for the known and unknown factors associated with a scanner upgrade on a group level. abstract_id: PUBMED:29922197 The Neural Basis of and a Common Neural Circuitry in Different Types of Pro-social Behavior. Pro-social behaviors are voluntary behaviors that benefit other people or society as a whole, such as charitable donations, cooperation, trust, altruistic punishment, and fairness. These behaviors have been widely described through non self-interest decision-making in behavioral experimental studies and are thought to be increased by social preference motives. Importantly, recent studies using a combination of neuroimaging and brain stimulation, designed to reveal the neural mechanisms of pro-social behaviors, have found that a wide range of brain areas, specifically the prefrontal cortex, anterior insula, anterior cingulate cortex, and amygdala, are correlated or causally related with pro-social behaviors. In this review, we summarize the research on the neural basis of various kinds of pro-social behaviors and describe a common shared neural circuitry of these pro-social behaviors. We introduce several general ways in which experimental economics and neuroscience can be combined to develop important contributions to understanding social decision-making and pro-social behaviors. Future research should attempt to explore the neural circuitry between the frontal lobes and deeper brain areas. Answer: Yes, the neural basis of repression can be studied in the MRI scanner. Research has utilized functional magnetic resonance imaging (fMRI) in conjunction with skin conductance recordings during free association paradigms to investigate the neural mechanisms underlying forgetting of freely associated words according to repression theory. The results from these studies indicate that high autonomic arousal during free association predicts subsequent memory failure, accompanied by increased activation of conflict-related brain regions, such as the anterior cingulate cortex, and deactivation of memory-related regions, like the hippocampus and parahippocampal cortex. These findings support the hypothesis that during repression, explicit memory systems are down-regulated by the anterior cingulate cortex (PUBMED:23638050).
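Returning to the brain-atrophy abstract (PUBMED:36007437): the adjustment it proposes amounts to adding the healthy-control-derived corrective term to the atrophy measured across the upgrade. A worked version with the reported group-level figures (illustration only; the paper applies the term within a linear mixed-effects model):

corrective_term = 3.4       # % PBVC attributable to the 1.5T-to-3T change, estimated from healthy controls
pbvc_ms_observed = -4.1     # % mean whole-brain volume change measured in MS patients across the upgrade
pbvc_ms_adjusted = pbvc_ms_observed + corrective_term
print(round(pbvc_ms_adjusted, 1))   # -0.7%: the residual change attributable to disease rather than hardware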
Instruction: Is urinary incontinence a barrier to exercise in women? Abstracts: abstract_id: PUBMED:16055580 Is urinary incontinence a barrier to exercise in women? Objective: To describe the prevalence of urinary incontinence during exercise in women, estimate whether exercise intensity is related to leakage severity, and report women's assessments of incontinence as a barrier to exercise. Methods: Questionnaires were mailed to 5,130 women aged 18-60 years drawn from National Family Opinion research panels. Physical activity levels were assessed by the International Physical Activity Questionnaire. Urinary incontinence, defined as involuntary leakage in the last 30 days, was assessed with the Sandvik Severity Index and a global measure of bother. Prevalence estimates were adjusted via post-stratification weighting. Results: A total of 3,364 eligible women responded (68%), of whom 34.6% were insufficiently active (95% confidence interval [CI] 32.7-36.5%), 29.7% were sufficiently active (95% CI 27.9-31.5%), and 35.7% were highly active (95% CI 33.8-37.6%). Urinary incontinence prevalence was 34.3% (95% CI 32.5-36.1%). One in seven women experienced urinary leakage during physical activity; this was more common among highly active (15.9%) than less active women (11.8%) (P = .01). After adjusting for age, comorbidities, education, and race, women with very severe incontinence were 2.64 times (95% CI 1.25-5.55) more likely to be insufficiently active than continent women. Incontinence was a moderate or substantial barrier to exercise for 9.8% (95% CI 8.8-10.9%) of women. Of incontinent women, the proportion for whom incontinence was a moderate or substantial barrier to exercise increased with each severity category: 9.2%, slight; 37.8%, moderate; 64.6%, severe; and 85.3%, very severe (P < .01). Conclusion: Urinary incontinence is perceived as a barrier to exercise, particularly by women with more severe leakage. abstract_id: PUBMED:33971737 Effect of Pelvic Floor Symptoms on Women's Participation in Exercise: A Mixed-Methods Systematic Review With Meta-analysis. Objective: To (1) review the effect of pelvic floor (PF) symptoms (urinary incontinence [UI], pelvic organ prolapse, and anal incontinence) on exercise participation in women, and (2) explore PF symptoms as a barrier to exercising. Design: Mixed-methods systematic review with meta-analysis. Literature Search: Eight databases were systematically searched up to September 2020. Study Selection Criteria: We included full-text, peer-reviewed observational, experimental, or qualitative studies in adult, community-dwelling women with PF symptoms. Outcomes included the participant-reported effect on exercise or the perception of PF symptoms as an exercise barrier. Study quality was assessed using a modified version of the Mixed Methods Appraisal Tool. Data Synthesis: Meta-analysis was performed where possible. Deductive and inductive content analysis was used to synthesize qualitative data. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework and the GRADE-Confidence in the Evidence from Reviews of Qualitative research (CERQual) guided interpretation of the certainty of evidence. Results: Thirty-three studies were included. In 47% (95% confidence interval [CI]: 37%, 56%; I2 = 98.6%) of women with past, current, or fear of PF symptoms, UI symptoms adversely affected exercise participation (21 studies, n = 14 836 women). 
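The I² of 98.6% quoted above can be obtained from Cochran's Q. A minimal sketch; the Q value here is back-calculated for illustration, since the abstract does not report it:

def i_squared(q, num_studies):
    df = num_studies - 1
    return max(0.0, (q - df) / q) * 100   # Higgins' I^2: % of variability due to between-study heterogeneity

print(round(i_squared(q=1430.0, num_studies=21), 1))   # ~98.6 for 21 studies, matching the reported value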
Thirty-nine percent (95% CI: 22%, 57%; I2 = 93.0%; 6 studies, n = 426) reported a moderate or great effect on exercise. Pelvic organ prolapse affected exercise for 28% of women (95% CI: 24%, 33%; I2 = 0.0%; 2 studies, n = 406). There were no quantitative studies of anal incontinence. Conclusion: For 1 in 2 women, UI symptoms negatively affect exercise participation. Half of women with UI reported either stopping or modifying exercise due to their symptoms. Limited data on pelvic organ prolapse also demonstrated adverse exercise effect. J Orthop Sports Phys Ther 2021;51(7):345-361. Epub 10 May 2021. doi:10.2519/jospt.2021.10200. abstract_id: PUBMED:34939122 Pelvic Floor Symptoms Are an Overlooked Barrier to Exercise Participation: A Cross-Sectional Online Survey of 4556 Women Who Are Symptomatic. Objective: This study aimed to: (1) investigate barriers to exercise in women with pelvic floor (PF) symptoms (urinary incontinence [UI], anal incontinence [AI], and pelvic organ prolapse [POP]); (2) determine factors associated with reporting PF symptoms as a substantial exercise barrier; and (3) investigate the association between reporting PF symptoms as an exercise barrier and physical inactivity. Methods: In this cross-sectional survey, Australian women who were 18 to 65 years of age and had PF symptoms completed an anonymous online survey (May-September 2018) containing validated PF and physical activity questionnaires: Questionnaire for Female Urinary Incontinence Diagnosis, Incontinence Severity Index, Pelvic Floor Bother Questionnaire, and International Physical Activity Questionnaire. Participants reported exercise barriers and the degree to which the barriers limited participation. Binary logistic regression was used to identify variables associated with (1) identifying PF symptoms as a substantial exercise barrier and (2) physical inactivity. Results: In this cohort (N = 4556), 31% (n = 1429) reported PF symptoms as a substantial exercise barrier; UI was the most frequently reported barrier. Two-thirds of participants who identified POP and UI as exercise barriers had stopped exercising. The odds of reporting PF symptoms as a substantial exercise barrier were significantly higher for women with severe UI (odds ratio [OR] = 4.77; 95% CI = 3.60-6.34), high symptom bother (UI OR = 10.19; 95% CI = 7.24-14.37; POP OR = 22.38; 95% CI = 13.04-36.60; AI OR = 29.66; 95% CI = 7.21-122.07), those who had a vaginal delivery (1 birth OR = 2.04; 95% CI = 1.63-2.56), or those with a third- or fourth-degree obstetric tear (OR = 1.47; 95% CI = 1.24-1.76). The odds of being physically inactive were greater in women who identified PF symptoms as an exercise barrier than in those who did not (OR = 1.33; 95% CI = 1.1-1.59). Conclusion: One in 3 women reported PF symptoms as a substantial exercise barrier, and this was associated with increased odds of physical inactivity. Impact: Physical inactivity is a major cause of mortality and morbidity in women. Pelvic floor symptoms stop women participating in exercise and are associated with physical inactivity. Screening and management of PF symptoms could allow women to remain physically active across their life span. Lay Summary: Pelvic floor symptoms are a substantial barrier to exercise in women of all ages, causing them to stop exercising and increasing the odds of being physical inactive. Physical therapists can screen and help women manage their PF symptoms so that they remain physically active. abstract_id: PUBMED:11905931 Too wet to exercise? 
Leaking urine as a barrier to physical activity in women. Leaking urine is frequently mentioned (anecdotally) by women as a barrier to physical activity. The aim of this paper was to use results from the Australian Longitudinal Study on Women's Health (ALSWH) to explore the prevalence of leaking urine in Australian women, and to ascertain whether leaking urine might be a barrier to participation for women. More than 41,000 women participated in the baseline surveys of the ALSWH in 1996. More than one third of the mid-age (45-50 years) and older (70-75) women and 13% of the young women (18-23) reported leaking urine. There was a cross-sectional association between leaking urine and physical activity, such that women with more frequent urinary leakage were also more likely to report low levels of physical activity. More than one thousand of those who reported leaking urine at baseline participated in a follow-up study in 1999. Of these, more than 40% of the mid-age women (who were aged 48-53 in 1999), and one in seven of the younger (21-26 years) and older (73-79 years) women reported leaking urine during sport or exercise. More than one third of the mid-age women and more than one quarter of the older women, but only 7% of the younger women said they avoided sporting activities because of leaking urine. The data are highly suggestive that leaking urine may be a barrier to physical activity, especially among mid-age women. As current estimates suggest that fewer than half of all Australian women are adequately active for health benefit, health professionals could be more proactive in raising this issue with women and offering help through non-invasive strategies such as pelvic floor muscle exercises. abstract_id: PUBMED:28786872 Barriers to Exercise Among Women With Urgency Urinary Incontinence: Patient and Provider Perspectives. Objective: The aim of this study was to describe important barriers to exercise in older women with urgency urinary incontinence (UUI) from the patient and provider perspectives. Methods: Six focus groups (2 in active women, 2 in sedentary women, and 2 in providers) were conducted with 36 women with UUI and 18 providers. Focus group discussions were transcribed verbatim. All transcripts were coded and analyzed by 2 independent reviewers. Investigators identified emergent themes and concepts using a modified biopsychosocial conceptual model. Results: A wide range of physical, psychological, social, and environmental factors were perceived to influence exercise. Although women with UUI identified pain as a strong barrier to exercise, providers did not. Both women with UUI and providers identified shame associated with incontinence as a significant barrier, and, conversely, satisfaction with UUI treatment was noted as an enabler for exercising. Women and providers had incongruent views on the need for supervision during exercise; women viewed supervision as a barrier to exercise, whereas providers viewed lack of supervision as a barrier to exercise. Opportunity for socialization was noted as a major enabler of exercise by all groups and suggests that exercise programs that promote interactions with peers may increase exercise participation. The importance of financial incentive and reimbursement was congruent between women and their providers. Conclusions: Women with UUI have unique perspectives on barriers to exercise. 
Understanding women's perspective can aid clinicians and researchers in improving exercise counseling and in creating exercise programs for women with UUI. abstract_id: PUBMED:36553882 Kegel Exercise Training Program among Women with Urinary Incontinence. A common condition with a large global prevalence and a persistent medical taboo for many people is urinary incontinence. Around one in three women globally are impacted by it. The most frequently suggested physical therapy treatment for women with stress incontinence or urge incontinence is Kegel exercise (also called pelvic floor muscle training). This study aims to assess the effects of a Kegel exercise training program among women with urinary incontinence. The study was conducted at three government hospitals in Egypt's Port Said city's outpatient gynecological clinic. The intervention design was quasi-experimental. In total, 292 women with urine incontinence who visited the research sites made up the subjects. The necessary data were gathered using an interview questionnaire. Improvements in urinary incontinence and quality of life were positively correlated with daily Kegel exercise practice. Urinary incontinence has statistically significant positive correlations with age (p = 0.026), respiratory rate (p = 0.007), and body mass index (p = 0.026) as women grow older. Urinary incontinence, being single, and increasing pulse, however, had adversely significant negative correlations (p = 0.031 and 0.020, respectively). Urinary incontinence affects women's overall wellbeing, particularly in the emotional and social spheres, as well as their quality of life and their ability to participate in normal everyday activities. Following the adoption of the Kegel exercise training program, there was a substantial improvement in both urine incontinence and quality of life. abstract_id: PUBMED:36739199 Screening for pelvic floor symptoms in exercising women: a survey of 636 health and exercise professionals. Objectives: This study aimed to establish health and exercise professionals' (i) current practice of screening for pelvic floor (PF) symptoms in women within sports/exercise settings (ii) between-professional group differences in screening practice (iii) confidence and attitudes towards screening for PF symptoms and (iv) barrier/enablers towards engagement in future screening practice. Design: Observational, cross-sectional survey. Methods: Australian health and exercise professionals (n = 636) working with exercising women participated in a purpose-designed and piloted, online survey about PF symptom screening in professional practice. Data were analysed descriptively and groups compared using Chi-square/Kruskal-Wallis tests. Results: Survey respondents included physiotherapists (39%), personal trainers/fitness instructors (38%) and exercise physiologists (12%), with a mean of 12 years of practice (SD: 9.7, range: 0-46). One in two participants never screened women for PF symptoms; 23% screened when indicated. Pregnant/recently post-natal women (44%) were more commonly screened for PF symptoms than younger women (18-25 years:28%) and those competing in high-impact sports (32%). Reasons for not screening included waiting for patients to disclose symptoms (41%) and an absence of PF questions on screening tools (37%). Most participants were willing to screen PF symptoms but cited a lack of knowledge, training and confidence as barriers. 
Conclusions: Screening for PF symptoms in exercising women is not common practice, especially in at-risk groups such as young, high-impact athletes. Including PF questions in existing pre-exercise questionnaires and providing professional development to improve knowledge of indications for screening and evidence-based management options may facilitate early symptom identification and prevent secondary exercise cessation. abstract_id: PUBMED:15385857 Exercise and urinary incontinence in women. Urinary incontinence is a common problem in women and may significantly impair their quality of life. Although women often report stress urinary incontinence during exercise, current data indicates that most types of exercise are not a risk factor for the development of urinary incontinence. However, certain extreme high-impact sports such as parachute jumping may cause pelvic organ support defects that result in stress urinary incontinence. Eating disorders also increase the risk of urinary incontinence in athletes. Overall, women should be encouraged to pursue physical activity that will benefit their general health without the risk of development of urinary incontinence later in life. Women athletes should be counseled about the increased risk of urinary incontinence with ultra high-impact sports and eating disorders. abstract_id: PUBMED:34031331 Development and validation of pelvic floor muscles exercise intervention for urinary incontinence among pregnant women. Introduction: The prevalence of urinary incontinence among pregnant women is high in Malaysia. However, healthcare providers appear to pay little attention to it along with a limited local intervention that addresses the continence health during pregnancy. This study aims to develop and validate intervention with pelvic floor muscle exercise (PFME) for pregnant women. Materials And Methods: The development of PFME intervention was guided by the Medical Research Council Framework for Developing and Evaluating Complex Intervention (MRC Framework). This involved four phases: identification of current research evidence, expert opinion, validation via focus group discussions with physiotherapists and pregnant women, and piloting the intervention using a single group pre-post design among 30 pregnant women at Maternity Hospital Kuala Lumpur to assess the feasibility of the intervention by evaluating changes in knowledge and attitude. The qualitative approach was used to analyse the first three phases, while non-parametric methods were used to analyse the pilot prepost test results. Results: Based on research evidence and guidelines found during the literature review, a PFME intervention was developed using a new paradigm incorporating two theories, the Health Belief Model and Motivational Interviewing that have been shown to be important in continence promotion and exercise adherence. The contribution of the panel of experts in refining the intervention to meet the local context, endorses the achievement of the intervention's content validity. While, the focus group discussion with pregnant women and physiotherapists revealed the face-validity of the intervention. The findings of the pilot pre-testing showed that PFME knowledge (p<0.001) and attitude (p=0.011) improved significantly immediately following the intervention. Conclusions: Evidently, this is a pioneer study that illustrates the development of a Malaysian context-adapting PFME intervention on the basis of recommended steps using the MRC Framework. 
Incorporating a theory-based and rigorous validation approach into the development of the PFME intervention brought novel perspectives to the intervention. Given the promising preliminary results of the pre-testing pilot study, the PFME intervention could be implemented in the planned randomised control trial to validate the robustness of the results. abstract_id: PUBMED:36327627 The feasibility of a multimodal exercise program for sedentary postmenopausal women with urinary incontinence: A pilot randomized controlled trial. Objectives: The aim was to investigate the feasibility and effects of an eight-week multimodal exercise program in sedentary postmenopausal women with urinary incontinence (UI) compared with pelvic floor muscle training (PFMT) only. Study Design: This was a pilot randomized controlled trial. The participants were randomly allocated to either the intervention group (IG), who received a multimodal exercise program that incorporated aerobic exercise, resistance exercise, and PFMT, or the control group (CG), who received PFMT only. The intervention was provided twice weekly for eight weeks. Main Outcome Measures: The primary outcome was feasibility in relation to consent rate, attendance, withdrawal rate, satisfaction with program, and adverse events. The secondary outcomes included pelvic floor muscle function (assessed using digital palpation and manometry), UI-specific quality of life (QoL) (rated on the International Consultation on Incontinence Questionnaire Lower Urinary Tract Symptoms Quality of Life [ICIQ-LUTSqol]), symptom severity (assessed using the International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form [ICIQ-UI SF] and a bladder diary), and physical activity levels (rated on the International Physical Activity Questionnaire- short form [IPAQ]). Results: Thirty-three of 63 eligible participants (52.4 %) consented to participate in the study and 26 completed it (IG n = 13, CG n = 13); the withdrawal rate was lower in the IG than in the CG (13.3 % vs 27.8 %). In both groups the attendance rate was 100 % and the satisfaction rate was 84.6 %. No adverse events were reported. The intention-to-treat analysis showed that UI-specific QoL (p = 0.011, mean difference: -5.62, 95 % CI: -9.67 to -1.57) and symptom severity measured using the ICIQ-UI SF (p = 0.001, mean difference: -3.23, 95 % CI: -4.93 to -1.54) were significantly improved in the IG after intervention, while no significant changes were found in the CG after intervention. The daytime frequency (p = 0.001) and number of leakages (p = 0.045) recorded in the bladder diary were significantly reduced in the CG after intervention. No significant differences between groups were found on any of the outcome measurements after intervention, except the daytime frequency from the bladder diary, which was significantly more reduced in the CG than in the IG (p = 0.003, mean difference: -0.28, 95 % CI: -2.25 to 1.67). Conclusion: An eight-week multimodal exercise program is feasible for sedentary postmenopausal women with UI, and the multimodal exercise program demonstrated potential benefits for improving UI-specific QoL and symptom severity in this population. The trial was registered at ClinicalTrials.gov (NCT04351750). Answer: Yes, urinary incontinence (UI) is a barrier to exercise in women. Studies have shown that urinary incontinence is perceived as a barrier to exercise, particularly by women with more severe leakage (PUBMED:16055580). 
A mixed-methods systematic review with meta-analysis found that UI symptoms negatively affect exercise participation for 1 in 2 women, with many either stopping or modifying exercise due to their symptoms (PUBMED:33971737). Another cross-sectional survey indicated that 31% of women reported pelvic floor (PF) symptoms as a substantial exercise barrier, and this was associated with increased odds of physical inactivity (PUBMED:34939122). Furthermore, research from the Australian Longitudinal Study on Women's Health suggested that leaking urine may be a barrier to physical activity, especially among mid-age women, with a significant number of women avoiding sporting activities because of leaking urine (PUBMED:11905931). Older women with urgency urinary incontinence (UUI) also identified a range of barriers to exercise, including physical, psychological, social, and environmental factors, with shame associated with incontinence being a significant barrier (PUBMED:28786872). Additionally, health and exercise professionals reported that screening for PF symptoms in exercising women is not common practice, and they cited a lack of knowledge, training, and confidence as barriers to screening (PUBMED:36739199). Despite these barriers, interventions such as Kegel exercise training programs have been shown to improve urinary incontinence and quality of life, suggesting that management of UI could help women remain physically active (PUBMED:36553882). In summary, urinary incontinence is a significant barrier to exercise in women, affecting their participation and leading to physical inactivity in some cases. However, with appropriate management and interventions, women can overcome these barriers and maintain an active lifestyle.
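For readers who want to check the survey figures cited in this answer (for example, 31% of N = 4556 women reporting pelvic floor symptoms as a substantial exercise barrier), the sketch below reproduces a simple normal-approximation 95% confidence interval for a prevalence estimate. It is illustrative only and is not the statistical code used in the cited studies; the counts are taken from the abstract of PUBMED:34939122.

```python
# Sketch only: normal-approximation confidence interval for a prevalence estimate.
from math import sqrt

def prevalence_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Return (proportion, lower bound, upper bound) of an approximate 95% CI."""
    p = events / n
    se = sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# 1429 of 4556 women reported pelvic floor symptoms as a substantial exercise barrier
p, lo, hi = prevalence_ci(1429, 4556)
print(f"prevalence {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```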
Instruction: Is pregnancy a teachable moment for smoking cessation among US Latino expectant fathers? Abstracts: abstract_id: PUBMED:20013439 Is pregnancy a teachable moment for smoking cessation among US Latino expectant fathers? A pilot study. Objective: Pregnancy may be a time when US Latino expectant fathers consider quitting smoking. A 'teachable moment' is theorized to increase motivation to change a behavior through increased risk perceptions, emotional responses, and changes in self-image. Design: We recruited 30 Spanish-speaking expectant fathers through their pregnant partners. We assessed expectant fathers' diet, exercise, and smoking and teachable moment constructs (risk perceptions, emotional responses, and self-image).We also tested correlations between teachable moment constructs and motivation to change behaviors. Results: Latino expectant fathers had high-risk perceptions that their smoking harmed the pregnancy (M=4.4, SD=0.5 on five-point scale) and strong emotional responses about their smoking during pregnancy (M=3.9, SD=1.1). They also felt it was their role to make the pregnancy healthy (M=4.4, SD=0.8). They felt less strongly that their diet and exercise affected the pregnancy. The teachable moment constructs for smoking were strongly correlated with motivation to quit smoking; the same was not true for diet and exercise. Conclusions: Latino expectant fathers seem aware that their smoking could harm the pregnancy but seem less concerned about the effect of their diet and exercise on the pregnancy. Pregnancy may be a time to help Latino expectant fathers quit smoking. abstract_id: PUBMED:37723506 Effectiveness of behavior change interventions for smoking cessation among expectant and new fathers: findings from a systematic review. Background: Smoking cessation during pregnancy and the postpartum period by both women and their partners offers multiple health benefits. However, compared to pregnant/postpartum women, their partners are less likely to actively seek smoking cessation services. There is an increased recognition about the importance of tailored approaches to smoking cessation for expectant and new fathers. While Behavior Change Interventions (BCIs) are a promising approach for smoking cessation interventions, evidence on effectiveness exclusively among expectant and new fathers are fragmented and does not allow for many firm conclusions to be drawn. Methods: We conducted a systematic review on effectiveness of BCIs on smoking cessation outcomes of expectant and new fathers both through individual and/or couple-based interventions. Peer reviewed articles were identified from eight databases without any date or language restriction.Two independent reviewers screened studies for relevance, assessed methodological quality of relevant studies, and extracted data from studies using a predeveloped data extraction sheet. Results: We retrieved 1222 studies, of which 39 were considered for full text screening after reviewing the titles and abstracts. An additional eight studies were identified from reviewing the reference list of review articles picked up by the databases search. A total of nine Randomised Control Trials were included in the study. Six studies targeted expectant/new fathers, two targeted couples and one primarily targeted women with an intervention component to men. While the follow-up measurements for men varied across studies, the majority reported biochemically verified quit rates at 6 months. 
Most of the interventions showed positive effects on cessation outcomes. BCIs were heterogeneous across studies. Findings suggest that gender-targeted interventions are more likely to produce positive cessation outcomes. Conclusions: This systematic review found limited evidence supporting the effectiveness of BCIs among expectant and new fathers, although the majority of studies show positive effects of these interventions on smoking cessation outcomes. There remains a need for more research targeted at expectant and new fathers. Further, there is a need to identify how smoking cessation service delivery can better address the needs of (all) gender(s) during pregnancy. abstract_id: PUBMED:32320843 Brief cessation advice, nicotine replacement therapy sampling and active referral (BANSAR) for smoking expectant fathers: Study protocol for a multicentre, pragmatic randomised controlled trial. Background: Pregnancy presents a teachable moment to engage male smokers whose partners are pregnant in smoking cessation. Evidence on how to approach and help these smokers quit smoking in antenatal settings has remained scarce. This paper presents the rationale and study design of a trial which aims to evaluate the effectiveness of a brief intervention model for promoting smoking cessation in expectant fathers. Methods: BANSAR is a pragmatic randomised controlled trial conducted in antenatal clinics in seven public hospitals in Hong Kong, China. An estimated 1148 fathers who smoke at least one cigarette daily and whose partners are pregnant and non-smoking will be randomised (1:1) to receive brief advice combined with a 1-week sample of nicotine replacement therapy (NRT) and active referral to smoking cessation services, or brief advice only (usual care). Outcomes will be assessed at 3 and 6 months after treatment initiation. The primary outcome is carbon monoxide-verified (<4 parts per million) abstinence at 6 months post-treatment initiation. Secondary outcomes include self-reported 7-day point-prevalence abstinence and 24-week continuous abstinence, use of smoking cessation service and NRT and quit attempt, and smoking reduction, change in nicotine dependence and intention to quit in continuing smokers. Comment: This trial will provide real-world evidence on the effectiveness of a combined brief intervention model for smoking cessation in expectant fathers, an understudied population. The findings may be particularly relevant to low and middle-income countries, where male-to-female smoking ratios and birth rates tend to be higher than in higher-income countries. Trial Registration: ClinicalTrials.gov, number NCT03671707. abstract_id: PUBMED:25406226 Efficacy of a couple-based randomized controlled trial to help Latino fathers quit smoking during pregnancy and postpartum: the Parejas trial. Background: Although many Latinos in the United States smoke, they receive assistance to quit less often than non-Latinos. To address this disparity, we recruited Latino couples into a randomized controlled trial and provided a smoking cessation program during a teachable moment, when men's partners were pregnant. Methods: We compared two interventions: (i) written materials plus nicotine replacement therapy (NRT) to (ii) materials, NRT, and couple-based counseling that addressed smoking cessation and couples communication. We recruited 348 expectant fathers who smoked via their pregnant partners from county health departments.
Our primary outcome was 7-day point prevalence smoking abstinence and was collected from November 2010 through April 2013 and analyzed in February 2014. Results: We found high rates of cessation but no arm differences in smoking rates at the end of pregnancy (0.31 vs. 0.30, materials only vs. counseling, respectively) and 12 months after randomization (postpartum: 0.39 vs. 0.38). We found high quit rates among nondaily smokers but no arm differences (0.43 vs. 0.46 in pregnancy and 0.52 vs. 0.48 postpartum). Among daily smokers, we found lower quit rates with no arm differences but effects favoring the intervention arm (0.13 vs. 0.16 in pregnancy and 0.17 vs. 0.24 postpartum). Conclusions: A less intensive intervention promoted cessation equal to more intensive counseling. Postpartum might be a more powerful time to promote cessation among Latino men. Impact: Less intensive interventions when delivered during teachable moments for Latino men could result in a high smoking cessation rate and could reduce disparities. abstract_id: PUBMED:24621544 Smoking motives, quitting motives, and opinions about smoking cessation support among expectant or new fathers. Objective: The aims of this study were to identify smoking and quitting motives among expectant or new fathers who were in the precontemplation or contemplation stage of smoking cessation and to explore their perceptions of smoking cessation interventions. Design: This study used a descriptive qualitative design. Setting: The study was conducted in an outpatient antenatal clinic and postpartum unit of a large university hospital. Participants: A convenience sample of five expectant fathers and five new fathers who smoked was used. Method: Qualitative thematic analysis was used to analyze the transcripts of audio-recorded interviews. Results: Despite their reluctance to quit smoking, all the participants made changes in their smoking behaviors during pregnancy or postpartum to protect their partners and infants from the odor and/or potential harm of secondhand and thirdhand smoke. Our findings reveal that pregnancy and childbirth may be a time when men experience additional and unique stress that influences continued smoking but may also give rise to unique motives for future smoking reduction and cessation among men previously resistant to quitting. Furthermore, expectant or new fathers may be more drawn to smoking cessation interventions that foster their own personal strategies to reduce or quit smoking and that respect their needs for self-reliance and control. Conclusion: The perinatal period may be an opportune time for a motivationally based proactive smoking cessation intervention among male smokers. abstract_id: PUBMED:25844907 Relationships among spousal communication, self-efficacy, and motivation among expectant Latino fathers who smoke. Objective: Cigarette smoking is a prevalent problem among Latinos, yet little is known about what factors motivate them to quit smoking or make them feel more confident that they can. Given cultural emphases on familial bonds among Latinos (e.g., familismo), it is possible that communication processes among Latino spouses play an important role. The present study tested a mechanistic model in which perceived spousal constructive communication patterns predicted changes in level of motivation for smoking cessation through changes in self-efficacy among Latino expectant fathers. 
Methods: Latino males (n = 173) and their pregnant partners participated in a couple-based intervention targeting males' smoking. Couples completed self-report measures of constructive communication, self-efficacy (male partners only), and motivation to quit (male partners only) at 4 time points throughout the intervention. Results: Higher levels of perceived constructive communication among Latino male partners predicted subsequent increases in male partners' self-efficacy and, to a lesser degree, motivation to quit smoking; however, self-efficacy did not mediate associations between constructive communication and motivation to quit smoking. Furthermore, positive relationships with communication were only significant at measurements taken after completion of the intervention. Female partners' level of perceived constructive communication did not predict male partners' outcomes. Conclusion: These results provide preliminary evidence to support the utility of couple-based interventions for Latino men who smoke. Findings also suggest that perceptions of communication processes among Latino partners (particularly male partners) may be an important target for interventions aimed at increasing desire and perceived ability to quit smoking among Latino men. abstract_id: PUBMED:32758182 Association of smoking behavior among Chinese expectant fathers and smoking abstinence after their partner becomes pregnant: a cross-sectional study. Background: Exposure to secondhand smoke (SHS) during pregnancy can cause pregnancy complications and adverse birth outcomes. About 40% of Chinese expectant fathers are smokers and they rarely attempt to quit smoking. There is a paucity of effective smoking cessation services targeting this population. In this study, we assessed the smoking behavior of Chinese expectant fathers and examined its association with smoking abstinence after their partner became pregnant, which is an essential prerequisite for designing effective smoking cessation interventions. Methods: We conducted a cross-sectional survey in the obstetrics and gynecology clinic of three tertiary hospitals in China. Expectant fathers who smoked at least one cigarette per day for 1 month within the past 12 months were invited to participate in this study. The participants were asked to complete a structured questionnaire that assessed their smoking behaviors before and after their partner became pregnant. Results: From December 2017 to March 2018, we recruited a total of 466 eligible expectant fathers, among whom 323 (69.3%) were identified as current smokers and 143 (30.7%) were ex-smokers. Using lasso regression, 19 features were selected from among 27 independent variables. The results of the selected multivariable logistic regression model showed that knowledge about the health hazards of smoking among smokers (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.24 to 1.58; p < 0.001), knowledge about the health hazards of SHS to pregnant women (OR 1.46; 95% CI 1.09 to 1.97; p < 0.001), knowledge about harm to the fetus and newborn (OR 1.58; 95% CI 1.25 to 2.03; p < 0.001), and being a first-time expectant father (OR 2.08; 95% CI 1.02 to 3.85; p = 0.046) were significantly positively associated with smoking abstinence among expectant fathers after their partner became pregnant.
Significantly negative associations were found for severe dysfunctionality in terms of family support (OR 0.48; 95% CI 0.24 to 0.95; p = 0.036) and smoking only outside the home (OR 0.81; 95% CI 0.26 to 0.98; p < 0.001). Conclusions: In this study, we identified several factors associated with smoking abstinence among expectant fathers after their partner became pregnant. These findings can guide the development of effective interventions targeting expectant fathers, to help them quit smoking. abstract_id: PUBMED:16036284 A pilot study of smoking and associated behaviors of low-income expectant fathers. Pregnancy is considered a teachable moment for helping women who smoke to quit, yet few studies have examined smoking behavior of expectant fathers. The present study considers the possibility that pregnancy is a teachable moment for expectant fathers as well and describes smoking and associated behaviors of men during their partner's pregnancy. Participants were 138 low-income men living with their pregnant partners. Using telephone interviews, we found 63% of the men had smoked at least 100 cigarettes in their lifetime. Current smoking was reported by 49.3% of expectant fathers (39.1% daily smoking; 10.2% some days). Expectant fathers' current smoking was associated with having a lower level of education (p<.0001), pregnant partner being a current smoker (p=.0002), higher quantity of alcohol consumption per day of drinking (p=.0003), and absence of smoking prohibitions inside the home (p<.0001). In the past year, 70.1% of the current smokers tried to quit. We found high rates of smoking in low-income expectant fathers, and an expectant father's smoking during his partner's pregnancy was associated with his pregnant partner continuing to smoke. A majority of expectant fathers identified as current smokers tried to quit in the past year or indicated an intention to quit in the near future. Intervention during pregnancy that targets pregnant women and expectant fathers who smoke could lead to more households without tobacco use and thus have positive implications for paternal, maternal, and family health. Further clinical and research attention is needed to address the smoking behaviors of both expectant fathers and their pregnant partners. abstract_id: PUBMED:34670560 Perceptions, behaviours and attitudes towards smoking held by the male partners of Chinese pregnant women: a qualitative study. Background: Direct associations of tobacco exposure during pregnancy with pregnancy complications and adverse birth outcomes have been proven. Previous studies suggest that expecting a child provides a valuable opportunity to promote behavioural changes, such as smoking cessation, among the male partners of pregnant women. Thorough understandings of Chinese expectant fathers' smoking behaviour during the transition to fatherhood is a prerequisite to the development of appropriate interventions to facilitate smoking cessation. This study aimed to explore the perceptions, behaviours and attitudes related to smoking among male partners of pregnant women in China. Methods: A descriptive phenomenological approach was adopted. A purposive sample of expectant fathers aged 18 years or older who had a tobacco use history within the past year were recruited at obstetrics and gynaecology clinics and invited to participate in one-to-one, 20-30-min semi-structured interviews. The data analysis followed Colaizzi's descriptive phenomenological method. Results: Twenty-five expectant fathers were interviewed. 
Three themes were generated: 1) the perceived benefits of smoking, together with respondents' misperceptions of and neglectful attitudes towards the impact of smoking and SHS, which were given as the major reasons for continuing to smoke; 2) factors contributing to smoking cessation, including concern for the potential health impact of continued smoking on the pregnant partner and baby, the role of being a father, and the encouragement to quit from family members; and 3) perceived barriers to smoking cessation, including withdrawal symptoms or cigarette cravings, absence of smoking cessation support, and increasing stress. Conclusion: This study provides a comprehensive understanding of the perception, behaviours, and attitudes related to smoking among Chinese expectant fathers. The findings of this study can guide healthcare professionals and policymakers in combining the distribution of educational information about the hazards of SHS for maternal and neonatal health with smoking cessation assistance for expectant fathers through policy initiatives and other types of incentives and programmes targeted to enhance smoking cessation among this population. Trial Registration: Prospectively registered at clinicaltrial.org (NCT03401021) on 8 Jan 2018. abstract_id: PUBMED:32991589 Effectiveness of a video-based smoking cessation intervention focusing on maternal and child health in promoting quitting among expectant fathers in China: A randomized controlled trial. Background: Secondhand smoke can cause adverse pregnancy outcomes, yet there is a lack of effective smoking cessation interventions targeted at expectant fathers. We examined the effectiveness of a video-based smoking cessation intervention focusing on maternal and child health in promoting quitting among expectant fathers. Methods And Findings: A single-blind, 3-arm, randomized controlled trial was conducted at the obstetrics registration centers of 3 tertiary public hospitals in 3 major cities (Guangzhou, Shenzhen, and Foshan) in China. Smoking expectant fathers who registered with their pregnant partners were invited to participate in this study. Between 14 August 2017 and 28 February 2018, 1,023 participants were randomized to a video (n = 333), text (n = 322), or control (n = 368) group. The video and text groups received videos or text messages on the risks of smoking for maternal and child health via instant messaging. The control group received a leaflet with information on smoking cessation. Follow-up visits were conducted at 1 week and at 1, 3, and 6 months. The primary outcome, by intention to treat (ITT), was validated abstinence from smoking at the 6-month follow-up. The secondary outcomes included 7-day point prevalence of abstinence (PPA) and level of readiness to quit at each follow-up. The mean age of participants was 32 years, and about half of them were first-time expectant fathers. About two-thirds of participants had completed tertiary education. The response rate was 79.7% (815 of 1,023) at 6 months. The video and text groups had higher rates of validated abstinence than the control group (video group: 22.5% [75 of 333], P < 0.001; text group: 14.9% [48 of 322], P = 0.02; control group: 9.2% [34 of 368]) with adjusted odds ratios (ORs) of 2.80 (95% confidence interval [CI]: 1.79-4.37, P < 0.001) in the video group and 1.70 (95% CI: 1.06-2.74, P = 0.03) in the text group.
The video and text groups differed in the rates of validated abstinence (22.5% versus 14.9%, P = 0.008; adjusted OR: 1.64, 95% CI: 1.10-2.46, P = 0.02). The video and text groups had higher rates of 7-day PPA than the control group at 6 months (video group: 24.6% [82 of 333] versus 11.4% [42 of 368], P < 0.001; text group: 17.4% [56 of 333] versus 11.4% [42 of 368], P = 0.02). The video and text groups also differed in the rates of 7-day PPA (24.6% versus 17.4%, P = 0.02). Excluding the quitters, the video and text groups had higher levels of readiness to quit than the control group at 6 months (video group: 43.5% [109 of 251] versus 31.6% [103 of 326], P = 0.002; text group: 40.6% [108 of 266] versus 31.6% [103 of 326], P = 0.01), No such difference was detected between the video and text groups (43.5% versus 40.6%, P = 0.29). The study was limited in that the long-term effectiveness of the intervention is uncertain. Conclusions: This smoking cessation intervention for expectant fathers that focused on explaining the ramifications of smoking on maternal and child health was effective and feasible in promoting quitting, and video messages were more effective than texts in delivering the information. Trial Registration: ClinicalTrials.gov: NCT03236025. Answer: Yes, pregnancy appears to be a teachable moment for smoking cessation among US Latino expectant fathers. A pilot study found that Latino expectant fathers had high-risk perceptions that their smoking harmed the pregnancy and strong emotional responses about their smoking during pregnancy. They also felt it was their role to make the pregnancy healthy. The teachable moment constructs for smoking were strongly correlated with motivation to quit smoking (PUBMED:20013439). This suggests that pregnancy may be an effective time to help Latino expectant fathers quit smoking.
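To make the trial arithmetic above concrete, the sketch below recomputes a crude (unadjusted) odds ratio from the raw quit counts reported for the video and control arms (75 of 333 versus 34 of 368). The published 2.80 figure is an adjusted OR from the trial's own models, so this toy calculation is illustrative only and will not match it exactly.

```python
# Sketch only: crude odds ratio and Wald-type 95% CI from a 2x2 table of quit counts.
from math import exp, log, sqrt

def odds_ratio(a: int, b: int, c: int, d: int) -> tuple[float, float, float]:
    """a/b = quitters/non-quitters in one arm, c/d = quitters/non-quitters in the other."""
    or_ = (a / b) / (c / d)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - 1.96 * se_log), exp(log(or_) + 1.96 * se_log)
    return or_, lo, hi

# video arm: 75 quit, 333 - 75 did not; control arm: 34 quit, 368 - 34 did not
or_, lo, hi = odds_ratio(75, 333 - 75, 34, 368 - 34)
print(f"crude OR (video vs control): {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```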
Instruction: Is determination of matrix metalloproteinases and their tissue inhibitors serum concentrations useful in patients with gastroenteropancreatic and bronchopulmonary neuroendocrine neoplasms? Abstracts: abstract_id: PUBMED:23339005 Is determination of matrix metalloproteinases and their tissue inhibitors serum concentrations useful in patients with gastroenteropancreatic and bronchopulmonary neuroendocrine neoplasms? Introduction: Gastroenteropancreatic (GEP) and bronchopulmonary (BP) neuroendocrine neoplasms (NENs) are rare and slowly growing tumours. Matrix metalloproteinases (MMPs) degrade extracellular matrix and are responsible for invasion and metastasis. Tissue inhibitors of matrix metalloproteinases (TIMPs) affect the invasiveness of tumour cells and the formation of distant metastases. The aim of this study was to evaluate selected MMPs (MMP2 and MMP9) and their tissue inhibitors (TIMP1 and TIMP2) depending on the pTNM classification, grading, and the occurrence of metastases. Material And Methods: The study group consisted of 86 patients with GEP NENs. The control group consisted of 31 healthy volunteers. Serum levels of TIMP1, TIMP2, MMP2 and MMP9 were determined by ELISA (R&D Systems) in all the study subjects. The statistical calculations were performed using MedCalc. Results: We observed significant differences in MMP2 and TIMP1 levels between the study group with NENs and the control group. TIMP1 levels were significantly higher in patients with high-grade NEN (NEC, neuroendocrine carcinoma) compared to patients with low-grade tumour (NET G1, neuroendocrine tumours G1) (p < 0.017). We also observed a significant correlation between TIMP1 levels and the presence of metastases in the group of patients with GEP NENs, with TIMP1 levels also higher than in the patients without metastases (p < 0.05). We also found a higher likelihood of metastases in patients with GEP NENs with TIMP1 levels exceeding 206.4 ng/mL. Conclusions: Patients with NENs secreted larger quantities of MMP2 and TIMP1. TIMP1 may be considered a marker of metastases in patients with GEP NENs. abstract_id: PUBMED:24260058 Prognostic impact of p16 and p21 on gastroenteropancreatic neuroendocrine tumors. Aberrant expression of the cell cycle kinase inhibitors, p16 and p21, has been associated with poor prognosis in a number of human malignancies. These proteins may also be involved in the development and progression of gastroenteropancreatic neuroendocrine tumors (GEP-NETs). The present study aimed to investigate protein levels of p16 and p21 in GEP-NETs and to evaluate their clinical significance. p16 and p21 protein expression was tested immunohistochemically in the tissue samples of 68 GEP-NETs. The association between expression and clinicopathological characteristics and overall survival was assessed. Low expression of p16 (no positive nuclear staining) was found in 37 (54%) cases and high p21 expression (≥5% positive nuclear staining) was detected in 23 (34%) cases. Low p16 protein levels indicated a poorer prognosis for patients in the G2 subgroup in the univariate analysis (relative risk, 4.4; 95% CI, 1.8-10.6). No significant correlation was found between the expression of p21 and any of the clinicopathological variables. The present study indicates a prognostic relevance for p16 immunoreactivity. Low levels of p16 protein were associated with a shorter survival in the G2 subgroup of GEP-NETs. p21 protein expression was not identified to be useful as a predictive indicator in GEP-NETs.
abstract_id: PUBMED:21694850 Advances in the treatment of gastroenteropancreatic neuroendocrine tumors. Gastroenteropancreatic neuroendocrine tumors (GEP-NETs) are a rare and heterogeneous class of neoplasms. While surgical resection is the mainstay of treatment, non-surgical therapies play a role in the setting of unresectable and metastatic disease. The goals of medical therapy are directed both at alleviating symptoms of peptide release and shrinking tumor mass. Biotherapies such as somatostatin analogs and interferon can decrease the secretion of peptides and inhibit their end-organ effects. A second objective for treatment of unresectable GEP-NETs is limiting tumor growth. Options for limiting tumor growth include somatostatin analogs, systemic chemotherapy, locoregional therapies, ionizing radiation, external beam radiation, and newer targeted agents. In particular, angiogenesis inhibitors, tyrosine kinase inhibitors, and mTOR inhibitors have shown early promising results. The rarity of these tumors, their resistance to standard chemotherapy, and the excellent performance status of most of these patients, make a strong argument for consideration of novel therapeutic trials. abstract_id: PUBMED:34348340 Immunotherapy in Gastroenteropancreatic Neuroendocrine Neoplasia. The worldwide prevalence and incidence of gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) and of NENs, in general, have been increasing recently. While valuing the considerable progress made in the treatment strategies for GEP-NEN in recent years, patients with advanced, metastasized disease still have a poor prognosis, which calls for urgent novel therapies. The immune system plays a dual role: both host-protecting and "tumor-promoting." Hence, immunotherapy is potentially a powerful weapon to help NEN patients. However, although recent successes with checkpoint inhibitors have shown that enhancing antitumor immunity can be effective, the dynamic nature of the immunosuppressive tumor microenvironment presents significant hurdles to the broader application of these therapies. Studies led to their approval in NEN of the lung and Merkel cell carcinoma, whereas results in other settings have not been so encouraging. Oncolytic viruses can selectively infect and destroy cancer cells, acting as an in situ cancer vaccine. Moreover, they can remodel the tumor microenvironment toward a T cell-inflamed phenotype. Oncolytic virotherapy has been proposed as an ablative and immunostimulatory treatment strategy for solid tumors that are resistant to checkpoint inhibitors alone. Future efforts should focus on finding the best way to include immunotherapy in the GEP-NEN treatment scenario. In this context, this study aims at providing a comprehensive generalized review of the immune checkpoint blockade and the oncolytic virotherapy use in GEP-NENs that might improve GEP-NEN treatment strategies. abstract_id: PUBMED:30675205 Serum chromogranin A for the diagnosis of gastroenteropancreatic neuroendocrine neoplasms and its association with tumour expression. The aim of the present study was to assess the clinical value of serum chromogranin A (CgA) levels in patients with gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) and to compare them with tumour expression of CgA. 
A total of 109 consecutive patients with confirmed GEP-NENs were enrolled in this prospective study between December 2012 and August 2016, including 73 patients with primary or recurrent GEP-NENs and 36 patients with GEP-NENs that were treated following surgery. Furthermore, 30 patients with benign gastrointestinal diseases and 30 healthy volunteers served as control groups. Serum CgA levels were measured by ELISA, using different reference values, in order to evaluate its diagnostic efficacy. Serum neuron-specific enolase was also measured to evaluate its diagnostic efficacy and analyse its association with serum CgA levels. The levels of CgA, synaptophysin and neural cell adhesion molecule 1 in the tumour tissue were assessed by immunohistochemical assays. The results indicated that serum CgA levels were significantly higher in patients with GEP-NENs compared with the control groups (P<0.05). No association was observed between serum CgA levels and tumour grade (G1, G2 and G3), but serum CgA levels differed significantly between patients with GEP-NENs of different origins (P<0.05). A serum CgA cut-off value of 85.3 ng/ml was associated with high sensitivity (64.4%) and specificity (92.7%). Different reference values were recommended for NENs of different origins, with serum CgA cut-off values of 96.72, 51.13 and 86.19 ng/ml for the stomach, intestines and pancreas, respectively. The serum CgA levels were consistent with the CgA expression in the tumour. In conclusion, serum CgA may serve as a circulating pathological biomarker for the diagnosis of GEP-NENs. The use of different reference values for different tumour origins may improve the diagnostic efficacy of CgA for GEP-NENs. A cut-off value of 85.3 ng/ml is recommended in the Chinese population. abstract_id: PUBMED:38422348 Efficacy and safety of streptozocin-based chemotherapy for gastroenteropancreatic neuroendocrine tumors in Japanese clinical practice. Background: Streptozocin has been used to treat neuroendocrine tumors in Europe and the USA; however, its actual status in Japan has not been fully clarified owing to the rarity of this disease and the relatively recent approval of streptozocin in Japan. Methods: We retrospectively analyzed 53 patients with gastroenteropancreatic neuroendocrine tumors who were treated with streptozocin-based chemotherapy at two Japanese hospitals between January 2004 and June 2023. Results: The overall response and disease control rates were 27.7 and 74.5%, respectively, and the median progression-free survival and overall survival were 7.1 and 20.3 months, respectively. Performance status ≥1 showed a significant negative correlation with progression-free survival, and performance status ≥1 and liver tumor burden ≥25% showed a significant negative correlation with overall survival. No significant differences were observed in the treatment response between pancreatic and gastrointestinal neuroendocrine tumors. No treatment-related serious adverse events were observed; however, 87.7% of patients experienced a decrease in the estimated glomerular filtration rate, which negatively correlated with the duration of streptozocin treatment (r = 0.43, P = 0.0020). In the streptozocin re-administration group (n = 5), no differences were found in efficacy between the initial and second streptozocin treatments. Conclusions: Although streptozocin is safe, streptozocin-induced renal dysfunction is a dilemma in streptozocin responders.
Streptozocin may benefit patients with gastroenteropancreatic neuroendocrine tumors, especially those with a good performance status; however, in some cases, planned streptozocin withdrawal or switching to other drugs should be considered. abstract_id: PUBMED:8611434 Expression of collagenase (MMP2), stromelysin (MMP3) and tissue inhibitor of the metalloproteinases (TIMP1) in pancreatic and ampullary disease. It is now recognised that epithelial-stromal interactions are important in a wide range of disease processes including neoplasia and inflammation. Metalloproteinases are central to matrix degradation and remodelling, which are key events in tumour invasion and metastasis and may also be involved in tissue changes occurring in chronic inflammation. Immunohistochemistry was performed on sections from 50 patients with pancreatic cancer (n = 27), ampullary cancer (n = 12), low bile duct cancer (n = 3), neuroendocrine tumours (n = 3) and chronic pancreatitis (n = 5), using antibodies raised against collagenase (MMP2), stromelysin (MMP3) and tissue inhibitor of metalloproteinase (TIMP1) and developed using the avidin-biotin complex method. Abundance of MMP2, MMP3 and TIMP1 was greater in pancreatic and ampullary cancer than any other pathology and immunoreactivity in the malignant epithelial cells in pancreatic and ampullary cancer was greater than in the stromal tissues (in pancreatic cancer: MMP2 100% vs 37%, MMP3 93% vs 15%, TIMP1 93% vs 4%, P < 0.0001). There were strong correlations between the immunoreactivity of the two antibodies for MMP2 (P < 0.0001), between MMP2 and TIMP1 (P < 0.0001) and between MMP3 and TIMP1 (P < 0.0001). The immunoreactivity for TIMP1 in pancreatic and ampullary cancers with lymph node metastases was significantly less compared with those cases without lymph node metastases (P < 0.02) and there was an association between increased immunoreactivity for MMP2 and the degree of tumour differentiation (P < 0.01). The results implicate MMP2, MMP3 and TIMP1 in the invasive phenotype of pancreatic and ampullary cancer. abstract_id: PUBMED:1413839 Serum chromogranin A in the diagnosis and follow-up of neuroendocrine tumors of the gastroenteropancreatic tract. Hormonally active neuroendocrine tumors may easily be diagnosed by elevated serum levels of their specific peptides and hormonal products, but there are no reliable markers for neuroendocrine tumors without hormonal activity. Chromogranin A (CgA), a secretory protein of neuroendocrine cells, has recently been characterized as a valuable tissue marker in hormonally active and non-functioning neuroendocrine tumors. This study analyzes the role of CgA as a serum marker for different neuroendocrine tumors. Thirty-three patients with neuroendocrine tumors of the stomach (n = 7), the ileum (n = 18), and the pancreas (n = 8) were investigated. Serum CgA levels were analyzed by radioimmunoassay at the time of diagnosis and during follow-up under different therapeutic regimens. Serum CgA was elevated in 30 (91%) patients. Mean CgA serum levels varied with tumor location (pancreas: 7068 +/- 3008 ng/ml, ileum: 5381 +/- 1740 ng/ml, stomach: 529 +/- 179 ng/ml, x +/- SEM ng/ml) but did not differ between functioning and non-functioning tumors. Eight of 10 patients treated with either somatostatin or interferon-alpha showed changes of CgA concentrations corresponding to tumor growth. We conclude that CgA is a useful broad-spectrum tumor marker in gastroenteropancreatic neuroendocrine tumors. 
Its determination is especially recommended in tumors without hormonal activity. abstract_id: PUBMED:20215503 Matrix metalloproteinases contribute distinct roles in neuroendocrine prostate carcinogenesis, metastasis, and angiogenesis progression. Prostate cancer is the leading form of cancer in men. Prostate tumors often contain neuroendocrine differentiation, which correlates with androgen-independent progression and poor prognosis. Matrix metalloproteinases (MMP), a family of enzymes that remodel the microenvironment, are associated with tumorigenesis and metastasis. To evaluate MMPs during metastatic prostatic neuroendocrine cancer development, we used transgenic mice expressing SV40 large T antigen in their prostatic neuroendocrine cells, under the control of transcriptional regulatory elements from the mouse cryptdin-2 gene (CR2-TAg). These mice have a stereotypical pattern of tumorigenesis and metastasis. MMP-2, MMP-7, and MMP-9 activities increased concurrently with the transition to invasive metastatic carcinoma, but they were expressed in different prostatic cell types: stromal, luminal epithelium, and macrophages, respectively. CR2-TAg mice treated with AG3340/Prinomastat, an MMP inhibitor that blocks activity of MMP-2, MMP-9, MMP-13, and MMP-14, had reduced tumor burden. CR2-TAg animals were crossed to mice homozygous for null alleles of MMP-2, MMP-7, or MMP-9 genes. At 24 weeks CR2-TAg; MMP-2(-/-) mice showed reduced tumor burden, prolonged survival, decreased lung metastasis, and decreased blood vessel density, whereas deficiencies in MMP-7 or MMP-9 did not influence tumor growth or survival. Mice deficient for MMP-7 had reduced endothelial area coverage and decreased vessel size, and mice lacking MMP-9 had increased numbers of invasive foci and increased perivascular invasion, as well as decreased tumor blood vessel size. Together, these results suggest distinct contributions by MMPs to the progression of aggressive prostate tumor and to helping tumors cleverly find alternative routes to malignant progression. abstract_id: PUBMED:35719897 Current updates and future directions in diagnosis and management of gastroenteropancreatic neuroendocrine neoplasms. Gastroenteropancreatic neuroendocrine neoplasms are a heterogenous group of rare neoplasms that are increasingly being discovered, often incidentally, throughout the gastrointestinal tract with varying degrees of activity and malignant potential. Confusing nomenclature has added to the complexity of managing these lesions. The term carcinoid tumor and embryonic classification have been replaced with gastroenteropancreatic neuroendocrine neoplasm, which includes gastrointestinal neuroendocrine and pancreatic neuroendocrine neoplasms. A comprehensive multidisciplinary approach is important for clinicians to diagnose, stage and manage these lesions. While histological diagnosis is the gold standard, recent advancements in endoscopy, conventional imaging, functional imaging, and serum biomarkers complement histology for tailoring specific treatment options. In light of developing technology, our review sets out to characterize diagnostic and therapeutic advancements for managing gastroenteropancreatic neuroendocrine tumors, including innovations in radiolabeled peptide imaging, circulating biomarkers, and endoscopic treatment approaches adapted to different locations throughout the gastrointestinal system. 
Answer: The determination of matrix metalloproteinases (MMPs) and their tissue inhibitors (TIMPs) serum concentrations appears to be useful in patients with gastroenteropancreatic (GEP) neuroendocrine neoplasms (NENs). A study found significant differences in MMP2 and TIMP1 levels between the study group with NENs and the control group. Specifically, TIMP1 levels were significantly higher in patients with high-grade NEN compared to patients with low-grade tumors, and there was a significant correlation between TIMP1 levels and the presence of metastases in patients with GEP NENs. Higher TIMP1 levels were observed in patients with metastases compared to those without, and a higher likelihood of metastases was found in patients with TIMP1 levels exceeding 206.4 ng/mL. These findings suggest that TIMP1 may be considered a marker of metastases in patients with GEP NENs (PUBMED:23339005). Additionally, another study on the expression of MMP2, MMP3, and TIMP1 in pancreatic and ampullary disease indicated that the abundance of these proteins was greater in pancreatic and ampullary cancer than in other pathologies, implicating MMP2, MMP3, and TIMP1 in the invasive phenotype of these cancers (PUBMED:8611434). Therefore, the determination of MMPs and TIMPs serum concentrations can provide valuable information regarding the invasiveness and metastatic potential of GEP NENs, which can be useful for patient prognosis and management.
Instruction: Do dry roasting, lightly salting nuts affect their cardioprotective properties and acceptability? Abstracts: abstract_id: PUBMED:26746221 Do dry roasting, lightly salting nuts affect their cardioprotective properties and acceptability? Purpose: Previous studies have reported improvements in cardiovascular disease (CVD) risk factors with the consumption of raw nuts. However, around one-third of nuts consumed are roasted and salted. Thus, it is important to determine whether roasting and salting nuts affect the health benefits observed with raw nuts. This study aimed to compare the effects of consuming two different forms of hazelnuts on cardiovascular risk factors and acceptance. Methods: Using a randomised crossover design, 72 participants were asked to consume 30 g/day of either raw or dry roasted, lightly salted hazelnuts for 28 days each. CVD risk factors were measured at the beginning and end of each treatment period. "Desire to consume" and "overall liking" for both forms of hazelnuts were assessed daily using a 150-mm visual analogue scale. Results: Body composition, blood pressure, plasma total and low-density lipoprotein-cholesterol, apolipoprotein A1 and B100, glucose and α-tocopherol concentrations did not differ between forms of hazelnuts (all P ≥ 0.054). High-density lipoprotein (HDL)-cholesterol (P = 0.037) and triacylglycerol (P < 0.001) concentrations were significantly lower following the consumption of dry roasted, lightly salted hazelnuts when compared to the raw hazelnuts. Compared with baseline, consuming both forms of hazelnuts significantly improved HDL-cholesterol and apolipoprotein A1 concentrations, total-C/HDL-C ratio, and systolic blood pressure without significantly changing body composition. Acceptance ratings did not differ between forms of hazelnuts and remained high throughout the study. Conclusion: Dry roasting and lightly salting nuts do not appear to negate the cardioprotective effects observed with raw nut consumption, and both forms of nuts are resistant to monotony. Public health messages could be extended to include dry roasted and lightly salted nuts as part of a heart healthy diet. abstract_id: PUBMED:38254543 The Cardioprotective Properties of Selected Nuts: Their Functional Ingredients and Molecular Mechanisms. Nuts have been known as a nutritious food since ancient times and can be considered part of our original diet: they are one of the few foods that have been eaten in the same form for thousands of years. They consist of various dry fruits and seeds, with the most common species being almonds (Prunus dulcis), hazelnuts (Corylus avellana), cashews (cashew nuts, Anacardium occidentale), pistachios (Pistacia vera), walnuts (Italian nuts, Juglans regia), peanuts (Arachis hypogaea), Brazil nuts (Bertholletia excelsa), pecans (Carya illinoinensis), macadamia nuts (Macadamia ternifolia) and pine nuts. Both in vitro and in vivo studies have found nuts to possess a range of bioactive compounds with cardioprotective properties, and hence, their consumption may play a role in preventing and treating cardiovascular diseases (CVDs). The present work reviews the current state of knowledge regarding the functional ingredients of various nuts (almonds, Brazil nuts, cashew nuts, hazelnuts, macadamia nuts, peanuts, pecan nuts, pine nuts, pistachios, and walnuts) and the molecular mechanisms of their cardioprotective action.
The data indicate that almonds, walnuts and pistachios are the best nut sources of bioactive ingredients with cardioprotective properties. abstract_id: PUBMED:10479223 Nuts and their bioactive constituents: effects on serum lipids and other factors that affect disease risk. Because nuts have favorable fatty acid and nutrient profiles, there is growing interest in evaluating their role in a heart-healthy diet. Nuts are low in saturated fatty acids and high in monounsaturated and polyunsaturated fatty acids. In addition, emerging evidence indicates that there are other bioactive molecules in nuts that elicit cardioprotective effects. These include plant protein, dietary fiber, micronutrients such as copper and magnesium, plant sterols, and phytochemicals. Few feeding studies have been conducted that have incorporated different nuts into the test diets to determine the effects on plasma lipids and lipoproteins. The total- and lipoprotein-cholesterol responses to these diets are summarized in this article. In addition, the actual cholesterol response was compared with the predicted response derived from the most current predictive equations for blood cholesterol. Results from this comparison showed that when subjects consumed test diets including nuts, there was an approximately 25% greater cholesterol-lowering response than that predicted by the equations. These results suggest that there are non-fatty acid constituents in nuts that have additional cholesterol-lowering effects. Further studies are needed to identify these constituents and establish their relative cholesterol-lowering potency. abstract_id: PUBMED:17125535 Nuts and coronary heart disease: an epidemiological perspective. The epidemiological evidence for the cardio-protective effect of nut consumption is presented and reviewed. Four large prospective epidemiological studies of primary prevention of coronary heart disease are reviewed and discussed (Adventist Health Study, Iowa Women's Health Study, Nurses' Health Study and the Physicians' Health Study). Other studies of nuts and coronary heart disease risk are addressed. The combined evidence for a cardio-protective effect from nut consumption is summarized and presented graphically. The risk of coronary heart disease is 37 % lower for those consuming nuts more than four times per week compared to those who never or seldom consume nuts, with an average reduction of 8.3 % for each weekly serving of nuts. The evidence for a causal relationship between nut consumption and reduced risk of coronary heart disease is outlined using Hill's criteria for causality and is found to support a causal cardio-protective relationship. abstract_id: PUBMED:11368503 The effects of nuts on coronary heart disease risk. Epidemiologic studies have consistently demonstrated beneficial effects of nut consumption on coronary heart disease (CHD) morbidity and mortality in different population groups. Clinical studies have reported total and low-density lipoprotein cholesterol-lowering effects of heart-healthy diets that contain various nuts or legume peanuts. It is evident that the favorable fatty acid profile of nuts (high in unsaturated fatty acids and low in saturated fatty acids) contributes to cholesterol lowering and, hence, CHD risk reduction. Dietary fiber and other bioactive constituents in nuts may confer additional cardioprotective effects. abstract_id: PUBMED:20199998 Nuts, blood lipids and cardiovascular disease. 
The aim of this paper is to evaluate nut-related epidemiological and human feeding study findings and to discuss the important nutritional attributes of nuts and their link to cardiovascular health. Frequent nut consumption has been found to be protective against coronary heart disease in five large epidemiological studies across two continents. A qualitative summary of the data from four of these studies found an 8.3% reduction in risk of death from coronary heart disease for each weekly serving of nuts. Over 40 dietary intervention studies have been conducted evaluating the effect of nut containing diets on blood lipids. These studies have demonstrated that intake of different kinds of nuts lower total and LDL cholesterol and the LDL: HDL ratio in healthy subjects or patients with moderate hypercholesterolaemia, even in the context of healthy diets. Nuts have a unique fatty acid profile and feature a high unsaturated to saturated fatty acid ratio, an important contributing factor to the beneficial health effects of nut consumption. Additional cardioprotective nutrients found in nuts include vegetable protein, fiber, alpha-tocopherol, folic acid, magnesium, copper, phytosterols and other phytochemicals. abstract_id: PUBMED:31585864 Anti-atherosclerotic and cardiovascular protective benefits of Brazilian nuts. Brazil nuts are rich in magnesium, selenium, arginine and other amino acids, dietary fiber, tocopherols (vitamin E), phytosterols, linoleic acid, linolenic acid, sitosterols, monounsaturated and polyunsaturated fatty acids, polyphenols and other amino acids. Due to such a rich mixture of nutrients, Brazil nuts protect LDL from peroxidation, and improve endothelial function, blood pressure, lipid metabolism, and decrease endothelial inflammatory markers, DNA oxidation, and blood lipids (cholesterol, LDL, triglycerides). Here, we review and propose biological mechanisms by which bioactive compounds of Brazil nuts afford protections against atherosclerosis and cardiovascular diseases. Just a few nuts per day provide sufficient cardiovascular benefits, including protection against development and progression of atherosclerosis. abstract_id: PUBMED:20199997 Nuts, inflammation and insulin resistance. The beneficial effects of nut consumption on cardiovascular disease (CVD) have been widely documented. These protective effects are mainly attributed to the role of nuts in the metabolism of lipids and lipoproteins. As chronic inflammation is a key early stage in the atherosclerotic process that predicts future CVD events and is closely related to the pathogenesis of insulin resistance, many recent studies have focused on the potential effect of nut consumption on inflammation and insulin resistance. Through different mechanisms, some components of nuts such as magnesium, fiber, alpha-linolenic acid, L-arginine, antioxidants and MUFA may protect against inflammation and insulin resistance. This review evaluates the epidemiologic and experimental evidence in humans demonstrating an association between nut consumption and these two emergent cardio-protective mechanisms. abstract_id: PUBMED:30221085 A comparison of perceptions of nuts between the general public, dietitians, general practitioners, and nurses. Background: Nut consumption at the population level remains low despite the well-documented benefits of their consumption, including their cardioprotective effects. Studies have suggested that advice from health professionals may be a means to increase nut consumption levels. 
Understanding how nuts are perceived by the public and health professionals, along with understanding the public's perceptions of motivators of and deterrents to consuming nuts, may inform the development of initiatives to improve on these low levels of consumption. The aim of this cross-sectional study was to compare perceptions of nuts among three groups of health professionals (dietitians, general practitioners, and practice nurses) and the general public in New Zealand (NZ), along with motivators of and deterrents to consuming nuts amongst the general public and their experiences of receiving advice around nut consumption. Methods: The NZ electoral roll was used to identify dietitians, general practitioners (GPs), and practice nurses, based on their free-text occupation descriptions, who were then invited to complete a questionnaire with 318, 292, and 149 respondents respectively. 1,600 members of the general public were randomly selected from the roll with 710 respondents. Analyses were performed using chi-squared tests to look at differences in categorical variables and linear regression for differences in other variables between the four survey groups. Results: Although there were significant differences between the four groups regarding the perceptions of nuts, in general there was agreement that nuts are healthy, high in protein and fat, are filling, and some nuts are high in selenium. We noted frequent agreement that the general public participants would consume more if nuts: improved health (67%), were more affordable (60%), or improved the nutrient content (59%) and balance of fats (58%) within their diets. Over half the respondents reported they would eat more nuts if they were advised to do so by a dietitian or doctor, despite less than 4% reporting they had received such advice. The most frequently selected deterrents to increasing nut consumption were: cost (67%), potential weight gain (66%), and leading to eating too much fat (63%). Discussion: It is concerning that so few among the general public report receiving advice to consume more nuts from health professionals, especially given their apparent responsiveness to such advice. Health professionals could exploit the motivators of nut consumption, while also addressing the deterrents, to promote nut intake. These factors should also be addressed in public health messages to encourage regular nut consumption among the public. Educational initiatives could also be used to improve the nutritional knowledge of GPs and practice nurses with regard to nuts, although even dietitians were unsure of their knowledge in some cases. abstract_id: PUBMED:35118090 Protective Effects of Appropriate Amount of Nuts Intake on Childhood Blood Pressure Level: A Cross-Sectional Study. Objective: Increased blood pressure (BP) is a major risk factor for cardiovascular disease (CVD) in adults. Regular consumption of nuts may improve some BP in adults whereas evidence in children is relatively lacking. This study aimed to determine the efficacy of nuts intake on BP in children. Methods: Stratified cluster sampling was performed to include a total of 15,268 primary school children aged 6-12 years in urban and rural areas in Southwest China. The daily nuts intake dosage was collected by questionnaires, and generalized linear model (GLM) and logistic regression were used to analyze the relationship between nuts intake and BP.
Results: For the total subjects, 11,130 (72.9%) participants consumed <35 g/day of nuts, 1,145 (7.5%) participants consumed 35 g/day ≤ nut <50 g/day of nuts, 2,053 (13.4%) participants consumed 50~100 g/day of nuts, and 940 (6.2%) participants consumed over 100 g/day of nuts. For sex subgroup, 1,074 (13.53%) boys and 979 (13.35%) girls consumed 50~100 g/day of nuts. Compared with the 50~100 g/day of nuts intake group, systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) were significantly different in <35 g/day, 35 g/day ≤ nut <50 g/day, and >100 g/day nuts intake groups (all p < 0.001). The logistic regression showed that compared with the 50~100 g/day group, the other three groups were more likely to be associated with childhood hypertension (all p < 0.001). Therefore, a U-shaped relationship between nuts intake and BP level was identified. Conclusions: The finding suggests that intake of 50~100 g/day nuts is the recommended dose of nuts intake to control childhood hypertension, as well as for cardioprotection purposes. Answer: Dry roasting and lightly salting nuts do not appear to negate the cardioprotective effects observed with raw nut consumption. A study comparing the effects of consuming raw versus dry roasted, lightly salted hazelnuts found that both forms of hazelnuts significantly improved HDL-cholesterol and apolipoprotein A1 concentrations, total-C/HDL-C ratio, and systolic blood pressure without significantly changing body composition. Although HDL-cholesterol and triacylglycerol concentrations were significantly lower following the consumption of dry roasted, lightly salted hazelnuts compared to raw hazelnuts, the overall cardioprotective effects were maintained. Acceptance ratings did not differ between forms of hazelnuts and remained high throughout the study, suggesting that both forms of nuts are resistant to monotony and can be included as part of a heart-healthy diet (PUBMED:26746221). This finding is consistent with the broader literature on the cardioprotective properties of nuts, which have been attributed to their favorable fatty acid profiles, dietary fiber, micronutrients, plant sterols, and phytochemicals (PUBMED:10479223; PUBMED:11368503; PUBMED:20199998). Epidemiological evidence also supports a reduced risk of coronary heart disease with frequent nut consumption (PUBMED:17125535; PUBMED:11368503). Additionally, specific nuts like Brazil nuts have been shown to have anti-atherosclerotic and cardiovascular protective benefits (PUBMED:31585864), and nut consumption has been associated with reduced inflammation and insulin resistance, which are emergent cardioprotective mechanisms (PUBMED:20199997). In summary, dry roasting and lightly salting nuts do not significantly affect their cardioprotective properties and are acceptable to consumers, suggesting that such processed nuts can be part of a heart-healthy diet (PUBMED:26746221).
Instruction: Rating the quality of intensive care units: is it a function of the intensive care unit scoring system? Abstracts: abstract_id: PUBMED:17562737 Review of the acuity scoring systems for the pediatric intensive care unit and their use in quality improvement. Acuity scoring systems quantitate the severity of clinical conditions and stratify patients according to presenting patient condition. In the pediatric intensive care unit, the complexity and number of clinical scoring systems are increasing as their applications for clinicians, health services researchers, and quality improvement broaden. This article is a review of acuity scoring systems for the pediatric intensive care unit, including examples of scoring systems available, the methods used in assessing these tools, the ways in which these systems are used, and the utility of acuity scoring systems in accurate benchmarking. It is anticipated that with increasing health care costs and competition and increased focus on medical error reduction and quality improvement, the demands for risk-adjusted outcomes and institutional benchmarking will increase; therefore, as clinicians, academicians, and administrators, it is imperative that we be knowledgeable of the methods and applications of these acuity scoring systems to ensure their quality and appropriate use. abstract_id: PUBMED:1865952 Measurement of the quality of care for surgical patients in an intensive care unit in a peripheral hospital Objective: To determine the quality of care in an intensive care unit. Design: Prospective investigation for one year. Comparison with results from the literature. Setting: Surgical intensive care unit of a community hospital. Patients: Measurement of the APACHE-II-score was performed on days 1, 3 and 7 in all surgical intensive care unit patients admitted during a one-year period. The predicted mortality from the literature was compared with the actual mortality in our hospital. Results: A total of 301 patients were admitted to the intensive care unit. Overall mortality was 9%. All the patients with an APACHE II score above 25 on admission died. The actual mortality was comparable with the predicted mortality from the literature. Conclusion: The APACHE II score can be used to determine the quality of care in an intensive care unit. Early prediction of a bad prognosis makes transportation to a more specialized hospital possible, before irreversible organ damage develops. abstract_id: PUBMED:12352029 Rating the quality of intensive care units: is it a function of the intensive care unit scoring system? Objective: Intensive care units (ICUs) use severity-adjusted mortality measures such as the standardized mortality ratio to benchmark their performance. Prognostic scoring systems such as Acute Physiology and Chronic Health Evaluation (APACHE) II, Simplified Acute Physiology Score II, and Mortality Probability Model II0 permit performance-based comparisons of ICUs by adjusting for severity of disease and case mix. Whether different risk-adjustment methods agree on the identity of ICU quality outliers within a single database has not been previously investigated. The objective of this study was to determine whether the identity of ICU quality outliers depends on the ICU scoring system used to calculate the standardized mortality ratio. Design, Setting, Patients: Retrospective cohort study of 16,604 patients from 32 hospitals based on the outcomes database (Project IMPACT) created by the Society of Critical Care Medicine.
The ICUs were a mixture of medical, surgical, and mixed medical-surgical ICUs in urban and nonurban settings. Standardized mortality ratios for each ICU were calculated using APACHE II, Simplified Acute Physiology Score II, and Mortality Probability Model II. ICU quality outliers were defined as ICUs whose standardized mortality ratio was statistically different from 1. Kappa analysis was used to determine the extent of agreement between the scoring systems on the identity of hospital quality outliers. The intraclass correlation coefficient was calculated to estimate the reliability of standardized mortality ratios obtained using the three risk-adjustment methods. Measurements And Main Results: Kappa analysis showed fair to moderate agreement among the three scoring systems in identifying ICU quality outliers; the intraclass correlation coefficient suggested moderate to substantial agreement between the scoring systems. The majority of ICUs were classified as high-performance ICUs by all three scoring systems. All three scoring systems exhibited good discrimination and poor calibration in this data set. Conclusion: APACHE II, Simplified Acute Physiology Score II, and Mortality Probability Model II0 exhibit fair to moderate agreement in identifying quality outliers. However, the finding that most ICUs in this database were judged to be high-performing units limits the usefulness of these models in their present form for benchmarking. abstract_id: PUBMED:25438896 Palliative care in the intensive care unit. Most patients who receive terminal care in the intensive care setting die after withdrawing or limiting of life-sustaining measures provided in the intensive care setting. The integration of palliative care into the intensive care unit (ICU) provides care, comfort, and planning for patients, families, and the medical staff to help decrease the emotional, spiritual, and psychological stress of a patient's death. Quality measures for palliative care in the ICU are discussed along with case studies to demonstrate how this integration is beneficial for a patient and family. Integrating palliative care into the ICU is also examined in regards to the complex adaptive system. abstract_id: PUBMED:34333619 Intensive Care Unit Scoring Systems. Background: Illness severity scoring systems are commonly used in critical care. When applied to the populations for whom they were developed and validated, these tools can facilitate mortality prediction and risk stratification, optimize resource use, and improve patient outcomes. Objective: To describe the characteristics and applications of the scoring systems most frequently applied to critically ill patients. Methods: A literature search was performed using MEDLINE to identify original articles on intensive care unit scoring systems published in the English language from 1980 to 2020. Search terms associated with critical care scoring systems were used alone or in combination to find relevant publications. Results: Two types of scoring systems are most frequently applied to critically ill patients: those that predict risk of in-hospital mortality at the time of intensive care unit admission (Acute Physiology and Chronic Health Evaluation, Simplified Acute Physiology Score, and Mortality Probability Models) and those that assess and characterize current degree of organ dysfunction (Multiple Organ Dysfunction Score, Sequential Organ Failure Assessment, and Logistic Organ Dysfunction System). 
This article details these systems' differing features and timing of use, score calculation, patient populations, and comparative performance data. Conclusion: Critical care nurses must be aware of the strengths, limitations, and specific characteristics of severity scoring systems commonly used in intensive care unit patients to effectively employ these tools in clinical practice and critically appraise research findings based on their use. abstract_id: PUBMED:35843808 Post-intensive care syndrome and health-related quality of life in long-term survivors of intensive care unit. Objective: The objective of this study was to provide preliminary data for improving the health-related quality of life of long-term intensive care unit survivors by identifying the relationship between health-related quality of life and post-intensive care syndrome. Methods: Using a descriptive correlation research design, data from patients who visited the outpatient department for continuous treatment after discharge from the intensive care unit were analysed. Post-intensive care syndrome was measured by physical, cognitive, and mental problems. Data were collected from 1st August to 31st December, 2019, and 121 intensive care unit survivors participated in the study. Results: Health-related quality of life showed a negative correlation with physical, mental, and cognitive problems. The factors associated with health-related quality of life were physical and mental problems, education level, sedatives and neuromuscular relaxants, and marital status. Conclusions: To improve the health-related quality of life of intensive care unit survivors, post-intensive care syndrome prevention is important, and a systematic strategy is required through a long-term longitudinal trace study. In addition, intensive care unit nurses and other healthcare professionals need to provide early interventions to reduce post-intensive care syndrome. abstract_id: PUBMED:26333755 Pediatric Palliative Care in the Intensive Care Unit. The chronicity of illness that afflicts children in Pediatric Palliative Care and the medical technology that has improved their lifespan and quality of life make prognostication extremely difficult. The uncertainty of prognostication and the available medical technologies make both the neonatal intensive care unit and the pediatric intensive care unit locations where many children will receive Pediatric Palliative Care. Health care providers in the neonatal intensive care unit and pediatric intensive care unit should integrate fundamental Pediatric Palliative Care principles into their everyday practice. abstract_id: PUBMED:14501954 Quality indicators for end-of-life care in the intensive care unit. Objective: The primary goal of this study was to address the documented deficiencies in end-of-life care (EOLC) in intensive care unit settings by identifying key EOLC domains and related quality indicators for use in the intensive care unit through a consensus process. A second goal was to propose specific clinician and organizational behaviors and interventions that might be used to improve these EOLC quality indicators. Participants: Participants were the 36 members of the Robert Wood Johnson Foundation (RWJF) Critical Care End-of-Life Peer Workgroup and 15 nurse-physician teams from 15 intensive care units affiliated with the work group members. 
Fourteen adult medical, surgical, and mixed intensive care units from 13 states and the District of Columbia in the United States and one mixed intensive care unit in Canada were represented. Methods: An in-depth literature review was conducted to identify articles that assessed the domains of quality of EOLC in the intensive care unit and general health care. Consensus regarding the key EOLC domains in the intensive care unit and quality performance indicators within each domain was established based on the review of the literature and an iterative process involving the authors and members of the RWJF Critical Care End-of-Life Peer Workgroup. Specific clinician and organizational behaviors and interventions to address the proposed EOLC quality indicators within the domains were identified through a collaborative process with the nurse-physician teams in 15 intensive care units. Measurements And Main Results: Seven EOLC domains were identified for use in the intensive care unit: a) patient- and family-centered decision making; b) communication; c) continuity of care; d) emotional and practical support; e) symptom management and comfort care; f) spiritual support; and g) emotional and organizational support for intensive care unit clinicians. Fifty-three EOLC quality indicators within the seven domains were proposed. More than 100 examples of clinician and organizational behaviors and interventions that could address the EOLC quality indicators in the intensive care unit setting were identified. Conclusions: These EOLC domains and the associated quality indicators, developed through a consensus process, provide clinicians and researchers with a framework for understanding quality of EOLC in the intensive care unit. Once validated, these indicators might be used to improve the quality of EOLC by serving as the components of an internal or external audit evaluating EOLC continuous quality improvement efforts in intensive care unit settings. abstract_id: PUBMED:16855055 Quality and safety in the intensive care unit. Ensuring patient safety is becoming increasingly important for intensive care unit practitioners. The intensive care unit is particularly prone to medical errors because of the complexity of the patients, interdependence of the practitioners, and dependence on team functioning. This review provides historical perspectives, research foundations, and a practical "how to" guide to improving care in the intensive care unit. It also considers the organizational structure, the processes of care, and the occurrence of adverse outcomes in this setting. Effective intensive care unit quality and safety programs capitalize on institutional resources and have multidisciplinary input with clear leadership, input from quality improvement initiatives, a responsible yet nonpunitive culture, and data-driven assessment and monitoring to reduce medical errors. Intensive care unit practitioners need to capitalize on the benefits that patients and their families bring to the patient safety discourse. This provides opportunities for better understanding the risks of the intensive care unit and improving the consent process. abstract_id: PUBMED:35081244 Cross-cultural adaptation and translation of the Pediatric Intensive Care Unit-Quality of Dying and Death into Brazilian Portuguese. Objectives: To translate and culturally adapt the Pediatric Intensive Care Unit-Quality of Dying and Death questionnaire into Brazilian Portuguese. 
Methods: This was a cross-cultural adaptation process including conceptual, cultural, and semantic equivalence steps comprising three stages. Stage 1 involved authorization to perform the translation and cultural adaptation. Stage 2 entailed independent translation from English into Brazilian Portuguese, a synthesis of the translation, back-translation, and an expert panel. Stage 3 involved a pretest conducted with family caregivers and a multidisciplinary team. Results: The evaluation by the expert panel resulted in an average agreement of 0.8 in relation to semantic, cultural, and conceptual equivalence. The pretests of both versions of the questionnaire showed that the participants had adequate comprehension regarding the ease of understanding the items and response options. Conclusion: After going through the process of translation and cultural adaptation, the Pediatric Intensive Care Unit-Quality of Dying and Death caregiver and multidisciplinary team versions were considered culturally adapted, with both groups having a good understanding of the items. The questionnaires include relevant items to evaluate the process of death and dying in the intensive care setting, and suggest changes in care centered on patients and especially family caregivers, given the finitude of their children. Answer: Yes, rating the quality of intensive care units (ICUs) is indeed a function of the ICU scoring system. ICU scoring systems are tools that help in quantifying the severity of illness, predicting mortality, and stratifying patients based on their clinical conditions. These systems are used to adjust for the severity of disease and case mix, allowing for performance-based comparisons of ICUs (PUBMED:12352029). Examples of such scoring systems include the Acute Physiology and Chronic Health Evaluation (APACHE) II, Simplified Acute Physiology Score II, and Mortality Probability Model II0, which are used to calculate standardized mortality ratios (SMRs) for ICUs (PUBMED:12352029). The use of these scoring systems is critical for benchmarking and quality improvement efforts in ICUs. They are instrumental in accurate benchmarking and are anticipated to become increasingly important with rising healthcare costs, competition, and a focus on medical error reduction (PUBMED:17562737). The APACHE II score, for instance, has been used to determine the quality of care in an ICU by comparing predicted mortality with actual mortality, and it can facilitate early prediction of a bad prognosis, potentially prompting timely transfer to specialized hospitals (PUBMED:1865952). However, it is important to note that different risk-adjustment methods may not always agree on the identity of ICU quality outliers within a single database. A study found fair to moderate agreement among different scoring systems in identifying ICU quality outliers, suggesting that while these tools are useful, their utility for benchmarking may be limited if most ICUs are judged to be high-performing units (PUBMED:12352029). Therefore, while ICU scoring systems are fundamental in rating the quality of ICUs, it is essential for healthcare providers to understand the strengths, limitations, and specific characteristics of these systems to effectively employ them in clinical practice and research (PUBMED:34333619).
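Illustrative aside (not part of the cited abstracts): the answer above rests on two calculations, the standardized mortality ratio (observed deaths divided by the deaths expected from a scoring system's predicted probabilities) and a kappa statistic measuring agreement between scoring systems on which ICUs are quality outliers. The following minimal Python sketch shows one simple way these quantities could be computed for a two-system, binary-outlier comparison; all patient counts, probabilities, and outlier flags are fabricated for illustration and do not come from the studies cited here.

import math

def smr(observed_deaths, predicted_probs):
    # Standardized mortality ratio: observed deaths / expected deaths,
    # where expected deaths are the sum of predicted mortality probabilities.
    return observed_deaths / sum(predicted_probs)

def smr_interval(observed_deaths, predicted_probs, z=1.96):
    # Rough 95% confidence interval treating observed deaths as Poisson.
    expected = sum(predicted_probs)
    se = math.sqrt(observed_deaths) / expected
    point = observed_deaths / expected
    return point - z * se, point + z * se

def is_outlier(observed_deaths, predicted_probs):
    # An ICU is flagged as a quality outlier if the SMR interval excludes 1.
    low, high = smr_interval(observed_deaths, predicted_probs)
    return low > 1.0 or high < 1.0

def cohens_kappa(flags_a, flags_b):
    # Chance-corrected agreement between two binary outlier classifications.
    n = len(flags_a)
    p_obs = sum(a == b for a, b in zip(flags_a, flags_b)) / n
    p_a, p_b = sum(flags_a) / n, sum(flags_b) / n
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_obs - p_chance) / (1 - p_chance)

# Fabricated example: 200 patients with a predicted mortality of 0.25 each
# (50 expected deaths) and 40 observed deaths give an SMR of 0.8.
probs = [0.25] * 200
print(smr(40, probs))         # 0.8
print(is_outlier(40, probs))  # False here, because the interval crosses 1

# Fabricated outlier flags for 8 ICUs under two different scoring systems.
flags_apache = [1, 0, 0, 1, 0, 0, 0, 1]
flags_saps = [1, 0, 0, 0, 0, 1, 0, 1]
print(cohens_kappa(flags_apache, flags_saps))  # about 0.47

In this hedged sketch, an SMR near 0.8 with a confidence interval crossing 1 would not be flagged as an outlier, and a kappa around 0.47 would correspond to the "fair to moderate" agreement described in the cited study; the actual published analyses used more detailed methods than this simplified version.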
Instruction: Depressed expression of angiogenic growth factors in the subacute phase of myocardial ischemia: a mechanism behind the remodeling plateau? Abstracts: abstract_id: PUBMED:20016374 Depressed expression of angiogenic growth factors in the subacute phase of myocardial ischemia: a mechanism behind the remodeling plateau? Background And Aims: To investigate whether, in the subacute phase of acute myocardial infarction, in the peri-infarcted area the expressions of the vascular endothelial growth factor (VEGF-A) and angiopoietin (Ang) ligand receptors are depressed, and whether overexpression of these angiogens counteracts a downregulation of myocardial function. Methods: Acute myocardial infarction was induced by left anterior descending artery ligation and overexpression through injection of human VEGF-A165 and Ang-1 plasmids. The capillary and arteriolar densities, Akt-1 phosphorylation and citrate synthase activity were measured concurrent with the expression of VEGF-A, VEGFR1 and R2, Ang-1, Ang-2 and Tie-2. Results: One day after AMI, VEGFR-2 was unchanged but all other measured factors in the two families were upregulated. After day 2, the Ang-2 expression increased but other measured factors decreased. After gene transfer, the vascular supply, Akt phosphorylation and citrate synthase activity were higher in the peri-infarcted area, where the endogenous angiogenic growth factor expressions were also increased. Conclusion: A rapid decrease in angiogenic stimulating factors occurs in the subacute phase of AMI and is related to a progressive decrease in myocardial contraction. A negative consequence of such a circuit is a successive reduction in the vascular supply and contractility in areas with reduced perfusion. These negative adaptations can be counteracted by angiogen overexpression. abstract_id: PUBMED:27420190 Therapeutic angiogenesis: angiogenic growth factors for ischemic heart disease. Stem cells encode vascular endothelial growth factors (VEGFs), fibroblastic growth factors (FGFs), stem cell factor, stromal cell-derived factor, platelet growth factor and angiopoietin that can contribute to myocardial vascularization. VEGFs and FGFs are the most investigated growth factors. VEGFs regulate angiogenesis and vasculogenesis. FGFs stimulate vessel cell proliferation and differentiation and are regulators of endothelial cell migration, proliferation and survival. Clinical trials of VEGF or FGF for myocardial angiogenesis have produced disparate results. The efficacy of therapeutic angiogenesis can be improved by: (1) identifying the most optimal patients; (2) increased knowledge of angiogenic factor pharmacokinetics and proper dose; (3) prolonging contact of angiogenic factors with the myocardium; (4) increasing the efficiency of VEGF or FGF gene transduction; and (5) utilizing PET or MRI to measure myocardial perfusion and perfusion reserve.
Methods: Lewis rats underwent ligation of the left anterior descending coronary artery to induce heart failure; experimental animals received a 3DFC scaffold to the ischemic region. Border-zone tissue was analyzed for the presence of human fibroblast surface protein, vascular endothelial growth factor, and hepatocyte growth factor. Cardiac function was assessed with echocardiography and pressure-volume conductance. Hearts underwent immunohistochemical analysis of angiogenesis by co-localization of platelet endothelial cell adhesion molecule and alpha smooth muscle actin and by digital analysis of ventricular geometry. Microvascular angiography was performed with fluorescein-labeled lectin to assess perfusion. Results: Immunoblotting confirmed the presence of human fibroblast surface protein in rats receiving 3DFC, indicating survival of transplanted cells. Increased expression of vascular endothelial growth factor and hepatocyte growth factor in experimental rats confirmed elution by the 3DFC. Microvasculature expressing platelet endothelial cell adhesion molecule/alpha smooth muscle actin was increased in infarct and border-zone regions of rats receiving 3DFC. Microvascular perfusion was also improved in infarct and border-zone regions in these rats. Rats receiving 3DFC had increased wall thickness, smaller infarct area, and smaller infarct fraction. Echocardiography and pressure-volume measurements showed that cardiac function was preserved in these rats. Conclusions: Application of a bioengineered 3DFC augments native angiogenesis through delivery of angiogenic cytokines to ischemic myocardium. This yields improved microvascular perfusion, limits infarct progression and adverse remodeling, and improves ventricular function. abstract_id: PUBMED:29025278 Angiogenic Growth Factors for Coronary Artery Disease: Current Status and Prospects. Although there have been advances in coronary artery bypass grafting and percutaneous coronary intervention, some patients who have ischemic coronary artery disease (CAD) are ineligible for revascularization due to suboptimal anatomy. Cardiac angiogenesis is not only a physiological response to ischemia or hypoxia but also a potential target of therapeutic strategies. Preclinical studies have generated great enthusiasm for therapeutic angiogenesis in ischemic CAD. However, the latest trials have provided only limited evidence of its efficacy. This article aims to discuss the physiological process of angiogenesis, the characteristics of angiogenic growth factors, delivery systems, and clinical and preclinical studies, which can provide novel insight into therapeutic angiogenesis for CAD.
Eight weeks after LAD ligation, copper concentrations, myocardial histological changes and capillary density were examined, along with Western blot and immunohistochemical analysis of angiogenic factors and detection of HIF-1 activity. Capillary density was significantly decreased but the concentrations of HIF-1α and HIF-1β were significantly increased in the infarct area. However, the levels of mRNA and protein for VEGF and VEGFR1 were significantly decreased. Other HIF-1 regulated angiogenic factors, including Tie-2, Ang-1 and FGF-1, were also significantly depressed, but vascular destabilizing factor Ang-2 was significantly increased. Copper concentrations were depressed in the infarct area. Copper-independent HIF-1 activity was increased, as shown by the elevated mRNA level of IGF-2, a HIF-1 target gene. Removal of copper by a copper chelator, tetraethylenepentamine, from primary cultures of neonatal rat cardiomyocytes also suppressed the expression of HIF-1 regulated VEGF and BNIP3, but not IGF-2. The data suggest that under ischemic conditions, copper loss suppressed the expression of critical angiogenic genes regulated by HIF-1, but did not affect copper-independent HIF-1 activation of gene expression. This copper-dependent dysregulation of angiogenic gene expression would contribute to the pathogenesis of myocardial ischemic infarction. abstract_id: PUBMED:16627361 Myocardial gene expression of angiogenic factors in human chronic ischemic myocardium: influence of acute ischemia/cardioplegia and reperfusion. Objective: Angiogenic therapies in animals have demonstrated the development of new blood vessels within ischemic myocardium. However, results from clinical protein and gene angiogenic trials have been less impressive. The present study aimed to investigate the expression of angiogenic genes in human chronic ischemic myocardium and the influence of acute ischemia/cardioplegia and reperfusion on their expression. Methods: Myocardial biopsies were taken from chronic ischemic and nonischemic myocardium in 15 patients with stable angina pectoris during coronary bypass surgery. Tissue samples were evaluated by oligonucleotide microarray and quantitative real-time PCR for the expression of angiogenic factors. Results: There was identical baseline expression of VEGF-A and VEGF-C mRNA in chronic ischemic myocardium compared with nonischemic myocardium. Reperfusion increased the gene expression of VEGF-A and VEGF-C mRNA both in nonischemic and ischemic myocardium. VEGF-A protein was detected mainly in the extracellular matrix around the cardiomyocytes in ischemic myocardium. Conclusion: These data suggest that the nonconclusive VEGF gene therapy trials in chronic coronary artery disease were not due to a preexisting upregulation of VEGF in chronic ischemic myocardium. There might be room for further therapeutic angiogenesis in chronic ischemic myocardium.
On the other hand, KDR, an angiogenic receptor of VEGF, and endothelial nitric oxide synthase, which is important in the VEGF-mediated angiogenic pathway, were markedly downregulated in SHRSP from 20 weeks of age. Such age-related changes in their expression levels seen in SHRSP were quite different from those in SHR. In both SHR and SHRSP, transforming growth factor-beta1 (TGF-beta1) expression was increased with age, although SHRSP showed more marked upregulation. Cardiac remodeling in SHRSP was characterized by decreased coronary capillary density, cardiomyocyte hypertrophy, and cardiac fibrosis. We conclude that, in addition to overexpression of TGF-beta1, which appears to play a pivotal role in promoting cardiac hypertrophy and fibrosis, a defect of the VEGF-KDR system could result in impaired physiologic coronary angiogenesis in SHRSP, contributing to cardiac deterioration associated with myocardial ischemia in this malignant hypertensive model. abstract_id: PUBMED:34681861 Defective Uteroplacental Vascular Remodeling in Preeclampsia: Key Molecular Factors Leading to Long Term Cardiovascular Disease. Preeclampsia is a complex hypertensive disorder in pregnancy which can be lethal and is responsible for more than 70,000 maternal deaths worldwide every year. Besides the higher risk of unfavorable obstetric outcomes in women with preeclampsia, another crucial aspect that needs to be considered is the association between preeclampsia and the postpartum cardiovascular health of the mother. Currently, preeclampsia is classified as one of the major risk factors of cardiovascular disease (CVD) in women, which doubles the risk of venous thromboembolic events, stroke, and ischemic heart disease. In order to comprehend the pathophysiology behind the linkage between preeclampsia and the development of postpartum CVD, a thorough understanding of the abnormal uteroplacental vascular remodeling in preeclampsia is essential. Therefore, this review aims to summarize the current knowledge of the defective process of spiral artery remodeling in preeclampsia and how the resulting placental damage leads to excessive angiogenic imbalance and systemic inflammation in long term CVD. Key molecular factors in the pathway-including novel findings of microRNAs-will be discussed with suggestions of future management strategies of preventing CVD in women with a history of preeclampsia. abstract_id: PUBMED:9796333 Angiogenic gene therapy for ischemic heart disease Once myocardium is under hypoxia due to narrowing of coronary arteries, the myocardium produces angiogenic peptides such as fibroblast growth factors to develop collaterals to restore the blood supply to its ischemic region. Thus, if angiogenic growth factors are supplied exogenously, the development of collaterals should be facilitated, and may save myocardium from hypoxia, thereby enhancing heart function. In addition to the experiments using recombinant protein of angiogenic factors, recent reports showed that gene transfer of such angiogenic factors indeed restored blood supply into ischemic myocardium and enhanced its function, suggesting that such an approach may be an effective new strategy for the treatment of ischemic heart disease. This brief review summarizes recent progress in angiogenic therapy. abstract_id: PUBMED:30586716 Dysfunctional and Proinflammatory Regulatory T-Lymphocytes Are Essential for Adverse Cardiac Remodeling in Ischemic Cardiomyopathy.
Background: Heart failure (HF) is a state of inappropriately sustained inflammation, suggesting the loss of normal immunosuppressive mechanisms. Regulatory T-lymphocytes (Tregs) are considered key suppressors of immune responses; however, their role in HF is unknown. We hypothesized that Tregs are dysfunctional in ischemic cardiomyopathy and HF, and they promote immune activation and left ventricular (LV) remodeling. Methods: Adult male wild-type C57BL/6 mice, Foxp3-diphtheria toxin receptor transgenic mice, and tumor necrosis factor (TNF) α receptor-1 (TNFR1)-/- mice underwent nonreperfused myocardial infarction to induce HF or sham operation. LV remodeling was assessed by echocardiography as well as histological and molecular phenotyping. Alterations in Treg profile and function were examined by flow cytometry, immunostaining, and in vitro cell assays. Results: Compared with wild-type sham mice, CD4+Foxp3+ Tregs in wild-type HF mice robustly expanded in the heart, circulation, spleen, and lymph nodes in a phasic manner after myocardial infarction, beyond the early phase of wound healing, and exhibited proinflammatory T helper 1-type features with interferon-γ, TNFα, and TNFR1 expression, loss of immunomodulatory capacity, heightened proliferation, and potentiated antiangiogenic and profibrotic properties. Selective Treg ablation in Foxp3-diphtheria toxin receptor mice with ischemic cardiomyopathy reversed LV remodeling and dysfunction, alleviating hypertrophy and fibrosis, while suppressing circulating CD4+ T cells and systemic inflammation and enhancing tissue neovascularization. Tregs reconstituted after ablation exhibited restoration of immunosuppressive capacity and normalized TNFR1 expression. Treg dysfunction was also tightly coupled to Treg-endothelial cell contact- and TNFR1-dependent inhibition of angiogenesis and the mobilization and tissue infiltration of CD34+Flk1+ circulating angiogenic cells in a C-C chemokine ligand 5/C-C chemokine receptor 5-dependent manner. Anti-CD25-mediated Treg depletion in wild-type mice imparted similar benefits on LV remodeling, circulating angiogenic cells, and tissue neovascularization. Conclusions: Proinflammatory and antiangiogenic Tregs play an essential pathogenetic role in chronic ischemic HF to promote immune activation and pathological LV remodeling. The restoration of normal Treg function may be a viable approach to therapeutic immunomodulation in this disease. Answer: The depressed expression of angiogenic growth factors in the subacute phase of myocardial ischemia is indeed a mechanism that could contribute to the remodeling plateau observed in this condition. After acute myocardial infarction (AMI), there is an initial upregulation of angiogenic stimulating factors such as vascular endothelial growth factor (VEGF-A) and angiopoietin (Ang) ligand receptors, but this is followed by a rapid decrease in these factors during the subacute phase. This decrease is associated with a progressive reduction in myocardial contraction, vascular supply, and contractility in areas with reduced perfusion (PUBMED:20016374). The negative adaptations resulting from the diminished expression of angiogenic factors can be counteracted by overexpression of these angiogens, which can improve vascular supply, Akt phosphorylation, and citrate synthase activity in the peri-infarcted area (PUBMED:20016374). 
Clinical trials of angiogenic growth factors like VEGF or fibroblast growth factors (FGFs) for myocardial angiogenesis have shown mixed results, suggesting that the efficacy of therapeutic angiogenesis could be improved by optimizing patient selection, understanding pharmacokinetics, enhancing gene transduction efficiency, and using advanced imaging techniques to measure myocardial perfusion (PUBMED:27420190). Moreover, tissue-engineered pro-angiogenic fibroblast scaffolds have been shown to improve myocardial perfusion and function, limit ventricular remodeling after infarction, and augment native angiogenesis through the delivery of angiogenic cytokines to ischemic myocardium (PUBMED:20363480). This suggests that therapeutic strategies aimed at enhancing angiogenesis can have a positive impact on the remodeling process post-myocardial ischemia. In summary, the depressed expression of angiogenic growth factors during the subacute phase of myocardial ischemia contributes to the remodeling plateau, and therapeutic interventions that enhance angiogenesis may offer a means to counteract these negative effects and improve cardiac function and remodeling post-infarction.
Instruction: Analysis of a continuous series of 34 young patients with early-stage cervical cancer selected for a vaginal radical trachelectomy: should "staging" conization be systematically performed before this procedure? Abstracts: abstract_id: PUBMED:23358180 Analysis of a continuous series of 34 young patients with early-stage cervical cancer selected for a vaginal radical trachelectomy: should "staging" conization be systematically performed before this procedure? Objective: Vaginal radical trachelectomy (VRT) is the most widely evaluated form of conservative management of young patients with early-stage (IB1) cervical cancer. Patients with nodal involvement or a tumor size greater than 2 cm are not eligible for such treatment. The aim of this study is to report the impact of a "staging" conization before VRT. Methods: This is a retrospective study of 34 patients potentially selected for VRT for a clinical and radiologic cervical tumor less than 2 cm. Among them, 28 finally underwent VRT (20 of them having had a previous conization before this procedure) and 6 patients with macroscopic cervical cancer, confirmed by punch biopsies, "eligible" for VRT (<2 cm) had undergone "staging" conization (without further VRT) to confirm the tumor size and lymphovascular space involvement (LVSI) status. Results: Six patients who had a "staging" conization before VRT were ultimately deemed to have contraindications to VRT due to the presence of a histologically confirmed tumor greater than 2 cm and/or associated with multiple foci of LVSI. Among 28 patients who underwent VRT, 1 received adjuvant chemoradiation (this patient recurred and died of disease). Two patients treated with VRT (without postoperative treatment) recurred. Ten pregnancies (9 spontaneous and 1 induced) were observed in 9 patients. Among 4 patients with macroscopic "visible" tumor who did not undergo a "staging" conization before VRT, 2 recurred. Among 11 patients who underwent VRT and had LVSI, 3 recurred. Conclusions: These results suggest that if a conization is not performed initially, it should then be included among the staging procedures to select patients for VRT. abstract_id: PUBMED:24927981 Fertility preservation by photodynamic therapy combined with conization in young patients with early stage cervical cancer: a pilot study. Background: Vaginal radical trachelectomy (VRT) is the standard fertility preserving procedure for early stage cervical cancer patients. There have been reports in the literature, however, that VRT is too radical a procedure for early stage cervical cancer, as its post-operative obstetric morbidity was high. In this study, PDT with Loop electrosurgical excision procedure (LEEP) or conization was investigated as a less radical fertility preserving treatment alternative to VRT for early stage cervical cancer patients. Methods: We analyzed data of 21 patients with early stage cervical cancer (stages IA-IIA) who underwent PDT with LEEP/conization from 2003 to 2012. LEEP or conization was performed before PDT in every case. For patients in stage IB1 or above, only those who were confirmed to be free of malignancy in frozen section by pelvic lymph node dissection received PDT. Surface photoillumination with red laser light at a wavelength of 630 nm was applied to the cervix and the endocervical canal 48 h after intravenous injection of 2 mg/kg of photosensitizer. Results: Median age of the 21 patients was 31 years old (range: 22-43), 19 patients (90.5%) of whom were nulliparous.
Majority of the lesions were at stage IA1 (47.6%) or IB1 (42.9%). Histologically, 80.9% were squamous cell carcinoma. 5 patients (23.8%) had a lesion of 2cm or larger in diameter. There was one recurrence (4.7%) and no death during 52.6 months (6-114 months). Of the 13 women who attempted to get pregnant, 10 (76.9%) women conceived a total of 11 pregnancies. The first and second trimester miscarriages were 2 and 1 respectively, and 7 (70%) of the pregnancies reached the third trimester, of which 5 delivered at term. No tumor-related deaths or PDT-related severe adverse effects were noted. Conclusion: PDT combined with LEEP/conization could be an effective fertility sparing conservative treatment for young patients with early stage cervical cancer. abstract_id: PUBMED:37527959 The efficacy of pre-operative conization in patients undergoing surgical treatment for early-stage cervical cancer: A meta-analysis. Background: Minimal invasive surgery (MIS) has been reported to increase the risk of cancer relapse and death compared with traditional open surgery in patients with early-stage cervical cancer (CC). Pre-operative conization is a protective procedure that as developed to reduce the risk caused by MIS. Methods: Relevant publications were identified by searching medical databases prior to the December 31, 2022. The primary aim of this meta-analysis was to evaluate the efficacy of pre-operative conization on disease-free survival (DFS) in early-stage CC. The secondary objective was to assess the efficacy of pre-operative conization on overall survival (OS) in early-stage CC. Results: Twelve studies were eligible for analysis. The pooled result of pre-operative conization showed a significantly improved DFS when compared with non-conization patients (HR, 0.28; 95% CI, 0.19-0.41), furthermore, pre-operative conization improved DFS by 75% (HR, 0.25; 95% CI, 0.13-0.46) in stage IB1 patients. In patients who underwent MIS, pre-operative conization also led to a significant improvement in DFS when compared with non-conization patients (HR, 0.21; 95% CI, 0.09-0.54). However, in patients who underwent pre-operative conization, MIS increased the risk of recurrence by 34% when compared with open abdominal radical hysterectomy (HR, 1.34; 95% CI, 0.41-4.38), although this difference was not statistically significant. Finally, the OS of early-stage CC was not significantly affected by surgical approach or conization. Conclusion: Pre-operation conization represents a protective effect and can improve DFS when compared with non-conization in early-stage CC, especially in stage IB CC. There was no statistical evidence to indicate that pre-operation conization could improve OS. High-quality randomized controlled trials are required to verify these results. abstract_id: PUBMED:35618539 Protective effect of pre-operative conization in patients undergoing surgical treatment for early-stage cervical cancer. Objective: The aim of this study was to investigate the impact of pre-operative conization on disease-free survival (DFS) in early-stage cervical cancer. Methods: In this population-based cohort study we analysed from clinical cancer registries to determine DFS of women with International Federation of Gynecology and Obstetrics (FIGO) stage IA1-IB1 cervical cancer with respect to conization preceding radical hysterectomy performed between January 2010 and December 2015. Results: Out of 993 datasets available for the analysis, 235 patients met the inclusion criteria of the current study. 
The median follow-up was 5.4 years. During the study period, 28 (11.9%) recurrences were observed. All of these occurred in patients with FIGO stage IB1. For further evaluation, patients with FIGO IB1 tumors <2 cm were further analysed and divided into two groups, based on pre-operative conization. Pre-operative conization was associated with a reduced rate of recurrence (p = 0.007), with only three (5.2%) recurrences in this group (CO) compared to 25 recurrences (21.0%) in the group without conization (NCO) preceding radical hysterectomy. DFS was estimated at 79.0% and 94.8% in NCO and CO, respectively (p = 0.008). After adjustment for other prognostic covariates, conization remained a favourable prognostic factor for DFS (HR 0.27; 95% CI 0.08-0.93, p = 0.037). Lymph node involvement was the only unfavourable factor (HR 4.38; 95% CI 1.36-14.14, p = 0.014) in the multivariable analysis. Conclusions: Pre-operative conization is associated with improved DFS in early-stage cervical cancer independently of the surgical approach. abstract_id: PUBMED:37105062 Conization before radical hysterectomy in patients with early-stage cervical cancer: A Korean multicenter study (COBRA-R). Objective: To investigate the impact of conization on survival outcomes and to identify a specific population that might benefit from conization before radical hysterectomy (RH) in patients with early-stage cervical cancer. Methods: From six institutions in Korea, we identified node-negative, margin-negative, parametria-negative, 2009 FIGO stage IB1 cervical cancer patients who underwent primary type C RH between 2006 and 2021. The patients were divided into multiple groups based on tumor size, surgical approach, and histology. We performed a series of independent 1:1 propensity score matching and compared the survival outcomes between the conization and non-conization groups. Results: In total, 1254 patients were included: conization (n = 355) and non-conization (n = 899). Among the matched patients with a tumor size of >2 cm, the conization group showed a significantly better 3-year disease-free survival (DFS) rate compared with the non-conization group when RH was conducted via minimally invasive surgery (MIS), in those with squamous cell carcinoma (96.3% vs. 87.4%, P = 0.007) and non-squamous cell carcinoma (97.0% vs. 74.8%, P = 0.021). However, no difference in DFS was observed between the two groups among the matched patients with a tumor size of ≤2 cm, regardless of surgical approach or histological type. In patients who underwent MIS RH, DFS significantly worsened as the residual tumor size increased (P < 0.001). Conclusion: Cervical conization was associated with a lower recurrence rate in patients with early-stage cervical cancer with a tumor size of >2 cm who underwent primary MIS RH. Cervical conization may be performed prior to MIS RH to minimize the uterine residual tumor. abstract_id: PUBMED:38130560 Robotic Vaginal Cuff Closure During Radical Hysterectomy for Early-Stage Cervical Cancer: The Bruges Method. The only randomized trial (LACC trial, Laparoscopic Approach to Cervical Cancer), published in 2018, comparing the oncologic outcomes of minimally invasive and open surgery in early-stage cervical cancer, has shown inferior disease-free and overall survival for minimally invasive surgery. 
Subsequent large retrospective cohort studies of centers with long-standing experience in minimally invasive surgery and large nationwide cohort studies have shown that both the laparoscopic and robotic approaches have similar survival outcomes as the open surgery group in the LACC trial. Important protective measures to avoid tumor spillage in the peritoneal cavity during colpotomy were the closure of the vaginal cuff and avoiding the use of a uterine manipulator. Several methods have been described to close the vaginal cuff, mainly by a vaginal approach. Here we describe with a video a new technique of vaginal cuff closure during a robotic-assisted radical hysterectomy. During the robotic procedure, a purse string barbed suture is placed through the vaginal walls in order to close the vagina prior to colpotomy. The technique is a feasible, relatively fast, and easy-to-learn addition to the robotic radical hysterectomy procedure in early-stage cervical cancer. abstract_id: PUBMED:28486242 Conization in Early Stage Cervical Cancer: Pattern of Recurrence in a 10-Year Single-Institution Experience. Objective: The main objective of this study was to analyze the pattern of recurrence after conization and pelvic lymphadenectomy in early-stage cervical cancer (CC). Methods: We retrospectively identified 60 patients with early-stage CC who referred to the European Institute of Oncology (IEO; Milan, Italy) for fertility-sparing surgery. All of them underwent conization and pelvic lymphadenectomy (one received neoadjuvant chemotherapy followed by simple trachelectomy because of the size of the tumor). Results: In total, 54 patients were considered for final analysis; only 23 patients were entirely treated at IEO. Relapse occurred in 7 (13%) of 54 patients, and in 6 cases (86%) it was local. One patient experienced a pelvic lymph node recurrence (in a woman who conceived 4 months after conservative surgery). However, this was an atypical case for site and timing of recurrence with the consistent doubt that the nodal involvement was already present before conization. Thus, analyzing only IEO population, the recurrence rate was lower (9%), becoming 4% excluding the atypical case with nodal involvement. Conclusions: In our series, the relapse was mainly local (on the cervix). However, the pattern of recurrence and recurrence rates after conization and pelvic lymphadenectomy for early-stage CC are still unclear. Further studies, comparing conization with radical trachelectomy, are necessary to confirm that the adoption of this procedure in clinical practice is safe. Our data highlight that the management of such as a particular condition in dedicated and highly specialized centers is mandatory. abstract_id: PUBMED:32518013 Simple conization and pelvic lymphadenectomy in early-stage cervical cancer: A retrospective analysis and review of the literature. Objectives: To evaluate the feasibility of cervical conization and laparoscopic pelvic lymphadenectomy as a fertility-sparing surgery to treat early-stage cervical cancer. Methods: We conducted a retrospective analysis from a prospectively maintained database of patients with stage IA1-IB1 grossly invisible cervical cancers undergoing conization plus laparoscopic lymphadenectomy between January 2014 and July 2019. Results: Forty patients were identified. Five patients (12.5%) had stage IA1 with lymphovascular space invasion, 21 (52.5%) had stage IA2, and 14 (35.0%) had stage IB1. All of the patients had tumors <2 cm. 
Histology included 35 (87.5%) squamous-cell carcinomas, three (7.5%) adenocarcinomas, and two (5.0%) adenosquamous carcinomas. Median duration of the procedure was 105 min (range, 31-219), and the median estimated blood loss was 50 ml (range, 30-200). One patient received abdominal radical trachelectomy due to the presence of positive margin after conization. Three patients developed postoperative cervical stenosis. After a median follow-up of 35 months (range, 8-74), only one patient (2.5%) developed a recurrence in the remaining cervix, and no patients died. Four of 17 patients attempting to conceive had a spontaneous pregnancy: three delivered at term and one was currently pregnant. Conclusion: Cervical conization and pelvic lymphadenectomy seems to be an acceptable treatment for well-selected patients with low-risk, early-stage cervical cancer who wish to preserve fertility. It offers excellent oncologic outcomes, low perioperative morbidities, and good reproductive results. Further large prospective studies are warranted to prove the effectiveness of this surgery. abstract_id: PUBMED:38134716 Robotic radical hysterectomy after conization for patients with small volume early-stage cervical cancer. Laparoscopy and robotics are recommended for managing gynecological cancer, as they are associated with lower morbidity and comparable outcomes to open surgery. However, in the case of early cervical cancer, new evidence suggests worse oncological outcomes with these approaches compared to open surgery, though the limited number of robotic cases makes it challenging to draw definitive conclusions for this particular approach. The prior conization has been proposed as a strategy to reduce the risk of tumor spillage and contamination during minimally invasive (MIS) radical hysterectomy (RH). Retrospective studies have indicated that undergoing conization before RH is linked to a reduced risk of recurrences, especially in cervical tumors measuring less than 2 cm. Nevertheless, these studies lack the statistical power needed to definitively establish conization as a recommended step before RH. Furthermore, these studies do not have enough cases utilizing the robotic approach and specific conclusions cannot be drawn from this technique. The question of whether a subset of cases would benefit from preoperative conization and whether conization should be performed to recommend MIS over open surgery remains unanswered. Prospective clinical trials involving women diagnosed with early-stage cervical cancer <2 cm, randomized between undergoing conization before robotic RH or without prior conization are mandatory to assess the role of conization before robotic RH in cervical cancer. abstract_id: PUBMED:35513934 Laparo-assisted vaginal radical hysterectomy as a safe option for Minimal Invasive Surgery in early stage cervical cancer: A systematic review and meta-analysis. Background: Radical hysterectomy and pelvic lymphadenectomy are considered the standard treatment for early-stage cervical cancer (ECC). Minimal Invasive approach to this surgery has been debated after the publication of a recent prospective randomized trial (Laparoscopic Approach to Cervical Cancer, LACC trial). It demonstrated poorer oncological outcomes for Minimal Invasive Surgery in ECC. However, the reasons are still an open debate. Laparo-Assisted Vaginal Hysterectomy (LAVRH) seems to be a logical option to Abdominal Radical Hysterectomy (ARH). This meta-analysis has the aim to prove it. 
Methods: Following the recommendations in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, the Pubmed database and Scopus database were systematically searched in January 2022, from the earliest publications onwards. No restriction by country was applied. Only English-language articles were considered. The studies containing data about Disease-free Survival (DFS) and/or Overall Survival (OS) and/or Recurrence Rate (RcR) were included. Results: 19 studies fulfilled the inclusion criteria. 9 comparative studies were included in the meta-analysis. Patients were analyzed concerning surgical approach (Laparo-Assisted Vaginal Radical Hysterectomy) and compared with ARH. Oncological outcomes such as DFS and OS were considered. 3196 patients were included in the review. Meta-analysis of 1988 of them highlighted a non-statistically significant difference between LAVRH and ARH (RR 0.8 [95% CI 0.55-1.16] p = 0.24; I2 = 0%; p = 0.98). OS analysis was feasible only for 4 studies (RR 0.84 [95% CI 0.23-3.02] p = 0.79; I2 = 0 p = 0.44). Sub-analysis for tumors with a maximum diameter greater than 2 cm was performed. Data about the type of recurrences (loco-regional vs distant) were collected. Conclusion: LAVRH does not appear to affect DFS and OS in ECC patients. The proposed results seem to be comparable with the open approach group of the LACC trial, which today represents the reference standard for the treatment of this pathology. More studies will be needed to test the safety and efficacy of LAVRH in ECC. Answer: The analysis of a continuous series of 34 young patients with early-stage cervical cancer selected for vaginal radical trachelectomy (VRT) suggests that "staging" conization should be considered as part of the staging procedures before VRT. In the study, 28 patients underwent VRT, with 20 having a previous conization. Six patients who were initially considered eligible for VRT based on clinical and radiologic assessments underwent "staging" conization, which revealed contraindications to VRT such as tumors greater than 2 cm or associated with multiple foci of lymphovascular space involvement (LVSI). The study found that among patients who did not undergo "staging" conization before VRT, there was a higher recurrence rate. In contrast, among patients who underwent VRT and had LVSI, the recurrence rate was lower. These results suggest that conization can help to more accurately determine tumor size and LVSI status, which are critical factors in selecting patients for VRT (PUBMED:23358180). Additionally, a meta-analysis evaluating the efficacy of pre-operative conization on disease-free survival (DFS) in early-stage cervical cancer found that pre-operative conization significantly improved DFS compared to non-conization patients, particularly in stage IB1 patients. The study also indicated that pre-operative conization could be a protective procedure, especially when minimal invasive surgery (MIS) is used, as it may reduce the risk of cancer relapse (PUBMED:37527959). Another study that investigated the impact of pre-operative conization on DFS in early-stage cervical cancer found that pre-operative conization was associated with a reduced rate of recurrence and improved DFS independently of the surgical approach (PUBMED:35618539).
In conclusion, the evidence from these studies supports the systematic performance of "staging" conization before VRT in young patients with early-stage cervical cancer, as it appears to improve patient selection for the procedure and may enhance oncological outcomes.
Instruction: Is context everything to the definition of clinical depression? Abstracts: abstract_id: PUBMED:21183224 Is context everything to the definition of clinical depression? A test of the Horwitz and Wakefield postulate. Background: In arguing for the need to distinguish clinical depression from sadness, Horwitz and Wakefield argued for weighting consideration to nuances of life event stressors. Their definition of clinical depression corresponds to the concept of endogenous depression or melancholia, while their model would position reactive (or context specific) non-melancholic depressive disorders more as manifestations of 'sadness' rather than as clinical depression. Method: We test their postulate by examining the extent to which 141 clinically diagnosed melancholic and non-melancholic depressed patients reported episodes as being preceded by a life event stressor or not--and the salience of any life stressor to episode onset and severity. Results: While melancholic patients were more likely than non-melancholic patients to report episodes coming 'out of the blue' and to be more severe than might be expected from the severity of antecedent stressors, differences were more ones of degree and not absolute. Such context variables appeared, however, to differentiate melancholic and non-melancholic patients more consistently than depression symptom variables. As depression severity and impairment levels did not differ across the melancholic and non-melancholic patients, findings were unlikely to be artefacts of such factors. Conclusions: The study finds some support for the Horwitz and Wakefield hypothesis of clinical (or, at least melancholic) depression requiring independence of context or an antecedent stressor, but with precision likely to be compromised by nuances intrinsic to assessment of life event stressors and their contribution to depression onset, difficulties in defining valid 'melancholic' and 'non-melancholic' depressive sub-groups and the parsimony of the hypothesis. abstract_id: PUBMED:38422869 How different definition criteria may predict clinical outcome in treatment resistant depression: Results from a prospective real-world study. Management of treatment-resistant depression (TRD) remains a major public health challenge, also due to the lack of a consensus around TRD definition. We investigated the impact of different definitions of TRD on identifying patients with distinct features in terms of baseline characteristics, treatment strategies, and clinical outcome. We conducted a prospective naturalistic study on 538 depressed inpatients. Patients were screened for treatment resistance by two TRD definitions: looser criteria (lTRD) and stricter criteria (sTRD). We compared baseline characteristics, treatment and clinical outcome between the TRD groups and their non-TRD counterparts. 52.97 % of patients were identified as lTRD, only 28.81 % met the criteria for sTRD. sTRD patients showed lower rates of remission and slower symptom reduction compared to non-TRD patients and received more challenging treatments. Surprisingly, patients identified as sTRD also exhibited lower rates of psychiatric comorbidities, including personality disorders, substance abuse, or alcohol misuse. Stricter TRD criteria identify patients with worse clinical outcomes. Looser criteria may lead to overdiagnosis and over treatment. 
Clinical features known to be possible risk factors for TRD, such as psychiatric comorbidities, were shown to be more suggestive of a "difficult to manage" depression than of a proper TRD. abstract_id: PUBMED:16138496 Pain symptoms in depression: definition and clinical significance. This article presents the findings of a focused literature review and consensus meetings on the definition and clinical significance of painful symptoms in patients with depression. About 50% of depressed patients report pain, and many types of pain occur more frequently in people with depression than in those without. There is some evidence that pain in depressed patients is associated with a poor response to treatment. Pain and depression may share common pathways and may both respond to treatment with certain antidepressants. Doctors need to be alert to pain in depressed patients and be prepared to treat it. abstract_id: PUBMED:33692724 Depression, Anxiety, and Stress Among Healthcare Workers During the COVID-19 Outbreak and Relationships With Expressive Flexibility and Context Sensitivity. This study aimed at investigating depression, anxiety, and stress symptoms among healthcare workers and at examining the role of expressive flexibility and context sensitivity as key components of resilience in understanding reported symptoms. We hypothesized a significant and different contribution of resilience components in explaining depression, anxiety, and stress. A total sample of 218 Italian healthcare workers participated in this study through an online survey during the lockdown that followed the COVID-19 outbreak. The Depression Anxiety Stress Scales-21 (DASS-21) was used to measure depression, anxiety, and stress; the Flexible Regulation of Emotional Expression (FREE) scale was used to measure the ability to enhance and suppress emotional expression; the Context Sensitivity Index (CSI) was used to measure the ability to accurately perceive contextual cues and determine cue absence. Demographic and work-related data were also collected. DASS-21 cut-off scores were used to verify the mental status among the respondents. Correlational analyses examined relationships between DASS-21, FREE, and CSI, followed by three regression analyses with depression, anxiety, and stress as dependent variables, controlling for age, gender, and work experience. Enhancement and suppression abilities, cue presence, and cue absence served as independent variables. The results showed a prevalence of moderate to extremely severe symptoms of 8% for depression, 9.8% for anxiety, and 8.9% for stress. Results of correlational analysis highlighted that enhancement ability was inversely associated with depression and stress. Suppression ability was inversely associated with depression, anxiety, and stress. The ability to perceive contextual cues was inversely associated with depression and anxiety. The regression analysis showed that the ability to enhance emotional expression was a statistically significant predictor of depression among healthcare workers. In predicting anxiety, age, and the ability to accurately perceive contextual cues and determine cue absence made substantial contributions as predictors. In the last regression model, age, work experience, and the ability to suppress emotional expression were significant predictors of stress.
This study's findings can help understand the specific contributions of enhancement and suppression abilities and sensitivity to stressor context cues in predicting depression, anxiety, and stress among healthcare workers. Psychological interventions to prevent burnout should consider these relationships. abstract_id: PUBMED:34783971 Health and disease as practical concepts: exploring function in context-specific definitions. Despite the longstanding debate on definitions of health and disease concepts, and the multitude of accounts that have been developed, no consensus has been reached. This is problematic, as the way we define health and disease has far-reaching practical consequences. In recent contributions it is proposed to view health and disease as practical- and plural concepts. Instead of searching for a general definition, it is proposed to stipulate context-specific definitions. However, it is not clear how this should be realized. In this paper, we review recent contributions to the debate, and examine the importance of context-specific definitions. In particular, we explore the usefulness of analyzing the relation between the practical function of a definition and the context it is deployed in. We demonstrate that the variety of functions that health and disease concepts need to serve makes the formulation of monistic definitions not only problematic but also undesirable. We conclude that the analysis of the practical function in relation to the context is key when formulating context-specific definitions for health and disease. At last, we discuss challenges for the pluralist stance and make recommendations for future research. abstract_id: PUBMED:30445388 Definition of treatment-resistant depression - Asia Pacific perspectives. Background: The lack of uniformity in the definition of treatment resistant depression (TRD) within the Asia-Pacific (APAC) region may have implications for patient management. We aimed to characterize the most commonly used TRD definition in selected APAC countries. Methods: A systematic literature review of TRD definitions in APAC countries was conducted in Medline and Embase (2010-2016) and conference proceedings (2014 and 2016). TRD guidelines (APAC, Europe regional, US, or international) were also searched. An expert-panel explored APAC nuances in TRD definitions to achieve consensus for a regional-level definition. Results: Ten guidelines and 89 studies qualified for study inclusion. Among the studies, variations were observed in definitions regarding: number of antidepressants failed (range: ≥1 to ≥3), classes of antidepressants (same or different; 59% did not specify class), duration of previous treatments (range: 4-12 weeks), dosage adequacy, and consideration of adherence (yes/no; 88% of studies did not consider adherence). No TRD-specific guidelines were identified. The emerging consensus from the literature review and panel discussion was that TRD is most commonly defined as failure to ≥2 antidepressant therapies given at adequate doses, for 6-8 weeks during a major depressive episode. Limitations: Few studies provided definitions of TRD used in daily clinical practice, and a limited number of countries were represented in the included studies and expert panel. Conclusion: Attaining consensus on TRD definition may promote accurate, and possibly early detection of patients with TRD to enable appropriate intervention that may impact patient outcomes and quality of life. 
abstract_id: PUBMED:30917602 Context Definition and Query Language: Conceptual Specification, Implementation, and Evaluation. As IoT grows at a staggering pace, the need for contextual intelligence is a fundamental and critical factor for IoT intelligence, efficiency, effectiveness, performance, and sustainability. As the standardisation efforts for IoT are fast progressing, efforts in standardising context management platforms led by the European Telecommunications Standards Institute (ETSI) are gaining more attention from both academic and industrial research organizations. These standardisation endeavours will enable intelligent interactions between 'things', where things could be devices, software components, web-services, or sensing/actuating systems. Therefore, having a generic platform to describe and query context is crucial for the future of IoT applications. In this paper, we propose Context Definition and Query Language (CDQL), an advanced approach that enables things to exchange, reuse and share context between each other. CDQL consists of two main parts, namely: context definition model, which is designed to describe situations and high-level context; and Context Query Language (CQL), which is a powerful and flexible query language to express contextual information requirements without considering details of the underlying data structures. An important feature of the proposed query language is its ability to query entities in IoT environments based on their situation in a fully dynamic manner where users can define situations and context entities as part of the query. We exemplify the usage of CDQL on three different smart city use cases to highlight how CDQL can be utilised to deliver contextual information to IoT applications. Performance evaluation has demonstrated scalability and efficiency of CDQL in handling a fairly large number of concurrent context queries. abstract_id: PUBMED:34706412 Improving depression prediction using a novel feature selection algorithm coupled with context-aware analysis. Background: Developing machine learning based depression prediction method with information from long-term recordings is important and challenging to clinical diagnosis of depression. Methods: We developed a novel two-stage feature selection algorithm conducted on the high-dimensional (over thirty thousand) features constructed by a context-aware analysis on the data set of DAIC-WOZ, including audio, video, and semantic features. The prediction performance was compared with seven reference models. The preferred topics and feature categories related to the retained features were also analyzed respectively. Results: Parsimonious subsets (tens of features) were selected by the proposed method in each case of prediction. We obtained the best performance in depression classification with F1-score as 0.96 (0.67), Precision as 1.00 (0.63), and Recall as 0.92 (0.71) on the development set (test set). We also achieved promising results in depression severity estimation with RMSE as 4.43 (5.11) and MAE as 3.22 (3.98), having a marginal difference with the best reference model (random forest with 'Selected-Text' features). Five most important topics related to depression were revealed. The audio features were predominant to the other feature categories in depression classification while the contributions of the three feature categories to severity estimation were almost equal. Limitations: More depression samples in the database we used should be further included. 
The second stage of feature selection is relatively time-consuming. Conclusion: This pipeline of depression recognition as well as the preferred topics and feature categories are expected to be useful in supporting the diagnosis of psychological distress conditions. abstract_id: PUBMED:26640739 The Relationship between Clinical, Momentary, and Sensor-based Assessment of Depression. The clinical assessment of severity of depressive symptoms is commonly performed with standardized self-report questionnaires, most notably the patient health questionnaire (PHQ-9), which are usually administered in a clinic. These questionnaires evaluate symptoms that are stable over time. Ecological momentary assessment (EMA) methods, on the other hand, acquire patient ratings of symptoms in the context of their lives. Today's smartphones allow us to also obtain objective contextual information, such as the GPS location, that may also be related to depression. Considering clinical PHQ-9 scores as ground truth, an interesting question is to what extent the EMA ratings and contextual sensor data can be used as potential predictors of depression. To answer this question, we obtained PHQ-9 scores from 18 participants with a variety of depressive symptoms in our lab, and then collected their EMA and GPS sensor data using their smartphones over a period of two weeks. We analyzed the relationship between GPS sensor features, EMA ratings, and the PHQ-9 scores. While we found a strong correlation between a number of sensor features extracted from the two-week period and the PHQ-9 scores, the other relationships remained non-significant. Our results suggest that depression is better evaluated using long-term sensor-based measurements than the momentary ratings of mental state or short-term sensor information. abstract_id: PUBMED:26822778 Enhanced attention to context: An intervention that increases perceived control in mild depression. People perceive that they have control over events to the extent that the same events do not occur outside of their control, randomly, in the environment or context. Therefore, perceived control should be enhanced if there is a large contrast between one's own control and the control that the context itself seems to exert over events. Given that depression is associated with low perceived control, we tested the hypothesis that enhanced attentional focus to context will increase perceived control in people with and without depression. A total of 106 non-depressed and mildly depressed participants completed a no control zero-contingency task with low and high outcome probability conditions. In the experimental context-focus group, participants were instructed to attend to the context, whereas in the control group, participants were instructed to attend to their thoughts. Irrespective of attentional focus, non-depressed participants displayed illusory control. However, people with mild depression responded strongly to the attention focus manipulation. In the control group, they evidenced low perceived control with classic depressive realism effects. In the experimental group, when asked to focus on the context in which events took place, participants with mild depression displayed enhanced perceived control or illusory control, similar to non-depressed participants. Findings are discussed in relation to whether depression effects on perceived control represent tendencies towards realism or attentional aspects of depressive thoughts. 
Answer: Context is not everything to the definition of clinical depression, but it plays a significant role. Horwitz and Wakefield argued that clinical depression, particularly melancholic depression, should be distinguished from sadness by considering the context of life event stressors; their hypothesis posits that clinical depression requires independence from context or an antecedent stressor. A study testing this postulate in 141 clinically diagnosed melancholic and non-melancholic depressed patients (PUBMED:21183224) found that while melancholic patients were more likely to report episodes coming 'out of the blue' and to be more severe than might be expected from the severity of antecedent stressors, the differences were ones of degree rather than being absolute. This indicates that context variables do differentiate melancholic and non-melancholic patients, but not in an absolute sense, and that the precision of this hypothesis is likely compromised by the nuances intrinsic to assessing life event stressors and their contribution to depression onset (PUBMED:21183224). Moreover, the definition of treatment-resistant depression (TRD) and its criteria can impact the identification of patients with distinct features, baseline characteristics, treatment strategies, and clinical outcomes. Stricter TRD criteria identify patients with worse clinical outcomes, while looser criteria may lead to overdiagnosis and overtreatment (PUBMED:38422869). This suggests that context, in terms of definition criteria, is important in predicting clinical outcomes in depression. Additionally, the context in which healthcare workers experienced depression, anxiety, and stress during the COVID-19 outbreak was found to be related to their expressive flexibility and context sensitivity, which are components of resilience (PUBMED:33692724). This further supports the idea that context is relevant to the experience and definition of depression. In conclusion, while context is a significant factor in defining and understanding clinical depression, it is not the sole determinant. The nuances of life event stressors, the criteria used to define treatment-resistant depression, and the specific circumstances surrounding individuals all contribute to the complexity of defining clinical depression.
Instruction: Does tobacco-control mass media campaign exposure prevent relapse among recent quitters? Abstracts: abstract_id: PUBMED:22949574 Does tobacco-control mass media campaign exposure prevent relapse among recent quitters? Objective: To determine whether greater mass media campaign exposure may assist recent quitters to avoid relapse. Method: Using date of data collection and postcode, media market estimates of televised tobacco-control advertising exposure measured by gross ratings points (GRPs) were merged with a replenished cohort study of 443 Australians who had quit in the past year. Participants' demographic and smoking characteristics prior to quitting, and advertising exposure in the period after quitting, were used to predict relapse 1 year later. Results: In multivariate analysis, each increase in exposure of 100 GRPs (i.e., 1 anti-smoking advertisement) in the three-month period after the baseline quit was associated with a 5% increase in the odds of not smoking at follow-up (OR = 1.05, 95% CI 1.02-1.07, p < 0.001). This relationship was linear and unmodified by length of time quit prior to the baseline interview. At the mean value of 1081 GRPs in the 3 months after the baseline-quit interview, the predicted probability of being quit at follow-up was 52%, whereas it was 41% for the minimum (0) and 74% for the maximum (3,541) GRPs. Conclusion: Greater exposure to tobacco-control mass media campaigns may reduce the likelihood of relapse among recent quitters. abstract_id: PUBMED:26036664 The effect of exposure to media campaign messages on adult cessation. Introduction: Numerous studies have examined the relationship between antitobacco mass-media campaigns and quit attempts. However, less is known about the effect of these campaigns on relapse. This paper evaluates the effect of media exposure on smokers' quit attempts and relapse. Methods: We used data from the Florida Adult Cohort Survey, a telephone follow-up survey of adult smokers and recent quitters, who completed the Florida Adult Tobacco Survey. For this study, 1823 unique smokers and recent quitters from baseline first observed between July 2008 and October 2012 were surveyed through up to seven follow-up interviews between October 2009 and October 2013. Media exposure during this period primarily represents exposure to Florida's Tobacco Free Florida (TFF) campaign, although it also includes exposure to the Centers for Disease Control and Prevention's Tips From Former Smokers media campaign in 2012-2013. A multiple-spell discrete-time survival model was estimated using logistic regression. Each spell represents a quit attempt or relapse event. Results: The odds of the first observed quit attempt are higher at higher levels of target rating points (TRPs) (aOR=1.02, p=0.023). The odds ratio for relapse and second quit and second relapse was not statistically significant. Conclusions: The results suggest that exposure to media campaign messages in Florida has led to increases in quit attempts. Although the estimates were not statistically significant for relapse or the second spell of quit attempts or relapse, the results suggest that media messages might also influence subsequent quit attempts or relapses after an initial quit attempt. abstract_id: PUBMED:37312770 Motivation to quit tobacco; Impact of different types of Anti-tobacco state-sponsored media propaganda messages. 
Introduction: Antitobacco media messages can easily reach the masses and play a very positive and significant role in changing the motivational stages among recent quitters. Motivation is the key to changing human behaviour. Motivation can be intrinsic and extrinsic. To modify tobacco-related behaviour, one must have an inherent motivation to quit tobacco. However, the outside factors, for example, protobacco advertisements, antitobacco advertisements, peer pressure, celebrity influence, and family members' influence cannot be ignored. Method: A total of 400 recent tobacco quitters were enrolled from four colleges via a multistage sampling method. A time series research design was used for data collection at three time points (0, 1, and 3 months). Study participants were divided into four groups: 1) personal testimony group, 2) health warning group, 3) celebrity-influenced public service announcements, and 4) natural exposure group. Media messages containing antitobacco video clippings and pictures were delivered to the participants via phone thrice a week, as per the groups assigned. All four groups were assessed for the motivational stage via contemplation ladder at 0, 1, and 3 months. Results: Antitobacco personal testimonial media messages are most effective in enhancing the motivation to quit tobacco, followed by the antitobacco health warning messages, which are also proven to be effective in maintaining high motivation levels to remain abstinent from smoking. However, public service announcements are ineffective in keeping the motivation to quit tobacco at higher levels. Conclusion: Overall, the antitobacco state-sponsored media messages, personal testimonials, and health warnings about tobacco products effectively maintain and enhance motivation to quit tobacco. abstract_id: PUBMED:38441941 Impact of Traditional and New Media on Smoking Intentions and Behaviors: Secondary Analysis of Tasmania's Tobacco Control Mass Media Campaign Program, 2019-2021. Background: Tasmania, the smallest state by population in Australia, has a comprehensive tobacco control mass media campaign program that includes traditional (eg, television) and "new" channels (eg, social media), run by Quit Tasmania. The campaign targets adult smokers, in particular men aged 18-44 years, and people from low socioeconomic areas. Objective: This study assesses the impact of the 2019-2021 campaign program on smokers' awareness of the campaign program, use of Quitline, and smoking-related intentions and behaviors. Methods: We used a tracking survey (conducted 8 times per year, immediately following a burst of campaign activity) to assess campaign recall and recognition, intentions to quit, and behavioral actions taken in response to the campaigns. The sample size was approximately 125 participants at each survey wave, giving a total sample size of 2000 participants over the 2 years. We merged these data with metrics including television target audience rating points, digital and Facebook (Meta) analytics, and Quitline activity data, and conducted regression and time-series modeling. Results: Over the evaluation period, unprompted recall of any Quit Tasmania campaign was 18%, while prompted recognition of the most recent campaign was 50%. Over half (52%) of those who recognized a Quit Tasmania campaign reported that they had performed or considered a quitting-related behavioral action in response to the campaign.
In the regression analyses, we found having different creatives within a single campaign burst was associated with higher campaign recall and recognition and an increase in the strength of behavioral actions taken. Higher target audience rating points were associated with higher campaign recall (but not recognition) and an increase in quit intentions, but not an increase in behavioral actions taken. Higher Facebook advertisement reach was associated with lower recall among survey participants, but recognition was higher when digital channels were used. The time-series analyses showed no systematic trends in Quitline activity over the evaluation period, but Quitline activity was higher when Facebook reach and advertisement spending were higher. Conclusions: Our evaluation suggests that a variety of creatives should be used simultaneously and supports the continued use of traditional broadcast channels, including television. However, the impact of television on awareness and behavior may be weakening. Future campaign evaluations should closely monitor the effectiveness of television as a result. We are also one of the first studies to explicitly examine the impact of digital and social media, finding some evidence that they influence quitting-related outcomes. While this evidence is promising for campaign implementation, future evaluations should consider adopting rigorous methods to further investigate this relationship. abstract_id: PUBMED:28798263 Cost-effectiveness of a smokeless tobacco control mass media campaign in India. Background: Tobacco control mass media campaigns are cost-effective in reducing tobacco consumption in high-income countries, but similar evidence from low-income countries is limited. An evaluation of a 2009 smokeless tobacco control mass media campaign in India provided an opportunity to test its cost-effectiveness. Methods: Campaign evaluation data from a nationally representative household survey of 2898 smokeless tobacco users were compared with campaign costs in a standard cost-effectiveness methodology. Costs and effects of the Surgeon campaign were compared with the status quo to calculate the cost per campaign-attributable benefit, including quit attempts, permanent quits and tobacco-related deaths averted. Sensitivity analyses at varied CIs and tobacco-related mortality risk were conducted. Results: The Surgeon campaign was found to be highly cost-effective. It successfully generated 17 259 148 additional quit attempts, 431 479 permanent quits and 120 814 deaths averted. The cost per benefit was US$0.06 per quit attempt, US$2.6 per permanent quit and US$9.2 per death averted. The campaign continued to be cost-effective in sensitivity analyses. Conclusion: This study suggests that tobacco control mass media campaigns can be cost-effective and economically justified in low-income and middle-income countries. It holds significant policy implications, calling for sustained investment in evidence-based mass media campaigns as part of a comprehensive tobacco control strategy. abstract_id: PUBMED:36444612 Hope and sadness: Balancing emotions in tobacco control mass media campaigns aimed at smokers. Issue Addressed: Australia has smoking prevalence of less than 15% among adults, but there are concerns that the rates of decline have stabilised. Sustained mass media campaigns are central to decreasing prevalence, and the emotions evoked by campaigns contribute to their impact. 
This study investigates the association of potential exposure to campaigns that evoke different emotions with quitting salience (thinking about quitting), quitting intentions and quitting attempts. Methods: Data on quitting outcomes were obtained from weekly cross-sectional telephone surveys with adult smokers and recent quitters between 2013 and 2018. Campaign activity data were collated, and population-level potential campaign exposure was measured by time and dose. Results: Using multivariate analyses, a positive association between potential exposure to 'hope' campaigns and thinking about quitting and intending to quit was noted, but no association was seen with quit attempts. Potential exposure to 'sadness' evoking campaigns was positively associated with quitting salience and negatively associated with quit attempts, whereas those potentially exposed to campaigns evoking multiple negative emotions (fear, guilt and sadness) were approximately 30% more likely to make a quit attempt. Conclusions: This study suggests a relationship between the emotional content of campaigns and quitting behaviours. Campaign planners should consider campaigns that evoke negative emotions for population-wide efforts to bring about quitting activity alongside hopeful campaigns that promote quitting salience and quitting intentions. The emotional content of campaigns provides an additional consideration for campaigns targeting smokers and influencing quitting activity. SO WHAT?: This study demonstrates the importance of balancing the emotional content of campaigns to ensure that campaign advertising is given the greatest chance to achieve its objectives. Utilising campaigns that evoke negative emotions appears to be needed to encourage quitting attempts, but maintaining hopeful campaigns to promote thinking about quitting and intending to quit is also an important component of the mix of tobacco control campaigns. abstract_id: PUBMED:27816042 Late smoking relapse among adolescent quitters. Whereas some data are available about late smoking relapse among adult quitters, there are none for teen quitters. This study is a 6-year follow-up of teen quitters (n=253) for whom we collected (retrospectively) data on the extent and timing of relapse. We found that even after a strictly defined quit (six-month prolonged abstinence) at one year, substantial relapse occurred both early and late: the majority (55%) of relapses occurred after the 0-1 year interval after having quit. These findings have implications for the need for research into the relapse process for teen quitters, and for the need to develop interventions for teens (as for adults) to prevent (early and) late relapse. abstract_id: PUBMED:31698724 Cost-Effectiveness of Using Mass Media to Prevent Tobacco Use among Youth and Young Adults: The FinishIt Campaign. Mass media campaigns have been hailed as some of the most effective tobacco prevention interventions. This study examined the cost-effectiveness of the national tobacco prevention campaign, truth® FinishIt, to determine the cost per quality-adjusted life year (QALY) saved and the return on investment (ROI). The cost-utility analysis used four main parameters: program costs, number of smoking careers averted, treatment costs, and number of QALYs saved whenever a smoking career is averted. Parameters were varied to characterize cost-effectiveness under different assumptions (base case, conservative, optimistic, and most optimistic).
The ROI estimate compared campaign expenditures to the cost saved due to the campaign implementation. Analyses were conducted in 2019. The base case analysis indicated the campaign results in a societal cost savings of $3.072 billion. Under the most conservative assumptions, estimates indicated the campaign was highly cost-effective at $1076 per QALY saved. The overall ROI estimate was $174 ($144 in costs to smokers, $24 in costs to the smoker's family, and $7 in costs to society) in cost savings for every $1 spent on the campaign. In all analyses, the FinishIt campaign was found to reach or exceed the threshold levels of cost savings or cost-effectiveness, with a positive ROI. These findings point to the value of this important investment in the health of the younger generation. abstract_id: PUBMED:29582640 Effectiveness of a Mass Media Campaign on Oral Carcinogens and Their Effects on the Oral Cavity Objective: To develop a mass media campaign on oral carcinogens and their effects on the oral cavity in order to increase awareness among the general population. Methods: Documentary and public service announcements highlighting the effects of tobacco and its products were designed and developed based on principles of behavior change. A questionnaire, designed to determine the knowledge, attitude and practice of people regarding oral carcinogens, was used to conduct a baseline survey at various sites in eastern Nepal. Local television channels and radio stations broadcasted the documentary and public service announcements. An evaluation survey was then performed to assess the effectiveness of the campaign. Results: Baseline and evaluation surveys covered 1,972 and 2,140 individuals, respectively. A third of the baseline population consumed quid, 22% chewing tobacco, 16% gutka (commercial preparation of arecanut, tobacco, lime and chemicals) and 25% cigarettes. Tobacco consumption differed significantly between 3 ecologic regions with greater use in the Terai region. The knowledge prevalence regarding the oral carcinogens quid (70%), chewing tobacco (82%), gutka (58%) and cigarettes (93%) significantly increased in the evaluation population. Females were more aware about the various tobacco products and their effects on health. More people knew about the harmful effects of tobacco on their health and oral cavity, and had their mouth examined and the frequency of consumption of these products reduced significantly after the campaign. Attitudes towards production, sale and advertisements of tobacco also improved significantly. Conclusions: The mass media campaign was an effective tool for increasing awareness among the population. abstract_id: PUBMED:19817312 Media advocacy in anti-tobacco campaign in the Republic of Moldova Unlabelled: This paper makes obvious the role of mass media in supporting the effort of an organisation/group of citizens to achieve changes of a social nature (requires, first of all, changes of public policies) and its promotion at the level of decision-makers, as well as public opinion. The main purpose of this paper is to study the phenomenon of media advocacy, identify its social potential to project and evaluate media engagement in the national anti-tobacco campaign of 2007. The anti-tobacco campaign carried out in 2007 had as an aim the public awareness on changing the tobacco control policies in Moldova. 
Media advocacy activities focused on improving journalists' knowledge in the field of tobacco control and encouraging them (through the media products produced) to influence the Parliament's members to ratify documents against smoking. Results: the number of anti-tobacco references (articles, radio and TV broadcasts) increased two- to three-fold, and for some media outlets even ten-fold, compared with 2006. Media advocacy made a significant contribution to the unfolding of political and social events in this field (approval of FCTC; approval of a new Law on tobacco and tobacco products, stipulating "restrictions and afferent provisions to the consumption of tobacco products and the harmful effects on health"). Answer: Yes, tobacco control mass media campaign exposure may help prevent relapse among recent quitters. A study that merged media market estimates of televised tobacco-control advertising exposure with a cohort study of Australians who had quit smoking in the past year found that each increase in exposure of 100 gross ratings points (GRPs) in the three-month period after quitting was associated with a 5% increase in the odds of not smoking at follow-up. This suggests that greater exposure to tobacco-control mass media campaigns may reduce the likelihood of relapse among recent quitters (PUBMED:22949574). Another study using data from the Florida Adult Cohort Survey indicated that exposure to media campaign messages in Florida led to increases in quit attempts, although the estimates were not statistically significant for relapse or subsequent quit attempts or relapse after an initial quit attempt (PUBMED:26036664). Additionally, antitobacco personal testimonial media messages were found to be most effective in enhancing the motivation to quit tobacco, which is a key factor in preventing relapse (PUBMED:37312770).
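To make the exposure-response arithmetic above concrete, the following is a minimal sketch (not the authors' model) that converts the reported odds ratio of 1.05 per 100 GRPs into approximate probabilities of remaining quit. It assumes a single-predictor logistic relationship anchored at the reported 41% predicted probability for zero exposure; the published 52% and 74% figures come from a multivariate model, so this simplified calculation only roughly reproduces them.

# Minimal sketch (assumption: single-predictor logistic model; not the published multivariate model).
OR_PER_100_GRPS = 1.05   # reported odds ratio per 100 GRPs (roughly one anti-smoking advertisement)
BASELINE_PROB = 0.41     # reported predicted probability of being quit at 0 GRPs

def predicted_quit_probability(grps: float) -> float:
    """Approximate probability of still being quit at follow-up for a given exposure."""
    baseline_odds = BASELINE_PROB / (1.0 - BASELINE_PROB)
    odds = baseline_odds * OR_PER_100_GRPS ** (grps / 100.0)
    return odds / (1.0 + odds)

for grps in (0, 1081, 3541):  # minimum, mean, and maximum exposure in the study
    print(f"{grps:>5} GRPs -> {predicted_quit_probability(grps):.0%}")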
Instruction: Should dosing of rocuronium in obese patients be based on ideal or corrected body weight? Abstracts: abstract_id: PUBMED:19690247 Should dosing of rocuronium in obese patients be based on ideal or corrected body weight? Background: Pharmacokinetic studies in obese patients suggest that dosing of rocuronium should be based on ideal body weight (IBW). This may, however, result in a prolonged onset time or compromised conditions for tracheal intubation. In this study, we compared onset time, conditions for tracheal intubation, and duration of action in obese patients when the intubation dose of rocuronium was based on three different weight corrections. Methods: Fifty-one obese patients, with a median (range) body mass index of 44 (34-72) kg/m2, scheduled for laparoscopic gastric banding or gastric bypass under propofol-remifentanil anesthesia were randomized into three groups. The patients received rocuronium (0.6 mg/kg) based on IBW (IBW group, n = 17), IBW plus 20% of excess weight (corrected body weight [CBW]20% group, n = 17), or IBW plus 40% of excess weight (CBW40% group, n = 17). Propofol was administered as a bolus of 200 mg and an infusion at 5 mg x kg(-1) x h(-1) and remifentanil was administered at 1.0 microg x kg(-1) x min(-1), both according to CBW40%. Neuromuscular function was monitored with train-of-four nerve stimulation and acceleromyography. The primary end point was duration of action, defined as time to reappearance of the fourth twitch in train-of-four. Results: The median (range) duration of action was 32 (18-49), 38 (25-66), and 42 (24-66) min in the IBW, CBW20%, and CBW40% groups, respectively (P = 0.001 for comparison of the IBW and CBW40% group). There were no significant differences in onset time (85 vs 84 vs 80 s) or in intubation conditions 90 s after administration of rocuronium. Conclusions: In obese patients undergoing gastric banding or gastric bypass, rocuronium dosed according to IBW provided a shorter duration of action without a significantly prolonged onset time or compromised conditions for tracheal intubation. abstract_id: PUBMED:35983671 Appropriate dosing of sugammadex for reversal of rocuronium-/vecuronium-induced muscle relaxation in morbidly obese patients: a meta-analysis of randomized controlled trials. Objective: To conduct a meta-analysis to compare different dosing scalars of sugammadex in a morbidly obese population for reversal of neuromuscular blockade (NMB). Methods: PubMed®, ClinicalTrials.gov, Cochrane Central Register of Controlled Trials (CENTRAL) and Google Scholar were searched for relevant randomized controlled trials (RCTs) comparing lower-dose sugammadex using ideal body weight (IBW) or corrected body weight (CBW) as dosing scalars with standard-dose sugammadex based on total body weight (TBW) among morbidly obese people after NMB. Mean difference with SD was used to estimate the results. Results: The analysis included five RCT with a total of 444 morbidly obese patients. The reversal time was significantly longer in patients receiving sugammadex with dosing scalar based on IBW than in patients receiving sugammadex with dosing scalar based on TBW (mean difference 55.77 s, 95% confidence interval [CI] 32.01, 79.53 s), but it was not significantly different between patients receiving sugammadex with dosing scalars based on CBW versus TBW (mean difference 2.28 s, 95% CI -10.34, 14.89 s). 
Conclusion: Compared with standard-dose sugammadex based on TBW, lower-dose sugammadex based on IBW had a 56 s longer reversal time, whereas lower-dose sugammadex based on CBW had a comparable reversal time. abstract_id: PUBMED:21692760 Ideal versus corrected body weight for dosage of sugammadex in morbidly obese patients. To date, the dosing of sugammadex is based on real body weight without taking fat content into account. We compared the reversal of profound rocuronium-induced neuromuscular blockade in morbidly obese patients using doses of sugammadex based on four different weight corrections. One hundred morbidly obese patients, scheduled for laparoscopic bariatric surgery under propofol-sufentanil anaesthesia, were randomly assigned to four groups: ideal body weight; ideal body weight + 20%; ideal body weight + 40%; and real body weight. Patients received sugammadex 2 mg.kg(-1) when adductor pollicis monitoring showed two responses. The primary endpoint was full decurarisation. Secondary endpoints were the ability to get into bed independently on arrival in the post-anaesthetic care unit and clinical signs of residual paralysis. There was no residual paralysis in any patient. Morbidly obese patients can safely be decurarised from rocuronium-induced neuromuscular blockade T1-T2 with sugammadex dosed at 2 mg.kg(-1) ideal body weight + 40% (p < 0.0001). abstract_id: PUBMED:29310829 Sugammadex by ideal body weight versus 20% and 40% corrected weight in bariatric surgery - double-blind randomized clinical trial. Background And Objectives: The weight parameters for use of sugammadex in morbidly obese patients still need to be defined. Methods: A prospective clinical trial was conducted with sixty participants with a body mass index ≥ 40 kg.m-2 during bariatric surgery, randomized into three groups: ideal weight (IW), 20% corrected body weight (CW20) and 40% corrected body weight (CW40). All patients received total intravenous anesthesia. Rocuronium was administered at a dose of 0.6 mg.kg-1 of ideal weight for tracheal intubation, followed by an infusion of 0.3-0.6 mg.kg-1.h-1. Train-of-four (TOF) monitoring was used to assess the depth of blockade. After spontaneous recovery to a TOF count of 2 at the end of surgery, 2 mg.kg-1 of sugammadex was administered. The primary outcome was neuromuscular blockade reversal time to TOF ≥ 0.9. The secondary outcome was the occurrence of postoperative residual curarization in the post-anesthesia recovery room, assessed by the patient's ability to move from the surgical bed to the transport, adequacy of oxygenation, respiratory pattern, ability to swallow saliva and clarity of vision. Results: Groups were homogeneous in gender, age, total body weight, ideal body weight, body mass index, and type and time of surgery. The reversal times (s) were (mean ± standard deviation) 225.2 ± 81.2, 173.9 ± 86.8 and 174.1 ± 74.9, respectively, in the IW, CW20 and CW40 groups (p = 0.087). Conclusions: No differences were observed between groups in neuromuscular blockade reversal time or in the frequency of postoperative residual curarization. We concluded that ideal body weight can be used to calculate the sugammadex dose to reverse moderate neuromuscular blockade in morbidly obese patients. abstract_id: PUBMED:22552386 Ideal body weight-based remifentanil infusion is potentially insufficient for anesthetic induction in mildly obese patients.
We evaluated whether the effect of remifentanil treatment differs between normal weight (NW) patients with real body weight-based remifentanil and mildly obese (Ob) patients with ideal body weight-based remifentanil during short-term anesthetic induction. We enrolled 20 patients aged between 20 and 64 years in each group (NW group: 18.5 kg/m(2) ≤ BMI < 25 kg/m(2); Ob group: BMI ≥ 25 kg/m(2)). Tracheal intubation (TI) was performed after administration of 0.5 μg/kg/min remifentanil for 5 min, including 2 min of antecedent administration, with propofol and rocuronium. Hemodynamic parameters (SBP, DBP, and HR) were measured. Percent changes in hemodynamics resulting from anesthetic induction and TI were calculated, and the effect-site concentration (ESC) in each patient was calculated by pharmacokinetic simulation. All hemodynamic values in the Ob group after TI were significantly higher than those in the NW group. Percent increases in SBP and HR in the Ob group were significantly higher than the corresponding values in the NW group. The ESC of remifentanil at the time of TI in the NW group was higher than that in the Ob group. Remifentanil treatment with anesthetic induction based on the Japanese package insert might have insufficient effects in obese patients. abstract_id: PUBMED:27110105 Weight-based dosing in medication use: what should we know? Background: Weight-based dosing strategy is still challenging due to poor awareness and adherence. It is necessary to let clinicians know of the latest developments in this respect and the correct circumstances in which weight-based dosing is of clinical relevance. Methods: A literature search was conducted using PubMed. Results: Clinical indications, physiological factors, and types of medication may determine the applicability of weight-based dosing. In some cases, the weight effect may be minimal or the proper dosage can only be determined when weight is combined with other factors. Medications within a similar therapeutic or structural class (eg, anticoagulants, antitumor necrosis factor medications, P2Y12-receptor antagonists, and anti-epidermal growth factor receptor antibodies) may exhibit differences in requirements on weight-based dosing. In some cases, weight-based dosing is superior to the currently recommended fixed-dose regimen in adult patients (eg, hydrocortisone, vancomycin, linezolid, and aprotinin). On the contrary, fixed dosing is noninferior to or even better than the currently recommended weight-based regimen in adult patients in some cases (eg, cyclosporine microemulsion, recombinant activated Factor VII, and epoetin α). Ideal body-weight-based dosing may be superior to the currently recommended total body-weight-based regimen (eg, atracurium and rocuronium). For dosing in pediatrics, whether weight-based dosing is better than body surface-area-based dosing is dependent on the particular medication (eg, methotrexate, prednisone, prednisolone, zidovudine, didanosine, growth hormone, and 13-cis-retinoic acid). Age-based dosing strategy is better than weight-based dosing in some cases (eg, intravenous busulfan and dalteparin). Dosing guided by pharmacogenetic testing did not show a pharmacoeconomic advantage over weight-adjusted dosing of 6-mercaptopurine. The common viewpoint (ie, pediatric patients should be dosed on the basis of body weight) is not always correct.
Effective weight-based dosing interventions include standardization of weight estimation, documentation and dosing determination, dosing chart, dosing protocol, order set, pharmacist participation, technological information, and educational measures. Conclusion: Although dosing methods are specified in prescribing information for each drug and there are no principal pros and cons to be elaborated, this review of weight-based dosing strategy will enrich the knowledge of medication administration from the perspectives of safety, efficacy, and pharmacoeconomics, and will also provide research opportunities in clinical practice. Clinicians should be familiar with dosage and administration of the medication to be prescribed as well as the latest developments. abstract_id: PUBMED:33639839 Actual versus ideal body weight dosing of sugammadex in morbidly obese patients offers faster reversal of rocuronium- or vecuronium-induced deep or moderate neuromuscular block: a randomized clinical trial. Background: This randomized, double-blind trial evaluated sugammadex-mediated recovery time from rocuronium- or vecuronium-induced moderate (M-) or deep (D-) neuromuscular block in morbidly obese adults dosed by actual (ABW) or ideal body weight (IBW). Methods: Adults with BMI ≥40 kg/m2 were randomized to 1 of 5 groups: M-neuromuscular block, sugammadex 2 mg/kg ABW; M-neuromuscular block, sugammadex 2 mg/kg IBW; M-neuromuscular block, neostigmine 5 mg, and glycopyrrolate 1 mg; D-neuromuscular block, sugammadex 4 mg/kg ABW; or D-neuromuscular block, sugammadex 4 mg/kg IBW. Supramaximal train of four (TOF) stimulation of the ulnar nerve (TOF-watch SX®) monitored recovery. Primary endpoint was time to TOF ratio ≥ 0.9 for ABW and IBW groups pooled across neuromuscular blocking agent (NMBA)/blocking depth, analyzed by log-rank test stratified for agent and depth. Prespecified safety outcomes included treatment-emergent bradycardia, tachycardia, and other arrhythmias, and adjudicated hypersensitivity and anaphylaxis. Results: Of 207 patients randomized, 188 received treatment (28% male, BMI 47 ± 5.1 kg/m2, age 48 ± 13 years). Recovery was 1.5 min faster with ABW vs IBW dosing. The sugammadex 2 mg/kg groups recovered 9-fold faster [time 0.11-fold, 95% CI 0.08 to 0.14] than the neostigmine group. ABW (5.3%) and IBW (2.7%) groups had similar incidences of recovery time > 10 min (95% CI of difference: - 4.8 to 11.0%); 84% for neostigmine group. Re-curarization occurred in one patient each in the 2 mg/kg IBW and neostigmine groups. Prespecified safety outcomes occurred with similar incidences. Conclusions: ABW-based sugammadex dosing yields faster reversal without re-curarization, supporting ABW-based sugammadex dosing in the morbidly obese, irrespective of the depth of neuromuscular block or NMBA used. Trial Registration: Registered on November 17, 2017, at ClinicalTrials.gov under number NCT03346070 . abstract_id: PUBMED:26739976 Comparison of the effect of rocuronium dosing based on corrected or lean body weight on rapid sequence induction and neuromuscular blockade duration in obese female patients. Objectives: To compare onset time, duration of action, and tracheal intubation conditions in obese patients when the intubation dose of rocuronium was based on corrected body weight (CBW) versus lean body weight (LBW) for rapid sequence induction. Methods: This prospective study was carried out at Numune Education and Research Hospital, Ankara, Turkey between August 2013 and May 2014. 
Forty female obese patients scheduled for laparoscopic surgery under general anesthesia were randomized into 2 groups. Group CBW (n=20) received 1.2 mg/kg rocuronium based on CBW, and group LBW (n=20) received 1.2 mg/kg rocuronium based on LBW. Endotracheal intubation was performed 60 seconds after injection of muscle relaxant, and intubating conditions were evaluated. Neuromuscular transmission was monitored using acceleromyography of the adductor pollicis. Onset time, defined as time to depression of the twitch tension to 95% of its control value, and duration of action, defined as time to achieve one response to train-of-four stimulation (T1) were recorded. Results: No significant differences were observed between the groups in intubation conditions or onset time (50-60 seconds median, 30-30 interquartile range [IQR]). Duration of action was significantly longer in the CBW group (60 minutes median, 12 IQR) than the LBW group (35 minutes median, 16 IQR; p less than 0.01). Conclusion: In obese patients, dosing of 1.2 mg/kg rocuronium based on LBW provides excellent or good tracheal intubating conditions within 60 seconds after administration and does not lead to prolonged duration of action. abstract_id: PUBMED:15385355 The pharmacodynamic effects of rocuronium when dosed according to real body weight or ideal body weight in morbidly obese patients. We investigated the pharmacodynamic effects of rocuronium on morbidly obese patients. Twelve morbidly obese female patients (body mass index >40 kg/m(2)) admitted for laparoscopic gastric banding were randomized into two groups. Group 1 (n = 6) received 0.6 mg/kg of rocuronium based on real body weight, whereas Group 2 (n = 6) received 0.6 mg/kg of rocuronium based on ideal body weight. In a control group of six normal-weight female patients admitted for laparoscopic surgery, rocuronium was dosed on the basis of their real body weight. Neuromuscular transmission was monitored by using acceleromyography of the adductor pollicis; anesthesia was induced and maintained with remifentanil and propofol. The onset time tended to be shorter in Group 1 and the control group compared with Group 2, but this did not achieve statistical significance. Duration of action to 25% of twitch tension was more than double in Group 1 (55 min) compared with the other two groups (22 and 25 min; P < 0.001). Duration of action was similar between Group 2 and control. Recovery index tended to be longer in Group 1, but without a significant difference. In conclusion, in morbidly obese patients, the duration of action of rocuronium is significantly prolonged when it is dosed according to real body weight. Therefore, the dosage should be assessed on the basis of ideal rather than on real body weight in clinical practice. abstract_id: PUBMED:38205981 Rocuronium Dosing by Ideal vs Total Body Weight in Obesity: A Prospective, Observational Non-inferiority Study. Background: Providing adequate paralysis and appropriate sedation is challenging in patients with obesity during rapid sequence intubation (RSI). Pharmacokinetic parameters play an important role in dosing of rocuronium due to low lipophilicity. Rocuronium may be dosed based on ideal body weight (IBW). Current guidelines do not offer recommendations for dosing in the setting of obesity. Dosing depends on clinician preference based on total body weight (TBW) or IBW. 
Objective: In this study we performed a non-inferiority analysis to compare the intubation conditions, duration of paralysis, and incidence of new-onset tachycardia or hypertension after intubation in obese patients requiring RSI in the emergency department (ED). Methods: This was a single-center, prospective, observational study. Eligible for enrollment were adult patients with a TBW at least 30% above IBW or a body mass index ≥30 kilograms per meter squared who presented to the ED requiring RSI with the use of rocuronium. Rocuronium was dosed according to the intubating physicians' preference. Physicians completed a survey assessing intubation conditions. The height and weight used for the calculation of the dose, the dose of rocuronium, the time of administration, and the time of muscle function recovery were recorded. Endpoints assessed included grading of view during laryngoscopy, first-pass success, and duration of paralysis. Results: In total, 96 patients were included, 54 in the TBW group and 42 in the IBW group. The TBW cohort received a mean of 1 milligram per kilogram (mg/kg) compared to 0.71 mg/kg in the IBW group. Excellent intubation conditions were observed in 68.5% in the TBW group and 73.8% in the IBW group. The non-inferiority analysis for the relative risk of excellent intubation was 1.12 (P = 0.12, [90% CI 0.80-1.50]). Conclusion: Non-inferiority analysis suggests that IBW dosing provides similar optimal intubation conditions when compared to TBW dosing, but the non-inferiority comparison did not reach statistical significance. This study was unable to show statistical non-inferiority for IBW dosing. Answer: The dosing of rocuronium in obese patients has been a subject of research, with studies exploring whether ideal body weight (IBW) or corrected body weight (CBW) should be used as the basis for dosing. One randomized study (PUBMED:19690247) found that in obese patients undergoing gastric banding or gastric bypass, rocuronium dosed according to IBW provided a shorter duration of action without a significantly prolonged onset time or compromised conditions for tracheal intubation. This suggests that IBW may be a suitable basis for dosing rocuronium in obese patients to avoid prolonged effects. Another study (PUBMED:26739976) compared rocuronium dosed by CBW versus lean body weight (LBW) for rapid sequence induction and found that dosing based on LBW provided excellent or good tracheal intubating conditions within 60 seconds after administration and did not lead to a prolonged duration of action. This indicates that LBW could also be a viable option for dosing rocuronium in obese patients. A further trial (PUBMED:29310829) concluded that ideal body weight can be used to calculate the sugammadex dose to reverse moderate neuromuscular blockade in morbidly obese patients, suggesting that IBW may be a reasonable dosing scalar for reversal agents as well. In contrast, a randomized clinical trial (PUBMED:33639839) found that actual body weight (ABW)-based sugammadex dosing yields faster reversal without re-curarization, supporting ABW-based sugammadex dosing in the morbidly obese, irrespective of the depth of neuromuscular block or neuromuscular blocking agent used. The pharmacodynamic effects of rocuronium when dosed according to real body weight or IBW in morbidly obese patients were investigated in an earlier randomized study
(PUBMED:15385355), which concluded that the duration of action of rocuronium is significantly prolonged when it is dosed according to real body weight and that the dosage should therefore be based on ideal rather than real body weight in clinical practice. In summary, the evidence suggests that dosing rocuronium based on IBW is generally preferred to avoid prolonged neuromuscular blockade in obese patients, although LBW and ABW may also be considered in certain contexts, particularly when planning reversal with sugammadex. Clinicians should carefully consider the patient's body composition and the specific clinical context when determining the dosing scalar for rocuronium.
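The dosing scalars compared in these trials differ only in the weight that is multiplied by the mg/kg dose. The sketch below illustrates that arithmetic; it is an illustrative aid rather than part of any cited study, and the Devine ideal-body-weight formula, the Janmahasatian lean-body-weight equation, and the example patient (165 cm, 110 kg, female) are assumptions chosen for the example.

    # Illustrative only: rocuronium dose under different dosing scalars.
    # The IBW (Devine) and LBW (Janmahasatian) equations are assumptions for this
    # sketch; the cited abstracts do not state which equations were used.

    def ibw_devine_kg(height_cm, male):
        # Devine: 50 kg (men) or 45.5 kg (women) + 2.3 kg per inch over 5 feet.
        inches_over_5ft = max(height_cm / 2.54 - 60.0, 0.0)
        return (50.0 if male else 45.5) + 2.3 * inches_over_5ft

    def cbw_kg(tbw_kg, ibw_kg, excess_fraction):
        # Corrected body weight: IBW plus a fraction of the excess weight,
        # e.g. excess_fraction = 0.4 for the "CBW40%" groups described above.
        return ibw_kg + excess_fraction * (tbw_kg - ibw_kg)

    def lbw_janmahasatian_kg(tbw_kg, height_cm, male):
        # Janmahasatian lean body weight equation (assumption for this sketch).
        bmi = tbw_kg / (height_cm / 100.0) ** 2
        return (9270.0 * tbw_kg / (6680.0 + 216.0 * bmi) if male
                else 9270.0 * tbw_kg / (8780.0 + 244.0 * bmi))

    # Hypothetical patient and the 0.6 mg/kg intubating dose used in several of the trials above.
    tbw, height, male, dose_per_kg = 110.0, 165.0, False, 0.6
    ibw = ibw_devine_kg(height, male)
    scalars = [("TBW", tbw), ("IBW", ibw),
               ("CBW20%", cbw_kg(tbw, ibw, 0.2)),
               ("CBW40%", cbw_kg(tbw, ibw, 0.4)),
               ("LBW", lbw_janmahasatian_kg(tbw, height, male))]
    for label, weight in scalars:
        print(f"{label:7s} {weight:5.1f} kg -> {dose_per_kg * weight:5.1f} mg rocuronium")

Read against the trial findings summarized above, the larger scalars mainly lengthen the duration of the block rather than improving intubating conditions, which is why IBW (or LBW) is generally favoured for the relaxant itself.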
Instruction: Does thyroid supplementation accelerate tricyclic antidepressant response? Abstracts: abstract_id: PUBMED:11578993 Does thyroid supplementation accelerate tricyclic antidepressant response? A review and meta-analysis of the literature. Objective: The delayed onset of therapeutic response to antidepressants remains a major problem in the treatment of depression. Among the strategies to accelerate response to treatment, the early addition of thyroid hormone to antidepressants has been suggested as a viable method. The authors performed a meta-analysis of the literature on the use of thyroid hormone supplementation to accelerate the treatment of depression to determine whether there is sufficient evidence to support the clinical efficacy of this strategy. Method: Both a computer-aided search of the National Library of Medicine MEDLINE and an intensive search by hand were conducted to identify all double-blind, placebo-controlled studies assessing the concomitant administration of thyroid hormone and antidepressant to accelerate clinical response in patients with nonrefractory depression. Results: Six studies were identified. All were conducted with triiodothyronine (T(3)) and a tricyclic antidepressant. Five of the six studies found T(3) to be significantly more effective than placebo in accelerating clinical response. The pooled, weighted effect size index was 0.58, and the average effect was highly significant. Further, the effects of T(3) acceleration were greater as the percentage of women participating in the study increased. Conclusions: This meta-analysis supports the efficacy of T(3) in accelerating clinical response to tricyclic antidepressants in patients with nonrefractory depression. Furthermore, women may be more likely than men to benefit from this intervention. abstract_id: PUBMED:2339180 The effect of tricyclic antidepressants on basal thyroid hormone levels in depressed patients. The effect of tricyclic antidepressant treatment on basal thyroid hormone levels was evaluated in 28 subjects with primary major depression. In the total group, tricyclic antidepressant treatment was associated with significant reductions in measures of thyroxine. Furthermore, responders to antidepressant treatment had significantly greater decrements in thyroxine and the free thyroxine index as compared with nonresponders. This finding appears consistent with the effects of a wide range of antidepressant treatments on thyroid function tests. abstract_id: PUBMED:6470494 Effect of tricyclic antidepressant drugs on lymphocyte membrane structure. Tricyclic antidepressant-induced perturbations of murine splenic lymphocyte membranes and cell surface concanavalin A receptor mobility have been investigated using the fluorescent probes diphenylhexatriene and fluorescein-conjugated concanavalin A. Results of these studies illustrate the possible relationship between tricyclic antidepressant-induced membrane perturbations and tricyclic antidepressant-induced suppression of the normal murine lymphocyte mitogen response. Tricyclic antidepressant effects on murine splenic lymphocyte membranes are dose-, time- and temperature- dependent. Murine lymphocyte concanavalin A cell surface receptor mobility is not apparently altered by the tricyclic antidepressants. abstract_id: PUBMED:401340 The role of plasma concentrations in the use of tricyclic antidepressant drugs. 1. 
The range of tricyclic antidepressant plasma levels (or doses) needed for therapeutic response remains largely unresolved, since quantal plasma concentration (or dose)--response relationships have not been clearly defined for either therapeutic or nontherapeutic effects. 2. The fact that certain patients apparently became more depressed at higher plasma levels must be balanced against the facts that "depression" is a mixture of disorders as yet poorly distinguishable and that tricyclic antidepressants have multiple pharmacologic effects. 3. There is presently no justification for routinely monitoring tricyclic antidepressant plasma levels, even though, as for any drug, such determinations are justifiable in patients who are unresponsive or show signs of toxicity. 4. Plasma level determinations can never replace sound clinical judgment and dosage adjustment for individual patients. abstract_id: PUBMED:30215150 Effect of providing drug utilization review information on tricyclic antidepressant prescription in the elderly. Tricyclic antidepressants are known as potentially inappropriate medications in the elderly. A notification issued in July 2015 in South Korea recommended caution while prescribing tricyclic antidepressants to the elderly. Further, since October 2015, the nationwide computerized drug utilization review monitoring system provides a pop-up window, on a real-time basis, whenever tricyclic antidepressants are prescribed to elderly outpatients. Therefore, we evaluated whether providing drug utilization review information was effective in reducing tricyclic antidepressant prescription to elderly outpatients. We used the Health Insurance Review and Assessment Service-Adult Patient Sample data from 2014 to 2016. Data related to the prescription of tricyclic antidepressants to outpatients aged 65 years or more were extracted. We determined the number of prescriptions per day per 100,000 elderly patients in each month, compared the average number of prescriptions before and after the drug utilization review information was provided, and evaluated the changes in the number of prescriptions by using an interrupted time series analysis. The average number of tricyclic antidepressant prescriptions per day per 100,000 elderly patients decreased from 76.6 (75.5 to 77.6) to 65.7 (64.5 to 66.9), a 14.2% reduction after the provision of drug utilization review information started. Following initiation of provision of drug utilization review information, there was an immediate drop of 9.2 tricyclic antidepressant prescriptions per day per 100,000 elderly patients, whereas there was no statistically significant change in trends. Providing the drug utilization review information on tricyclic antidepressant prescription for the elderly contributed to the reduction in tricyclic antidepressant prescriptions. abstract_id: PUBMED:7153524 Tricyclic antidepressant effects on the murine lymphocyte mitogen response. Tricyclic antidepressant binding sites have recently been detected on the membranes of murine splenic lymphocytes (1). It is reported here that the mitogenic response of murine lymphocytes is altered in the presence of tricyclic antidepressants at concentrations of 10(-5) M or greater. The time-dependent effects of these drugs, when added to the cultures at various times up to 24 hours subsequent to addition of a mitogen (either concanavalin A or lipopolysaccharide B), are also reported. 
abstract_id: PUBMED:35397333 Biomarkers as predictors of treatment response to tricyclic antidepressants in major depressive disorder: A systematic review. Tricyclic antidepressants (TCAs) are frequently prescribed in case of non-response to first-line antidepressants in Major Depressive Disorder (MDD). Treatment of MDD often entails a trial-and-error process of finding a suitable antidepressant and its appropriate dose. Nowadays, a shift is seen towards a more personalized treatment strategy in MDD to increase treatment efficacy. One of these strategies involves the use of biomarkers for the prediction of antidepressant treatment response. We aimed to summarize biomarkers for prediction of TCA specific (i.e. per agent, not for the TCA as a drug class) treatment response in unipolar nonpsychotic MDD. We performed a systematic search in PubMed and MEDLINE. After full-text screening, 36 papers were included. Seven genetic biomarkers were identified for nortriptyline treatment response. For desipramine, we identified two biomarkers; one genetic and one nongenetic. Three nongenetic biomarkers were identified for imipramine. None of these biomarkers were replicated. Quality assessment demonstrated that biomarker studies vary in endpoint definitions and frequently lack power calculations. None of the biomarkers can be confirmed as a predictor for TCA treatment response. Despite the necessity for TCA treatment optimization, biomarker studies reporting drug-specific results for TCAs are limited and adequate replication studies are lacking. Moreover, biomarker studies generally use small sample sizes. To move forward, larger cohorts, pooled data or biomarkers combined with other clinical characteristics should be used to improve predictive power. abstract_id: PUBMED:1245849 Measurement of tricyclic antidepressant levels in an outpatient clinic. Although definitive studies regarding the correlation between tricyclic antidepressant plasma levels and therapeutic effect are lacking, preliminary data suggest that measurement of tricyclic antidepressant plasma levels provides a rational approach to improve clinical management of the depressed patient. Data were collected to determine if the routine measurement of plasma tricyclic antidepressant levels was practical in a large clinic population, and to determine if such levels could improve patient care. Individual differences in drug metabolism and frequent unreliable ingestion of medication make the measurement of drug plasma levels the only sure means of determining if a patient is receiving a fair therapeutic trial on a particular tricyclic antidepressant. Plasma analysis revealed both the failure to ingest adequate amounts of medication as prescribed and also the abuse of medications. Although generalizations regarding individual variation in drug metabolism or generalizations concerning drug compliance do little to improve patient care, whenever such problems are met on an individual basis, many clinical management problems can be resolved. abstract_id: PUBMED:10357043 Relationship between antidepressant partial and nonresponse and subsequent response to antidepressant augmentation. Objective: To evaluate the relationship between the degree of antidepressant nonresponse and subsequent response to lithium and triiodothyronine (T3) augmentation. Method: This is a retrospective analysis of data combined from two previous controlled studies of lithium and triiodothyronine augmentation of tricyclic antidepressants. 
Results: There was no difference in the rate of augmentation response between partial responders and nonresponders to tricyclic antidepressant treatment. Conclusions: Augmentation response does not appear to be related to the degree of nonresponse to the preceding antidepressant trial. abstract_id: PUBMED:832448 Tricyclic antidepressant plasma levels and adverse effects after overdose. Forty patients ingesting tricyclic antidepressant (TAD) overdoses were studied as a pharmacologic model to determine whether total tricyclic antidepressant plasma levels correlated with major adverse effects and electrocardiographic findings. Maximum TAD plasma levels were higher in patients who died (p less than 0.025) or had cardiac arrest (p less than 0.02), needed respiratory support (p less than 0.005), were unconscious (p less than 0.02), had grand mal seizures (p less than 0.001), had a ventricular rate greater than or equal to 120/min (p less than 0.01), cardiac arrhythmia (p less than 0.05), a QRS duration greater than or equal to 100 msec (p less than 0.001), or bundle branch block (p less than 0.05). TAD plasma levels correlated with the dose ingested by history (N = 29, r = 0.58, p less than 0.001). Measurement of total TAD (free and protein-bound) appears to correlate well with biologic response. Answer: Yes, thyroid supplementation appears to accelerate tricyclic antidepressant response. A meta-analysis of the literature on the use of thyroid hormone supplementation to accelerate the treatment of depression found sufficient evidence to support the clinical efficacy of this strategy. The meta-analysis included double-blind, placebo-controlled studies assessing the concomitant administration of thyroid hormone and antidepressant to accelerate clinical response in patients with nonrefractory depression. All studies involved triiodothyronine (T3) and a tricyclic antidepressant. Five out of six studies found T3 to be significantly more effective than placebo in accelerating clinical response. The pooled, weighted effect size index was 0.58, and the average effect was highly significant. Additionally, the effects of T3 acceleration were greater as the percentage of women participating in the study increased, suggesting that women may be more likely than men to benefit from this intervention (PUBMED:11578993).
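The "pooled, weighted effect size index" cited above is simply a weighted average of the individual study effect sizes. The sketch below shows that calculation under stated assumptions: the six (effect size, sample size) pairs are hypothetical, since the abstract reports only the pooled value of 0.58, and sample-size weighting is used here for illustration because the exact weighting scheme is not described in the abstract; with inverse-variance weights the arithmetic is the same, only the weights change.

    # Illustrative only: pooling study-level effect sizes into one weighted average.
    # The study-level numbers below are hypothetical; PUBMED:11578993 reports
    # only the pooled value (0.58) and does not give the weighting scheme.

    studies = [  # (standardized effect size d, sample size n) per study
        (0.7, 20), (0.5, 30), (0.9, 15), (0.4, 25), (0.6, 40), (0.2, 35),
    ]

    pooled_d = sum(d * n for d, n in studies) / sum(n for _, n in studies)
    print(f"Pooled, sample-size-weighted effect size: {pooled_d:.2f}")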
Instruction: Carotid angioplasty and stenting: will periprocedural transcranial Doppler monitoring be important? Abstracts: abstract_id: PUBMED:8798140 Carotid angioplasty and stenting: will periprocedural transcranial Doppler monitoring be important? Purpose: To explore the value of transcranial Doppler (TCD) ultrasonography in the periprocedural monitoring of patients undergoing angioplasty procedures for stenosis of the internal carotid artery. Methods: Thirty-two patients were included in the study between April 1991 and September 1995 (6 females, 26 males; average age 66 years). All patients were interrogated before and after angioplasty by a standard TCD examination protocol. Intraprocedurally, TCD was used continuously to monitor cerebral blood flow and supply evidence of embolic particulates. Nineteen patients were treated by percutaneous transluminal angioplasty (PTA) alone; the other 13 underwent primary stent (PS) implantation. Results: High-intensity transient signals indicative of emboli appeared to be more frequent in the PTA group than in the PS cohort. Preoperative TCD identified 3 (9%) high-risk patients with incompetent collateral pathways through the circle of Willis. Intraoperatively, TCD detected two postdilation carotid occlusions, a sylvian embolism, and one case of arterial spasm. The preprocedural TCD in a patient with contralateral carotid occlusion showed good collateral circulation, providing reassurance during conversion to endarterectomy when an undeployed stent obstructed blood flow. Postoperatively, TCD confirmed restored intracerebral circulation and identified one hyperperfusion syndrome. Conclusions: TCD is a simple, relatively inexpensive examination that can preprocedurally identify carotid stenosis patients at high risk for intraoperative cerebral ischemia in whom PTA might be preferable to surgery. During the procedure, TCD can document the benefits of endovascular treatment and offer early detection of ischemic complications. abstract_id: PUBMED:10073757 Microemboli detected by transcranial Doppler monitoring in patients during carotid angioplasty versus carotid endarterectomy. Microemboli, as detected by transcranial Doppler monitoring, have been shown to be a potential cause of strokes after carotid endarterectomy. We retrospectively reviewed 105 patients who underwent transcranial Doppler monitoring during 112 procedures for the treatment of 115 carotid bifurcation stenoses: 40 by percutaneous angioplasty with stenting and 75 by carotid endarterectomy. In PTAS procedures (n = 40), there was a mean of 74.0 emboli per stenosis (range 0-398, P = 0.0001) with 4 neurologic events per patient (P = 0.08). In CEA procedures (n = 76), there was a mean of 8.8 emboli per stenosis (range 0-102, P = 0.0001) with 1 neurologic event per patient (P = 0.08). The post-procedural neurological events in the percutaneous angioplasty with stenting population included two strokes (5.6%) and two transient ischemic attacks (5.6%). Microemboli for each of these cases totalled 133, 17, 29 and 47 (with one shower), respectively. One postoperative carotid endarterectomy patient was noted to have a stroke (1.4%), with 48 microemboli noted during that procedure. The mean emboli rate for percutaneous angioplasty with stenting patients with neurological events was 59.0; without complications it was 85.1. The mean emboli rate for carotid endarterectomy patients without complications was 8.3.
Three percutaneous angioplasty with stenting patients had no emboli (7.5%), whereas 29 carotid endarterectomy patients had no emboli (38.7%). Conclusion: The percutaneous angioplasty with stenting procedure is associated with more than eight times the rate of microemboli seen during carotid endarterectomy when evaluated with transcranial Doppler monitoring. Larger patient groups are needed to determine if this greater embolization rate has an associated risk of higher morbidity or mortality. abstract_id: PUBMED:14533974 Transcranial Doppler monitoring in angioplasty and stenting of the carotid bifurcation. Purpose: To assess the impact of cerebral embolism and hemodynamic changes during the successive stages of carotid angioplasty and stenting (CAS) using transcranial Doppler (TCD) monitoring of the middle cerebral artery (MCA). Methods: In 297 patients (206 men; mean age 69.9+/-8.0 years), the association of various TCD emboli and velocity variables with procedure-related death and cerebral events (amaurosis fugax, transient ischemic attacks, and stroke) was evaluated. Baseline patient characteristics (age, sex, preoperative cerebral symptoms, and prior carotid endarterectomy) and their associations with procedure-related cerebral events were also assessed. A distinction was made between adverse events that occurred during CAS and those that happened within 7 days. Results: Of the 36 procedure-related retinal and cerebral events, 28 (78%) were encountered intraprocedurally; an additional 6 (2%) events occurred within 7 days after the procedure. Two (0.7%) patients died. At 1 week, the combined minor and major stroke and death rate was 3.7%. Adverse outcome was associated with >4 showers of microemboli at postdilation (odds ratio [OR] 3.2, 95% CI 1.3 to 7.8, p=0.03), particulate macroemboli (OR 9.1, 95% CI 5.1 to 16.1, p<0.001), massive air embolism from ruptured balloons (OR 11.3, 95% CI 7.6 to 16.6, p<0.001), and angioplasty-induced asystole with significant hypotension plus MCA blood flow reduction (OR 3.3, 95% CI 1.4 to 8.3, p=0.03). Of the patient characteristics, male gender (OR 10.5, 95% CI 1.4 to 75.8, p=0.02) and preoperative cerebral ischemia (OR 3.3, 95% CI 1.6 to 6.6, p=0.003) were also related to outcome. Conclusions: In CAS, TCD monitoring provides insight into the pathogenesis of procedure-related cerebral events. Microemboli during poststent dilation, particulate macroembolism, massive air embolism, and angioplasty-induced asystole are associated with adverse outcome, as are male gender and prior cerebral ischemia. abstract_id: PUBMED:7974580 Carotid angioplasty. Detection of embolic signals during and after the procedure. Background And Purpose: Carotid angioplasty may offer an effective treatment for carotid stenosis, but there has been concern about the incidence and clinical consequences of distal embolization. Transcranial Doppler monitoring in carotid endarterectomy has demonstrated embolic signals during this procedure. We used this technique in patients undergoing carotid angioplasty. Methods: Transcranial Doppler ultrasound was used to monitor for embolic signals in the ipsilateral middle cerebral artery before and during 10 technically successful carotid angioplasties and at various standardized times in the following month. Results: In the month before angioplasty asymptomatic embolic signals were detected in 3 of 10 patients. During angioplasty multiple embolic signals were detected immediately after balloon inflation in 9 of 10 subjects. 
A minor ipsilateral cerebral ischemic event occurred in 1 of these 9, but the other 8 were asymptomatic. Embolic signals were common immediately after the procedure and intra-arterial femoral catheter removal (8 of 10 subjects) but thereafter became less frequent and were present in 1 of 5 at 4 hours, 2 of 10 at 48 hours, 1 of 6 at 7 days, and 1 of 10 at 1 month. Conclusions: Embolization at the time of carotid angioplasty is very common but usually asymptomatic; monitoring by means of Doppler ultrasound will allow the effectiveness of measures to reduce this embolization to be studied. Late embolization occurs in a minority of patients and may account for the small but significant risk of delayed stroke. Doppler monitoring may allow identification of patients at risk and assessment of the effectiveness of prophylactic therapy. abstract_id: PUBMED:20645265 Cerebral haemodynamics during carotid angioplasty with flow interruption. The correlation between oximetry and transcranial Doppler ultrasonography. Aims: To determine the correlation between oximetry and transcranial Doppler ultrasonography (TDU) during and following carotid stent-angioplasty with flow interruption, and to evaluate the level of hypoperfusion and its recovery during the procedure. Patients And Methods: Middle cerebral artery flow velocities and transcutaneous oxygen saturation were recorded prospectively in 18 patients undergoing carotid stent-angioplasty. Monitoring was performed at baseline and at 1, 3, 5, 10 and 15 minutes after flow interruption and after flow restoration. Measurements, in absolute units and percentages of change, were stratified into mild, moderate and severe groups. The agreement between the two tests was studied. Results: Occlusion time: 8.2 +/- 2.7 minutes. Two patients (11.1%) presented cerebral hypoperfusion. Flow was re-established in two patients because the monitored values reached critical levels. Mean baseline values were 56.3 +/- 11.4 cm/s (TDU) and 67.6 +/- 7.1% (oximetry). The changes in absolute values and percentages of change between TDU and oximetry were evaluated, and the results showed agreement between them during occlusion (rho = 0.8-0.9; p < 0.05), with a weaker association after flow restoration (rho = 0.4-0.8; p < 0.05). In percentages of change there was very good agreement during occlusion (kappa = 0.8-1; p < 0.05). Agreement was good (kappa = 0.68; p < 0.05) at 1, 3 and 5 minutes after flow restoration. Conclusions: A significant correlation was found between the methods during the interruption of carotid flow, which means they can be used independently. Overall, 88.9% of patients did not cross the safety threshold for cerebral ischaemia and, given that the procedure can be carried out with brief interruptions, control by oximetry or TDU can be just as safe in evaluating cerebral ischaemia. abstract_id: PUBMED:9092224 Transcranial Doppler ultrasound monitoring during carotid surgery. We describe our method of transcranial Doppler (TCD) monitoring during carotid endarterectomy (CE) procedures. During a period of 35 months we performed 257 CE with TCD monitoring. Critical flow values during crossclamping of the internal carotid artery (ICA) and the necessity of insertion of an intraluminal shunt were taken into consideration with regard to the choice of surgical procedure: TEA (with or without patching) or eversion endarterectomy. The critical flow values were around 10-15 cm/s, and no pulsatility signs were detected.
Further advantages of TCD monitoring are the detection of microemboli, assessment of potential collateralisation via the external carotid artery, and verification of the efficacy and accurate positioning of the intraluminal shunt. We discuss our results of cerebral monitoring and consider it a useful tool for optimizing the postoperative results of carotid surgery. abstract_id: PUBMED:9106301 Transcranial Doppler Sonographic monitoring during percutaneous transluminal angioplasty of the internal carotid artery. Our purpose was to assess the haemodynamic changes in the ipsilateral middle cerebral artery (MCA) during and after percutaneous transluminal angioplasty (PTA) of the internal carotid artery (ICA), and to compare them with clinical and angiographic findings. Transcranial Doppler Sonographic monitoring (TCD) of the MCA was performed during PTA in 22 patients with symptomatic severe stenosis of the ICA. Mean blood flow velocity (MBFV) and pulsatility index (PI) were recorded. During PTA, MBFV fell from 41 +/- 15 cm/s to 23 +/- 11 cm/s (P = 0.0001). Changes in PI were inconsistent. With a reduction in MBFV of 50% or less (in 10 cases), no complications occurred. With a reduction of more than 50% (in 12), 6 patients developed neurological disturbances (transient ischaemic attacks in 5 and minor stroke in 1). This difference was significant (P = 0.0152). Symptomatic patients also had a higher rate of stroke prior to PTA (4/6) than patients who remained asymptomatic during PTA (0/6). After PTA had been performed, MBFV and PI improved significantly (P = 0.0001), MBFV increasing to 48 +/- 16 cm/s and PI from 0.64 +/- 0.11 to 0.86 +/- 0.15. TCD changes proved more sensitive to cerebral haemodynamics than angiography in 8 patients. abstract_id: PUBMED:20675330 Carotid stenting and transcranial Doppler monitoring: indications for carotid stenosis treatment. Background: Recently, angioplasty and stenting of the carotid arteries (CAS) have taken the place of surgery. The aim of our study is to assess the role of transcranial Doppler (TCD) monitoring during CAS to address the embolic complications arising during the stages of the procedure, with or without embolic cerebral protection devices. Methods: A total of 152 patients underwent carotid stenting. All patients underwent duplex scanning of the carotid arteries. Results: Neurological complications are related to TCD detection of particulate signals in rapid succession. Although no reduction in the overall incidence rate of microembolic signals (MES) was observed, a decrease in the number of particulate emboli was recorded when a cerebral protection device was in use. Conclusions: According to our study, even in patients selected on the basis of preoperative diagnostic criteria, CAS carries a nonnegligible risk of subclinical embolic ischemic events detected by TCD and confirmed by diffusion-weighted magnetic resonance imaging (DW-MRI). abstract_id: PUBMED:7632552 The role of intraoperative transcranial Doppler monitoring in carotid artery surgery. Of 135 carotid artery reconstructions performed under general anesthesia in 127 patients (mean age 68 years), 119 were performed with continuous intraoperative Doppler recording of the middle cerebral artery. This investigation was impossible in nine (6.7%) cases because of the absence of a visible temporal window, and results were deemed uninterpretable in six (4.5%) additional cases.
The goals of this study were to test the feasibility and reliability of transcranial Doppler monitoring in the evaluation of intracranial perfusion and to determine the risk of cerebral ischemia during carotid artery clamping. The two outcome parameters measured were mean velocity and percentage of decreased flow in the middle cerebral artery during clamping. Patients were divided into four groups based on variations in these parameters. Groups I and IIA (low risk) represented 69.7% of cases, group IIB (significant risk) represented 21.9%, and group III (major risk) represented 8.4%. Transcranial Doppler monitoring appears to be a reliable means of observing middle cerebral artery flow during carotid surgery and in our opinion provides objective criteria for determining the need for an indwelling shunt. Accordingly, in this study no neurologic complications imputable to clamping were observed. Transcranial Doppler monitoring can also be used to ensure correct functioning of the shunt and to detect intraoperative embolic complications. abstract_id: PUBMED:7913141 On-table diagnosis of incipient carotid artery thrombosis during carotid endarterectomy by transcranial Doppler scanning. We present a case where transcranial Doppler ultrasound monitoring of a carotid endarterectomy enabled us to detect the incipient thrombosis of the operated artery before reversal of anesthesia. The use of transcranial Doppler ultrasound monitoring in carotid endarterectomy has the potential to detect this complication before serious neurologic damage has occurred and therefore reduce the morbidity and mortality rates associated with the operation. Answer: Yes, periprocedural transcranial Doppler (TCD) monitoring appears to be important during carotid angioplasty and stenting. TCD is a valuable tool for identifying patients at high risk for intraoperative cerebral ischemia, as it can preprocedurally identify those with incompetent collateral pathways through the circle of Willis (PUBMED:8798140). Intraprocedurally, TCD can detect embolic events, such as postdilation carotid occlusions, sylvian embolism, arterial spasm, and can provide early detection of ischemic complications (PUBMED:8798140). Postoperatively, TCD can confirm restored intracerebral circulation and identify complications like hyperperfusion syndrome (PUBMED:8798140). TCD monitoring has shown that percutaneous angioplasty with stenting (PTAS) is associated with a higher rate of microemboli compared to carotid endarterectomy (CEA), which may suggest a higher risk of morbidity or mortality, although larger patient groups are needed to confirm this (PUBMED:10073757). Additionally, TCD monitoring during carotid angioplasty and stenting (CAS) provides insight into the pathogenesis of procedure-related cerebral events, with microemboli during poststent dilation, particulate macroembolism, massive air embolism, and angioplasty-induced asystole being associated with adverse outcomes (PUBMED:14533974). Furthermore, TCD monitoring has been used to detect embolic signals during and after carotid angioplasty, which is common but usually asymptomatic. This monitoring allows for the assessment of measures to reduce embolization and may help identify patients at risk for delayed stroke (PUBMED:7974580). TCD has also been shown to correlate with oximetry during carotid stent-angioplasty with flow interruption, suggesting that it can be used independently to evaluate cerebral ischemia (PUBMED:20645265). 
In summary, TCD monitoring during carotid angioplasty and stenting is important for assessing cerebral hemodynamics, detecting embolic events, and potentially reducing the risk of procedure-related complications. It provides real-time feedback that can guide clinical decisions and interventions to improve patient outcomes (PUBMED:8798140, PUBMED:10073757, PUBMED:14533974, PUBMED:7974580, PUBMED:20645265).
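Two of the hemodynamic warning signs described above are simple arithmetic on the monitored middle cerebral artery velocity: a fall of more than 50% from baseline during balloon inflation or clamping (PUBMED:9106301) and absolute velocities around the critical 10-15 cm/s range (PUBMED:9092224). The sketch below shows that calculation; the example velocities are hypothetical, and the cut-offs are taken from these abstracts rather than from a validated decision rule.

    # Illustrative only: percent-reduction and critical-velocity checks on
    # middle cerebral artery mean blood flow velocity (MBFV) during TCD monitoring.
    # Example velocities are hypothetical; thresholds follow the cited abstracts.

    def mca_flags(baseline_cm_s, occlusion_cm_s,
                  relative_cutoff=0.5, critical_cm_s=15.0):
        reduction = (baseline_cm_s - occlusion_cm_s) / baseline_cm_s
        return {
            "percent_reduction": round(100.0 * reduction, 1),
            "fall_greater_than_50pct": reduction > relative_cutoff,
            "below_critical_velocity": occlusion_cm_s < critical_cm_s,
        }

    print(mca_flags(41.0, 23.0))  # ~44% fall: neither warning threshold reached
    print(mca_flags(56.0, 12.0))  # ~79% fall and <15 cm/s: both flags raised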
Instruction: Does supported living in residential homes improve the quality of life and mental stability of older adults with chronic mental disorder? Abstracts: abstract_id: PUBMED:15703321 Does supported living in residential homes improve the quality of life and mental stability of older adults with chronic mental disorder? Objective: Supported living in residential homes for the elderly is an innovative, age-appropriate residential program for older adults with chronic mental disorders. The program involves 1) accommodation in ordinary "elder care" homes; and 2) provision of on-site mental health care by professionals from the local psychiatric hospital. The authors asked whether the program succeeds in improving the patients' quality of life without compromising their mental stability. Methods: Patients in 18 supported living programs (N=96) were compared with similar patients in eight psychiatric hospitals (N=78), in a cross-sectional study. Quality of life was measured with the Philadelphia Geriatric Center Morale Scale (PGCMS) and the Manchester Short Assessment of Quality of Life (MANSA). Mental stability was assessed in terms of rehospitalizations, adjustments in medication because of symptom exacerbation, and 6-month prevalence of psychotic symptoms. Results: After adjustment for patient characteristics, the supported-living participants experienced a significantly lower quality of life than the hospital patients, as indicated by two of the three PGCMS subscales and by the MANSA. Disparities were greatest in the subgroup of patients with psychotic disorders. No significant differences in mental stability were found between the two conditions. Eleven of the 96 supported-living participants had undergone temporary rehospitalization since their entry into the program. Conclusions: Supported living in residential homes for elderly persons is an attractive model, but it does not automatically guarantee the participants a better quality of life. Further research is needed to determine which program characteristics can improve results. abstract_id: PUBMED:17050088 The relationship between characteristics of supported housing and the quality of life of older adults with severe mental illness. This study examined whether group living (as opposed to single living), staff availability and degree of personal freedom are associated with the quality of life of older adults with severe mental illness. A cross-sectional study was carried out in 18 supported living programmes in residential homes for the elderly that differed in terms of these three characteristics. The study included 35 patients with a psychotic disorder and 38 with an anxiety or mood disorder. Quality of life was assessed with the Philadelphia Geriatric Centre Morale Scale (PGCMS) and the Manchester Short Assessment of Quality of Life (MANSA). No association was found between group living and quality of life. Availability of psychiatrically trained staff was associated with life quality only for patients with a psychotic disorder, and perceived amount of personal freedom was associated with life quality only for patients with a non-psychotic disorder. Both differences were seen only on the PGCMS Agitation subscale. Older people with psychotic disorders appear to have relatively high needs for professional psychiatric support, and those with non-psychotic disorders for control over their daily lives. Further research is needed in other settings for older people with severe mental illness, preferably using longitudinal designs. 
abstract_id: PUBMED:15660405 The role of stigma in the quality of life of older adults with severe mental illness. Background: Stigma and discrimination against older people with mental illness is a seriously neglected problem. Objectives: (1) To investigate whether stigmatisation of older adults with mental disorder is associated with the type of residential institution they live in or the type of disorder they suffer and (2) to assess the role of stigma experiences in their quality of life. Methods: A cross-sectional study was carried out of 131 older adults with severe mental illness, recruited in 18 elder care homes operating supported living programmes and in eight psychiatric hospitals throughout the Netherlands. Stigmatisation was assessed with an 11-item questionnaire on stigma experiences associated with mental illness. Quality of life was assessed with the Manchester Short Assessment of Quality of Life (MANSA). To better ascertain the role of stigma, we also assessed in comparison the relationship of social participation to quality of life. Results: Some 57% of the respondents had experienced stigmatisation. No association emerged between residential type or disorder type and the extent of stigma experiences. Stigmatisation did show a negative association with quality of life, a connection stronger than that between social participation and quality of life. Conclusion: A feeling of belonging, as contrasted with being excluded, is at least as important for the quality of life of older people with severe mental illness as their actual participation in the community. abstract_id: PUBMED:24818430 Quality of life and mental disorders of adolescents living in French residential group homes. Here, the quality of life (QoL) of adolescents living in residential group homes (RGHs), is compared to QoL of a general adolescent population, and links between QoL and the presence of mental disorders are examined. Adolescents living in RGHs reported a significantly lower perception of their overall QoL compared to the general adolescent population. The presence of mental disorders was significantly and negatively associated with QoL scores. Some indices of QoL (physical and psychological well-being, relationship with teachers) did not show differences with the general population, indicating that mental health needs or lack of wellbeing are expressed in unusual ways. abstract_id: PUBMED:15511744 Supported living in residential homes for the elderly: impact on patients and elder care workers. To enable older people with severe and persistent mental illness to live in the community, the Dutch mental health sector has developed a program for supported living in residential homes for the elderly. It provides for the permanent stationing of mental health workers (MHWs) in elder care facilities to support both the resident patients and the elder care staff. The authors examined associations between the number of MHW staff and the degree to which (1) patients were integrated into the community and (2) elder care workers had developed effective working alliances with their patients. Participants included 110 patients participating in 18 supported living programs in the Netherlands. Community integration was assessed in face-to-face interviews with the patients about their perceived influence over daily life, involvement in social activities, and social network size. 
The quality of the worker-patient relationship was assessed using the Dutch Working Alliance Questionnaire for Community Care, completed by the elder care worker primarily responsible for each patient. After differentiation of the MHW staff into medically trained and nurse-trained professionals, associations with outcome measures were found only for the nurse-trained staff. The more hours of nurse-trained staff capacity per patient, the more influence perceived by the patients, and the more directiveness shown by the elder care workers in their contacts with patients. The impact of supported living programs in residential homes for the elderly appears to be determined in part by the caseloads of the on-site MHWs. abstract_id: PUBMED:1771195 Quality of life in alternative residential settings. Central to policy revisions over the past forty years toward persons with psychiatric disabilities has been a change in where they live. Whereas forty years ago those patients needing assistance were generally housed in large public mental hospitals, today a myriad of alternative community housing settings are offered. A major impetus for this shift in housing, at least as currently articulated in most public forums, has been to improve their quality of life. Here we examine the quality of life experiences of psychiatrically disabled persons living in alternative settings: a state hospital, large residential care facilities, small group homes, and supervised apartments. Our central hypothesis, only partly supported, is that a quality of life gradient exists across these living settings. The results lend support to the value of quality of life assessments and point to the importance of more focused notions about how our various interventions may affect the persons whom we serve. abstract_id: PUBMED:32448927 Quality of life outcomes for people with serious mental illness living in supported accommodation: systematic review and meta-analysis. Purpose: To conduct a systematic review and meta-analysis of quality of life (QoL) outcomes for people with serious mental illness living in three types of supported accommodation. Methods: Studies were identified that described QoL outcomes for people with serious mental illness living in supported accommodation in six electronic databases. We applied a random-effects model to derive the meta-analytic results. Results: 13 studies from 7 countries were included, with 3276 participants receiving high support (457), supported housing (1576) and floating outreach (1243). QoL outcomes related to wellbeing, living conditions and social functioning were compared between different supported accommodation types. Living condition outcomes were better for people living in supported housing (effect size = -0.31; CI = [-0.47; -0.16]) and floating outreach (effect size = -0.95; CI = [-1.30; -0.61]) compared to high-support accommodation, with a medium effect size for living condition outcomes between supported housing and floating outreach (effect size = -0.40; CI = [-0.82; 0.03]), indicating that living conditions are better for people living in floating outreach.
The results suggest there is a need to focus on improving social functioning and wellbeing outcomes for people with serious mental illness across supported accommodation types. abstract_id: PUBMED:12919241 Integrating mental health care into residential homes for the elderly: an analysis of six Dutch programs for older people with severe and persistent mental illness. Integrating mental health care into residential homes for the elderly is a potentially effective model to address the complex care needs of older chronically mentally ill people. Because no research was available on the implementation of such integrated care in practice, six programs already operating in the Netherlands were analyzed. At the administrative level, three types of cooperative arrangements existed: a psychiatric hospital renting a unit in a residential home for the elderly, a psychiatric hospital stationing mental health professionals in a residential home on a permanent basis, and a residential home employing its own psychiatrically trained staff. At the operational level, contrasting views emerged on the relationship between physical and mental health care; these were delivered separately or in integrated form. In either case, the employees trained as elder care workers or as psychiatric nurses had difficulties understanding each other because they held different ideas about good-quality care. These care visions can be characterized as the care-giving approach (care workers) versus the problem-oriented and the rehabilitation approaches (nurses). At the housing level, two models existed: mentally ill patients having apartments in a separate unit (concentrated housing) or located throughout the facility (dispersed housing). The most promising model appears to be the one in which a psychiatric hospital assigns mental health professionals to work in a residential home, where they remain administratively and operationally distinct from the standard residential services. Whether or not the psychiatric residents should be housed in separate units could not be decided based on this study. abstract_id: PUBMED:7124978 Chronic mental patients: the quality of life issue. Quality of life issues must be addressed more vigorously in the care of chronic mental patients. In a survey of 30 large board-and-care homes in Los Angeles, 278 mentally disabled residents described their life conditions and satisfaction in eight areas: living situation, family relations, social relations, leisure activities, work, finances, safety, and health. Residents were less satisfied than the general population in most life areas, especially finances, unemployment, safety, and family and social relations (p < .001). Of particular concern was the finding that 34% had been recent victims of crime. The results underscore the need for better social programs for these patients. abstract_id: PUBMED:12474933 Residential environment and quality of life among seriously mentally ill residents of board and care homes. This study examined the relationship between residential environment of seriously mentally ill patients living in board and care homes and quality of life. Participants included 162 seriously mentally ill veteran patients living in 26 board and care homes in Los Angeles. Data from structured interviews were used to assess subjective quality of life (satisfaction with living situation and general well-being) and objective quality of life (social functioning and daily activities).
Independent variables examined in multivariate analyses included individual socio-demographic and clinical characteristics, objective characteristics of the home, and subjective resident assessments of social climate within the home. Adjusting for individual characteristics, social climate was significantly and positively associated with both satisfaction with current living situation and with general well-being. Interpersonal conflict was negatively associated with general well-being. Number of beds within the home and median income in the neighborhood were significantly associated (positively and negatively, respectively) with social functioning outside the home. Living environment characteristics explained between 3 and 9% of the total variation in three of four quality of life measures, and 27% of the variation in the fourth, satisfaction with living situation. Satisfaction with living situation among seriously mentally ill residents of board and care homes may be enhanced by making the social climate more positive, and reducing conflict within the home. Social functioning outside of the home may be enhanced by placing patients into a home with more beds, and/or a home located in a lower income neighborhood. Answer: Supported living in residential homes for older adults with chronic mental disorders is an innovative approach that combines accommodation in ordinary elder care homes with on-site mental health care by professionals from local psychiatric hospitals. However, the evidence on whether this model improves the quality of life and mental stability of its participants is mixed. One study found that participants in supported living programs experienced a significantly lower quality of life than similar patients in psychiatric hospitals, as measured by the Philadelphia Geriatric Center Morale Scale (PGCMS) and the Manchester Short Assessment of Quality of Life (MANSA). There were no significant differences in mental stability between the two groups, as assessed by rehospitalizations, adjustments in medication due to symptom exacerbation, and the prevalence of psychotic symptoms (PUBMED:15703321). Another study indicated that the availability of psychiatrically trained staff was associated with life quality only for patients with a psychotic disorder, and perceived personal freedom was associated with life quality only for patients with a non-psychotic disorder (PUBMED:17050088). Stigma experiences were found to have a negative association with quality of life, suggesting that a sense of belonging is crucial for the quality of life of older people with severe mental illness (PUBMED:15660405). In contrast, a systematic review and meta-analysis found that satisfaction with living conditions differed across supported accommodation types, with better outcomes for people living in supported housing and floating outreach compared to high-support accommodation. However, social functioning and wellbeing outcomes were not significantly different across the three types of supported accommodation (PUBMED:32448927). The impact of supported living programs appears to be partly determined by the caseloads of on-site mental health workers, with nurse-trained staff being associated with better outcomes in terms of community integration and working alliances between elder care workers and patients (PUBMED:15511744). In summary, supported living in residential homes does not automatically guarantee a better quality of life for older adults with chronic mental disorders. 
The effectiveness of such programs may depend on various factors, including the availability of trained staff, the level of personal freedom afforded to residents, and the social climate within the residential setting. Further research is needed to determine which program characteristics can improve results and how to best address the quality of life and mental stability of this population (PUBMED:15703321; PUBMED:17050088; PUBMED:15660405; PUBMED:32448927; PUBMED:15511744).
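The pooled effect sizes quoted above from the supported-accommodation review (PUBMED:32448927) were derived with a random-effects model. Below is a minimal sketch of how such pooling is usually done, assuming a DerSimonian-Laird estimator and using made-up per-study values rather than the review's data; it is illustrative only and does not reproduce the review's analysis.

```python
# Illustrative only: DerSimonian-Laird random-effects pooling of standardized
# mean differences (SMDs). Per-study values below are hypothetical.
import numpy as np

smd = np.array([-0.45, -0.20, -0.35])   # hypothetical per-study SMDs
se = np.array([0.12, 0.15, 0.10])       # hypothetical standard errors

w = 1.0 / se**2                          # fixed-effect (inverse-variance) weights
mu_fixed = np.sum(w * smd) / np.sum(w)

# Between-study variance (tau^2), DerSimonian-Laird estimator
q = np.sum(w * (smd - mu_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(smd) - 1)) / c)

w_re = 1.0 / (se**2 + tau2)              # random-effects weights
mu_re = np.sum(w_re * smd) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (mu_re - 1.96 * se_re, mu_re + 1.96 * se_re)
print(f"pooled SMD = {mu_re:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The between-study variance tau^2 inflates each study's variance, so the pooled interval is wider than a fixed-effect one; the sign and confidence interval of the pooled estimate are read the same way as the effect sizes reported in the abstract.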
Instruction: Is quality of life different for men with erectile dysfunction and prostate cancer compared to men with erectile dysfunction due to other causes? Abstracts: abstract_id: PUBMED:12629383 Is quality of life different for men with erectile dysfunction and prostate cancer compared to men with erectile dysfunction due to other causes? Results from the ExCEED data base. Purpose: To our knowledge the relationship between the underlying etiology of erectile dysfunction and its impact on health related quality of life has not been studied. Such a study is important for men with prostate cancer, as the potential negative quality of life impact of erectile dysfunction may affect clinical decision making in newly diagnosed disease. We compare health related quality of life in impotent men with prostate cancer to that of impotent men without prostate cancer using the Exploratory and Comprehensive Evaluation of Erectile Dysfunction (ExCEED, TAP Pharmaceutical Products, Inc., Lake Forest, Illinois) data base, which is a multicenter, observational disease registry of men with erectile dysfunction. Materials And Methods: The cohort included 168 men in ExCEED who had baseline health related quality of life measurement. Of these men 47 reported a history of prostate cancer while 121 did not. Appropriate univariate and multivariate analyses were performed comparing health related quality of life outcomes between impotent men with and without prostate cancer. Results: Men with erectile dysfunction and prostate cancer had worse sexual self-efficacy, erectile function, intercourse satisfaction and orgasmic function than those with erectile dysfunction without prostate cancer (all p <0.001). However, men with erectile dysfunction and prostate cancer experienced less psychological impact of erectile dysfunction on sexual experience (p = 0.05) and emotional life (p = 0.03) than those with erectile dysfunction without prostate cancer. The findings regarding the psychological impact of erectile dysfunction persisted in multivariate linear regression models. Conclusions: Men with erectile dysfunction and prostate cancer appear to have better disease specific health related quality of life than those with erectile dysfunction and no history of prostate cancer. This finding has important ramifications for clinicians when counseling patients newly diagnosed with prostate cancer and also when treating patients who present with erectile dysfunction of various etiologies. abstract_id: PUBMED:26853048 Health-Related Quality of Life, Psychological Distress, and Sexual Changes Following Prostate Cancer: A Comparison of Gay and Bisexual Men with Heterosexual Men. Introduction: Decrements in health-related quality of life (HRQOL) and sexual difficulties are a recognized consequence of prostate cancer (PCa) treatment. However little is known about the experience of gay and bisexual (GB) men. Aim: HRQOL and psychosexual predictors of HRQOL were examined in GB and heterosexual men with PCa to inform targeted health information and support. Method: One hundred twenty-four GB and 225 heterosexual men with PCa completed a range of validated psychosexual instruments. Main Outcome Measure: Functional Assessment of Cancer Therapy-Prostate (FACT-P) was used to measure HRQOL, with validated psychosexual measures, and demographic and treatment variables used as predictors. 
Results: GB men were significantly younger (64.25 years) than heterosexual men (71.54 years), less likely to be in an ongoing relationship, and more likely to have casual sexual partners. Compared with age-matched population norms, participants in both groups reported significantly lower sexual functioning and HRQOL, increased psychological distress, disruptions to dyadic sexual communication, and lower masculine self-esteem, sexual confidence, and sexual intimacy. In comparison with heterosexual men, GB men reported significantly lower HRQOL (P = .046), masculine self-esteem (P < .001), and satisfaction with treatment (P = .013); higher psychological distress (P = .005), cancer-related distress (P < .001) and ejaculatory concern (P < .001); and higher sexual functioning (P < .001) and sexual confidence (P = .001). In regression analysis, psychological distress, cancer-related distress, masculine self-esteem, and satisfaction with treatment were predictors of HRQOL for GB men (adjusted R² = .804); psychological distress and sexual confidence were predictors for heterosexual men (adjusted R² = .690). Conclusion: These findings confirm differences between GB and heterosexual men in the impact of PCa on HRQOL across a range of domains, suggesting there is a need for GB targeted PCa information and support, to address the concerns of this "hidden population" in PCa care. abstract_id: PUBMED:15900977 Individual quality of life following radical prostatectomy in men with prostate cancer. Purpose: The aim of this study was to examine the individual quality of life (QoL) of men following radical prostatectomy for prostate cancer. The following research questions were addressed: (a) What are the most important areas of quality of life for men following radical prostatectomy? (b) How do these men rate their satisfaction in each area and what is the relative importance of each area to their overall quality of life? Methods: The purposive sample consisted of 11 men with prostate cancer who had undergone a radical prostatectomy 3 to 4 months earlier. QoL was examined using the SEIQoL-DW (Schedule for the Evaluation of Individual QoL: A Direct Weighting Procedure). The data were analyzed by means of qualitative content analysis (five most important QoL areas). Findings: The 11 respondents named a total of 55 QoL areas which they described and labelled. They then rated their current satisfaction in each area, and how important each one was to them. A second analysis of the content was made to identify the main QoL areas. The 55 quality of life areas mentioned by respondents were reduced to the following categories: health, activity, family, relationship with a partner, autonomy, independence, hobby, financial security, and sexuality. Health, family, and relationship with a partner are the three areas which had the most impact on QoL. Overall, the respondents reported a high quality of life. Impotence and incontinence did not appear to have a very negative impact on quality of life. Conclusions: SEIQoL-DW was used for the first time in patients with prostate cancer. In a urology department where nurses and patients are confronted daily with the topics of intimacy, sexuality, and sense of embarrassment, more importance should be placed on the topic of sexuality when taking a patient history. Nurses should be trained in communication techniques that enable them to engage patients in a safe and therapeutic dialogue about their sexual concerns related to the diagnosis of prostate cancer.
SEIQoL-DW can support the communication with patients. abstract_id: PUBMED:16225343 Quality of life among men treated with radiation therapy for prostate cancer. Prostate cancer continues to affect an increasing number of men in the United States. When diagnosed, these men may not have access to information that differentiates the long-term outcomes of one method of treatment from another. This study evaluated the impact of external beam radiation on several domains of quality of life, including physical, social/family, emotional, and functional well-being domains. abstract_id: PUBMED:10814950 Quality of life in men with urinary incontinence after prostate cancer surgery. Quality of life assessment is significant to health care providers because it helps us understand the experience of well-being as it relates to an illness and its severity, symptoms, and co-morbidities. Attempting to deduce the influence of illness on quality of one's life is complex; however, this area of research has demonstrated that the measurement of quality of life is as important in providing comprehensive care as the treatment itself. Prostate cancer is the most prevalent cancer in American men. Radical prostatectomy is frequently considered the treatment of choice for localized prostate cancer. Despite its widespread use, considerable morbidity exists, including erectile dysfunction and urinary incontinence. Although not all men who undergo radical prostatectomy will experience urinary incontinence, those who do find that it influences their daily lives, affecting the clothes they wear, their activities, sleep patterns, social relationships, and self-esteem. Based on the compelling nature of this problem, this article will focus on the effects that urinary incontinence has on the quality of life in men who undergo surgical treatment for prostate cancer. abstract_id: PUBMED:26814146 Core principles of sexual health treatments in cancer for men. Purpose Of Review: The considerable prevalence of sexual health problems in men after cancer treatment coupled with the severity of impact and challenges to successful intervention make sexual dysfunction one of the most substantial health-related quality of life burdens in all of cancer survivorship. Surgeries, radiation therapies, and nontreatment (e.g., active surveillance) variously result in physical disfigurement, pain, and disruptions in physiological, psychological, and relational functioning. Although biomedical and psychological interventions have independently shown benefit, long-term, effective treatment for sexual dysfunction remains elusive. Recent Findings: Recognizing the complex nature of men's sexual health in an oncology setting, there is a trend toward the adoption of a biopsychosocial orientation that emphasizes the active participation of the partner, and a broad-spectrum medical, psychological, and social approach. Intervention research to date provides good insight into the potential active ingredients of successful sexual rehabilitation programming. Summary: Combining a biopsychosocial approach with these active intervention elements forecasts an optimistic future for men's sexual rehabilitation programming within oncology. However, significant gaps remain in our understanding of patient experience and appropriate sexual health intervention for gay men and men of diverse race and culture. abstract_id: PUBMED:9541372 Health-related quality of life in men with erectile dysfunction. 
Objective: To assess health-related quality of life (HRQOL) in men with erectile dysfunction. Design: Descriptive survey with general and disease-specific measures. The instrument contained three established, validated HRQOL measures, a validated comorbidity checklist, and sociodemographics. The RAND 36-Item Health Survey 1.0 (SF-36) was used to assess general HRQOL. Sexual function and sexual bother were assessed using the UCLA Prostate Cancer Index. The marital interaction scale from the Cancer Rehabilitation Evaluation System Short Form (CARES-SF) was used to assess each patient's relationship with his sexual partner. Setting: Urology clinics at a university medical center and the affiliated Veterans Affairs (VA) Medical Center. Participants: Thirty-five (67%) of 54 consecutive university patients presenting for erectile dysfunction and 22 (42%) of 52 VA patients who were awaiting a previously prescribed vacuum erection device participated. Main Results: The university respondents scored slightly lower than population normals in social function, role limitations due to emotional problems, and emotional well-being. The VA respondents scored lower than expected in all eight domains. Scores for the VA population were significantly lower than those for the university population in physical function, role limitations due to physical problems, bodily pain, and social function. A significant correlation was seen between marital interaction and sexual function (r = -.33, p = .01) but not between marital interaction and sexual bother (r = -.15, p = .26) in the total sample. Sexual function also correlated significantly with general health perceptions (r = .34, p = .01), role limitations due to physical problems (r = .29, p = .03), and role limitations due to emotional problems (r = .30, p = .03). Sexual bother did not correlate with any of the general HRQOL domains. Affluent men reported better sexual function (p = .03). Conclusions: The emotional domains of the SF-36 are associated with more profound impairment than are the physical domains in men with erectile dysfunction. Erectile dysfunction and the bother it causes are discrete domains of HRQOL and distinct from each other in these patients. With increased attention to patient-centered medical outcomes, greater emphasis has been placed on such variables as HRQOL. This should be particularly true for a patient-driven symptom, such as erectile dysfunction. abstract_id: PUBMED:23271780 Quality of life in men undergoing active surveillance for localized prostate cancer. Active surveillance is an important arrow in the quiver of physicians advising men with prostate cancer. Quality-of-life considerations are paramount for patient-centered decision making. Although the overall deleterious impact on health is less dramatic than for those who pursue curative treatment, men on active surveillance also suffer sexual dysfunction and distress. Five-year outcomes revealed more erectile dysfunction (80% vs 45%) and urinary leakage (49% vs 21%) but less urinary obstruction (28% vs 44%) in men undergoing prostatectomy. Bowel function, anxiety, depression, well-being, and overall health-related quality of life (HRQOL) were similar after 5 years, but at 6-8 years, other domains of HRQOL, such as anxiety and depression, deteriorated significantly for those who chose watchful waiting. 
Further research is needed to compare prospectively HRQOL outcomes in men choosing active surveillance and those never diagnosed with prostate cancer, in part to help weigh the potential benefits and harms of prostate cancer screening. abstract_id: PUBMED:17355370 Economic conditions and marriage quality of men with prostate cancer. Objective: To explore the predictors of the quality of marriage of men with prostate cancer, as being diagnosed with prostate cancer affects the quality of life of the man and his partner, and while some aspects are known about the impact of the disease and its treatments on the man's quality of life, less is known about the marriage quality (MQ) in this new situation. Patients And Methods: We followed 591 men from Stockholm County (Sweden) who had been diagnosed with prostate cancer in 1999, and who were 50-80-years old and alive on 1 October 2002. The men completed a questionnaire asking about their MQ, and several other sociodemographic, medical and economic characteristics. Results: Of 426 men who provided information and who had a spouse or partner, 168 (39.4%) reported having a lower MQ due to their disease. Increased expenditure (46.2% vs 30.9%; relative risk, 1.5; 95% confidence interval, 1.1-2.0) and decreased income (55.4% vs 36.5%; 1.5, 1.1-2.0) as a consequence of prostate cancer reduced their MQ. Patients who had erectile dysfunction had a lower MQ (46.3% vs 11.8%; 3.9; 2.0-7.6). There was also a lower MQ in men who were depressed or had urinary leakage as a consequence of prostate cancer. Younger men (50-69 years old) with prostate cancer had a lower MQ than older men (70-80 years; 51.9% vs 33.1%; 1.6; 1.2-2.0). Conclusions: Men whose economic situation is worsened by prostate cancer reported having a reduced MQ. There was also such an effect for men with erectile dysfunction, urinary leakage and depression, and among men diagnosed with prostate cancer when young. abstract_id: PUBMED:12206843 Quality of life and sexuality of men with prostate cancer 3 years after cryosurgery. The current study was designed to describe the long-term life quality and sexuality of men enrolled in a phase 2 clinical trial of cryosurgery for the treatment of localized prostate cancer. A total of 75 men were administered the Functional Assessment of Cancer Treatment-Prostate (FACT-P) before treatment and after treatment at 6 weeks, and at 3, 6, 12, 24, and 36 months. Additionally, these men completed a Sexuality Follow-Up Questionnaire (SFQ) 3 years after cryosurgery. By 12 months after cryosurgery, most FACT-P subscales had returned to pretreatment levels. Quality of life remained stable over the subsequent 2 years. The only exception to this general trend was persistent impairment in measures of social/family well-being. At 36 months, 13% (5 of 38) of patients had regained erectile functioning, and an additional 34% (13 of 38) of patients were sexually active with the help of aids. The 3-year quality-of-life outcomes support the renewed interest in cryosurgery. No late complications were observed. Whereas improvements in erectile function were observed between years 1 and 3 for some patients, most continue to experience erectile dysfunction. For these patients, aids are an important adjunct to the treatment of their erectile dysfunction. Answer: Yes, quality of life is different for men with erectile dysfunction (ED) and prostate cancer compared to men with ED due to other causes. 
According to a study comparing health-related quality of life (HRQOL) in impotent men with prostate cancer to that of impotent men without prostate cancer using the ExCEED database, men with ED and prostate cancer had worse sexual self-efficacy, erectile function, intercourse satisfaction, and orgasmic function than those with ED without prostate cancer. However, they experienced less psychological impact of ED on their sexual experience and emotional life. These findings persisted in multivariate linear regression models, suggesting that men with ED and prostate cancer have better disease-specific HRQOL than those with ED and no history of prostate cancer (PUBMED:12629383). This finding has important implications for clinicians when counseling patients newly diagnosed with prostate cancer and also when treating patients who present with ED of various etiologies. It indicates that the underlying cause of ED can influence the quality of life and the psychological impact of the condition on the individual.
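The ExCEED comparison above (PUBMED:12629383) rests on multivariate regression: the group difference in the psychological impact of ED persisted after adjusting for other characteristics. The sketch below shows a covariate-adjusted model of that general kind; the data frame, values, and variable names are hypothetical and are not taken from ExCEED.

```python
# Illustrative only: covariate-adjusted linear model in the spirit of the
# "multivariate linear regression" described in PUBMED:12629383. The data
# frame and variable names are hypothetical, not the ExCEED data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "psych_impact": [42, 55, 38, 61, 47, 52, 45, 58],  # made-up impact scores
    "prostate_ca":  [1, 0, 1, 0, 1, 0, 1, 0],          # 1 = history of prostate cancer
    "age":          [68, 62, 71, 59, 66, 64, 70, 61],
    "erectile_fn":  [8, 12, 6, 14, 9, 11, 7, 13],      # made-up erectile function scores
})

model = smf.ols("psych_impact ~ prostate_ca + age + erectile_fn", data=df).fit()
print(model.params)  # coefficient on prostate_ca = adjusted group difference
```

The coefficient on the group indicator (here prostate_ca) is the adjusted between-group difference; a logistic variant of the same formula would be used for a binary outcome such as satisfaction versus dissatisfaction.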
Instruction: Does housing chronically homeless adults lead to social integration? Abstracts: abstract_id: PUBMED:22549528 Does housing chronically homeless adults lead to social integration? Objective: Supported housing programs have been successful in helping homeless adults obtain housing. This study examined whether improvements in social integration occur after clients obtain supported housing. Methods: Measures of social integration were examined for 550 chronically homeless adults with mental illness who participated in the 11-site Collaborative Initiative to Help End Chronic Homelessness. Social integration was conceptualized as a multidimensional construct of variables in six domains: housing, work, social support, community participation, civic activity, and religious faith. Changes in baseline measures related to the six domains and their interrelationships were examined at six and 12 months after entry into the supported housing program. Results: Chronically homeless adults showed substantial improvements in housing but remained socially isolated and showed limited improvement in other domains of social integration, which were only weakly correlated with one another. Conclusions: More attention is needed to develop rehabilitation interventions in supported housing programs to improve social integration of chronically homeless adults. Because improvements in some domains of social integration were only weakly related, it may be necessary to intervene in multiple domains simultaneously. abstract_id: PUBMED:34561888 "Halfway Independent": Experiences of formerly homeless adults living in permanent supportive housing. Permanent supportive housing (PSH), which combines affordable public housing with social services, has become the dominant model in the United States for providing housing to formerly homeless people. PSH has been effective in reducing re-entry to homelessness, yet has shown limited evidence of improving formerly homeless individuals' mental health and quality of life. This study aimed to understand the lived experiences of formerly homeless adults' adjustment to tenancy in PSH, with a focus on how living in PSH has affected their meaningful activity and social engagement. Based on a phenomenological approach, a thematic analysis was conducted using semi-structured interviews with 17 individuals living in three PSH buildings in New York City. Results suggested that PSH was beneficial in fulfilling formerly homeless individual's basic needs and facilitating lifestyle improvements, yet many were dissatisfied with their living conditions and lacked meaningful activity, social integration, and community belongingness. These issues were found to develop in large part as a result of formerly homeless individuals' disharmonious relationships within the social context of PSH, consisting of staff members, other residents, and people in the surrounding community. The effects of the COVID-19 pandemic and implications for PSH social services are discussed. abstract_id: PUBMED:33340381 Beyond housing: Understanding community integration among homeless-experienced veteran families in the United States. Community integration is important to address among homeless-experienced individuals. Little is known about helping veteran families (families with a parent who is a veteran) integrate into the community after homelessness. We sought to understand the experiences of community integration among homeless-experienced veteran families. We used a two-stage, community-partnered approach. 
First, we analysed 16 interviews with homeless-experienced veteran parents (parents who served in the military; n = 9) living in permanent housing and providers of homeless services (n = 7), conducted from February to September 2016, for themes of community integration. Second, we developed a workgroup of nine homeless-experienced veteran parents living in a permanent housing facility, who met four times from December 2016 to July 2017 to further understand community integration. We audio-recorded, transcribed and analysed the interviews and workgroups for community integration themes. For the analysis, we developed community integration categories based on interactions outside of the household and built a codebook describing each topic. We used the codebook to code the individual interviews and parent workgroup sessions after concluding that the workgroup and interview topics were consistent. Findings were shared with the workgroup. We describe our findings across three stages of community integration: (a) first housed, (b) adjusting to housing and the community, and (c) housing maintenance and community integration. We found that parents tended to isolate after transitioning into permanent housing. After this, families encountered new challenges and were guarded about losing housing. One facilitator to community integration was connecting through children to other parents and community institutions (e.g. schools). Although parents felt safe around other veterans, many felt judged by non-veterans. Parents and providers reported a need for resources and advocacy after obtaining housing. We share implications for improving community integration among homeless-experienced veteran families, including providing resources after obtaining housing, involving schools in facilitating social connections, and combating stigma. abstract_id: PUBMED:31097899 Life Goals and Gender Differences among Chronically Homeless Individuals Entering Permanent Supportive Housing. This research seeks to understand goals and the gender differences in goals among men and women who are transitioning into permanent supportive housing. Because of systemic gender inequality, men and women experience homelessness differently. Data collected for this study come from a longitudinal investigation of HIV risk behavior and social networks among women and men transitioning from homelessness to permanent supportive housing. As part of this study, 421 baseline interviews were conducted in English with homeless adults scheduled to move into permanent supportive housing; participants were recruited between September 2014 and October 2015. This paper uses goals data from the 418 male-or female-identified respondents in this study. Results identified goal differences in education and general health between men and women that should be taken into account when service providers, policy makers, and advocates are addressing the needs of homeless women. abstract_id: PUBMED:29981071 The elusive goal of social integration: A critical examination of the socio-economic and psychosocial consequences experienced by homeless young people who obtain housing. Objectives: The objective of this study was to provide an insider perspective on the experiences of nine formerly homeless young people as they transitioned into independent (market rent) housing and attempted to achieve meaningful social integration. 
Methods: The study was conducted in Toronto, Canada, and guided by the conceptual framework developed for the World Health Organization by the Commission on Social Determinants of Health. A critical ethnographic methodology was used. Over the course of 10 months, the lead author met every other week with nine formerly homeless young people who had moved into their own homes within 30 days prior to study recruitment. Results: Unaffordable housing, limited education, inadequate employment opportunities, poverty-level income, and limited social capital made it remarkably challenging for the young people to move forward. As the study progressed, the participants' ability to formulate long-range plans was impeded as they were forced to focus on day-to-day existence. Over time, living in a perpetual state of poverty led to feelings of "outsiderness," viewing life as a game of chance, and isolation. Conclusion: Rather than a secure, linear path from the streets to the mainstream, study participants were forced to take a precarious path full of structural gaps that left them stuck, spinning, and exhausted by the day-to-day struggle to meet basic needs. Despite their remarkable agency, it was almost impossible for the participants to achieve meaningful social integration given the structural inequities inherent in society. These observations have implications for practice, policy, and research. abstract_id: PUBMED:32779795 Finding home: Community integration experiences of formerly homeless women with problematic substance use in Housing First. Aims: This study explored community integration among women participating in a Housing First program. Physical, social, and psychological dimensions of community integration were examined. Methods: This study used neighborhood walk-along and photo-elicitation interviews to explore 16 formerly homeless women's experiences of community integration. Results: Participants described limited community integration. Health, poverty, service inaccessibility, and safety concerns shaped how they took part in activities in their neighborhoods. Participants primarily socialized with people in their buildings, though some preferred to keep to themselves. There was minimal sense of neighborhood belonging, with participants not interested in belonging to a community and being judged by others. Conclusion: Housing First promoted housing stability but did not contribute to community integration. Participants did not express a strong desire to integrate in their communities. Future research should consider the extent to which community integration remains a priority for marginalized populations, such as formerly homeless women. abstract_id: PUBMED:23804657 Impact of a housing first program on health utilization outcomes among chronically homeless persons. The authors examined the impact of a Housing First program on the use of specific health services, detoxification services, and criminal activity of long-term homeless individuals. The study sample consisted of eligible members of the inception cohort (18 enrollees) in the Single Adults Residential Assistance program (SARA) in Minneapolis, Minnesota. Analyses examined participant housing stability after enrollment in SARA and compared the use of a county medical center, detoxification programs, and criminal activity in the 2 years before and after enrollment in SARA. Only 1 of the 18 enrollees studied experienced homelessness during the 2-year follow-up after enrollment in SARA. 
There was a significant reduction in the amount of criminal activity in the 2-year period after SARA enrollment. The directions of association observed for other service uses remained consistent with expectations in the existing literature but were not statistically significant. Supportive housing for chronically homeless individuals may be successful at decreasing homelessness among this fragile population and may help reduce criminal activity. abstract_id: PUBMED:27422121 Individual, Housing, and Neighborhood Predictors of Psychological Integration Among Vulnerably Housed and Homeless Individuals. The current longitudinal study evaluated the individual, housing, and neighborhood characteristics predictive of feeling psychologically integrated within one's neighborhood among a population of homeless and vulnerably housed individuals. Participants were recruited at homeless shelters, meal programs, and rooming houses in Ottawa, Canada, and participated in three in-person interviews, each approximately 1 year apart. Prospective and cross-sectional predictors of psychological integration at Follow-up 1 and Follow-up 2 were examined. There were 397 participants at baseline, 341 at Follow-up 1 and 320 at Follow-up 2. A hierarchical multiple regression uncovered several significant predictors of psychological integration. The most salient and common predictors were being older, having greater social support, living in high quality housing, and residing in a neighborhood with a positive impact. Implications for service provision and policy advancements are discussed. abstract_id: PUBMED:33816749 The intersection of housing and mental well-being: Examining the needs of formerly homeless young adults transitioning to stable housing. We examine the challenges formerly homeless young adults (FHYAs) face after they transition out of homelessness. Considering the adversities FHYAs face, it is unclear how transitioning to stable housing may affect their mental well-being or what types of stressors they may experience once housed. This study investigates the social environment young adults encounter in their transition to stable housing and examines trauma and social coping predictors of mental health symptoms in a sample of FHYAs to generate new knowledge for better intervening to meet their needs. Data were obtained from REALYST, a national research collaborative comprised of interdisciplinary researchers investigating young adults' (ages 18-26) experiences with homelessness. Cross-sectional data for 1426 young adults experiencing homelessness were collected from 2016 to 2017 across seven cities in the United States (i.e., Los Angeles, Phoenix, Denver, Houston, San Jose, St. Louis, and New York City). The analytical sub-sample for this study consisted of 173 FHYAs who were housed in their own apartment (via voucher from Housing and Urban Development or another source) or in transitional living programs during their participation in the study. Ordinary Least Squares regression was used to examine the influence of trauma and social coping strategies on indicators of mental well-being. Findings indicated that higher adversity scores and higher mental health help-seeking intentions were positively associated with higher levels of stress, psychological distress, and depression severity. Higher level of social coping was associated with lower levels of depression severity.
Logistic regression results showed that young adults with higher adversity scores had higher odds of reporting clinical levels of post-traumatic symptoms. The study implications suggest that FHYAs who transition to stable housing continue to need support navigating and coping with stressful life events, and that interventions that help FHYAs develop strong networks of social support are needed to promote positive mental well-being. abstract_id: PUBMED:31297070 Determinants of Community Integration Among Formerly Homeless Veterans Who Received Supportive Housing. Community integration is recognized as a meaningful goal that is highly relevant to the long-term success of supportive housing programs. Research to date highlights concerns that some individuals in permanent supportive housing remain socially isolated and have limited success in other domains of community integration. However, we know little about what factors impact formerly homeless veterans' ability to achieve community integration. To identify factors associated with community integration among homeless veterans housed through the Department of Veterans Affairs' (VA's) Supportive Housing program (HUD-VASH), we performed secondary database analyses of Veterans (n = 560) housed via HUD-VASH in the VA Greater Los Angeles Healthcare System from 10/1/14-9/30/15. We conducted ordinal and logit regressions to examine associations between baseline HUD-VASH participant characteristics, supportive housing voucher type, health service utilization in the year post-housing, and three types of community integration outcomes (i.e., change in community adjustment, status of housing stability, and change in employment). Data were obtained from HOMES (VA's homeless registry) and Corporate Data Warehouse (CDW) (VA's national administrative dataset of VA inpatient and outpatient care). Mental health service utilization was negatively associated with community adjustment, housing stability, and employment. Employment at baseline was positively associated with housing stability and employment. Also, substance use disorder visits were positively associated with employment, and combined medical and substance use disorder diagnoses were positively associated with change in community adjustment. We considered 29 variables and found relatively few were associated with community integration. Consistent with previous research, our study highlights the importance of mental health needs, and suggests that utilization of mental health services is an important indicator of improvements in community adjustment, housing stability, and employment. Answer: The provided abstracts suggest that while housing programs for chronically homeless adults, such as Supported Housing and Permanent Supportive Housing (PSH), have been successful in helping individuals obtain housing, they have shown limited success in achieving social integration for these individuals. Chronically homeless adults with mental illness who participated in supported housing programs showed substantial improvements in housing stability but remained socially isolated with limited improvement in other domains of social integration. These domains were only weakly correlated with one another, indicating that more attention is needed to develop rehabilitation interventions in supported housing programs to improve social integration (PUBMED:22549528).
Similarly, formerly homeless adults living in PSH reported that while their basic needs were met and lifestyle improvements were facilitated, they were often dissatisfied with their living conditions and lacked meaningful activity, social integration, and community belongingness. Their disharmonious relationships within the PSH context, including with staff members, other residents, and the surrounding community, contributed to these issues (PUBMED:34561888). For homeless-experienced veteran families, community integration was found to be a challenge, with parents tending to isolate after transitioning into permanent housing. Connecting through children to other parents and community institutions, such as schools, was a facilitator for community integration. However, many felt judged by non-veterans, and there was a reported need for resources and advocacy after obtaining housing (PUBMED:33340381). The Housing First program, while promoting housing stability, did not contribute significantly to community integration among formerly homeless women with problematic substance use. Participants expressed limited interest in integrating into their communities and faced challenges such as health issues, poverty, service inaccessibility, and safety concerns (PUBMED:32779795). In conclusion, while housing programs provide a critical foundation for chronically homeless adults by addressing their immediate need for shelter, the evidence suggests that additional support and interventions are necessary to facilitate social integration and address the broader challenges faced by this population.
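Several of the housing studies above report logit (logistic) regressions of binary outcomes such as housing stability (e.g., PUBMED:31297070, PUBMED:33816749). The sketch below shows that general approach on simulated data; none of the variables or numbers are drawn from the cited studies.

```python
# Illustrative only: logistic regression of a binary housing-stability outcome
# on baseline predictors, fitted to simulated data (no real study data used).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
support = rng.normal(3.0, 1.0, n)     # baseline social support score
employed = rng.integers(0, 2, n)      # 1 = employed at baseline
mh_visits = rng.poisson(8, n)         # mental health visits in first year

# Simulate the outcome from a known model so the fit is well behaved
lin = -1.0 + 0.8 * support + 0.6 * employed - 0.1 * mh_visits
stable = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

df = pd.DataFrame({"stable": stable, "support": support,
                   "employed": employed, "mh_visits": mh_visits})
fit = smf.logit("stable ~ support + employed + mh_visits", data=df).fit(disp=False)
print(np.exp(fit.params))  # odds ratios per predictor
```

Exponentiated coefficients are odds ratios, which is how associations such as "mental health service utilization was negatively associated with housing stability" are typically quantified in these studies.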
Instruction: Psychosocial factors and surgical outcomes: are elderly depressed patients less satisfied with surgery? Abstracts: abstract_id: PUBMED:24921847 Psychosocial factors and surgical outcomes: are elderly depressed patients less satisfied with surgery? Study Design: Longitudinal cohort study. Objective: In this study, we set out to assess the effect of preoperative depression on patient satisfaction after revision lumbar surgery. Summary Of Background Data: Patient satisfaction ratings are increasingly being used in health care as a proxy for quality of care. In the elderly, affective disorders such as depression have been shown to influence patient-reported outcomes and self-interpretation of health status. Methods: A total of 69 patients aged 65 years or older undergoing revision neural decompression and instrumented fusion for same-level recurrent stenosis-associated back and leg pain were included in this study. Preoperative Zung self-rating depression score, comorbidities, and postoperative satisfaction with surgical care and outcome were assessed for all patients. Baseline and 2-year visual analogue scale (VAS)-leg pain, VAS-back pain, Oswestry Disability Index, Short Form-12 physical component score and Short Form-12 mental component score, as well as health-state utility (EuroQol 5D) were assessed. Factors associated with patient satisfaction after surgical procedures were assessed via multivariate logistic regression analysis. Results: Compared with baseline, there was a statistically significant improvement in VAS-back pain 2.76±2.73 (pseudarthrosis [1.94±2.81], adjacent segment disease [4.35±3.16], same-level recurrent stenosis [2±2.23]) and VAS-leg pain 2.66±4.12 (adjacent segment disease [2.24±4.46] and same-level recurrent stenosis [3±3.78]). Two-year Oswestry Disability Index improved after surgery for pseudarthrosis (4.05±7.65), adjacent segment disease (6±13.63) and same-level recurrent stenosis (4.54±5.97). In a multivariate logistic regression model, increasing preoperative Zung self-rating depression scale scores were independently associated with patient dissatisfaction 2 years after revision lumbar surgery (P < 0.001). Conclusion: This study demonstrates that, independent of surgical effectiveness, baseline depression influences patient satisfaction with health care 2 years after revision lumbar surgery. Quality improvement initiatives using patient satisfaction as a proxy for quality of care should account for patients' baseline depression as a potential confounder, especially in this age group. Level Of Evidence: 3. abstract_id: PUBMED:23890847 Persistent postmastectomy pain in breast cancer survivors: analysis of clinical, demographic, and psychosocial factors. Unlabelled: Persistent postmastectomy pain (PPMP) is increasingly recognized as a major individual and public health problem. Although previous studies have investigated surgical, medical, and demographic risk factors, in this study we aimed to more clearly elucidate the relationship of psychosocial factors to PPMP. Postmastectomy patients (611) were queried about pain location, severity, and burden 38.3 ± 35.4 months postoperatively. Validated questionnaires for depressive symptoms, anxiety, sleep, perceived stress, emotional stability, somatization, and catastrophizing were administered. Detailed surgical, medical, and treatment information was abstracted from patients' medical records.
One third (32.5%) of patients reported PPMP, defined as ≥3/10 pain severity in the breast, axilla, side, or arm, which did not vary according to time since surgery. Multiple regression analysis revealed significant and independent associations between PPMP and psychosocial factors, including catastrophizing, somatization, anxiety, and sleep disturbance. Conversely, treatment-related factors including surgical type, axillary node dissection, surgical complication, recurrence, tumor size, radiation, and chemotherapy were not significantly associated with PPMP. These data confirm previous studies suggesting that PPMP is relatively common and provide new evidence of significant associations between psychosocial characteristics such as catastrophizing and PPMP, regardless of the surgical and medical treatment that patients receive, which may lead to novel strategies in PPMP prevention and treatment. Perspective: This cross-sectional cohort study of 611 postmastectomy patients investigated severity, location, and frequency of pain a mean of 3.2 years after surgery. Significant associations between pain severity and individual psychosocial attributes such as catastrophizing were found, whereas demographic, surgical, medical, and treatment-related factors were not associated with persistent pain. abstract_id: PUBMED:36934352 Associations between cardiovascular diseases and psychosocial factors and options for intervention Coronary artery disease is a highly prevalent heart disease and the leading cause of morbidity and mortality in developed countries. During the last decades, numerous studies have focused on understanding the relationship between coronary heart disease and different psychosocial factors. Coronary artery bypass graft surgery is a common treatment for coronary artery disease and is usually associated with improved clinical outcomes. Symptoms of anxiety and unipolar depression are common psychological disorders in patients awaiting coronary artery bypass graft surgery. Several prospective cohort studies have been carried out on the factors affecting the short- and long-term outcome of coronary artery bypass graft surgery. The scientific literature reports that clinical features, e.g., cardiac state, comorbidity or intraoperative factors, are not the only influences on the outcome of cardiac surgery. When psychosocial factors were considered alongside traditional risk factors (hypertension, LDL cholesterol level, diabetes mellitus, smoking, obesity and physical inactivity), they proved to be determinants of morbidity and mortality rates. Assessing patients' psychological status before heart surgery and providing psychological interventions where indicated would be beneficial. A better understanding of whether and when psychological interventions affect specific outcomes may help design even more powerful interventions and make better predictions of which patients will benefit from which psychological intervention. Psychological assessment and intervention thus merit integration into routine surgical care. Orv Hetil. 2023; 164(11): 411-419. abstract_id: PUBMED:32299757 Psychosocial predictors of outcomes up to one year following total knee arthroplasty. Background: Total knee arthroplasty (TKA) aims to relieve pain and improve physical functioning of the knee; however, some patients continue to experience pain and impaired function following TKA that cannot be explained by surgical and implant factors.
Psychological factors may influence the outcomes of TKA. The aim of this prospective study was to examine the psychosocial factors that predicted pain, stiffness and physical functioning up to one year following TKA. Methods: One hundred and two patients completed pre-operative and one-year questionnaires which assessed a wide range of psychosocial and sociodemographic factors prior to surgery. The Oxford Knee Score (OKS) and the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) Pain, Stiffness and Physical Functioning subscales were used as outcome measures. Pearson correlation analysis and multiple linear regression were conducted to examine relationships between predictor and outcome variables. Results: Regression analysis showed that regarding variance in WOMAC outcome measures post TKA, our model predicted 31% for physical functioning, 25% for pain and 29% for stiffness at one year. Regarding variance in OKS post TKA, the model predicted 36% at one year. Greater levels of depressive symptoms and neuroticism and worse pre-operative scores significantly predicted poorer outcomes. Conclusions: The findings indicate that pre-operative psychosocial factors are important in understanding outcomes of TKA. Psychosocial factors could be considered during pre-operative assessment. Further research conducted on psychological interventions is needed within this population to determine whether early and one-year outcomes can be improved. abstract_id: PUBMED:26633863 Assessment of psychosocial factors and predictors of psychopathology in a sample of heart transplantation recipients: a prospective 12-month follow-up. Background And Objectives: In the last decades, researchers of heart transplantation (HT) programs have attempted to identify the existence of psychosocial factors that might influence the clinical outcome before and after the transplantation. The first objective of this study is the prospective description of changes in psychiatric and psychosocial factors in a sample of HT recipients through a 12-month follow-up. The second goal is to identify predictors of psychopathology 1 year after HT. Methods: Pretransplant baseline assessment consisted of clinical form; Hospital Anxiety and Depression Scale (HADS); Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Structured Clinical Interview; Coping questionnaire (COPE); Five Factors Inventory Revised; Apgar-Family questionnaire and Multidimensional Health Locus of Control (MHLC). The assessment 1 year after HT consisted of HADS, COPE, Apgar-Family and MHLC. Results: The sample included 78 recipients. During the waiting list period, 32.1% of them had a psychiatric disorder; personality factors profile was similar to the general population, and they showed adaptive coping strategies. Some changes in psychosocial factors were observed at 12 months after the surgery: lower scores of anxiety and depression, less necessity of publicly venting of feelings and a trend to an internal locus of control. Neuroticism and Disengagement pre-HT were predictors of psychopathology in the follow-up assessment. Conclusions: Pretransplant psychosocial screening is important and enables to find out markers of emotional distress like Neuroticism or Disengagement coping styles to identify patients who might benefit from psychiatric and psychological interventions. Successful HT involved some positive changes in psychosocial factors 12 months after the surgery beyond physical recovery. 
abstract_id: PUBMED:18687573 Psychosocial factors determining life expectancy of patients undergoing open heart surgery Not only the physical status of the patient and the clinical variables determine the outcome and recovery following open heart surgery. Psychosocial and socioeconomic factors have growing importance regarding this field. During the last decades, in the assessment of the results of revascularization the self-perceived health related quality of life of the patient has come into the limelight. Evidence suggests that self-perceived health related quality of life, depressive symptoms and anxiety together influence short and long term recovery following coronary bypass surgery. There is also a higher risk for morbidity and mortality among the lonely and the socially isolated. Lower education and poor social background may play a role in the higher mortality rates. In our review we summarize the psychosocial factors determining the outcome of heart surgery. abstract_id: PUBMED:17574501 Short-term and long-term psychosocial adjustment and quality of life in women undergoing different surgical procedures for breast cancer. Background: The various surgical procedures for early-stage breast cancer are equivalent in terms of survival. Therefore, other factors, such as the procedures' effect on psychosocial adjustment and quality of life (QOL), take on great importance. The aim of the current study was to prospectively examine the short- and long-term effects of mastectomy with reconstruction, mastectomy without reconstruction, and breast conservation therapy on aspects of psychosocial adjustment and QOL in a sample of 258 women with breast cancer. Methods: Participants completed questionnaires before surgery and then again 1, 6, 12, and 24 months after surgery. Questionnaires assessed depressive symptoms, anxiety, body image, sexual functioning, and QOL. Results: Adjustment patterns differed throughout the 2-year period after surgery. Some short-term changes in adjustment (less anxiety, less overall body satisfaction) were similar across surgery groups, whereas others (satisfaction with chest appearance, QOL in physical health domain) were higher for women who had breast conservation therapy. However, women who had mastectomy with reconstruction reported greater satisfaction with their abdominal area. During the long-term follow-up period (6 months to 2 years after surgery), women in all three groups experienced marked improvements in psychosocial adjustment (depressive symptoms, satisfaction with chest appearance, sexual functioning) and QOL in physical and mental health domains. In fact, the level for most variables returned to baseline levels or higher. Conclusions: Overall, the general patterns of psychosocial adjustment and QOL are similar among the three surgery groups. abstract_id: PUBMED:32021686 Psychosocial health of patients receiving orthopaedic treatment in northern Tanzania: A cross-sectional study. Background: Patients with musculoskeletal injuries in Sub-Saharan Africa often receive prolonged inpatient treatment due to limited access to surgical care. Little is known regarding the psychosocial impact of prolonged conservative treatment for orthopaedic injuries, which may add to disability and preclude rehabilitation. Methods: A cross-sectional, questionnaire study was conducted to characterize the psychosocial health of orthopaedic inpatients at a tertiary hospital in Moshi, Tanzania. 
Three validated surveys assessing coping strategies, functional social support, and symptoms of depression were orally administered to all orthopaedic patients with a length of stay (LOS) ≥ 6 days by a Tanzanian orthopaedic specialist. Results: Fifty-nine patient surveys were completed and revealed that 92% (54) of patients were more likely to utilize adaptive than maladaptive coping strategies. Patients with chest or spinal column injuries were more likely to use maladaptive coping strategies (p = 0.027). Patients with head injuries had more social support compared to others (p = 0.009). Lack of insurance, limited education, and rural origins were associated with less functional social support, although this finding did not reach statistical significance. 23.7% (14) of patients had symptoms consistent with mild depression, 33.9% (20) with moderate depression, and 3.4% (2) with moderately severe depression. LOS was the only significant predictor for depression severity. Conclusions: 61% (36) of orthopaedic inpatients exhibited depressive symptoms, indicating that the psychosocial health of this population is sub-optimal. Mental health is a crucial element of successful orthopaedic care. Access to timely surgical care would greatly decrease LOS, the most prominent predictor of depressive symptom severity. abstract_id: PUBMED:24152380 Psychosocial factors and mortality in women with early stage endometrial cancer. Objectives: Psychosocial factors have previously been linked with survival and mortality in cancer populations. Little evidence is available about the relationship between these factors and outcomes in gynaecologic cancer populations, particularly endometrial cancer, the fourth most common cancer among women. This study examined the relationship between several psychosocial factors prior to surgical resection and risk of all-cause mortality in women with endometrial cancer. Design: The study utilized a non-experimental, longitudinal design. Methods: Participants were 87 women (mean age = 60.69 years, SD = 9.12 years) who were diagnosed with T1N0-T3N2 endometrial cancer and subsequently underwent surgery. Participants provided psychosocial data immediately prior to surgery. Survival statuses 4-5 years post-diagnoses were abstracted via medical record review. Cox regression was employed for the survival analysis. Results: Of the 87 women in this sample, 21 women died during the 4- to 5-year follow-up. Adjusting for age, presence of regional disease and medical comorbidity severity (known biomedical prognostic factors), greater use of an active coping style prior to surgery was significantly associated with a lower probability of all-cause mortality, hazard ratio (HR) = 0.78, p = .04. Life stress, depressive symptoms, use of self-distraction coping, receipt of emotional support and endometrial cancer quality of life prior to surgery were not significantly associated with all-cause mortality 4-5 years following diagnosis. Conclusions: Greater use of active coping prior to surgery for suspected endometrial cancer is associated with lower probability of all-cause mortality 4-5 years post-surgery. Future research should attempt to replicate these relationships in a larger and more representative sample and examine potential behavioural and neuroendocrine/immune mediators of this relationship. Statement Of Contribution: What is already known on this subject? Psychosocial factors have previously been linked with clinical outcomes in a variety of cancer populations.
With regards to gynecologic cancer, the majority of the research has been conducted in ovarian cancer and examines the protective role of social support in mortality outcomes. What does this study add? Demonstrates association between active coping during perioperative period and 5 year survival. Demonstrates psychosocial-survival relationship exists independent of biobehavioral factors. abstract_id: PUBMED:32936216 Association of Breast Cancer Surgery With Quality of Life and Psychosocial Well-being in Young Breast Cancer Survivors. Importance: Young women with breast cancer are increasingly choosing bilateral mastectomy (BM), yet little is known about short-term and long-term physical and psychosocial well-being following surgery in this population. Objective: To evaluate the differential associations of surgery with quality of life (QOL) and psychosocial outcomes from 1 to 5 years following diagnosis. Design, Setting, And Participants: Cohort study. Setting: Multicenter, including academic and community hospitals in North America. Participants: Women age ≤40 when diagnosed with Stage 0-3 with unilateral breast cancer between 2006 and 2016 who had surgery and completed QOL and psychosocial assessments. Exposures (for Observational Studies): Primary breast surgery including breast-conserving surgery (BCS), unilateral mastectomy (UM), and BM. Main Outcomes And Measures: Physical functioning, body image, sexual health, anxiety and depressive symptoms were assessed in follow-up. Results: Of 826 women, mean age at diagnosis was 36.1 years; most women were White non-Hispanic (86.7%). Regarding surgery, 45% had BM, 31% BCS, and 24% UM. Of women who had BM/UM, 84% had reconstruction. While physical functioning, sexuality, and body image improved over time, sexuality and body image were consistently worse (higher adjusted mean scores) among women who had BM vs BCS (body image: year 1, 1.32 vs 0.64; P < .001; year 5, 1.19 vs 0.48; P < .001; sexuality: year 1, 1.66 vs 1.20, P < .001; year 5, 1.43 vs 0.96; P < .001) or UM (body image: year 1, 1.32 vs 1.15; P = .06; year 5, 1.19 vs 0.96; P = .02; sexuality: year 1, 1.66 vs 1.41; P = .02; year 5, 1.43 vs 1.09; P = .002). Anxiety improved across groups, but adjusted mean scores remained higher among women who had BM vs BCS/UM at 1 year (BM, 7.75 vs BCS, 6.94; P = .005; BM, 7.75 vs UM, 6.58; P = .005), 2 years (BM, 7.47 vs BCS, 6.18; P < .001; BM, 7.47 vs UM, 6.07; P < .001) and 5 years (BM, 6.67 vs BCS, 5.91; P = .05; BM, 6.67 vs UM, 5.79; P = .05). There were minimal between-group differences in depression levels in follow-up. Conclusions And Relevance: While QOL improves over time, young breast cancer survivors who undergo more extensive surgery have worse body image, sexual health, and anxiety compared with women undergoing less extensive surgery. Ensuring young women are aware of the short-term and long-term effects of surgery and receive support when making surgical decisions is warranted. Answer: The relationship between psychosocial factors and surgical outcomes, particularly in the context of elderly patients with depression, has been explored in several studies. The evidence suggests that psychosocial factors, including depression, can significantly influence patient satisfaction and outcomes following surgery. A longitudinal cohort study specifically assessed the effect of preoperative depression on patient satisfaction after revision lumbar surgery in elderly patients. 
The study found that increasing preoperative depression scores were independently associated with patient dissatisfaction two years after revision lumbar surgery, even when the surgery was effective in improving pain and disability (PUBMED:24921847). This indicates that elderly depressed patients may be less satisfied with surgery, and highlights the importance of considering patients' baseline depression as a potential confounder in quality improvement initiatives that use patient satisfaction as a proxy for quality of care. Other studies have also found significant associations between psychosocial factors and surgical outcomes. For instance, persistent postmastectomy pain in breast cancer survivors was found to be significantly associated with psychosocial factors such as catastrophizing, somatization, anxiety, and sleep disturbance, rather than with surgical or medical treatment factors (PUBMED:23890847). Similarly, psychosocial factors were identified as important predictors of pain, stiffness, and physical functioning up to one year following total knee arthroplasty, with greater levels of depressive symptoms and neuroticism predicting poorer outcomes (PUBMED:32299757). Moreover, psychosocial factors have been shown to influence mortality and quality of life in various surgical populations, including heart transplantation recipients and women with early-stage endometrial cancer. Active coping styles prior to surgery were associated with a lower probability of all-cause mortality in women with endometrial cancer (PUBMED:24152380), and psychosocial screening before heart transplantation was suggested to identify markers of emotional distress that could benefit from psychiatric and psychological interventions (PUBMED:26633863). In summary, the evidence supports the notion that psychosocial factors, including depression, play a significant role in surgical outcomes and patient satisfaction. Elderly depressed patients may indeed be less satisfied with surgery, and addressing psychosocial factors preoperatively could potentially improve postoperative satisfaction and outcomes.
Instruction: Prospective observer and software-based assessment of magnetic resonance imaging quality in head and neck cancer: Should standard positioning and immobilization be required for radiation therapy applications? Abstracts: abstract_id: PUBMED:25544553 Prospective observer and software-based assessment of magnetic resonance imaging quality in head and neck cancer: Should standard positioning and immobilization be required for radiation therapy applications? Purpose: The purpose of this study was to investigate the potential of a head and neck magnetic resonance simulation and immobilization protocol on reducing motion-induced artifacts and improving positional variance for radiation therapy applications. Methods And Materials: Two groups (group 1, 17 patients; group 2, 14 patients) of patients with head and neck cancer were included under a prospective, institutional review board-approved protocol and signed informed consent. A 3.0-T magnetic resonance imaging (MRI) scanner was used for anatomic and dynamic contrast-enhanced acquisitions with standard diagnostic MRI setup for group 1 and radiation therapy immobilization devices for group 2 patients. The impact of magnetic resonance simulation/immobilization was evaluated qualitatively by 2 observers in terms of motion artifacts and positional reproducibility and quantitatively using 3-dimensional deformable registration to track intrascan maximum motion displacement of voxels inside 7 manually segmented regions of interest. Results: The image quality of group 2 (29 examinations) was significantly better than that of group 1 (50 examinations) as rated by both observers in terms of motion minimization and imaging reproducibility (P < .0001). The greatest average maximum displacement was at the region of the larynx in the posterior direction for patients in group 1 (17 mm; standard deviation, 8.6 mm), whereas the smallest average maximum displacement was at the region of the posterior fossa in the superior direction for patients in group 2 (0.4 mm; standard deviation, 0.18 mm). Compared with group 1, maximum regional motion was reduced in group 2 patients in the oral cavity, floor of mouth, oropharynx, and larynx regions; however, the motion reduction reached statistical significance only in the regions of the oral cavity and floor of mouth (P < .0001). Conclusions: The image quality of head and neck MRI in terms of motion-related artifacts and positional reproducibility was greatly improved by use of radiation therapy immobilization devices. Consequently, immobilization with external and intraoral fixation in MRI examinations is required for radiation therapy application. abstract_id: PUBMED:36354715 Magnetic Resonance-Guided Radiation Therapy for Head and Neck Cancers. Despite the significant evolution of radiation therapy (RT) techniques in recent years, many patients with head and neck cancer still experience significant toxicities during and after treatments. The increased soft tissue contrast and functional sequences of magnetic resonance imaging (MRI) are particularly attractive in head and neck cancer and have led to the increasing development of magnetic resonance-guided RT (MRgRT). This approach refers to the inclusion of the additional information acquired from a diagnostic or planning MRI in radiation treatment planning, and now extends to online high-quality daily imaging generated by the recently developed MR-Linac. 
MRgRT holds numerous potential advantages, including enhanced baseline and planning evaluations, anatomical and functional treatment adaptation, the potential for hypofractionation, and multiparametric assessment of response. This article offers a structured review of the current literature on these established and upcoming roles of MRI for patients with head and neck cancer undergoing RT. abstract_id: PUBMED:34544481 Patient positioning and immobilization procedures for hybrid MR-Linac systems. Hybrid magnetic resonance (MR)-guided linear accelerators represent a new horizon in the field of radiation oncology. By harnessing the favorable combination of on-board MR imaging with the ability to recalculate the treatment plan daily based on real-time anatomy, the accuracy of target and organ-at-risk identification is expected to improve, with the aim of providing the best tailored treatment. To date, two main MR-linac hybrid machines are available, Elekta Unity and Viewray MRIdian. Of note, compared with conventional linacs, these devices raise practical issues in the positioning phase: the coil must be included in the immobilization procedure, and positioning must be as reproducible as possible, particularly in light of the potentially longer treatment time. Given the relative novelty of this technology, there are few published data regarding the procedures and workflows for patient positioning and immobilization for MR-guided daily adaptive radiotherapy. In the present narrative review, we summarize the currently available literature and provide an overview of the positioning and setup procedures for all anatomical sites for hybrid MR-linac systems. abstract_id: PUBMED:27709079 Functional magnetic resonance imaging techniques and their development for radiation therapy planning and monitoring in the head and neck cancers. Radiation therapy (RT), in particular intensity-modulated radiation therapy (IMRT), is becoming a more important nonsurgical treatment strategy in head and neck cancer (HNC). The further development of IMRT imposes more critical requirements on clinical imaging, and these requirements cannot be fully met by X-ray based imaging methods, the existing radiotherapeutic imaging workhorse. Magnetic resonance imaging (MRI) has increasingly gained interest from the radiation oncology community and holds great potential for RT applications, mainly due to its non-ionizing nature and superior soft tissue image contrast. Beyond anatomical imaging, MRI provides a variety of functional imaging techniques to investigate the functionality and metabolism of living tissue. The major purpose of this paper is to give a concise and timely review of some advanced functional MRI techniques that may potentially benefit conformal, tailored and adaptive RT in HNC. The basic principle of each functional MRI technique is briefly introduced and their use in RT of HNC is described. Limitations and future developments of these functional MRI techniques for HNC radiotherapeutic applications are discussed. More rigorous studies are warranted to translate the hypotheses into credible evidence in order to establish the role of functional MRI in the clinical practice of head and neck radiation oncology. abstract_id: PUBMED:27125096 THERMOPLASTIC MATERIALS APPLICATIONS IN RADIATION THERAPY.
This is an example of the use of thermoplastic materials in a high-tech medicine field, oncology radiation therapy, in order to produce the rigid masks for positioning and immobilization of the patient during simulation of the treatment procedure, the imaging verification of position and administration of the indicated radiation dose. Implementation of modern techniques of radiation therapy is possible only if provided with performant equipment (CT simulators, linear accelerators of high energy particles provided with multilamellar collimators and imaging verification systems) and accessories that increase the precision of the treatment (special supports for head-neck, thorax, pelvis, head-neck and thorax immobilization masks, compensating materials like bolus type material). The paper illustrates the main steps in modern radiation therapy service and argues the role of thermoplastics in reducing daily patient positioning errors during treatment. As part of quality assurance of irradiation procedure, using a rigid mask is mandatory when applying 3D conformal radiation therapy techniques, radiation therapy with intensity modulated radiation or rotational techninques. abstract_id: PUBMED:37056570 Multiparametric magnetic resonance imaging for radiation therapy response monitoring in soft tissue sarcomas: a histology and MRI co-registration algorithm. Rationale: To establish a spatially exact co-registration procedure between in vivo multiparametric magnetic resonance imaging (mpMRI) and (immuno)histopathology of soft tissue sarcomas (STS) to identify imaging parameters that reflect radiation therapy response of STS. Methods: The mpMRI-Protocol included diffusion-weighted (DWI), intravoxel-incoherent motion (IVIM), and dynamic contrast-enhancing (DCE) imaging. The resection specimen was embedded in 6.5% agarose after initial fixation in formalin. To ensure identical alignment of histopathological sectioning and in vivo imaging, an ex vivo MRI scan of the specimen was rigidly co-registered with the in vivo mpMRI. The deviating angulation of the specimen to the in vivo location of the tumor was determined. The agarose block was trimmed accordingly. A second ex vivo MRI in a dedicated localizer with a 4 mm grid was performed, which was matched to a custom-built sectioning machine. Microtomy sections were stained with hematoxylin and eosin. Immunohistochemical staining was performed with anti-ALDH1A1 antibodies as a radioresistance and anti-MIB1 antibodies as a proliferation marker. Fusion of the digitized microtomy sections with the in vivo mpMRI was accomplished through nonrigid co-registration to the in vivo mpMRI. Co-registration accuracy was qualitatively assessed by visual assessment and quantitatively evaluated by computing target registration errors (TRE). Results: The study sample comprised nine tumor sections from three STS patients. Visual assessment after nonrigid co-registration showed a strong morphological correlation of the histopathological specimens with ex vivo MRI and in vivo mpMRI after neoadjuvant radiation therapy. Quantitative assessment of the co-registration procedure using TRE analysis of different pairs of pathology and MRI sections revealed highly accurate structural alignment, with a total median TRE of 2.25 mm (histology - ex vivo MRI), 2.22 mm (histology - in vivo mpMRI), and 2.02 mm (ex vivo MRI - in vivo mpMRI). There was no significant difference between TREs of the different pairs of sections or caudal, middle, and cranial tumor parts, respectively. 
Conclusion: Our initial results show a promising approach to obtaining accurate co-registration between histopathology and in vivo MRI for STS. In a larger cohort of patients, the method established here will enable the prospective identification and validation of in vivo imaging biomarkers for radiation therapy response prediction and monitoring in STS patients via precise molecular and cellular correlation. abstract_id: PUBMED:3398739 MRI assisted treatment planning for radiation therapy of the head and neck. Improved visualization of head and neck tumors has been demonstrated with the use of magnetic resonance imaging (MRI). Using standard plastic radiation therapy immobilization casts and an MR positive surface marker system developed in this institution, we have utilized MRI as an adjunct to the simulation of complex radiation treatments for tumors of the head and neck. This technique includes an indirect display of field margins and/or isodose curves over selected MR images. The lack of induced artifact from the immobilization cast, improved delineation of tumor extension from normal anatomy and the ability to image in arbitrary planes without changing patient positioning favor the use of MR over CT for radiation therapy planning in the head and neck, while ensuring reproducibility of the treatment plan at subsequent therapy sessions. abstract_id: PUBMED:33635492 Actual applications of magnetic resonance imaging in dentomaxillofacial region. Magnetic resonance imaging (MRI) is a versatile imaging modality utilized in various medical fields. It is used specifically for the evaluation of soft tissues, and its non-ionizing radiation and multiplanar sections have provided great guidance in diagnosis. Nowadays, use of MRI in dental practice is becoming more pervasive, especially for the evaluation of head-and-neck cancer, detection of salivary gland lesions, lymphadenopathy, and temporomandibular joint disorders. Understanding its basic principles, recent advances, and multiple applications in the dentomaxillofacial region helps significantly in diagnostic decision making. In this article, the principle of MRI and its recent advances are reviewed, with further discussion of the appearance of various maxillofacial pathoses. abstract_id: PUBMED:22458824 Robotic-based carbon ion therapy and patient positioning in 6 degrees of freedom: setup accuracy of two standard immobilization devices used in carbon ion therapy and IMRT. Purpose: To investigate repositioning accuracy in particle radiotherapy in 6 degrees of freedom (DOF) and intensity-modulated radiotherapy (IMRT, 3 DOF) for two immobilization devices (Scotchcast masks vs thermoplastic head masks) currently in use at our institution for fractionated radiation therapy in head and neck cancer patients. Methods And Materials: Position verifications in patients treated with carbon ion therapy and IMRT for head and neck malignancies were evaluated. Most patients received a combined treatment regimen (IMRT plus carbon ion boost); immobilization was achieved with either Scotchcast or thermoplastic head masks. Position corrections in robotic-based carbon ion therapy allowing 6 DOF were compared to IMRT allowing corrections in 3 DOF for two standard immobilization devices. In total, 838 set-up controls of 38 patients were analyzed. Results: Robotic-based position correction including correction of rotations was well tolerated and without discomfort.
Standard deviations of translational components were between 0.5 and 0.8 mm for Scotchcast and 0.7 and 1.3 mm for thermoplastic masks in 6 DOF and 1.2-1.4 mm and 1.0-1.1 mm in 3 DOF respectively. Mean overall displacement vectors were between 2.1 mm (Scotchcast) and 2.9 mm (thermoplastic masks) in 6 DOF and 3.9-3.0 mm in 3 DOF respectively. Displacement vectors were lower when correction in 6 DOF was allowed as opposed to 3 DOF only, which was maintained at the traditional action level of >3 mm for position correction in the pre-on-board imaging era. Conclusion: Setup accuracy for both systems was within the expected range. Smaller shifts were required when 6 DOF were available for correction as opposed to 3 DOF. Where highest possible positioning accuracy is required, frequent image guidance is mandatory to achieve best possible plan delivery and maintenance of sharp gradients and optimal normal tissue sparing inherent in carbon ion therapy. abstract_id: PUBMED:28816703 An electromechanical, patient positioning system for head and neck radiotherapy. In cancer treatment with radiation, accurate patient setup is critical for proper dose delivery. Improper arrangement can lead to disease recurrence, permanent organ damage, or lack of disease control. While current immobilization equipment often helps for patient positioning, manual adjustment is required, involving iterative, time-consuming steps. Here, we present an electromechanical robotic system for improving patient setup in radiotherapy, specifically targeting head and neck cancer. This positioning system offers six degrees of freedom for a variety of applications in radiation oncology. An analytical calculation of inverse kinematics serves as fundamental criteria to design the system. Computational mechanical modeling and experimental study of radiotherapy compatibility and x-ray-based imaging demonstrates the device feasibility and reliability to be used in radiotherapy. An absolute positioning accuracy test in a clinical treatment room supports the clinical feasibility of the system. Answer: The prospective observer and software-based assessment of magnetic resonance imaging (MRI) quality in head and neck cancer suggests that standard positioning and immobilization should indeed be required for radiation therapy applications. The study by PUBMED:25544553 found that the use of radiation therapy immobilization devices significantly improved the image quality of head and neck MRI in terms of motion-related artifacts and positional reproducibility. The greatest average maximum displacement was observed in the larynx region for patients without immobilization, while the smallest displacement was in the posterior fossa for patients with immobilization. The study concluded that immobilization with external and intraoral fixation in MRI examinations is required for radiation therapy application due to the improved image quality and reduced motion artifacts. This finding is supported by the broader literature on the subject. For instance, PUBMED:36354715 discusses the potential of magnetic resonance-guided radiation therapy (MRgRT) for head and neck cancers, which relies on the enhanced imaging capabilities of MRI for treatment planning and response assessment. The use of immobilization devices is implied as part of the process to ensure accurate and reproducible patient positioning. 
Similarly, PUBMED:34544481 highlights the importance of patient positioning and immobilization procedures for hybrid MR-Linac systems, which combine MRI with linear accelerators for radiation therapy. The paper notes that compared to conventional linear accelerators, MR-Linac systems require careful consideration of the positioning phase, including the inclusion of the coil in the immobilization procedure, to achieve reproducible positioning. In summary, the evidence suggests that standard positioning and immobilization are crucial for the accurate application of radiation therapy in head and neck cancer patients, as they significantly improve the quality of MRI images used for treatment planning and monitoring, thereby potentially enhancing treatment outcomes.
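One quantitative detail in the abstracts above deserves a brief illustration: PUBMED:25544553 quantified motion by tracking the intrascan maximum displacement of voxels within manually segmented regions of interest after 3-dimensional deformable registration, and PUBMED:37056570 reports target registration errors in millimetres. The Python sketch below shows, under stated assumptions, how a per-region maximum displacement could be computed from a deformation vector field and a binary region mask; it is not the authors' actual pipeline, and the array shapes, voxel spacing and values are hypothetical.

import numpy as np

def max_regional_displacement(deformation_field, roi_mask, voxel_spacing_mm):
    """deformation_field: (Z, Y, X, 3) voxel shifts; roi_mask: (Z, Y, X) booleans."""
    spacing = np.asarray(voxel_spacing_mm, dtype=float)   # assume shifts are in voxel units
    shifts_mm = deformation_field * spacing               # convert each shift component to mm
    magnitudes = np.linalg.norm(shifts_mm, axis=-1)       # per-voxel displacement magnitude
    return float(magnitudes[roi_mask].max())              # worst-case motion inside the region

# Hypothetical toy volume: 4 x 4 x 4 voxels with random sub-voxel motion and a central "larynx" mask
rng = np.random.default_rng(0)
field = rng.normal(scale=0.5, size=(4, 4, 4, 3))
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
print(f"Maximum displacement in ROI: {max_regional_displacement(field, mask, (3.0, 1.0, 1.0)):.2f} mm")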
Instruction: Does concordance between survey responses and administrative records differ by ethnicity for prescription medication? Abstracts: abstract_id: PUBMED:22851553 Does concordance between survey responses and administrative records differ by ethnicity for prescription medication? Background: Self-reported prescription medication use data is often used to measure differences across ethnic groups, but its accuracy may differ across ethnic groups. Objective: We compared ethnic groups' self-reported medication use to their administrative records for respondents with diabetes, hypertension, and asthma. Methods: We linked the Canadian Community Health Survey to administrative prescription drug records for 17,191 respondents in British Columbia, Canada. We evaluated the concordance between self-reported medication use and prescription drug records using positive predictive value, negative predictive value, sensitivity, specificity, and kappa statistic for self-identified Whites, Chinese, South Asians, and Southeast Asians/Filipinos. The concordance was calculated using prescription drug records as the reference standard. We also estimated the odds of disagreement (either a false positive or negative) in medication use with logistic regressions for each ethnic group, and compared them using the Blinder-Oaxaca method. Results: We found that Chinese had the worst positive predictive value for asthma medication use at 0.41, while South Asians had the worst sensitivity for hypertension medication use at 0.60. The difference in reporting an error between ethnic groups was likely explained by differences in respondent characteristics. Particularly, if White respondents had the same characteristics as South Asians, then White respondents would have had 1.031 (95% CI: 1.020-1.041) higher odds of disagreement for hypertension medication use than with their own characteristics. Conclusion: Self-reported medication use may be a valid measure of ethnic groups' medication use if ethnic differences in characteristics, like household income are held constant. However, an important determinant of validity for all ethnic groups is whether medications are used routinely, or for a specific episode. abstract_id: PUBMED:28744981 The concordance between self-reported medication use and pharmacy records in pregnant women. Purpose: Several studies have been conducted to assess determinants affecting the performance or accuracy of self-reports. These studies are often not focused on pregnant women, or medical records were used as a data source where it is unclear if medications have been dispensed. Therefore, our objective was to evaluate the concordance between self-reported medication data and pharmacy records among pregnant women and its determinants. Methods: We conducted a population-based cohort study within the Generation R study, in 2637 pregnant women. The concordance between self-reported medication data and pharmacy records was calculated for different therapeutic classes using Yule's Y. We evaluated a number of variables as determinant of discordance between both sources through univariate and multivariate logistic regression analysis. Results: The concordance between self-reports and pharmacy records was moderate to good for medications used for chronic conditions, such as selective serotonin reuptake inhibitors or anti-asthmatic medications (0.88 and 0.68, respectively). Medications that are used occasionally, such as antibiotics, had a lower concordance (0.51). 
Women with a Turkish or other non-Western background were more likely to demonstrate discordance between pharmacy records and self-reported data compared with women with a Dutch background (Turkish: odds ratio, 1.63; 95% confidence interval, 1.16-2.29; other non-Western: odds ratio, 1.33; 95% confidence interval, 1.03-1.71). Conclusions: Further research is needed to assess how the cultural or ethnic differences may affect the concordance or discordance between both medication sources. The results of this study showed that the use of multiple sources is needed to have a good estimation of the medication use during pregnancy. abstract_id: PUBMED:30170505 The Errors in Reporting Medicare Coverage: A Comparison of Survey Data and Administrative Records. Objectives: We examine survey reporting of Medicare coverage of the older population by evaluating discordance between survey responses and administrative records. Method: We link data from the 2014 Current Population Survey Annual Social and Economic Supplement (CPS ASEC) and 2014 Medicare Enrollment Database to evaluate the extent to which individuals misreport Medicare coverage in the CPS ASEC. Using regression analyses, we assess factors associated with misreporting. Results: We find the CPS ASEC undercounts the population aged 65 years and older with Medicare by 4.5%. Misreporting of Medicare coverage is associated with citizenship status, immigration year of entry, employment, coverage of other household members, and imputation of Medicare responses. Adjusting for misreporting, Medicare coverage among older individuals increases from 93.4% to 95.6%. Discussion: The CPS ASEC underestimates Medicare coverage for the older population. Administrative records may be useful to evaluate and improve survey imputation of Medicare coverage when missing. abstract_id: PUBMED:38414538 Generating synthetic data from administrative health records for drug safety and effectiveness studies. Introduction: Administrative health records (AHRs) are used to conduct population-based post-market drug safety and comparative effectiveness studies to inform healthcare decision making. However, the cost of data extraction, and the challenges associated with privacy and securing approvals can make it challenging for researchers to conduct methodological research in a timely manner using real data. Generating synthetic AHRs that reasonably represent the real-world data are beneficial for developing analytic methods and training analysts to rapidly implement study protocols. We generated synthetic AHRs using two methods and compared these synthetic AHRs to real-world AHRs. We described the challenges associated with using synthetic AHRs for real-world study. Methods: The real-world AHRs comprised prescription drug records for individuals with healthcare insurance coverage in the Population Research Data Repository (PRDR) from Manitoba, Canada for the 10-year period from 2008 to 2017. Synthetic data were generated using the Observational Medical Dataset Simulator II (OSIM2) and a modification (ModOSIM). Synthetic and real-world data were described using frequencies and percentages. Agreement of prescription drug use measures in PRDR, OSIM2 and ModOSIM was estimated with the concordance coefficient. Results: The PRDR cohort included 169,586,633 drug records and 1,395 drug types for 1,604,734 individuals. Synthetic data for 1,000,000 individuals were generated using OSIM2 and ModOSIM. Sex and age group distributions were similar in the real-world and synthetic AHRs. 
However, there were significant differences in the number of drug records and number of unique drugs per person for OSIM2 and ModOSIM when compared with PRDR. For the average number of days of drug use, concordance with the PRDR was 16% (95% confidence interval [CI]: 12%-19%) for OSIM2 and 88% (95% CI: 87%-90%) for ModOSIM. Conclusions: ModOSIM data were more similar to PRDR than OSIM2 data on many measures. Synthetic AHRs consistent with those found in real-world settings can be generated using ModOSIM. Synthetic data will benefit rapid implementation of methodological studies and data analyst training. abstract_id: PUBMED:21915239 Ethnic differences in the use of prescription drugs: a cross-sectional analysis of linked survey and administrative data. Background: Evidence from the United States and Europe suggests that the use of prescription drugs may vary by ethnicity. In Canada, ethnic disparities in prescription drug use have not been as well documented as disparities in the use of medical and hospital care. We conducted a cross-sectional analysis of survey and administrative data to examine needs-adjusted rates of prescription drug use by people of different ethnic groups. Methods: For 19 370 non-Aboriginal people living in urban areas of British Columbia, we linked data on self-identified ethnicity from the Canadian Community Health Survey with administrative data describing all filled prescriptions and use of medical services in 2005. We used sex-stratified multivariable logistic regression analysis to measure differences in the likelihood of filling prescriptions by drug class (antihypertensives, oral antibiotics, antidepressants, statins, respiratory drugs and nonsteroidal anti-inflammatory drugs [NSAIDs]). Models were adjusted for age, general health status, treatment-specific health status, socio-economic factors and recent immigration (within 10 years). Results: We found evidence of significant needs-adjusted variation in prescription drug use by ethnicity. Compared with women and men who identified themselves as white, those who were South Asian or of mixed ethnicity were almost as likely to fill prescriptions for most types of medicines studied; moreover, South Asian men were more likely than white men to fill prescriptions for antibiotics and NSAIDs. The clearest pattern of use emerged among Chinese participants: Chinese women were significantly less likely to fill prescriptions for antihypertensives, antibiotics, antidepressants and respiratory drugs, and Chinese men for antidepressant drugs and statins. Interpretation: We found some disparities in prescription drug use in the study population according to ethnic group. The nature of some of these variations suggest that ethnic differences in beliefs about pharmaceuticals may generate differences in prescription drug use; other variations suggest that there may be clinically important disparities in treatment use. abstract_id: PUBMED:27375305 LINKING SURVEY AND ADMINISTRATIVE RECORDS: MECHANISMS OF CONSENT. Survey records are increasingly being linked to administrative databases to enhance the survey data and increase research opportunities for data users. A necessary prerequisite to linking survey and administrative records is obtaining informed consent from respondents. Obtaining consent from all respondents is a difficult challenge and one that faces significant resistance. Consequently, data linkage consent rates vary widely from study-to-study. 
Several studies have found significant differences between consenters and non-consenters on socio-demographic variables, but no study has investigated the underlying mechanisms of consent from a theory-driven perspective. In this study, we describe and test several hypotheses related to respondents' willingness to consent to an earnings and benefit data linkage request based on mechanisms related to financial uncertainty, privacy concerns, resistance towards the survey interview, level of attentiveness during the interview, the respondents' preexisting relationship with the administrative data agency, and matching respondents and interviewers on observable characteristics. The results point to several implications for survey practice and suggestions for future research. abstract_id: PUBMED:21092309 Health plan administrative records versus birth certificate records: quality of race and ethnicity information in children. Background: To understand racial and ethnic disparities in health care utilization and their potential underlying causes, valid information on race and ethnicity is necessary. However, the validity of pediatric race and ethnicity information in administrative records from large integrated health care systems using electronic medical records is largely unknown. Methods: Information on race and ethnicity of 325,810 children born between 1998-2008 was extracted from health plan administrative records and compared to birth certificate records. Positive predictive values (PPV) were calculated for correct classification of race and ethnicity in administrative records compared to birth certificate records. Results: Misclassification of ethnicity and race in administrative records occurred in 23.1% and 33.6% children, respectively; the majority due to missing ethnicity (48.3%) and race (40.9%) information. Misclassification was most common in children of minority groups. PPV for White, Black, Asian/Pacific Islander, American Indian/Alaskan Native, multiple and other was 89.3%, 86.6%, 73.8%, 18.2%, 51.8% and 1.2%, respectively. PPV for Hispanic ethnicity was 95.6%. Racial and ethnic information improved with increasing number of medical visits. Subgroup analyses comparing racial classification between non-Hispanics and Hispanics showed White, Black and Asian race was more accurate among non-Hispanics than Hispanics. Conclusions: In children, race and ethnicity information from administrative records has significant limitations in accurately identifying small minority groups. These results suggest that the quality of racial information obtained from administrative records may benefit from additional supplementation by birth certificate data. abstract_id: PUBMED:31236800 The Impact of Patient-Provider Race/Ethnicity Concordance on Provider Visits: Updated Evidence from the Medical Expenditure Panel Survey. Objective: To examine the association between race/ethnicity concordance and in-person provider visits following the implementation of the Affordable Care Act. Design: Using 2014-2015 data from the Medical Expenditure Panel Survey, we examine whether having a provider of the same race or ethnicity ("race/ethnicity concordance") affects the probability that an individual will visit a provider. Multivariate probit models are estimated to adjust for demographic, socioeconomic, and health factors. 
Results: Race/ethnicity concordance significantly increases the likelihood of seeking preventative care for Hispanic, African-American, and Asian patients relative to White patients (coef = 1.46, P < 0.001; coef = 0.71, P = 0.09; coef = 1.70, P < 0.001, respectively). Race/ethnicity concordance also increases the likelihood that Hispanic and Asian patients visit their provider for new health problems (coef = 2.14, P < 0.001 and coef = 1.49, P < 0.05, respectively). We find that race/ethnicity concordance is also associated with an increase in the likelihood that Hispanic and Asian patients continue to visit their provider for ongoing medical problems (Hispanic coef = 1.06, P < 0.001; Asian coef = 1.24, P < 0.05). Conclusions: There is an association between race/ethnicity concordance and the likelihood of patients visiting their provider. Our results demonstrate that racial disparities in health care utilization may be partially explained by race/ethnicity concordance. abstract_id: PUBMED:30506312 Adult Immigrants' Utilization of Physician Visits, Dentist Visits, and Prescription Medication. This study sought factors in immigrants' utilization of services of physicians and dentists, as well as their use of prescription medication. The study used data from 1452 adult immigrants collected for the National Health and Nutrition Examination Survey 2011-2012. Logistic regression results showed that age, US citizenship, and health insurance status were associated with the use of physician, dentist, and medication services. For this sample, physician visits were associated negatively with Hispanic ethnicity, poverty-level family income, and English-language proficiency. Also, dentist visits were associated positively with female gender, good health, and more education; and negatively with Black ethnicity, family income at 101-200% of poverty level, and English-language proficiency. Medication use was associated positively with poor health and female gender, and negatively with Hispanic ethnicity. Implications for policy and intervention development/implementation are discussed. abstract_id: PUBMED:23956015 Estimating the information gap between emergency department records of community medication compared to on-line access to the community-based pharmacy records. Objective: Errors in community medication histories increase the risk of adverse events. The objectives of this study were to estimate the extent to which access to community-based pharmacy records provided more information about prescription drug use than conventional medication histories. Materials And Methods: A prospective cohort of patients with public drug insurance who visited the emergency departments (ED) in two teaching hospitals in Montreal, Quebec was recruited. Drug lists recorded in the patients' ED charts were compared with pharmacy records of dispensed medications retrieved from the public drug insurer. Patient and drug-related predictors of discrepancies were estimated using general estimating equation multivariate logistic regression. Results: 613 patients participated in the study (mean age 63.1 years, 59.2% women). Pharmacy records identified 41.5% more prescribed medications than were noted in the ED chart. Concordance was highest for anticoagulants, cardiovascular drugs and diuretics. Omissions in the ED chart were more common for drugs that may be taken episodically. 
Patients with more than 12 medications (OR 2.92, 95% CI 1.71 to 4.97) and more than one pharmacy (OR 3.85, 95% CI 1.80 to 6.59) were more likely to have omissions in the ED chart. Discussion: The development of health information exchanges could improve the efficiency and accuracy of information about community medication histories if they enable automated access to dispensed medication records from community pharmacies, particularly for the most vulnerable populations with multiple morbidities. Conclusions: Pharmacy records identified a substantial number of medications that were not in the ED chart. There is potential for greater safety and efficiency with automated access to pharmacy records. Answer: Yes, concordance between survey responses and administrative records for prescription medication does differ by ethnicity. Studies have shown that there are ethnic disparities in the accuracy of self-reported medication use when compared to administrative records. For instance, Chinese respondents had the worst positive predictive value for asthma medication use, and South Asians had the worst sensitivity for hypertension medication use. The differences in reporting errors between ethnic groups were likely explained by differences in respondent characteristics, such as household income (PUBMED:22851553). Additionally, pregnant women with a Turkish or other non-Western background were more likely to demonstrate discordance between pharmacy records and self-reported data compared with women with a Dutch background (PUBMED:28744981). Furthermore, a cross-sectional analysis of survey and administrative data in Canada found significant needs-adjusted variation in prescription drug use by ethnicity, with some ethnic groups less likely to fill prescriptions for certain types of medicines compared to Whites (PUBMED:21915239). The quality of race and ethnicity information in children's health plan administrative records also showed limitations, particularly for smaller minority groups, suggesting that administrative records may benefit from additional supplementation by birth certificate data (PUBMED:21092309). Overall, these findings indicate that ethnicity can affect the concordance between self-reported medication use and administrative records, and this has implications for the validity of self-reported medication use data across different ethnic groups.
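The validity measures cited throughout this group of abstracts (sensitivity, specificity, positive and negative predictive value and the kappa statistic in PUBMED:22851553, and Yule's Y in PUBMED:28744981) are all derived from a 2x2 cross-tabulation of self-reported use against dispensing records. The short Python sketch below is illustrative only: the formulas are standard, but the counts are hypothetical and are not taken from any of the studies.

from math import sqrt

def agreement_measures(a, b, c, d):
    """a = report yes/record yes, b = report yes/record no, c = report no/record yes, d = report no/record no."""
    n = a + b + c + d
    sensitivity = a / (a + c)                    # reported use among those with a dispensing record
    specificity = d / (b + d)                    # denied use among those without a record
    ppv = a / (a + b)                            # record present among those reporting use
    npv = d / (c + d)                            # record absent among those denying use
    p_observed = (a + d) / n                     # raw agreement
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    kappa = (p_observed - p_expected) / (1 - p_expected)
    yules_y = (sqrt(a * d) - sqrt(b * c)) / (sqrt(a * d) + sqrt(b * c))
    return sensitivity, specificity, ppv, npv, kappa, yules_y

# Hypothetical counts for one drug class in one ethnic group (not study data)
sens, spec, ppv, npv, kappa, y = agreement_measures(a=120, b=15, c=30, d=335)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f} kappa={kappa:.2f} Yule's Y={y:.2f}")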
Instruction: Can the measurement of amylase in drain after distal pancreatectomy predict post-operative pancreatic fistula? Abstracts: abstract_id: PUBMED:26117433 Can the measurement of amylase in drain after distal pancreatectomy predict post-operative pancreatic fistula? Introduction: The most frequent reason for performing a distal pancreatectomy is the presence of cystic or neuroendocrine tumors, in which the distal pancreatic stump is often soft and non-fibrotic. This parenchymal consistency represents the main risk factor for post-operative pancreatic fistula. In order to identify the fistula and assess its severity, postoperative monitoring of amylase from intraperitoneal drains is important. Methods: Thirty-three patients who underwent distal pancreatectomy for pancreatic neoplastic disease were included from a retrospective multicentric database analysis. Results: Postoperative pancreatic fistula occurred in four cases. One patient had a ductal adenocarcinoma, two presented with pancreatic endocrine neoplasms and the last one had an intraductal papillary mucinous neoplasia. Two patients underwent open and the other two laparoscopic distal pancreatectomy. Discussion: Postoperative pancreatic fistulas after distal pancreatectomy worsen the quality of life, prolong the post-operative stay and delay further adjuvant therapy. In patients who underwent distal pancreatectomy, the literature reports some advantages of placing abdominal drains only in selected cases and of removing them early. Patients presenting a high risk of pancreatic fistula had higher drainage-fluid amylase levels on the first postoperative day. Conclusion: POPF is the most frequent complication after pancreatectomy. In our analysis, a first-postoperative-day drain fluid amylase (DFA1) > 5000 can be considered a predictive factor for pancreatic fistula. For this reason, the systematic measurement of amylase in drain fluid on the first postoperative day can be considered good clinical practice. abstract_id: PUBMED:30588532 Predictive value of post-operative drain amylase levels for post-operative pancreatic fistula. Backgrounds/aims: Traditionally, surgically placed pancreatic drains are removed at the discretion of the operating surgeon. Moving towards enhanced recovery after surgery (ERAS), we looked for predictors for early drain removal. The purpose of this paper was to establish which postoperative day's (POD) drain amylase is most predictive of post-operative pancreatic fistula (POPF). Methods: We conducted a retrospective study of 196 patients who underwent pancreatic resection at our institute from January 2006 to October 2013. Drain amylase levels were routinely measured. The International Study Group of Pancreatic Fistula (ISGPF) definition of POPF and clinical severity grading were used. Results: 5.1% (10 of 196) of patients developed ISGPF Grades B and C POPF. The negative predictive value for developing significant POPF, if drain amylase values were low on PODs 1 and 3, was 98.7% (95% CI: 0.93-1.00). This translated to confidence in removing surgically placed pancreatic drains at PODs 1 and 3 when drain amylase values are low. Conclusions: Patients with low drain amylase values on PODs 1 and 3 are unlikely to develop POPF and may have pancreatic drains removed earlier. abstract_id: PUBMED:28321709 Intra-Operative Amylase Concentration in Peri-Pancreatic Fluid Predicts Pancreatic Fistula After Distal Pancreatectomy. Post-operative pancreatic fistula (POPF) is a potentially severe complication following distal pancreatectomy.
The aim of this study was to assess the predictive value of intra-operative amylase concentration (IOAC) in peri-pancreatic fluid after distal pancreatectomy for the diagnosis of POPF. Consecutive patients who underwent a distal pancreatectomy between November 2014 and September 2016 were included in the analysis. IOAC was measured, followed by drain fluid analysis for amylase on post-operative days (PODs) 1, 3, and 5. Receiver operating characteristic (ROC) analysis was performed to evaluate the discriminative capacity of IOAC as a predictor of POPF. IOAC was measured after distal pancreatectomy in 26 patients. The IOAC correlated significantly with (i) PODs 1, 3, and 5 drain amylase (p < 0.01); (ii) the development of POPF (p < 0.01); and (iii) the Clavien-Dindo grade of surgical complications (p = 0.02). Eighty-three percent of patients with an IOAC > 1000 experienced a post-operative complication (OR 18.3, 95% CI 2.51-103, p < 0.01). ROC curve analysis confirmed the predictive relationship of IOAC and POPF as an excellent test with an area under the curve of 0.92 (95% CI 0.81-0.99, p < 0.01). Measurement of IOAC allows early and accurate categorization of patients at risk for POPF in distal pancreatectomy. abstract_id: PUBMED:37046241 Drain fluid and serum amylase concentration ratio is the most reliable indicator for predicting postoperative pancreatic fistula after distal pancreatectomy. Background: Postoperative pancreatic fistula (POPF) is a major complication of pancreatic surgery. Drain fluid amylase concentration (DAC) is considered a predictive indicator of POPF. However, other indicators related to postoperative drain fluid amylase status exist, and the most reliable indicator for predicting POPF remains unclear. The objective of this study was to identify the single most accurate drain-fluid-amylase-related indicator of POPF after distal pancreatectomy (DP). Methods: This single-institution retrospective study included 122 patients who underwent DP. The study was conducted between 2010 and 2022 at Gifu University Hospital. We statistically analyzed DAC, drain fluid amylase amount (DAA), calculated by multiplying DAC and daily drainage volume, and drain and serum amylase concentration ratio (DSACR) to assess the correlation with POPF. Results: Based on the definition and grading of the International Study Group of Pancreatic Fistula, 24.6% of the 122 patients had Grades B and C POPF. On receiver operating characteristic (ROC) curve analysis for predicting POPF after DP, DSACR had the highest area under the curve (AUC) value among DAC, DAA, and DSACR on both POD 1 and POD 3. The cutoff value of DSACR on POD 1 was 17 (AUC 0.69, sensitivity 80.0%, specificity 58.2%, and accuracy 63.6%). The cutoff value of DSACR on POD 3 was 22 (AUC 0.77, sensitivity 77.7%, specificity 73.3%, and accuracy 73.6%). Overall, DSACR on POD 3 had the highest AUC value. Furthermore, a multivariate logistic regression analysis revealed that pancreatic texture (soft; odds ratio [OR] 9.22; 95% confidence interval [CI] 2.22-44.19; p < 0.01) and DSACR on POD 3 (> 22; OR 8.76; 95% CI 2.78-31.59; p < 0.001) were independently associated with POPF after DP. Conclusions: DSACR is the most reliable indicator of drain fluid amylase status for predicting POPF after DP. abstract_id: PUBMED:34755016 Which is the best predictor of clinically relevant pancreatic fistula after pancreatectomy: drain fluid concentration or total amount of amylase?
Aim: Drain fluid amylase concentration (DFAC) has been reported as a predictor of clinically relevant postoperative pancreatic fistula (CR-POPF) after pancreatectomy. However, the clinical significance of measuring the total drain fluid amylase amount (DFAA), which takes the daily drainage volume into account, for predicting CR-POPF remains unclear. Methods: Data from 216 consecutive patients who underwent pancreaticoduodenectomy (PD) (n = 126) or distal pancreatectomy (DP) (n = 90) between August 2014 and November 2020 were reviewed. All drains were closed but not suctioned. DFAA was calculated by multiplying the DFAC and daily drainage fluid volume. DFAC and DFAA were recorded on days 1 and 3 after pancreatectomy. The cutoff values for predicting CR-POPF were determined using receiver operating characteristic curves. Results: CR-POPF was found in 75 patients (35%) (PD: 30%, DP: 41%, P = .111); the mortality rate was zero. The cutoff value of DFAC-day 1 was 1757 U/L (sensitivity [SE]: 84%, specificity [SP]: 62%, and accuracy [AC]: 69%). The cutoff value of DFAA-day 1 was 139 U (SE: 71%, SP: 72%, and AC: 71%). The cutoff value of DFAC-day 3 was 1044 U/L (SE: 73%, SP: 79%, and AC: 78%). The cutoff value of DFAA-day 3 was 21 U (SE: 68%, SP: 72%, and AC: 70%). Multivariate analysis indicated that a nondilated pancreatic duct and high DFAC-day 3 were independently associated with CR-POPF after PD, whereas a prolonged operative duration, massive blood loss, and high DFAC-day 3 were independently associated with CR-POPF after DP. Conclusion: DFAC is more reliable than DFAA for predicting CR-POPF after both PD and DP. abstract_id: PUBMED:36634411 Adjusting Drain Fluid Amylase for Drain Volume Does Not Improve Pancreatic Fistula Prediction. Introduction: Drain fluid amylase (DFA) levels have been used to predict clinically relevant postoperative pancreatic fistula (CR-POPF) and guide postoperative drain management. Optimal DFA cutoff thresholds vary between studies, thereby prompting investigation of an alternative assessment technique. As DFA measurements could, in theory, be distorted by variations in ascites fluid production, we hypothesized that adjusting DFA for volume, i.e. volume-corrected drain fluid amylase (vDFA), would improve CR-POPF predictive models. Methods: A single-institution retrospective cohort study of patients who underwent pancreatoduodenectomies (PD) and distal pancreatectomies (DP) between 2013 and 2019 was performed. DFAs and vDFAs were measured on postoperative day (POD) 3. Clinicopathologic variables were compared between cohorts by univariable and multivariable analyses and receiver operating characteristic (ROC) curves. Results: Patients developing a CR-POPF were more likely to be male and have elevated DFA, vDFA, and body mass index (BMI). vDFA use did not contribute to a superior CR-POPF predictive model compared to DFA, a finding consistent on subanalysis by surgery type (PD versus DP). DFA, vDFA, and male sex significantly improved CR-POPF predictive models when considering both surgery subtypes, while only DFA and vDFA significantly improved models when cohorts were segregated by surgery type. Conclusions: Postoperative DFA remains a preferred method of predicting CR-POPF, as the proposed vDFA assessment technique only adds complexity without increased discriminability. abstract_id: PUBMED:29802050 Utility of drain fluid amylase measurement on the first postoperative day after distal pancreatectomy.
Background: Early exclusion of a postoperative pancreatic fistula (POPF) may facilitate earlier drain removal in selected patients after distal pancreatectomy. The purpose of this study was to evaluate the role of first postoperative day drain fluid amylase (DFA1) measurement to predict POPF. Methods: Patients in whom DFA1 was measured after distal pancreatectomy were identified from a prospectively maintained database over a five-year period. A cut-off value of DFA1 was derived using ROC analysis, which yielded sensitivity and negative predictive value of 100% for excluding POPF. Results: DFA1 was available in 53 of 138 (38%) patients who underwent distal pancreatectomy. Nineteen of 53 patients (36%) developed a pancreatic fistula (Grade A - 15, Grade B - 3, Grade C - 1). Median DFA1 was significantly higher in those who developed a pancreatic fistula (5473; range 613-28,450) compared with those without (802; range 57-2350; p < 0.0001). Using ROC analysis, a DFA1 less than 600 excluded pancreatic fistula with a sensitivity of 100% (AUROC of 0.91; SE = 0.04, p < 0.001). Conclusion: First postoperative day drain fluid amylase measurement may have a role in excluding pancreatic fistula after distal pancreatectomy. Such patients may be suitable for earlier drain removal. abstract_id: PUBMED:37653152 Postoperative Day 1 Drain Amylase After Pancreatoduodenectomy: Optimal Level to Predict Pancreatic Fistula. Introduction: Drain amylase on day 1 (DA-D1) after pancreaticoduodenectomy (PD) to predict occurrence of postoperative pancreatic fistula (POPF) is controversial. In this study, we evaluate the optimal DA-D1 level to predict clinically relevant POPF (CR-POPF). Methods: The 2014-2020 NSQIP pancreatectomy-targeted database was queried for patients who underwent elective PD. Perioperative data were extracted to determine development of POPF and CR-POPF per International Study Group of Pancreatic Fistula guidelines. Receiver operating characteristic (ROC) curve analysis and Youden's index were used to assess the performance and optimal cutoff for DA-D1 to predict CR-POPF. The DA-D1 value was confirmed with a multivariable logistic regression to determine hazard ratios (HR) for CR-POPF and conditional logistic regression by modified fistula risk score (mFRS) subgroups. Results: A total of 6,087 patients with complete perioperative data were included. Mean DA-D1 was 2,897 ± 8,636 U/L; median drain duration was 5 days. CR-POPF was documented in 544 (8.9%) patients. DA-D1 ROC for CR-POPF had area under the curve of 0.779 (95% CI 0.759-0.798). Youden's index for the CR-POPF ROC coordinates had 77.6% sensitivity and 66.3% specificity, corresponding to DA-D1 values ≥ 720 U/L as an optimal cutoff. CR-POPF was higher for patients with DA-D1 ≥ 720 U/L (HR 4.6; p = 0.001). Patients with DA-D1 < 720 U/L and a negligible, low, intermediate, or high mFRS had CR-POPF rates of 1%, 3%, 4%, and 7%, respectively. Conclusion: DA-D1 < 720 U/L after elective PD is a clinically useful predictor of CR-POPF. For patients with negligible to intermediate FRS, surgeons should consider utilizing DA-D1 < 720 U/L for removal of a drain on the first postoperative day.
Methods: The records of 97 consecutive patients who underwent DP at Ehime University Hospital between June 2009 and August 2020 were retrospectively reviewed. Patient characteristics, preoperative blood biochemistry data, operative findings, and postoperative findings until postoperative day (POD) 3 were investigated as potential predictors of clinically relevant POPF (CR-POPF). The product of the drain fluid amylase (DFA) value (U/L) and the drainage amount (mL/day) was defined as DFA output (U/day). Results: Of 97 patients who underwent DP, 23 (23.7%) developed CR-POPF. On multivariate analyses, high C-reactive protein (CRP) levels on POD 3 (>14.0 mg/dL) and high DFA output on POD 3 (>34 U/day) were found to be independent predictors of CR-POPF (odds ratios, 7.580 and 4.751, respectively; 95% confidence intervals, 2.052-27.995 and 1.487-15.175, respectively). Furthermore, the CRP value was helpful for predicting delayed CR-POPF in patients without POPF on POD 3, and DFA output was useful for predicting the development of CR-POPF in patients diagnosed with POPF on POD 3. Conclusion: Postoperative CRP values and DFA output may facilitate appropriate postoperative management after DP. abstract_id: PUBMED:32109648 C-reactive protein and drain amylase accurately predict clinically relevant pancreatic fistula after partial pancreaticoduodenectomy. Background: C-reactive protein (CRP) and procalcitonin (PCT) have been shown to be reliable predictors of inflammatory complications and anastomotic leak after colorectal surgery. Their predictive value after partial pancreaticoduodenectomy (PD) remains unclear. Materials And Methods: All consecutive pancreaticoduodenectomies (2009-2018) at our hospital were included. Drain amylase was evaluated on postoperative day (POD) 1, and serum CRP and PCT were evaluated on PODs 1-3. Receiver operating characteristic curve analyses were performed and significant cut-off values were tested using logistic regression. Results: Among 188 patients who underwent partial PD, clinically relevant pancreatic fistulas (POPF) occurred in 30 (16%) patients, including 20 (10.6%) with Grade B and 10 (5.3%) patients with Grade C. Postoperative complications (Clavien-Dindo ≥ III) were reported in 46 (24.5%) patients, including Grade IIIa in 16 (8.5%), IIIb in 18 (9.6%), IVa in 3 (1.6%), IVb in 2 (1.1%) and V in 7 (3.7%) patients. Drain amylase on POD 1 showed the largest area under the curve (0.872, p < 0.001), followed by CRP (0.803, p < 0.001) and PCT on POD 3 (0.651, p < 0.011). Drain amylase on POD 1 > 303 U/l (OR 0.045, 95% CI 0.010-0.195, p < 0.001), CRP > 203 mg/l (OR 0.098, 95% CI 0.041-0.235, p < 0.001) and PCT > 0.85 μg/l (OR 0.393, 95% CI 0.178-0.869, p = 0.02) were significant predictors of relevant POPF in the univariate analysis. CRP > 203 mg/l (OR 0.098, 95% CI 0.024-0.403, p = 0.001) and drain amylase > 303 U/l (OR 0.064, 95% CI 0.007-0.554, p = 0.01) remained independent predictors in the multivariable analysis. The combination of drain amylase on POD 1 and CRP on POD 3 had a sensitivity of 87.4% and a specificity of 90.9% for predicting relevant POPF. Conclusion: Drain amylase on POD 1 and CRP on POD 3 can accurately predict clinically relevant POPF after partial pancreaticoduodenectomy. The accuracy of PCT on POD 3 is limited. Answer: Yes, the measurement of amylase in drain fluid after distal pancreatectomy can predict postoperative pancreatic fistula (POPF).
Several studies have demonstrated the utility of measuring drain fluid amylase (DFA) at various postoperative days (PODs) as a predictive factor for POPF. One study found that a drain fluid amylase level greater than 5000 U/L on the first postoperative day (DFA1) can be considered a predictive factor for pancreatic fistula, suggesting that systematic measurement of amylase in drain fluid on the first postoperative day is a good clinical practice (PUBMED:26117433). Another study reported that low drain amylase values on POD 1 and 3 are associated with a negative predictive value of 98.7% for developing significant POPF, indicating that patients with low amylase levels on these days are unlikely to develop POPF and may have their drains removed earlier (PUBMED:30588532). Furthermore, the intra-operative amylase concentration (IOAC) in peri-pancreatic fluid has been shown to correlate significantly with the development of POPF and can allow early and accurate categorization of patients at risk (PUBMED:28321709). Additionally, the drain and serum amylase concentration ratio (DSACR) on POD 3 has been identified as the most reliable indicator for predicting POPF after distal pancreatectomy, with a high area under the curve (AUC) value in receiver operating characteristic (ROC) analysis (PUBMED:37046241). Moreover, drain fluid amylase concentration (DFAC) has been found to be more reliable than the total drain fluid amylase amount (DFAA) for predicting clinically relevant POPF after both pancreaticoduodenectomy (PD) and distal pancreatectomy (DP) (PUBMED:34755016). In summary, the measurement of amylase in drain fluid after distal pancreatectomy is a valuable tool for predicting the risk of developing POPF, and it can guide clinical decisions regarding the management of postoperative drains.
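The cutoff-style reasoning that recurs in the abstracts above (for example, DFA1 > 5000 U/L, DSACR > 22 on POD 3, or DA-D1 ≥ 720 U/L selected by Youden's index) can be illustrated with a short worked sketch. This is not code from any of the cited studies: the amylase values, outcome labels, and function names below are hypothetical, and the sketch only shows the generic ROC/Youden procedure from which such cutoffs are usually derived.

```python
# Illustrative sketch only: how a drain-fluid-amylase cutoff for predicting POPF
# might be derived with ROC analysis and Youden's index. The values below are
# invented for demonstration and do not come from the cited studies.

def roc_points(values, labels):
    """Return (sensitivity, specificity, threshold) for every candidate cutoff."""
    points = []
    for threshold in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= threshold and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < threshold and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < threshold and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= threshold and y == 0)
        points.append((tp / (tp + fn), tn / (tn + fp), threshold))
    return points

def youden_cutoff(points):
    """Pick the threshold that maximises Youden's J = sensitivity + specificity - 1."""
    return max(points, key=lambda p: p[0] + p[1] - 1)

# Hypothetical POD-1 drain amylase values (U/L) and POPF outcome (1 = fistula).
amylase = [350, 520, 690, 880, 1200, 2400, 3100, 4800, 5600, 7200]
popf    = [0,   0,   0,   1,   0,    1,    1,    1,    1,    1]

sens, spec, cutoff = youden_cutoff(roc_points(amylase, popf))
print(f"cutoff >= {cutoff} U/L: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

Sweeping each observed value as a candidate threshold and keeping the one that maximises Youden's J is the standard way the paired sensitivity and specificity figures quoted alongside these cutoffs are produced.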
Instruction: Can proopiomelanocortin methylation be used as an early predictor of metabolic syndrome? Abstracts: abstract_id: PUBMED:24222450 Can proopiomelanocortin methylation be used as an early predictor of metabolic syndrome? Objective: The objectives of this study were to compare early predictive marker of the metabolic syndrome with proopiomelanocortin (POMC) methylation status and to determine the association among birth weight, ponderal index, and cord blood methylation status. Research Design And Methods: We collected pregnancy outcome data from pregnant women, cord blood samples at delivery, and blood from children (7-9 years old; n = 90) through a prospective cohort study at Ewha Womans University, MokDong Hospital (Seoul, Korea), from 2003-2005. POMC methylation was assessed by pyrosequencing. We divided subjects into three groups according to cord blood POMC methylation: the low methylation (<10th percentile), mid-methylation, and high methylation (>90th percentile) groups. We analyzed the association of POMC methylation status at birth with adiposity and metabolic components using ANCOVA and multiple linear regression analysis. Results: Birth weights (P = 0.01) and ponderal indices (P = 0.01) in the high POMC methylation group were significantly lower than in the mid-POMC methylation group. In terms of metabolic components of childhood, blood triglycerides (57.97, 67.29 vs. 113.89 mg/dL; P = 0.03, 0.01) and insulin (7.10, 7.64 vs. 10.13 μIU/mL; P = 0.05, 0.02) at childhood were significantly higher in the high POMC methylation group than in the low and mid-POMC methylation group. Conclusions: High POMC methylation in cord blood was associated with lower birth weight, and children with high POMC methylation in cord blood showed higher triglycerides and higher insulin concentrations in blood. Thus, POMC methylation status in cord blood may be an early predictive marker of future metabolic syndrome. abstract_id: PUBMED:19723777 Hypothalamic proopiomelanocortin promoter methylation becomes altered by early overfeeding: an epigenetic model of obesity and the metabolic syndrome. Pre- and neonatal overfeeding programmes a permanent obesity disposition and accompanying diabetic and cardiovascular disorders, by unknown mechanisms. We proposed that early overfeeding may alter DNA methylation patterns of hypothalamic promoter regions of genes critically involved in the lifelong regulation of food intake and body weight. We induced neonatal overfeeding by rearing Wistar rats in small litters (SL) and thereafter mapped the DNA methylation status of CpG dinucleotides of gene promoters from hypothalamic tissue, using bisulfite sequencing. Neonatal overfeeding led to rapid early weight gain, resulting in a metabolic syndrome phenotype, i.e. obesity, hyperleptinaemia, hyperglycaemia, hyperinsulinaemia, and an increased insulin/glucose ratio. Accompanying, without group difference to controls, the promoter of the main orexigenic neurohormone, neuropeptide Y, was methylated at low levels (i.e. < 5%). In contrast, in SL rats the hypothalamic gene promoter of the main anorexigenic neurohormone, proopiomelanocortin (POMC), showed hypermethylation (P < 0.05) of CpG dinucleotides within the two Sp1-related binding sequences (Sp1, NF-kappaB) which are essential for the mediation of leptin and insulin effects on POMC expression. Consequently, POMC expression lacked upregulation, despite hyperleptinaemia and hyperinsulinaemia. 
Accordingly, the extent of DNA methylation within Sp1-related binding sequences was inversely correlated to the quotients of POMC expression/leptin (P = 0.02) and POMC expression/insulin (P < 0.001), indicating functionality of acquired epigenomic alterations. These data for the first time demonstrate a nutritionally acquired alteration of the methylation pattern and, consequently, the regulatory 'set point' of a gene promoter that is critical for body weight regulation. Our findings reveal overfeeding as an epigenetic risk factor of obesity programming and consecutive diabetic and cardiovascular disorders and diseases, in terms of the metabolic syndrome. abstract_id: PUBMED:33338550 Temporary effects of neonatal overfeeding on homeostatic control of food intake involve alterations in POMC promoter methylation in male rats. A small litter (SL) model was used to determine how neonatal overfeeding affects the homeostatic control of food intake in male rats at weaning and postnatal day (PND) 90. At PND4, litters were reduced to small (4 pups/dam) or normal (10 pups/dam) litters. At weaning, SL rats showed higher body weight and characteristic features of the metabolic syndrome. Gene expression of pro-opiomelanocortin (POMC), cocaine and amphetamine regulated transcript, neuropeptide Y (NPY) and leptin and ghrelin (GHSR) receptors were increased and POMC promoter was hypomethylated in arcuate nucleus, indicating that the early development of obesity may involve the GHSR/NPY system and changes in POMC methylation state. At PND90, body weight, metabolic parameters and gene expression were restored; however, POMC methylation state remained altered. This work provides insight into the effects of neonatal overfeeding, showing the importance of developmental plasticity in restoring early changes in central pathways involved in metabolic programming. abstract_id: PUBMED:32489968 Long-term effects of pro-opiomelanocortin methylation induced in food-restricted dams on metabolic phenotypes in male rat offspring. Objective: Maternal malnutrition affects the growth and metabolic health of the offspring. Little is known about the long-term effect on metabolic indices of epigenetic changes in the brain caused by maternal diet. Thus, we explored the effect of maternal food restriction during pregnancy on metabolic profiles of the offspring, by evaluating the DNA methylation of hypothalamic appetite regulators at 3 weeks of age. Methods: Sprague-Dawley rats were divided into 2 groups: a control group and a group with a 50% food-restricted (FR) diet during pregnancy. Methylation and expression of appetite regulator genes were measured in 3-week-old offspring using pyrosequencing, real-time polymerase chain reaction, and western blotting analyses. We analyzed the relationship between DNA methylation and metabolic profiles by Pearson's correlation analysis. Results: The expression of pro-opiomelanocortin (POMC) decreased, whereas DNA methylation significantly increased in male offspring of the FR dams, compared to the male offspring of control dams. Hypermethylation of POMC was positively correlated with the levels of high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol in 3-week-old male offspring. In addition, there were significant positive correlations between hypermethylation of POMC and the levels of triglycerides, HDL-C, and leptin in 6-month-old male offspring. 
Conclusion: Our findings suggest that maternal food restriction during pregnancy influences the expression of hypothalamic appetite regulators via epigenetic changes, leading to the development of metabolic disorders in the offspring. abstract_id: PUBMED:28479374 Cafeteria diet differentially alters the expression of feeding-related genes through DNA methylation mechanisms in individual hypothalamic nuclei. We evaluated the effect of cafeteria diet (CAF) on the mRNA levels and DNA methylation state of feeding-related neuropeptides, and neurosteroidogenic enzymes in discrete hypothalamic nuclei. In addition, the expression of steroid hormone receptors was analyzed. Female rats fed with CAF from weaning increased their energy intake, body weight, and fat depots, but did not develop metabolic syndrome. The increase in energy intake was related to an orexigenic signal of paraventricular (PVN) and ventromedial (VMN) nuclei, given principally by upregulation of AgRP and NPY. This was mildly counteracted by the arcuate nucleus, with decreased AgRP expression and increased POMC and kisspeptin expression. CAF altered the transcription of neurosteroidogenic enzymes in PVN and VMN, and epigenetic mechanisms associated with differential promoter methylation were involved. The changes observed in the hypothalamic nuclei studied could add information about their differential role in food intake control and how their action is disrupted in obesity. abstract_id: PUBMED:29598821 Association between the DNA methylations of POMC, MC4R, and HNF4A and metabolic profiles in the blood of children aged 7-9 years. Background: Proopiomelanocortin (POMC), melanocortin 4 receptor (MC4R), and hepatocyte nuclear factor 4 alpha (HNF4A) are closely associated with weight gain and metabolic traits. In a previous study, we demonstrated associations between the methylations of POMC, MC4R, and HNF4A and metabolic profiles at birth. However, little is known about these associations in obese children. To evaluate the clinical utility of epigenetic biomarkers, we investigated whether an association exists between the methylations of POMC, MC4R, and HNF4A and metabolic profiles in the blood of normal weight and overweight and obese children. Methods: We selected 79 normal weight children and 41 overweight and obese children aged 7-9 years in the Ewha Birth and Growth Cohort study. POMC methylation levels at exon 3, and MC4R and HNF4A methylation levels in promoter regions were measured by pyrosequencing. Serum glucose, total cholesterol (TC), triglyceride, high-density lipoprotein cholesterol (HDL-c), and insulin levels were analyzed using a biochemical analyzer and an immunoradiometric assay. Partial correlation and multiple regression analysis were used to assess relationships between POMC, MC4R, and HNF4A methylation levels and metabolic profiles. Results: Significant correlations were found between POMC methylation and HDL-c levels, and between HNF4A methylation and both TC and HDL-c levels. Interestingly, associations were found between POMC methylation status and HDL-c levels, and between HNF4A methylation status and TC levels independent of body mass index. Conclusions: These findings show that the POMC, MC4R, and HNF4A methylation statuses in the blood of children are associated with metabolic profiles. Therefore, we suggest that DNA methylation status might serve as a potential epigenetic biomarker of metabolic syndrome.
abstract_id: PUBMED:23803567 High folate gestational and post-weaning diets alter hypothalamic feeding pathways by DNA methylation in Wistar rat offspring. Excess vitamins, especially folate, are consumed during pregnancy but later-life effects on the offspring are unknown. High multivitamin (10-fold AIN-93G, HV) gestational diets increase characteristics of metabolic syndrome in Wistar rat offspring. We hypothesized that folate, the vitamin active in DNA methylation, accounts for these effects through epigenetic modification of food intake regulatory genes. Male offspring of dams fed 10-fold folate (HFol) diet during pregnancy and weaned to recommended vitamin (RV) or HFol diets were compared with those born to RV dams and weaned to RV diet for 29 weeks. Food intake and body weight were highest in offspring of HFol dams fed the RV diet. In contrast, the HFol pup diet in offspring of HFol dams reduced food intake (7%, p = 0.02), body weight (9%, p = 0.03) and glucose response to a glucose load (21%, p = 0.02), and improved glucose response to an insulin load (20%, p = 0.009). HFol alone in either gestational or pup diet modified gene expression of feeding-related neuropeptides. Hypomethylation of the pro-opiomelanocortin (POMC) promoter occurred with the HFol pup diet. POMC-specific methylation was positively associated with glucose response to a glucose load (r = 0.7, p = 0.03). In conclusion, the obesogenic phenotype of offspring from dams fed the HFol gestational diet can be corrected by feeding them a HFol diet. Our work is novel in showing post-weaning epigenetic plasticity of the hypothalamus and that in utero programming by vitamin gestational diets can be modified by vitamin content of the pup diet. abstract_id: PUBMED:33630246 Early post-natal life stress induces permanent adrenocorticotropin-dependent hypercortisolism in male mice. Purpose: It has been hypothesized that specific early-life stress (ES) procedures on CD-1 male mice produce diabetes-like alterations due to the failure of negative feedback of glucocorticoid hormone in the pituitary. The aim of this study is to investigate the possible mechanism that leads to this pathological model, framing it in a more specific clinical condition. Methods: Metabolic and hypothalamic-pituitary-adrenal-related hormones of stressed mice (SM) have been analyzed immediately after stress procedures (21 postnatal days, PND) and after 70 days of a peaceful (unstressed) period (90 PND). These data have been compared to parameters from age-matched controls (CTR), and mice treated during ES procedures with oligonucleotide antisense for pro-opiomelanocortin (AS-POMC). Results: At 21 PND, SM presented an increased secretion of hypothalamic CRH and pituitary POMC-derived peptides, as well as higher plasmatic levels of ACTH and corticosterone vs. CTR. At 90 PND, SM showed hyperglycemia, with suppression of hypothalamic CRH, while pituitary and plasmatic ACTH levels, as well as plasma corticosterone, were constantly higher than in CTR. These values are accompanied by a progressive acceleration in gaining total body weight, which became significant vs. CTR at 90 PND together with a higher pituitary weight. Treatment with AS-POMC prevented all hormonal and metabolic alterations observed in SM, both at 21 and 90 PND. 
Conclusions: These findings show that these specific ES procedures affect the negative glucocorticoid feedback in the pituitary, but not in the hypothalamus, suggesting a novel model of ACTH-dependent hypercortisolism that can be prevented by silencing the POMC gene. abstract_id: PUBMED:30377736 Exposure of pregnant mice to triclosan causes hyperphagic obesity of offspring via the hypermethylation of proopiomelanocortin promoter. Triclosan (TCS), as a broad spectrum antibacterial agent, is commonly utilized in personal care and household products. Maternal urinary TCS level has been associated with changes in birth weight of infants. We in the present study investigated whether exposure of mice to 8 mg/kg TCS from gestational day (GD) 6 to GD14 alters prenatal and postnatal growth and development, and metabolic phenotypes in male and female offspring (TCS-offspring). Compared with control offspring, body weight in postnatal day (PND) 1 male or female TCS-offspring was reduced, but body weight gain was faster within postnatal 5 days. PND30 and PND60 TCS-offspring showed overweight with increases in visceral fat and adipocyte size. PND60 TCS-offspring displayed delayed glucose clearance and insulin resistance. PND30 TCS-offspring showed an increase in food intake without the changes in the oxygen consumption and respiratory exchange ratio (RER). The expression levels of proopiomelanocortin (POMC), α-melanocyte-stimulating hormone (α-MSH) and single-minded 1 (SIM1) in hypothalamus arcuate nucleus (ARC) and paraventricular nucleus (PVN), respectively, were significantly reduced in PND30 TCS-offspring compared to controls. The hypermethylation of CpG sites at the POMC promoter was observed in PND30 TCS-offspring, while the concentration of serum leptin was elevated and the level of STAT3 phosphorylation in ARC had no significant difference from control. This study demonstrates that TCS exposure during early/mid-gestation through the hypermethylation of the POMC promoter reduces the expression of anorexigenic neuropeptides to cause the postnatal hyperphagic obesity, leading to metabolic syndrome in adulthood. abstract_id: PUBMED:23151358 The lipocalin-type prostaglandin D2 synthase knockout mouse model of insulin resistance and obesity demonstrates early hypothalamic-pituitary-adrenal axis hyperactivity. Obesity and diabetes are closely associated with hyperactivation of the hypothalamic-pituitary-adrenal (HPA) axis. In this study, the diet-induced obese C57BL/6 mouse was used to test the hypothesis that chronically elevated metabolic parameters associated with the development of obesity such as cholesterol and glucose can aggravate basal HPA axis activity. Because the lipocalin-type prostaglandin D(2) synthase (L-PGDS) knockout (KO) mouse is a model of accelerated insulin resistance, glucose intolerance, and obesity, it was further hypothesized that HPA activity would be greater in this model. Starting at 8 weeks of age, the L-PGDS KO and C57BL/6 mice were maintained on a low-fat or high-fat diet. After 20 or 37 weeks, fasting metabolic parameters and basal HPA axis hormones were measured and compared between genotypes. Correlation analyses were performed to identify associations between obesity-related chronic metabolic changes and changes in the basal activity of the HPA axis. Our results have identified strong positive correlations between total cholesterol, LDL-cholesterol, glucose, and HPA axis hormones that increase with age in the C57BL/6 mice. 
These data confirm that obesity-related elevations in cholesterol and glucose can heighten basal HPA activity. Additionally, the L-PGDS KO mice show early elevations in HPA activity with no age-related changes relative to the C57BL/6 mice. Answer: Yes, proopiomelanocortin (POMC) methylation can potentially be used as an early predictor of metabolic syndrome. Studies have shown that high POMC methylation in cord blood is associated with lower birth weight, and children with high POMC methylation in cord blood exhibited higher triglycerides and higher insulin concentrations in blood, suggesting that POMC methylation status in cord blood may be an early predictive marker of future metabolic syndrome (PUBMED:24222450). Additionally, early overfeeding has been shown to alter DNA methylation patterns of hypothalamic promoter regions of genes critically involved in the lifelong regulation of food intake and body weight, including POMC, which may lead to a metabolic syndrome phenotype characterized by obesity, hyperleptinaemia, hyperglycaemia, hyperinsulinaemia, and an increased insulin/glucose ratio (PUBMED:19723777). Furthermore, temporary effects of neonatal overfeeding on homeostatic control of food intake involve alterations in POMC promoter methylation, indicating the importance of developmental plasticity in restoring early changes in central pathways involved in metabolic programming (PUBMED:33338550). These findings collectively suggest that POMC methylation status could serve as a potential epigenetic biomarker of metabolic syndrome (PUBMED:29598821).
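As a purely illustrative aside, the percentile-based grouping used in PUBMED:24222450 (cord-blood POMC methylation below the 10th percentile, between the 10th and 90th, or above the 90th percentile, followed by comparison of childhood metabolic markers across groups) can be sketched as follows. Every number in the sketch is a made-up placeholder, and the study itself used ANCOVA and multiple linear regression rather than the simple group means shown here.

```python
# Minimal sketch of the percentile-based grouping described above (PUBMED:24222450):
# subjects are split by cord-blood POMC methylation into <10th percentile, middle,
# and >90th percentile groups, and a childhood metabolic marker is averaged per
# group. All numbers here are hypothetical placeholders, not study data.

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already sorted list (p in 0-100)."""
    k = max(0, min(len(sorted_vals) - 1, round(p / 100 * (len(sorted_vals) - 1))))
    return sorted_vals[k]

methylation = [2.1, 2.8, 3.0, 3.4, 3.9, 4.2, 4.8, 5.5, 6.1, 7.9]   # % methylation
triglyceride = [60, 55, 70, 58, 72, 66, 80, 75, 90, 115]            # mg/dL

s = sorted(methylation)
low_cut, high_cut = percentile(s, 10), percentile(s, 90)

groups = {"low": [], "mid": [], "high": []}
for m, tg in zip(methylation, triglyceride):
    key = "low" if m < low_cut else "high" if m > high_cut else "mid"
    groups[key].append(tg)

for name, vals in groups.items():
    mean = sum(vals) / len(vals) if vals else float("nan")
    print(f"{name:<4} methylation group: mean triglycerides {mean:.1f} mg/dL (n={len(vals)})")
```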
Instruction: Chromatin assembly factor-1 (CAF-1)-mediated regulation of cell proliferation and DNA repair: a link with the biological behaviour of squamous cell carcinoma of the tongue? Abstracts: abstract_id: PUBMED:17543081 Chromatin assembly factor-1 (CAF-1)-mediated regulation of cell proliferation and DNA repair: a link with the biological behaviour of squamous cell carcinoma of the tongue? Aims: Squamous cell carcinoma (SCC) of the tongue shows aggressive behaviour and a poor prognosis. Clinicopathological parameters fail to provide reliable prognostic information, so the search continues for new molecular markers for this tumour. Chromatin assembly factor-1 (CAF-1) plays a major role in chromatin assembly during cell replication and DNA repair and has been proposed as a new proliferation marker. The aim of this study was to investigate its expression in SCC of the tongue. Methods And Results: The immunohistochemical expression of the p60 and p150 subunits of CAF-1 were evaluated in a series of SCCs of the tongue. The findings were correlated with the expression of proliferation cell nuclear antigen (PCNA) and patients' clinicopathological and follow-up data. CAF-1/p60 was expressed in all the tumours, whereas CAF-1/p150 was down-regulated in a number of cases. Overexpression of CAF-1/p60 and down-regulation of CAF-1/p150 identified SCCs with poor outcome, in addition to the classical prognostic parameters. Conclusions: Simultaneous CAF-1-mediated deregulation of cell proliferation and DNA repair takes place in aggressive SCC of the tongue. Therefore, the evaluation of CAF-1 expression may be a valuable tool for evaluation of the biological behaviour of these tumours. This may be relevant to the introduction of improved follow-up protocols and/or alternative therapeutic regimens. abstract_id: PUBMED:31318980 Immunohistochemical assessment of chromatin licensing and DNA replication factor 1, geminin, and γ-H2A.X in oral epithelial precursor lesions and squamous cell carcinoma. Background: Carcinogenesis occurs when the cell cycle is compromised. Chromatin licensing and DNA replication factor 1, geminin, and γ-H2A histone family member X are expressed in cells in G1 phase, S/G2 /M phases, and apoptosis, respectively, and these three markers may be useful for histological evaluation of malignant lesions. Here, we aimed to identify cell cycle phases and apoptosis using immunohistochemistry in oral epithelial precursor lesions and oral squamous cell carcinoma. Methods: Chromatin licensing and DNA replication factor 1, geminin, and γ-H2A histone family member X expression levels were immunohistochemically examined in tissue specimens from 55 patients with oral epithelial precursor lesions and 50 patients with oral squamous cell carcinoma. Associations of clinicopathological variables with marker expression were assessed. Results: Chromatin licensing and DNA replication factor 1 was expressed in the prickle cell layer of oral epithelial precursor lesions and many carcinoma cells of oral squamous cell carcinoma. Geminin reactivity was widely distributed in high-grade dysplasia and oral squamous cell carcinoma rather than low-grade or no dysplastic cases. γ-H2A histone family member X was expressed in the superficial layer of oral epithelial precursor lesions and scattered carcinoma cells of oral squamous cell carcinoma. In oral squamous cell carcinoma, lower geminin expression was observed in recurrent cases. 
Geminin and γ-H2A histone family member X were associated with the degree of differentiation and mode of invasion. Conclusion: Chromatin licensing and DNA replication factor 1, geminin, and γ-H2A histone family member X expression levels were correlated with oral carcinogenesis; these markers were associated with clinicopathological behaviors in oral squamous cell carcinoma. abstract_id: PUBMED:37783225 Silencing of heat shock factor 1 (HSF1) inhibits proliferation, invasion, and epithelial-mesenchymal transition in oral squamous cell carcinoma. Background: Oral squamous cell carcinoma is characterized by high rates of morbidity and mortality. Evidence obtained for different types of cancer shows that tumor initiation, progression, and therapeutic resistance are regulated by heat shock factor 1. This research aimed to analyze the effects of heat shock factor 1 on the biological behavior of oral squamous cell carcinoma. Methods: Clinicopathological and immunoexpression study of heat shock factor 1 in 70 cases of oral tongue SCC and functional assays by gene silencing of this factor in an oral tongue SCC cell line. Results: Heat shock factor 1 was overexpressed in oral tongue SCC specimens compared to normal oral mucosa (p < 0.0001) and in the SCC15 line compared to immortalized keratinocytes (p < 0.005). No significant associations were observed between overexpression of heat shock factor 1 and clinicopathological parameters or survival rates of the oral tongue SCC cases in the present sample. In vitro experiments showed that heat shock factor 1 silencing inhibited cell proliferation (p < 0.005) and cell cycle progression, with the accumulation of cells in the G0/G1 phase (p < 0.01). In addition, heat shock factor 1 silencing reduced cell invasion capacity (p < 0.05) and epithelial-mesenchymal transition, characterized by a decrease in vimentin expression (p < 0.05) and an increase in E-cadherin expression (p < 0.001). Conclusion: Heat shock factor 1 may exert several functions that help maintain cell stability under the stressful conditions of the tumor microenvironment. Thus, strategies targeting the regulation of this protein may in the future be a useful therapeutic tool to control the progression of oral squamous cell carcinoma. abstract_id: PUBMED:34218518 LAMC1 upregulation via TGFβ induces inflammatory cancer-associated fibroblasts in esophageal squamous cell carcinoma via NF-κB-CXCL1-STAT3. Cancer-associated fibroblasts (CAF) are a heterogeneous cell population within the tumor microenvironment,and play an important role in tumor development. By regulating the heterogeneity of CAF, transforming growth factor β (TGFβ) influences tumor development. Here, we explored oncogenes regulated by TGFβ1 that are also involved in signaling pathways and interactions within the tumor microenvironment. We analyzed sequencing data of The Cancer Genome Atlas (TCGA) and our own previously established RNA microarray data (GSE53625), as well as esophageal squamous cell carcinoma (ESCC) cell lines with or without TGFβ1 stimulation. We then focused on laminin subunit gamma 1 (LAMC1), which was overexpressed in ESCC cells, affecting patient prognosis, which could be upregulated by TGFβ1 through the synergistic activation of SMAD family member 4 (SMAD4) and SP1. LAMC1 directly promoted the proliferation and migration of tumor cells, mainly via Akt-NFκB-MMP9/14 signaling. Additionally, LAMC1 promoted CXCL1 secretion, which stimulated the formation of inflammatory CAF (iCAF) through CXCR2-pSTAT3. 
Inflammatory CAF promoted tumor progression. In summary, we identified the dual mechanism by which the upregulation of LAMC1 by TGFβ in tumor cells not only promotes ESCC proliferation and migration, but also indirectly induces carcinogenesis by stimulating CXCL1 secretion to promote the formation of iCAF. This finding suggests that LAMC1 could be a potential therapeutic target and prognostic marker for ESCC. abstract_id: PUBMED:29972923 Role of specificity protein 1 in transcription regulation of microRNA-92b in head and neck squamous cell carcinoma. Objective: To investigate the role of transcription factor specificity protein 1 (SP1) in proliferation, migration and invasion in head and neck squamous cell carcinoma (HNSCC), and the role of SP1 in the transcriptional regulation of microRNA (miRNA)-92b. Methods: The possible target miRNAs of the transcription factor SP1 were predicted by bioinformatic analysis. The binding sites between SP1 and the miRNA-92b promoter region were then confirmed by chromatin immunoprecipitation. After transfection of SP1 siRNA or negative control siRNA, quantitative real-time PCR (qPCR), cell proliferation assays and Transwell assays were performed. Results: Bioinformatic analysis indicated that SP1 is a possible transcription factor of miRNA-92b. Chromatin immunoprecipitation identified three binding sites in the miRNA-92b promoter region to which SP1 can bind. By qPCR, SP1 expression in the experimental group was significantly lower than in the negative control group in PCI-4A and PCI-37A cells (0.064±0.020 and 0.639±0.008, respectively, vs. 1 in both controls; P<0.05). miRNA-92b expression in the experimental group was likewise significantly lower than in the negative control group (0.215±0.033 and 0.497±0.104, respectively, vs. 1 in both controls; P<0.05). After SP1 knockdown, proliferation (absorbance A values) of PCI-4A and PCI-37A cells in the experimental group was significantly lower than in the negative control group (P<0.05). Migration of PCI-4A and PCI-37A cells in the experimental group (37.0±4.6 and 40.7±2.1, respectively) was significantly lower than in the negative control group (101.0±5.3 and 82.7±5.7; P<0.05). Invasion in the experimental group (31.3±10.8 and 37.0±4.6, respectively) was also significantly lower than in the negative control group (92.3±3.1 and 70.3±3.1; P<0.05). Conclusions: SP1 promotes the proliferation, migration and invasion of HNSCC cells. SP1 is a transcription factor of miRNA-92b and is directly involved in the transcriptional regulation of miRNA-92b. abstract_id: PUBMED:34590150 FOXP4 promotes laryngeal squamous cell carcinoma progression through directly targeting LEF‑1. Forkhead box (FOX) proteins are multifaceted transcription factors that have been shown to be involved in cell cycle progression, proliferation and metastasis. FOXP4, a member of the FOX family, has been implicated in diverse biological processes in tumor initiation and progression. However, the molecular mechanisms of FOXP4 in laryngeal squamous cell carcinoma (LSCC) remain unknown. In the present study, differentially expressed transcripts in transforming growth factor‑β‑treated TU177 cells were screened using microarrays and it was found that FOXP4 was significantly upregulated. The high expression of FOXP4 was detected in LSCC tissues and cells, and predicted poor prognosis. The role of FOXP4 in laryngeal cancer cell proliferation, migration and invasion was determined by gain‑ and loss‑of‑function assays.
Besides, FOXP4 was demonstrated to participate in the epithelial‑mesenchymal transition process at the mRNA and protein levels. Mechanically, FOXP4 directly bound to the promoter of lymphoid enhancer‑binding factor 1 and activated Wnt signaling pathway, which was confirmed via chromatin immunoprecipitation and luciferase reporter assays. Consequently, these findings provided novel mechanisms of FOXP4 in LSCC progression, which may be considered as potential therapeutic and prognostic targets for LSCC. abstract_id: PUBMED:25961369 Low miR-145 silenced by DNA methylation promotes NSCLC cell proliferation, migration and invasion by targeting mucin 1. MiR-145 has been implicated in the progression of non-small cell lung cancer (NSCLC); however, its exact mechanism is not well established. Here, we report that miR-145 expression is decreased in NSCLC cell lines and tumor tissues and that this low level of expression is associated with DNA methylation. MiR-145 methylation in NSCLC was correlated with a more aggressive tumor phenotype and was associated with poor survival time, as shown by Kaplan-Meier analysis. Additional multivariate Cox regression analysis indicated that miR-145 methylation was an independent prognostic factor for poor survival in patients with NSCLC. Furthermore, we found that restoration of miR-145 expression inhibited proliferation, migration and invasion of NSCLC by the direct targeting of mucin 1 by miR-145. Our results indicate that low miR-145 expression, due to methylation, promotes NSCLC cell proliferation, migration and invasion by targeting mucin 1. Therefore, miR-145 may be a valuable therapeutic target for NSCLC. abstract_id: PUBMED:20819078 MicroRNA-7 targets IGF1R (insulin-like growth factor 1 receptor) in tongue squamous cell carcinoma cells. miR-7 (microRNA-7) has been characterized as a tumour suppressor in several human cancers. It targets a number of proto-oncogenes that contribute to cell proliferation and survival. However, the mechanism(s) by which miR-7 suppresses tumorigenesis in TSCC (tongue squamous cell carcinoma) is unknown. The present bioinformatics analysis revealed that IGF1R (insulin-like growth factor 1 receptor) mRNA is a potential target for miR-7. Ectopic transfection of miR-7 led to a significant reduction in IGF1R at both the mRNA and protein levels in TSCC cells. Knockdown of miR-7 in TSCC cells enhanced IGF1R expression. Direct targeting of miR-7 to three candidate binding sequences located in the 3'-untranslated region of IGF1R mRNA was confirmed using luciferase-reporter-gene assays. The miR-7-mediated down-regulation of IGF1R expression attenuated the IGF1 (insulin-like growth factor 1)-induced activation of Akt (protein kinase B) in TSCC cell lines, which in turn resulted in a reduction in cell proliferation and cell-cycle arrest, and an enhanced apoptotic rate. Taken together, the present results demonstrated that miR-7 regulates the IGF1R/Akt signalling pathway by post-transcriptional regulation of IGF1R. Our results indicate that miR-7 plays an important role in TSCC and may serve as a novel therapeutic target for TSCC patients. abstract_id: PUBMED:29968320 High-mobility group box 1 protein modulated proliferation and radioresistance in esophageal squamous cell carcinoma. 
Background And Aim: The high-mobility group box 1 (HMGB1) protein plays an important role in many biological behaviors, including DNA damage repair, gene transcription, cell replication, and cell death, and its expression is higher in many solid tumor tissues than in their adjacent normal tissues, and it is commonly implicated in tumor proliferation, metastasis, therapeutic tolerance, and poor prognosis. However, the role of HMGB1 in the proliferation and radioresistance of esophageal squamous cell carcinoma (ESCC) remains poorly understood. In this study, the effect of HMGB1 on proliferation, cell death, DNA damage repair and radioresistance, and its underlying mechanism, was investigated in human ESCC. Methods: The immunohistochemistry scores of tumor and adjacent normal tissues in an ESCC tissue microarray were analyzed. Stable HMGB1 knockdown cell lines were constructed using Kyse150 and Kyse450 cells. Cell viability, radioresistance, apoptosis, autophagy, and DNA damage were determined using CCK-8, 5-ethynyl-2'-deoxyuridine, clonogenic survival assay, immunofluorescence, flow cytometry, and western blot assays. Results: Differential analyses showed that the expression of HMGB1 in esophageal cancer tissue was significantly higher than that in adjacent normal tissues. The downregulation of HMGB1 could effectively inhibit proliferation, increase radiosensitivity, impair DNA damage repair abilities, reduce autophagy, and increase apoptosis rates in ESCC cells after irradiation. Conclusions: HMGB1 is expected to be a potential target for ESCC therapy and radiosensitization. abstract_id: PUBMED:23109837 Tissue microarray-based evaluation of Chromatin Assembly Factor-1 (CAF-1)/p60 as tumour prognostic marker. In this study we aimed to confirm the emerging role of Chromatin Assembly Factor 1 (CAF-1 p60) as a new proliferation and prognostic marker for cancer and to test the usefulness of the tissue microarray technique (TMA) for CAF-1 p60 rapid screening in several human malignancies. CAF-1 is a histone chaperone, regulating chromatin dynamics during DNA replication and repair in eukaryotes. TMA is a powerful high-throughput methodology in the study of cancer, allowing simultaneous assessment of different biomarkers within large numbers of tissue specimens. We generated TMAs by taking 3 mm diameter core biopsies from oral squamous cell carcinoma, prostate cancer, salivary gland tumours and skin melanoma specimens, which had been previously tested for CAF-1 p60 on routine tissue sections. We also analysed, for the first time, 30 larynx and 30 skin squamous cell carcinomas. CAF-1 p60 was found to be over-expressed in both the tissue sections and the TMA specimens, with the highest levels of expression in tumours which were more aggressive and metastasizing. Notably, a high degree of agreement was found between the CAF-1 p60 assessment on TMAs and on routine tissue sections. Our findings confirm the prognostic role of CAF-1 p60 and indicate TMA as a highly advantageous method for CAF-1 p60 immunohistochemical screening, allowing savings on both tissue quantity and operator time. Answer: The study described in abstract PUBMED:17543081 investigated the expression of Chromatin Assembly Factor-1 (CAF-1) in squamous cell carcinoma (SCC) of the tongue and its potential link with the tumor's aggressive behavior and prognosis. CAF-1 is known to play a significant role in chromatin assembly during cell replication and DNA repair, and it has been proposed as a new proliferation marker.
The immunohistochemical expression of CAF-1 subunits p60 and p150 was evaluated in a series of SCCs of the tongue and correlated with the expression of proliferation cell nuclear antigen (PCNA) and patients' clinicopathological and follow-up data. The study found that CAF-1/p60 was expressed in all tumors, while CAF-1/p150 was down-regulated in some cases. Overexpression of CAF-1/p60 and down-regulation of CAF-1/p150 were associated with SCCs that had a poor outcome, in addition to classical prognostic parameters. The results suggest that simultaneous CAF-1-mediated deregulation of cell proliferation and DNA repair occurs in aggressive SCC of the tongue. Therefore, evaluating CAF-1 expression may be a valuable tool for assessing the biological behavior of these tumors, which could be relevant for improving follow-up protocols and/or alternative therapeutic regimens.
Instruction: CT Urography for Diagnosis of Upper Urinary Tract Urothelial Carcinoma: Are Both Nephrographic and Excretory Phases Necessary? Abstracts: abstract_id: PUBMED:26295668 CT Urography for Diagnosis of Upper Urinary Tract Urothelial Carcinoma: Are Both Nephrographic and Excretory Phases Necessary? Objective: The objective of our study was to compare the diagnostic performance of nephrographic phase only, excretory phase only, and both nephrographic and excretory phases of CT urography (CTU) for the detection of upper tract urothelial carcinoma. Materials And Methods: Forty-nine consecutive patients with pathologically proven upper tract urothelial carcinoma who underwent a single-bolus CTU examination were evaluated. Forty-nine control patients with normal findings on two CTU examinations performed at a 1-year interval were included. Two radiologists independently reviewed the 98 CTU examinations at three different sessions (nephrographic phase only, excretory phase only, and both nephrographic and excretory phases simultaneously) and rated the likelihood of the presence of a urothelial carcinoma in each segment of the renal collecting system and ureter using a 5-point scale. Sensitivity, specificity, and AUC of ROC curve were calculated per segment and per patient. Results: A total of 314 segments, 56 of which contained tumors, were evaluated. In the per-segment analysis for reviewers 1 and 2, the sensitivity, specificity, and AUC, respectively, were as follows: 88%, 98%, and 0.95 and 84%, 97%, and 0.94 for the nephrographic phase; 79%, 98%, and 0.91 and 89%, 98%, and 0.95 for the excretory phase; and 88%, 99%, and 0.95 and 89%, 99%, and 0.96 for the combined nephrographic and excretory phases. The AUC of the combined nephrographic and excretory phases was significantly higher than that of the nephrographic phase (per-patient analysis, reviewer 2) and that of excretory phase (per-segment analysis, reviewer 1) but was not significantly different in any other comparisons. Conclusion: The nephrographic and excretory phases are complementary for the detection of upper tract urothelial carcinoma. abstract_id: PUBMED:21512076 Comparison of CT urography and excretory urography in the detection and localization of urothelial carcinoma of the upper urinary tract. Objective: The purpose of this study was to compare the accuracy of CT urography and excretory urography for the detection and localization of upper urinary tract urothelial carcinoma. Materials And Methods: Of 128 patients at high risk for upper tract urothelial carcinoma who were examined with both CT urography and excretory urography between 2002 and 2007, 24 were undiagnosed and excluded. CT urography and excretory urography results of the remaining 104 patients and 552 urinary tract segments were compared with histopathologic examination or follow-up imaging at 1 year. Two readers independently scored the confidence levels for the presence or absence of upper urinary tract urothelial carcinoma in each of six upper urinary tract segments on both CT urography and excretory urography; differences were resolved by consensus. Results: Upper urinary tract urothelial carcinoma was diagnosed in 77 (14%) segments of 46 (44%) patients. 
Per-patient sensitivity, specificity, overall accuracy, and area under the receiver operating characteristic curves for detecting carcinomas with CT urography (93.5% [43/46], 94.8% [55/58], 94.2% [98/104], and 0.963, respectively) were significantly greater than those for excretory urography (80.4% [37/46], 81.0% [47/58], 80.8% [84/104], and 0.831, respectively) (p = 0.041, p = 0.027, p = 0.001, and p < 0.001, respectively). Per-segment sensitivity and overall accuracy for the localization of upper urinary tract urothelial carcinoma were significantly greater with CT urography (87.0% [67/77] and 97.8% [540/552]) than with excretory urography (41.6% [32/77] and 91.5% [505/552]) (p < 0.0001). Conclusion: CT urography was more accurate than excretory urography in the detection and localization of upper urinary tract urothelial carcinoma and should be considered as the initial examination for the evaluation of patients at high risk for upper urinary tract urothelial carcinoma. abstract_id: PUBMED:33348865 CT Urography Findings of Upper Urinary Tract Carcinoma and Its Mimickers: A Pictorial Review. Urothelial carcinoma (UC) is the fourth most frequent tumor in Western countries and upper tract urothelial carcinoma (UTUC), affecting pyelocaliceal cavities and ureter, accounts for 5-10% of all UCs. Computed tomography urography (CTU) is now considered the imaging modality of choice for diagnosis and staging of UTUC, guiding disease management. Although its specificity is very high, both benign and malignant diseases could mimic UTUCs and therefore have to be well-known to avoid misdiagnosis. We describe CTU findings of upper urinary tract carcinoma, features that influence disease management, and possible differential diagnosis. abstract_id: PUBMED:30617530 Value of imaging in upper urinary tract tumors Background: Staging of bladder cancer, hematuria as well as the evaluation of unclear findings of the kidneys and ureters are the most frequent indications for imaging of the upper urinary tract (UUT). Endourological assessment of the UUT is much more invasive compared to imaging of the bladder, raising the question of the optimal imaging technique. Several technical improvements regarding computed tomography (CT) as well as magnetic resonance imaging (MRI) were implemented in recent years. Objectives: To compare the efficacy and limitations of the most important imaging techniques regarding the UUT. Materials And Methods: Systematic review of the literature and current German, European, and American guidelines regarding bladder cancer, urothelial carcinoma of the UUT and hematuria. Results: The CT-based urography has superseded excretory urography and is the first choice for imaging of the UUT. In case of contraindications, MRI is a feasible alternative. In all cases, a urography phase is indispensable. Conclusions: Imaging of the UUT has to be used in a reasonable combination together with endourological methods and cytology. Optical coherence tomography, confocal laser endomicroscopy and scientific innovations such as radiomics might improve UUT imaging and differential diagnosis of UUT lesions in the future. abstract_id: PUBMED:21748467 Usefulness of computed tomography performed immediately after excretory urography in patients with delayed opacification or dilated upper urinary tract of unknown cause. 
Purpose: To evaluate the diagnostic value of computed tomography (CT) performed immediately after excretory urography (EU) in patients with delayed renal opacification or dilated upper urinary system with nonconclusive diagnosis after EU. Materials And Methods: CT was performed immediately after EU in 39 patients with delayed opacification or dilated upper urinary system of unknown cause, without additional intravenous contrast administration for the CT study. We classified EU + CT findings as benign or malignant causes and we compared our results with the final diagnosis. Results: The combination of EU + CT correctly diagnosed 38 out of the 39 cases with a sensitivity of 97%. Correct diagnosis was established in all malignant cases (n = 17) but one benign case consistent with blood clots in the upper urinary tract was incorrectly diagnosed as a multicentric urothelial carcinoma. Sensitivity, specificity, and accuracy for the diagnosis of the underlying cause with EU + CT was 100%, 95%, and 97%, respectively. The final diagnoses were: urothelial carcinoma (n = 10), stone disease (n = 10), bladder tumor (n = 4), benign post-treatment ureteral stenosis (n = 4), ureteral invasion (n = 3), benign bladder disease (n = 2), urinary tract infections (n = 2), crossing vessels (n = 1), ureteropelvic junction obstruction (n = 1), retrocaval ureter (n = 1), and blood clots in the upper urinary tract due to bleeding renal metastasis from lung cancer (n = 1). Conclusion: Combined EU and CT study allowed correct diagnosis of the underlying cause of delayed excretion or upper urinary tract dilatation in 97% of cases. The combination of EU and CT provides diagnosis reducing time and radiation. abstract_id: PUBMED:28126213 Upper and Lower Tract Urothelial Imaging Using Computed Tomography Urography. Computed tomography (CT) urography is the best noninvasive method of evaluating the upper urinary tract for urothelial malignancies. However, the utility of CT urography is heavily contingent on the use of proper image acquisition protocols. This article focuses on the appropriate protocols for optimizing CT urography acquisitions, including contrast administration and the timing of imaging acquisitions, as well as the use of ancillary techniques to increase collecting system distention. In addition, imaging findings are discussed that should raise concern for urothelial carcinoma at each of the 3 segments of the urinary tract: the intrarenal collecting systems, ureters, and bladder. abstract_id: PUBMED:19037644 MR urography for suspected upper tract urothelial carcinoma. The key components of the MR urography protocol for suspected upper tract urothelial carcinoma are coronal T2-weighted hydrographic sequences without contrast agent and coronal gadolinium-enhanced T1-weighted 3D-spoiled gradient-recalled echo in nephrographic and pyelographic phases. Upper tract urothelial carcinomas can be categorized into papillary tumor, flat tumor, and infiltrative tumor based on the growth pattern and extent. Papillary lesions appear as small filling defects of soft tissue signal on T2-weighted hydrographic and T1-weighted pyelographic phase images. On nephrographic phase images, the lesions show homogeneous enhancement. A flat tumor appears as a segmental area of diffuse thickening and enhancement of the urinary tract wall on nephrographic phase images. Infiltrative tumor often appears as a large heterogeneously enhancing mass. 
MR urography is a promising alternative for CT urography in the evaluation of upper tract urothelial carcinoma, especially when the patient has a contraindication to iodinated contrast material. abstract_id: PUBMED:26750188 Role of computed tomography urography in the clinical evaluation of upper tract urothelial carcinoma. Intravenous urography has been widely used for the evaluation of upper tract urothelial carcinoma. However, computed tomography urography presently has a higher diagnostic accuracy for upper tract urothelial carcinoma (94.2-99.6%) than intravenous urography (80.8-84.9%), and has replaced intravenous urography as the first-line imaging test for investigating patients with a high risk of upper tract urothelial carcinoma. Although the detection rate for bladder tumors using standard computed tomography urography is not yet high enough to replace cystoscopy, the addition of a 60- to 80-s delayed scan after the administration of contrast material for the whole pelvis improves the detection rate. A drawback to computed tomography urography is the higher radiation dose of 15-35 mSv, compared with a mean effective dose of 5-10 mSv for intravenous urography. Among several approaches to reducing the radiation dose, the use of an iterative reconstruction algorithm is most likely to become an effective solution because of its simplicity. One advantage of computed tomography urography over intravenous urography is its ability to reliably differentiate between upper tract urothelial carcinoma and calculi or blood clots. Computed tomography urography also shows characteristic findings of other benign conditions. These findings, in combination with negative cytology, are very important diagnostic clues for avoiding an unnecessary nephroureterectomy. For the clinical staging, a recent study has reported the high diagnostic accuracy of computed tomography urography with respect to ≥pT3 tumors. The present review shows the current status of computed tomography urography for the evaluation of upper tract urothelial carcinoma. abstract_id: PUBMED:31440804 Upper urinary tract urothelial carcinoma on multidetector CT: spectrum of disease. Urothelial carcinoma of the upper urinary tract (UUT) is a relatively uncommon genitourinary malignancy, accounting for about 5-7% of urothelial tumors. The significant features of this tumor are multifocality and high rate of recurrence. Computed tomography urography (CTU) has replaced excretory urography (EU) and retrograde pyelography (RP) for imaging of upper tract urothelial carcinoma. While many studies have confirmed high sensitivity (88-100%) and specificity (93-100%) of CTU, an optimized CT protocol is of critical importance in screening, staging, and post-operative follow-up of patients (Chlapoutakis, Eur J Radiol 73(2):334-338, 2010; Caoli and Cohan, Abdom Radiol (NY) 41(6):1100-1107, 2016). The key element of the CT protocol is to have adequate distension of the collecting system with excreted contrast, to detect subtle lesions at an early stage. In this article, we discuss the background of upper urinary tract TTC, pathogenesis, CT protocol and the role of imaging in evaluation of this malignancy, staging, as well as different imaging appearances. abstract_id: PUBMED:30031461 Upper and Lower Tract Urothelial Imaging Using Computed Tomography Urography. Computed tomography (CT) urography is the best noninvasive method of evaluating the upper urinary tract for urothelial malignancies. 
However, the utility of CT urography is heavily contingent on the use of proper image acquisition protocols. This article focuses on the appropriate protocols for optimizing CT urography acquisitions, including contrast administration and the timing of imaging acquisitions, as well as the use of ancillary techniques to increase collecting system distention. In addition, imaging findings are discussed that should raise concern for urothelial carcinoma at each of the 3 segments of the urinary tract: the intrarenal collecting systems, ureters, and bladder. Answer: CT urography (CTU) is a critical imaging modality for the diagnosis of upper urinary tract urothelial carcinoma (UTUC). The question of whether both nephrographic and excretory phases are necessary for the detection of UTUC has been addressed in the literature. A study comparing the diagnostic performance of nephrographic phase only, excretory phase only, and both phases combined found that the nephrographic and excretory phases are indeed complementary for the detection of UTUC. The area under the curve (AUC) of the combined nephrographic and excretory phases was significantly higher than that of the nephrographic phase alone for one reviewer in the per-patient analysis and higher than that of the excretory phase alone for another reviewer in the per-segment analysis. However, there were no significant differences in other comparisons, suggesting that while the combined phases may offer some diagnostic advantage, each phase alone still provides substantial diagnostic value (PUBMED:26295668). CT urography has been shown to be more accurate than excretory urography in the detection and localization of UTUC, and it should be considered as the initial examination for evaluating patients at high risk for this type of cancer (PUBMED:21512076). Moreover, CTU is now considered the imaging modality of choice for the diagnosis and staging of UTUC, guiding disease management (PUBMED:33348865). In terms of imaging techniques, CT-based urography has superseded excretory urography and is the first choice for imaging of the upper urinary tract (UUT). MRI is a feasible alternative in cases of contraindication to CT, and a urography phase is indispensable in all cases (PUBMED:30617530). In conclusion, while both nephrographic and excretory phases of CTU provide valuable diagnostic information and their combination may offer some advantages, each phase alone is also capable of detecting UTUC with high sensitivity and specificity. The choice of phase(s) may depend on the specific clinical scenario, patient risk factors, and the need for detailed anatomical and functional assessment of the urinary tract.
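To make the arithmetic behind the figures quoted above concrete, the following minimal Python sketch reproduces the reported sensitivity, specificity and accuracy of the combined EU + CT study from a standard 2 x 2 contingency table. The counts (17 malignant cases, all correctly identified; 22 benign cases, 1 misread as malignant) are taken from the abstract; the function name and layout are purely illustrative.

def diagnostic_metrics(tp, fn, tn, fp):
    """Return sensitivity, specificity and accuracy as fractions."""
    sensitivity = tp / (tp + fn)                # malignant cases correctly identified
    specificity = tn / (tn + fp)                # benign cases correctly identified
    accuracy = (tp + tn) / (tp + fn + tn + fp)  # all correct calls over all cases
    return sensitivity, specificity, accuracy

# 17 malignant cases, all detected; 22 benign cases, 1 misdiagnosed as malignant.
sens, spec, acc = diagnostic_metrics(tp=17, fn=0, tn=21, fp=1)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, accuracy={acc:.0%}")
# -> sensitivity=100%, specificity=95%, accuracy=97%, matching the values quoted in the abstract.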
Instruction: Does weight-adjusted anti-tumour necrosis factor treatment favour obese patients with Crohn's disease? Abstracts: abstract_id: PUBMED:23337170 Does weight-adjusted anti-tumour necrosis factor treatment favour obese patients with Crohn's disease? Background: Adalimumab (ADA) is a subcutaneous anti-tumour necrosis factor (anti-TNF) agent, effective in inducing and maintaining remission in Crohn's disease (CD). Unlike Infliximab (IFX), ADA dosing is not weight adjusted and dose frequency is based on clinical response. Aim: To determine whether obesity is a risk factor for early loss of response (LOR) to anti-TNF treatment and whether weight-adjusted anti-TNF treatment is favourable. Materials And Methods: A hospital database of CD patients receiving anti-TNF treatment was analyzed retrospectively. The relationship between time to LOR and BMI was examined by Kaplan-Meier (KM) survival curves and a Cox proportional hazards model. Results: ADA patients: Of the 54 patients (46 BMI<30 and 8 BMI≥30), KM estimation indicated a significantly shorter time to dose escalation in the BMI of at least 30 (χ=6.117, P=0.01). The Cox proportional hazards model showed that an increased hazard of LOR to ADA is related to increases in BMI (P=0.04). IFX patients: Of the 76 patients (62 BMI<30 and 14 BMI≥30), KM estimation showed that the differences in survival curves were not significant (χ=1.933, P=0.16) for the BMI groups. This was supported by the Cox proportional hazard model (P=0.36). Conclusion: BMI appears to be important in predicting ADA efficacy (LOR) in CD. IFX appears to overcome this reduction of efficacy in obese patients. A prospective study evaluating the effect of weight on anti-TNF drug response and serum drug levels is warranted. abstract_id: PUBMED:31955605 Associations Between Obesity and the Effectiveness of Anti-Tumor Necrosis Factor-α Agents in Inflammatory Bowel Disease Patients: A Literature Review and Meta-analysis. Background: A total of 15% to 40% of adult inflammatory bowel disease (IBD) patients are obese. The influence of obesity on anti-tumor necrosis factor-α (anti-TNF-α) treatment in IBD patients is not consistent. Objective: To determine the association between obesity and the efficacy of anti-TNF treatment in IBD patients. Methods: We performed a systematic search from January 1990 through November 2019 on MEDLINE, Web of Science, Google Scholar, ClinicalTrials.gov, and Cochrane library. We included randomized controlled trials and observational cohort studies that investigated the outcome of anti-TNF treatment in IBD patients with stratification according to body mass index or body weight. The odds ratio (OR) and its 95% CI were calculated. Results: In this pooled meta-analysis, we observed that obesity increased the odds of failure of anti-TNF therapy (OR = 1.195; 95% CI = 1.034-1.380; P = 0.015; I2 = 47.8%). After performing subgroup analyses, obesity was associated with higher odds of anti-TNF treatment failure in ulcerative colitis (UC) patients (OR = 1.413; 95% CI = 1.008-1.980; P = 0.045; I2 = 20.0%) but not in Crohn's disease patients (OR = 1.099; 95% CI = 0.928-1.300). Obesity significantly increased the odds of treatment failure of both dose-fixed and weight-based anti-TNF agents (OR = 1.121, 95% CI = 1.027-1.224, P = 0.011, and OR = 1.449, 95% CI = 1.006-2.087, P = 0.046, respectively). Conclusion and Relevance: In our meta-analysis, obesity was associated with the inferior response of anti-TNF treatments in UC patients. 
Clinicians should be aware that obese UC patients may require higher doses of anti-TNF treatment. abstract_id: PUBMED:28895012 Biologic Agents Are Associated with Excessive Weight Gain in Children with Inflammatory Bowel Disease. Background: Children with active inflammatory bowel disease (IBD) are frequently underweight. Anti-tumor necrosis factor (anti-TNF) agents may induce remission and restore growth. However, their use in other autoimmune diseases has been associated with excess weight gain. Our aim was to examine whether children with IBD could experience excess weight gain. Methods: A centralized diagnostic index identified pediatric IBD patients evaluated at our institution who received anti-TNF therapy for at least 1 year between August 1998 and December 2013. Anthropometric data were collected at the time of anti-TNF initiation and annually. Excess weight gain was defined as ΔBMI SDS (standard deviation score) where patients were (1) reclassified from "normal" to "overweight/obese," (2) reclassified from "overweight" to "obese," or (3) had a final BMI SDS >0 and ΔSDS >0.5. Results: During the study period, 268 children received anti-TNF therapy. Of these, 69 had sufficient follow-up for a median of 29.3 months. Median age at first anti-TNF dose was 12.8 years. At baseline, mean weight SDS was -0.7 (SD 1.4), while mean BMI SDS was -0.6 (1.3). Using baseline BMI SDS, 11.6% were overweight/obese. At last follow-up (LFU), however, the mean ΔBMI SDS was 0.50 (p < 0.0001). However, 10 (17%) patients had excess weight gain at LFU; 3 patients were reclassified from "normal" to "obese," and 7 had a final BMI SDS >0 and ΔSDS >0.5. Conclusions: Pediatric patients with IBD may experience excess weight gain when treated with anti-TNF agents. Monitoring for this side effect is warranted. abstract_id: PUBMED:34291800 Visceral Adipose Tissue Volumetrics Inform Odds of Treatment Response and Risk of Subsequent Surgery in IBD Patients Starting Antitumor Necrosis Factor Therapy. Background: Data describing the effect of obesity on antitumor necrosis factor (anti-TNF) treatment response are inconsistent. Visceral adipose tissue (VAT) is a superior marker of adiposity to body mass index. However, its effect on treatment response is unclear. We aimed to evaluate the effect of VAT on anti-TNF treatment response. Methods: Inflammatory bowel disease (IBD) patients starting anti-TNF agents between January 1, 2009, and July 31, 2019, were included. Three-dimensional measurements of VAT volume and visceral fat index (visceral:subcutaneous adipose tissue ratio; VFI) were obtained from computed tomography (CT) scans. Subjects were categorized by predefined volume cutoffs (<1500cm3, 1500-2999cm3, ≥3000cm3) and VFI (<0.33, 0.33-0.66, ≥0.67). Primary outcomes included a composite treatment response end point at 6 and 12 months. Secondary outcomes were surgery at 6 and 12 months. Multivariable logistic regression was used to calculate adjusted odds ratios (aOR) and 95% confidence intervals (CI). Results: The final cohort included 176 patients. No significant differences in treatment response at 6 months were observed. At 12 months, compared with volume <1500cm3, patients with volume 1500-2999cm3 had higher odds of response (aOR, 3.52; 95% CI, 1.16-10.71; P = .023), whereas volume ≥3000cm3 did not. Compared with VFI <0.33, VFI ≥0.67 had higher odds of surgery at 6 (aOR, 48.22; 95% CI, 4.73-491.57; P = .023) and 12 months (aOR, 20.94; 95% CI, 3.14-139.67; P = .004). Post hoc analysis suggested VAT may affect drug pharmacokinetics.
Conclusions: We found VAT volume is associated with anti-TNF treatment response in a nondose dependent manner, and VFI may inform risk of surgery after anti-TNF initiation. If confirmed by prospective studies, VAT volumetrics are potentially useful biomarkers to inform IBD treatment decisions. abstract_id: PUBMED:38291691 Trajectory of body mass index and obesity in children with Crohn's disease compared to healthy children. Background: There is increasing recognition that children with Crohn's Disease (CD) can develop obesity. Methods: Using the RISK Study, an inception cohort of pediatric CD participants, and Bone Mineral Density in Childhood Study (BMDCS), a longitudinal cohort of healthy children, multivariable linear mixed effects, generalized linear mixed effects, and logistic regression models were used to evaluate factors associated with change in body mass index z-score (BMIZ), obesity, and excessive weight gain, respectively. Results: 1029 CD participants (625 exposed to antitumor necrosis factor (anti-TNF) therapy) and 1880 healthy children were included. Change in BMIZ was higher in CD exposed to anti-TNF as compared to CD unexposed to anti-TNF and the healthy reference group. Sex, age, baseline BMIZ, C-reactive protein, anti-TNF, and steroids were associated with changes in BMIZ in CD. CD exposed (odds ratio [OR] 4.81, confidence interval [CI] 4.00-5.78) and unexposed (OR 3.14, CI 2.62-3.76) had a greater likelihood of becoming obese versus the healthy reference group. While the prevalence of obesity was higher at baseline in the healthy reference group (21.3%) versus CD participants (8.5% exposed vs. 11.1% unexposed), rates of obesity were similar by the end of follow-up (21.4% healthy vs. 20.3% exposed vs. 22.5% unexposed). Anti-TNF therapy was an independent risk factor for the development of obesity and excessive weight gain in CD participants. Conclusions: Patients with CD have dynamic changes in BMIZ over time, and while for most, this is restorative, for some, this can lead to obesity and excessive weight gain. It is important to understand the factors that may lead to these changes, including anti-TNF therapy. Counseling of patients and early lifestyle intervention may be necessary. abstract_id: PUBMED:29385469 Anti-TNF Therapeutic Drug Monitoring in Postoperative Crohn's Disease. Background: Anti-TNF prevents postoperative Crohn's disease recurrence in most patients but not all. This study aimed to define the relationship between adalimumab pharmacokinetics, maintenance of remission and recurrence. Methods: As part of a study of postoperative Crohn's disease management, some patients undergoing resection received prophylactic postoperative adalimumab. In these patients, serum and fecal adalimumab concentration and serum anti-adalimumab antibodies [AAAs] were measured at 6, 12 and 18 months postoperatively. Levels of Crohn's disease activity index [CDAI], C-reactive protein [CRP] and fecal calprotectin [FC] were assessed at 6 and 18 months postoperatively. Body mass index and smoking status were recorded. A colonoscopy was performed at 6 and/or 18 months. Results: Fifty-two patients [32 on monotherapy and 20 on combination therapy with thiopurine] were studied. Adalimumab concentration did not differ significantly between patients in endoscopic remission vs recurrence [Rutgeerts ≥ i2] [9.98µg/mL vs 8.43 µg/mL, p = 0.387]. 
Patients on adalimumab monotherapy had a significantly lower adalimumab concentration [7.89 µg/mL] than patients on combination therapy [11.725 µg/mL] [p = 0.001], and were significantly more likely to have measurable AAA [31% vs 17%, p = 0.001]. Adalimumab concentrations were lower in patients with detectable AAA compared with those without [3.59 µg/mL vs 12.0 µg/mL, p < 0.001]. Adalimumab was not detected in fecal samples. Adalimumab serum concentrations were lower in obese patients compared with in non-obese patients [p = 0.046]. Conclusion: Adalimumab concentration in patients treated with adalimumab to prevent symptomatic endoscopic recurrence postoperatively is, for most patients, well within the therapeutic window, and is not significantly lower in patients who develop recurrence compared with in those who remain in remission. Mechanisms of anti-TNF failure to prevent postoperative recurrence remain to be determined in these patients. abstract_id: PUBMED:23158500 Risk factors of non-alcoholic fatty liver disease in patients with inflammatory bowel disease. Background: Metabolic risk factors are associated with non-alcoholic fatty liver disease (NAFLD), but they are less frequent in inflammatory bowel disease (IBD). Aim: This study evaluates the frequency of NAFLD and its risk factors among IBD patients including anti-TNF-α therapy. Methods: IBD patients who underwent abdominal imaging from January, 2009 to December, 2010 were analyzed in this nested, case-controlled study. IBD patients with NAFLD by imaging were compared with those who had no evidence of NAFLD (control). Results: Among 928 IBD patients, 76 (8.2%) had evidence of NAFLD by imaging, and were compared to 141 patients without NAFLD evaluated (study: control ratio=~1:2). NAFLD patients were older (46.0 ± 13.3 vs. 42.0 ±14.1 years; p=0.018) and had a later onset of IBD compared to the control group (37.2 ± 15.3 vs. 28.7 ± 23.8 years; p=0.002). Metabolic syndrome was present in 29.0% of NAFLD patients, with a median Adult Treatment Panel risk factor of 2 [Interquartile range 1,3]. Patients not receiving anti-TNF-α therapy had a higher occurrence of NAFLD (p=0.048). In multivariate analysis, hypertension (OR=3.5), obesity (OR=2.1), small bowel surgeries (OR=3.7), and use of steroids at the time of imaging (OR=3.7) were independent factors associated with NAFLD. Conclusion: NAFLD occurred in 8.2% of the IBD population. NAFLD patients were older and had a later onset of IBD disease. IBD patients develop NAFLD with fewer metabolic risk factors than non-IBD NAFLD patients. It is also less common among patients who received anti-TNF-α therapy. abstract_id: PUBMED:36687448 Impact of tumor necrosis factor antagonist combination and anti-integrin therapies on body mass index in inflammatory bowel disease: A cross-sectional study. Background: The impact of biologic therapies on body mass index (BMI) in patients with inflammatory bowel disease (IBD) is unclear. This study investigates any associations between BMI, type of IBD, and the type of medications taken among patients with IBD with varying weight categories. Methods: A cross sectional study was performed in an IBD tertiary care center. Data was obtained from patients with IBD attending outpatient clinics from January 1st, 2021 until November 1st, 2021. Adult patients, older than 18 years, with a diagnosis of Crohn's disease (CD) or ulcerative colitis (UC) were recruited. The primary outcome was the association between BMI and medication used in IBD. 
The secondary outcome was the association between BMI and disease type and location in patients with IBD. Results: The study included a total of 528 patients, of which 66.5% had CD. Patients with normal weight comprised 55.9% of the participants, while those who were underweight, overweight or obese comprised 3.4%, 28.2%, and 12.5%, respectively. None of the underweight patients had UC. Among the normal weight, overweight and obese BMI categories, 34.6% vs. 36.2% vs. 31.8% had UC, respectively. Patients who are on tumor necrosis factor inhibitors (anti-TNF) with an immunomodulator (anti-TNF combination) are more likely to be overweight or obese than patients who are not on anti-TNF combination (OR 2.86, 95% CI 1.739-4.711, p < 0.001). Patients on vedolizumab are twice as likely to be overweight or obese as patients not on vedolizumab (OR 2.23, 95% CI 1.086-4.584, p < 0.05). Patients with ileocolonic CD are more likely to be overweight or obese compared to other subtypes of CD (OR 1.78, 95% CI 1.14-2.77, p = 0.01). Conclusion: Many patients with IBD are either obese or overweight. Patients with IBD who are on anti-TNF combination therapy or vedolizumab monotherapy are more likely to be obese or overweight. In addition, patients with ileocolonic CD are more likely to be obese or overweight. abstract_id: PUBMED:25795947 Whey and soy protein supplements change body composition in patients with Crohn's disease undergoing azathioprine and anti-TNF-alpha therapy. Background: Crohn's disease (CD) is a chronic transmural inflammation of the gastrointestinal tract of unknown cause. Malnutrition associated with active CD has become less common, although obesity has increased. Dietary strategies such as high-protein diets have been proposed to reduce body fat. This study compares the effects of two supplements on the nutritional status of CD patients. Materials And Methods: 68 CD patients were randomized into two groups: a whey protein group (WP) and a soy protein group (SP). Nutritional status was measured using bioimpedance analysis, anthropometry, and albumin and pre-albumin levels before starting the intervention and after 8 and 16 weeks. Disease activity was determined by the Crohn's Disease Activity Index and serum C-reactive protein, and dietary intake by 24-h dietary recalls. Results: Forty-one patients completed the study, and both supplements changed body composition similarly. Triceps skin fold thickness (p< 0.001) and body fat percentage (p=0.001) decreased, whereas mid-arm muscle circumference (p=0.004), corrected arm muscle area (p=0.005) and body lean percentage (p=0.001) increased. Conclusions: For Crohn's disease patients undergoing anti-TNF-alpha and azathioprine therapies, supplementation with whey and soy proteins changes body composition through reduction of body fat and thus contributes to controlling inflammation. abstract_id: PUBMED:33094963 The Effect of Adiposity on Anti-Tumor Necrosis Factor-Alpha Levels and Loss of Response in Crohn's Disease Patients. Introduction: A high body mass index is known to adversely affect antitumor necrosis factor-alpha trough levels and secondary loss of response (SLOR) in patients with Crohn's disease. We hypothesize that high levels of adiposity negatively affect these outcomes and aimed to determine if this relationship exists. Methods: We performed a retrospective cross-sectional study of 69 patients with Crohn's disease from two tertiary inflammatory bowel disease centers between February 1, 2015, and June 30, 2018.
Primary responders to infliximab (IFX) or adalimumab (ADA) who had a trough level performed within 6 months of CT or MRI scan and at least 12 months of clinical follow-up were eligible for inclusion. Body composition as measured on CT/MRI scans were correlated with trough concentration and time SLOR. Multivariate adjustments were made for established risk factors known to affect trough levels and SLOR. Results: Of 69 included patients, 44 (63.8%) and 25 (36.2%) patients received IFX and ADA, respectively. Multivariate analysis revealed that IFX trough concentrations were inversely correlated with visceral fat area (-0.02 [-0.04, -0.003], P = 0.03), visceral fat index (-0.07 [-0.12, -0.01], P = 0.02) and visceral fat: skeletal muscle area ratio (-3.81 [-7.13, -0.50], P = 0.03), but not body mass index (-0.23 [-0.52, 0.06], P = 0.11). No predictive factors were found for ADA. Increased total adipose area was associated with an increased risk of SLOR in ADA-treated patients, but not IFX-treated patients (hazard ratio = 1.01 [1.002, 1.016], P = 0.011). Discussion: Visceral adiposity is an important predictor of IFX trough levels, and high total adiposity predicts for SLOR to ADA. Answer: The relationship between obesity and the efficacy of anti-tumor necrosis factor (anti-TNF) treatment in Crohn's disease (CD) patients is complex and varies depending on the specific anti-TNF agent used. According to the study by Singh et al. (PUBMED:23337170), obesity appears to be a risk factor for early loss of response (LOR) to adalimumab (ADA), a non-weight adjusted anti-TNF treatment. The study found that obese patients (BMI≥30) had a significantly shorter time to dose escalation compared to non-obese patients, indicating that ADA may be less effective in obese patients. In contrast, the same study did not find a significant difference in the efficacy of infliximab (IFX), a weight-adjusted anti-TNF treatment, between obese and non-obese patients. This suggests that weight-adjusted anti-TNF treatments like IFX may overcome the reduction in efficacy seen in obese patients when using non-weight adjusted treatments like ADA. A meta-analysis by Singh et al. (PUBMED:31955605) also observed that obesity increased the odds of failure of anti-TNF therapy in inflammatory bowel disease (IBD) patients, particularly in those with ulcerative colitis (UC), but not significantly in CD patients. This indicates that while obesity is generally associated with a poorer response to anti-TNF treatments, the effect may be more pronounced in UC than in CD. Furthermore, the study by Singh et al. (PUBMED:33094963) found that high levels of adiposity negatively affected anti-TNF trough levels and secondary loss of response (SLOR) in CD patients, particularly with ADA treatment. This suggests that obese CD patients may require higher doses or more frequent dosing of non-weight adjusted anti-TNF agents like ADA to maintain therapeutic drug levels and prevent SLOR. In conclusion, while weight-adjusted anti-TNF treatments like IFX may not show a significant difference in efficacy between obese and non-obese CD patients, non-weight adjusted treatments like ADA may be less effective in obese patients, potentially favoring the use of weight-adjusted treatments in this population. However, the evidence is not entirely consistent, and further research is warranted to fully understand the impact of obesity on anti-TNF treatment efficacy in CD patients.
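Several of the abstracts above report pooled odds ratios with 95% confidence intervals (for example, OR = 1.195, 95% CI 1.034-1.380 for anti-TNF failure in obese patients). The sketch below shows one generic way such a pooled estimate can be computed: fixed-effect inverse-variance pooling of log odds ratios, with each study's standard error recovered from its confidence interval. This is not necessarily the exact model used in the cited meta-analysis (which may have used a random-effects model), and the per-study values below are hypothetical, included only to illustrate the mechanics.

import math

studies = [  # (odds ratio, lower 95% CI, upper 95% CI) - hypothetical illustrative values
    (1.10, 0.90, 1.35),
    (1.45, 1.05, 2.00),
    (1.20, 0.95, 1.52),
]

weights, weighted_logs = [], []
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the reported CI
    w = 1.0 / se ** 2                                 # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
pooled_or = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)
ci_high = math.exp(pooled_log + 1.96 * pooled_se)
print(f"pooled OR = {pooled_or:.3f}, 95% CI = {ci_low:.3f}-{ci_high:.3f}")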
Instruction: Axillary reverse mapping: Is it feasible in locally advanced breast cancer patients? Abstracts: abstract_id: PUBMED:32912671 Axillary reverse mapping in patients undergoing axillary dissection -a short review of the literature. Axillary lymph node dissection (ALND) can be avoided not only in patients with negative sentinel lymph nodes (SLNs) but also in those with one or two positive SLNs receiving breast or axillary radiation. However, ALND has remained the standard treatment for patients with clinically positive nodes (cN+). Although axillary reverse mapping (ARM) was developed to map and preserve arm lymphatic drainage during ALND, it could not be indicated for cN + patients because metastatic rate of ARM nodes is high. However, a new type of conservative ALND with ARM attempts to preserve ARM lymphatics and nodes except SLNs and other suspicious palpable nodes, including suspicious ARM nodes. This procedure allowed reduction of the rate of arm lymphedema without increasing axillary recurrence, although patients received postoperative chemotherapy and high-risk patients underwent axillary radiation. Thus, a traditional full ALND may not be necessary for cN + patients in the era of effective multimodality therapy. abstract_id: PUBMED:37958475 Axillary Reverse Mapping in Clinically Node-Positive Breast Cancer Patients. Background: Axillary reverse mapping (ARM) nodes are involved in a significant proportion of clinically node-positive (cN+) breast cancer patients. However, neoadjuvant chemotherapy (NAC) is effective at decreasing the incidence of nodal metastases in cN+ patients. Patients And Methods: One hundred forty-five cN+ patients with confirmed nodal involvement on ultrasound-guided fine needle aspiration cytology were enrolled in this study: one group underwent axillary lymph node dissection (ALND) without NAC (upfront surgery group), and the other group underwent ALND following NAC (NAC group). The patients underwent 18F-FDG-positron emission tomography/computed tomography (18F-FDG-PET/CT) before surgery, as well as an ARM procedure during ALND. Results: the rates of involvement of ARM nodes in the NAC group were significantly lower than those of the upfront surgery group (36.6% vs. 62.2%, p < 0.01). Notably, involvement was significantly decreased after NAC in non-luminal-type tumors as compared to the luminal-type (18.4% vs. 48.5%: p < 0.01). Moreover, there was a significant difference in ARM node involvement after NAC between patients with or without axillary uptake of 18F-FDG (61.5% vs. 32.5%: p < 0.01). Conclusions: NAC significantly decreased the risk of ARM node metastases in cN+ patients, but 18F-FDG-PET/CT was not suitable to detect residual metastatic disease of the axilla after NAC. abstract_id: PUBMED:32059974 Evaluation of axillary reverse mapping (ARM) in clinically axillary node negative breast cancer patients - Randomised controlled trial. Background: Axillary lymph node dissection (ALND) is an important procedure for control of axillary nodal metastasis in breast cancer patients. Lymphedema, restriction of shoulder movement and axillary nodal recurrence are the most disabling complications of the procedure. Axillary reverse mapping (ARM) procedure for arm lymph node identification emerged as a step for their preservation during ALND. Here we are testing the effect of ARM on lymphedema development and whether it compromises oncological safety in early breast cancer patients. 
Patients And Methods: 98 clinically node-negative female breast cancer patients undergoing completion ALND after a positive sentinel lymph node biopsy were recruited into the study. They were allocated to group A (49 patients, ARM + ve preservation ALND) and group B (49 patients, conventional ALND). The ARM procedure was performed in both groups; ARM-positive nodes were preserved in group A, and marked and taken out with the other axillary LN in group B. The outcomes were histopathology of ARM + ve LN, development of arm lymphedema, and restriction of shoulder movement on follow-up. Results: ARM was positive in 46 patients (93.8%) in group A and 43 patients (87.8%) in group B; ARM + ve LN revealed metastasis in only 1 patient (2.3%) in group B. Lymphedema developed in 3 patients (6.5%) in group A and 9 patients (20.9%) in group B. Restriction of shoulder movement showed a non-significant difference between the two groups. Conclusion: Axillary reverse mapping and preservation of arm lymphatics helped to decrease the lymphedema rate without compromising oncological safety in early breast cancer. abstract_id: PUBMED:34422491 Axillary Reverse Mapping in Patients Undergoing Axillary Lymph Node Dissection: A Single Institution Experience From India. Introduction Axillary lymph node dissection (ALND) remains the gold standard for clinically node-positive and sentinel node biopsy (SLNB) positive breast cancer patients, but it is associated with the debilitating morbidity of lymphedema. Recently, a new technique of axillary reverse mapping (ARM) has been described which helps in differentiating arm lymphatics from breast lymphatics. Aim To evaluate the applicability of the ARM technique with blue dye and the incidence of metastases in ARM nodes in the Indian population. Method A total of 120 patients underwent ARM during ALND. Blue lymphatic channels and lymph nodes were noted. All axillary nodes along with ARM nodes were dissected and sent separately for pathological evaluation for metastases. Results ARM nodes or lymphatics were identified in 65 (54.17%) out of 120 patients. The mean ARM lymph node yield was 1.4. The patients in whom ARM lymph nodes or lymphatics were not identified had significantly higher T stage and N stage (p <0.00001) than those in whom they were identified. There was no significant correlation between ARM identification and BMI, estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2/neu), or neoadjuvant chemotherapy (NACT) status. ARM nodes were found metastatic in three patients (7.5%). All these patients had clinically N2 disease and all had pathologically more than ten nodes involved in the axilla. Conclusion The identification rate of ARM nodes and lymphatics with blue dye is lower in Indian patients who present with higher clinical T and N stage disease. Other clinicopathological parameters were not associated with the identification rate. The rate of metastasis in ARM nodes is high in patients with a high axillary tumor burden. Hence, preserving ARM nodes may not be oncologically safe in higher N stage disease. abstract_id: PUBMED:37154174 Axillary reverse mapping in breast cancer: An overview. Standard operative management for breast carcinoma has shifted significantly from extensive procedures to minor interventions. Although axillary dissection was a fundamental component of operative management, sentinel lymph node biopsy is now the standard procedure for axillary staging.
Axillary dissection may be omitted in cases with negative SLNs, or with 1 or 2 involved lymph nodes when breast or axillary radiation is given. In contrast, axillary dissection is still the conventional management for patients with clinically positive nodes. Arm lymphedema is a frequent and overwhelming complication of axillary dissection, with a marked adverse impact on the patient's life. Axillary reverse mapping was recently introduced to map and preserve the lymphatic drainage of the upper limb during axillary dissection or sentinel biopsy. The technique is based on the premise that the lymphatics draining the breast differ from those draining the arm, so preserving the lymphatic drainage of the upper limb can prevent lymphedema without raising the risk of axillary recurrence. Therefore, this technique is the reverse of sentinel biopsy, which removes the lymph nodes that drain the breast. abstract_id: PUBMED:24934169 Axillary reverse mapping: Is it feasible in locally advanced breast cancer patients? Introduction: Axillary dissection is associated with a high incidence of lymphedema, which has been reduced by the introduction of sentinel lymph node biopsy (SLNB) in patients with early breast cancer. However, sentinel lymph node biopsy is not widely accepted in patients with locally advanced breast cancer (LABC) [T3N1, any T4, any N2-3 with no distant metastasis] after neo-adjuvant chemotherapy (NACT), and these patients routinely undergo axillary lymph node clearance. Axillary reverse mapping (ARM) with blue dye has the potential to differentiate the arm lymphatics from the breast lymphatics, and it can be used to decrease lymphedema in patients undergoing ALND by preserving these lymphatics. However, ARM in LABC patients is yet to be accepted as the standard of care. Materials And Methods: 51 patients with locally advanced breast carcinoma were included in the study from May 2011 to May 2012. All patients received neo-adjuvant chemotherapy followed by modified radical mastectomy. Axillary reverse mapping (ARM) was carried out using blue dye: 2 ml of methylene blue dye was injected intradermally into the upper medial aspect of the ipsilateral arm. The number, size, site and distribution of the lymph nodes identified were recorded, the nodes were labelled as ARM nodes, and complete axillary dissection was carried out. Results: Blue nodes were identified in 45 (88.2%) out of the 51 patients. The average number of ARM nodes identified was 4.03 ± 0.28 [range 1-8]. In the majority (77.8%) of cases, nodes were located in the triangle bounded above by the axillary vein, below by the first intercostobrachial nerve, and medially by the chest wall/serratus anterior. In patients with complete or partial response to NACT, ARM and breast axillary LN were negative in 63.3% of patients, whereas 36.6% had positive breast but negative ARM nodes. In this study we did not intend to preserve any ARM nodes, but in 90% of these cases at least one ARM node had to be removed or was injured during axillary clearance. ARM nodes could be identified in 15 (83.3%) out of the 18 patients with stable or progressive disease following NACT. 12 (80%) out of these 15 cases demonstrated positive ARM and breast LN, whereas 3 (20%) patients had positive breast but negative ARM nodes. Skin tattooing (82.3%) was the most common complication observed in our study. Conclusions: Identification rates of ARM nodes can be improved by injecting the blue dye into the upper medial aspect of the arm at the time of induction.
Majority of the arm nodes lie between the axillary vein and the first intercostobrachial nerve. It is difficult to preserve the ARM nodes in patients of LABC, who have had good response to NACT and in patients of LABC with poor response to NACT, the incidence of metastasis in ARM nodes is quite high. Therefore, ARM is not a feasible option in patients with locally advanced breast cancer. abstract_id: PUBMED:25704555 Is axillary reverse mapping feasible in breast cancer patients? In the surgical treatment of breast cancer, axillary lymph node dissection (ALND) can be avoided not only in sentinel lymph node (SLN)-negative patients but also in SLN-positive patients who undergo breast-conserving surgery with whole-breast irradiation and systemic therapy. However, it should be performed not only in clinically node-positive patients but also in other SLN-positive patients who do not meet the Z-0011 criteria. The axillary reverse mapping (ARM) technique has been developing for identifying and preserving lymphatic drainage from the arm during ALND, thereby expected to minimize arm lymphedema. Nevertheless, ARM nodes could be involved not only in clinically node-positive patients but also in clinically node-negative patients. Previously, it was considered that preservation of the ARM lymphatics or lymph nodes is not oncologically safe in patients with axillary lymph node metastases. However, recent studies have demonstrated that the ARM procedure is oncologically feasible in clinically node-negative, SLN-positive patients when ARM nodes do not coincide with SLNs. When ARM nodes do not coincide with SLNs, they are not involved even in SLN-positive patients. On the other hand, ARM lymphatics/nodes within the boundaries of a standard ALND should be resected in SLN-positive patients, when ARM nodes are SLN-ARM nodes. Therefore, surgical treatment of the axilla can be individualized on the basis of the axillary nodal status. abstract_id: PUBMED:35059584 Axillary Reverse Lymphatic Mapping in the Treatment of Axillary Accessory Breast Cancer: A Case Report and Review of Management. Accessory breast tissue is a rare aberration of normal breast development, that presents most commonly in the axilla. Similar to normal breast tissue, it can undergo physiologic and pathologic changes, including malignant transformation. We report a rare case of accessory breast cancer, treated with surgical resection and axillary reverse mapping (ARM), and review current literature focusing on management. We report a 68-year-old female with a history of left breast cancer treated with lumpectomy and axillary dissection, who later developed in-breast recurrence treated with re-lumpectomy and sentinel node biopsy which mapped at the contralateral (right) axilla, but was negative. Two years later screening imaging revealed right axillary tail focal asymmetry with two spiculated masses. Core biopsy showed invasive ductal carcinoma (IDC), and histologic examination of the biopsy could not determine whether this represents a new primary breast cancer or axillary metastasis from the contralateral site. She underwent lumpectomy of the two masses and sentinel node biopsy. During surgery, the masses were identified in the axilla itself, rather than the axillary tail. Final pathology revealed IDC, pT1N0(sn), and extensive ductal carcinoma in situ (DCIS). Due to positive margins, she underwent re-lumpectomy with ARM. Final pathology revealed residual DCIS with negative new margins. The patient was referred for adjuvant radiotherapy. 
Accessory axillary breast tissue can be confused with axillary tail tissue. It is necessary for the surgeon to distinguish between them by meticulous physical examination and radiologic evaluation, as resection of axillary breast tissue may warrant reverse lymphatic mapping for lymphedema prevention. abstract_id: PUBMED:27444925 Axillary reverse mapping in axillary surgery for breast cancer: an update of the current status. Axillary reverse mapping (ARM) is a technique by which the lymphatic drainage of the upper extremity that traverses the axillary region can be differentiated from the lymphatic drainage of the breast during axillary lymph node dissection (ALND). Adding this procedure to ALND may reduce upper extremity lymphedema by preserving upper extremity drainage. This review of the current literature on the ARM procedure discusses the feasibility, safety and relevance of this technique. A PubMed literature search was performed until 12 August 2015. A total of 31 studies were included in this review. The studies indicated that the ARM procedure adequately identifies the upper extremity lymph nodes and lymphatics in the axillary basin using blue dye or fluorescence. Preservation of ARM lymph nodes and corresponding lymphatics was proven to be oncologically safe in clinically node-negative breast cancer patients with metastatic lymph node involvement in the sentinel lymph node (SLN) who are advised to undergo a completion ALND. The ARM procedure is technically feasible with a high visualisation rate using blue dye or fluorescence. ALND combined with ARM can be regarded as a promising surgical refinement in order to reduce the incidence of upper extremity lymphedema in selected groups of patients. abstract_id: PUBMED:33160781 Oncological safety of selective axillary dissection after axillary reverse mapping in node-positive breast cancer. Introduction: Although the need for axillary lymph node dissection (AD) is decreasing in breast cancer patients, it remains necessary in some cases. Axillary reverse mapping (ARM) enables the detection of upper extremity lymphatic drainage that may be spared during selective axillary dissection (SAD) so as to reduce the risk of lymphedema. The ability of the ARM-SAD procedure to reduce the incidence of lymphedema is being tested in an ongoing randomized trial. Crossover between arm drainage and breast drainage is well documented in the axilla, however, and whether the procedure is oncologically safe remains controversial. We aim to assess the axillary failure rate when a few nodes draining the upper arm are being spared by the ARM-SAD. Methods: We report oncological outcomes, and axillary failure in particular, in the first 100 consecutive axillary node-positive patients treated with ARM-SAD as part of a pilot study and a randomized trial. Results: A median of 18 (IQR 14-22) axillary nodes were excised per patient. During the follow-up (median 51 months, IQR 34-91), 11 patients experienced a treatment failure, but only one - treated with neoadjuvant chemotherapy - developed overt axillary disease as a first (and isolated) event. The crude rate of axillary failure was 1.36% (95% CI: 0.19-9.63) with an estimated 5-year crude cumulative incidence of 1.85% (95% CI: 0-5.47%). Conclusions: The axillary failure rate was low in our patients and did not exceed rates reported in the literature after standard AD, thus indicating that the ARM-SAD procedure is oncologically safe. 
Answer: Axillary reverse mapping (ARM) is a technique developed to map and preserve arm lymphatic drainage during axillary lymph node dissection (ALND) to potentially reduce the incidence of lymphedema, a common and debilitating complication of ALND. The feasibility of ARM in locally advanced breast cancer (LABC) patients is a subject of investigation. In the context of LABC, the feasibility of ARM is challenged by the high incidence of metastasis in ARM nodes. A study including LABC patients who received neoadjuvant chemotherapy (NACT) found that ARM nodes could be identified in a majority of patients, but preserving ARM nodes was difficult, especially in patients who responded well to NACT. In patients with poor response to NACT, the incidence of metastasis in ARM nodes was quite high, suggesting that ARM may not be a feasible option in LABC patients (PUBMED:24934169). Moreover, another study reported that the identification rate of ARM nodes and lymphatics with blue dye is lower in Indian patients who present with higher clinical T and N stage disease. The rate of metastasis in ARM nodes was high in patients with a high axillary tumor burden, indicating that preserving ARM nodes may not be oncologically safe in higher N stage disease (PUBMED:34422491). However, ARM has been shown to be oncologically safe in clinically node-negative, sentinel lymph node-positive patients when ARM nodes do not coincide with sentinel lymph nodes (PUBMED:25704555). Additionally, a review of the current literature on ARM suggests that the procedure is technically feasible and can be considered a promising surgical refinement to reduce the incidence of upper extremity lymphedema in selected groups of patients (PUBMED:27444925). In summary, while ARM has potential benefits in reducing lymphedema, its feasibility in LABC patients is limited due to the high rate of metastasis in ARM nodes and the difficulty in preserving these nodes during ALND. ARM may be more suitable for patients with lower stages of breast cancer where the axillary nodal burden is not as high (PUBMED:24934169; PUBMED:34422491; PUBMED:25704555; PUBMED:27444925).
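The randomised ARM trial summarised above reports lymphedema in 3 of 46 ARM-identified patients in the preservation arm versus 9 of 43 in the conventional ALND arm. The sketch below shows one standard way such a two-proportion comparison can be made; Fisher's exact test is a reasonable choice for counts this small, but it is not necessarily the test used by the original authors, and the code is purely illustrative.

from scipy.stats import fisher_exact

table = [[3, 46 - 3],    # ARM-preservation arm: lymphedema events, non-events
         [9, 43 - 9]]    # conventional ALND arm: lymphedema events, non-events
odds_ratio, p_value = fisher_exact(table)

risk_preservation = 3 / 46
risk_conventional = 9 / 43
print(f"lymphedema risk: {risk_preservation:.1%} vs {risk_conventional:.1%}")
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_value:.3f}")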
Instruction: Is the ultrasound-estimated bladder weight a reliable method for evaluating bladder outlet obstruction? Abstracts: abstract_id: PUBMED:21166745 Is the ultrasound-estimated bladder weight a reliable method for evaluating bladder outlet obstruction? Objective: • To evaluate the correlation between ultrasound-estimated bladder weight (UEBW) and the degree of bladder outlet obstruction (BOO). Methods: • We evaluated 50 consecutive non-neurogenic male patients with lower urinary tract symptoms (LUTS) referred for urodynamic study (UDS). All patients self-answered the International Prostate Symptom Score (IPSS) questionnaire. After the UDS, the bladder was filled with 150 mL to determine UEBW. • Patients with a bladder capacity under 150 mL, a previous history of prostate surgery or pelvic irradiation, an IPSS score <8, a bladder stone or urinary tract infection were excluded. • After a pressure-flow study, the Schafer linear passive urethral resistance relation nomogram was plotted to determine the grade of obstruction: Grades I-II/VI were defined as mild obstruction, Grades III-IV/VI as moderate obstruction, and Grades V-VI/VI as severe obstruction. Results: • The UEBW was 51.7 ± 26.9 g, 54.1 ± 30.0 g and 54.8 ± 28.2 g in patients with mild, moderate and severe BOO, respectively (P= 0.130). The UEBW allowed us to define four groups: (i) UEBW <35 g; (ii) 35 g ≤ UEBW < 50 g; (iii) 50 g ≤ UEBW < 70 g; and (iv) UEBW ≥ 70 g. • We did not find any differences in age, prostate weight, IPSS, PVR, cystometric bladder capacity, presence of detrusor overactivity or degree of obstruction among the aforementioned groups. Conclusion: • Despite the fact that some studies have emphasized the value of UEBW as an efficient non-invasive method for evaluating lower urinary tract obstruction, our study suggests that UEBW does not present any individual correlation with LUTS or objective measurements of BOO. abstract_id: PUBMED:28394496 Change of Ultrasound Estimated Bladder Weight and Bladder Wall Thickness After Treatment of Bladder Outlet Obstruction With Dutasteride. Objectives: To investigate the change in bladder wall hypertrophy when bladder outlet obstruction (BOO) is relieved by treatment with a 5α-reductase inhibitor. Methods: Men with BOO confirmed by urodynamic study (BOO index ≥40) were treated with dutasteride 0.5 mg once a day for 6 months. We measured ultrasound estimated bladder weight (UEBW), UEBW divided by body surface area (UEBW/BSA), and bladder wall thickness (BWT) before and after treatment. Changes in LUTS parameters were assessed by using the International Prostate Symptom Score, uroflowmetry, residual urine volume, prostate volume, serum prostate-specific antigen (PSA), and LUTS outcome scores (LOS). Correlations between the changes in LUTS parameters and UEBW, UEBW/BSA, and BWT were evaluated. We assessed the changes in bladder wall hypertrophy according to the results of the benefit, satisfaction, and willingness to continue (BSW) questionnaire. Results: Thirty patients completed the 6-month study. The mean UEBW was 47.10 ± 7.79 g before and 50.07 ± 5.39 g after dutasteride treatment (P = 0.259). The mean UEBW/BSA was 26.47 ± 4.30 g/m2 before and 28.2 ± 3.53 g/m2 after treatment (P = 0.253), and there was no definite change in mean BWT after treatment (P = 0.301). Most LUTS parameters including LOS significantly improved. An increased BOO index value was related to decreased BWT (ρ = 0.361, P = 0.049).
There was no definite change in mean UEBW, UEBW/BSA, and BWT according to the results of the BSW questionnaire. Conclusions: There was no change in UEBW, UEBW/BSA and BWT despite improving most clinical parameters suggesting BOO. The changes of bladder wall hypertrophy parameters still have limitations to directly reflect the relief of BOO. abstract_id: PUBMED:19468439 The use of ultrasound-estimated bladder weight in diagnosing bladder outlet obstruction and detrusor overactivity in men with lower urinary tract symptoms. Objectives: Measurement of bladder weight using ultrasound estimates of bladder wall thickness and bladder volume is an emerging clinical measurement technique that may have a role in the diagnosis of lower urinary tract dysfunction. We have reviewed available literature on this technique to assess current clinical status. Methods: A systematic literature search was carried out within PubMed and MedLine to identify relevant publications. These were then screened for relevance. Preliminary results from our clinical experiments using the technique are also included. Results: We identified 17 published papers concerning the technique which covered clinical studies relating ultrasound-estimated bladder wall thickness to urodynamic diagnosis in men, women, and children together with change in response to treatment of bladder outlet obstruction. The original manual technique has been challenged by a commercially available automated technique. Conclusion: Ultrasound-estimated bladder weight is a promising non-invasive technique for the categorization of storage and voiding disorders in both men and women. Further studies are needed to validate the technique and assess accuracy of diagnosis. abstract_id: PUBMED:20846683 Ultrasound estimated bladder weight and measurement of bladder wall thickness--useful noninvasive methods for assessing the lower urinary tract? Purpose: In the last decade interest has arisen in the use of ultrasound derived measurements of bladder wall thickness, detrusor wall thickness and ultrasound estimated bladder weight as potential diagnostic tools for conditions known to induce detrusor hypertrophy. However, to date such measurements have not been adopted into clinical practice. We performed a comprehensive review of the literature to assess the potential clinical usefulness of these measurements. Materials And Methods: A MEDLINE® search was conducted to identify all published literature up to June 2009, investigating measurements of bladder wall thickness, detrusor wall thickness and ultrasound estimated bladder weight. Results: Measurements of bladder and detrusor wall thickness, and ultrasound estimated bladder weight have been studied in men, women and children. A convincing trend has been shown in the ability of these measurements to differentiate men with from those without bladder outlet obstruction. In addition, measurements of bladder wall thickness have revealed a considerable difference between detrusor overactivity and urodynamic stress incontinence. A number of confounding variables and a lack of standardized methodology has resulted in discrepancies among studies. Therefore, reproducible diagnostic ranges or cutoff values have not been established. Conclusions: Ultrasound derived measurements of bladder and detrusor wall thickness, and ultrasound estimated bladder weight are potential noninvasive clinical tools for assessing the lower urinary tract. 
abstract_id: PUBMED:21247605 The diagnostic efficacy of 3-dimensional ultrasound estimated bladder weight corrected for body surface area as an alternative nonurodynamic parameter of bladder outlet obstruction. Purpose: We investigated the relationship between ultrasound estimated bladder weight/corrected ultrasound estimated bladder weight and the bladder outlet obstruction index derived from pressure flow study to evaluate their diagnostic efficacy in predicting bladder outlet obstruction. Materials And Methods: A total of 193 men older than 50 years with lower urinary tract symptoms were enrolled in this study. Ultrasound estimated bladder weight measurements were made with a 3-dimensional ultrasound system. For data analysis, corrected bladder weight was defined as ultrasound estimated bladder weight divided by body surface area. The study population was classified into obstructed and unobstructed groups (bladder outlet obstruction index 40 or greater and less than 40, respectively). We evaluated the correlation between bladder outlet obstruction and clinical parameters, including bladder weight/corrected bladder weight, and the diagnostic accuracy of bladder weight/corrected bladder weight for bladder outlet obstruction. Results: A total of 50 (26%) and 143 patients (74%) were categorized as obstructed and nonobstructed, respectively. Corrected bladder weight, maximum urine flow and the bladder contraction index showed statistically significant differences between the groups. Bladder weight/corrected bladder weight positively correlated with the bladder outlet obstruction index, and corrected bladder weight showed the stronger correlation. Corrected bladder weight increased significantly with obstruction severity. When corrected bladder weight was used to diagnose obstruction, sensitivity, specificity, and positive and negative predictive values were 61.9%, 59.8%, 33.8% and 82.6%, respectively, at a 28 gm/m(2) cutoff. Conclusions: Ultrasound estimated bladder weight/corrected ultrasound estimated bladder weight is a statistically significant parameter correlating with bladder outlet obstruction. However, bladder weight/corrected bladder weight alone was insufficient to predict bladder outlet obstruction due to its weak correlation with and low accuracy for diagnosing obstruction. abstract_id: PUBMED:12161944 Decrease of ultrasound estimated bladder weight during tamsulosin treatment in patients with benign prostatic enlargement. Objective: The noninvasive method for estimating bladder weight (UEBW, Ultrasound Estimated Bladder Weight) can be used as a measure of bladder hypertrophy and may have clinical use for evaluating intravesical obstruction in male patients. The aim of this study was to assess whether, in patients with bladder outlet obstruction (BOO), tamsulosin treatment produced any significant change in UEBW. Methods: 32 male patients with lower urinary tract symptoms (LUTS) suggestive of BOO [benign prostatic hyperplasia (BPH) was the apparent cause of BOO] were enrolled in an open pilot study. At baseline, physical examination, ECG, hematochemical tests, urine analysis, urine culture, urodynamics, urethrocystography, transrectal ultrasound, UEBW and symptom score were performed. Using the International Continence Society (ICS) nomogram, patients were assigned to three different groups: obstructed, not obstructed and equivocal. Only patients in the obstructed and equivocal categories were treated with tamsulosin 0.4 mg once daily for 6 months.
Follow-up for all patients took place after 30 days, 3 and 6 months of treatment. Results: In the obstructed group of patients, the decrease in UEBW was observed at 30 days and maintained up to 6 months, with a significantly improved Qmax. A statistically significant correlation was found between UEBW and postvoid residual urine (PVR) and Abrams-Griffith number (AG). Conclusions: The results of this study suggest a significant change in UEBW during tamsulosin treatment. The change observed might be suggestive of a therapeutic effect of tamsulosin on the detrusor muscle. Further and more extensive studies are needed in order to confirm a possible therapeutic effect of tamsulosin on the detrusor muscle. abstract_id: PUBMED:8996337 Noninvasive quantitative estimation of infravesical obstruction using ultrasonic measurement of bladder weight. Purpose: Ultrasound estimated bladder weight was compared to pressure-flow studies to test the ability of ultrasound estimated bladder weight to predict infravesical obstruction. Materials And Methods: A total of 65 men with urinary symptoms underwent ultrasonic measurement of bladder weight and pressure-flow studies. Assuming the bladder is a sphere, ultrasound estimated bladder weight was calculated from bladder wall thickness measured ultrasonically and intravesical volume. Results: Ultrasound estimated bladder weight correlated significantly (p < 0.0001) with the Abrams-Griffiths number, urethral resistance factor and the Schäfer grade of obstruction. A cutoff value of 35 gm. for ultrasound estimated bladder weight revealed a diagnostic accuracy of 86.2% (56 of 65 cases) for infravesical obstruction with 12.1 (4 of 33) and 15.6% (5 of 32) false-positive and false-negative rates, respectively. Conclusions: Ultrasound estimated bladder weight can be measured noninvasively at the bedside and it is promising as a reliable predictor of infravesical obstruction. abstract_id: PUBMED:21308749 Ultrasound estimated bladder weight in men attending the uroflowmetry clinic. Aims: To determine if measurements of ultrasound estimated bladder weight (UEBW) provide an additional diagnostic tool when assessing men with lower urinary tract symptoms (LUTS) in the uroflowmetry clinic. Methods: One hundred men with LUTS attending the uroflowmetry clinic underwent transabdominal ultrasound measurement of bladder weight, using the BVM 9500 bladder scanner (Verathon Medical, Bothell, WA). These data were explored for any correlation between measurements of maximum flow rate (Q(max)) with UEBW, age, height, weight, body mass index (BMI), ICIQ M-LUTS score, M-LUTS voiding score, M-LUTS incontinence score, IPSS, IPSS quality of life score, voided volume, and post-void residual urine. Based on previously reported probabilities of bladder outlet obstruction (BOO), patients were grouped for analysis (Group 1 = Q(max) <10, Group 2 = Q(max) 10-15, Group 3 = Q(max) >15). A one-way ANOVA was undertaken to assess any difference in mean UEBW between the three groups. Results: Statistically significant negative correlations between Q(max) and age (r = -0.308, P = 0.002), M-LUTS voiding score (r = -0.298, P = 0.003), IPSS (r = -0.295, P = 0.003), and post-void residual (r = -0.213, P = 0.033) were observed. A statistically significant positive correlation between Q(max) and voided volume (r = 0.503, P < 0.01) was observed. No association between Q(max) and UEBW was observed (r = 0.12, P = 0.243). Mean UEBW for the three groups was remarkably similar. 
One-way ANOVA identified there was no statistically significant effect of UEBW on Q(max) F(2, 97) = 0.175, P = 0.840. Conclusion: Mean UEBW did not differ significantly between the three Q(max) groups. Further work is required to investigate the relationship of Q(max) and UEBW in men with urodynamic confirmation of either BOO or detrusor underactivity. abstract_id: PUBMED:34903702 Bladder Sonomorphological Tests in Diagnosing Bladder Outlet Obstruction in Patients with Lower Urinary Tract Symptoms: A Systematic Review and Meta-Analysis. Aims: We aimed to investigate the accuracy of bladder sonomorphological parameters including detrusor wall thickness (DWT) and ultrasound-estimated bladder weight (UEBW) for diagnosing bladder outlet obstruction (BOO) in patients with lower urinary tract symptoms (LUTS). Methods: A comprehensive search was conducted through databases including PubMed, EMBASE, MEDLINE, Cochrane Library, Medicine, China Knowledge Network (CNKI), China Biomedical Literature Database, Wanfang Database, the Chongqing VIP Chinese Science, and Technology Periodical Database (VIP) to select studies assessing the diagnostic accuracy of DWT and UEBW to diagnose BOO in adults with LUTS. Databases were searched from inception to 2020 without restriction. Study quality was assessed using Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2), and measures of accuracy were calculated using random-effects model. Results: The initial search included 84 publications, of which 78 publications were screened, and 16 studies with 1,847 patients finally contained diagnostic data. The results from 10 out of 16 studies assessing DWT showed a pooled sensitivity (SSY) of 0.68 (95% CI, 0.56-0.78) and specificity (SPY) of 0.91 (95% CI, 0.82-0.96) with I2 values of 93%, while 6 studies evaluating UEBW were analyzed with a SSY of 0.88 (95% CI, 0.78-0.93) and SPY of 0.81 (95% CI, 0.67-0.90) with I2 values of 83%. Conclusions: DWT shows high SPY, and UEBW performs high SSY of diagnosing BOO. Further well-designed studies are needed to evaluate the utilization of DWT and UEBW for the diagnosis of BOO. abstract_id: PUBMED:16986025 Automatic Measurement of Ultrasound-Estimated Bladder Weight (UEBW) from Three-Dimensional Ultrasound. Ultrasound-estimated bladder weight (UEBW) has the promise to become an important indicator for the diagnosis of bladder outlet obstruction. Our goal was to develop and evaluate an approach to accurately, consistently, conveniently, and noninvasively measure UEBW using three-dimensional (3D) ultrasound imaging. A 3D image of the bladder is acquired using a handheld ultrasound machine. The infravesical region of the bladder is delineated on this 3D data set to enable the calculation of bladder volume and the bladder surface area. The outer anterior wall of the bladder is delineated to enable the calculation of the bladder wall thickness. The UEBW is measured as a product of the bladder surface area, bladder wall thickness, and bladder muscle specific gravity. The UEBW was measured on 20 healthy male subjects and each subject was imaged several times at different bladder volumes to evaluate the consistency of the UEBW measurement. Our approach measured the average UEBW among healthy subjects to be 42 g (SD = 6 g). The UEBW was found to be fairly consistent with an average standard deviation of 4 g across a single subject at different bladder volumes between 200 mL and 400 mL. Our surface area measurements show that the bladder shape is significantly nonspherical. 
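The bladder-weight estimates discussed in these abstracts follow directly from the geometric formulas they describe: a spherical-bladder approximation (PUBMED:8996337), a surface-area-times-wall-thickness product for 3D ultrasound (PUBMED:16986025), and division by body surface area for the corrected value (PUBMED:21247605). A minimal sketch in Python is given below; the input numbers, the nominal muscle specific gravity of 1.0 g/cm³, and the use of the Du Bois BSA formula are illustrative assumptions, since the abstracts do not state them.

```python
import math

def uebw_spherical(bladder_volume_ml, wall_thickness_cm, specific_gravity=1.0):
    """Spherical-bladder approximation (PUBMED:8996337): weight from volume and wall thickness."""
    inner_radius_cm = (3.0 * bladder_volume_ml / (4.0 * math.pi)) ** (1.0 / 3.0)  # 1 mL = 1 cm^3
    surface_area_cm2 = 4.0 * math.pi * inner_radius_cm ** 2
    return surface_area_cm2 * wall_thickness_cm * specific_gravity  # grams

def uebw_from_surface(surface_area_cm2, wall_thickness_cm, specific_gravity=1.0):
    """Product formula used with 3D ultrasound (PUBMED:16986025)."""
    return surface_area_cm2 * wall_thickness_cm * specific_gravity  # grams

def corrected_uebw(uebw_g, weight_kg, height_cm):
    """UEBW divided by body surface area (PUBMED:21247605); BSA via the Du Bois formula (assumed)."""
    bsa_m2 = 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)
    return uebw_g / bsa_m2  # g/m^2

# Purely illustrative values (not taken from the abstracts):
w = uebw_spherical(bladder_volume_ml=250, wall_thickness_cm=0.30)
print(round(w, 1), "g;", round(corrected_uebw(w, weight_kg=75, height_cm=172), 1), "g/m^2")
```

With these illustrative inputs the estimate lands in the same range as the cutoffs quoted above (35 g in PUBMED:8996337, 28 g/m² in PUBMED:21247605), which is the point of the exercise rather than a reproduction of any study's data.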
Answer: The ultrasound-estimated bladder weight (UEBW) has been investigated as a potential non-invasive method for evaluating bladder outlet obstruction (BOO), but the reliability of this method is still under debate based on the available literature. Several studies have explored the correlation between UEBW and BOO with varying results. One study found no significant differences in UEBW among patients with mild, moderate, or severe BOO, suggesting that UEBW does not present any individual correlation with lower urinary tract symptoms (LUTS) or objective measurements of BOO (PUBMED:21166745). Another study reported no change in UEBW, UEBW divided by body surface area (UEBW/BSA), and bladder wall thickness (BWT) after treatment with dutasteride for BOO, despite improvements in most clinical parameters suggesting BOO (PUBMED:28394496). The diagnostic efficacy of UEBW has been questioned, with one study indicating that while UEBW/corrected UEBW is a statistically significant parameter correlating with BOO, it alone was insufficient to predict BOO due to its weak correlation with and low accuracy for diagnosing obstruction (PUBMED:21247605). Another study found no association between maximum flow rate (Qmax) and UEBW, and mean UEBW did not differ significantly between groups with different Qmax values (PUBMED:21308749). However, some studies have shown a trend in the ability of UEBW measurements to differentiate men with BOO from those without, and measurements of BWT have revealed differences between detrusor overactivity and urodynamic stress incontinence (PUBMED:20846683). Additionally, a systematic review and meta-analysis found that UEBW performs with high sensitivity in diagnosing BOO (PUBMED:34903702). In conclusion, while UEBW has shown potential as a non-invasive tool for assessing the lower urinary tract and diagnosing BOO, the evidence suggests that its reliability may be limited. Further studies are needed to validate the technique and establish reproducible diagnostic ranges or cutoff values (PUBMED:20846683, PUBMED:34903702). The current literature indicates that UEBW should not be used in isolation for diagnosing BOO and should be interpreted in conjunction with other clinical and urodynamic findings.
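As a side note on the diagnostic-accuracy figures cited in this answer, the positive and negative predictive values reported for corrected UEBW in PUBMED:21247605 (33.8% and 82.6% at the 28 gm/m² cutoff) follow from the stated sensitivity (61.9%), specificity (59.8%), and the 26% obstruction prevalence in that cohort (50 of 193 men). A short sketch of that arithmetic, using only the published figures:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Derive PPV and NPV from sensitivity, specificity and prevalence (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    true_neg = specificity * (1.0 - prevalence)
    false_neg = (1.0 - sensitivity) * prevalence
    return true_pos / (true_pos + false_pos), true_neg / (true_neg + false_neg)

# Figures reported in PUBMED:21247605 (50 obstructed out of 193 patients)
ppv, npv = predictive_values(sensitivity=0.619, specificity=0.598, prevalence=50 / 193)
print(f"PPV ~ {ppv:.1%}, NPV ~ {npv:.1%}")  # ~35% and ~82%; small gaps vs. the published 33.8%/82.6% reflect rounding
```

The low PPV at a realistic prevalence is one concrete way of seeing why the study concluded that UEBW alone is insufficient to predict obstruction.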
Instruction: Are conventional cardiovascular risk factors predictive of two-year mortality in hemodialysis patients? Abstracts: abstract_id: PUBMED:11597037 Are conventional cardiovascular risk factors predictive of two-year mortality in hemodialysis patients? Background: In the general population, hypertension, diabetes mellitus, overweight, hyperlipidemia and smoking are well-established risk factors for cardiovascular disease. However, the effect of these conventional risk factors on cardiovascular disease and mortality of patients on hemodialysis is not well understood. Indeed, some risk factors such as high blood pressure, hyperlipidemia and excess weight have been recently claimed to correlate with improved survival. Objective: This study was undertaken to define the prevalence of these conventional risk factors in 453 hemodialysis patients, predominantly African-Americans, to determine their influence on two-year survival. Result: High cholesterol was found in 30% of the patients, high LDL-cholesterol in 25% and high triglycerides in 16%. Lipoprotein(a) (LP(a)) was elevated in 68% of the patients. 31% of our patients had predialysis mean arterial blood pressure (MAP) over 114, and 25% were obese based on a body mass index (BMI) over 30, 26% were diabetic and 25% were active smokers. Smoking was more common among our male and Caucasian patients. The aggregate score for the risk factors was 2.4+/-0.1 per patient, which increased to 3.2+/-0.1 in patients with obesity or diabetes, to 3.0+/-0.1 with hypertension and to 2.8+/-0.1 with active smoking. In multivariate Cox model analysis, prealbumin, body weight and blood pressure showed a positive correlation with two-year survival whereas diabetes mellitus had a negative correlation. Hyperlipidemia did not correlate with patients' two-year mortality. Smoking was associated with higher mortality, but that did not reach statistical significance. Conclusion: Conventional risk factors at least over a two-year period do not readily account for the higher mortality of a group of predominantly African-American patients on hemodialysis. The lack of prediction is speculated to be partly due to the overriding beneficial effects of better nutrition and due to the presence of other yet to be well-defined factors such as hyperhomocysteinemia, oxidative stress, coronary calcification, hitherto unidentified uremic toxins or a combination of these factors. abstract_id: PUBMED:38474780 Geriatric Nutritional Risk Index and First-Year Mortality in Incident Hemodialysis Patients. Objective: The Geriatric Nutritional Risk Index is a simple nutritional screening method, and this study aimed to investigate the association between the initial Geriatric Nutritional Risk Index and all-cause mortality in incident patients in the first year after the initiation of hemodialysis. Materials And Methods: This study is a retrospective cohort study and used the Korean Renal Data System database. Patients who were eligible for Geriatric Nutritional Risk Index assessment and underwent hemodialysis from January 2016 to December 2019 were included. The primary outcome was all-cause mortality, and outcome evaluation was performed in December 2020. A Cox proportional hazard model was used to analyze the association between the Geriatric Nutritional Risk Index and mortality. Results: A total of 10,545 patients were included, and the mean age was 63.9 ± 3.7 years.
The patients were divided into four groups by the quartile of the Geriatric Nutritional Risk Index with a mean value of 96.2 ± 8.2. During the study period, 545 (5.2%) deaths occurred. The surviving patients had higher Geriatric Nutritional Risk Index values than those who died in the first year of hemodialysis initiation (96.6 ± 7.5 vs. 88.2 ± 9.3, p < 0.001). Quartile 1 (Geriatric Nutritional Risk Index < 91.8) showed a significantly increased risk of all-cause (Hazard Ratio: 2.56; 95% Confidence Interval: 2.13-3.09; p < 0.001) and cardiovascular mortality (Hazard Ratio: 2.29; 95% Confidence Interval: 1.71-3.08; p < 0.001) at the first year in comparison with Quartile 4 (Geriatric Nutritional Risk Index ≥ 101.3). In areas under the receiver-operating characteristic curves of all-cause mortality, the Geriatric Nutritional Risk Index model improved predictive values, compared to the baseline model. The area with the Geriatric Nutritional Risk Index model was significantly higher than the one with a model including albumin or body mass index (p < 0.001). Conclusions: These findings suggest that a low Geriatric Nutritional Risk Index (<91.8) is associated with first-year all-cause and cardiovascular mortality in patients who start hemodialysis and may be a useful and reproducible tool for assessing prognoses in this population. abstract_id: PUBMED:32116784 Cardiovascular Mortality Can Be Predicted by Heart Rate Turbulence in Hemodialysis Patients. Background: Excess mortality in hemodialysis patients is mostly of cardiovascular origin. We examined the association of heart rate turbulence (HRT), a marker of baroreflex sensitivity, with cardiovascular mortality in hemodialysis patients. Methods: A population of 290 prevalent hemodialysis patients was followed up for a median of 3 years. HRT categories 0 (both turbulence onset [TO] and slope [TS] normal), 1 (TO or TS abnormal), and 2 (both TO and TS abnormal) were obtained from 24 h Holter recordings. The primary end-point was cardiovascular mortality. Associations of HRT categories with the endpoints were analyzed by multivariable Cox regression models including HRT, age, albumin, and the improved Charlson Comorbidity Index for hemodialysis patients. Multivariable linear regression analysis identified factors associated with TO and TS. Results: During the follow-up period, 20 patients died from cardiovascular causes. In patients with HRT categories 0, 1 and 2, cardiovascular mortality was 1, 10, and 22%, respectively. HRT category 2 showed the strongest independent association with cardiovascular mortality with a hazard ratio of 19.3 (95% confidence interval: 3.69-92.03; P < 0.001). Age, calcium phosphate product, and smoking status were associated with TO and TS. Diabetes mellitus and diastolic blood pressure were only associated with TS. Conclusion: Independent of known risk factors, HRT assessment allows identification of hemodialysis patients with low, intermediate, and high risk of cardiovascular mortality. Future prospective studies are needed to translate risk prediction into risk reduction in hemodialysis patients. abstract_id: PUBMED:28045952 Predictive Factors of One-Year Mortality in a Cohort of Patients Undergoing Urgent-Start Hemodialysis. Background: Chronic kidney disease (CKD) affects 10-15% of the adult population worldwide. Incident patients on hemodialysis, mainly those on urgent-start dialysis at the emergency room, have a high mortality risk, which may reflect the absence of nephrology care.
A lack of data exists regarding the influence of baseline factors on the mortality of these patients. The aim of this study was to evaluate the clinical and laboratory characteristics of this population and identify risk factors that contribute to their mortality. Patients And Methods: We studied 424 patients who were admitted to our service between 01/2006 and 12/2012 and were followed for 1 year. We analyzed vascular access, risk factors linked to cardiovascular disease (CVD) and mineral and bone disease associated with CKD (CKD-MBD), and clinical events that occurred during the follow-up period. Factors that influenced patient survival were evaluated by Cox regression analysis. Results: The patient mean age was 50 ± 18 years, and 58.7% of them were male. Hypertension was the main cause of primary CKD (31.8%). Major risk factors were smoking (19.6%), dyslipidemia (48.8%), and CVD (41%). Upon admission, most patients had no vascular access for hemodialysis (89.4%). Biochemical results showed that most patients were anemic with high C-reactive protein levels, hypocalcemia, hyperphosphatemia, elevated parathyroid hormone and decreased 25-hydroxy vitamin D. At the end of one year, 60 patients died (14.1%). These patients were significantly older, had a lower percentage of arteriovenous fistula in one year, and low levels of 25-hydroxy vitamin D. Conclusions: The combined evaluation of clinical and biochemical parameters and risk factors revealed that the mortality in urgent-start dialysis is associated with older age and low vitamin D levels. A lack of a permanent hemodialysis access after one year was also a risk factor for mortality in this population. abstract_id: PUBMED:27495980 Strong predictive value of mannose-binding lectin levels for cardiovascular risk of hemodialysis patients. Background: Hemodialysis patients have higher rates of cardiovascular morbidity and mortality compared to the general population. Mannose-binding lectin (MBL) plays an important role in the development of cardiovascular disease. In addition, hemodialysis alters MBL concentration and functional activity. The present study determines the predictive value of MBL levels for future cardiac events (C-event), cardiovascular events (CV-event) and all-cause mortality in HD patients. Methods: We conducted a prospective study of 107 patients on maintenance hemodialysis. Plasma MBL, properdin, C3d and sC5b-9 were measured before and after one dialysis session. The association with future C-events, CV-events, and all-cause mortality was evaluated using Cox regression models. Results: During median follow-up of 27 months, 36 participants developed 21 C-events and 36 CV-events, whereas 37 patients died. The incidence of C-events and CV-events was significantly higher in patients with low MBL levels (<319 ng/mL, lower quartile). In fully adjusted models, low MBL level was independently associated with increased CV-events (hazard ratio 3.98; 95 % CI 1.88-8.24; P < 0.001) and C-events (hazard ratio 3.96; 95 % CI 1.49-10.54; P = 0.006). No association was found between low MBL levels and all-cause mortality. Furthermore, MBL substantially improved risk prediction for CV-events beyond currently used clinical markers. Conclusions: Low MBL levels are associated with a higher risk for future C-events and CV-events. Therefore, MBL levels may help to identify hemodialysis patients who are at risk of developing cardiovascular disease.
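The adjusted hazard ratios quoted in this abstract (and in several others in this set) come from multivariable Cox proportional-hazards models. The sketch below shows, in generic form, how such an adjusted hazard ratio is estimated with the lifelines library; the column names and the tiny dataset are entirely hypothetical and are not a reconstruction of the MBL study.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up data: one row per patient.
# months = follow-up time, cv_event = 1 if a cardiovascular event occurred,
# low_mbl = 1 if MBL was below the lower-quartile cutoff, age = adjustment covariate.
df = pd.DataFrame({
    "months":   [27, 14, 30, 9, 22, 27, 18, 25, 12, 30, 6, 20],
    "cv_event": [0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0],
    "low_mbl":  [0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0],
    "age":      [62, 71, 58, 75, 69, 55, 73, 60, 66, 59, 77, 64],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="cv_event")
cph.print_summary()  # the exp(coef) column is the adjusted hazard ratio for each covariate
```

In the published analyses the same machinery is simply applied to the real cohorts, with the covariates each paper lists (age, comorbidity indices, and so on).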
abstract_id: PUBMED:34357124 Aortic Arch Calcification and Cardiomegaly Are Associated with Overall and Cardiovascular Mortality in Hemodialysis Patients. Patients with end-stage renal disease have a higher risk of cardiovascular morbidity and mortality. In this study, we investigated the predictive ability of a combination of cardiothoracic ratio (CTR) and aortic arch calcification (AoAC) for overall and cardiovascular mortality in patients receiving hemodialysis. We also evaluated the predictive power of AoAC and CTR for clinical outcomes. A total of 365 maintenance hemodialysis patients were included, and AoAC and CTR were measured using chest radiography at enrollment. We stratified the patients into four groups according to a median AoAC score of three and CTR of 50%. Multivariable Cox proportional hazards analysis was used to identify the risk factors of mortality. The predictive performance of the model for clinical outcomes was assessed using the χ2 test. Multivariable analysis showed that, compared to the AoAC < 3 and CTR < 50% group, the AoAC ≥ 3 and CTR < 50% group (hazard ratio [HR], 4.576; p < 0.001), and AoAC ≥ 3 and CTR ≥ 50% group (HR, 5.912; p < 0.001) were significantly associated with increased overall mortality. In addition, the AoAC < 3 and CTR ≥ 50% (HR, 3.806; p = 0.017), AoAC ≥ 3 and CTR < 50% (HR, 4.993; p = 0.002), and AoAC ≥ 3 and CTR ≥ 50% (HR, 8.614; p < 0.001) groups were significantly associated with increased cardiovascular mortality. Furthermore, adding AoAC and CTR to the basic model improved the predictive ability for overall and cardiovascular mortality. The patients who had a high AoAC score and cardiomegaly had the highest overall and cardiovascular mortality among the four groups. Furthermore, adding AoAC and CTR improved the predictive ability for overall and cardiovascular mortality in the hemodialysis patients. abstract_id: PUBMED:25350835 Association of interleg difference of ankle brachial index with overall and cardiovascular mortality in chronic hemodialysis patients. Background: The ankle-brachial index (ABI) is associated with peripheral vascular atherosclerosis, adverse cardiovascular outcomes, and all-cause mortality. However, there were limited data available on studying the effect of interleg ABI difference. Methods: We investigated the association of the interleg ABI difference with overall and cardiovascular mortality in chronic hemodialysis in a retrospective observational cohort of 369 Taiwanese patients undergoing chronic hemodialysis. Results: An interleg ABI difference of ≥0.15 in hemodialysis patients had significant predictive power for all-cause and cardiovascular mortality in crude analysis. The hazard ratio (HR) for all-cause mortality was 3.00 [95% confidence interval (CI), 1.91-4.71]; the HR for cardiovascular mortality was 3.13 (95% CI, 1.82-5.38). After adjustment for confounding variables, this difference continued to have significant predictive power for all-cause mortality but lost its predictive power for fatal cardiac outcome. ABI <0.9 and high brachial-ankle pulse wave velocity were independently associated with an interleg ABI difference of ≥0.15 in hemodialysis patients. Moreover, in the subgroup analysis, we found that this difference was an independent factor for overall and cardiovascular mortality, particularly in elder patients, female patients, or those with ABI <0.9. 
Conclusion: Detection of an interleg ABI difference of ≥0.15 was an independent risk factor for overall mortality in hemodialysis patients but it may affect cardiovascular mortality through the effect of peripheral vascular disease. abstract_id: PUBMED:33544559 Relationship between serum leptin levels, non-cardiovascular risk factors and mortality in hemodialysis patients. Introduction. Hemodialysis (HD) patients have a higher mortality rate than the general population. Recent studies indicate a significant role of non-cardiovascular risk factors for mortality in HD patients. Leptin is a protein hormone and may indicate malnutrition in HD patients. Its role in mortality in these patients is being examined. This study aimed to investigate the correlation between serum leptin levels and non-cardiovascular risk factors and the relationship between leptin level and mortality in HD patients. Methods. The prospective study included 93 patients on maintenance HD and the follow-up period was 12 months. We measured leptin level and evaluated non-cardiovascular risk factors: nutritional status, anemia, volemia, parameters of mineral and bone disorder. Results. Out of 93 patients, 9 died during the study and 1 underwent kidney transplantation. Malnutrition and hypervolemia were the two main non-cardiovascular risk factors among deceased subjects. Leptin showed a significant direct correlation with nutritional BMI (r = 0.72, P < 0.001) and fat tissue index (r = 0.74, P < 0.001), a statistically significant inverse correlation with lean tissue index (r = -0.349, P < 0.05), and an inverse correlation with volemic parameters (overhydration/extracellular water ratio; r = -0.38, P < 0.001), but no association with anemia and mineral bone parameters was observed. Elevated leptin levels were associated with better survival. However, no statistically significant difference in survival rates was observed between the study groups (Log-Rank P = 0.214, Breslow P = 0.211, Tarone-Ware P = 0.212). Conclusion. Deceased patients had significantly lower leptin values. Leptin was associated with two non-cardiovascular risk factors for mortality: malnutrition and hypervolemia. abstract_id: PUBMED:29913453 Uremic Pruritus is Associated with Two-Year Cardiovascular Mortality in Long Term Hemodialysis Patients. Background/aims: Uremic pruritus (UP) is an unpleasant complication in patients undergoing maintenance dialysis. Cardiovascular and infection related deaths are the major causes of mortality in patients undergoing dialysis. Studies on the correlation between cardiovascular or infection related mortality and UP are limited. Methods: We analyze 866 maintenance hemodialysis (MHD) patients in our hemodialysis centers. Clinical parameters and 24-month cardiovascular and infection-related mortality are recorded. Results: The associations between all-cause, cardiovascular and infection related mortality and clinical data including UP are analyzed. Multivariate Cox regression demonstrated that UP is a significant predictor for 24-month cardiovascular mortality in the MHD patients (Hazard ratio: 3.164; 95% confidence interval, 1.743-5.744; p < 0.001). Conclusion: Uremic pruritus is one of the predictors of 24-month cardiovascular mortality in MHD patients. abstract_id: PUBMED:36769658 One-Year Mortality after Hemodialysis Initiation: The Prognostic Role of the CHA2DS2-VASc Score. Background: CKD is a significant cause of morbidity, cardiovascular and all-cause mortality.
CHA2DS2-VASc is a score used in patients with atrial fibrillation to predict thromboembolic risk; it also appears to be useful to predict mortality risk. The aim of the study was to evaluate CHA2DS2-VASc scores as a tool for predicting one-year mortality after hemodialysis is started and for identifying factors associated with higher mortality. Methods: Retrospective analysis of patients who started hemodialysis between January 2014 and December 2019 in Centro Hospitalar Universitário Lisboa Norte. We evaluated mortality within one year of hemodialysis initiation. The CHA2DS2-VASc score was calculated at the start of hemodialysis. Results: Of 856 patients analyzed, their mean age was 68.3 ± 15.5 years and the majority were male (61.1%) and Caucasian (84.5%). Mortality within one-year after starting hemodialysis was 17.8% (n = 152). The CHA2DS2-VASc score was significantly higher (4.4 ± 1.7 vs. 3.5 ± 1.8, p < 0.001) in patients who died and satisfactorily predicted the one-year risk of mortality (AUC 0.646, 95% CI 0.6-0.7, p < 0.001), with a sensitivity of 71.7%, a specificity of 49.1%, a positive predictive value of 23.9% and a negative predictive value of 89.2%. In the multivariate analysis, CHA2DS2-VASc ≥3.5 (adjusted HR 2.24 95% CI (1.48-3.37), p < 0.001) and central venous catheter at dialysis initiation (adjusted HR 3.06 95% CI (1.93-4.85)) were significant predictors of one-year mortality. Conclusion: A CHA2DS2-VASc score ≥3.5 and central venous catheter at hemodialysis initiation were predictors of one-year mortality, allowing for risk stratification in hemodialysis patients. Answer: Conventional cardiovascular risk factors, which are well-established in the general population, do not readily account for the higher mortality of hemodialysis patients over a two-year period. A study found that high cholesterol, LDL-cholesterol, and triglycerides, as well as elevated Lipoprotein(a), were common among hemodialysis patients, but these factors did not correlate with two-year mortality. Instead, prealbumin, body weight, and blood pressure showed a positive correlation with survival, while diabetes mellitus had a negative correlation. Smoking was associated with higher mortality, but this did not reach statistical significance. The study concluded that the lack of prediction might be due to the overriding beneficial effects of better nutrition and the presence of other factors such as hyperhomocysteinemia, oxidative stress, coronary calcification, and unidentified uremic toxins (PUBMED:11597037). Other studies have identified different factors and tools that may be more predictive of mortality in hemodialysis patients. For instance, the Geriatric Nutritional Risk Index was found to be associated with first-year all-cause and cardiovascular mortality, suggesting that nutritional status is a critical prognostic factor (PUBMED:38474780). Heart rate turbulence, a marker of baroreflex sensitivity, was also shown to be an independent predictor of cardiovascular mortality (PUBMED:32116784). Additionally, urgent-start dialysis patients' one-year mortality was associated with older age, lack of permanent hemodialysis access, and vitamin D deficiency (PUBMED:28045952). Furthermore, mannose-binding lectin levels were found to be strong predictors of cardiovascular events, and a combination of aortic arch calcification and cardiomegaly was associated with overall and cardiovascular mortality (PUBMED:27495980; PUBMED:34357124). 
An interleg difference of ankle brachial index was an independent risk factor for overall mortality, and serum leptin levels correlated with non-cardiovascular risk factors and were associated with better survival (PUBMED:25350835; PUBMED:33544559). Uremic pruritus was identified as a predictor of 24-month cardiovascular mortality (PUBMED:29913453), and the CHA2DS2-VASc score was useful in predicting one-year mortality after hemodialysis initiation (PUBMED:36769658). In summary, while conventional cardiovascular risk factors may not be directly predictive of two-year mortality in hemodialysis patients, other factors, including nutritional status, heart rate turbulence, vascular calcification, and specific biomarkers, have been identified as important predictors of mortality in this patient population.
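Of the prognostic tools mentioned in this answer, the CHA2DS2-VASc score used in PUBMED:36769658 is the most mechanical to reproduce. The sketch below encodes the standard component weights of that score (these weights are general knowledge about the instrument rather than something stated in the abstract) and checks a hypothetical patient against the ≥3.5 cutoff reported as predictive of one-year mortality.

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia, vascular_disease, female):
    """Standard CHA2DS2-VASc score (0-9)."""
    score = 0
    score += 1 if chf else 0                # C: congestive heart failure / LV dysfunction
    score += 1 if hypertension else 0       # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age >=75 scores 2, 65-74 scores 1
    score += 1 if diabetes else 0           # D: diabetes mellitus
    score += 2 if stroke_tia else 0         # S2: prior stroke / TIA / thromboembolism
    score += 1 if vascular_disease else 0   # V: vascular disease (MI, PAD, aortic plaque)
    score += 1 if female else 0             # Sc: sex category (female)
    return score

# Hypothetical incident hemodialysis patient:
s = cha2ds2_vasc(chf=False, hypertension=True, age=72, diabetes=True,
                 stroke_tia=False, vascular_disease=True, female=False)
print(s, "points;", "at or above" if s >= 3.5 else "below", "the >=3.5 cutoff of PUBMED:36769658")
```

Because the score is an integer, the reported ≥3.5 threshold is in practice a score of 4 or more.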
Instruction: Early postoperative outcomes following surgical repair of complete atrioventricular septal defects: is down syndrome a risk factor? Abstracts: abstract_id: PUBMED:24201860 Early postoperative outcomes following surgical repair of complete atrioventricular septal defects: is down syndrome a risk factor? Objective: To evaluate the impact of Down syndrome on the early postoperative outcomes of children undergoing complete atrioventricular septal defect repair. Design: Retrospective cohort study. Setting: Single tertiary pediatric cardiac center. Patients: All children admitted to PICU following biventricular surgical repair of complete atrioventricular septal defect from January 2004 to December 2009. Interventions: None. Measurements And Main Results: A total of 107 children, 67 with Down syndrome, were included. Children with Down syndrome were operated earlier: 4 months (interquartile range, 3.5-6.6) versus 5.7 months (3-8.4) for Down syndrome and non-Down syndrome groups, respectively (p < 0.01). There was no early postoperative mortality. There was no significant difference in the prevalence of dysplastic atrioventricular valve between the two groups. Two children (2.9%) from Down syndrome and three children (7.5%) from non-Down syndrome group required early reoperation (p = 0.3). Junctional ectopic tachycardia was the most common arrhythmia, and the prevalence of junctional ectopic tachycardia was similar between the two groups (9% and 10% in Down syndrome and non-Down syndrome, respectively, p = 1). One patient from each group required insertion of permanent pacemaker for complete heart block. Children with Down syndrome had significantly higher prevalence of noncardiac complications, that is, pneumothorax, pleural effusions, and infections (p < 0.01), than children without Down syndrome. There was a trend for longer duration of mechanical ventilation in children with Down syndrome (41 hr [20-61 hr] vs 27.5 hr [15-62 hr], p = 0.2). However, there was no difference in duration of PICU stay between the two groups (2 d [1.3-3 d] vs 2 d [1-3 d], p = 0.9, respectively). Conclusions: In our study, we found no difference in the prevalence of atrioventricular valve dysplasia between children with and without Down syndrome undergoing complete atrioventricular septal defect repair. This finding contrasts with previously published data, and further confirmatory studies are required. Although clinical outcomes were similar, children with Down syndrome had a significantly higher prevalence of noncardiac complications in the early postoperative period than children without Down syndrome. abstract_id: PUBMED:36260102 Surgical Outcomes of Congenital Heart Disease in Down Syndrome: Tertiary Center Experience-Focus on the Electrical Conduction System. To document outcomes of cardiac surgical repair in Down syndrome (DS) patients with specific focus on the associated electrical conduction morbidities, ultimately leading to a higher incidence of pacemaker implantation (PMI). A retrospective study conducted between 2011 and 2020. A total of 167 DS patients undergoing 204 surgeries were included. The mean gestational age (GA) and mean weight were 37.3 weeks and 5.5 kg, respectively. Complete atrioventricular septal defect (AVSD) was the most common diagnosis. Pre-operative ECG revealed superior axis deviation (SAD) in 92 and 32% of patients with AVSD and isolated perimembranous ventricular septal defect (VSD), respectively (p < 0.01). 
Postoperative right bundle branch block (RBBB) was observed in 83 and 55% of patients with AVSD and following perimembranous VSD repair, respectively (p = 0.04). Ten patients underwent post-operative pacemaker implantation (PMI). The reintervention rate was around 8.9%. Three mortalities were encountered throughout the study period, 2 of which were in-hospital deaths. Low mortality was observed; however, a higher rate of PMI requirement was noted, with risk factors including lower age and weight. abstract_id: PUBMED:25886812 Early Complete Atrioventricular Canal Repair Yields Outcomes Equivalent to Late Repair. Background: Repair of complete atrioventricular canal early in infancy has traditionally carried greater morbidity and mortality than repair performed later. However, an individualized anatomy-based repair may give young infants outcomes that are equivalent to older patients. Methods: We retrospectively reviewed 139 patients who underwent complete atrioventricular canal repair from January 2005 to December 2012. An individualized approach was used: 2-patch repair was performed in 98 patients for large ventricular septal defects and a modified single-patch ("Australian technique") was used in 41 for "shallow" ventricular septal defects. Results: The average age was 25.5 ± 3.9 weeks, 50% were boys, and 78% had trisomy 21. Mean follow-up was 5.1 ± 0.2 years, with 100% completeness of data. There were 3 in-hospital deaths (2.1%) and 1 late death (0.7%). A permanent pacemaker was required in 2 patients (1.4%). The rate for left atrioventricular valve reoperation was 8% at a mean of 211 ± 238 days after the original repair (range, 6 to 682 days). Compared with patients aged older than 3 months, the 39 patients (28%) who were younger than 3 months had similar perioperative courses and rate of reoperation. Compared with patients with an Australian repair, the 98 patients (71%) with a 2-patch repair were more likely to have trisomy 21 and had slightly increased cardiopulmonary bypass and cross-clamp times but similar outcomes. Multivariate analysis showed postoperative left atrioventricular valve regurgitation greater than 2 and left ventricular outflow tract obstruction were significant risk factors for reoperation on the left atrioventricular valve (both p < 0.05). Conclusions: Repair of complete atrioventricular canal using an individualized surgical approach yields reoperation and early mortality rates similar for younger infants compared with older infants, obviating the need to delay operation in symptomatic patients. abstract_id: PUBMED:24057432 Outcomes of repair of complete atrioventricular septal defect in the current era. Objectives: We sought to evaluate the surgical outcomes of the repair of complete atrioventricular septal defects (cAVSDs) in our institution in the current era. Methods: From 2000 to 2011, 138 patients underwent definitive repair of cAVSD. Repair was performed using a two-patch technique in 92.0% of patients and one-patch technique in 2.2%, and the ventricular septal component was closed directly in 5.8% of patients. Results: Operative mortality was 1.4% (2 of 138). Overall mortality was 5.8% (8 of 138). Follow-up was 96% complete. Freedom from reoperation was 84.3% (95% CI 77.1-91.5%) at 8 years. Age >6 months at repair was associated with higher rates of reoperation (P = 0.001; HR 6.85; 95% CI 2.30-20.44).
However, operating at <6 months of age was associated with longer intensive care unit stay (P = 0.019; median 2.7 vs 1.4 days), mechanical ventilation (P = 0.001; median 1.7 vs 0.9 days) and postoperative hospital stay (P = 0.016; median 8 vs 5 days). Moderate or greater left atrioventricular valvular regurgitation (LAVVR) at discharge was a risk factor for reoperation (P < 0.001; HR 10.85; 95% CI 3.75-31.40). Conclusions: Repair of cAVSD carries low mortality, but a moderate reoperation rate. An optimal time for repair of the cAVSD is between 3 and 6 months of age. Repair prior to 3 months of age and the need for cleft closure were associated with a higher degree of LAVVR at discharge. Greater LAVVR at discharge is a risk factor for reoperation regardless of age at initial repair. In the current era, Down's syndrome is not a risk factor for reoperation. abstract_id: PUBMED:31065759 Preoperative Clinical and Echocardiographic Factors Associated with Surgical Timing and Outcomes in Primary Repair of Common Atrioventricular Canal Defect. In complete atrioventricular canal defect (CAVC), there are limited data on preoperative clinical and echocardiographic predictors of operative timing and postoperative outcomes. A retrospective, single-center analysis of all patients who underwent primary biventricular repair of CAVC between 2006 and 2015 was performed. Associated cardiac anomalies (tetralogy of Fallot, double outlet right ventricle) and arch operation were excluded. Echocardiographic findings on first postnatal echocardiogram were correlated with surgical timing and postoperative outcomes using bivariate descriptive statistics and multivariable logistic regression. 153 subjects (40% male, 84% Down syndrome) underwent primary CAVC repair at a median age of 3.3 (IQR 2.5-4.2) months. Median postoperative length of stay (LOS) was 7 (IQR 5-15) days. Eight patients (5%) died postoperatively and 24 (16%) required reoperation within 1 year. On multivariable analysis, small aortic isthmus (z score < - 2) was associated with early primary repair at < 3 months (OR 2.75, 95% CI 1.283-5.91) and need for early reoperation (OR 3.79, 95% CI 1.27-11.34). Preoperative ventricular dysfunction was associated with higher postoperative mortality (OR 7.71, 95% CI 1.76-33.69). Other factors associated with mortality and longer postoperative LOS were prematurity (OR 5.30, 95% CI 1.24-22.47 and OR 5.50, 95% CI 2.07-14.59, respectively) and lower weight at surgery (OR 0.17, 95% CI 0.04-0.75 and OR 0.55, 95% CI 0.35-0.85, respectively). Notably, preoperative atrioventricular valve regurgitation and Down syndrome were not associated with surgical timing, postoperative outcomes or reoperation, and there were no echocardiographic characteristics associated with late reoperation beyond 1 year after repair. Key preoperative echocardiographic parameters helped predict operative timing and postoperative outcomes in infants undergoing primary CAVC repair. Aortic isthmus z score < - 2 was associated with early surgical repair and need for reoperation, while preoperative ventricular dysfunction was associated with increased mortality. These echocardiographic findings may help risk-stratify patients undergoing CAVC repair and improve preoperative counseling and surgical planning.
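The odds ratios in PUBMED:31065759 come from multivariable logistic regression on preoperative variables. Purely as an illustration of that type of analysis (with hypothetical data and variable names, not the study's dataset), the odds ratios are the exponentiated coefficients of a fitted logistic model, for example with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical infant-level data: outcome = early repair (<3 months),
# predictors = small aortic isthmus (z score < -2) and weight at surgery.
df = pd.DataFrame({
    "early_repair":  [1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    "small_isthmus": [1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0],
    "weight_kg":     [3.9, 5.1, 4.8, 5.6, 4.2, 3.7, 5.3, 5.0, 4.4, 4.6, 4.0, 5.5, 5.2, 4.9, 4.1, 5.4],
})

X = sm.add_constant(df[["small_isthmus", "weight_kg"]])
fit = sm.Logit(df["early_repair"], X).fit(disp=0)
print(np.exp(fit.params))      # exponentiated coefficients = odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the odds-ratio scale
```

The published ORs (e.g., OR 2.75 for small isthmus and early repair) are the same quantity computed on the real cohort of 153 infants.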
abstract_id: PUBMED:18626209 Evaluation of surgical approaches and early and midterm results of treatment for atrioventricular septal defect Objectives: We evaluated patients who underwent complete or partial surgical correction for atrioventricular septal defect (AVSD) with regard to surgical techniques and early and midterm results. Study Design: Forty-six patients were treated for complete (n=28) or partial (n=18) AVSD between 2000 and 2007. There were nine boys and 19 girls (mean age 5.5 months; range 1.5 to 11 months) with complete AVSD. Of these, 17 patients underwent total repair, while 11 patients underwent palliative procedures. Five males and 13 females (mean age 11 years; range 1 to 50 years) with partial AVSD were treated with total repair. Down syndrome was seen in nine patients (32.1%) and one patient (5.6%) in complete and partial AVSD groups, respectively. Twenty-one patients (75%) and 14 patients (77.8%) could be followed-up for a mean of 26.3 months (range 1-72) and 21.8 months (range 2 to 71) in the two groups, respectively. Results: Total repair of partial AVSD resulted in no mortality or significant morbidity. Early postoperative mortality occurred in three cases (10.7%) after repair of complete AVSD, one of which had Down syndrome. Six patients required prolonged mechanical ventilation beyond one week. Two patients without Down syndrome underwent reoperation due to severe atrioventricular (AV) valve insufficiency in the early postoperative period. None of the patients required permanent pacemaker implantation. Clinical and echocardiographic monitoring showed moderate left AV valve insufficiency in three patients in each group, while the remaining patients had no or minimal insufficiency. Conclusion: Total repair of complete AVSD should be the procedure of choice in early infancy. Left AV valve insufficiency continues to be the most important cause of postoperative morbidity in these cases. abstract_id: PUBMED:31804135 Outcomes of surgical repair of complete atrioventricular canal defect in patients younger than 2 years of age. Background: Early surgical management of complete atrioventricular (AV) canal defect is the optimal treatment option. Since the published evidence on outcomes is inconclusive, we retrospectively studied the outcomes of patients in our institution. Objective: Study outcomes of complete AV canal repair. Design: Retrospective, descriptive. Settings: Single institute. Patients And Methods: Medical records of patients under 2 years of age who underwent complete AV canal repair from January 2004 to December 2014 were retrospectively reviewed. Main Outcome Measures: Pre- and postoperative morbidity and mortality. Sample Size: 140 patients. Result: The median (IQR) age at the time of surgery was 5.4 (3.9-8.2) months. Down syndrome was diagnosed in 98 (70%) of patients. AV valve regurgitation was found preoperatively in 129 (92%) and postoperatively in 135 (96%) patients. There was a significant association between preoperative pulmonary hypertension and the development of pulmonary hypertension in the postoperative period ( P=.04). Thirty-three patients needed reoperation. Arrhythmia was found in 19 patients, 16 of whom required pacemaker insertion. Seven patients died (5%). Conclusion: The presence of preoperative and postoperative AV valve regurgitation was common in this cohort but did not significantly affect patient survival. Our findings suggest an acceptable outcome for repair of complete AV septal defect with few complications postoperatively. 
Limitation: Retrospective design in a single institute. Conflict Of Interest: None. abstract_id: PUBMED:34350818 The effect of surgical technique, age, and Trisomy 21 on early outcome of surgical management of complete atrioventricular canal defect. Background: The optimal timing, surgical technique, and the influence of Trisomy 21 on the outcome of surgical repair of Complete Atrioventricular Canal Defect remains uncertain. We reviewed our experience in the repair of CAVC to identify the influence of these factors on operative outcomes. Methods: A prospective study included 70 patients who underwent repair of CAVC at our institute between July 2016 and October 2019. The primary endpoint was mortality and the secondary endpoint was the degree of left atrioventricular valve regurgitation. Results: No significant difference was noted between patients operated on in the first 6 months of age versus later, regarding mortality or LAVV regurgitation. Surgical repair by modified single-patch technique showed a significant reduction in bypass time (71.13 ± 13.507 min versus 99.19 ± 27.092 min, p-value = 0.001). Compared to closure of cleft only, posterior annuloplasty used for repair of LAVV resulted in a significant reduction in the occurrence of post-operative valve regurgitation during the early period (LAVV 2+: 43% versus 7%, p-value = 0.03) and at 6 months of follow-up (LAVV 2+: 35.4% versus 0%, p-value = 0.01), respectively. Conclusions: Early surgical intervention in the first 6 months of age in patients with CAVC gives acceptable results comparable to later repair; Trisomy 21 was not found to be a risk factor for early intervention. Repair of the common AV valve by cleft closure with posterior LAVV annuloplasty showed better results, with a significant decrease in post-operative LAVV regurgitation and early mortality in comparison to the closure of cleft only. abstract_id: PUBMED:30590597 Long-term results after surgical repair of atrioventricular septal defect. Objectives: We analysed our 29-year experience of surgical repair of atrioventricular septal defect (AVSD) to define risk factors for mortality and reoperation. Methods: Between 1988 and 2017, 508 patients received AVSD repair in our institution; 359 patients underwent surgery for complete AVSD, 76 for intermediate AVSD and 73 for partial AVSD. The median age of the patients was 6.1 months (interquartile range 10.3 months), and the median weight was 5.6 kg (interquartile range 3.2 kg). The standard AVSD repair was performed using 2-patch technique (n = 347) and complete cleft closure (n = 496). The results were divided into 2 surgical eras (early era 1986-2004 and late era 2004-2017). Risk factors were analysed to determine the impact of patient age, weight, the presence of trisomy 21 and complex AVSD on mortality and reoperation rate. Results: In-hospital mortality decreased from 10.2% (n = 26) in the early surgical era to 1.6% (n = 4) in the late surgical era (P < 0.001). Seventy-seven patients required reoperation. Freedom from reoperation was 84.4% after 25 years. The main indication for reoperation was left atrioventricular valve regurgitation (13.8%). The multivariable Cox regression analysis revealed reoperation of the left AV valve, early surgical era, patient age <3.0 months and complex AVSD to be independent risk factors for mortality. Age <3.0 months, complex AVSD and moderate/severe left AV valve regurgitation at discharge predicted reoperation. Conclusions: AVSD repair can be performed with low mortality and reoperation rate.
Age <3 months, complex AVSD and moderate/severe regurgitation of the left AV valve at discharge were predictors for reoperation. Reoperation of the left AV valve was the strongest risk factor for mortality. abstract_id: PUBMED:24532614 Surgical outcome of congenital heart disease in Down's syndrome. Background And Objective: only a few studies have compared surgical outcomes in children with and without Down's syndrome. The purpose of this study was to investigate the surgical outcome in children with Down's syndrome and congenital heart disease and to compare it with that in age- and sex-matched non-Down controls. Patients And Methods: the charts of 32 consecutive Down's syndrome patients who underwent surgery for congenital heart disease in a single center from January 2006 to May 2010, were compared with the data of 32 age- and sex-matched non-Down controls, focusing on comparative echocardiographic details, surgical outcome, and intensive care unit data. The data were compared using the 2-group t test. The children were followed up for a median period of 48 months. Results: the most common cardiac lesion in the study group was ventricular septal defect (41%) followed by atrioventricular septal defect (28%). Ventilation and intensive care unit stay were significantly longer in Down's syndrome children, with a trend towards more respiratory complications. At a median follow-up of 48 months, there were no cardiac events or mortality in the study group. Simple lesions were more common than atrioventricular septal defect in Down's syndrome. Conclusion: children with Down's syndrome can be operated on with negligible mortality and good functional outcome, but with a higher surgical morbidity. Answer: Based on the provided abstracts, the impact of Down syndrome on early postoperative outcomes following surgical repair of complete atrioventricular septal defects (AVSD) appears to be mixed. Some studies found no significant difference in the prevalence of atrioventricular valve dysplasia between children with and without Down syndrome undergoing complete AVSD repair (PUBMED:24201860). However, children with Down syndrome had a significantly higher prevalence of noncardiac complications in the early postoperative period than children without Down syndrome (PUBMED:24201860). Another study reported a higher rate of pacemaker implantation (PMI) requirements in Down syndrome patients, with risk factors including lower age and weight (PUBMED:36260102). Additionally, one study found that in the current era, Down syndrome is not a risk factor for reoperation (PUBMED:24057432), while another study indicated that Trisomy 21 was not found to be a risk factor for early intervention (PUBMED:34350818). Furthermore, some studies suggest that the presence of Down syndrome does not significantly affect patient survival or the need for reoperation (PUBMED:31804135), and that preoperative atrioventricular valve regurgitation and Down syndrome were not associated with surgical timing, postoperative outcomes, or reoperation (PUBMED:31065759). In contrast, one study noted that children with Down syndrome had longer ventilation and intensive care unit stays, with a trend towards more respiratory complications (PUBMED:24532614). In summary, while Down syndrome may be associated with a higher prevalence of noncardiac complications and possibly a higher rate of PMI, it does not consistently emerge as a risk factor for mortality, reoperation, or longer-term outcomes following surgical repair of complete AVSD. 
The variability in findings across studies suggests that while Down syndrome may present some unique challenges in the postoperative period, it does not uniformly worsen surgical outcomes.
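A technical footnote on the durability figures cited in these abstracts: "freedom from reoperation" at a given horizon (84.3% at 8 years in PUBMED:24057432, 84.4% after 25 years in PUBMED:30590597) is a Kaplan-Meier estimate computed from censored follow-up data. A minimal sketch of that computation with hypothetical follow-up times (not the published datasets) looks like this:

```python
from lifelines import KaplanMeierFitter

# Hypothetical follow-up: years until left AV valve reoperation (event = 1) or censoring (event = 0).
follow_up_years = [0.5, 1.2, 2.0, 3.5, 4.1, 5.0, 6.3, 7.0, 8.0, 8.0, 8.0, 8.0]
reoperation     = [1,   0,   1,   0,   0,   0,   1,   0,   0,   0,   0,   0]

kmf = KaplanMeierFitter()
kmf.fit(durations=follow_up_years, event_observed=reoperation, label="freedom from reoperation")
print(kmf.survival_function_)  # step function of the Kaplan-Meier estimate
print(kmf.predict(8.0))        # estimated freedom from reoperation at 8 years
```

Reading the published percentages as Kaplan-Meier estimates also explains why they can be quoted at horizons longer than many patients' actual follow-up.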
Instruction: Does laser ablation prostatectomy lead to oncological compromise? Abstracts: abstract_id: PUBMED:18782304 Does laser ablation prostatectomy lead to oncological compromise? Objective: To assess the incidence and outcome of incidental prostate cancer detected at transurethral resection of the prostate (TURP), and to evaluate whether laser ablation prostatectomy would miss significant cancer by failing to provide tissue for histopathological analysis. Patients And Methods: Information on TURP-detected prostate cancer was gathered from 1996 to 2006, from The South-west Cancer Intelligence Service, hospital-operating and coding records, histopathology databases and The British Association of Urological Surgeons Cancer Registry. We recorded the total number of prostate cancers diagnosed per year, number of TURPs performed, Gleason scores and patients outcomes. Results: TURP-detected prostate cancer has declined since the relatively high rates (22%) recorded locally in 1996-97. Between 2001 and 2006, a mean (range) of 124 (111-135) prostate cancers were detected per year. Incidental cancers accounted for only 1.5-5.6% of all newly diagnosed prostate cancers per year. Incidental cancers had a mean (sem) Gleason score of 5.7 (0.3) compared to 8.0 (0.3) in known cancers (P < 0.01) undergoing TURP. Of newly diagnosed patients, 82% were allocated to active surveillance, whilst 18% were started on hormone therapy, with no prostate cancer-related deaths over a mean (sem, range) follow-up of 49.7 (2.4, 11-81) months. Conclusions: TURP mainly samples transitional-zone tissue where tumours are relatively uncommon, and have a good prognosis. Our series of incidental TURP-detected cancers showed an incidence in keeping with published data, and favourable histological and clinical outcomes. We suggest the lack of tissue should not discourage the use of laser prostatectomy surgery. abstract_id: PUBMED:31754772 Dependence of laser-induced tissue ablation on optical fiber movements for laser prostatectomy. Purpose: The aim of the current study was to identify the efficient fiber movements for 532-nm laser prostatectomy. Materials And Methods: 532-nm Lithium triborate (LBO) laser light was tested on 120 kidney tissues at three different translational speeds (TS 1, 2, and 4 mm/s) and four different rotational speeds (RS 0.5, 1.0, 1.6, and 2.1 rad/s). The applied power was 120 W at a 2-mm working distance and 60° sweeping angle. Ablation rate and dimensions of resulting ablation craters were measured. Results: Slower TSs and RSs created deeper and wider ablation craters with thinner coagulation, leading to more efficient ablation performance. Maximal ablation rate was achieved at a TS of 2 mm/s and RSs of 0.5 and 1.0 rad/s. An RS of 0.5 rad/s accompanied surface carbonization for all the TSs. Irrespective of TS, ablation rate became saturated at faster RSs than 1.0 rad/s. Faster TSs or RSs reduced tissue ablation, but increased thermal coagulation due to a shorter interaction time. Conclusions: Optimal ablation efficiency occurred at a TS of 2 mm/s and a RS of 1.0 rad/s with a thin coagulation of around 1.0 mm and no or minimal carbonization. Further studies will validate the current findings with prostate tissue and high-power levels for laser prostatectomy. abstract_id: PUBMED:27541586 MRI-guided focal laser ablation for prostate cancer followed by radical prostatectomy: correlation of treatment effects with imaging. 
Purpose: To correlate treatment effects of MRI-guided focal laser ablation in patients with prostate cancer with imaging using prostatectomy as standard of reference. Methods: This phase I study was approved by the Institutional Review Board. Three weeks prior to prostatectomy, five patients with histopathologically proven, low/intermediate grade prostate cancer underwent transrectal MRI-guided focal laser ablation. Per patient, only one ablation was performed to investigate the effect of ablation on the tissue rather than the effectiveness of ablation. Ablation was continuously monitored with real-time MR temperature mapping, and damage-estimation maps were computed. A post-ablation high-resolution T1-weighted contrast-enhanced sequence was acquired. Ablation volumes were contoured and measured on histopathology specimens (with a shrinkage factor of 1.15), T1-weighted contrast-enhanced images, and damage-estimation maps, and were compared. Results: A significant volume correlation was seen between the ablation zone on T1-weighted contrast-enhanced images and the whole-mount histopathology section (r = 0.94, p = 0.018). The damage-estimation maps and histopathology specimen showed a correlation of r = 0.33 (p = 0.583). On histopathology, the homogeneous necrotic area was surrounded by a reactive transition zone (1-5 mm), showing neovascularisation and an increased mitotic index, indicating increased tumor activity. Conclusions: The actual ablation zone was better indicated by T1-weighted contrast-enhanced images than by damage-estimation maps. Histopathology results highlight the importance of complete tumor ablation with a safety margin. abstract_id: PUBMED:37743895 Right coronary artery compromise following radiofrequency catheter ablation for supraventricular tachycardia: case reports. Background: Coronary compromise is a serious potential complication following catheter ablation; however, procedural details in the literature are often lacking, preventing the identification of learning opportunities. Case Summary: We report two cases of right coronary compromise following catheter ablation for symptomatic supraventricular tachycardia. After radiofrequency energy delivery at the coronary sinus ostium in both cases, inferior lead ST-elevation was observed. Diagnostic coronary angiography identified an occluded posterior left ventricular branch of the coronary artery, and optical coherence tomography demonstrated a high thrombus burden at this location. Electrocardiographic ST-segments settled with implantation of a drug-eluting stent. Discussion: Coronary compromise was likely secondary to energy delivery during catheter ablation. This case series highlights the need for electrophysiologists to understand coronary anatomy relative to anatomical landmarks, to anticipate the risk of vascular injury, as physical distance from the site of ablation is likely important. Risk for coronary compromise, while a rare complication, needs to be discussed with patients during the consenting process. We also demonstrate the importance of an efficient multi-disciplinary team process for managing acute procedural complications. abstract_id: PUBMED:37247118 Robot-assisted radical prostatectomy following holmium laser enucleation of the prostate: perioperative, functional, and oncological outcomes. Robot-assisted radical prostatectomy with previous holmium laser enucleation of the prostate is challenging, and few studies have analyzed its perioperative, functional, and oncological outcomes.
Here we retrospectively evaluated 298 robot-assisted radical prostatectomies, including 25 with and 273 without previous holmium laser enucleation of the prostate, performed in 2015-2022. Regarding perioperative outcomes, operative and console times were significantly longer in the previous holmium laser enucleation of the prostate group. In contrast, the estimated blood loss was similar between groups, and there were no transfusions or intraoperative complications. Multivariable Cox hazard regression analysis of the functional outcomes of postoperative urinary continence showed that body mass index, intraoperative bladder neck repair, and nerve sparing were independently associated factors, whereas a history of holmium laser enucleation of the prostate was not. Similarly, a history of holmium laser enucleation of the prostate was not associated with biochemical recurrence; however, positive surgical margins and seminal vesicle invasion were independent risk factors of biochemical recurrence. Our findings revealed that robot-assisted radical prostatectomy after holmium laser enucleation of the prostate was safe and raised no concerns of postoperative urinary incontinence or biochemical recurrence. Therefore, robot-assisted radical prostatectomy may be a treatment option for patients with prostate cancer after holmium laser enucleation of the prostate. abstract_id: PUBMED:25164484 Photoactive dye-enhanced tissue ablation for endoscopic laser prostatectomy. Background And Objective: Laser light has been widely used as a surgical tool to treat benign prostate hyperplasia (BPH) over 20 years. Recently, application of high laser power up to 200 W was often reported to swiftly remove a large amount of prostatic tissue. The purpose of this study was to validate the feasibility of photoactive dye injection to enhance light absorption and eventually to facilitate tissue vaporization with low laser power. Materials And Methods: Chicken breast tissue was selected as a target tissue due to minimal optical absorption at the visible wavelength. Four biocompatible photoactive dyes, including amaranth (AR), black dye (BD), hemoglobin powder (HP), and endoscopic marker (EM), were selected and tested in vitro with a customized 532 nm laser system with radiant exposure ranging from 0.9 to 3.9 J/cm(2) . Light absorbance and ablation threshold were measured with UV-Vis spectrometer and Probit analysis, respectively, and compared to feature the function of the injected dyes. Ablation performance with dye-injection was evaluated in light of radiant exposure, dye concentration, and number of injection. Results: Higher light absorption by injected dyes led to lower ablation threshold as well as more efficient tissue removal in the order of AR, BD, HP, and EM. Regardless of the injected dyes, ablation efficiency principally increased with radiant exposure, dye concentration, and number of injection. Among the dyes, AR created the highest ablation rate of 44.2 ± 0.2 µm/pulse due to higher absorbance and lower ablation threshold. High aspect ratios up to 7.1 ± 0.4 entailed saturation behavior in the tissue ablation injected with AR and BD, possibly resulting from plume shielding and increased scattering due to coagulation. Preliminary tests on canine prostate with a hydraulic injection system demonstrated that 80 W with dye injection yielded comparable ablation efficiency to 120 W with no injection, indicating 33% reduced laser power with almost equivalent performance. 
Conclusion: Due to efficient coupling of optical energy, pre-injection of photoactive dyes promoted the degree of tissue removal during laser irradiation. Further studies will investigate the spatial distribution of dyes and the optimal injection pressure to govern the extent of dye-assisted ablation in a predictable manner. In-depth comprehension of photoactive dye-enhanced tissue ablation can help accomplish efficient and safe laser vaporization for BPH with low-power application. abstract_id: PUBMED:27363841 Self-aliquoting micro-grooves in combination with laser ablation-ICP-mass spectrometry for the analysis of challenging liquids: quantification of lead in whole blood. We present a technique for the fast screening of the lead concentration in whole blood samples using laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). The whole blood sample is deposited on a polymeric surface and wiped across a set of micro-grooves previously engraved into the surface. The engraving of the micro-grooves was accomplished with the same laser system used for LA-ICP-MS analysis. In each groove, a part of the liquid blood is trapped, and thus, the sample is divided into sub-aliquots. These aliquots dry almost instantly and are then investigated by means of LA-ICP-MS. For quantification, external calibration against aqueous standard solutions was relied on, with iron as an internal standard to account for varying volumes of the sample aliquots. The ²⁰⁸Pb/⁵⁷Fe nuclide ratio used for quantification was obtained via a data treatment protocol so far only used in the context of isotope ratio determination involving transient signals. The method presented here was shown to provide reliable results for Recipe ClinChek® Whole Blood Control levels I-III (nos. 8840-8842), with a repeatability of typically 3% relative standard deviation (n = 6, for Pb at 442 μg L⁻¹). Spiked and non-spiked real whole blood was analysed as well, and the results were compared with those obtained via dilution and sector-field ICP-MS. A good agreement between both methods was observed. The detection limit (3 s) for lead in whole blood was established to be 10 μg L⁻¹ for the laser ablation method presented here. Graphical Abstract: Micro-grooves are filled with whole blood, dried, and analyzed by laser ablation ICP-mass spectrometry. Note that the laser moves in a perpendicular direction with respect to the micro-grooves. abstract_id: PUBMED:22899315 Angular effect of optical fiber movement on endoscopic laser prostatectomy. Background And Objective: Optimal fiber manipulation during laser prostatectomy has been highlighted as a critical element in achieving desirable clinical outcomes. However, scientific understanding of the physical interplay between fiber movement and ablative tissue response is still lacking. The objective of this study was to quantitatively investigate the effect of angular movement of an optical fiber on tissue ablation performance. Study Design/Materials And Methods: Porcine kidney was employed as a tissue model in vitro. A 180 W 532 nm surgical laser with 750 µm side-firing fibers was utilized to mimic clinical laser prostatectomy. The effect of fiber manipulation parameters, such as irradiance, number of overlapping pulses (OP), and beam path length (BPL), on the tissue was assessed at various fiber sweeping (rotational) angles ranging from 0° to 120°.
Morphological properties of the post-irradiated tissue were also evaluated in light of ablation depth, coagulative necrosis, and volumetric ablation density (VAD). Results: As sweeping angle (SA) increased, both laser irradiance and number of OP decreased but BPL increased. Ablation depth was maximized (5.4 ± 1.0 mm) at SAs less than 30° but decreased at higher SAs. The SAs of 15° and 30° demonstrated the minimal thickness of denatured tissue (0.74 ± 0.14 mm) and the minimal VAD (total laser energy per ablation volume ≈ 4.6 ± 0.46 J/mm³). Decreasing depth and increasing tissue coagulation associated with increasing SA resulted from substantial reduction in both beam irradiance and number of OP, eventually impeding the ablation process. Excessive tissue denaturation also occurred when no rotational motion was applied to the fiber, possibly due to plume shielding. Conclusion: Inefficient tissue ablation could lead to adverse post-operative complications due to unwanted thermal injury to peripheral tissue. An SA of 30° was found to be desirable for effective tissue ablation, and further clinical investigations will validate the current findings. abstract_id: PUBMED:7542966 A comparison of transurethral prostatectomy with visual laser ablation of the prostate using the Urolase right-angle fiber for the treatment of BPH. The use of right-angle laser fibers for the treatment of benign prostatic hyperplasia (BPH) has gained widespread acceptance over the past several years. The number of right-angle fibers introduced into the marketplace has continued to grow, but most fibers have not been evaluated thoroughly with properly designed clinical trials. The Urolase fiber has undergone the most extensive clinical trials conducted to date. This report reviews the significant clinical trials with particular emphasis on the comparison of transurethral prostatectomy with visual laser ablation of the prostate. From these initial results, it would appear that laser ablation using the Urolase fiber is a safe and effective surgical therapy for men with symptomatic BPH. From our analysis, it would appear that transurethral prostatectomy provides a greater degree of symptomatic improvement than visual laser ablation of the prostate. It is important to consider that these are some of the earliest clinical trials assessing the efficacy of laser prostatectomy and that technological advancements are likely to improve the clinical outcome. abstract_id: PUBMED:20346578 Focal laser ablation for prostate cancer followed by radical prostatectomy: validation of focal therapy and imaging accuracy. An increased incidence of low-risk prostate cancer (PCa) has led investigators to develop focal therapy as a management option for PCa. We evaluated the effects of focal laser ablation (FLA) on PCa tissue and the accuracy of magnetic resonance imaging (MRI) in determining ablated lesion volume by comparing the whole-mount histology and MRI in four patients who underwent FLA followed by radical prostatectomy. Ablated areas were characterized by homogeneous coagulation necrosis. The MRI-calculated ablated volume correlated well with histopathology. We found that FLA creates confluent ablation with no evidence of viable cells in treated regions. Postablation MRI is able to determine the ablation zone accurately. Answer: Based on the provided abstracts, laser ablation prostatectomy does not appear to lead to oncological compromise.
The abstract from PUBMED:18782304 indicates that transurethral resection of the prostate (TURP), which is a traditional method for prostate tissue sampling, mainly samples transitional-zone tissue where tumors are relatively uncommon and have a good prognosis. The study found that incidental TURP-detected cancers had a lower mean Gleason score compared to known cancers and that the majority of newly diagnosed patients were allocated to active surveillance with no prostate cancer-related deaths over the follow-up period. This suggests that even if laser ablation prostatectomy fails to provide tissue for histopathological analysis, it may not result in significant oncological compromise due to the low incidence and favorable outcomes of incidental cancers in the transitional zone. Furthermore, the abstract from PUBMED:37247118 discusses robot-assisted radical prostatectomy following holmium laser enucleation of the prostate and concludes that this procedure is safe and does not raise concerns of postoperative urinary incontinence or biochemical recurrence. This implies that even after laser ablation, subsequent radical prostatectomy can be performed without compromising oncological outcomes. The other abstracts provided (PUBMED:31754772, PUBMED:27541586, PUBMED:37743895, PUBMED:25164484, PUBMED:27363841, PUBMED:22899315, PUBMED:7542966, PUBMED:20346578) do not directly address the question of oncological compromise following laser ablation prostatectomy. They focus on technical aspects of laser ablation, imaging accuracy, and procedural outcomes in different contexts. In conclusion, the evidence from the abstracts suggests that laser ablation prostatectomy does not lead to oncological compromise and may be a viable option for patients with prostate conditions, including those with incidental prostate cancer detected during TURP.
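As a worked illustration of the volume comparison described in the phase I MRI-guided focal laser ablation study above: histopathology volumes are first scaled by the reported fixation shrinkage factor of 1.15 and then correlated with the MRI-contoured volumes. The minimal sketch below assumes Python with SciPy; the five volume pairs are hypothetical placeholders, not the study's measurements.

```python
from scipy.stats import pearsonr

SHRINKAGE_FACTOR = 1.15  # fixation shrinkage correction quoted in the abstract

def correct_for_shrinkage(histology_volume_cc: float) -> float:
    """Scale a volume measured on fixed histopathology back toward its in vivo size."""
    return histology_volume_cc * SHRINKAGE_FACTOR

# Hypothetical per-patient ablation volumes in cc (illustrative only).
mri_volumes = [2.1, 1.4, 3.0, 0.9, 1.8]        # contoured on T1-weighted contrast-enhanced MRI
histology_volumes = [1.7, 1.2, 2.6, 0.8, 1.5]  # measured on whole-mount sections before correction

corrected = [correct_for_shrinkage(v) for v in histology_volumes]
r, p = pearsonr(mri_volumes, corrected)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # the study itself reported r = 0.94, p = 0.018
```

Note that the shrinkage factor rescales every histology volume by the same constant, so it changes the absolute volume agreement but not the correlation coefficient itself.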
Instruction: Scatter radiation from chest radiographs: is there a risk to infants in a typical NICU? Abstracts: abstract_id: PUBMED:19997727 Scatter radiation from chest radiographs: is there a risk to infants in a typical NICU? Objective: To evaluate the dose of scatter radiation to infants in a NICU in order to determine the minimal safe distance between isolettes. Materials And Methods: Dose secondary to scattered radiation from an acrylic phantom exposed to vertical and horizontal beam exposures at 56 kVp was measured at 93 cm and 125 cm from the center of the phantom. This corresponds to 2 and 3 ft between standard isolettes, respectively. For horizontal exposures, the dosimeter was placed directly behind a CR plate and scatter dose at 90-degrees and 135-degrees from the incident beam was also measured. Exposures were obtained at 160 mAs and the results were extrapolated to correspond to 2.5 mAs. Four measurements were taken at each point and averaged. Results: At 125 cm and 93 cm there was minimal scatter compared to daily natural background radiation dose (8.493 microGy). Greatest scatter dose obtained from a horizontal beam exposure at 135 degrees from the incident beam was still far below background radiation. Conclusion: Scatter radiation dose from a single exposure as well as cumulative scatter dose from numerous exposures is significantly below natural background radiation. Infants in neighboring isolettes are not at added risk from radiation scatter as long as the isolettes are separated by at least 2 ft. abstract_id: PUBMED:29442153 Quantification of scatter radiation from radiographic procedures in a neonatal intensive care unit. Background: In a neonatal intensive care unit (NICU), preterm infants are often exposed to a large number of radiographic examinations, which could cause adjacent neonates, family caregivers and staff members to be exposed to a dose amount due to scatter radiation. Objective: To provide information on scatter radiation exposure levels in a NICU, to compare these values with the effective dose limits established by the European Union and to evaluate the effectiveness of radiation protection devices in this setting. Materials And Methods: Radiation exposure levels due to scatter radiation were estimated by passive detectors (thermoluminescent dosimeters) and direct dosimetric measurements (with a dose rate meter); in the latter case, an angular map of the scatter dose distribution was achieved. Results: The dose due to scatter radiation to staff in our setting is approximately 160 μSv/year, which is markedly lower than the effective dose limit for workers established by the European Union (20 mSv/year). The doses range between 0.012 and 0.095 μSv/radiograph. Considering a mean hospitalization period of 3 months and our NICU workload, the corresponding scatter radiation dose to an adjacent patient and/or his/her caregiver is at most 40 μSv. Conclusion: For distances greater than 1 m from the irradiation field, both scatter dose absorbed by a staff member during a year and that by an adjacent patient and/or his/her caregiver during hospitalization is less than 1 mSv, which is the exposure limit for public members in a year. abstract_id: PUBMED:32986183 Evaluation of dose reduction potential in scatter-corrected bedside chest radiography using U-net. Bedside radiography has increasingly attracted attention because it allows for immediate image diagnosis after X-ray imaging. Currently, wireless flat-panel detectors (FPDs) are used for digital radiography. 
However, adjusting the alignment of the X-ray tube and FPD is an extremely difficult task. Furthermore, to prevent poor image quality caused by scattered X-rays, scatter removal grids are commonly used. In this study, we proposed a scatter-correction processing method to reduce the radiation dose compared with that required with an X-ray grid, for the segmentation of a mass region using deep learning during bedside chest radiography. A chest phantom and an acrylic cylinder simulating the mass were utilized to verify the image quality of the scatter-corrected chest X-rays with a low radiation dose. In addition, we used the peak signal-to-noise ratio and structural similarity to quantitatively assess the quality of the low radiation dose images compared with normal grid images. Furthermore, U-net was used to segment the mass region during the scatter-corrected chest X-ray with a low radiation dose. Our results showed that when scatter correction is used, an image with a quality equivalent to that obtained by grid radiography is produced, even when the imaging dose is reduced by approximately 20%. In addition, image contrast was improved using scatter radiation correction as opposed to using scatter removal grids. Our results can be utilized to further develop bedside chest radiography systems with reduced radiation doses. abstract_id: PUBMED:34148897 Usefulness of Combining Post-processing Scatter Correction and an Anti-scatter Grid in Chest Standing Radiography. Purpose: The aim of this study was to evaluate the usefulness of combining post-processing scatter correction (IG) and an anti-scatter grid (RG) in chest radiography. Method: To determine the combination protocol (Hyb) that was closest to RG 12:1 (RG12), we measured the content rate of scattered radiation for each combination (RG12, IG12, RG3-12+IG3-12). Task-based modulation transfer function (MTF_Task) and SDNR were evaluated using RG12, IG12, and Hyb. Additionally, seven radiologists performed visual evaluation using a chest phantom. Result: The protocol of Hyb was RG8+IG3. In SDNR, Hyb (RG8+IG3) was equal to or higher than RG12, and MTF_Task was equal in all grid systems. Hyb (RG8+IG3) was significantly superior to RG12 in visual evaluation. Conclusion: Combining post-processing scatter correction with an anti-scatter grid should be useful for improving inspection throughput and reducing the risk of grid damage. abstract_id: PUBMED:36018348 A phantom study of a protective trolley for neonatal radiographic imaging: new equipment to protect the operator from scatter radiation. Chest radiography is commonly performed as a diagnostic tool for neonatal diseases. Contact-based radiation personal protective equipment (RPPE) has been widely used for radiation protection, but it does not provide full body protection and it is often shared between users, which has become a major concern during the coronavirus disease 2019 (COVID-19) pandemic. To address these issues, we developed a novel trolley to protect radiographers against X-ray radiation by reducing scatter radiation during neonatal radiographic examinations. We measured the scatter radiation doses from a standard neonatal chest radiograph to the radiosensitive organs using a phantom operator in three protection scenarios (trolley, radiation personal protective equipment [RPPE], no protection) and at three distances.
The results showed that the scatter radiation surface doses were significantly reduced when using the trolley compared with RPPE and with no protection at a short distance (P<0.05 for both scenarios in all radiosensitive organs). The novel protective trolley provides a non-contact protective tool for radiographers against the hazard of scatter radiation during neonatal radiography examinations. abstract_id: PUBMED:36656425 Determination of scattered radiation dose for radiological staff during portable chest examinations of COVID-19 patients. The COVID-19 pandemic has resulted in a large increase in the number of patients admitted to hospitals. Radiological technologists (RTs) are often required to perform portable chest X-ray radiography on these patients. Normally, when performing a portable X-ray, radiation protection equipment is critical as it reduces the scatter radiation dose to hospital workers. However, during the pandemic, the use of a lead shield caused a heavy weight burden on workers who were responsible for a large number of patients. This study aimed to investigate scatter radiation doses received at various distances, directions, and positions. Radiation measurements were performed using the PBU-60 whole body phantom to determine scatter radiation doses at 100-200 cm and eight different angles around the phantom. The tests were conducted with and without lead shielding. Additionally, the doses were compared using the paired t test (p < 0.005) to determine positions at which workers without lead protection would still meet radiation safety requirements. Across all 40 tests, the highest and lowest scatter radiation doses were 1285.5 nGy at 100 cm in the anteroposterior (AP) semi-upright position and 134.7 nGy at 200 cm in the prone position, respectively. Correlation analysis between the dosimeter measurements and the calculated inverse square law showed good correlation, with an R² value of 0.99. Without lead shielding, RTs must stay at a distance greater than 200 cm from patients for both vertical and horizontal beams to minimize scatter exposure. This would allow for an alternative way of performing portable chest radiography for COVID-19 patients without requiring lead shielding. abstract_id: PUBMED:34497783 Clinical Characteristics and Outcomes Until 2 Years of Age in Preterm Infants With Typical Chest Imaging Findings of Bronchopulmonary Dysplasia: A Propensity Score Analysis. Objective: The goal of the current study was to assess the associations of typical chest imaging findings of bronchopulmonary dysplasia (BPD) in preterm infants with clinical characteristics and outcomes until 2 years of age. Method: This retrospective cohort study enrolled 256 preterm infants with BPD who were admitted between 2014 and 2018. A propensity score analysis was used to adjust for confounding factors. The primary outcomes were the severity of BPD, home oxygen therapy (HOT) at discharge and mortality between 28 days after birth and 2 years of age. A multivariate logistic regression analysis was performed to identify variables related to mortality. Results: Seventy-eight patients with typical chest imaging findings were enrolled, of which 50 (64.1%) were first found by CXR, while 28 (35.9%) were first found by CT. In addition, 85.9% (67/78) were discovered before 36 weeks postmenstrual age (PMA) (gestational age [GA] < 32 weeks) or before 56 days after birth (GA > 32 weeks). After propensity score matching, the matched groups consisted of 58 pairs of patients.
Those with typical imaging findings had a remarkably higher mortality rate (29.3% vs. 12.1%, p = 0.022, OR 3.021), a higher proportion of severe BPD (32.8% vs. 12.1%, p = 0.003, OR 4.669) and a higher rate of HOT at discharge (74.1% vs. 46.6%, p = 0.002, OR 3.291) than those without typical imaging findings. The multivariate logistic regression analysis showed that typical imaging findings ≤ 7 days and typical imaging findings >7 days were independent risk factors for mortality in preterm infants with BPD (OR 7.794, p = 0.004; OR 4.533, p = 0.001). Conclusions: More attention should be given to chest imaging findings of BPD, especially in the early stage (within 7 days). Early recognition of the development of BPD helps early individualized treatment of BPD. Clinical Trial Registration: www.ClinicalTrials.gov, identifier: NCT04163822. abstract_id: PUBMED:23275433 Intervention minimizing preterm infants' exposure to NICU light and noise. Neonatal intensive care unit (NICU) light and noise may be stressful to preterm infants. This research evaluated the physiological stability of 54 infants born at 28- to 32-weeks' gestational age while wearing eye goggles and earmuffs for a 4-hour period in the NICU. Infants were recruited from four NICUs of university-affiliated hospitals and randomized to the intervention-control or control-intervention sequences. Heart rate (HR), heart rate variability (HRV), and oxygen saturation (O2 sat) were collected using the Somté™ device. Confounding variables such as position and handling were assessed by videotaping infants during the study periods. Results indicated that infants had more stress responses while wearing eye goggles and earmuffs, since maximum HR was found to be significantly higher and high-frequency power of HRV significantly lower during the intervention as compared with the control period. Therefore, this intervention is not recommended for clinical practice. abstract_id: PUBMED:36774728 Associations between feeding and development in preterm infants in the NICU and throughout the first year of life. Background: There is little published evidence regarding associations between feeding and development in preterm infants which could help identify infants most needing follow-up services. Aims: To determine if preterm infant feeding and development were predictable throughout the first year of life and identify associations with maternal factors, neonatal factors, and socioeconomic measures. Study Design: Prospective single-site study of the feeding and development of extremely and very preterm infants at three time points throughout the first year of life. Subjects: Infants <32 weeks gestational age were followed from neonatal intensive care unit (NICU) discharge (DC) until 12 months corrected gestational age (CGA). Outcome Measures: Feeding and development were evaluated at NICU DC, 3 months and 12 months CGA. Maternal health, infant health, and socioeconomic measures were also recorded. Results: Significant differences were found between assessments for feeding and development at each of the three time points: NICU DC (p = 0.026), 3 months CGA (p = 0.001), and 12 months CGA (p = 0.000); however, no associations were found between feeding and development at NICU DC and 12 months CGA (p = 0.137). Of the maternal factors determined to be significant, none were consistent enough to be considered relevant.
Conclusions: This study demonstrated that preterm infants with typical feeding and development at DC may go on to develop concerns in these areas, and those who scored abnormally at DC may perform typically during the first year of life. This study affirms the importance of NICU follow-up services to support feeding and development for all infants born <32 weeks gestation. abstract_id: PUBMED:35612644 Efficacy of the scatter correction algorithm in portable chest radiography. Purpose: Portable chest radiographs (CXRs) continue to be a vital diagnostic tool for emergency and critical care medicine. The scatter correction algorithm (SCA) is a post-processing algorithm aiming to reduce scatter within portable images. This study aimed to assess whether the SCA improved image quality (IQ) in portable CXRs. Methods: Objective and subjective IQ assessments were undertaken on both phantom and clinical images, respectively. For objective analysis, attenuators were placed on the anterior surface of the patient's thorax to simulate pathologies present within uniform regions of the phantom's lung and heart. Phantom CXRs were acquired with three different tube-current-times (mAs). Phantom images were processed with different SCA strengths. Contrast to noise ratios (CNR) within the attenuator were determined for each algorithm strength and compared to non-SCA images. For subjective analysis, two independent radiologists graded 30 clinical images with and without the SCA activated. The images were graded for IQ in different anatomical structures and overall diagnostic confidence. Results: Objectively, most strengths of the SCA improved the CNR in both regions. However, a detrimental effect was recorded for some algorithm strengths in regions of high contrast. Subjectively, both observers recorded the SCA significantly improved IQ in clinical CXRs in all anatomical regions. Observers indicated the greatest improvement in the lung and hilar regions, and least improvement in the chest wall and bone. All images with and without the SCA were deemed diagnostic. Conclusion: This study shows the potential radiation dose neutral IQ improvement when using an SCA in clinical patient CXRs. Answer: Scatter radiation from chest radiographs in a typical Neonatal Intensive Care Unit (NICU) has been evaluated to determine if there is a risk to infants. The studies have shown that the dose of scatter radiation to infants in a NICU is minimal compared to daily natural background radiation dose. For example, at distances of 125 cm and 93 cm from the center of an acrylic phantom during radiographic exposure, the scatter radiation was found to be significantly below natural background radiation, indicating that infants in neighboring isolettes are not at added risk from radiation scatter as long as the isolettes are separated by at least 2 feet (PUBMED:19997727). Further research has quantified scatter radiation exposure levels in a NICU and compared these values with the effective dose limits established by the European Union. The results showed that the dose due to scatter radiation to staff is approximately 160 μSv/year, which is markedly lower than the effective dose limit for workers (20 mSv/year). For adjacent patients and/or their caregivers, the scatter radiation dose during a mean hospitalization period of 3 months is at most 40 μSv, which is less than the exposure limit for public members in a year (1 mSv) (PUBMED:29442153). 
In conclusion, based on the available evidence, scatter radiation from chest radiographs in a typical NICU does not pose a significant risk to infants, provided that appropriate distances are maintained between the isolettes and radiographic examinations are conducted following safety protocols.
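Two pieces of arithmetic sit behind this conclusion: scatter dose falls off roughly with the inverse square of distance (the portable-radiography study, PUBMED:36656425, reports an R² of 0.99 against this model), and cumulative exposure is a per-radiograph dose multiplied by the number of exposures, compared against the 1 mSv annual public limit. The sketch below works through both in Python; the exposure count is an assumed workload chosen only for illustration, not a figure taken from the abstracts.

```python
def dose_at_distance(dose_ref_ngy: float, ref_cm: float, target_cm: float) -> float:
    """Scale a scatter dose to a new distance assuming inverse-square fall-off from a point source."""
    return dose_ref_ngy * (ref_cm / target_cm) ** 2

# Highest reported scatter dose: 1285.5 nGy at 100 cm (PUBMED:36656425).
print(f"{dose_at_distance(1285.5, 100, 200):.0f} nGy predicted at 200 cm in the same direction")

# Cumulative dose to an adjacent patient or caregiver (PUBMED:29442153 reports
# 0.012-0.095 uSv per radiograph); 420 exposures over 3 months is a hypothetical workload.
per_film_usv = 0.095        # worst-case per-radiograph scatter dose
assumed_exposures = 420     # assumed NICU workload over a 3-month stay (illustrative only)
cumulative_usv = per_film_usv * assumed_exposures
public_limit_usv = 1000     # 1 mSv annual limit for members of the public
print(f"{cumulative_usv:.0f} uSv cumulative vs {public_limit_usv} uSv annual public limit")
```

Even under the worst-case per-film dose, the assumed three-month workload stays well below the public limit, which is consistent with the roughly 40 μSv figure quoted above.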
Instruction: Does imprint cytology improve the accuracy of transrectal prostate needle biopsy? Abstracts: abstract_id: PUBMED:24964902 Does imprint cytology improve the accuracy of transrectal prostate needle biopsy? Objective: To evaluate the accuracy of imprint cytology of core needle biopsy specimens in the diagnosis of prostate cancer. Methods: Between December 24, 2011 and May 9, 2013, patients with an abnormal DRE and/or serum PSA level of >2.5 ng/mL underwent transrectal prostate needle biopsy. Samples with positive imprint cytology but negative initial histologic exam underwent repeat sectioning and histological examination. Results: 1,262 transrectal prostate needle biopsy specimens were evaluated from 100 patients. Malignant imprint cytology was found in 236 specimens (18.7%), 197 (15.6%) of which were confirmed by histologic examination, giving an initial 3.1% (n = 39) rate of discrepant results by imprint cytology. Upon repeat sectioning and histologic examination of these 39 biopsy samples, 14 (1.1% of the original specimens) were then diagnosed as malignant, 3 (0.2%) as atypical small acinar proliferation (ASAP), and 5 (0.4%) as high-grade prostatic intraepithelial neoplasia (HGPIN). Overall, 964 (76.4%) specimens were negative for malignancy by imprint cytology. Seven (0.6%) specimens were benign by cytology but malignant cells were found on histological evaluation. On imprint cytology examination, nonmalignant but abnormal findings were seen in 62 specimens (4.9%). These were all due to benign processes. After reexamination, the accuracy, sensitivity, specificity, positive predictive value, negative predictive value, false-positive rate, and false-negative rate of imprint preparations were 98.1%, 96.9%, 98.4%, 92.8%, 99.3%, 1.6%, and 3.1%, respectively. Conclusion: Imprint cytology is a valuable tool for evaluating TRUS-guided core needle biopsy specimens from the prostate. Use of imprint cytology in combination with histopathology increases diagnostic accuracy when compared with histopathologic assessment alone. abstract_id: PUBMED:23112457 Touch imprint cytology of prostate core needle biopsy specimens: A useful method for immediate reporting of prostate cancer. Background: Cytology plays an important role in the preoperative assessment of many cancers. It is used as a first-line pathological investigation for both screening and diagnostic purposes. Aims: To determine the diagnostic value and accuracy of touch imprint cytology (TIC) smears of prostate core needle biopsy (CNB) specimens in the diagnosis of prostate carcinoma. Materials And Methods: One hundred and twenty-one patients had ultrasound-guided transrectal prostate CNB. A total of 1210 TIC smears were prepared from all CNB specimens. Results: Diagnoses of 1210 TIC smears were compared with the histopathological findings of the CNB specimens. One hundred and seventy (14%) TIC smears were found positive for malignancy, 35 (2.9%) were diagnosed as suspicious for malignancy and 1005 (83.1%) were found negative for malignancy. Twenty-five of 35 suspicious imprints and 150 of 170 malignant smears were confirmed to be malignant on histopathological evaluation. Although 20 malignant TIC smears were defined as benign in standard histological preparations, 10 of them had a definitive diagnosis of malignancy following extensive serial sectioning. In all, there were 10 false-positive cytology results. Moreover, 10 of the 35 suspicious TIC smears were false negative when compared with the histopathological diagnosis.
The sensitivity, specificity, positive predictive value and negative predictive value of touch imprint smear results were 100%, 98%, 90.2% and 100%, respectively. Conclusions: TIC smears can provide an immediate and reliable cytological diagnosis of prostate carcinoma. TIC may clearly help the rapid detection of carcinoma, particularly in highly suspected cases that had negative routine biopsy results for malignancy despite abnormal serum prostate specific antigen (PSA) levels and an atypical digital rectal examination. abstract_id: PUBMED:18958585 Diagnostic yield of touch imprint cytology of prostate core needle biopsies. Touch imprint cytology may provide additional information to core needle biopsy interpretation according to previous reports. The aim of this study was to investigate the diagnostic yield of this method in the diagnosis of prostate carcinoma. For this purpose, 452 transrectal prostate needle biopsies were evaluated from 56 patients. All patients were clinically suspected of having prostate carcinoma. Two touch imprints were prepared from each fresh biopsy cylinder. Results of the standard histology and of the touch imprint evaluation were compared. Histologically negative biopsy cylinders were further evaluated for prostate carcinoma by fine step serial sectioning. The standard histological examination showed adenocarcinoma in 27 patients. Touch imprint cytology revealed atypical cells suspicious of carcinoma in 38 patients. This group included all 27 patients with positive standard histology and a further 11 patients with initially negative core biopsy. Following serial sectioning, histological evidence of a carcinoma could be demonstrated in three out of these 11 samples. After fine step serial sectioning of all 29 core biopsies negative for carcinoma by standard histological examination, 26 patients remained negative. All three core biopsies initially negative by standard histology but positive after serial sectioning had cytology findings suspicious of carcinoma. We conclude that in problematic cases the additional use of touch imprint cytology and serial sectioning of prostate core needle biopsies significantly improves the diagnostic accuracy. abstract_id: PUBMED:17928311 Imprint cytology improves accuracy of computed tomography-guided percutaneous transthoracic needle biopsy. The aim of the present study was to investigate whether imprint cytology can improve the diagnostic accuracy of computed tomography-guided transthoracic core biopsy. Between October 1997 and June 2004, thoracic lesions in 622 patients underwent biopsy using 19-gauge coaxial guiding needles and 20-gauge biopsy needles under computed tomography guidance. Touch imprint cytology and histopathology were performed for all biopsy specimens. Of these lesions, 431 (74.1%) were diagnosed as malignant, 151 (25.9%) as benign and 40 (6%) as nondiagnostic. Imprint cytology plus histology showed an improved diagnostic accuracy of 96.4% compared with that of imprint cytology alone (92.3%) or histopathology alone (93.0%). Procedure-related complications requiring further treatment occurred in eight (1.4%) patients. In conclusion, imprint cytology combined with histopathology can improve the diagnostic accuracy of computed tomography-guided transthoracic needle biopsy. abstract_id: PUBMED:29284293 Assessment of Needle Tip Deflection During Transrectal Guided Prostate Biopsy: Implications for Targeted Biopsies.
Objectives: To measure needle tip deflection during transrectal ultrasound (TRUS) prostate biopsy and evaluate predictors for needle tip deflection. Materials And Methods: Analysis of 568 prostate biopsies obtained from 51 consecutive patients who underwent a standard 12-core TRUS guided prostate biopsy. TRUS guided prostate biopsies were performed using a BK flex500 with a side-fire biplane probe. Each biopsy core image was captured and clinical data were recorded prospectively. The angle between the expected trajectory of the needle and the actual needle course was measured using the longitudinal view of the captured image. The distance between the expected and actual needle tip was calculated. We measured median and interquartile needle tip deflection stratified by side and location (apex, midgland, base). Univariable and multivariable linear regression analyses were performed. Results: The overall median needle tip deflection was 1.77 mm (IQR 1.35-2.47). Location did not significantly alter needle deflection measurements. On multivariable linear regression analysis, higher prostate volume (B = 0.007, 95% CI 0.004-0.011; p < 0.001) and right-sided biopsy (B = 0.191, 95% CI 0.047-0.336; p = 0.010) emerged as predictors of higher needle tip deflection. Conclusions: To the best of our knowledge this is the first study to measure needle tip deflection during TRUS guided prostate biopsies. We demonstrated that larger prostate size and biopsy side may affect the accuracy of biopsies. These results may have clinical implications for those performing targeted biopsies. abstract_id: PUBMED:30760274 Transperineal versus transrectal prostate biopsy in the diagnosis of prostate cancer: a systematic review and meta-analysis. Background: Because conventional prostate biopsy has some limitations, optimal variations of prostate biopsy strategies have emerged to improve the diagnosis rate of prostate cancer. We conducted this systematic review to compare the diagnosis rate and complications of transperineal versus transrectal prostate biopsy. We searched for online publications published through June 27, 2018, in PubMed, Scopus, Web of Science, and Chinese National Knowledge Infrastructure databases. The relative risk and 95% confidence interval were utilized to appraise the diagnosis and complication rates. The pooled relative risk across the 11 included studies indicated that transperineal prostate biopsy has the same diagnostic accuracy as transrectal prostate biopsy; however, a significantly lower risk of fever and rectal bleeding was reported for transperineal prostate biopsy. No evidence of publication bias was identified. Short Conclusion: To conclude, this review indicated that transperineal and transrectal prostate biopsy have the same diagnostic accuracy, but the transperineal approach has a lower risk of fever and rectal bleeding. More studies are warranted to confirm these findings and discover a more effective diagnostic method for prostate cancer. abstract_id: PUBMED:26855442 Touch Imprint Cytology and Stereotactically-Guided Core Needle Biopsy of Suspicious Breast Lesions: 15-Year Follow-up. Introduction: Stereotactically-guided core needle biopsies (CNB) of breast tumours allow histological examination of the tumour without surgery. Touch imprint cytology (TIC) of CNB promises to be useful in providing same-day diagnosis for counselling purposes and for planning future surgery.
Having addressed the issue of accuracy of immediate microscopic evaluation of TIC, we wanted to re-examine the usefulness of this procedure in light of the present health care climate of cost containment by incorporating the surgical 15-year follow-up data and outcome. Patients and Methods: From January until December 1996 we performed TIC in core needle biopsies of 173 breast tumours in 169 patients, consisting of 122 malignant and 51 benign tumours. The histology of the core needle biopsies was confirmed by surgical histology in all malignant and in 5 benign tumours. Surgical breast biopsy was not performed in 46 patients with 46 benign lesions, as the histological result from the core needle biopsy and the result of the TIC were in agreement with the suspected diagnosis from the complementary breast diagnostics. A 15-year follow-up of these patients was conducted in 2013, and follow-up data were collected from 40 women. Results: In the 15-year follow-up of the 40 benign lesions primarily confirmed using CNB and TIC, a diagnostic sensitivity, specificity, positive and negative predictive value and accuracy of 100% was found. Conclusion: TIC and stereotactically guided CNB showed excellent long-term outcomes in patients with benign breast lesions. The use of TIC to complement CNB can therefore provide immediate cytological diagnosis of breast lesions. abstract_id: PUBMED:2029399 Franzén transrectal fine-needle biopsy versus ultrasound-guided transrectal core biopsy of the prostate gland. Digitally-guided transrectal fine-needle aspiration biopsy was compared with ultrasound-guided transrectal core biopsy of the prostate gland. Both biopsy techniques were equally effective in detecting prostate cancer. Core biopsies were generally graded higher than fine-needle aspirations. The reproducibility was approximately 80% for both methods. abstract_id: PUBMED:34674018 Transrectal ultrasound-guided prostate needle biopsy remains a safe method in confirming a prostate cancer diagnosis: a multicentre Australian analysis of infection rates. Purpose: Worldwide, transrectal ultrasound-guided prostate needle biopsy remains the most common method of diagnosing prostate cancer. Due to the high rate of infective complications reported, some have suggested it is now time to abandon this technique in favour of a trans-perineal approach. The aim of this study was to report on the infection rates following transrectal ultrasound-guided prostate needle biopsy in multiple Australian centres. Materials And Methods: Data were collected from seven Australian centres across four states and territories that undertake transrectal ultrasound-guided prostate needle biopsies for the diagnosis of prostate cancer, including major metropolitan and regional centres. In four centres, the data were collected prospectively. Rates of readmissions due to infection, urosepsis resulting in intensive care admission and mortality were recorded. Results: 12,240 prostate biopsies were performed in seven Australian centres between July 1998 and December 2020. There were 105 readmissions for infective complications, with rates between centres ranging from 0.19 to 2.60% and an overall rate of 0.86%. Admission to intensive care with sepsis ranged from 0 to 0.23% between centres and was 0.03% overall. There was no mortality in the 12,240 cases. Conclusion: Infective complications following transrectal ultrasound-guided prostate needle biopsies are very low, occurring in less than 1% of 12,240 biopsies.
Though this study included a combination of both prospective and retrospective data and did not offer a comparison with a trans-perineal approach, TRUS prostate biopsy is a safe means of obtaining a prostate cancer diagnosis. Further prospective studies directly comparing the techniques are required prior to abandoning TRUS based upon infectious complications. abstract_id: PUBMED:2336764 Comparison of transrectal fine-needle aspiration cytology and core needle biopsy in diagnosis of prostate cancer. One hundred sixty-nine transrectal fine-needle aspirations of the prostate gland were performed in 166 patients over a two-year period. The results were compared with simultaneous core needle biopsy performed in all but 4 patients. Forty-seven (28%) aspirations were either unsatisfactory or inconclusive. Of the remaining 122 (72%) patients in whom a cytologic diagnosis could be made, core biopsy was available in 120. Aspiration cytology was 87 percent sensitive and 96 percent specific with an overall agreement of 93 percent with core biopsy. No major complications occurred. We conclude that fine-needle aspiration of the prostate is accurate, safe, and cost-effective, and greater application of this technique is encouraged. Answer: Yes, imprint cytology does improve the accuracy of transrectal prostate needle biopsy. Studies have shown that the use of imprint cytology in combination with histopathology increases diagnostic accuracy when compared with histopathologic assessment alone. For instance, one study reported that after reexamination, the accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of imprint preparations were 98.1%, 96.9%, 98.4%, 92.8%, and 99.3%, respectively (PUBMED:24964902). Another study found that touch imprint cytology (TIC) smears could provide an immediate and reliable cytological diagnosis of prostate carcinoma, with a sensitivity, specificity, positive predictive value, and negative predictive value of 100%, 98%, 90.2%, and 100%, respectively (PUBMED:23112457). Additionally, touch imprint cytology was shown to reveal atypical cells suspicious of carcinoma in cases where standard histology was negative, and subsequent serial sectioning confirmed carcinoma in some of these cases, indicating that the additional use of touch imprint cytology can significantly improve diagnostic accuracy (PUBMED:18958585). Moreover, the use of imprint cytology has been shown to improve diagnostic accuracy in other biopsy procedures as well, such as computed tomography-guided transthoracic needle biopsy, where the combination of imprint cytology and histopathology improved diagnostic accuracy to 96.4% (PUBMED:17928311). In summary, the evidence suggests that imprint cytology is a valuable tool for evaluating transrectal ultrasound-guided core needle biopsy specimens from the prostate and can enhance the accuracy of prostate cancer diagnosis when used alongside traditional histopathological examination.
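All of the headline figures in this answer (sensitivity, specificity, predictive values, accuracy) come from a single 2x2 table of cytology calls against the histopathological reference standard. The sketch below shows that arithmetic in Python; the counts are hypothetical and are not taken from any of the cited studies.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic accuracy metrics, with histopathology as the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # proportion of malignant cases flagged by cytology
        "specificity": tn / (tn + fp),   # proportion of benign cases correctly called benign
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for illustration only.
for name, value in diagnostic_metrics(tp=210, fp=20, fn=7, tn=960).items():
    print(f"{name}: {value:.1%}")
```

Because prostate biopsy series contain far more benign than malignant cores, the negative predictive value is typically driven by the large number of true negatives, while the positive predictive value is the figure most sensitive to a handful of false-positive smears.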
Instruction: Is the spinal cord motoneuron exclusively a target in ALS? Abstracts: abstract_id: PUBMED:33377424 Hypoxic stress visualized in the cervical spinal cord of ALS patients. Objective: Amyotrophic lateral sclerosis (ALS) is a progressive and fatal motor neuron disease. Hypoxic stress is suspected in the pathogenesis of ALS; however, no positron emission tomography (PET) study of hypoxic stress has been conducted in the spinal cord of ALS patients. Methods: In the present study, we examined cervical spinal hypoxic stress of nine ALS patients with upper extremity (U/E) atrophy by 18F-fluoromisonidazole (FMISO) PET. Results: On the ipsilateral side of the C1 and C5 levels, 18F-FMISO uptake increased significantly compared with the contralateral side (p < 0.05) and the control subject (p < 0.01). In addition, a strong correlation was found between 18F-FMISO uptake at the C5 level and the rate of progression of the ALSFRS-R score (R = 0.781, p = 0.013). Conclusion: These results indicate that hypoxic stress is increased in the spinal cord of ALS patients, with a close link to ALS progression. Both hypoxic stress and a compromised response to hypoxia, which may lead to subsequent motor neuron death, could be a potential therapeutic target for ALS. abstract_id: PUBMED:34446086 Blood-spinal cord barrier leakage is independent of motor neuron pathology in ALS. Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease involving progressive degeneration of upper and lower motor neurons. The pattern of lower motor neuron loss along the spinal cord follows the pattern of deposition of phosphorylated TDP-43 aggregates. The blood-spinal cord barrier (BSCB) restricts entry into the spinal cord parenchyma of blood components that can promote motor neuron degeneration, but in ALS there is evidence for barrier breakdown. Here we sought to quantify BSCB breakdown along the spinal cord axis, to determine whether BSCB breakdown displays the same patterning as motor neuron loss and TDP-43 proteinopathy. Cerebrospinal fluid hemoglobin was measured in living ALS patients (n = 87 control, n = 236 ALS) as a potential biomarker of BSCB and blood-brain barrier leakage. Cervical, thoracic, and lumbar post-mortem spinal cord tissue (n = 5 control, n = 13 ALS) was then immunolabelled, and semi-automated imaging and analysis were performed to quantify hemoglobin leakage, lower motor neuron loss, and phosphorylated TDP-43 inclusion load. Hemoglobin leakage was observed along the whole ALS spinal cord axis and was most severe in the dorsal gray and white matter in the thoracic spinal cord. In contrast, motor neuron loss and TDP-43 proteinopathy were seen at all three levels of the ALS spinal cord, with the most abundant TDP-43 deposition in the anterior gray matter of the cervical and lumbar cord. Our data show that leakage of the BSCB occurs during life, but at end-stage disease the regions with the most severe BSCB damage are not those where TDP-43 accumulation is most abundant. This suggests BSCB leakage and TDP-43 pathology are independent pathologies in ALS. abstract_id: PUBMED:34769048 Swim Training Ameliorates Hyperlocomotion of ALS Mice and Increases Glutathione Peroxidase Activity in the Spinal Cord. (1) Background: Amyotrophic lateral sclerosis (ALS) is an incurable, neurodegenerative disease. In some cases, ALS causes behavioral disturbances and cognitive dysfunction. Swimming has shown a neuroprotective influence on the motor neurons in ALS.
(2) Methods: In the present study, a SOD1-G93A mouse model of ALS was used, with wild-type B6SJL mice as controls. ALS mice were analyzed before ALS onset (10th week of life), at ALS onset (first symptoms of the disease; ALS 1 onset and ALS 1 onset SWIM groups), and at terminal ALS (last stage of the disease; ALS TER and ALS TER SWIM groups), and compared with wild-type mice. Swim training was applied 5 times per week for 30 min. All mice underwent behavioral tests. The spinal cord was analyzed for enzyme activities and oxidative stress markers. (3) Results: Pre-symptomatic ALS mice showed increased locomotor activity versus control mice; the swim training reduced these symptoms. The metabolic changes in the spinal cord were present at the pre-symptomatic stage of the disease, with a shift towards glycolytic processes at the terminal stage of ALS. Swim training caused an adaptation, resulting in higher glutathione peroxidase (GPx) activity and protection against oxidative stress. (4) Conclusion: Therapeutic aquatic activity might slow down the progression of ALS. abstract_id: PUBMED:33170340 ALS-CSF-induced structural changes in spinal motor neurons of rat pups cause deficits in motor behaviour. Amyotrophic lateral sclerosis (ALS) is a late-onset, neurodegenerative disease associated with the loss of motor neurons in the spinal cord, brain stem and primary motor cortex. A deficit in motor function is one of the clinical features of this disease. However, the association between adverse morphological alterations in the spinal motor neurons and motor deficit in sporadic ALS (SALS) is still debated. The present study sought to investigate the effects of serial intrathecal injections of ALS-CSF into rat pups, at post-natal (P) days 3, 9 and 14, on motor neuronal (MN) morphology at the cervical and lumbar levels of the spinal cord at P16 and P22. The study used Cresyl violet and Golgi-Cox staining methods to determine the progressive changes in the morphology of spinal MNs in both cervical and lumbar extensions. The study found a loss of motor neurons in the spinal cord (36% at P16 cervical, 41.7% at P16 lumbar, 49.57% at P22 cervical and 44.63% at P22 lumbar) and reduced choline acetyl transferase (ChAT) expression after repeated infusion of ALS-CSF. A significant increase in the soma area was also found in ALS-CSF rats (around 21% in P22 cervical and 26.4% in P22 lumbar). Soma hypertrophy was associated with increased dendritic arborization of MNs at both cervical and lumbar levels of the spinal cord. The data also showed a direct correlation between ALS-CSF-induced changes in the MN number in the spinal cord and motor behavioral deficits. The loss of MNs, reduced ChAT expression, and changes in soma and dendritic morphology, together with declined rotarod performance, thus confirm the pathological phenotypes seen in ALS patients. abstract_id: PUBMED:31031688 Spinal Cord Imaging in Amyotrophic Lateral Sclerosis: Historical Concepts-Novel Techniques. Amyotrophic lateral sclerosis (ALS) is the most common adult-onset motor neuron disease with no effective disease-modifying therapies at present. Spinal cord degeneration is a hallmark feature of ALS, highlighted in the earliest descriptions of the disease by Lockhart Clarke and Jean-Martin Charcot. The anterior horns and corticospinal tracts are invariably affected in ALS, but until recently it has been notoriously challenging to detect and characterize spinal pathology in vivo.
With recent technological advances, spinal imaging now offers unique opportunities to appraise lower motor neuron degeneration, sensory involvement, metabolic alterations, and interneuron pathology in ALS. Quantitative spinal imaging in ALS has now been used in cross-sectional and longitudinal study designs, applied to presymptomatic mutation carriers, and utilized in machine learning applications. Despite its enormous clinical and academic potential, a number of physiological, technological, and methodological challenges limit the routine use of computational spinal imaging in ALS. In this review, we provide a comprehensive overview of emerging spinal cord imaging methods and discuss their advantages, drawbacks, and biomarker potential in clinical applications, clinical trial settings, monitoring, and prognostic roles. abstract_id: PUBMED:36731429 A cellular taxonomy of the adult human spinal cord. The mammalian spinal cord functions as a community of cell types for sensory processing, autonomic control, and movement. While animal models have advanced our understanding of spinal cellular diversity, characterizing human biology directly is important to uncover specialized features of basic function and human pathology. Here, we present a cellular taxonomy of the adult human spinal cord using single-nucleus RNA sequencing with spatial transcriptomics and antibody validation. We identified 29 glial clusters and 35 neuronal clusters, organized principally by anatomical location. To demonstrate the relevance of this resource to human disease, we analyzed spinal motoneurons, which degenerate in amyotrophic lateral sclerosis (ALS) and other diseases. We found that compared with other spinal neurons, human motoneurons are defined by genes related to cell size, cytoskeletal structure, and ALS, suggesting a specialized molecular repertoire underlying their selective vulnerability. We include a web resource to facilitate further investigations into human spinal cord biology. abstract_id: PUBMED:36443249 TDP-43 knockdown in mouse model of ALS leads to dsRNA deposition, gliosis, and neurodegeneration in the spinal cord. Transactive response DNA binding protein 43 kilodaltons (TDP-43) is a DNA and RNA binding protein associated with severe neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS), primarily affecting motor neurons in the brain and spinal cord. Partial knockdown of TDP-43 expression in a mouse model (the amiR-TDP-43 mice) leads to progressive, age-related motor dysfunction, as observed in ALS patients. Work in Caenorhabditis elegans suggests that TDP-43 dysfunction can lead to deficits in chromatin processing and double-stranded RNA (dsRNA) accumulation, potentially activating the innate immune system and promoting neuroinflammation. To test this hypothesis, we used immunostaining to investigate dsRNA accumulation and other signs of CNS pathology in the spinal cords of amiR-TDP-43 mice. Compared with wild-type controls, TDP-43 knockdown animals show increases in dsRNA deposition in the dorsal and ventral horns of the spinal cord. Additionally, animals with heavy dsRNA expression show markedly increased levels of astrogliosis and microgliosis. Interestingly, areas of high dsRNA expression and microgliosis overlap with regions of heavy neurodegeneration, indicating that activated microglia could contribute to the degeneration of spinal cord neurons. 
This study suggests that loss of TDP-43 function could contribute to neuropathology by increasing dsRNA deposition and subsequent innate immune system activation. abstract_id: PUBMED:38127186 Neural Differentiation and spinal cord organoid generation from induced pluripotent stem cells (iPSCs) for ALS modelling and inflammatory screening. The C9orf72 mutation is the most common genetic cause of ALS/FTD and is accompanied by abnormal protein insufficiency. Induced pluripotent stem cell (iPSC)-derived two-dimensional (2D) and three-dimensional (3D) cultures are providing new approaches. Therefore, this study established neuronal cell types and generated spinal cord organoids (SCOs) derived from C9orf72-knockdown human iPSCs to model ALS and to screen for previously unrevealed phenotypes. Wild-type (WT) iPSC lines from three healthy donor fibroblasts were established, and pluripotency and differentiation ability were identified by RT-PCR, immunofluorescence and flow cytometry. After infection by a lentivirus carrying C9orf72-targeting shRNA, stable C9-knockdown iPSC colonies were selected and differentiated into astrocytes, motor neurons and SCOs. Finally, we analyzed RNA-seq data for human C9 mutant/knockout iPSC-derived motor neurons and astrocytes extracted from the GEO database, focusing on the function and pathways of inflammatory regulation-related genes. The expression of inflammatory factors was measured by qRT-PCR. The results showed that both WT-iPSCs and edited C9-iPSCs maintained a similar ability to differentiate into the three germ layers, astrocytes and motor neurons, forming SCOs in a 3D culture system. The constructed C9-SCOs have features of spinal cord development and multiple neuronal cell types, including sensory neurons, motor neurons, and other neurons. Based on the bioinformatics analysis, proinflammatory factors were confirmed to be upregulated in C9-iPSC-derived 2D cells and 3D-cultured SCOs. The above differentiated models exhibited low C9orf72 expression and the pathological characteristics of ALS, especially neuroinflammation. abstract_id: PUBMED:24904291 Deregulated expression of cytoskeleton-related genes in the spinal cord and sciatic nerve of presymptomatic SOD1(G93A) Amyotrophic Lateral Sclerosis mouse model. Early molecular events related to the cytoskeleton are poorly described in Amyotrophic Lateral Sclerosis (ALS), especially in the Schwann cell (SC), which offers strong trophic support to motor neurons. The Database for Annotation, Visualization and Integrated Discovery (DAVID) tool identified cytoskeleton-related genes by employing the Cellular Component Ontology (CCO) in a large gene profiling of the lumbar spinal cord and sciatic nerve of presymptomatic SOD1(G93A) mice. One and five CCO terms related to the cytoskeleton were described from the deregulated spinal cord genes of 40-day-old (actin cytoskeleton) and 80-day-old (microtubule cytoskeleton, cytoskeleton part, actin cytoskeleton, neurofilament cytoskeleton, and cytoskeleton) transgenic mice, respectively. Also, four terms were identified from the deregulated sciatic nerve genes of 60-day-old transgenic mice (actin cytoskeleton, cytoskeleton part, microtubule cytoskeleton and cytoskeleton). Kif1b was the only gene deregulated in more than one studied region or presymptomatic age. The expression of Kif1b [by quantitative polymerase chain reaction (qPCR)] was elevated in the lumbar spinal cord (40 days old) and decreased in the sciatic nerve (60 days old) of presymptomatic ALS mice, results that were in line with the microarray findings.
Upregulation (24.8 fold) of Kif1b was seen in laser microdissected enriched immunolabeled motor neurons from the spinal cord of 40-day-old presymptomatic SOD1(G93A) mice. Furthermore, Kif1b was downregulated in the sciatic nerve Schwann cells of presymptomatic ALS mice (60 days old) that were enriched by means of cell microdissection (6.35 fold), cell sorting (3.53 fold), and primary culture (2.70 fold) technologies. The gene regulation of cytoskeleton molecules is an important occurrence in motor neurons and Schwann cells in presymptomatic stages of ALS and may be relevant in the dying-back mechanisms of neuronal death. Furthermore, a differential regulation of Kif1b in the spinal cord and sciatic nerve cells emerged as a key event in ALS. abstract_id: PUBMED:34479980 Beneficial Effects of Transplanted Human Bone Marrow Endothelial Progenitors on Functional and Cellular Components of Blood-Spinal Cord Barrier in ALS Mice. Convincing evidence of blood-spinal cord barrier (BSCB) alterations has been demonstrated in amyotrophic lateral sclerosis (ALS) and barrier repair is imperative to prevent motor neuron dysfunction. We showed benefits of human bone marrow-derived CD34+ cells (hBM34+) and endothelial progenitor cells (hBM-EPCs) intravenous transplantation into symptomatic G93A SOD1 mutant mice on barrier reparative processes. These gains likely occurred by replacement of damaged endothelial cells, prolonging motor neuron survival. However, additional investigations are needed to confirm the effects of administered cells on integrity of the microvascular endothelium. The aim of this study was to determine tight junction protein levels, capillary pericyte coverage, microvascular basement membrane, and endothelial filamentous actin (F-actin) status in spinal cord capillaries of G93A SOD1 mutant mice treated with human bone marrow-derived stem cells. Tight junction proteins were detected in the spinal cords of cell-treated versus non-treated mice via Western blotting at four weeks after transplant. Capillary pericyte, basement membrane laminin, and endothelial F-actin magnitudes were determined in cervical/lumbar spinal cord tissues in ALS mice, including controls, by immunohistochemistry and fluorescent staining. Results showed that cell-treated versus media-treated ALS mice substantially increased tight junction protein levels, capillary pericyte coverage, basement membrane laminin immunoexpressions, and endothelial cytoskeletal F-actin fluorescent expressions. The greatest benefits were detected in mice receiving hBM-EPCs versus hBM34+ cells. These study results support treatment with a specific cell type derived from human bone marrow toward BSCB repair in ALS. Thus, hBM-EPCs may be advanced for clinical applications as a cell-specific approach for ALS therapy through restored barrier integrity. Answer: No, the spinal cord motoneuron is not exclusively targeted in ALS. While motoneuron degeneration is a hallmark feature of ALS, other components of the spinal cord and associated systems are also implicated in the disease. For instance, blood-spinal cord barrier (BSCB) leakage occurs in ALS, but the regions with the most severe BSCB damage are not necessarily where TDP-43 accumulation, a marker of ALS pathology, is most abundant, suggesting that BSCB leakage and TDP-43 pathology are independent pathologies in ALS (PUBMED:34446086).
Additionally, deregulated expression of cytoskeleton-related genes has been observed in both motor neurons and Schwann cells in presymptomatic stages of ALS, indicating that cytoskeletal alterations are an important event in cells other than motoneurons (PUBMED:24904291). Furthermore, ALS pathology is not limited to motoneuron degeneration, as evidenced by the presence of hypoxic stress in the cervical spinal cord of ALS patients, which is linked to ALS progression and suggests that both hypoxic stress and a compromised response to hypoxia could be potential therapeutic targets (PUBMED:33377424). Moreover, ALS is associated with a complex interplay of cellular and molecular changes, including inflammation, oxidative stress, and cytoskeletal abnormalities, which affect various cell types within the spinal cord. For example, swim training in an ALS mouse model showed increased glutathione peroxidase activity in the spinal cord, indicating that oxidative stress is a factor in ALS pathology beyond motoneurons (PUBMED:34769048). Additionally, ALS-CSF-induced structural changes in spinal motor neurons of rat pups cause deficits in motor behavior, suggesting that factors in cerebrospinal fluid can affect motor neuron morphology and function (PUBMED:33170340). A comprehensive cellular taxonomy of the adult human spinal cord has also identified various glial and neuronal clusters, indicating a diverse community of cell types involved in spinal cord function and potentially in ALS pathology (PUBMED:36731429). Furthermore, TDP-43 knockdown in a mouse model of ALS leads to dsRNA deposition, gliosis, and neurodegeneration in the spinal cord, implicating innate immune system activation and inflammation in the disease process (PUBMED:36443249). Lastly, transplantation of human bone marrow endothelial progenitors in ALS mice has shown beneficial effects on the functional and cellular components of the blood-spinal cord barrier, indicating that endothelial cells and barrier integrity are also important in ALS pathology (PUBMED:34479980). This suggests that while motoneurons are a key target in ALS, they are not the only cell type affected by the disease. Other spinal cord cells, including glial cells, endothelial cells, and components of the BSCB, are also involved in the disease process, highlighting the complexity of ALS pathology.
Instruction: Is rehabilitation effective in massive rotator cuff tears? Abstracts: abstract_id: PUBMED:25890809 Is rehabilitation effective in massive rotator cuff tears? Background: Irreparable massive rotator cuff tears are challenging to treat. Our objective here was to evaluate the efficacy of a specifically designed rehabilitation programme. Hypothesis: We hypothesised that outcomes of the rehabilitation programme would vary with the site of the tears. Materials And Methods: Patients with irreparable massive rotator cuff tears and shoulder pseudoparalysis were included prospectively. They followed a five-session specific rehabilitation programme. The outcomes were analysed according to the site of the tears. Results: We included 45 patients with a mean age of 67 years. At last follow-up after rehabilitation, 24 patients had recovered more than 160° of anterior shoulder elevation. Treatment failure was common in patients with massive anterior rotator cuff tears or tears involving three or more tendons. Patients with massive posterior tears, in contrast, often experienced substantial improvements, even in the medium term. Conclusion: Outcomes of rehabilitation therapy in patients with irreparable massive rotator cuff tears and shoulder pseudoparalysis vary according to the site and number of the tears. Failure of rehabilitation therapy is common in patients with massive anterior tears or tears involving at least three tendons. In contrast, in patients with isolated massive posterior tears, substantial benefits from rehabilitation therapy can be expected. Level Of Evidence: III. abstract_id: PUBMED:33276163 Nonoperative treatment of chronic, massive irreparable rotator cuff tears: a systematic review with synthesis of a standardized rehabilitation protocol. Purpose: A massive, irreparable rotator cuff tear may cause significant pain and dysfunction. However, the efficacy of nonoperative treatment modalities in this subset of patients is not currently well known. Also, there is currently no gold standard nonoperative protocol to guide treatment. The goal of the present systematic review is to determine if there is any evidence to support the use of various nonoperative treatment modalities and synthesize a standardized nonoperative treatment protocol for the patient with a massive irreparable rotator cuff tear. Methods: A comprehensive review of the literature utilizing PRISMA guidelines was performed. Studies involving clinical outcomes of nonoperative treatment of massive, irreparable rotator cuff tears were included. Articles were reviewed by 2 reviewers to determine inclusion or exclusion based on established criteria. Selected articles were reviewed for results of clinical and functional outcomes. The studies were also reviewed to determine their level of evidence and potential sources of bias. A standardized nonoperative treatment protocol was developed by taking described elements of the protocols used in studies that demonstrated clinical improvement beyond the MCID for the outcome scores used by the authors. Results: A total of 10 studies met inclusion criteria for our study. Of the included studies, 1 was Level III evidence and the remaining 9 were Level IV evidence. Multiple studies showed significant improvement exceeding the MCID for functional outcome scores following treatment. Also, several studies demonstrated significant improvements in strength and range of motion. The overall success of nonoperative treatment ranged from 32% to 96%.
The synthesized nonoperative treatment protocol is characterized by requiring some supervised physical therapy, often requiring 12 weeks or more, focusing on supine exercises with gradual progression to upright. Corticosteroid injections and nonsteroidal anti-inflammatory drugs may also be of benefit. Conclusion: Despite low-quality evidence, nonoperative treatment has been shown to be efficacious for patients with chronic, massive, irreparable rotator cuff tears. Using these results, a synthesized rehabilitation program was developed to guide clinicians when treating patients with massive irreparable rotator cuff tears. abstract_id: PUBMED:28593093 SUPERIOR CAPSULE RECONSTRUCTION FOR MASSIVE ROTATOR CUFF TEARS - KEY CONSIDERATIONS FOR REHABILITATION. Superior capsule reconstruction is a recently-developed surgical technique for the treatment of massive, irreparable rotator cuff tears. So far, biomechanical cadaveric studies and clinical outcomes results have been promising concerning integrity, stability, and ROM after superior capsule reconstruction. As this technique has only been recently developed, an evidence-based rehabilitation protocol has not been previously designed. Thus, the purpose of this clinical commentary is to provide an overview of superior capsule reconstruction and to propose a rehabilitation program based on the available scientific evidence. The existing evidence is supplemented by the experience of the senior author who has performed more than forty superior capsule reconstruction procedures to date. This proposed rehabilitation protocol consists of four distinct phases, focusing on maximal protection, range of motion and muscular endurance, muscular strength and return to activity. Level Of Evidence: 5. abstract_id: PUBMED:17042025 Physiotherapy rehabilitation in patients with massive, irreparable rotator cuff tears. Background: Massive rotator cuff tears provide a challenge for effective rehabilitation. Work has been ongoing at Torbay Hospital, Devon since 2000 to develop an exercise programme for the management of this patient group. This programme has been evaluated in a pilot study and a further randomised controlled trial is currently taking place which will enable us to estimate the treatment effect. This paper discusses the background to the development of the rehabilitation programme, the programme itself and the results of the pilot study. The pilot study was an evaluation of the rehabilitation programme. Objectives: This study examined the effectiveness of a physiotherapy regime for the treatment of patients with massive rotator cuff tears. Methods: Patients identified through primary and secondary care referrals to physiotherapy with a clinical diagnosis of a massive rotator cuff tear underwent an ultrasound scan to confirm the diagnosis. A massive cuff tear was one where the leading edge of the tear had retracted past the glenoid margin. The clinical diagnosis was based on the presence of some or all of the following signs: positive humeral thrust on elevation, gross weakness and wasting of supraspinatus and infraspinatus, infraspinatus lag and rupture of the long head of biceps. Eligible patients were invited to take part in the study and informed consent was obtained. The baseline assessment was carried out and then the patient undertook the treatment programme. Outcome measures were reassessed 12 weeks from the baseline assessment. 
Design: A cohort study of 10 patients evaluating the change from baseline to twelve weeks in the shoulder function of patients undergoing a programme of anterior deltoid strengthening and functional rehabilitation. The outcome measures used were the Oxford Shoulder Disability Questionnaire (OSDQ) and SF36. The OSDQ is validated for use with the UK population and has 12 questions with 5 point responses. The lowest (best) score is 12 and the highest (worse) score is 60. Results: Scores on the OSDQ improved with all patients. The mean improvement was 9 (range 3 to 16, standard deviation 10.3). The SF36 showed an improvement in the pain scores for all patients (mean 22 points) and an overall improvement of 10 points for the sections on role limitation due to physical health. There was an overall decline in perceived general health (9 points) and in role limitation due to emotional health (23 points). Conclusions: As all 10 patients showed improved scores on the OSDQ, in spite of the long-standing nature of many of their shoulder problems, this rehabilitation programme was shown to improve shoulder function in this group of patients. The variation shown in the quality of life scores reflects the age group of this cohort who had a mean age of 75.5 years. All patients deemed their pain and function to have improved over the three-month period. abstract_id: PUBMED:33987078 Current concepts in the rehabilitation of rotator cuff related disorders. Rotator cuff related disorders (RCRD) are common. Exercise-based rehabilitation can improve outcomes, yet uncertainty exists regarding the characteristics of these exercises. This scoping review paper summarises the key characteristics of the exercise-based rehabilitation of rotator cuff related disorders (RCRD). An iterative search process was used to capture the breadth of current evidence and a narrative summary of the data was produced. 57 papers were included. Disagreement around terminology, diagnostic standards, and outcome measures limits the comparison of the data. Rehabilitation should utilise a biopsychosocial approach, be person-centred and foster self-efficacy. Biomedically framed beliefs can create barriers to rehabilitation. Pain drivers in RCRSD are unclear, as is the influence of pain during exercise on outcomes. Expectations and preferences around pain levels should be discussed to allow the co-creation of a programme that is tolerated and therefore engaged with. The optimal parameters of exercise-based rehabilitation remain unclear; however, programmes should be individualised and progressive, with a minimum duration of 12 weeks. Supervised or home-based exercises are equally effective. Following rotator cuff repair, rehabilitation should be milestone-driven and individualised; communication across the MDT is essential. For individuals with massive rotator cuff tears, the anterior deltoid programme is a useful starting point and should be supplemented by functional rehabilitation, exercises to optimise any remaining cuff and the rest of the kinetic chain. In conclusion, exercise-based rehabilitation improves outcomes for individuals with a range of RCRD. The optimal parameters of these exercises remain unclear. Variation exists across current physiotherapy practice and post-operative rehabilitation protocols, reflecting the wide-ranging spectrum of individuals presenting with RCRD. 
Clinicians should use their communication and rehabilitation expertise to plan an exercise-based program in conjunction with the individual with RCRSD, which is regularly reviewed and adjusted. abstract_id: PUBMED:37197034 Massive and Irreparable Rotator Cuff Tears: A Review of Current Definitions and Concepts. Background: While massive and irreparable rotator cuff tears (MIRCTs) have been abundantly studied, inconsistent definitions in the literature and theories about pain and dysfunction related to them can be difficult to navigate when considering an individual patient. Purpose: To review the current literature for definitions and critical concepts that drive decision-making for MIRCTs. Study Design: Narrative review. Methods: A search of the PubMed database was performed to conduct a comprehensive literature review on MIRCTs. A total of 97 studies were included. Results: Recent literature reflects added attention to clarifying the definitions of "massive, "irreparable," and "pseudoparalysis." In addition, numerous recent studies have added to the understanding of what generates pain and dysfunction from this condition and have reported on new techniques for addressing them. Conclusion: The current literature provides a nuanced set of definitions and conceptual foundations on MIRCTs. These can be used to better define these complex conditions in patients when comparing current surgical techniques to address MIRCTs, as well as when interpreting the results of new techniques. While the number of effective treatment options has increased, high-quality and comparative evidence on treatments for MIRCTs is lacking. abstract_id: PUBMED:35172414 Research progress of arthroscopic long head of biceps tendon transposition in treatment of irreparable massive rotator cuff tears Objective: To review the research progress of arthroscopic long head of biceps tendon (LHBT) transposition in treatment of irreparable massive rotator cuff tears. Methods: The domestic and foreign related literature in recent years on the treatment of irreparable massive rotator cuff tears with different LHBT transposition methods under arthroscopy was reviewed and analyzed. Results: Arthroscopic LHBT transposition is an effective method for irreparable massive rotator cuff tears, which mainly includes "proximal cut", "both two cuts", "distal cut", and "no cut". Different methods of LHBT transposition can achieve good effectiveness, but its long-term effectiveness needs further follow-up. Conclusion: Arthroscopic LHBT transposition in treatment of irreparable massive rotator cuff tears is simple and effective. The patients can recover quickly after operation with less injury. But the technique has higher requirements for surgeons, and the indications must be strictly controlled. abstract_id: PUBMED:8532340 Conservative therapy and rehabilitation following surgery of the rotator cuff To achieve the best results after shoulder surgery, an optimal rehabilitation program is absolutely necessary. The physiotherapist must pay attention to some basic features such as biomechanics, functional anatomy and the current findings. The schedule of rehabilitation depends on the operative technique, the possibility of reconstruction of the rotator cuff and on the load capacity of the soft tissue. The goal of shoulder rehabilitation is the recovery of painless and normal shoulder function. abstract_id: PUBMED:32643613 Systematic review of reversing pseudoparalysis of the shoulder due to massive, irreparable rotator cuff tears. 
Background: Correcting pseudoparalysis of the shoulder due to massive rotator cuff tear is challenging. The most reliable treatment for restoring active shoulder elevation is debatable. Therefore, the purpose of this systematic review was to evaluate the success of various treatment options for reversing pseudoparalysis due to massive rotator cuff tear. Methods: A search was performed in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines of the MEDLINE database, Cochrane database, Sportdiscus, and Google Scholar database for articles evaluating shoulder pseudoparalysis due to massive rotator cuff tears. Results: Nine articles evaluating reverse total shoulder arthroplasty (RTSA), superior capsular reconstruction (SCR), and rehabilitation programs were included in the study. Though there was variability, the definition of pseudoparalysis was active forward elevation (AFE) less than 90° with preserved passive range of motion (ROM). Reversal of pseudoparalysis was defined as restoration of AFE greater than 90°. The overall rate of reversal of pseudoparalysis across studies was similar for RTSA (96% ± 17%) and SCR (94% ± 3%). However, there was a difference in average improvement in AFE for RTSA (56° ± 11°) and SCR (106° ± 20°). A progressive rehabilitation program described improvements in a single study with 82% reversal of pseudoparalysis. Conclusion: The available Level IV evidence suggests that RTSA and SCR reliably reverse pseudoparalysis in most patients with massive, irreparable rotator cuff tears. However, the dissimilar improvements in ROM suggest that a more consistent definition of pseudoparalysis is warranted. Future randomized controlled trials are needed to determine the best treatment approach for patients with massive irreparable rotator cuff tears. abstract_id: PUBMED:32974412 Outcomes of the Star Repair for Large and Massive Rotator Cuff Tears: A Modified Triple-Row Technique. Background: Large and massive rotator cuff repairs constitute a true challenge for arthroscopic shoulder surgeons. Retear rates as high as 20% have been reported after arthroscopic double-row and suture-bridge techniques used for these tears. Hypothesis: A modified triple-row repair will provide satisfactory clinical results with lower risk for retear. Study Design: Case series; Level of evidence, 4. Methods: Between March 2016 and August 2017, a total of 52 patients with large and massive rotator cuff tears received a modified triple-row cuff repair. A middle repositioning anchor was inserted between the medial and the lateral rows. The middle anchor sutures were loaded to lateral knotless anchors in a star-shaped configuration. Functional evaluation was performed using the American Shoulder and Elbow Surgeons score, University of California, Los Angeles score, Constant-Murley score, and Simple Shoulder Test. Subjective evaluation was carried out using a visual analog scale for pain and a subjective shoulder value score. Health-related as well as disease-specific quality-of-life scores were also used. Retear rates were assessed by means of musculoskeletal ultrasonography. Patients were evaluated for a minimum of 24 months. Results: This study included 34 female and 18 male patients with a mean age of 57.17 ± 6.7 years. There were 35 patients (67.3%) with large tears and 17 patients (32.7%) with massive tears. Significant improvement from preoperative values was seen in all functional and subjective scores (P < .001). 
The mean forward flexion was 163° ± 9.7°, and the mean lateral abduction was 159.4° ± 9.4°. All patients had excellent scores on the general health-related and disease-specific quality-of-life scales. No retears were reported at the end of the follow-up period. Conclusion: The star-shaped, modified triple-row cuff repair is a valid and effective solution for surgical management of large and massive rotator cuff tears, providing excellent results and low risk for retears. Answer: Rehabilitation can be effective in patients with massive rotator cuff tears, but the outcomes vary depending on several factors, including the site and number of the tears. A specifically designed rehabilitation program was found to be beneficial, particularly for patients with isolated massive posterior tears, who often experienced substantial improvements, even in the medium term. However, treatment failure was more common in patients with massive anterior rotator cuff tears or tears involving three or more tendons (PUBMED:25890809). A systematic review indicated that nonoperative treatment, including rehabilitation, has been shown to be efficacious for patients with chronic, massive, irreparable rotator cuff tears. The success rate of nonoperative treatment ranged from 32%-96%, and a synthesized rehabilitation protocol was developed to guide clinicians. This protocol often requires supervised physical therapy, focusing on supine exercises with gradual progression to upright, and may include corticosteroid injections and nonsteroidal anti-inflammatory drugs (PUBMED:33276163). For massive rotator cuff tears, the anterior deltoid program is suggested as a useful starting point for rehabilitation, which should be supplemented by functional rehabilitation and exercises to optimize any remaining cuff and the rest of the kinetic chain. Exercise-based rehabilitation should be individualized and progressive, with a minimum duration of 12 weeks (PUBMED:33987078). In summary, while rehabilitation can be effective for massive rotator cuff tears, the specific approach and expected outcomes depend on the characteristics of the tear and the individualized treatment plan. It is important to note that while some patients may experience significant improvements, others, particularly those with more extensive tears, may have less favorable outcomes. Clinicians should tailor rehabilitation programs to the individual needs of the patient, considering the location and extent of the tear (PUBMED:25890809; PUBMED:33276163; PUBMED:33987078).
Instruction: Is the arthroscopic modified tension band suture technique suitable for all full-thickness rotator cuff tears? Abstracts: abstract_id: PUBMED:27017412 Is the arthroscopic modified tension band suture technique suitable for all full-thickness rotator cuff tears? Background: We aimed to identify the clinical and structural outcomes after arthroscopic repair of full-thickness rotator cuff tears of all sizes with a modified tension band suture technique. Methods: Among 63 patients who underwent arthroscopic rotator cuff repair for a full-thickness rotator cuff tear with the modified tension band suture technique at a single hospital between July 2011 and March 2013, 47 were enrolled in this study. The mean follow-up period was 29 months. Visual analog scale scores, range of motion, American Shoulder and Elbow Surgeons scores, Constant scores, and Shoulder Strength Index were measured preoperatively and at the final follow-up. For radiologic evaluation, we conducted magnetic resonance imaging 6 months postoperatively and ultrasonography at the final follow-up. We allocated the small and medium tears to group A and the large and massive tears to group B and then compared clinical outcomes and repair integrity. Results: Postoperative clinical outcomes at the final follow-up showed significant improvements compared with those seen during preoperative evaluations (P < .001). However, group B showed worse clinical results than group A. Evaluation with magnetic resonance imaging performed 6 months postoperatively and ultrasonography taken at the final follow-up revealed that group B showed a significantly higher retear rate than did group A (69% vs. 6%, respectively; P < .001). Conclusion: Arthroscopic repair with the modified tension band suture technique for rotator cuff tears was a more suitable method for small to medium tears than for large to massive tears. abstract_id: PUBMED:28101634 Is the arthroscopic suture bridge technique suitable for full-thickness rotator cuff tears of any size? Purpose: The purpose of this study was to compare functional outcomes and tendon integrity between the suture bridge and modified tension band techniques for arthroscopic rotator cuff repair. Methods: A consecutive series of 128 patients who underwent the modified tension band (MTB group; 69 patients) and suture bridge (SB group; 59 patients) techniques were enrolled. The pain visual analogue scale (VAS), Constant, and American Shoulder and Elbow Surgeons (ASES) scores were determined preoperatively and at the final follow-up. Rotator cuff hypotrophy was quantified by calculating the occupation ratio (OR). Rotator cuff integrity and the global fatty degeneration index were determined by using magnetic resonance imaging at 6 months postoperatively. Results: The average VAS, Constant, and ASES scores improved significantly at the final follow-up in both groups (p < 0.05 for all scores). The retear rate of small-to-medium tears was similar in the modified tension band and suture bridge groups (7.0 vs. 6.8%, respectively; p = n.s.). The retear rate of large-to-massive tears was significantly lower in the suture bridge group than in the modified tension band group (33.3 vs. 70%; p = 0.035). Fatty infiltration (postoperative global fatty degeneration index, p = 0.022) and muscle hypotrophy (postoperative OR, p = 0.038) outcomes were significantly better with the suture bridge technique. 
Conclusion: The retear rate was lower with the suture bridge technique in the case of large-to-massive rotator cuff tears. Additionally, significant improvements in hypotrophy and fatty infiltration of the rotator cuff were obtained with the suture bridge technique, possibly resulting in better anatomical outcomes. The suture bridge technique was a more effective method for the repair of rotator cuff tears of all sizes as compared to the modified tension band technique. Level Of Evidence: Retrospective Cohort Design, Treatment Study, level III. abstract_id: PUBMED:29803504 Clinical outcomes and repair integrity after arthroscopic full-thickness rotator cuff repair: suture-bridge versus double-row modified Mason-Allen technique. Background: This retrospective study compared the clinical and radiologic outcomes of patients who underwent arthroscopic rotator cuff repairs by the suture-bridge and double-row modified Mason-Allen techniques. Methods: From January 2012 to May 2013, 76 consecutive cases of full-thickness rotator cuff tear, 1 to 4 cm in the sagittal plane, for which arthroscopic rotator cuff repair was performed, were included. The suture-bridge technique was used in 37 consecutive shoulders; and the double-row modified Mason-Allen technique, in 39 consecutive shoulders. Clinical outcomes at a minimum of 2 years (mean, 35.7 months) were evaluated postoperatively using the visual analog scale; University of California, Los Angeles Shoulder Scale; American Shoulder and Elbow Surgeons Subjective Shoulder Scale; and Constant score. Postoperative cuff integrity was evaluated at a mean of 17.7 months by magnetic resonance imaging. Results: At the final follow-up, the clinical outcomes improved in both groups (all P < .001) but with no significant differences between the 2 groups (all P > .05). The retear rate was 18.9% in the shoulders subjected to suture-bridge repair and 12.8% in the double-row modified Mason-Allen group; the difference was not significant (P = .361). Conclusions: Despite the presence of fewer suture anchors, the patients who underwent double-row modified Mason-Allen repair had comparable shoulder functional outcomes and a comparable retear rate with those who underwent suture-bridge repair. Therefore, the double-row modified Mason-Allen repair technique can be considered an effective treatment for patients with medium- to large-sized full-thickness rotator cuff tears. abstract_id: PUBMED:26055919 Clinical Outcomes of Modified Mason-Allen Single-Row Repair for Bursal-Sided Partial-Thickness Rotator Cuff Tears: Comparison With the Double-Row Suture-Bridge Technique. Background: Various repair techniques have been reported for the operative treatment of bursal-sided partial-thickness rotator cuff tears. Recently, arthroscopic single-row repair using a modified Mason-Allen technique has been introduced. Hypothesis: The arthroscopic, modified Mason-Allen single-row technique with preservation of the articular-sided tendon provides satisfactory clinical outcomes and similar results to the double-row suture-bridge technique after conversion of a partial-thickness tear to a full-thickness tear. Study Design: Cohort study; Level of evidence, 3. Methods: A retrospective study was conducted on 84 consecutive patients with symptomatic, bursal-sided partial-thickness rotator cuff tears involving more than 50% thickness of the tendon. 
A total of 47 patients were treated by the modified Mason-Allen single-row repair technique, preserving the articular-sided tendon, and 37 patients were treated by the double-row suture-bridge repair technique after conversion to a full-thickness tear. The clinical and functional outcomes were evaluated using the American Shoulder and Elbow Surgeons (ASES) and Constant scores and a visual analog scale (VAS) for pain and satisfaction of patients. Magnetic resonance imaging (MRI) was used to analyze the integrity of tendons at 6-month follow-up. Patients were followed up for a mean of 32.5 months. Results: In the 47 patients treated with the modified Mason-Allen suture technique, the VAS score decreased from a preoperative mean of 5.3 ± 0.3 to 0.9 ± 0.5 at the time of final follow-up. There was a statistically significant increase in the mean ASES score (from 45.4 ± 2.9 to 88.6 ± 4.5) and mean Constant score (from 66.9 ± 2.6 to 88.1 ± 2.4) (P < .001). Four of 47 patients (8.5%) demonstrated retears at 6-month postoperative MRI. There was no statistical difference in terms of functional outcomes and the retear rate compared with those of patients with the suture-bridge repair technique (3 patients, 8.1%). However, the mean number of suture anchors used in the patients with modified Mason-Allen suture repair (1.2 ± 0.4) was significantly fewer than that in the patients with suture-bridge repair (3.2 ± 0.4) (P < .01). Conclusion: The modified Mason-Allen single-row repair technique that preserved the articular-sided tendon provided satisfactory clinical outcomes in patients with symptomatic, bursal-sided partial-thickness rotator cuff tears. Despite a fewer number of suture anchors, the shoulder functional outcomes and retear rate in patients after modified Mason-Allen repair were comparable with those of patients who underwent double-row suture-bridge repair. Therefore, the modified Mason-Allen single-row repair technique using a triple-loaded suture anchor can be considered as an effective treatment in patients with bursal-sided partial-thickness rotator cuff tears. abstract_id: PUBMED:20514268 Arthroscopic suture bridge repair technique for full thickness rotator cuff tear. Background: The purpose of our study is to evaluate the clinical results of arthroscopic suture bridge repair for patients with rotator cuff tears. Methods: Between January 2007 and July 2007, fifty-one shoulders underwent arthroscopic suture bridge repair for full thickness rotator cuff tears. The average age at the time of surgery was 57.1 years old, and the mean follow-up period was 15.4 months. Results: At the last follow-up, the pain at rest improved from 2.2 preoperatively to 0.23 postoperatively and the pain during motion improved from 6.3 preoperatively to 1.8 postoperatively (p < 0.001 and p < 0.001, respectively). The range of active forward flexion improved from 138.4 degrees to 154.6 degrees , and the muscle power improved from 4.9 kg to 6.0 kg (p = 0.04 and 0.019, respectively). The clinical results showed no significant difference according to the preoperative tear size and the extent of fatty degeneration, but imaging study showed a statistical relation between retear and fatty degeneration. The average Constant score improved from 73.2 to 83.79, and the average University of California at Los Angeles score changed from 18.2 to 29.6 with 7 excellent, 41 good and 3 poor results (p < 0.001 and p = 0.003, respectively). 
Conclusions: The arthroscopic suture bridge repair technique for rotator cuff tears may be an operative method for which a patient can expect to achieve clinical improvement regardless of the preoperative tear size and the extent of fatty degeneration. abstract_id: PUBMED:32844306 The functional outcome of arthroscopic rotator cuff repair with double-row knotless vs knot-tying anchors. To date two main techniques are used in arthroscopic full-thickness rotator cuff tears, the conventional knot-tying suture bridge technique and the knotless technique. We evaluated whether there is a difference in clinical outcome using both techniques. Our patients underwent arthroscopic treatment of full-thickness rotator cuff tears, and we retrospectively evaluated clinical function, strength and surgery time. Eighty-three shoulders operated between September 2012 and December 2013 were included in the study. We had nineteen patients in the knotless group, and sixty-four in the knot-tying group. In addition, we performed preoperatively radiological (magnetic resonance imaging-MRI) conformation of full-thickness rotator cuff tear in our patients. For clinical evaluation, we used Quick Disabilities of the Arm, Shoulder and Hand score (q-DASH) and the Shoulder Pain and Disability (SPADI) score, and we measured the strength of a range of motion postoperatively using a conventional dynamometer. The patients were evaluated preoperatively, and at 6, 9, and 12 months postoperatively. The follow-up period was 12 months. The scores in both treatment groups improved at twelve months follow-up, but there was no statistical difference between both groups at twelve months after surgery; q-DASH score between groups (p = 0.092) and SPADI score (p = 0.700). Similarly, there was no statistical difference between the groups in regard to strength, surgery time, and range of motion at the twelve months follow-up. Our data confirm that both techniques may be used successfully to repair full-thickness rotator cuff tears with very good functional outcome.Level of evidence IV. abstract_id: PUBMED:22949957 Modified Mason-Allen suture bridge technique: a new suture bridge technique with improved tissue holding by the modified Mason-Allen stitch. We present a new method of suture bridge technique for medial row fixation using a modified Mason-Allen stitch instead of a horizontal mattress. Medial row configuration of the technique is composed of the simple stitch limb and the modified Mason-Allen stitch limb. The limbs are passed through the tendon by a shuttle relay. The simple stitch limb passes the cuff once and the modified Mason-Allen stitch limb passes three times which creates a rip stop that prevents tendon pull-out. In addition, the Mason-Allen suture bridge configuration is basically a knotless technique which has an advantage of reducing a possibility of strangulation of the rotator cuff tendon, impingement or irritation that may be caused by knot. abstract_id: PUBMED:32490425 Clinical outcomes following arthroscopic repair of articular vs. bursal partial-thickness rotator cuff tears with follow-up of 2 years or more. Background: The diagnosis and treatment of partial-thickness rotator cuff tears remain controversial, and only a few studies have carried out clinical evaluation and comparison based on different types of tears. The aim of this study was to compare the clinical outcomes of arthroscopic cuff repairs using the suture bridge technique in patients with articular partial-thickness rotator cuff tears (APRCTs) vs. 
those with bursal partial-thickness rotator cuff tears (BPRCTs). Methods: We retrospectively evaluated 29 patients with APRCTs and 22 patients with BPRCTs who underwent arthroscopic cuff repair using the suture bridge technique with a minimum 2-year follow-up. Clinical outcomes were evaluated preoperatively and postoperatively using the visual analog scale score, Japanese Orthopaedic Association (JOA) score, Constant score (CS), active range of motion (ROM) of shoulder flexion and abduction, improvement rate for each score, and retear rate. Results: The APRCT group had more women, fewer cases of subacromial decompression, and more patients whose condition changed intraoperatively and transitioned into a complete tear. Preoperatively, the JOA score, CS, ROM of shoulder flexion, ROM of shoulder abduction, and external shoulder rotation strength were lower in the APRCT group. Postoperatively, all scores improved significantly in both groups, and the JOA score, CS, and external shoulder rotation strength remained significantly lower in the APRCT group. Improvement and retear rates were not significantly different between the groups. Conclusions: The suture bridge technique significantly improved the clinical outcomes of patients with APRCTs and BPRCTs. Preoperative and postoperative functional parameters were worse in APRCT patients. abstract_id: PUBMED:31042439 Arthroscopic Repair of the Isolated Subscapularis Full-Thickness Tear: Single- Versus Double-Row Suture-Bridge Technique. Background: No clinical comparative study has addressed isolated subscapularis tears after arthroscopic repair with either single-row or double-row suture-bridge technique. Purpose/hypothesis: The purpose of this study is to compare clinical outcomes and structural integrity after arthroscopic repair of an isolated subscapularis full-thickness tear with either the single-row technique or the double-row suture-bridge technique. The authors hypothesized that there would be no significant differences in clinical outcomes and structural integrity between approaches. Study Design: Cohort study; Level of evidence, 3. Methods: This study included 56 patients who underwent arthroscopic repair of an isolated subscapularis full-thickness tear with grade II or less fatty infiltration in the subscapularis muscle with either a single-row technique (n = 31) or a double-row suture-bridge technique (n = 25). Functional outcomes were assessed with the visual analog scale (VAS) for pain, Subjective Shoulder Value (SSV), American Shoulder and Elbow Surgeons (ASES) score, the University of California, Los Angeles (UCLA) shoulder score, and active range of motion. Magnetic resonance arthrography (MRA) or computed tomographic arthrography (CTA) was performed 6 months after surgery to assess the structural integrity of the repaired tendon. Results: At the 2-year follow-up, all scoring parameters applied (VAS, SSV, ASES, and UCLA), subscapularis strength, and active range of motion improved significantly in both groups as compared with preoperative values ( P < .001). However, there were no significant differences between groups in any of these clinical outcome measurements (VAS, 1.2 vs 1.1; SSV, 91.3 vs 91.8; ASES, 91.0 vs 91.4; UCLA, 31.9 vs 32.1). On follow-up MRA or CTA, the overall retear rate did not differ significantly between the single-row group (13%, 4 of 31) and the double-row group (12%, 3 of 25). 
Conclusion: Arthroscopic single-row repair and double-row suture-bridge repair of isolated full-thickness subscapularis tears both yielded satisfactory clinical outcomes and structural integrity with no significant differences among patients with good muscle quality. abstract_id: PUBMED:27498108 Preservation of bursal-sided tendon in partial-thickness articular-sided rotator cuff tears: a novel arthroscopic transtendon anatomic repair technique. Introduction: The purpose of this study was to introduce a novel arthroscopic transtendon anatomic repair technique that spares the intact bursal-sided tendon in articular-sided partial-thickness rotator cuff tears (PTRCT) and to present shoulder functional outcomes in patients with symptomatic articular-sided PCRCT that involves more than 50 % of its thickness after arthroscopic repair using a novel technique. Materials And Methods: Eighteen patients with symptomatic articular-sided PCRCT involving more than 50 % of the tendon's thickness underwent arthroscopic repair using a devised technique. The devised technique restores only the torn articular portion of the rotator cuff at the anatomical footprint using a suture anchor, and preserves the integrity of the corresponding bursal-sided tendon by tying knots at the most lateral bursal side on the subacromial space. Clinical and functional outcome using ASES and Constant scores were evaluated. The structural integrity of the rotator cuff was evaluated by MRI at 6 months postoperatively. Results: Pain relief and shoulder functional outcomes were encouraging during the recovery phase after operation. ASES (preoperative 54.0 ± 10.3 to postoperative 92.6 ± 8.0), Constant score (61.2 ± 8.5-88.0 ± 5.3), VAS for pain (4.9 ± 2.6-0.6 ± 0.7) improved significantly after arthroscopic transtendon anatomic repair (p < 0.001). No patients had rotator cuff retears on 6-month MRI. No complications related to surgical procedures had occurred. Conclusion: The devised technique of arthroscopic transtendon repair provided satisfactory functional outcomes without postoperative discomforts. This technique minimizes over-tightening of the articular layer and reduces tension mismatches between the articular and bursal layers, which are considered as important factors for improvement of postoperative shoulder motion. Answer: The arthroscopic modified tension band suture technique is not suitable for all full-thickness rotator cuff tears. Clinical and structural outcomes after arthroscopic repair of full-thickness rotator cuff tears with this technique showed that it was more suitable for small to medium tears than for large to massive tears. In a study with 63 patients, postoperative clinical outcomes showed significant improvements, but group B (large and massive tears) showed worse clinical results than group A (small and medium tears). Additionally, group B had a significantly higher retear rate than group A (69% vs. 6%, respectively) (PUBMED:27017412). In comparison, the suture bridge technique was found to be more effective for the repair of rotator cuff tears of all sizes compared to the modified tension band technique. The retear rate for large-to-massive tears was significantly lower with the suture bridge technique (33.3% vs. 70%), and there were significant improvements in hypotrophy and fatty infiltration of the rotator cuff with the suture bridge technique (PUBMED:28101634). 
Furthermore, the double-row modified Mason-Allen technique showed comparable shoulder functional outcomes and a comparable retear rate with the suture-bridge repair, suggesting it can be an effective treatment for patients with medium- to large-sized full-thickness rotator cuff tears (PUBMED:29803504). In summary, while the modified tension band suture technique can be used for small to medium full-thickness rotator cuff tears, it is less suitable for large to massive tears, and other techniques such as the suture bridge or double-row modified Mason-Allen may provide better outcomes for larger tears.
Instruction: Does stent design affect probability of restenosis? Abstracts: abstract_id: PUBMED:12478231 Stent design: implications for restenosis. There is increasing evidence that stent design influences angiographic restenosis and clinical outcomes. After nearly 15 years of clinical experience, there is now a plethora of stent designs available, and yet no single design incorporates all the characteristics of the ideal stent. The specific metallic composition of a stent limits the type of stent geometry possible, and the biocompatibility of the metal or surface coating may affect long-term stent healing. Studies have shown that stent geometry designed to optimize expansion and lower recoil is a prerequisite for favorable clinical outcomes. Strut thickness appears to be an important risk factor for restenosis, but changing one parameter, such as strut thickness, requires altering other design characteristics, thus altering the overall stent design. Future stent designs should combine the best features of conventional stent design with special modifications to facilitate multi-agent drug elution for a variety of applications. abstract_id: PUBMED:11526357 Does stent design affect probability of restenosis? A randomized trial comparing Multilink stents with GFX stents. Background: Experimental studies have revealed that stent configuration influences intimal hyperplasia. The purpose of this study was to evaluate clinical outcomes for 2 stent designs in a randomized trial with quantitative coronary angiography (QCA) and intravascular ultrasonography (IVUS). Methods: We randomly assigned 100 patients with 107 lesions and symptomatic coronary artery disease to deployment of a Multilink stent (Advanced Cardiovascular Systems, Guidant, Santa Clara, Calif) or a GFX stent (Applied Vascular Engineering, Santa Rosa, Calif) with IVUS guidance. QCA and IVUS studies were performed before and after intervention and at follow-up (4.2 +/- 1.0 months). Results: There were no significant differences in baseline characteristics and QCA and IVUS parameters before and after intervention between the 2 groups. However, minimal lumen diameter at follow-up was significantly larger in the Multilink group (2.46 +/- 0.59 vs 2.08 +/- 0.79 mm, P <.05). Maximal in-stent intimal hyperplasia was significantly larger in the GFX group (2.9 +/- 1.7 vs 1.8 +/- 1.2 mm(2), P <.01). The restenosis rate differed between the 2 groups (Multilink 4% vs GFX 26%, P =.003). In multiple stepwise logistic regression analysis, the only predictor that significantly correlated with restenosis was stent type (P <.01). The odds ratio for the GFX stent-treated vessels was 18.65 (95% confidence interval 2.10-165.45). Conclusions: With deployment of the GFX stent, a thicker neointima develops within the stent. Stent configuration may affect clinical outcomes. abstract_id: PUBMED:22670206 Multi-scale simulations of the dynamics of in-stent restenosis: impact of stent deployment and design. Neointimal hyperplasia, a process of smooth muscle cell re-growth, is the result of a natural wound healing response of the injured artery after stent deployment. Excessive neointimal hyperplasia following coronary artery stenting results in in-stent restenosis (ISR). Regardless of recent developments in the field of coronary stent design, ISR remains a significant complication of this interventional therapy. 
The influence of stent design parameters such as strut thickness, shape and the depth of strut deployment within the vessel wall on the severity of restenosis has already been highlighted but the detail of this influence is unclear. These factors impact on local haemodynamics and vessel structure and affect the rate of neointima formation. This paper presents the first results of a multi-scale model of ISR. The development of the simulated restenosis as a function of stent deployment depth is compared with an in vivo porcine dataset. Moreover, the influence of strut size and shape is investigated, and the effect of a drug released at the site of injury, by means of a drug-eluting stent, is also examined. A strong correlation between strut thickness and the rate of smooth muscle cell proliferation has been observed. Simulation results also suggest that the growth of the restenotic lesion is strongly dependent on the stent strut cross-sectional profile. abstract_id: PUBMED:31159989 A meta-analysis of the effect of stent design on clinical and radiologic outcomes of carotid artery stenting. Objective: Procedural characteristics, including stent design, may influence the outcome of carotid artery stenting (CAS). A thorough comparison of the effect of stent design on outcome of CAS is thus warranted to allow for optimal evidence-based clinical decision making. This study sought to evaluate the effect of stent design on clinical and radiologic outcomes of CAS. Methods: A systematic search was conducted in MEDLINE, Embase, and Cochrane databases in May 2018. Included were articles reporting on the occurrence of clinical short- and intermediate-term major adverse events (MAEs; any stroke or death) or radiologic adverse events (new ischemic lesions on postprocedural magnetic resonance diffusion-weighted imaging [MR-DWI], restenosis, or stent fracture) in different stent designs used to treat carotid artery stenosis. Random effects models were used to calculate combined overall effect sizes. Metaregression was performed to identify the effect of specific stents on MAE rates. Results: From 2654 unique identified articles, two randomized, controlled trials and 66 cohort studies were eligible for analysis (including 46,728 procedures). Short-term clinical MAE rates were similar for patients treated with open cell vs closed cell or hybrid stents. Use of an Acculink stent was associated with a higher risk of short-term MAE compared with a Wallstent (risk ratio [RR], 1.51; P = .03), as was true for use of Precise stent vs Xact stent (RR, 1.55; P < .001). Intermediate-term clinical MAE rates were similar for open vs closed cell stents. Use of open cell stents predisposed to a 25% higher chance (RR, 1.25; P = .03) of developing postprocedural new ischemic lesions on MR-DWI. No differences were observed in the incidence of restenosis, stent fracture, or intraprocedural hemodynamic depression with respect to different stent design. Conclusions: Stent design is not associated with short- or intermediate-term clinical MAE rates in patients undergoing CAS. Furthermore, the division in open and closed cell stent design might conceal true differences in single stent efficacy. Nevertheless, open cell stenting resulted in a significantly higher number of subclinical postprocedural new ischemic lesions detected on MR-DWI compared with closed cell stenting. 
An individualized patient data meta-analysis, including future studies with a prospective homogeneous study design, is required to adequately correct for known risk factors and to provide definite conclusions with respect to carotid stent design for specific subgroups. abstract_id: PUBMED:29967933 Multiobjective design optimization of stent geometry with wall deformation for triangular and rectangular struts. The stent geometrical design (e.g., inter-strut gap, length, and strut cross-section) is responsible for stent-vessel contact problems and changes in the blood flow. These changes are crucial for causing some intravascular abnormalities such as vessel wall injury and restenosis. Therefore, structural optimization of stent design is necessary to find the optimal stent geometry design. In this study, we performed a multiobjective stent optimization for minimization of average stress and low wall shear stress ratio while considering the wall deformation in 3D flow simulations of triangular and rectangular struts. Surrogate-based optimization with the Kriging method and expected hypervolume improvement (EHVI) was performed to construct the surrogate model map and find the best configuration of inter-strut gap (G) and side length (SL). In light of the results, G-SL configurations of 2.81-0.39 and 3.00-0.43 mm are suggested as the best configurations for rectangular and triangular struts, respectively. Moreover, considering the surrogate model and flow pattern conditions, we concluded that triangular struts work better to improve the intravascular hemodynamics. abstract_id: PUBMED:38068277 Evaluation of Femoropopliteal In-Stent Restenosis Characteristics Stratified by Stent Design. Purpose: To evaluate the potential differences in characteristics of femoropopliteal in-stent restenosis (ISR) stratified by stent design with a focus on the swirling flow-inducing BioMimics 3D helical centerline stent. Methods: Patients with ISR of the superficial femoral and popliteal arteries undergoing reintervention were included in this study. The primary endpoint was the angiographic localization and extent of restenosis or reocclusion with the following five different stent systems: SMART Control stent, Supera peripheral stent, GORE® VIABAHN® endoprosthesis, BioMimics 3D stent, and Zilver® PTX® stent. Results: 414 ISR lesions were analyzed, affecting 236 Supera stents, 67 BioMimics 3D stents, 48 Zilver® PTX® stents, 38 SMART Control stents, and 25 VIABAHN® endoprostheses. The mean stent diameter and length were 5.7 ± 0.77 mm and 121.4 ± 94.8 mm, respectively. ISR included 310 (74.9%) lesions with 1 stent, 89 (21.5%) lesions with 2 stents, 14 (3.4%) lesions with 3 stents, and 1 lesion (0.2%) with 4 stents. Most lesions presented as reocclusions (67.4%) rather than focal (13.3%) or diffuse restenoses (19.3%). No significant differences in ISR lesion morphology were found. By trend, BioMimics 3D stent lesion extension was more focal (16.4% versus 12.7%, p = 0.258), with the highest proportion of lesions in which only the proximal stent third was affected (9.0% versus 5.8%, p = 0.230), as compared to the average of the other four devices. The occlusion rate was the second lowest for the BioMimics 3D stent (64.2 vs. 68.0%, p = 0.316). Risk factors for restenosis or occlusion were active smoking, pre-interventional occlusion, and popliteal intervention.
Conclusion: Our results suggest that the helical centerline stent design of the BioMimics 3D stent, which results in a swirling flow with increased wall shear stress, may offer protective properties over straight stent designs, including DES and endoprosthesis, regarding localization and extension of restenosis. Prospective, randomized studies are warranted. abstract_id: PUBMED:26519227 Biocompatibility of a novel zinc stent with a closed-cell-design. Biomaterials made of zinc have been widely described to be antioxidative, hypothrombogenic, anti-inflammatory and antiproliferative. Additionally, in vivo zinc is toxic only in high concentrations and can completely be metabolized in vivo. Due to these properties, zinc-based vascular stents might be able to reduce the rate of restenosis in comparison to bare metal stents, and zinc stents might also be able to limit the foreign body reaction. In the presented study we tested the biocompatibility and degradability of a stent made of zinc and characterized by a closed-cell design to achieve high opening force and to increase stent stiffness. After 100 days of enzymatic and hydrolytic degradation in 15 ml blood serum (fetal calf serum) a significant loss of weight (1.72 wt%) was measured. Zinc was compared to other metals in terms of degradation rates. After six weeks of incubation in physiologic sodium chloride solution zinc showed the slowest degradation, 6 times less than stainless steel and 4 times less than magnesium. In the tests for cytotoxic effects the degraded zinc stent caused no changes in the LDH release and cell membrane integrity (3T3 cells, mouse fibroblasts), in the cell activity/proliferation (MTS assay) or in the morphological characteristics of the cells and cell layers in comparison to the control material (polystyrene). Based on these results the tested zinc stent proved to be non-cytotoxic and to be characterized by degradation characteristics which might be advantageous in comparison to magnesium and stainless steel. abstract_id: PUBMED:34409580 Influences of Stent Design on In-Stent Restenosis and Major Cardiac Outcomes: A Scoping Review and Meta-Analysis. Thanks to the developments in implantable biomaterial technologies, invasive operating procedures, and widespread applications especially in vascular disease treatment, a milestone for interventional surgery was achieved with the introduction of vascular stents. Despite vascular stents providing a solution for embolisms, this technology includes various challenges, such as mechanical and electro-chemical complications, or in-stent restenosis (ISR) risks with long-term usage. Therefore, further development of biomaterial technologies is vital to overcome such risks and problems. For this purpose, recent research has focused mainly on the applications of surface modification techniques on biomaterials and vascular stents to increase their hemocompatibility. ISR risk has been reduced with the development and prevalent usage of state-of-the-art stent designs such as drug-eluting and biodegradable stents. Nevertheless, their problems have not been overcome completely. Furthermore, patients using drug-eluting stents are faced with further clinical challenges. Therefore, the bare metal stent, which is the first form of the vascular stent technology and includes the highest ISR risk, is still in common usage for vascular treatment applications. For this reason, further research is necessary to solve the remaining vital problems.
In this scoping review, stent-based major cardiac events including ISR are analyzed depending on different designs and material selection in stent manufacturing. Recent and novel approaches to overcome such challenges are stated in detail. abstract_id: PUBMED:26821272 Drug-Eluting Stent Design is a Determinant of Drug Concentration at the Endothelial Cell Surface. Although drug-eluting stents (DES) have greatly reduced arterial restenosis, there are persistent concerns about stent thrombosis. DES thrombosis is attributable to retarded vascular re-endothelialization due to both stent-induced flow disturbance and the inhibition by the eluted drug of endothelial cell proliferation and migration. The present computational study aims to determine the effect of DES design on both stent-induced flow disturbance and the concentration of eluted drug at the arterial luminal surface. To this end, we consider three closed-cell stent designs that resemble certain commercial stents as well as three "idealized" stents that provide insight into the impact of specific characteristics of stent design. To objectively compare the different stents, we introduce the Stent Penalty Index (SPI), a dimensionless quantity whose value increases with both the extent of flow disturbance and luminal drug concentration. Our results show that among the three closed-cell designs studied, wide cell designs lead to lower SPI and are thus expected to have a less adverse effect on vascular re-endothelialization. For the idealized stent designs, a spiral stent provides favorable SPI values, whereas an intertwined ring stent leads to an elevated SPI. The present findings shed light onto the effect of stent design on the concentration of the eluted drug at the arterial luminal surface, an important consideration in the assessment of DES performance. abstract_id: PUBMED:12478230 Drug-eluting stents: role of stent design, delivery vehicle, and drug selection. Increasing focus has recently been directed toward the different parameters of drug-eluting stents-stent design, delivery-vehicle materials, and drug properties-and the manner in which each of these elements may affect the function of the stents. Several specific characteristics of design may affect restenosis, although design optimization often presents a choice between acute procedural success and long-term biological stability. The influence of design parameters such as strut thickness and cell configuration is described. Polymer material has frequently been used to coat drug-eluting stents, although some agents, such as paclitaxel, can be attached directly to the stent's surface, obviating the need for a polymer layer. The properties of agents used in drug-eluting stents and how those properties affect delivery and long-term outcome are discussed, as is the influence of the disease state of the target vessel on stent safety and efficacy. Answer: Yes, stent design does affect the probability of restenosis. There is evidence that specific design features of stents, such as strut thickness, shape, and the depth of strut deployment within the vessel wall, can influence the severity of restenosis (PUBMED:22670206). Studies have shown that stent geometry designed to optimize expansion and lower recoil is a prerequisite for favorable clinical outcomes, and strut thickness has been identified as an important risk factor for restenosis (PUBMED:12478231). 
A randomized trial comparing Multilink stents with GFX stents found that stent configuration may affect clinical outcomes, with the Multilink group showing a significantly larger minimal lumen diameter at follow-up and a lower restenosis rate compared to the GFX group (PUBMED:11526357). Additionally, a meta-analysis of carotid artery stenting outcomes indicated that stent design is not associated with short- or intermediate-term clinical major adverse events rates, but open cell stenting resulted in a significantly higher number of subclinical postprocedural new ischemic lesions detected on magnetic resonance diffusion-weighted imaging compared with closed cell stenting (PUBMED:31159989). Furthermore, the design of drug-eluting stents, which includes the stent design, delivery vehicle, and drug selection, is known to play a role in the function of the stents and may affect restenosis rates (PUBMED:12478230). Overall, these findings suggest that stent design is a critical factor in the probability of restenosis following stent deployment.
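The risk ratios quoted in the carotid stenting meta-analysis above (for example, RR 1.51 for Acculink versus Wallstent) follow a standard calculation from two event proportions. The short Python sketch below shows that calculation together with a normal-approximation 95% confidence interval; the counts used are hypothetical placeholders for illustration only and are not taken from any of the cited studies, whose published estimates also involve random-effects pooling that this sketch does not reproduce.

    import math

    # Hypothetical counts, for illustration only (not data from the cited studies)
    events_a, total_a = 30, 400   # e.g. adverse events with stent design A
    events_b, total_b = 20, 400   # e.g. adverse events with stent design B

    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b

    # Standard error of log(RR) and a 95% confidence interval (normal approximation)
    se_log_rr = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
    ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)
    print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")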
Instruction: Central venous line dressings: can you stick it? Abstracts: abstract_id: PUBMED:28663107 Prevention of central venous line associated bloodstream infections in adult intensive care units: A systematic review. Background: In adult Intensive Care Units, the complexity of patient treatment requirements make the use of central venous lines essential. Despite the potential benefits central venous lines can have for patients, there is a high risk of bloodstream infection associated with these catheters. Aim: Identify and critique the best available evidence regarding interventions to prevent central venous line associated bloodstream infections in adult intensive care unit patients other than anti-microbial catheters. Methods: A systematic review of studies published from January 2007 to February 2016 was undertaken. A systematic search of seven databases was carried out: MEDLINE; CINAHL Plus; EMBASE; PubMed; Cochrane Library; Scopus and Google Scholar. Studies were critically appraised by three independent reviewers prior to inclusion. Results: Nineteen studies were included. A range of interventions were found to be used for the prevention or reduction of central venous line associated bloodstream infections. These interventions included dressings, closed infusion systems, aseptic skin preparation, central venous line bundles, quality improvement initiatives, education, an extra staff in the Intensive Care Unit and the participation in the 'On the CUSP: Stop Blood Stream Infections' national programme. Conclusions: Central venous line associated bloodstream infections can be reduced by a range of interventions including closed infusion systems, aseptic technique during insertion and management of the central venous line, early removal of central venous lines and appropriate site selection. abstract_id: PUBMED:35084138 Central venous catheter, PICC-line or Midline : which catheter for my patient? Midline long peripheral venous catheters are an interesting alternative to central venous catheters and PICC-lines when the placement of a peripheral venous catheter is impossible or when the patient requires prolonged intravenous treatment. Midline catheters can be inserted at the patient's bedside and require no radiological verification after insertion. They can be kept in place for up to 14 days and allow for repeated blood sampling. The mechanical, infectious, or thrombotic complications and safety profile do not differ from other venous catheters, notably PICC-line. abstract_id: PUBMED:26472949 Characterizing the Risk Factors Associated With Venous Thromboembolism in Pediatric Patients After Central Venous Line Placement. Objectives: With the apparent increase in venous thromboembolism noted in the pediatric population, it is important to define which children are at risk for clots and to determine optimal preventative therapy. The purpose of this study was to determine the risk factors for venous thromboembolism in pediatric patients with central venous line placement. Methods: This was an observational, retrospective, case-control study. Control subjects were patients aged 0 to 18 years who had a central venous line placed. Case subjects had a central line and a radiographically confirmed diagnosis of venous thromboembolism. Results: A total of 150 patients were included in the study. Presence of multiple comorbidities, particularly the presence of a congenital heart defect (34.7% case vs. 
14.7% control; p < 0.005), was found to put pediatric patients at increased risk for thrombosis. Additionally, the administration of parenteral nutrition through the central line (34.7% case vs. 18.7% control; p = 0.03) and location of the line increased the risk for clot formation. Conclusions: With increased awareness of central venous line-related thromboembolism, measures should be taken to reduce the number and duration of central line placements, and further studies addressing the need for thromboprophylaxis should be conducted. abstract_id: PUBMED:26362005 Central venous catheter repair is not associated with an increased risk of central line infection or colonization in intestinal failure pediatric patients. Purpose: The intestinal failure (IF) population is dependent upon central venous catheters (CVC) to maintain minimal energy requirements for growth. Central venous catheter infections (CVCI) are frequent and an independent predictor of intestinal failure associated liver disease. A common complication in children with long-term CVC is the risk of line breakage. Given the often-limited usable vascular access sites in this population, it has been the standard of practice to perform repair of the broken line. Although widely practiced, it is unknown if this practice is associated with increased line colonization rates and subsequent line loss. Methods: A retrospective review of our institutional IF population over the past 8years (2006-2014) was performed. Utilizing a prospectively constructed database, all pediatric patients (n=13, ages 0-17 years) with CVC dependency enrolled in the Children's Intestinal Rehabilitation Program with IF were included who underwent a repair and/or replacement procedure of their line. The control replacement group was CVCs that were replaced without being repaired (36), the experimental repair group was CVCs that were repaired (8). The primary outcome of interest was the mean number of days in each group from the intervention (replacement or repair) to line infection/colonization. Mann-Whitney tests for significance were performed with p-values <0.05 being the threshold value for significance. Results: There were no catheter repair associated CVCI. The mean number of days from the replacement or repair of a CVC to its removal owing to infection/colonization was 210.0 and 162.8days respectively. There was no statistically significant difference between these groups in time to removal owing to line infection (p=0.55). Conclusion: Repair of central venous catheters in the pediatric population with intestinal failure does not lead to an increased rate of central venous catheter infection and should be performed when possible. abstract_id: PUBMED:28334482 Minimising central line-associated bloodstream infection rate in inserting central venous catheters in the adult intensive care units. Aims And Objectives: To investigate the procedural aspects in inserting central venous catheters that minimise central line-associated bloodstream infection rates in adult intensive care units through a structured literature review. Background: In adult intensive care units, central line-associated bloodstream infections are a major cause of high mortality rates and increased in costs due to the consequences of complications. Methods: Eligible articles were identified by combining indexed keywords using Boolean operator of "AND" under databases of Ovid and CINAHL. Titles and abstract of retrieved papers were screened and duplicates removed. 
Inclusion and exclusion criteria were applied to derive the final papers, which contained seminal studies. The quality of papers was assessed using a special data extraction form. Results: The number of papers retrieved from all databases was 337, reduced to 302 after removing duplicates. Papers were scanned for titles and abstract to locate those relevant to the review question. After this, 250 papers were excluded for different reasons and a total of 52 papers were fully accessed to assess for eligibility. The final number of papers included was 10 articles. Conclusion: Many interventions can be implemented in the adult intensive care unit during the insertion of a central venous catheter to minimise central line-associated bloodstream infections rates. These include choosing the subclavian site to insert the catheters as the least infectious and decolonising patients' skin with alcoholic chlorhexidine gluconate preparation due to its broad antimicrobial effect and durability. Relevance To Clinical Practice: Choosing optimal sites for central venous catheter insertion is a complex process that relies on many factors. Furthermore, the introduction of chlorhexidine gluconate preparations should be accompanied with multifaceted interventions including quality improvement initiatives to improve healthcare workers' compliance. As a quality marker in adult intensive care units, healthcare sectors should work on establishing benchmarks with other sectors around the world. abstract_id: PUBMED:32590391 Rates of Venous Thromboembolism and Central Line-Associated Bloodstream Infections Among Types of Central Venous Access Devices in Critically Ill Children. Objectives: Central venous access devices, including peripherally inserted central catheters and central venous catheters, are often needed in critically ill patients, but also are associated with complications, including central-line associated bloodstream infections and venous thromboembolism. We compared different central venous access device types and these complications in the PICU. Design: Multicenter, cohort study. Setting: One hundred forty-eight participating Virtual PICU Systems, LLC, hospital PICU sites. Patients: Pediatric patients with central venous access placed from January 1, 2010, to December 31, 2015. Interventions: None. Measurements And Main Results: Patient and central venous access device variables postulated to be associated with central-line associated bloodstream infection and venous thromboembolism were included. Data were analyzed using Pearson chi-square test or Fisher exact test for categorical variables, Mann-Whitney U test for continuous variables, and logistic regression and classification trees for multivariable analysis that examined significant predictors of venous thromboembolism or central-line associated bloodstream infection. Analysis included 74,196 first lines including 4,493 peripherally inserted central catheters and 66,194 central venous catheters. An increased rate of venous thromboembolism (peripherally inserted central catheter: 0.93%, central venous catheter: 0.52%; p = 0.001) (peripherally inserted central catheter: 8.65/1,000 line days, central venous catheter: 6.29/1,000 line days) and central-line associated bloodstream infection (peripherally inserted central catheter: 0.73%, central venous catheter: 0.24%; p = 0.001) (peripherally inserted central catheter: 10.82/1,000 line days, central venous catheter: 4.97/1,000 line days) occurred in peripherally inserted central catheters. 
In multivariable analysis, central venous catheters had decreased association with central-line associated bloodstream infection (odds ratio, 0.505; 95% CI, 0.336-0.759; p = 0.001) and venous thromboembolism (odds ratio, 0.569; 95% CI, 0.330-0.982; p = 0.043) compared with peripherally inserted central catheters. Conclusions: Peripherally inserted central catheters are associated with higher rates of central-line associated bloodstream infection and venous thromboembolism than central venous catheters in children admitted to the PICU. abstract_id: PUBMED:30142121 Comparison of Complication Rates of Central Venous Catheters Versus Peripherally Inserted Central Venous Catheters in Pediatric Patients. Objectives: The purpose of our study is to compare the rate of central line-associated blood stream infections and venous thromboembolism in central venous catheters versus peripherally inserted central catheters in hospitalized children. There is a growing body of literature in adults describing an increased rate of venous thromboembolisms and similar rates of central line-associated blood stream infection associated with peripherally inserted central catheters versus central venous catheters. It is not known if the rate of central line-associated blood stream infection and venous thromboembolism differs between peripherally inserted central catheters and central venous catheters in children. Based on current adult literature, we hypothesize that central line-associated blood stream infection rates for peripherally inserted central catheters and central venous catheters will be similar, and the rate of venous thromboembolism will be higher for peripherally inserted central catheters versus central venous catheters. Design: This is a cohort study using retrospective review of medical records and prospectively collected hospital quality improvement databases. Setting: Quaternary-care pediatric hospital from October 2012 to March 2016. Patients: All patients age 1 day to 18 years old with central venous catheters and peripherally inserted central catheters placed during hospital admission over the study dates were included. Central venous catheters that were present upon hospital admission were excluded. The primary outcomes were rate of central line-associated blood stream infection and rate of venous thromboembolism. Interventions: None. Measurements And Main Results: Of 2,709 catheters included in the study, 1,126 were peripherally inserted central catheters and 1,583 were central venous catheters. Peripherally inserted central catheters demonstrated a higher rate of both infection and venous thromboembolism than central venous catheters in all reported measures. In multivariable analysis, peripherally inserted central catheters had increased association with central line-associated blood stream infection (odds ratio of 3.15; 95% CI, 1.74-5.71; p = 0.0002) and increased association with venous thromboembolism (odds ratio of 2.71; 95% CI, 1.65-4.45; p < 0.0001) compared with central venous catheters. Conclusions: Rates of central line-associated blood stream infection and venous thromboembolism were higher in hospitalized pediatric patients with peripherally inserted central catheters as compared to central venous catheters. Our study confirms the need for further investigation into the safety of central access devices to assist in proper catheter selection. 
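The odds ratios reported above (for example, OR 3.15 for central line-associated bloodstream infection with peripherally inserted central catheters) come from multivariable logistic regression models fitted to the registry data. As a point of reference, the sketch below shows only the simpler, unadjusted odds ratio from a 2x2 table; the counts are hypothetical placeholders rather than the actual registry numbers, and an unadjusted estimate will generally differ from the adjusted values quoted in the abstracts.

    import math

    # Hypothetical 2x2 table, for illustration only (not the registry data)
    #                infection    no infection
    # PICC           a = 8        b = 1118
    # CVC            c = 4        d = 1579
    a, b, c, d = 8, 1118, 4, 1579

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"unadjusted OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")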
abstract_id: PUBMED:23537408 The PICC line, a new approach for venous access Peripheral Inserted Central Catheter (PICC) line is a peripherally inserted central catheter. This implantable medical device is placed into a peripheral vein of the arm in order to obtain an intravenous central access. This device can find its use in various applications like intravenous delivery of parenteral nutrition, anticancer agents and antibiotics, as well as for blood sampling. PICC line is not widely used in medical practice because it remains largely unknown. The aim of this review is thus to introduce PICC line to the medical and scientific community. First, we will approach its insertion and maintenance of the dressing. We will then detail the benefits and drawbacks associated with its use, and finally discuss its position with regards to the other central venous access available. abstract_id: PUBMED:19231550 Central venous line dressings: can you stick it? Aim: The objective of this study is to investigate which central venous catheter dressing is most secure. Background: Central venous catheter insertion is a common procedure. A secure dressing is essential to prevent early line displacement. Many different dressings are used, but there is no consensus in choosing an optimal dressing. Methods: A sandwich, loop-line, or bridge technique was used to apply each of the dressings. Two mechanisms of displacement were tested: dressing adherence to skin and dressing adherence to line. Dressing to skin adherence was tested on a relatively hairless part of the upper arm. Weights were added sequentially until the dressing peeled off. Dressing to line adherence was tested by applying the dressing to a 7F Dual Lumen Bard Hickman line passing through a piece of foam (measuring 13 x 12 cm). Weights were attached to the line until the cuff was pulled through the foam. Results: Dressing to skin adherence was poorest for the clear dressings, followed by Mefix and Sleek, and greatest for a combination of Tegaderm and Mefix. Dressing to line adherence was improved using a sandwich technique instead of a loop-line technique and most secure when a bridge technique was used to the thicker shaft of the line. Conclusions: The dressings used for securing Hickman lines are not all equally secure. The least effective is the IV 3000 loop-line dressing. Tegaderm-Mefix bridge and Tegaderm-Mefix-Sleek combination dressings are the most secure and cost effective. abstract_id: PUBMED:26138518 Implementation of central venous catheter bundle in an intensive care unit in Kuwait: Effect on central line-associated bloodstream infections. Central line-associated bloodstream infection (CLABSIs) is an important healthcare-associated infection in the critical care units. It causes substantial morbidity, mortality and incurs high costs. The use of central venous line (CVL) insertion bundle has been shown to decrease the incidence of CLABSIs. Our aim was to study the impact of CVL insertion bundle on incidence of CLABSI and study the causative microbial agents in an intensive care unit in Kuwait. Surveillance for CLABSI was conducted by trained infection control team using National Health Safety Network (NHSN) case definitions and device days measurement methods. 
During the intervention period, nursing staff used central line care bundle consisting of (1) hand hygiene by inserter (2) maximal barrier precautions upon insertion by the physician inserting the catheter and sterile drape from head to toe to the patient (3) use of a 2% chlorohexidine gluconate (CHG) in 70% ethanol scrub for the insertion site (4) optimum catheter site selection. (5) Examination of the daily necessity of the central line. During the pre-intervention period, there were 5367 documented catheter-days and 80 CLABSIs, for an incidence density of 14.9 CLABSIs per 1000 catheter-days. After implementation of the interventions, there were 5052 catheter-days and 56 CLABSIs, for an incidence density of 11.08 per 1000 catheter-days. The reduction in the CLABSI/1000 catheter days was not statistically significant (P=0.0859). This study demonstrates that implementation of a central venous catheter post-insertion care bundle was associated with a reduction in CLABSI in an intensive care area setting. Answer: The study by PUBMED:19231550 investigated which central venous catheter dressing is most secure. The study tested different dressings using a sandwich, loop-line, or bridge technique to apply each dressing and evaluated two mechanisms of displacement: dressing adherence to skin and dressing adherence to the line. The results showed that dressing to skin adherence was poorest for clear dressings, followed by Mefix and Sleek, and greatest for a combination of Tegaderm and Mefix. Dressing to line adherence was improved using a sandwich technique instead of a loop-line technique and was most secure when a bridge technique was used on the thicker shaft of the line. The study concluded that not all dressings used for securing Hickman lines are equally secure, with the least effective being the IV 3000 loop-line dressing. The most secure and cost-effective dressings were Tegaderm-Mefix bridge and Tegaderm-Mefix-Sleek combination dressings.
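The incidence densities in the Kuwait bundle study above are simple rates of events per 1000 catheter-days, and the figures given in the abstract can be reproduced directly. The minimal sketch below uses those published counts; the abstract does not state which statistical test produced the reported P value of 0.0859, so no test is reproduced here.

    # Counts reported in the Kuwait ICU bundle study above (PUBMED:26138518)
    pre_clabsi, pre_days = 80, 5367
    post_clabsi, post_days = 56, 5052

    pre_rate = pre_clabsi / pre_days * 1000     # about 14.9 CLABSIs per 1000 catheter-days
    post_rate = post_clabsi / post_days * 1000  # about 11.08 CLABSIs per 1000 catheter-days
    rate_ratio = post_rate / pre_rate           # about 0.74, i.e. a roughly 26% relative reduction

    print(f"pre-intervention:  {pre_rate:.2f} per 1000 catheter-days")
    print(f"post-intervention: {post_rate:.2f} per 1000 catheter-days")
    print(f"rate ratio: {rate_ratio:.2f}")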
Instruction: Is level V neck dissection necessary in primary parotid cancer? Abstracts: abstract_id: PUBMED:24965707 Is level V neck dissection necessary in primary parotid cancer? Objectives/hypothesis: This study aims to evaluate the pattern of nodal metastasis to level V in parotid cancer and to examine the clinical value of level V neck dissection (LVND). Study Design: Retrospective cohort study. Methods: Retrospective chart review of 86 patients (47 N0 nodal metastasis [N0] neck and 39 positive nodal metastasis [N(+) ] neck) who received parotidectomy and neck dissection was performed. The prevalence of pathological nodal metastasis in level V neck was evaluated and correlated with locoregional recurrence. Results: LVND was performed in 10.6% and 28.2% of patients with clinical NO (cN0) and cN(+) neck disease, respectively. The prevalence of pathological positive nodal metastasis was 0% (cN0) and 81.8% (cN(+) ). In patients with cN0 neck, the rate of recurrence in level V was 6%. Conclusion: In our patient cohort with predominantly high-grade parotid cancer, LVND was necessary in patients with cN(+) neck because there was a high likelihood for pathologically positive nodal metastasis. In patients with cN0 neck, the rate of recurrence in level V was low enough not to warrant a routine inclusion of LVND. abstract_id: PUBMED:35302270 Indications and outcomes for elective dissection of level V in primary parotid cancer. Background: The extent of cervical lymphadenectomy required for primary parotid cancer is not well-established. Methods: In this retrospective case-control study, 84 patients who underwent primary parotidectomy and neck dissection for primary parotid cancer between 2010 and 2019 were identified and analyzed. Results: Of the 84 patients, 37 underwent elective level V neck dissection. All six (16.0%) who had occult level V nodes had clinically evident, preoperative anterior cervical metastases, a statistically significant finding. No other clinical factors are correlated with posterior neck involvement. There was no significant difference in disease-free or overall survival for patients with occult level V disease relative to positive lymph nodes in other levels. Conclusions: Patients with clinically evident anterolateral cervical lymphatic metastases from parotid cancer preoperatively have high rates of occult level V nodes. Level V neck dissection can be avoided in cN0 patients and offered no survival advantage. abstract_id: PUBMED:16148706 Neck dissection of level IIb: is it really necessary? Objectives: To determine whether resection of level IIb is necessary in elective or therapeutic neck dissections. Study Design: Prospective case series. Methods: Level IIb nodes were analyzed for micrometastases as separate specimens in 160 neck dissections on 148 patients with squamous cell carcinoma of the head and neck. Results: In 106 elective neck dissections (N0 necks) from upper aerodigestive tract (UADT) and skin/parotid squamous carcinoma primaries, level IIb was involved in 4.5% and 33%, respectively. In 54 therapeutic neck dissections (N+ necks) from UADT and skin/parotid squamous carcinoma primaries, level IIb was involved in 25% and 71%, respectively. Apart from skin/parotid squamous carcinoma primaries, level IIb was never involved unless level IIa was also involved. Conclusions: Level IIb nodes can be left in situ in UADT primary carcinomas in nontonsillar N0 necks without significantly compromising regional clearance of micrometastases. 
abstract_id: PUBMED:35351350 Is elective neck dissection necessary for patients with cT3-4N0 parotid gland cancer? Objective: Management of the cervical lymph nodes in patients with cT3-4N0 parotid gland cancer (PGC) has been controversial. This study investigated the need for elective neck dissection (END) in patients with cT3-4N0 PGC. Methods: We retrospectively examined cervical lymph node metastasis, overall survival (OS), and disease-free survival (DFS) rates in 40 patients with cT3-4N0 PGC according to whether or not END was performed. Results: Cervical lymph node metastasis occurred in 27.5% of patients and level II was the most common area. Recurrence could be treated by salvage neck dissection. There was no significant difference in OS (P=0.581) or DFS (P=0.728) between the group that underwent END and the group that did not. Conclusion: END at level II is worth performing because of the occult lymph node metastasis rate. The area of neck dissection should be limited because there is no evidence that END improves the prognosis of cT3-4N0 PGC. abstract_id: PUBMED:15746848 Elective neck dissection versus observation in primary parotid carcinoma. Objective: To evaluate the efficacy of elective neck dissection in the clinically negative neck of patients with primary carcinoma of the parotid gland. Study design and setting: A retrospective analysis was undertaken at a university Department of Otorhinolaryngology-Head and Neck Surgery on 83 previously untreated patients with primary carcinoma of the parotid gland and a clinically negative neck. The reliability of fine needle aspiration cytology, frozen section, and the clinico-pathologic findings of patients with occult neck metastases were analyzed. The regional recurrence rate and the outcome were compared among 2 groups: one with elective neck dissection (N = 41) and one without elective neck dissection (N = 42). Results: The diagnosis of malignancy was known preoperatively in 59 (71%) cases, the exact histologic tumor type in 36 (43%) and the grade in 37 (44%) of 83 cases. Occult metastases were detected in 8 (20%) of 41 cN0 patients, in 5 cases associated with a high-grade and in 3 cases with a low-grade carcinoma. Recurrence of disease developed in 5 (12%) patients in the elective neck dissection group and in 11 (26%) patients in the observation group. All of the 7 neck recurrences occurred in the observation group. The 5-year actuarial and disease-free survival rate was 80% and 86% for patients with elective neck dissection and 83% and 69% for patients without neck dissection. Conclusion and significance: A routine elective neck dissection is suggested in all patients with primary carcinoma of the parotid gland. The efficacy of elective neck dissection, nevertheless, has never been evaluated prospectively. abstract_id: PUBMED:33222323 Elective neck dissection in primary parotid carcinomas: A systematic review and meta-analysis. Background: To estimate the rate of occult cervical lymph node metastases in cN0 patients affected by primary parotid carcinomas and to scrutinize the evidence on the indication and extent of elective neck dissection in these neoplasms. Methods: Medline, Embase, Web of Science, Cochrane Library and Scopus were searched until August 31, 2020, to identify studies reporting the use of elective neck dissection in the management of malignant parotid tumours. The PRISMA checklist was used. A single arm meta-analysis was then made to determine the pooled rate of occult lymph node metastases.
Risk of bias of the included studies was assessed through the ROBINS-E tool. Results: The initial search returned 20 541 articles, of which twelve met the inclusion criteria and were included in the meta-analysis. They comprised 1310 patients with parotid carcinoma, of whom 542 cN0 underwent elective neck dissection, which led to the diagnosis of lymph node metastasis (pN+/cN0) in 113 cases. Meta-analysis of the results of elective neck dissection showed an overall rate of occult metastases of 0.22 (99% CI: 0.14-0.30). Locally advanced or high-grade tumours were the commonest indications for elective neck dissection in the included studies. The most dissected lymph node levels were I-II-III, and level II was the commonest site of occult nodal metastases. Conclusions: An occult metastasis rate of 0.22 (99% CI: 0.14-0.30) represents a not negligible percentage value, which should encourage further research to outline the most appropriate elective neck management in cN0 patients with parotid carcinomas. abstract_id: PUBMED:36939401 Patterns of Lymph Node Metastasis in Parotid Cancer and Implications for Extent of Neck Dissection. Objective: The role and extent of neck dissection in primary parotid cancer are controversial. Herein, we characterize patterns of lymph node metastasis in parotid cancer. Study Design: Retrospective analysis. Setting: National Cancer Database. Methods: Patients with the 6 most common histologic subtypes of parotid cancer were selected. Primary outcomes were the distribution of positive lymph nodes by level and overall survival assessed by Cox analysis. Secondary outcomes included predictors of extended lymph node involvement (≥3 lymph nodes or Level IV/V involvement), via logistic regression. Results: Six thousand nine hundred seventy-seven patients with acinic cell carcinoma, adenocarcinoma, adenoid cystic carcinoma, carcinoma ex pleomorphic adenoma (CExPA), mucoepidermoid carcinoma, and salivary duct carcinoma (SDC) were included. Among cN0 patients, 8.2% of low-grade tumor patients had occult nodal metastasis versus 30.9% in high-grade tumor patients. Elective neck dissection was not associated with an overall survival benefit (adjusted hazard ratio: 1.10; 0.94-1.30, p = .238). Among cN+ tumors, CExPA (odds ratio [OR]: 1.88, 1.05-3.39, p = .034) and high-grade pathology (OR: 3.03, 1.87-4.93, p < .001) were predictive of having ≥3 pathologic nodes. CExPA (OR: 2.13, 1.22-3.72, p = .008), adenocarcinoma (OR: 1.60, 1.11-2.31, p = .013), SDC (OR: 1.92, 1.17-3.14, p < .01), and high-grade pathology (OR: 3.61, 2.19-5.97, p < .001) were predictive of Level IV/V neck involvement. Conclusions: In parotid malignancy, nodal metastasis distribution is dependent on histology and grade. High-grade tumors and certain histologies (SDC and adenocarcinoma) had a higher incidence of occult nodes. Comprehensive neck dissection should also be considered for node-positive high-grade tumors, SDC, and adenocarcinoma. abstract_id: PUBMED:36939620 Systematic Review and Meta-Analysis on the Incidence of Level-Specific Cervical Nodal Metastasis in Primary Parotid Malignancies. Objective: In primary parotid gland malignancies, the incidence of level-specific cervical lymph node metastasis in clinically node-positive necks remains unclear. This study aimed to determine the incidence of level-specific cervical node metastasis in clinically node-negative (cN0) and node-positive (cN+) patients who presented with primary parotid malignancies. 
Data Sources: Electronic databases (MEDLINE, EMBASE, PubMed, Cochrane). Review Methods: Random-effects meta-analysis was used to calculate pooled estimate incidence of level-specific nodal metastasis for parotid malignancies with 95% confidence intervals (CIs). Subgroup analyses of cN0 and cN+ were performed. Results: Thirteen publications consisting of 818 patients were included. The overall incidence of cervical nodal involvement in all neck dissections was 47% (95% CI, 31%-63%). Among those who were cN+, the incidence of nodal positivity was 89% (95% CI, 75%-98%). Those who were cN0 had an incidence of 32% (95% CI, 14%-53%). In cN+ patients, the incidence of nodal metastasis was high at all levels (level I 33%, level II 73%, level III 48%, level IV 39%, and level V 37%). In cN0 patients, the incidence of nodal metastasis was highest at levels II (28%) and III (11%). Conclusion: For primary parotid malignancies, the incidence of occult metastases was 32% compared to 89% in a clinically positive neck. It is recommended that individuals with a primary parotid malignancy requiring elective treatment of the neck have a selective neck dissection which involves levels II to III, with the inclusion of level IV based on clinical judgment. Those undergoing a therapeutic neck dissection should undergo a comprehensive neck dissection (levels I-V). abstract_id: PUBMED:23007927 Parotidectomy and neck dissection in the management of conjunctival melanoma: are they necessary? Objectives/hypothesis: The objectives of this study were to review traditional techniques for the management of conjunctival melanoma and assess the need for parotidectomy and neck dissection in the management of conjunctival melanoma. Study Design: Retrospective review. Methods: This study was a retrospective review conducted in a tertiary academic medical center of patients diagnosed with conjunctival melanoma over a 20-year period Results: There were 39 patients diagnosed with conjunctival melanoma identified from January 1990 to December 2010. Follow-up varied from 2 to 201 months (median, 25 months). Of the patients, 16 (41%) had local recurrences at the primary site, two (13%) of whom later presented with parotid disease. One patient with parotid recurrence had a subsequent neck dissection for confirmed metastatic spread. No patient in this series had metastatic cervical disease without initial spread to the parotid. The probability of disease-free survival at 1, 2, and 5 years was 77%, 68%, and 50%, respectively. The probability of parotid free progression at 1, 2, and 5 years was 100%, 96%, and 90%, respectively. Conclusions: Conjunctival melanoma is a rare malignancy traditionally managed with aggressive treatment to optimize local control. The role for staging parotidectomy with or without neck dissection has been heavily debated. Based on our review, parotidectomy only needs to be undertaken when high suspicion for metastatic spread is present, such as a palpable or radiographically evident mass. In addition, without documented parotid disease, neck dissection is not required. abstract_id: PUBMED:12006762 Value of neck dissection in patients with cancer of the parotid gland and a clinical NO neck. Cervical lymph node metastases in patients with parotid gland carcinoma are not rare. Regional metastases have a significant influence on the prognosis of these patients. In spite of the clinical relevance of lymphogenous metastases, the indications for elective treatment of the neck are not well defined. 
In the present review the controversies and therapeutical strategies of ipsilateral neck lymph nodes and their extension in patients with potential occult lymph node metastases are discussed. On the basis of the published data, in consideration of the direction of lymphogenous metastatic spread of parotid gland carcinomas, an elective neck dissection is recommended in carcinomas with high percentage of lymphatic spread also in the N0 neck. Consideration of additional parameters (> T2, lymphangiosis carcinomatosa) is appropriate to perform also a neck dissection in carcinomas with low risk for lymphogenous metastases. An elective neck dissection should include levels I, II, III and upper V. Answer: The necessity of level V neck dissection (LVND) in primary parotid cancer is debated and seems to depend on the clinical presentation of the disease. In a retrospective cohort study, LVND was performed in 10.6% of patients with clinical N0 (cN0) neck disease and 28.2% of patients with clinically positive nodal metastasis (cN+). The prevalence of pathological positive nodal metastasis was 0% in cN0 patients and 81.8% in cN+ patients. In cN0 patients, the rate of recurrence in level V was 6%, suggesting that LVND may not be routinely necessary in these patients but is necessary in patients with cN+ neck due to the high likelihood of pathologically positive nodal metastasis (PUBMED:24965707). Another study found that patients with clinically evident anterolateral cervical lymphatic metastases preoperatively had high rates of occult level V nodes. However, level V neck dissection could be avoided in cN0 patients and offered no survival advantage, indicating that LVND may be more relevant for patients with clinically evident cervical metastases (PUBMED:35302270). A systematic review and meta-analysis found an overall rate of occult metastases of 22% in cN0 patients with parotid carcinomas, suggesting that elective neck dissection should be considered, especially in locally advanced or high-grade tumors. The most common site of occult nodal metastases was level II, which may imply that a more limited neck dissection could be appropriate (PUBMED:33222323). In summary, the necessity of LVND in primary parotid cancer appears to be higher in patients with clinically positive nodal disease (cN+), while its routine use in cN0 patients is more controversial and may not be warranted, especially if there is no evidence of anterolateral cervical metastases or high-grade pathology. The decision to perform LVND should be individualized based on clinical factors such as the presence of clinically evident metastases and the grade of the tumor.
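The pooled occult-metastasis rate of 0.22 (99% CI 0.14-0.30) cited in the answer above comes from a random-effects meta-analysis, which weights the individual studies and allows for between-study heterogeneity. The sketch below shows only the cruder calculation obtained by pooling the raw counts reported in that abstract (113 occult-positive necks out of 542 cN0 dissections); the narrower interval it produces illustrates why the published random-effects estimate is not the same as a naive pooled proportion.

    import math

    # Raw counts reported in the meta-analysis above (PUBMED:33222323)
    occult_positive = 113
    cn0_dissections = 542

    crude_rate = occult_positive / cn0_dissections  # about 0.21
    # Normal-approximation 99% confidence interval around the crude proportion (z = 2.576)
    se = math.sqrt(crude_rate * (1 - crude_rate) / cn0_dissections)
    ci_low = crude_rate - 2.576 * se
    ci_high = crude_rate + 2.576 * se
    print(f"crude occult metastasis rate: {crude_rate:.2f} (99% CI {ci_low:.2f}-{ci_high:.2f})")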
Instruction: Are methicillin-resistant Staphylococcus aureus that produce Panton-Valentine leucocidin (PVL) found among residents of care homes? Abstracts: abstract_id: PUBMED:18755697 Are methicillin-resistant Staphylococcus aureus that produce Panton-Valentine leucocidin (PVL) found among residents of care homes? Objectives: Panton-Valentine leucocidin (PVL)-positive Staphylococcus aureus are responsible for causing skin and soft tissue infections, with the potential to cause severe invasive disease. Recently, methicillin-resistant Staphylococcus aureus (MRSA) strains that produce PVL have emerged in the community. As residents of care homes are a key group at risk of MRSA colonization and infection, we have examined the epidemiology of MRSA in three large cohorts of residents in urban care homes to establish whether PVL-positive MRSA strains are present in this setting. Methods: Nasal swabs (n = 3037) collected from consenting residents of 69 care homes in Leeds, UK, were screened for MRSA using chromogenic agar over three periods (June-August 2005, November-December 2006 and October-November 2007). PCR amplification was used to detect genes encoding PVL. Antibiogram profile and PFGE were also used to characterize MRSA isolates (n = 601). Results: MRSA prevalence was 21%, 20% and 19% in each cohort, respectively. The majority of the isolates were related epidemiologically to the predominant local nosocomial epidemic MRSA strain, EMRSA-15 (78%). No isolate carried the genes encoding PVL. Twelve percent of the isolates (n = 74) had increased susceptibility to non-beta-lactam agents and were distributed across 31 care homes. Conclusions: MRSA strains that produced PVL were not found to be colonizing residents of care homes between 2005 and 2007. Continued surveillance is, however, necessary to understand the interaction between MRSA in care homes and hospitals, especially to reduce the chance that the former may amplify community-associated MRSA strains. abstract_id: PUBMED:35017376 Paediatric osteoarticular infections caused by staphylococcus aureus producing panton-valentine leucocidin in morocco: Risk factors and clinical features. Objective: We aimed to estimate the prevalence of Staphylococcus aureus producing Panton-Valentine leucocidin (PVL) isolated from children diagnosed with osteoarticular infections (OAIs), and to examine risk factors and clinical features. Methods: This prospective study was conducted from January 2017 to December 2018. All hospitalised children diagnosed with S. aureus OAI are included. Blood cultures, articular fluids, synovial tissues and/or bone fragments were collected for bacteriological culture. Antimicrobial susceptibility tests were determined by disk diffusion method. Genes encoding methicillin resistance (mecA) and PVL virulence factors (luk-S-PV and luk-F-PV) were detected by multiplex polymerase chain reaction. The demographic, clinical, laboratory, radiographic and clinical features were reviewed prospectively from medical records. Results: A total of 37 children with S. aureus OAIs were included, 46% of them have PVL-positive infection and 70.6% were male. The mean age was 8.12 years (±4.57), and almost were from rural settings (76.5%). Children with Staphylococcus aureus producing Panton-Valentine leucocidin (SA-PVL) were significantly associated with type of infection (P = 0.005), location of infection (P = 0.037) and abnormal X-ray (P = 0.029). 
All SA-PVL-positive strains were sensitive to methicillin, but one PVL-negative strain was methicillin-resistant S. aureus, confirmed by a positive mecA gene. Conclusion: The prevalence of S. aureus infections producing PVL toxin was high in OAIs amongst Moroccan children, mainly due to methicillin-susceptible S. aureus. Type and location of infection and abnormal X-ray findings were significantly associated with SA-PVL. Routine diagnostic testing for PVL-SA, continuous epidemiological surveillance and multidisciplinary management of OAI are essential to prevent serious complications. abstract_id: PUBMED:33225649 Frequency Of Panton Valentine Leucocidin Gene In Staphylococcus Aureus From Skin And Soft Tissue Infections. Background: Staphylococcus aureus strains harbouring the Panton Valentine Leucocidin gene are emerging and spreading worldwide. The PVL toxin was first described by Noel Panton and Francis Valentine in 1932, who explained its ability to lyse leucocytes and its close association with skin and soft tissue infections. In Pakistan only limited data are available on the frequency and molecular analysis of PVL-gene-positive Staph aureus. Therefore, this study was conducted to understand the clinical epidemiology of PVL-positive Staph aureus in our setup. The objective of the study was to determine the frequency of the PVL gene in Staph aureus obtained from pus samples from skin and soft tissue infections in the indoor and outdoor departments of a tertiary care hospital in Lahore. Methods: 384 Staph aureus isolates from skin and soft tissue infections were selected from both indoor and outdoor departments of the hospital. After identification by phenotypic methods, they were processed by PCR using luk-F and luk-S primers for the detection of the PVL gene. Results: 186 out of 384 Staph aureus isolates were positive for the PVL gene, an overall frequency of 49%. The frequency of the PVL gene was 44.9% in males and 53.5% in females. The highest frequency of the PVL gene was detected in the paediatric age group. A large majority of positive isolates were from pus samples other than swabs and from the general surgery department. Most positive isolates came from the indoor setting, with an indoor to outdoor ratio of approximately 2:1. The frequencies of the PVL gene in MRSA and MSSA were 51% and 44%, respectively. The frequency of the PVL gene was high in ciprofloxacin-sensitive, gentamicin-sensitive, erythromycin-resistant and fusidic acid-resistant isolates. Conclusion: Almost half of the Staph aureus isolates were PVL-positive. They were mostly multidrug-resistant and came from the indoor setting. This situation is very alarming, so there is a need to adopt strict infection control policies in hospitals to limit the widespread and injudicious use of antibiotics. There is also a need to treat affected individuals for PVL-positive Staph aureus, which involves not only antibiotics but also decolonization of the affected individuals and their close contacts. abstract_id: PUBMED:36660999 Prevalence of Panton Valentine Leucocidin gene containing Staphylococcus aureus in pus samples from Paediatric patients. Staphylococcus (Staph) aureus strains containing the Panton Valentine Leucocidin (PVL) gene are spreading worldwide. This gene encodes the PVL toxin, which has a lytic effect on white blood cells and thereby weakens the host's immune defences. It also causes pus formation at various sites of the body. This study was conducted to understand the effect of PVL-positive Staph aureus in causing purulent infections in children between the ages of one day and 15 years.
Pus samples were taken from various body sites of children between the ages of one day and 15 years. The number of pus samples containing Staph aureus was 45. These were collected over a period of one year, from October 2, 2017 to September 30, 2018, at the Shaikh Zayed Hospital, Lahore. A total of 27 (60%) of these Staph aureus isolates were PVL-positive. The prevalence of the PVL gene was notably high in MSSA (9; 64%), wound swabs (18; 75%), isolates from the orthopaedic department (6; 75%), indoor patients (21; 63%), and males (18; 66%). Our study showed that most of the Staph aureus isolates obtained from pus samples from children carried the PVL gene in their genome. This percentage is very high. To control its spread, we need to treat not only the patients but also their close contacts. The main objective of this study was to assess the prevalence of PVL-positive Staph aureus strains in our local setup. The paediatric age group was selected because it is the most vulnerable group, and pus samples were chosen because this strain causes recurrent purulent infections. abstract_id: PUBMED:37678435 Whole-genome sequencing links cases dispersed in time, place, and person while supporting healthcare worker management in an outbreak of Panton-Valentine leucocidin meticillin-resistant Staphylococcus aureus; and a review of literature. This is a report on an outbreak of Panton-Valentine leucocidin-producing meticillin-resistant Staphylococcus aureus (PVL-MRSA) in an intensive care unit (ICU) during the COVID-19 pandemic that affected seven patients and a member of staff. Six patients were infected over a period of ten months on the ICU by the same strain of PVL-MRSA, and a historic case was identified outside of the ICU. All cases were linked to a healthcare worker (HCW) who was colonized with the organism. Failed topical decolonization therapy, without systemic antibiotic therapy, resulted in ongoing transmission and one preventable acquisition of PVL-MRSA. The outbreak identifies the support that may be needed for HCWs implicated in outbreaks. It also demonstrates the role of whole-genome sequencing in identifying dispersed and historic cases related to the outbreak, which in turn aids decision-making in outbreak management and HCW support. This report also includes a review of literature of PVL-MRSA-associated outbreaks in healthcare and highlights the need for review of current national guidance in the management of HCWs' decolonization regimen and return-to-work recommendations in such outbreaks.
The most appropriate treatment seems to be the combination of early surgical drainage of infected collections with an antibiotic regimen associating two antibiotics; beta-lactams and either clindamycin or linezolid. Human immunoglobulins also appear to be useful as adjunctive therapy. Conclusion: PVL-producing Staphylococcus aureus is associated with life-threatening infections in children. Prompt management is needed including surgery and appropriate antibiotic regimens. abstract_id: PUBMED:30333405 Rapidly Progressive Multiple Cavity Formation in Necrotizing Pneumonia Caused by Community-acquired Methicillin-resistant Staphylococcus aureus Positive for the Panton-Valentine Leucocidin Gene. A 66-year-old man was transferred to our hospital for pneumonia that was resistant to sulbactam/ampicillin and levofloxacin therapy. Chest computed tomography showed the rapidly progressive formation of multiple cavities. Methicillin-resistant Staphylococcus aureus (MRSA) was isolated, and the patient was diagnosed with necrotizing pneumonia caused by community-acquired MRSA (CA-MRSA). The MRSA strain had type IV staphylococcus cassette chromosome mec and genes encoding Panton-Valentine leucocidin (PVL). CA-MRSA necrotizing pneumonia with the PVL gene is rare; only three cases have been previously reported in Japan. We administered anti-MRSA antibiotics and the patient achieved complete clinical and radiological improvement. abstract_id: PUBMED:38117559 Genomic characterization of a unique Panton-Valentine leucocidin-positive community-associated methicillin-resistant Staphylococcus aureus lineage increasingly impacting on Australian indigenous communities. In 2010 a single isolate of a trimethoprim-resistant multilocus sequence type 5, Panton-Valentine leucocidin-positive, community-associated methicillin-resistant Staphylococcus aureus (PVL-positive ST5 CA-MRSA), colloquially named WA121, was identified in northern Western Australia (WA). WA121 now accounts for ~14 % of all WA MRSA infections. To gain an understanding of the genetic composition and phylogenomic structure of WA121 isolates we sequenced the genomes of 155 WA121 isolates collected 2010-2021 and present a detailed genomic description. WA121 was revealed to be a single clonally expanding lineage clearly distinct from sequenced ST5 strains reported outside Australia. WA121 strains were typified by the presence of the distinct PVL phage φSa2wa-st5, the recently described methicillin resistance element SCCmecIVo carrying the trimethoprim resistance (dfrG) transposon Tn4791, the novel β-lactamase transposon Tn7702 and the epidermal cell differentiation inhibitor (EDIN-A) plasmid p2010-15611-2. We present evidence that SCCmecIVo together with Tn4791 has horizontally transferred to Staphylococcus argenteus and evidence of intragenomic movement of both Tn4791 and Tn7702. We experimentally demonstrate that p2010-15611-2 is capable of horizontal transfer by conjugative mobilization from one of several WA121 isolates also harbouring a pWBG749-like conjugative plasmid. In summary, WA121 is a distinct and clonally expanding Australian PVL-positive CA-MRSA lineage that is increasingly responsible for infections in indigenous communities in northern and western Australia. WA121 harbours a unique complement of mobile genetic elements and is capable of transferring antimicrobial resistance and virulence determinants to other staphylococci. 
abstract_id: PUBMED:34184723 Panton-Valentine leukocidin toxin associated methicillin-susceptible Staphylococcus aureus infection: a report of two pediatric cases Staphylococcus aureus colonizes the nasopharynx in one third of healthy individuals and is also responsible for several infections in pediatrics such as endocarditis, pneumonia and osteoarticular infections. It has several virulence mechanisms, such as Panton Valentine leukocidin (PVL), which is an exotoxin that causes cell death. It is commonly related to methicillin-resistant Staphylococcus aureus (MRSA) and more serious pulmonary and musculoskeletal infections. However, PVL is not exclusive to MRSA. Two clinical cases of patients with infection by methicillin-sensitive Staphylococcus aureus producing this exotoxin are presented. abstract_id: PUBMED:26028260 Panton-Valentine leucocidin expression by Staphylococcus aureus exposed to common antibiotics. Objectives: We set out to investigate the impact of common antibiotics on Panton-Valentine Leucocidin (PVL) expression by methicillin-sensitive Staphylococcus aureus (MSSA). PVL expression by methicillin-resistant S. aureus (MRSA) is reportedly enhanced by β-lactams, but inhibited by protein-synthesis inhibitors, a fact that has influenced management of infections associated with PVL. Although PVL is more frequently associated with MSSA than MRSA in the UK, the effect of antibiotics on PVL expression by MSSA has not been fully addressed. Methods: MSSA was cultured in vitro with varying concentrations of flucloxacillin, clindamycin or linezolid and PVL expression measured by qRT-PCR and Western blotting. A murine MSSA abscess model was developed to measure leucocidin expression in vivo following antibiotic treatment. Results: 9% (27/314) of MSSA isolates from patients with uncomplicated community skin/soft tissue infections were positive for PVL genes (lukFS-PV). PVL expression by MSSA in broth was unaffected by varying concentrations of flucloxacillin, clindamycin or linezolid. In a murine abscess model, treatment with flucloxacillin did, however, enhance in vivo MSSA lukF-PV transcription and this was sustained even when flucloxacillin was combined with clindamycin, or clindamycin plus linezolid. Notwithstanding increased leucocidin transcription, functional leucotoxin activity was not enhanced. Treatment with flucloxacillin plus clindamycin significantly decreased leucotoxin activity, but the addition of a second protein synthesis inhibitor, linezolid, did not confer benefit. Conclusions: Our results suggest flucloxacillin in combination with a single protein-synthesis inhibitor such as clindamycin would give the best treatment outcome. Answer: No, methicillin-resistant Staphylococcus aureus (MRSA) strains that produce Panton-Valentine leucocidin (PVL) were not found to be colonizing residents of care homes between 2005 and 2007 according to the study conducted in urban care homes in Leeds, UK. The study involved screening nasal swabs collected from consenting residents of 69 care homes over three periods and found that while MRSA prevalence was around 19-21%, none of the isolates carried the genes encoding PVL (PUBMED:18755697).
Instruction: Ecological Study of HIV Infection and Hypertension in Sub-Saharan Africa: Is There a Double Burden of Disease? Abstracts: abstract_id: PUBMED:33803352 Pandemics and Burden of Stroke and Epilepsy in Sub-Saharan Africa: Experience from a Longstanding Health Programme. Eighty percent of people with stroke live in low- to middle-income nations, particularly in sub-Saharan Africa (SSA) where stroke has increased by more than 100% in the last decades. More than one-third of all epilepsy-related deaths occur in SSA. HIV infection is a risk factor for neurological disorders, including stroke and epilepsy. The vast majority of the 38 million people living with HIV/AIDS are in SSA, and the burden of neurological disorders in SSA parallels that of HIV/AIDS. Local healthcare systems are weak. Many standalone HIV health centres have become a platform with combined treatment for both HIV and noncommunicable diseases (NCDs), as advised by the United Nations. The COVID-19 pandemic is overwhelming the fragile health systems in SSA, and it is feared it will provoke an upsurge of excess deaths due to the disruption of care for chronic diseases such as HIV, TB, hypertension, diabetes, and cerebrovascular disorders. Disease Relief through Excellent and Advanced Means (DREAM) is a health programme active since 2002 to prevent and treat HIV/AIDS and related disorders in 10 SSA countries. DREAM is scaling up management of NCDs, including neurologic disorders such as stroke and epilepsy. We described challenges and solutions to address disruption and excess deaths from these diseases during the ongoing COVID-19 pandemic. abstract_id: PUBMED:23597299 Heart failure in sub-Saharan Africa. The heart failure syndrome has been recognized as a significant contributor to cardiovascular disease burden in sub-Saharan African for many decades. Seminal knowledge regarding heart failure in the region came from case reports and case series of the early 20th century which identified infectious, nutritional and idiopathic causes as the most common. With increasing urbanization, changes in lifestyle habits, and ageing of the population, the spectrum of causes of HF has also expanded resulting in a significant burden of both communicable and non-communicable etiologies. Heart failure in sub-Saharan Africa is notable for the range of etiologies that concurrently exist as well as the healthcare environment marked by limited resources, weak national healthcare systems and a paucity of national level data on disease trends. With the recent publication of the first and largest multinational prospective registry of acute heart failure in sub-Saharan Africa, it is timely to review the state of knowledge to date and describe the myriad forms of heart failure in the region. This review discusses several forms of heart failure that are common in sub-Saharan Africa (e.g., rheumatic heart disease, hypertensive heart disease, pericardial disease, various dilated cardiomyopathies, HIV cardiomyopathy, hypertrophic cardiomyopathy, endomyocardial fibrosis, ischemic heart disease, cor pulmonale) and presents each form with regard to epidemiology, natural history, clinical characteristics, diagnostic considerations and therapies. Areas and approaches to fill the remaining gaps in knowledge are also offered herein highlighting the need for research that is driven by regional disease burden and needs. 
abstract_id: PUBMED:34781929 Integrating diabetes, hypertension and HIV care in sub-Saharan Africa: a Delphi consensus study on international best practice. Background: Although HIV continues to have a high prevalence among adults in sub-Saharan Africa (SSA), the burden of noncommunicable diseases (NCD) such as diabetes and hypertension is increasing rapidly. There is an urgent need to expand the capacity of healthcare systems in SSA to provide NCD services and scale up existing chronic care management pathways. The aim of this study was to identify key components, outcomes, and best practice in integrated service provision for the prevention, identification and treatment of HIV, hypertension and diabetes. Methods: An international, multi-stakeholder e-Delphi consensus study was conducted over two successive rounds. In Round 1, 24 participants were asked to score 27 statements, under the headings 'Service Provision' and 'Benefits of Integration', by importance. In Round 2, the 16 participants who completed Round 1 were shown the distribution of the other participants' scores alongside the score they had given each outcome, and were asked to reflect on their score and rescore if they wished. Nine participants completed Round 2. Results: Based on the Round 1 ranking, 19 of the 27 outcomes met the 70% threshold for consensus. Four additional outcomes suggested by participants in Round 1 were added to Round 2, and upon review by participants, 22 of the 31 outcomes met the consensus threshold. The five items participants scored from 7 to 9 in both rounds as essential for effective integrated delivery of health services for chronic conditions were improved data collection and surveillance of NCDs among people living with HIV to inform integrated NCD/HIV programme management, strengthened drug procurement systems, availability of equipment and access to relevant blood tests, health education for all chronic conditions, and enhanced continuity of care for patients with multimorbidity. Conclusions: This study highlights the outcomes which may form key components of future complex interventions to define a model of integrated healthcare delivery for diabetes, hypertension and HIV in sub-Saharan Africa. abstract_id: PUBMED:31304842 Classical Cardiovascular Risk Factors and HIV are Associated With Carotid Intima-Media Thickness in Adults From Sub-Saharan Africa: Findings From H3Africa AWI-Gen Study. Background Studies on the determinants of carotid intima-media thickness (CIMT), a marker of sub-clinical atherosclerosis, mostly come from white, Asian, and diasporan black populations. We present CIMT data from sub-Saharan Africa, which is experiencing a rising burden of cardiovascular diseases and infectious diseases. Methods and Results The H3 (Human Hereditary and Health) in Africa's AWI-Gen (African-Wits-INDEPTH partnership for Genomic) study is a cross-sectional study conducted in adults aged 40 to 60 years from Burkina Faso, Kenya, Ghana, and South Africa. Cardiovascular disease risk was assessed, and the CIMT of the right and left common carotid arteries was measured by ultrasonography. Multivariable linear and mixed-effect multilevel regression modeling was applied to determine factors related to CIMT. Data included 8872 adults (50.8% men), mean age of 50±6 years, with age- and sex-adjusted mean (±SE) CIMT of 640±123 μm. Participants from Ghana and Burkina Faso had higher CIMT compared with other sites.
Age (β = 6.77, 95% CI 6.34-7.19), body mass index (β = 17.6, 95% CI 12.5-22.8), systolic blood pressure (β = 7.52, 95% CI 6.21-8.83), low-density lipoprotein cholesterol (β = 5.08, 95% CI 2.10-8.06) and male sex (β = 10.3, 95% CI 4.75-15.9) were associated with higher CIMT. Smoking was associated with higher CIMT in men. High-density lipoprotein cholesterol (β = -12.2, 95% CI -17.9 to -6.41), alcohol consumption (β = -13.5, 95% CI -19.1 to -7.91) and HIV (β = -8.86, 95% CI -15.7 to -2.03) were inversely associated with CIMT. Conclusions Given the rising prevalence of cardiovascular disease risk factors in sub-Saharan Africa, atherosclerotic diseases may become a major pan-African epidemic unless preventive measures are taken, particularly for the prevention of hypertension, obesity, and smoking. HIV-specific studies are needed to fully understand the association between HIV and CIMT in sub-Saharan Africa. abstract_id: PUBMED:38300475 Does Engagement in HIV Care Affect Screening, Diagnosis, and Control of Noncommunicable Diseases in Sub-Saharan Africa? A Systematic Review and Meta-analysis. Low- and middle-income countries are facing a growing burden of noncommunicable diseases (NCDs). Providing HIV treatment may offer opportunities to increase access to NCD services in under-resourced environments. We conducted a systematic review and meta-analysis to evaluate whether use of antiretroviral therapy (ART) was associated with increased screening, diagnosis, treatment, and control of diabetes, hypertension, chronic kidney disease, or cardiovascular disease among people living with HIV in sub-Saharan Africa (SSA). A comprehensive search of electronic literature databases for studies published between 01 January 2011 and 31 December 2022 yielded 26 studies, describing 13,570 PLWH in SSA, 61% of whom were receiving ART. Random effects models were used to calculate summary odds ratios (ORs) of the risk of diagnosis by ART status and corresponding 95% confidence intervals (95% CIs), where appropriate. ART use was associated with a small but imprecise increase in the odds of diabetes diagnosis (OR 1.07; 95% CI 0.71, 1.60) and an increase in the odds of hypertension diagnosis (OR 2.10, 95% CI 1.42, 3.09). We found minimal data on the association between ART use and screening, treatment, or control of NCDs. Despite a potentially higher NCD risk among PLWH and regional efforts to integrate NCD and HIV care, evidence to support effective care integration models is lacking. abstract_id: PUBMED:24415610 Association of HIV and ART with cardiometabolic traits in sub-Saharan Africa: a systematic review and meta-analysis. Background: Sub-Saharan Africa (SSA) has the highest burden of HIV in the world and a rising prevalence of cardiometabolic disease; however, the interrelationship between HIV, antiretroviral therapy (ART) and cardiometabolic traits is not well described in SSA populations. Methods: We conducted a systematic review and meta-analysis through MEDLINE and EMBASE (up to January 2012), as well as direct author contact. Eligible studies provided summary or individual-level data on one or more of the following traits in HIV+ and HIV-, or ART+ and ART- subgroups in SSA: body mass index (BMI), systolic blood pressure (SBP), diastolic blood pressure (DBP), high-density lipoprotein (HDL), low-density lipoprotein (LDL), triglycerides (TGs) and fasting blood glucose (FBG) or glycated hemoglobin (HbA1c). Information was synthesized under a random-effects model and the primary outcomes were the standardized mean differences (SMD) of the specified traits between subgroups of participants.
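As an aside, the random-effects synthesis described in these Methods can be illustrated with a minimal Python sketch of DerSimonian-Laird pooling of standardized mean differences. The per-study SMDs and standard errors below are invented for illustration only and are not taken from any of the cited reviews, which may have used different software or estimators.

import numpy as np

# Hypothetical per-study standardized mean differences and their standard errors
smd = np.array([0.30, 0.18, 0.41, 0.22])
se = np.array([0.10, 0.12, 0.15, 0.09])

# Inverse-variance (fixed-effect) weights and pooled estimate
w = 1.0 / se**2
pooled_fixed = np.sum(w * smd) / np.sum(w)

# Cochran's Q and the DerSimonian-Laird estimate of between-study variance tau^2
q = np.sum(w * (smd - pooled_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(smd) - 1)) / c)

# Random-effects weights incorporate tau^2; pooled SMD and its 95% CI follow
w_re = 1.0 / (se**2 + tau2)
pooled_re = np.sum(w_re * smd) / np.sum(w_re)
se_re = (1.0 / np.sum(w_re)) ** 0.5
print(f"Pooled SMD = {pooled_re:.2f} (95% CI {pooled_re - 1.96*se_re:.2f} to {pooled_re + 1.96*se_re:.2f})")

The same machinery applies to pooling log odds ratios, as in the ART-and-diagnosis meta-analysis above, with log(OR) and its standard error taking the place of the SMD.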
Results: Data were obtained from 49 published and 3 unpublished studies which reported on 29,755 individuals. HIV infection was associated with higher TGs [SMD, 0.26; 95% confidence interval (CI), 0.08 to 0.44] and lower HDL (SMD, -0.59; 95% CI, -0.86 to -0.31), BMI (SMD, -0.32; 95% CI, -0.45 to -0.18), SBP (SMD, -0.40; 95% CI, -0.55 to -0.25) and DBP (SMD, -0.34; 95% CI, -0.51 to -0.17). Among HIV+ individuals, ART use was associated with higher LDL (SMD, 0.43; 95% CI, 0.14 to 0.72) and HDL (SMD, 0.39; 95% CI, 0.11 to 0.66), and lower HbA1c (SMD, -0.34; 95% CI, -0.62 to -0.06). Fully adjusted estimates from analyses of individual participant data were consistent with meta-analysis of summary estimates for most traits. Conclusions: Broadly consistent with results from populations of European descent, these results suggest differences in cardiometabolic traits between HIV-infected and uninfected individuals in SSA, which might be modified by ART use. In a region with the highest burden of HIV, it will be important to clarify these findings to reliably assess the need for monitoring and managing cardiometabolic risk in HIV-infected populations in SSA. abstract_id: PUBMED:35477067 An Analysis of Stroke Risk Factors by HIV Serostatus in Uganda: Implications for Stroke Prevention in Sub-Saharan Africa. Objective: HIV infection is an important stroke risk factor in sub-Saharan Africa. However, data on stroke risk factors in the era of antiretroviral therapy (ART) are sparse. We aimed to determine if stroke risk factors differed by HIV serostatus in Uganda. Methods: We conducted a matched cohort study, enrolling persons living with HIV (PWH) with acute stroke, matched by sex and stroke type to HIV uninfected (HIV-) individuals. We collected data on stroke risk factors and fitted logistic regression models for analysis. Results: We enrolled 262 participants: 105 PWH and 157 HIV-. The median ART duration was 5 years, and the median CD4 cell count was 214 cells/μL. PWH with ischemic stroke had higher odds of hypertriglyceridemia (AOR 1.63; 95% CI 1.04, 2.55, p=0.03), alcohol consumption (AOR 2.84; 95% CI 1.32, 6.14, p=0.008), and depression (AOR 5.64; 95% CI 1.32, 24.02, p=0.02) while HIV- persons with ischemic stroke were more likely to be > 55 years of age (AOR 0.43; 95% CI 0.20-0.95, p=0.037), have an irregular heart rhythm (AOR 0.31; 95% CI 0.10-0.98, p=0.047) and report low fruit consumption (AOR 0.39; 95% CI 0.18-0.83, p=0.014). Among all participants with hemorrhagic stroke (n=78) we found no differences in the prevalence of risk factors between PWH and HIV-. Conclusions: PWH with ischemic stroke in Uganda present at a younger age, and with a combination of traditional and psychosocial risk factors. By contrast, HIV- persons more commonly present with arrhythmia. A differential approach to stroke prevention might be needed in these populations. abstract_id: PUBMED:31011634 Evaluation of Vascular Event Risk while on Long-term Anti-retroviral Suppressive Therapy [EVERLAST]: Protocol for a prospective observational study. Background & Objective: Cardiovascular disease (CVD) risk among the HIV population is high due to a combination of accelerated atherosclerosis from the pro-inflammatory milieu created by chronic HIV infection and the potentially adverse metabolic side effects from cART (combination antiretroviral therapy) medications.
Although sub-Saharan Africa (SSA) bears 70% of the global burden of HIV disease, there is a relative paucity of studies comprehensively assessing CVD risk among people living with HIV on the continent. The overarching objective of the Evaluation of Vascular Event Risk while on Long-term Anti-retroviral Suppressive Therapy (EVERLAST) Study is to characterize the burden of CVD among HIV patients on ART in Ghana, and explore factors influencing it. Methods: The EVERLAST study incorporates prospective CVD risk assessments and a convergent mixed methods approach. This prospective study will evaluate CVD risk by measuring Carotid Intimal Media Thickness (CIMT) and the presence of traditional medical and lifestyle vascular risk factors among 240 Ghanaian HIV patients on antiretroviral therapy compared with age- and sex-matched HIV-uninfected (n = 240) and HIV-positive, ART-naïve controls (n = 240). A contextual qualitative analysis will also be conducted to determine attitudes/perceptions of various key local stakeholders about CVD risk among HIV patients. The primary outcome measure will be CIMT measured cross-sectionally and prospectively among the three groups. A host of secondary outcome variables including CVD risk factors, CVD risk equations, HIV-associated neurocognitive dysfunction and psychological well-being will also be assessed. Conclusion: EVERLAST will provide crucial insights into the unique contributions of ART exposure and environmental factors such as lifestyle, traditional beliefs, and socio-economic indicators to CVD risk among HIV patients in a resource-limited setting. Ultimately, findings from our study will be utilized to develop interventions that will be tested in a randomized controlled trial to provide evidence to guide CVD risk management in SSA. abstract_id: PUBMED:31060545 Hypertension control in integrated HIV and chronic disease clinics in Uganda in the SEARCH study. Background: There is an increasing burden of hypertension (HTN) across sub-Saharan Africa where HIV prevalence is the highest in the world, but current care models are inadequate to address the dual epidemics. HIV treatment infrastructure could be leveraged for the care of other chronic diseases, including HTN. However, little data exist on the effectiveness of integrated HIV and chronic disease care delivery systems on blood pressure control over time. Methods: Population screening for HIV and HTN, among other diseases, was conducted in ten communities in rural Uganda as part of the SEARCH study (NCT01864603). Individuals with either HIV, HTN, or both were referred to an integrated chronic disease clinic. Based on Uganda treatment guidelines, follow-up visits were scheduled every 4 weeks when blood pressure was uncontrolled, and either every 3 months, or in the case of drug stock-outs more frequently, when blood pressure was controlled. We describe demographic and clinical variables among all patients and used multilevel mixed-effects logistic regression to evaluate predictors of HTN control. Results: Following population screening (2013-2014) of 34,704 adults aged ≥ 18 years, 4554 individuals with HTN alone or both HIV and HTN were referred to an integrated chronic disease clinic. Within 1 year, 2038 participants with HTN linked to care and contributed 15,653 follow-up visits over 3 years. HTN was controlled at 15% of baseline visits and at 46% (95% CI: 44-48%) of post-baseline follow-up visits.
A scheduled visit interval more frequent than clinically indicated among patients with controlled HTN was associated with lower HTN control at the subsequent visit (aOR = 0.89; 95% CI 0.79-0.99). Hypertension control at follow-up visits was higher among HIV-infected patients than among uninfected patients (48% vs 46%; aOR 1.28; 95% CI 0.95-1.71). Conclusions: Improved HTN control was achieved in an integrated HIV and chronic care model. Similar to HIV care, visit frequency determined by drug supply chain rather than clinical indication is associated with worse HTN control. Trial Registration: The SEARCH Trial was prospectively registered with ClinicalTrials.gov: NCT01864603. abstract_id: PUBMED:31353831 HIV, antiretroviral therapy and non-communicable diseases in sub-Saharan Africa: empirical evidence from 44 countries over the period 2000 to 2016. Introduction: The HIV-infected population is growing due to the increased accessibility of antiretroviral therapy (ART), which extends the lifespan of people living with HIV (PLHIV). We aimed to assess whether national HIV prevalence and ART use are associated with an increased prevalence of cardiovascular risk factors. Methods: Using country-level data, we analysed the effect of HIV prevalence and use of ART on cardiovascular risk factors in 44 countries in sub-Saharan Africa between 2000 and 2016. We used fixed-effects estimation to quantify the effect of HIV and ART on the prevalence of diabetes, mean body mass index, the prevalence of overweight, obesity and hypertension, and mean systolic blood pressure. The models were adjusted for calendar time, the age structure of the population, income and education. Results: Diabetes prevalence among PLHIV was 5.8 percentage points higher (95% confidence interval (CI) 1.8 pp to 9.8 pp) compared to individuals without HIV. People receiving ART had a 4.6 percentage point higher prevalence (95% CI 2.6 pp to 6.6 pp). The prevalence of obesity was increased by 14.7 percentage points (95% CI 2.5 pp to 26.9 pp) for PLHIV. Receiving ART was associated with a 14.0 percentage-point increase in obesity prevalence (95% CI 4.8 pp to 23.2 pp), whereas it had no significant association with the prevalence of overweight. The population aged 40 to 59 had a significantly higher prevalence of diabetes, overweight and obesity. HIV prevalence and ART use had no significant association with the prevalence of hypertension. Conclusions: An ageing HIV-infected population on ART is associated with a significant increase in the prevalence of diabetes and obesity in sub-Saharan Africa. The increasing prevalence of these cardiovascular risk factors emphasizes the need for comprehensive healthcare programmes that screen and treat both HIV and non-communicable diseases to decrease the associated morbidity and mortality rates. Answer: The available abstracts suggest that there is indeed a double burden of disease in Sub-Saharan Africa (SSA) related to HIV infection and hypertension, among other noncommunicable diseases (NCDs). The region is experiencing a significant burden of both communicable and non-communicable etiologies of heart failure, with hypertension being a notable contributor (PUBMED:23597299). The burden of NCDs such as diabetes and hypertension is rapidly increasing in SSA, where HIV prevalence remains high (PUBMED:34781929).
This is further complicated by the fact that the HIV-infected population is growing due to the increased accessibility of antiretroviral therapy (ART), which extends the lifespan of people living with HIV (PLHIV), potentially leading to an increased prevalence of cardiovascular risk factors (PUBMED:31353831). The integration of HIV and chronic disease care, including hypertension, is being explored as a means to improve health outcomes. For instance, the SEARCH study in Uganda demonstrated improved hypertension control in an integrated HIV and chronic care model (PUBMED:31060545). However, despite regional efforts to integrate NCD and HIV care, evidence to support effective care integration models is lacking (PUBMED:38300475). HIV infection and ART are associated with cardiometabolic traits that could contribute to the burden of hypertension and other cardiovascular diseases. HIV infection is linked to higher triglycerides and lower HDL cholesterol, BMI, systolic blood pressure, and diastolic blood pressure, while ART use is associated with higher LDL and HDL cholesterol, and lower HbA1c (PUBMED:24415610). Additionally, HIV is an important stroke risk factor in SSA, and data from Uganda suggest that people living with HIV (PWH) with ischemic stroke present with a combination of traditional and psychosocial risk factors, including hypertension (PUBMED:35477067). In summary, the evidence from the provided abstracts indicates that SSA is facing a double burden of disease with both HIV infection and hypertension, along with other NCDs. This complex health challenge requires integrated healthcare delivery models that address both communicable and non-communicable diseases to improve patient outcomes and reduce the overall disease burden in the region.