Instruction: Can PET provide the 3D extent of tumor motion for individualized internal target volumes?

Abstracts:

abstract_id: PUBMED:12654451 Can PET provide the 3D extent of tumor motion for individualized internal target volumes? A phantom study of the limitations of CT and the promise of PET. Purpose: To characterize the limitations of fast, spiral computed tomography (CT) when imaging a moving object and to investigate whether positron emission tomography (PET) can predict the internal target volume (ITV) and ultimately improve the planning target volume (PTV) for moving tumors. Methods And Materials: To mimic tumors, three fillable spheres were imaged both while stationary and during periodic motion using spiral CT and PET. CT- and PET-imaged volumes were defined quantitatively using voxel values. Ideal PTVs for each scenario were calculated. CT-based PTVs were generated using margins of 7.5, 10, and 15 mm to account for both organ motion and setup uncertainties. PET-based PTVs were derived under the assumption that motion was captured in the PET images and that only a margin (7.5 mm) for setup errors was necessary. CT-based and PET-based PTVs were compared with the ideal PTVs. Results: CT imaging of moving spheres resulted in significant distortions of the three-dimensional (3D) image-based representations and did not, in general, produce images well representative of either the moving or the stationary spheres. PET images were similar to the ideal capsular shape encompassing the sphere and its motion. In all cases, CT-imaged volumes were larger than that of the stationary sphere (excess volume ranging from 0.4 to 29 cm³ for stationary volumes of 2.14 to 172 cm³) but smaller than the true motion volume. PET-imaged volumes were larger than the true motion volume (differences from ideal ranging from 3 to 94 cm³ for motion volumes of 1.2 to 243 cm³) and much larger than the stationary volume. Using CT data, geographic miss of some part of the ideal PTV occurred in 0 of 24, 11 of 24, and 18 of 24 cases using a 15-mm, 10-mm, and 7.5-mm margin, respectively. Geographic miss did not occur in any case for the PET-based PTV. The amount of "normal tissue" included in CT-based PTVs was dramatically greater than that included in PET-based PTVs. Conclusion: Fast CT imaging of a moving tumor can result in poor representation of the time-averaged position and shape of the tumor. PET imaging can provide a more accurate representation of the 3D volume encompassing the motion of model tumors and has the potential to provide patient-specific motion volumes for an individualized ITV.

abstract_id: PUBMED:24044792 Motion-specific internal target volumes for FDG-avid mediastinal and hilar lymph nodes. Background And Purpose: To quantify the benefit of motion-specific internal target volumes for FDG-avid mediastinal and hilar lymph nodes generated using 4D-PET versus conventional internal target volumes generated using non-respiratory-gated PET and 4D-CT scans. Materials And Methods: Five patients with FDG-avid tumors metastatic to 11 hilar or mediastinal lymph nodes were imaged with respiratory-correlated FDG-PET (4D-PET) and 4D-CT. FDG-avid nodes were contoured by a radiation oncologist in two ways. Standard-of-care volumes were contoured using conventional un-gated PET, 4D-CT, and breath-hold CT. A second, motion-specific set of volumes was contoured using 4D-PET. Contours based on 4D-PET corresponded directly to an internal target volume (ITV4D), whereas contours based on un-gated PET were expanded by a series of exploratory isotropic margins (from 5 to 13 mm), based on literature recommendations on lymph node motion, to form internal target volumes (ITV3D). Results: A 13 mm expansion of the un-gated PET nodal volume was needed to cover the ITV4D for 10 of the 11 nodes studied. The ITV3D based on a 13 mm expansion included on average 45 cm³ of tissue that was not included in the ITV4D. Conclusions: Motion-specific lymph-node internal target volumes generated from 4D-PET imaging could be used to improve accuracy and/or reduce normal-tissue irradiation compared to the standard-of-care un-gated PET-based internal target volumes.
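
The isotropic expansions described above amount to a morphological dilation of a voxel mask. As a rough illustration, here is a minimal Python sketch of such an expansion, assuming a boolean voxel mask and isotropic voxel spacing; the function names, the 2 mm voxel size, and the excess-volume comparison in the comments are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy import ndimage

def expand_isotropic(mask: np.ndarray, margin_mm: float, voxel_mm: float) -> np.ndarray:
    """Dilate a boolean 3D mask by a spherical structuring element of radius margin_mm."""
    r = int(round(margin_mm / voxel_mm))            # margin in voxels
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = zz**2 + yy**2 + xx**2 <= r**2            # spherical kernel
    return ndimage.binary_dilation(mask, structure=ball)

# Hypothetical usage: a 13 mm expansion of an un-gated PET volume, and the
# excess tissue it includes relative to a 4D-PET ITV (2 mm voxels assumed).
# itv3d = expand_isotropic(ungated_pet_mask, margin_mm=13.0, voxel_mm=2.0)
# excess_ml = (itv3d & ~itv4d_mask).sum() * (2.0**3) / 1000.0   # mm³ -> mL
```

Non-uniform (anisotropic) margins, as studied in one of the phantom abstracts below, would simply replace the spherical structuring element with an ellipsoidal one.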

abstract_id: PUBMED:23139661 Image-Guided Radiation Therapy for Muscle-Invasive Carcinoma of the Urinary Bladder with Cone Beam CT Scan: Use of Individualized Internal Target Volumes for a Single Patient. Introduction: While planning radiation therapy (RT) for a carcinoma of the urinary bladder (CaUB), the intra-fractional variation of the urinary bladder (UB) volume due to filling needs to be accounted for. This internal target volume (ITV) is obtained by adding internal margins (IM) to the contoured bladder. This study was planned to propose a method of acquiring individualized ITVs for each patient and to verify their reproducibility. Methods: One patient with CaUB underwent simulation with the proposed 'bladder protocol'. After immobilization, a planning CT scan on empty bladder was done. He was then given 300 ml of water to drink and the time (T) was noted. Planning CT scans were performed after 20 min (T+20), 30 min (T+30) and 40 min (T+40). The CT scan at T+20 was co-registered with the T+30 and T+40 scans. The bladder volumes at 20, 30 and 40 min were then contoured as CTV20, CTV30 and CTV40 to obtain an individualized ITV for our patient. For daily treatment, he was instructed to drink water as above, and the time was noted; treatment was started after 20 min. Daily pre- and post-treatment cone beam CT (CBCT) scans were done. The bladder visualized on the pre-treatment CBCT scan was compared with CTV20, and that on the post-treatment CBCT scan with CTV30. Results: In total, there were 65 CBCT scans (36 pre- and 29 post-treatment). Individualized ITVs were found to be reproducible in 93.85% of all instances; the bladder fell outside the ITV in 4 instances. Conclusions: The proposed bladder protocol can yield a reproducible estimation of the ITV during treatment; this can obviate the need for standard internal margins.

abstract_id: PUBMED:24011671 Preoperative radiotherapy for rectal cancer: target volumes. Preoperative radiochemotherapy followed by total mesorectal excision is the standard of care for T3-T4-N0 or TxN1 rectal cancer. Defining target volumes relies on the patterns of nodal and locoregional failures. The lower limit of the clinical target volume also depends on the type of surgery. Conformal radiotherapy, with or without intensity modulation, requires an accurate definition of volumes and of the inherent margins in the context of mobile organs such as the upper rectum. Tumor staging has recently improved with newer imaging techniques such as MRI with or without USPIO and FDG-PET-CT. The role of PET-CT remains unclear despite encouraging results, and MRI is a helpful tool for reliable delineation of the gross tumour volume. Co-registration of such modalities with the planning CT may particularly guide radiation oncologists through gross tumour volume delineation. Acute digestive toxicity can be reduced with intensity-modulated radiation therapy. Various guidelines and CT-based atlases of target volumes in rectal cancer give the radiation oncologist a sound basis for reproducible contours.

abstract_id: PUBMED:18562782 Evaluation of the combined effects of target size, respiratory motion and background activity on 3D and 4D PET/CT images. Gated (4D) PET/CT has the potential to greatly improve the accuracy of radiotherapy at treatment sites where internal organ motion is significant. However, the best methodology for applying 4D-PET/CT to target definition is not currently well established. With the goal of better understanding how to best apply 4D information to radiotherapy, initial studies were performed to investigate the effect of target size, respiratory motion and target-to-background activity concentration ratio (TBR) on 3D (ungated) and 4D PET images. Using a PET/CT scanner with 4D (gating) capability, a full 3D-PET scan corrected with a 3D attenuation map from a 3D-CT scan, and a respiratory-gated (4D) PET scan corrected with corresponding attenuation maps from 4D-CT, were performed by imaging spherical targets (0.5-26.5 mL) filled with ¹⁸F-FDG in a dynamic thorax phantom and a NEMA IEC body phantom at different TBRs (infinite, 8 and 4). To simulate respiratory motion, the phantoms were driven sinusoidally in the superior-inferior direction with amplitudes of 0, 1 and 2 cm and a period of 4.5 s. Recovery coefficients were determined on the PET images. In addition, gating methods using different numbers of gating bins (1-20 bins) were evaluated in terms of image noise and temporal resolution. For evaluation, the volume recovery coefficient, signal-to-noise ratio and contrast-to-noise ratio were calculated as a function of the number of gating bins. Moreover, the optimum thresholds giving accurate moving target volumes were obtained for 3D and 4D images. The partial volume effect and the signal loss in the 3D-PET images, due to the limited PET resolution and the respiratory motion respectively, were measured. The results show that signal loss depends on both the amplitude and the pattern of respiratory motion. However, 4D-PET successfully recovers most of the loss induced by the respiratory motion. The 5-bin gating method gives the best temporal resolution with acceptable image noise. The results based on the 4D scan protocols can be used to improve the accuracy of determining the gross tumor volume for tumors in the lung and abdomen.
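
To make the gating-bin trade-off above concrete, the following sketch shows one common way respiratory gating is implemented: samples (or list-mode events) are assigned to amplitude bins of the breathing signal, so that each bin sees only a narrow range of target positions. Equal-count quantile binning is an assumption here; the study does not specify its binning scheme, and all names are illustrative.

```python
import numpy as np

def amplitude_bins(signal: np.ndarray, n_bins: int = 5) -> np.ndarray:
    """Assign each breathing-signal sample an amplitude bin index in 0..n_bins-1.

    Quantile edges give equal-count bins. More bins means less residual motion
    per bin (better temporal resolution) but fewer counts per bin (more noise),
    which is the trade-off behind the 5-bin optimum reported above.
    """
    edges = np.quantile(signal, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.searchsorted(edges, signal, side="right") - 1
    return np.clip(idx, 0, n_bins - 1)
```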

abstract_id: PUBMED:25511904 Geographic miss of lung tumours due to respiratory motion: a comparison of 3D vs 4D PET/CT defined target volumes. Background: PET/CT scans acquired in the radiotherapy treatment position are typically performed without compensating for respiratory motion. The purpose of this study was to investigate geographic miss of lung tumours due to respiratory motion for target volumes defined on a standard 3D-PET/CT. Methods: 29 patients staged for pulmonary malignancy who completed both a 3D-PET/CT and a 4D-PET/CT were included. A 3D Gross Tumour Volume (GTV) was defined on the standard whole-body PET/CT scan. Subsequently, a 4D-GTV was defined on a 4D-PET/CT MIP. A 5 mm, 10 mm, 15 mm symmetrical and 15×10 mm asymmetrical Planning Target Volume (PTV) was created by expanding the 3D-GTV and 4D-GTVs. A 3D conformal plan was generated and calculated to cover the 3D-PTV. The 3D plan was transferred to the 4D-PTV and analysed for geographic miss. Three types of miss were measured. Type 1: any part of the 4D-GTV outside the 3D-PTV. Type 2: any part of the 4D-PTV outside the 3D-PTV. Type 3: any part of the 4D-PTV receiving less than 95% of the prescribed dose. Lesion motion was measured to examine the association between lesion motion and geographic miss. Results: When a standard 15 mm or asymmetrical PTV margin was used there was 1/29 (3%) Type 1 miss. This increased to 7/29 (24%) for the 10 mm margin and 23/29 (79%) for the 5 mm margin. All patients for all margins had a Type 2 geographic miss. There was a Type 3 miss in 25 of 29 cases in the 5, 10, and 15 mm PTV margin groups. The asymmetrical margin had one additional Type 3 miss. Pearson analysis showed a correlation (p < 0.01) between lesion motion and the severity of the different types of geographic miss. Conclusion: Without any form of motion suppression, the current standard of a 3D-PET/CT and a 15 mm PTV margin employed for lung lesions carries an increasing risk of significant geographic miss as tumour motion increases. Use of smaller asymmetric margins in the cranio-caudal direction does not compromise tumour coverage. Reducing PTV margins for volumes defined on 3D-PET/CT will greatly increase the chance and severity of a geometric miss due to respiratory motion. 4D imaging reduces the risk of geographic miss across the population of tumour sizes and magnitudes of motion investigated in the study.
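
The three miss types above are straightforward set operations on voxel masks. The sketch below expresses them that way, assuming boolean arrays on a common dose grid and a dose array normalized to percent of the prescription; all names and inputs are illustrative assumptions, not the study's actual analysis code.

```python
import numpy as np

def geographic_miss(gtv4d: np.ndarray, ptv3d: np.ndarray,
                    ptv4d: np.ndarray, dose_pct: np.ndarray):
    """Evaluate the three miss types for one plan (True = miss present)."""
    type1 = bool(np.any(gtv4d & ~ptv3d))             # 4D-GTV outside the 3D-PTV
    type2 = bool(np.any(ptv4d & ~ptv3d))             # 4D-PTV outside the 3D-PTV
    type3 = bool(np.any(ptv4d & (dose_pct < 95.0)))  # 4D-PTV underdosed (<95%)
    return type1, type2, type3
```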

abstract_id: PUBMED:28123302 A comparative study of target volumes based on 18F-FDG PET-CT and ten phases of 4DCT for primary thoracic squamous esophageal cancer. Purpose: To investigate the correlations between target volumes based on 18F-FDG PET/CT and four-dimensional CT (4DCT), in order to assess the feasibility of using PET to determine gross target volumes (GTV) accounting for tumor motion in primary thoracic esophageal cancer (EC). Methods: Thirty-three patients with EC sequentially underwent contrast-enhanced 3DCT, 4DCT, and 18F-FDG PET-CT thoracic simulation. The internal gross target volume IGTV10 was obtained by combining the GTVs from the ten phases of 4DCT. The GTVs based on PET/CT images were defined by setting different standardized uptake value (SUV) thresholds and by visual contouring. The volume ratio, conformity index (CI), and degree of inclusion (DI) between IGTV10 and GTVPET were compared. Results: The images from 20 patients were suitable for further analysis. The optimal volume ratios of 0.95±0.32, 1.06±0.50, and 1.07±0.49 were obtained at SUV2.5, SUV20%, and manual contouring, respectively. The mean CIs ranged from 0.33 to 0.54. The best CIs were at SUV2.0 (0.51±0.11), SUV2.5 (0.53±0.13), SUV20% (0.53±0.12), and manual contouring (0.54±0.14). The mean DIs of GTVPET in IGTV10 ranged from 0.60 to 0.90, and the mean DIs of IGTV10 in GTVPET ranged from 0.35 to 0.78. A negative correlation was found between the mean CI and the SUV threshold (P=0.000). Conclusion: None of the PET-based contours had both close spatial and volumetric approximation to the 4DCT IGTV10. Further evaluation and optimization of PET as a tool for target identification are required.
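
The threshold contours and overlap metrics in this abstract map directly onto simple array operations. Below is a hedged sketch: fixed-SUV and percent-of-maximum thresholding, plus conformity index (intersection over union) and degree of inclusion. Boolean voxel masks are assumed, and this is not the authors' actual analysis pipeline.

```python
import numpy as np

def gtv_fixed_suv(suv: np.ndarray, threshold: float = 2.5) -> np.ndarray:
    return suv >= threshold                     # e.g. the SUV2.5 contour

def gtv_percent_max(suv: np.ndarray, fraction: float = 0.20) -> np.ndarray:
    return suv >= fraction * suv.max()          # e.g. the SUV20% contour

def conformity_index(a: np.ndarray, b: np.ndarray) -> float:
    return (a & b).sum() / (a | b).sum()        # intersection over union

def degree_of_inclusion(a: np.ndarray, b: np.ndarray) -> float:
    return (a & b).sum() / a.sum()              # fraction of a contained in b
```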

abstract_id: PUBMED:25292484 Assessing margin expansions of internal target volumes in 3D and 4D PET: a phantom study. Background And Purpose: To quantify tumor volume coverage and excess normal-tissue coverage when margin expansions are applied to internal target volumes (ITVs) of mobile targets in the lung. Materials And Methods: FDG-PET list-mode data were acquired for four spheres ranging from 1 to 4 cm as they underwent 1D motion based on four patient breathing trajectories. Both ungated PET images and PET maximum intensity projections (PET-MIPs) were examined. Amplitude-based gating was performed on sequential list-mode files of varying signal-to-background ratios to generate PET-MIPs. ITVs were first post-processed using either a Gaussian filter or a custom two-step module, and then segmented by applying a gradient-based watershed algorithm. Uniform and non-uniform 1 mm margins were added to the segmented ITVs until complete target coverage was achieved. Results: PET-MIPs required smaller uniform margins (4.7 vs. 11.3 mm) than ungated PET, with correspondingly smaller over-coverage volumes (OCVs). Non-uniform margins consistently resulted in smaller OCVs than uniform margins. PET-MIPs and ungated PET had comparable OCVs with non-uniform margins, but PET-MIPs required smaller longitudinal margins (4.7 vs. 8.5 mm). Non-uniform margins were independent of sphere size. Conclusions: Gated PET-MIP images and non-uniform margins result in more accurate ITV delineation while reducing normal-tissue coverage.

abstract_id: PUBMED:25136514 4D PET/CT as a Strategy to Reduce Respiratory Motion Artifacts in FDG-PET/CT. The improved accuracy in tumor identification with FDG-PET has led to its increased utilization in target volume delineation for radiotherapy treatment planning in the treatment of lung cancer. However, PET/CT has constantly been influenced by respiratory motion-related image degradation, which is especially prominent for small lung tumors in the peri-diaphragmatic regions of the thorax. Here, we describe the current findings on respiratory motion-related image degradation in PET/CT, which may introduce uncertainties into target volume delineation for image-guided radiotherapy (IGRT) for lung cancer. Furthermore, we describe the evidence suggesting 4D PET/CT to be one strategy for minimizing the impact of respiratory motion-related image degradation on tumor target delineation for thoracic IGRT. This, in our opinion, warrants further investigation in future IGRT-based lung cancer trials.

abstract_id: PUBMED:31105880 4D-CT-based motion correction of PET images using 3D iterative deconvolution. Objectives: Positron emission tomography acquisition takes several minutes, yielding an image averaged over multiple breathing cycles. Therefore, in areas influenced by respiratory movement, PET-positive lesions appear larger, but less intense, than they actually are, resulting in false quantitative assessment. We developed a motion-correction algorithm based on 4D-CT without the need to adapt the PET acquisition. Methods: The algorithm is based on a full 3D iterative Richardson-Lucy deconvolution using a point-spread function constructed from the motion information obtained from the 4D-CT. In a motion phantom study (3 different hot spheres in background activity), the optimal parameters for the algorithm in terms of number of iterations and start image were estimated. Finally, the correction method was applied to 3 patient data sets. In the phantom and patient data sets, lesions were delineated and compared between motion-corrected and uncorrected images for activity uptake and volume. Results: Phantom studies showed the best results for motion correction after 6 or more deconvolution steps. In the phantom studies, lesion volume improved by up to 23% for the largest, 43% for the medium and 49% for the smallest sphere due to the correction algorithm. In the patient data, the correction resulted in a significant reduction of the tumor volume by up to 33.3% and an increase of the maximum and mean uptake of the lesion by up to 62.1% and 19.8%, respectively. Conclusion: In conclusion, the proposed motion correction method showed good results on phantom data and a promising reduction of detected lesion volume, with a consequent increase in activity uptake, in three patients with lung lesions.
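
As a rough illustration of the deconvolution approach above, the sketch below runs a Richardson-Lucy deconvolution with a motion-derived blur kernel, using scikit-image's implementation. Constructing the point-spread function from the 4D-CT trajectory is the core of the published method and is only stubbed here from a list of per-phase displacements; the kernel shape, voxel size, and all variable names are illustrative assumptions.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def motion_psf(displacements_mm, voxel_mm: float, shape=(9, 9, 9)) -> np.ndarray:
    """Accumulate per-phase target displacements (z, y, x in mm) into a blur kernel."""
    psf = np.zeros(shape)
    center = np.array(shape) // 2
    for d in displacements_mm:                  # one displacement per 4D-CT phase
        idx = center + np.round(np.asarray(d) / voxel_mm).astype(int)
        idx = np.clip(idx, 0, np.array(shape) - 1)
        psf[tuple(idx)] += 1.0
    return psf / psf.sum()

# Hypothetical usage (scikit-image >= 0.19); six iterations echoes the phantom
# finding above that six or more deconvolution steps gave the best correction.
# psf = motion_psf(phase_displacements, voxel_mm=2.0)
# pet_corrected = richardson_lucy(pet_image, psf, num_iter=6, clip=False)
```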

Answer: Yes, PET can provide the 3D extent of tumor motion for individualized internal target volumes (ITVs). A phantom study demonstrated that PET imaging could predict the ITV and ultimately improve the planning target volume (PTV) for moving tumors: the PET images were similar to the ideal capsular shape encompassing the sphere and its motion, providing a more accurate representation of the 3D volume encompassing the motion of model tumors. This has the potential to provide patient-specific motion volumes for an individualized ITV (PUBMED:12654451). Additionally, motion-specific lymph-node internal target volumes generated from 4D-PET imaging could be used to improve accuracy and/or reduce normal-tissue irradiation compared to the standard-of-care un-gated PET-based internal target volumes (PUBMED:24044792). Furthermore, the use of 4D-PET/CT has been suggested as a strategy to reduce respiratory motion artifacts in FDG-PET/CT, which may otherwise introduce uncertainties into target volume delineation for image-guided radiotherapy (IGRT) for lung cancer (PUBMED:25136514). Moreover, a comparative study of target volumes based on 18F-FDG PET-CT and ten phases of 4DCT for primary thoracic squamous esophageal cancer found that none of the PET-based contours had both close spatial and volumetric approximation to the 4DCT IGTV10, indicating that further evaluation and optimization of PET as a tool for target identification are required (PUBMED:28123302). In summary, PET, particularly 4D-PET, shows promise in providing the 3D extent of tumor motion for individualized ITVs, which can lead to more accurate radiation therapy planning and potentially better outcomes for patients with moving tumors.

Instruction: Is surgical treatment of deltoid ligament rupture necessary in ankle fractures?

Abstracts:

abstract_id: PUBMED:29798555 Progress of diagnosis and treatment of ankle fractures combined with acute deltoid ligament injury. Objective: To review the diagnosis and treatment of ankle fractures combined with acute deltoid ligament injury. Methods: Recent literature concerning the diagnosis and treatment of ankle fractures combined with acute deltoid ligament injury was reviewed. Results: Misdiagnosis is common for ankle fractures combined with acute deltoid ligament injury. The diagnosis is based on patients' complaints, symptoms, and imaging examination, and surgical exploration may even be necessary. Whether to repair the deltoid ligament remains controversial. Conclusion: The deltoid ligament is an important structure stabilizing the medial ankle joint. However, treatment of the different kinds of ankle fractures combined with acute deltoid ligament injury should be standardized; whether or not to repair the deltoid ligament is determined by intraoperative ankle stability.

abstract_id: PUBMED:30482440 Adding deltoid ligament repair in ankle fracture treatment: Is it necessary? A systematic review. Background: Deltoid ligament injuries are typically caused by supination-external rotation or pronation injury. Numerous ligament reconstruction techniques have been proposed; however, clear indications for operative repair have not yet been well established in the literature. Methods: We reviewed primary research articles comparing ORIF treatment for ankle fracture with versus without deltoid ligament repair. Results: Five studies were identified with a total of 281 patients. 137 patients underwent ORIF with deltoid repair, while 144 patients underwent ORIF without deltoid ligament repair. Clinical, radiographic, and functional outcomes, as well as complications, were considered. The average follow-up was 31 months (range, 5-120). Conclusions: The current literature does not provide a clear indication for repair of the deltoid ligament at the time of ankle fracture repair. There may be some advantages to adding deltoid ligament repair for patients with high fibular fractures or in patients with concomitant syndesmotic fixation. Level Of Clinical Evidence: III.

abstract_id: PUBMED:9515134 Is surgical treatment of deltoid ligament rupture necessary in ankle fractures? Purpose Of The Study: Fractures of the lateral malleolus associated with rupture of the deltoid ligament are severe fracture types. There is still discussion about whether the ruptured deltoid ligament should be sutured or not. To further elucidate the need for surgical repair of this structure, a comparative and retrospective review was conducted at a mean follow-up of 4 years and 8 months. Material And Methods: Twenty-nine men and 15 women were included, with a mean age of 34 years. Patients were subdivided into two groups according to the attitude regarding the ligament. In the first group (n = 18) an operative repair of the ligament was performed, and in the second group (n = 17) it was left unrepaired. Nine patients were evaluated separately because of an associated osteochondral fracture (n = 7) or a poorer reduction of the fibula (n = 2). Subjective and objective clinical assessments were evaluated according to a modified Cedell classification. Roentgenograms including A.P., lateral, mortise and external rotation stress views as described by Kleiger were obtained in all patients. Results: Subjective and objective analysis showed no significant difference between the two groups; likewise, no differences were observed in postoperative complication rates. Medial instability was observed in four cases (2 in group I and 2 in group II). Roentgenographically, more ossifications of the deltoid ligament were found in group II (p = 0.013), and only one case of degenerative osteoarthritis of the ankle was seen, in group II. Clinical results in the group of patients with an osteochondral fracture were statistically worse than in the two previous groups (p = 0.001), with frequent progression to osteoarthritis in four cases. Discussion: In our experience it is impossible to advise surgical repair of the deltoid ligament according to the type of lateral malleolar fracture, as other authors have suggested. The existence of a significant widening of the medial space greater than 3 mm was closely correlated with deltoid ligament disruption: of the 23 patients treated with a medial approach, the ligament was ruptured in 22 cases. From this study we may conclude that an untreated rupture of the deltoid ligament does not lead to instability. The advantages of deltoid repair may be obtained if the fixation of the lateral malleolus allows a perfect congruency of the mortise. The most predictive radiographic factors for a poor outcome were a persistent widening of the medial joint greater than 3 mm, an associated osteochondral fracture and a poor reduction of the lateral malleolus, which results in degenerative arthritis of the ankle at long-term follow-up. Conclusion: Repair of the deltoid ligament is unnecessary if the internal fixation of the fibula achieves an anatomical reconstitution of the mortise. Exploration of the medial side is indicated only with a medial incongruency greater than 3 mm on intraoperative roentgenograms.

abstract_id: PUBMED:33218869 The Role of Deltoid Ligament Repair in Ankle Fractures With Syndesmotic Instability: A Systematic Review. Ankle fractures are the fourth most common fracture requiring surgical management. The deltoid ligament is a primary ankle stabilizer against valgus forces. It is frequently ruptured in ankle fractures; however, there is currently no consensus regarding repair. A systematic database search was conducted in Medline, PubMed, and Embase for relevant studies discussing patients with ankle fractures involving deltoid ligament rupture and repair. Screening, quality assessment, and data extraction were performed independently and in duplicate. Data extracted included pain, range of motion (ROM), function, medial clear space (MCS), syndesmotic malreduction, and complications. After screening, 9 eligible studies from 1990 to 2018 were included (N = 508). Compared to non-repair groups, deltoid ligament repair patients had lower syndesmotic malreduction rates (0%-9% vs 20%-35%, p ≤ .05), fewer implant removals (5.8% vs 41%, p ≤ .05), and longer operating times by 16-20 minutes (p ≤ .05). There was no significant difference for pain, function, ROM, MCS, or complication rate (p > .05). In conclusion, deltoid ligament repair offers lower syndesmotic malreduction rates and reduced re-operation rates for hardware removal in comparison to trans-syndesmotic screws. Repair groups demonstrated equivalent or better outcomes for pain, function, ROM, MCS, and complication rates. Other, newer syndesmotic fixation methods such as suture-button fixation require further evaluation in comparison with the outcomes of deltoid ligament repair. A randomized controlled trial is required to further examine the outcomes of ankle fracture patients who undergo deltoid ligament repair versus trans-syndesmotic screw fixation.

abstract_id: PUBMED:33990258 Deltoid Rupture in Ankle Fractures: To Repair or Not to Repair? The most common injury mechanism for ankle fractures with concomitant deltoid ligament injury is a supination-external rotation type 4 trauma. In the acute setting, malalignment, ecchymosis, and profound edema of the affected ankle can be found. Clinical examination is a poor indicator of deltoid ligament injury. There is a lack of high-quality studies with suturing of the deltoid as the primary question. The authors found 4 comparative studies concluding that it is unnecessary to explore and reconstruct the deltoid ligament, and 4 comparative studies concluding that it is necessary.

abstract_id: PUBMED:37449714 The Biomechanical Role of the Deltoid Ligament on Ankle Stability: Injury, Repair, and Augmentation. Background: Deltoid ligament injuries occur in isolation as well as with ankle fractures and other ligament injuries. Both operative and nonoperative treatment are used, but debate on the optimal treatment continues. Likewise, the best method of surgical repair of the deltoid ligament remains unclear. Purpose: To determine the biomechanical role of the native anterior and posterior components of the deltoid ligament in ankle stability, and to determine the efficacy of simple suture versus augmented repair. Study Design: Controlled laboratory study. Methods: Ten cadaveric ankles (mean age, 51 years; age range, 34-64 years; all male specimens) were mounted on a 6 degrees-of-freedom robotic arm. Each specimen underwent biomechanical testing in 8 states: (1) intact, (2) anterior deltoid cut, (3) anterior repair, (4) tibiocalcaneal augmentation, (5) deep anterior tibiotalar augmentation, (6) posterior deltoid cut, (7) posterior repair, and (8) complete deltoid cut. Testing consisted of anterior drawer, eversion, and external rotation (ER), each performed at neutral and at 25° of plantarflexion. A 1-factor, random-intercepts, linear mixed-effects model was created, and all pairwise comparisons were made between testing states. Results: Cutting the anterior deltoid introduced ER laxity (+2.1°; P = .009) and eversion laxity (+6.2° of eversion; P < .001) at 25° of plantarflexion. Anterior deltoid repair restored native ER but not eversion. Tibiocalcaneal augmentation reduced eversion laxity, but tibiotalar augmentation provided no additional benefit. The posterior deltoid tear showed no increase in laxity. A complete tear introduced significant anterior translation, ER, and eversion laxity (+7.6 mm of anterior translation, +13.8° of ER and +33.6° of eversion; P < .001). Conclusion: A complete deltoid tear caused severe instability of the ankle joint. Augmented anterior repair was sufficient to stabilize the complete tear, and no additional benefit was provided by posterior repair. For an isolated anterior tear, repair with tibiocalcaneal augmentation was the optimal treatment. Clinical Relevance: Deltoid repair with augmentation may reduce or avoid the need for prolonged postoperative immobilization and encourage accelerated rehabilitation, preventing stiffness and promoting earlier return to preinjury activity.

abstract_id: PUBMED:37936789 Clinical effectiveness of suture anchor repair combined with open reduction and internal fixation in the treatment of deltoid ligament rupture in ankle fracture. Objective: To explore the clinical effectiveness of suture anchor (SA) repair combined with open reduction and internal fixation (ORIF) in the treatment of deltoid ligament rupture (DLR) in ankle fractures. Methods: This is a retrospective analysis of 210 patients with DLR in ankle fracture who were treated in Beijing Chaoyang Hospital from January 2020 to June 2022. According to the surgical records, 125 patients received SA repair combined with ORIF (Repair group) and 85 patients received ORIF only (Non-repair group). The curative effect, recovery of ankle joint function, pain, and bone metabolism of the two groups were observed. Results: The rate of good overall clinical effectiveness was higher in the Repair group (P < 0.05). The American Orthopedic Foot and Ankle Society (AOFAS) score was higher three and six months post-operation in the Repair group, and the Visual Analogue Scale (VAS) score was lower than that of the Non-repair group (P < 0.05). The Repair group had higher levels of bone-specific alkaline phosphatase (BALP) and bone gla protein (BGP) than the Non-repair group six months post-operation (P < 0.05). Conclusions: SA repair combined with ORIF has a good effect in the treatment of DLR in ankle fracture patients; it can promote the recovery of ankle function, relieve postoperative pain and improve bone metabolism.

abstract_id: PUBMED:24136262 Surgical treatment of deltoid ligament injury associated with ankle fractures. Objective: To observe the clinical outcomes of surgical treatment of deltoid ligament injury associated with ankle fractures. Methods: From January 2005 to December 2009, 16 deltoid ligament ruptures associated with ankle fractures were repaired. According to the AO/OTA system, 2 cases were type A fractures, 8 type B, and 6 type C. Radiographs, American Orthopaedic Foot and Ankle Society (AOFAS) ankle-hindfoot scores and a visual analogue scale (VAS) were used as outcome measurements. Results: The 16 patients were followed up for 30 to 84 months, with a mean follow-up of 47 months. All wounds healed at the first stage. The mean time to bone union was 12.8 weeks (range: 10-14 weeks). The mean AOFAS ankle-hindfoot score at the last follow-up was 93 points (range: 85-100 points). The mean VAS score was 0.94 points (range: 0-2 points). Conclusion: Surgical treatment of ankle fractures associated with deltoid ligament rupture can achieve satisfactory outcomes, but it is important to determine the operative indication.

abstract_id: PUBMED:29078057 Short-Term Results of a Ruptured Deltoid Ligament Repair During an Acute Ankle Fracture Fixation. Background: There is no consensus on the optimal treatment or preferred method of operation for the management of acute deltoid ligament injuries during ankle fracture fixation. This study aimed to analyze the outcomes of repairing the deltoid ligament during the fixation of an ankle fracture compared to conservative management. Methods: We retrospectively evaluated 78 consecutive cases of a ruptured deltoid ligament with an associated ankle fracture between 2001 and 2016. All of the ankle fractures were treated with plate and screw fixation. Patients receiving conservative treatment for the ruptured deltoid ligament underwent management from 2001 to 2008 (37 fractures, group 1), while operative treatment for the ruptured deltoid ligament was performed from 2009 to 2016 (41 fractures, group 2). The outcome measures included radiographic findings, the American Orthopaedic Foot & Ankle Society ankle-hindfoot scores, visual analog scale scores, and the Foot Function Index. All patients were followed for an average of 17 months. Results: Radiologic findings in both groups were comparable, but the medial clear space (MCS) at final follow-up was significantly smaller in group 2 (P < .01). Clinical outcomes were similar between the two groups (P > .05). Comparing those who underwent syndesmotic fixation between the two groups, group 2 showed a significantly smaller final follow-up MCS, and all clinical outcomes were better in group 2 (P < .05). Linear regression analysis showed that the final follow-up MCS had a significant influence on clinical outcomes (P < .05). Conclusion: Although the clinical outcomes were not significantly different between the 2 groups, we obtained a more favorable final follow-up MCS in the deltoid repair group. In particular, when accompanied by a syndesmotic injury, the final follow-up MCS and the clinical outcomes were better in the deltoid repair group. In cases of high-grade unstable fractures of the ankle with syndesmotic instability, direct repair of the deltoid ligament was adequate for restoring medial stability. Level Of Evidence: Level III, retrospective comparative case series.

abstract_id: PUBMED:25618804 Repair of the acute deltoid ligament complex rupture associated with ankle fractures: a multicenter clinical study. Controversy exists concerning the need for operative repair of the deltoid ligament during management of acute ankle fractures. The purpose of our report was to identify the indications for surgical intervention for deltoid ligament injury in the setting of ankle fractures. Furthermore, we aimed to elucidate the clinical outcomes after deltoid ligament repair in this setting. This was a multicenter study involving 4 clinical institutions. From January 2006 to December 2011, 1533 ankle fractures underwent surgical intervention. Of this group, 131 deltoid ligament ruptures (8.55%) were identified and repaired operatively. Of the 131 patients, 74 were male (56.5%) and 57 were female (43.5%), with a mean age of 33.2 (range 16 to 63) years. The outcome measures included the clinical examination findings, radiographic findings, American Orthopaedic Foot and Ankle Society ankle-hindfoot scores, visual analog scale (VAS) scores, and Medical Outcomes Study Short Form 36-item questionnaire scores. All incisions healed primarily. A total of 106 patients were followed up for a minimum of 12 (range 12 to 72) months, with an average follow-up period of 27 months. The mean interval to fracture union was 14.5 (range 9 to 16) weeks. The mean American Orthopaedic Foot and Ankle Society ankle-hindfoot score at the latest follow-up visit was 91.4 (range 83 to 100) points. The mean visual analog scale score was 1.2 (range 0 to 6) points. The mean Short Form-36 score was 91.2 (range 80 to 96) points. Compared with the preoperative scores, all 3 outcome measures had improved significantly postoperatively (p < .05). The postoperative stress radiographs did not reveal any ankle instability. None of the patients had evidence of post-traumatic arthritis of the ankle on clinical examination or radiographs. A reasonable clinical evaluation and surgical repair were executed, with an appropriate repair technique chosen according to the site of deltoid ligament rupture. The results of the present multicenter study have shown that deltoid ligament rupture can be repaired in patients with an unstable medial ankle after fracture fixation, preventing complications related to residual ankle instability.

Answer: The necessity of surgical treatment for deltoid ligament rupture in ankle fractures is a subject of ongoing debate and research. Some studies suggest that surgical repair of the deltoid ligament may not be necessary if the internal fixation of the fibula achieves an anatomical reconstitution of the mortise, and that exploration of the medial side is indicated only with a medial incongruency greater than 3 mm on intraoperative roentgenograms (PUBMED:9515134). Other studies indicate that there is no clear consensus on repair of the deltoid ligament at the time of ankle fracture repair, although there may be some advantages to adding deltoid ligament repair for patients with high fibular fractures or in patients with concomitant syndesmotic fixation (PUBMED:30482440). A systematic review found that, compared to non-repair groups, deltoid ligament repair patients had lower syndesmotic malreduction rates and fewer implant removals, suggesting that deltoid ligament repair offers lower syndesmotic malreduction rates and reduced re-operation rates for hardware removal in comparison to trans-syndesmotic screws (PUBMED:33218869). Another study found that suture anchor repair combined with open reduction and internal fixation in the treatment of deltoid ligament rupture in ankle fractures can promote the recovery of ankle function, relieve postoperative pain, and improve bone metabolism (PUBMED:37936789). However, there are also studies that have found no significant difference in subjective and objective analysis between groups with and without deltoid ligament repair, suggesting that an untreated rupture of the deltoid ligament does not lead to instability (PUBMED:9515134). Additionally, some studies have reported satisfactory outcomes with surgical treatment of ankle fractures associated with deltoid ligament rupture but emphasize the importance of determining the operative indication (PUBMED:24136262). In conclusion, while some evidence supports surgical repair of the deltoid ligament in certain cases of ankle fractures, particularly those with syndesmotic instability or high fibular fractures, there is still no clear consensus on the necessity of this procedure for all cases. Further research, including randomized controlled trials, may be required to establish definitive guidelines for the treatment of deltoid ligament rupture in the context of ankle fractures (PUBMED:33218869).

Instruction: Does repair of mitral regurgitation eliminate the need for left ventricular volume reduction?

Abstracts:

abstract_id: PUBMED:12930276 Does repair of mitral regurgitation eliminate the need for left ventricular volume reduction? Background: The effects of partial left ventriculectomy (PLV) remain ill-defined, because mitral regurgitation (MR) repair by isolated annuloplasty alone has been reported to improve patients with a dilated left ventricle and severe MR. Methods: Among patients undergoing PLV, 120 had paired pre- and postoperative (<1 week) Doppler echocardiograms. The effect of preoperative MR was studied by comparing 45 patients with no preoperative MR (MR-) and 75 patients with significant MR (MR+; mean MR grade = 1.51 when MR is enumerated as none = 0, mild = 1, moderate = 2). Results: MR- patients, as compared with the MR+ group, were older (53.8 vs. 49.2 years, P = 0.047), had less frequent dilated cardiomyopathy (33.3% vs 49.3%, P < 0.01), and had similar ventricular dimension (72.3 mm vs 73.0 mm), septal thickness (9.5 mm vs 9.6 mm), posterior wall thickness, fractional shortening (15.9% vs 16.8%) and ventricular mass (330 g vs 345 g), resulting in comparably reduced functional capacity (NYHA 3.40 vs 3.67). Although the MR- group required a mitral procedure significantly less frequently (64.4% vs 84.0%, P < 0.01) and had shorter cardiac arrest times, they had similar postoperative MR (0.22 vs 0.39), a highly significant parallel reduction in ventricular dimension (P < 0.001 in either group), and improved %FS (P < 0.001 in either group), resulting in similar hospital survival (87.1% vs 86.4%) and 90-day survival (71.1% vs 78.7%), with significant and comparable improvement in functional class (P = 0.011 in both groups). The histological severity of interstitial fibrosis (P = 0.80) and the weight (P = 0.93) and thickness (P = 0.76) of the excised myocardium were comparable between the two groups. Conclusion: Patients with no preoperative MR were found to benefit from PLV as did patients with significant MR. The beneficial effects of PLV appeared to derive mainly from volume reduction rather than from abolished MR in this study.

abstract_id: PUBMED:35748086 Stage-based approach to predict left ventricular reverse remodeling after mitral repair. Background: Although predictors of reverse left ventricular (LV) remodeling post mitral valve repair are critical for guiding perioperative decision-making, there remains a paucity of randomized, prospective data to support the criteria that potential predictor variables must meet. Methods And Results: The CAMRA CardioLink-2 randomized trial allocated 104 patients to either leaflet resection or preservation strategies for mitral repair. The correlations of indexed left ventricular end-systolic volume (LVESVI), indexed left ventricular end-diastolic volume (LVEDVI), and left ventricular ejection fraction (LVEF) were tested with univariate analysis and subsequently with multivariate analysis to determine independent predictors of reverse remodeling at discharge and at 12 months postoperatively. At discharge, both LVESVI and LVEDVI were independently associated with their preoperative values (p < .001 for both), and LVEF was independently associated with preoperative LVESVI (p < .001). Mitral ring size was favorably associated with the change in LVESVI (p < .05) and LVEF (p < .01) from pre-discharge to 12 months, while the mean mitral valve gradient after repair was adversely associated with the change in LVESVI (p < .05) and LVEDVI (p < .05). No significant associations were found between reverse remodeling and coaptation height or mitral repair technique. Conclusions: Beyond confirming the lack of impact of mitral repair technique on reverse remodeling, this investigation suggests that recommending surgery before significant LV dilatation or dysfunction, as well as higher postoperative mitral valve hemodynamic performance, may enhance remodeling capacity following mitral repair.

abstract_id: PUBMED:26189162 Left ventricular performance early after repair for posterior mitral leaflet prolapse: Chordal replacement versus leaflet resection. Objective: To review hemodynamic performance early after valve repair with chordal replacement versus leaflet resection for posterior mitral leaflet prolapse. Methods: Between April 2006 and September 2014, 72 consecutive patients underwent valve repair with chordal replacement (30 patients) or leaflet resection (42 patients) for isolated posterior mitral leaflet prolapse. Left ventricular ejection fraction, end-systolic elastance, effective arterial elastance, and ventricular efficiency were noninvasively measured by echocardiography and analyzed preoperatively and ~1 month postoperatively. Mitral valve repair was accomplished in all patients, and no regurgitation (including trivial) was observed postoperatively. Results: Chordal replacement resulted in a significantly smaller reduction in left ventricular ejection fraction, and a significantly greater increase in end-systolic elastance, than leaflet resection (left ventricular ejection fraction, 4.8% vs 16.7% relative decrease [P = .005]; end-systolic elastance, 19.0% vs -1.3% relative increase [P = .012]). Despite comparable preoperative ventricular efficiency between the groups, postoperative ventricular efficiency in the chordal replacement group was superior to that in the leaflet resection group (ventriculoarterial coupling, 32.0% vs 89.3% relative increase [P = .007]; ratio of stroke work to pressure-volume area, 4.3% vs 13.4% relative decrease [P = .008]). In multivariate analysis, operative technique was a significant determinant of left ventricular ejection fraction and of the ratio of stroke work to pressure-volume area (P = .030 and P = .030, respectively). Conclusions: Chordal replacement might provide patients undergoing valve repair for posterior mitral leaflet prolapse with better postoperative ventricular performance than leaflet resection. Longer follow-up is required to compare long-term outcomes.

abstract_id: PUBMED:7979733 Regression of left ventricular mass after mitral valve repair of pure mitral regurgitation. To evaluate the effect of mitral valve repair on the regression of left ventricular mass, we studied 50 consecutive patients with severe, pure mitral regurgitation undergoing mitral valve repair. Two-dimensional echocardiograms were recorded a mean of 2.5 ± 2.0 weeks before and 6.5 ± 2.5 months after the valve operation. Significant postoperative mitral regurgitation was present in 3 patients. After mitral valve repair there were significant decreases in left ventricular end-diastolic volume index (133 ± 39 mL/m² to 79 ± 35 mL/m²; p < 0.001), end-systolic volume index (44 ± 26 mL/m² to 30 ± 26 mL/m²; p < 0.001), stroke volume index (89 ± 29 mL/m² to 49 ± 19 mL/m²; p < 0.001), and mass index (211 ± 82 g/m² to 134 ± 52 g/m²; p < 0.001). There were also significant decreases in left atrial dimension (47 ± 9 mm to 38 ± 9 mm; p < 0.001), left ventricular end-diastolic dimension (61 ± 8 mm to 48 ± 7 mm; p < 0.001), and end-systolic dimension (39 ± 8 mm to 32 ± 7 mm; p < 0.001). Left ventricular ejection fraction decreased slightly, from 0.69 ± 0.12 to 0.64 ± 0.12 (p < 0.01), after repair. Thus, correction of pure mitral regurgitation leads to a reduction of cardiac chamber size and left ventricular volumes as well as regression of left ventricular mass.

abstract_id: PUBMED:23633132 Percutaneous mitral valve repair in the initial EVEREST cohort: evidence of reverse left ventricular remodeling. Background: Percutaneous repair of mitral regurgitation (MR) permits examination of the effect of MR reduction, without surgery and cardiopulmonary bypass, on left ventricular (LV) dimensions and function. The goal of this analysis was to determine the extent of reverse remodeling at 12 months after successful percutaneous reduction of MR with the MitraClip device. Methods And Results: Of 64 patients with 3+ and 4+ MR who achieved acute procedural success after treatment with the MitraClip device, 49 patients had moderate or less MR at 12-month follow-up. Their baseline and 12-month echocardiograms were compared between the groups with and without LV dysfunction. In patients with persistent MR reduction and pre-existing LV dysfunction, there was a reduction in LV wall stress, reduced LV end-diastolic and end-systolic volumes, and an increase in LV ejection fraction, in contrast to those with normal baseline LV function, who showed a reduction in LV end-diastolic volume and LV wall stress, no change in LV end-systolic volume, and a fall in LV ejection fraction. Conclusions: Patients with pre-existing LV dysfunction demonstrate reverse remodeling and improved LV ejection fraction after percutaneous mitral valve repair. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00209339, NCT00209274.

abstract_id: PUBMED:10730612 Left ventricular performance in chronic mitral regurgitation: temporal response to valve repair and prognostic value of early postoperative echocardiographic parameters. Background: The temporal response of the left ventricle to the relief of volume loading after mitral valve repair, and the prognostic value of early changes in left ventricular size and function, are not fully documented. The purpose of this study was to analyze the evolution of left ventricular performance after surgery, and to evaluate how early postoperative echocardiographic parameters compare with late ventricular function. Methods: We studied 58 patients with chronic degenerative mitral regurgitation using echocardiography before, and 9 ± 3 days and 38 ± 6 months after, mitral valve repair. Results: Between the preoperative and early postoperative studies, left ventricular end-diastolic and left atrial size and ejection fraction decreased, whereas left ventricular end-systolic dimension did not change. Between the early and late postoperative studies, left ventricular end-systolic size decreased significantly, there was a further decrease in left ventricular end-diastolic dimension, and ejection fraction increased significantly; left atrial size did not change. Multivariate analysis showed that preoperative and early postoperative ejection fraction and the early postoperative reduction in diastolic dimension were the best predictors of late left ventricular function. Conclusions: In patients with chronic degenerative mitral regurgitation, the greatest reduction in end-diastolic dimension occurs within 2 weeks of the reversal of volume overload; a significant reduction in end-systolic dimension with an increase in ejection fraction occurs later. In our experience, early postoperative echocardiographic measurements of left ventricular size and function can provide important prognostic information.

abstract_id: PUBMED:38351687 Timing of Surgery for Asymptomatic Primary Mitral Regurgitation: Possible Value of Early, Serial Measurements of Left Ventricular Sphericity. Asymptomatic primary mitral regurgitation due to myxomatous degeneration of the mitral valve leaflets may remain asymptomatic for long periods, even as left ventricular function progresses to a decompensated stage. During the early compensated stage, the ventricle's initial response to the volume overload is an asymmetric increase in the diastolic short-axis dimension, accomplished by a diastolic shift of the interventricular septum into the right ventricular cavity, creating a more spherical left ventricular diastolic shape and increasing diastolic filling and stroke volume. Early valve repair is recommended to reduce postoperative left ventricular dysfunction. Early serial measurements of the left ventricular sphericity index (LV-Si) during the compensated stage of mitral regurgitation might identify subtle changes in left ventricular shape and assist in determining the optimal earliest timing for surgical intervention.

abstract_id: PUBMED:37345813 Acute Reduction in Left Ventricular Function Following Transcatheter Mitral Edge-to-Edge Repair. Background: Little is known about the impact of transcatheter mitral valve edge-to-edge repair on changes in left ventricular ejection fraction (LVEF) and the effect of an acute reduction in LVEF on prognosis. We aimed to assess changes in LVEF after transcatheter mitral valve edge-to-edge repair for both primary and secondary mitral regurgitation (PMR and SMR, respectively), identify rates and predictors of LVEF reduction, and estimate its impact on prognosis. Methods And Results: In this international multicenter registry, patients with both PMR and SMR undergoing transcatheter mitral valve edge-to-edge repair were included. We assessed rates of acute LVEF reduction (LVEFR), defined as an acute relative decrease of >15% in LVEF, its impact on all-cause mortality, major adverse cardiac events (a composite end point of all-cause death, mitral valve surgery, and residual mitral regurgitation grade ≥2), and LVEF at 12 months, as well as predictors of LVEFR. Of 2534 patients included (727 with PMR and 1807 with SMR), 469 (18.5%) developed LVEFR. Patients with PMR were older (79.0±9.2 versus 71.8±8.9 years; P<0.001) and had a higher mean LVEF (54.8±14.0% versus 32.7±10.4%; P<0.001) at baseline. After 6 to 12 months (median, 9.9 months; interquartile range, 7.8-11.9 months), LVEF was significantly lower in patients with PMR (53.0% versus 56.0%; P<0.001) but not in patients with SMR. One-year mortality was higher in patients with PMR with LVEFR (16.9% versus 9.7%; P<0.001) but not in those with SMR (P=0.236). LVEF at baseline (odds ratio, 1.03 [95% CI, 1.01-1.05]; P=0.002) was predictive of LVEFR for patients with PMR, but not for those with SMR (P=0.092). Conclusions: Reduction in LVEF is not uncommon after transcatheter mitral valve edge-to-edge repair and is correlated with a worsened prognosis in patients with PMR but not in patients with SMR. Registration URL: https://www.clinicaltrials.gov; Unique identifier: NCT05311163.
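
The LVEFR definition above is a relative, not absolute, decrease; the short sketch below makes that distinction explicit. Variable names and the worked numbers are illustrative only.

```python
def is_lvefr(lvef_pre_pct: float, lvef_post_pct: float) -> bool:
    """Acute LVEF reduction (LVEFR): a relative decrease of more than 15%."""
    return (lvef_pre_pct - lvef_post_pct) / lvef_pre_pct > 0.15

# Example: a fall from 54.8% to 45.0% is a 17.9% relative decrease, so it
# qualifies as LVEFR even though it is only about 10 percentage points.
```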

abstract_id: PUBMED:34300179 Forward Left Ventricular Ejection Fraction as a Predictor of Postoperative Left Ventricular Dysfunction in Patients with Degenerative Mitral Regurgitation. Background: Left ventricular dysfunction (LVD) can occur immediately after mitral valve repair (MVr) for degenerative mitral regurgitation (DMR) in some patients with normal preoperative left ventricular ejection fraction (LVEF). This study investigated whether forward LVEF, calculated as left ventricular outflow tract stroke volume divided by left ventricular end-diastolic volume, could predict LVD immediately after MVr in patients with DMR and normal LVEF. Methods: Echocardiographic and clinical data were retrospectively evaluated in 234 patients with DMR ≥ moderate and preoperative LVEF ≥ 60%. LVD and non-LVD were defined as LVEF < 50% and ≥ 50%, respectively, as measured by echocardiography after MVr and before discharge. Results: Of the 234 patients, 52 (22.2%) developed LVD at a median of three days (interquartile range: 3-4 days). Preoperative forward LVEF in the LVD and non-LVD groups was 24.0% (18.9-29.5%) and 33.2% (26.4-39.4%), respectively (p < 0.001). Receiver operating characteristic (ROC) analyses showed that forward LVEF was predictive of LVD, with an area under the ROC curve of 0.79 (95% confidence interval: 0.73-0.86) and an optimal cut-off of 31.8% (sensitivity: 88.5%, specificity: 58.2%, positive predictive value: 37.7%, and negative predictive value: 94.6%). Preoperative forward LVEF correlated significantly with preoperative mitral regurgitant volume (correlation coefficient [CC] = -0.86, p < 0.001) and regurgitant fraction (CC = -0.98, p < 0.001), but not with preoperative LVEF (CC = 0.112, p = 0.088). Conclusion: Preoperative forward LVEF could be useful in predicting postoperative LVD immediately after MVr in patients with DMR and normal LVEF, with an optimal cut-off of 31.8%.
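
To make the forward-LVEF definition concrete, here is a minimal sketch. The abstract defines forward LVEF as LVOT stroke volume over end-diastolic volume; computing LVOT stroke volume from the outflow-tract diameter and velocity-time integral is the standard echocardiographic approach, assumed here, and the worked numbers are purely illustrative.

```python
import math

def lvot_stroke_volume_ml(lvot_diameter_cm: float, lvot_vti_cm: float) -> float:
    """Stroke volume = circular LVOT cross-sectional area x velocity-time integral."""
    area_cm2 = math.pi * (lvot_diameter_cm / 2.0) ** 2
    return area_cm2 * lvot_vti_cm          # cm² x cm = cm³ = mL

def forward_lvef_pct(lvot_sv_ml: float, lvedv_ml: float) -> float:
    return 100.0 * lvot_sv_ml / lvedv_ml

# Example: a 2.1 cm LVOT diameter and 20 cm VTI give ~69 mL stroke volume;
# with a 220 mL end-diastolic volume, forward LVEF is ~31.5%, just under the
# 31.8% cut-off reported above as predictive of early postoperative LVD.
```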
Freedom from recurrent MR was found in 75.9% of the improved left ventricular systolic function group and 56.2% of the deteriorated left ventricular systolic function group (P = 0.047). Age > 50 years (odds ratio 2.40), ejection fraction ≤ 52% (odds ratio 2.79) and left ventricular end-systolic dimension ≥ 45 mm (odds ratio 2.31) were independent risk factors for deteriorated left ventricular systolic function during follow-up. Conclusions: Mitral valve repair could be safely performed for degenerative MR in patients with left ventricular systolic dysfunction. Intraoperative residual mild MR and an isolated anterior leaflet lesion were independent predictive factors for recurrent MR. Age > 50 years, ejection fraction ≤ 52% and left ventricular end-systolic dimension ≥ 45 mm were independent risk factors for deteriorated left ventricular systolic function during follow-up. Answer: The repair of mitral regurgitation (MR) does not necessarily eliminate the need for left ventricular volume reduction. The effects of partial left ventriculectomy (PLV) remain ill-defined because isolated annuloplasty alone has been reported to improve patients with dilated left ventricle and severe MR. A study comparing patients with no preoperative MR (MR-) and those with significant MR (MR+) found that both groups benefited from PLV, suggesting that the beneficial effects of PLV mainly derive from volume reduction rather than abolished MR (PUBMED:12930276). Moreover, while predictors of reverse left ventricular (LV) remodeling post-mitral valve repair are critical for guiding perioperative decision-making, there is a lack of randomized, prospective data to support the criteria that potential predictor variables must meet. A study found that recommending surgery before significant LV dilatation or dysfunction, as well as higher postoperative mitral valve hemodynamic performance, may enhance remodeling capacity following mitral repair (PUBMED:35748086). Additionally, a study on hemodynamic performance early after valve repair with chordal replacement versus leaflet resection for posterior mitral leaflet prolapse indicated that chordal replacement might provide better postoperative ventricular performance than leaflet resection (PUBMED:26189162). Another study showed that correction of pure MR leads to a reduction of cardiac chamber size and left ventricular volumes as well as regression of left ventricular mass (PUBMED:7979733). In the context of percutaneous mitral valve repair, patients with pre-existing LV dysfunction demonstrated reverse remodeling and improved LV ejection fraction after the procedure (PUBMED:23633132). Overall, while mitral valve repair can lead to reverse remodeling and improvement in LV function, the need for left ventricular volume reduction may still be present depending on the individual patient's condition and the extent of LV remodeling. The decision to perform additional volume reduction procedures should be based on a comprehensive assessment of the patient's cardiac function and the potential benefits and risks of additional interventions.
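Both definitions used in the abstracts above reduce to simple arithmetic on echocardiographic measurements, which the following minimal Python sketch makes explicit. Only the forward-LVEF formula (PUBMED:34300179) and the 15% relative-decrease threshold for LVEFR (PUBMED:37345813) come from the abstracts; the example values are invented.

def forward_lvef(lvot_stroke_volume_ml: float, lv_end_diastolic_volume_ml: float) -> float:
    """Forward LVEF = LVOT stroke volume / LV end-diastolic volume (PUBMED:34300179)."""
    return lvot_stroke_volume_ml / lv_end_diastolic_volume_ml

def acute_lvef_reduction(lvef_before: float, lvef_after: float, threshold: float = 0.15) -> bool:
    """LVEFR flag: acute relative LVEF decrease of >15% from baseline (PUBMED:37345813)."""
    return (lvef_before - lvef_after) / lvef_before > threshold

# 45/160 = 0.28, below the reported 31.8% forward-LVEF cut-off;
# a drop from 55% to 44% is a 20% relative reduction, so LVEFR is flagged.
print(forward_lvef(45.0, 160.0))         # 0.28125
print(acute_lvef_reduction(0.55, 0.44))  # True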
Instruction: Does fetal gender affect cytotrophoblast cell activity in the human term placenta? Abstracts: abstract_id: PUBMED:7033346 How Kergaradec listened to the fetus (author's transl) Laennec wrote in the second edition of his Treatise on Auscultation (1826, II, 457): "I never thought of applying auscultation to studying the phenomena of pregnancy. This fortunate idea came to my compatriot and friend Monsieur le docteur de Kergaradec." It was in fact on 26 December 1821 that Kergaradec read his "Memoire on Auscultation as applied to the study of pregnancy" and posed the vital question: "Would it not be possible to judge the state of health or illness of the fetus from the variations in the strength and frequency of the fetal heart beat?" The answer 160 years later, after so much work by innumerable authors, is: yes, the fetal heart beat does make it possible to judge the vitality or distress of the fetus! It is right that today obstetricians should pause to reflect on Kergaradec's life and on the lasting value of his inspired prophecy, for he was the first to think of it. abstract_id: PUBMED:30930420 Human Placenta Hydrolysate Promotes Liver Regeneration via Activation of the Cytokine/Growth Factor-Mediated Pathway and Anti-oxidative Effect. Liver regeneration is a very complex process and is regulated by several cytokines and growth factors. It is also known that liver transplantation and the regeneration process cause massive oxidative stress, which interferes with liver regeneration. The placenta is known to contain various physiologically active ingredients such as cytokines, growth factors, and amino acids. In particular, human placenta hydrolysate (hPH) has been found to contain many amino acids. Most of the growth factors found in the placenta are known to be closely related to liver regeneration. Therefore, in this study, we investigated whether hPH is effective in promoting liver regeneration in rats undergoing partial hepatectomy. We confirmed that cell proliferation was significantly increased in HepG2 and human primary cells. Hepatocyte proliferation was also promoted in partially hepatectomized rats by hPH treatment. hPH increased liver regeneration rate, double nucleic cell ratio, mitotic cell ratio, proliferating cell nuclear antigen (PCNA), and Ki-67 positive cells in vivo as well as interleukin (IL)-6, tumor necrosis factor alpha (TNF-α), and hepatocyte growth factor (HGF). Moreover, Kupffer cells secreting IL-6 and TNF-α were activated by hPH treatment. In addition, hPH reduced thiobarbituric acid reactive substances (TBARs) and significantly increased glutathione (GSH), glutathione peroxidase (GPx), and superoxide dismutase (SOD). Taken together, these results suggest that hPH promotes liver regeneration by activating cytokines and growth factors associated with liver regeneration and eliminating oxidative stress. abstract_id: PUBMED:32072566 Amphiphilic dextran-vinyl laurate-based nanoparticles: formation, characterization, encapsulation, and cytotoxicity on human intestinal cell line. Dextran has been a model material for therapeutic applications owing to its biodegradable and biocompatible properties and its ability to be functionalized in a variety of ways. In this study, the amphiphilic dextran was successfully synthesized through lipase-catalyzed transesterification between dextran and vinyl laurate.
In aqueous solution, the produced dextran ester could self-assemble into spherical nanoparticles ("Dex-L NPs") with approximately 200-nm diameter, and could incorporate porcine placenta hydrolysate with 60% encapsulation efficiency. Furthermore, Dex-L NPs exhibited low cytotoxic effects on a human intestinal cell line and, thus, were potentially safe for oral administration. Taken together, the findings illustrate the potential of the newly developed nanoparticles to serve as an efficient and safe drug delivery system. abstract_id: PUBMED:30132871 Antioxidant effect of human placenta hydrolysate against oxidative stress on muscle atrophy. Sarcopenia, which refers to the muscle loss that accompanies aging, is a complex neuromuscular disorder with a clinically high prevalence and mortality. Despite many efforts to protect against muscle weakness and muscle atrophy, the incidence of sarcopenia and its related permanent disabilities continue to increase. In this study, we found that treatment with human placental hydrolysate (hPH) significantly increased the viability (approximately 15%) of H2O2-stimulated C2C12 cells. Additionally, while H2O2-stimulated cells showed irregular morphology, hPH treatment restored their morphology to that of cells cultured under normal conditions. We further showed that hPH treatment effectively inhibited H2O2-induced cell death. Reactive oxygen species (ROS) generation and Mstn expression induced by oxidative stress are closely associated with muscular dysfunction followed by atrophy. Exposure of C2C12 cells to H2O2 induced abundant production of intracellular ROS, mitochondrial superoxide, and mitochondrial dysfunction as well as myostatin expression via nuclear factor-κB (NF-κB) signaling; these effects were attenuated by hPH. Additionally, hPH decreased mitochondria fission-related gene expression (Drp1 and BNIP3) and increased mitochondria biogenesis via the Sirt1/AMPK/PGC-1α pathway and autophagy regulation. In vivo studies revealed that hPH-mediated prevention of atrophy was achieved predominantly through regulation of myostatin and PGC-1α expression and autophagy. Taken together, our findings indicate that hPH is potentially protective against muscle atrophy and oxidative cell death. abstract_id: PUBMED:3364414 Development of squamous cell carcinoma of the esophagus after endoscopic variceal sclerotherapy. We describe the case of a 45-yr-old white male with portal hypertension and presumed Laennec's cirrhosis who developed squamous cell carcinoma of the esophagus 8 months after completion of a course of endoscopic variceal sclerotherapy. The epidemiology and natural history of esophageal cancer and their relationship to our patient are analyzed. This report emphasizes that squamous cell carcinoma of the esophagus should be considered in the differential diagnosis of postsclerotherapy dysphagia. Further studies will be required to determine whether or not esophageal variceal sclerotherapy is associated coincidently or causally with the development of squamous cell carcinoma of the esophagus in patients at increased risk for this condition. abstract_id: PUBMED:3557313 Depression of peripheral blood monocyte aryl hydrocarbon hydroxylase activity in patients with liver disease: possible involvement of macrophage factors. Aryl hydrocarbon hydroxylase activity was detectable in cultured macrophage monolayers of peripheral blood monocyte origin.
Peripheral blood monocytes were isolated from patients with biopsy-confirmed liver disease and healthy volunteers. Macrophage monolayers were prepared and incubated at 37 degrees C. After 24 hr, the aryl hydrocarbon hydroxylase activity and cellular protein concentration were assayed on cell homogenates. The monocyte aryl hydrocarbon hydroxylase activity in cultured macrophages from normal volunteers was 1.23 +/- 0.16 (n = 19). The aryl hydrocarbon hydroxylase activity in macrophage cultures from patients with biopsy-confirmed liver disease was 0.48 +/- 0.05 (n = 20). This represents a significant (61%) decrease in monocyte aryl hydrocarbon hydroxylase compared to controls. The 20 patients have established cirrhosis or early stage liver disease. The established cirrhosis group includes alpha 1-antitrypsin deficiency-associated cirrhosis; primary biliary cirrhosis; alcoholic (Laennec's) cirrhosis; cryptogenic cirrhosis, and hemochromatosis. Early stage liver disease is attributed to methotrexate (Stage III), early stage primary biliary cirrhosis and alpha 1-antitrypsin deficiency. Our results indicate that the depression in monocyte aryl hydrocarbon hydroxylase activity is greater in patients with established cirrhosis than early stage liver disease. Our results further suggest that cultured monocytes from patients with liver disease spontaneously release soluble factors into the culture medium. Incubation of this medium, containing macrophage factors, with isolated hepatocytes significantly depress hepatocyte aryl hydrocarbon hydroxylase activity compared to medium obtained from cultures of monocytes from normal volunteers.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:11003357 Long-term results of liver transplantation in older patients 60 years of age and older. Background: Advances in perioperative care and immunosuppression have enabled clinicians to broaden the indications for organ transplantation. Advanced age is no longer considered a contraindication to transplantation at most centers. Although short-term studies of elderly liver transplant recipients have demonstrated that the incidence of complications and overall patient survival are similar to those of younger adults, transplant center-specific, long-term data are not available. Methods: From August of 1984 to September of 1997, 91 patients 60 years of age or older received primary liver transplants at the University of Wisconsin, Madison. This group of patients was compared with a group of younger adults (n=387) ranging in age from 18 to 59 years who received primary liver transplants during the same period. The most common indications for transplantation in both groups were Laennec's cirrhosis, hepatitis C, primary biliary cirrhosis, primary sclerosing cholangitis, and cryptogenic cirrhosis. There was no difference in the preoperative severity of illness between the groups. Results. The length of hospitalization was the same for both groups, and there were no significant differences in the incidence of rejection, infection (surgical or opportunistic), repeat operation, readmission, or repeat transplantation between the groups. The only significant difference identified between the groups was long-term survival. Five-year patient survival was 52% in the older group and 75% in the younger group (P<0.05). Ten-year patient survival was 35% in the older group and 60% in the younger group (P<0.05). 
The most common cause of late mortality in elderly liver recipients was malignancy (35.0%), whereas most of the young adult deaths were the result of infectious complications (24.2%). Conclusion: Although older recipients at this center did as well as younger recipients in the early years after liver transplantation, long-term survival results were not as encouraging. abstract_id: PUBMED:37404996 Human placental extract: a potential therapeutic in treating osteoarthritis. Osteoarthritis (OA) is a degenerative joint disease marked by cartilage degradation and loss of function. Recently, there have been increased efforts to attenuate and reverse OA by stimulating cartilage regeneration and preventing cartilage degradation. Human placental extract (HPE) may be an option due to its anti-inflammatory, antioxidant, and growth stimulatory properties. These properties are useful in preventing cell death and senescence, which may optimize in-situ cartilage regeneration. In this review, we discuss the anatomy and physiology of the placenta, as well as explore in vivo and in vitro studies assessing its effects on tissue regeneration. Finally, we assess the potential role of HPE in cartilage regenerative medicine and OA. The Medline database was utilized for all studies that involved the use of HPE or human placenta hydrolysate. Exclusion criteria included articles not written in English, conference reviews, editorials, letters to the editor, surveys, case reports, and case series. HPE had significant anti-inflammatory and regenerative properties in vitro and in vivo. Furthermore, HPE had a role in attenuating cellular senescence and cell apoptosis via reduction of reactive oxidative species both in vitro and in vivo. One study explored the effects of HPE in OA and demonstrated reduction in cartilage catabolic gene expression, indicating HPE's effect in attenuating OA. HPE houses favorable properties that can attenuate and reverse tissue damage. This may be a beneficial therapeutic in OA as it creates a more favorable environment for in-situ cartilage regeneration. More well designed in-vitro and in-vivo studies are needed to define the role of HPE in treating OA. abstract_id: PUBMED:29739458 Anti-stress effects of human placenta extract: possible involvement of the oxidative stress system in rats. Background: Human placenta hydrolysate (hPH) has been utilized to improve menopausal, fatigue, liver function. Its high concentration of bioactive substances is known to produce including antioxidant, anti-inflammatory and anti-nociceptive activities. However, its mechanisms of stress-induced depression remain unknown. Methods: The present study examined the effect of hPH on stress-induced depressive behaviors and biochemical parameters in rats. hPH (0.02 ml, 0.2 ml or 1 ml/rat) was injected intravenously 30 min before the daily stress session in male Sprague-Dawley rats exposed to repeated immobilization stress (4 h/day for 7 days). The depressive-like behaviors of all groups were measured by elevated plus maze (EPM) and forced swimming test (FST). After the behavior tests, brain samples of all groups were collected for the analysis of glutathione peroxidase (GPx) and nicotinamide adenine dinucleotide phosphate-diaphorase (NADPH-d) staining. Results: Treatment with hPH produced a significant decrease of immobility time in the FST compared to the controls. Additionally, hPH treatment elicited a slightly decreasing trend in anxiety behavior on the EPM. 
Furthermore, hPH increased the level of GPx protein in the hippocampus, and decreased the expression of NADPH-d in the paraventricular nucleus (PVN). Conclusion: This study demonstrated that hPH has anti-stress effects via the regulation of nitric oxide (NO) synthase and antioxidant activity in the brain. These results suggest that hPH may be useful in the treatment of stress-related diseases such as chronic fatigue syndrome. abstract_id: PUBMED:7839442 Weaning of immunosuppression in long-term liver transplant recipients. Seventy-two long-surviving liver transplant recipients were evaluated prospectively, including a baseline allograft biopsy for weaning off of immunosuppression. Thirteen were removed from candidacy because of chronic rejection (n = 4), hepatitis (n = 2), patient anxiety (n = 5), or lack of cooperation by the local physician (n = 2). The other 59, aged 12-68 years, had stepwise drug weaning with weekly or biweekly monitoring of liver function tests. Their original diagnoses were PBC (n = 9), HCC (n = 1), Wilson's disease (n = 4), hepatitides (n = 15), Laennec's cirrhosis (n = 1), biliary atresia (n = 16), cystic fibrosis (n = 1), hemochromatosis (n = 1), hepatic trauma (n = 1), alpha-1-antitrypsin deficiency (n = 9), and secondary biliary cirrhosis (n = 1). Most of the patients had complications of long-term immunosuppression, of which the most significant were renal dysfunction (n = 8), squamous cell carcinoma (n = 2) or verruca vulgaris of skin (n = 9), osteoporosis and/or arthritis (n = 12), obesity (n = 3), hypertension (n = 11), and opportunistic infections (n = 2). When azathioprine was a third drug, it was stopped first. Otherwise, weaning began with prednisone, using the results of corticotropin stimulation testing as a guide. If adrenal insufficiency was diagnosed, patients reduced to < 5 mg/day prednisone were considered off of steroids. The baseline agents (azathioprine, cyclosporine, or FK506) were then gradually reduced in monthly decrements. Complete weaning was accomplished in 16 patients (27.1%) with 3-19 months drug-free follow-up, is progressing in 28 (47.4%), and failed in 15 (25.4%) without graft losses or demonstrable loss of graft function from the rejections. This and our previous experience with self-weaned and other patients off of immunosuppression indicate that a significant percentage of appropriately selected long-surviving liver recipients can unknowingly achieve drug-free graft acceptance. Such attempts should not be contemplated until 5-10 years posttransplantation and then only with careful case selection, close monitoring, and prompt reinstitution of immunosuppression when necessary. Answer: The provided abstracts do not contain any information regarding the effect of fetal gender on cytotrophoblast cell activity in the human term placenta. 
The abstracts cover a range of topics including the history of fetal auscultation (PUBMED:7033346), the regenerative effects of human placenta hydrolysate on liver (PUBMED:30930420), the development of nanoparticles for drug delivery using dextran and placenta hydrolysate (PUBMED:32072566), the antioxidant effects of human placenta hydrolysate on muscle atrophy (PUBMED:30132871), the development of squamous cell carcinoma of the esophagus after endoscopic variceal sclerotherapy (PUBMED:3364414), the depression of monocyte aryl hydrocarbon hydroxylase activity in patients with liver disease (PUBMED:3557313), the long-term results of liver transplantation in older patients (PUBMED:11003357), the potential therapeutic use of human placental extract in treating osteoarthritis (PUBMED:37404996), the anti-stress effects of human placenta extract (PUBMED:29739458), and the weaning of immunosuppression in long-term liver transplant recipients (PUBMED:7839442). None of these abstracts discuss the impact of fetal gender on the activity of cytotrophoblast cells in the placenta.
Instruction: Can training improve the results with infrared tympanic thermometers? Abstracts: abstract_id: PUBMED:9311408 Can training improve the results with infrared tympanic thermometers? Background: Infrared tympanic thermometry (ITT) is a method for body temperature measurement. Correct measuring technique is said to be important to achieve good results with this method. The objective of this study was to investigate the accuracy and effect of training in the use of infrared tympanic thermometry (ITT) on the measurement results. Method: Nurses trained in the use of ITT, and nurses not trained, performed measurement sequences on 65 patients: one rectal and two ITT measurements in each sequence. Results: Mean rectal temperatures were significantly (P < 0.01) higher than with ITT (0.44 ± 0.42 (SD) °C for trained, 0.56 ± 0.4 (SD) °C for untrained). The coefficient of repeatability for ITT measurements was ± 0.54 °C for trained nurses, and ± 0.48 °C for untrained. With ITT temperatures adjusted upwards by 0.5 °C, the sensitivity of ITT for detecting fever as defined by rectal measurements would be 70% for trained, and 54% for untrained nurses. Repeatability and sensitivity for trained and untrained nurses were not significantly (P > 0.05) different. Conclusion: Training had little effect on the accuracy of the measurements. According to our results, ITT is often unreliable and should be used with caution. abstract_id: PUBMED:22319287 Evaluation of performance and uncertainty of infrared tympanic thermometers. Infrared tympanic thermometers (ITTs) are easy to use and have a quick response time. They are widely used for temperature measurement of the human body. The accuracy and uncertainty of measurement are the important performance indicators for these meters. The performance of two infrared tympanic thermometers, Braun THT-3020 and OMRON MC-510, was evaluated in this study. The cell of a temperature calibrator was modified to serve as the blackbody temperature standard. The errors of measurement for the two meters were reduced by the calibration equation. The predictive values could meet the requirements of the ASTM standard. The sources of uncertainty include the standard deviations of replication at fixed temperature or the predicted values of the calibration equation, reference standard values and resolution. The uncertainty analysis shows that the uncertainty of the calibration equation is the main source of combined uncertainty. Ambient temperature did not have significant effects on the measured performance. The calibration equations could improve the accuracy of ITTs. However, these equations did not improve the uncertainty of ITTs. abstract_id: PUBMED:20736400 Accuracy of tympanic and infrared skin thermometers in children. Background: Rectal measurement is considered a gold standard in many healthcare systems for body temperature measurement in children. Although this method has several disadvantages, an ideal alternative thermometer has not yet been introduced. However, tympanic and infrared skin thermometers are potential alternatives. Methods: A prospective cohort study was performed including 100 children between 0 and 18 years of age admitted to the general paediatric ward of Spaarne Hospital in The Netherlands between January and March 2009.
The objectives of this study are to evaluate the accuracy of tympanic and two types of infrared skin thermometers (Beurer and Thermofocus) compared to rectal measurement and furthermore to evaluate the influence of different variables on temperature measurements. Results: Compared to rectal measurement (37.56°C), the mean temperatures of the tympanic (37.29°C), Beurer (36.79°C) and Thermofocus (37.30°C) thermometers differed significantly (p<0.001). Mean and SD of differences between rectal temperature and temperature measured with these alternative devices varied significantly (p<0.001). Sensitivity, specificity, positive and negative predictive values for detecting rectal fever measured with the tympanic, Beurer and Thermofocus thermometers are unacceptable, especially for the Beurer thermometer. This difference in temperature between rectal and the alternative thermometers remained after stratification on gender, age, skin colour and otoscopic abnormalities. Conclusions: In this study the authors demonstrated that the tympanic, Beurer and Thermofocus thermometers cannot reliably predict rectal temperature. Therefore the authors do not advise replacement of rectal measurement as the gold standard for detecting fever in children by one of these devices. When rectal measurement is not used, the infrared skin thermometers appear to perform less well than tympanic measurements. abstract_id: PUBMED:10110257 Laboratory and hospital testing of new infrared tympanic thermometers. A patented approach to infrared thermometry based on the use of a standard pyrosensor has resulted in the development of two new infrared tympanic thermometers, one for professional use, the other for home use. Both were tested to evaluate accuracy in the laboratory and to evaluate equivalence to standards, correlation to standards, and precision in human subjects. Accuracy was found to be well within ASTM standards on both models. Mean ear temperatures were 0.2 degrees C below oral and 0.7 degrees C below bladder temperature. Correlations between ear and oral and ear and bladder temperatures were r = .77 to .84. Repeatability in the same ear was very high at r = .95 (left) and .97 (right). Reproducibility between left and right ear ranged from r = .89 to .92. abstract_id: PUBMED:22860884 Can we trust the new generation of infrared tympanic thermometers in clinical practice? Aims And Objectives: To explore the reliability and validity of the new generation of infrared tympanic thermometers, comparing with rectal and core temperature, and to decide their applicability to clinical practice. Background: Digital contact thermometers for rectal measurements and infrared tympanic thermometers are the most common way to measure patients' temperature. Previous studies of the infrared tympanic thermometers revealed misdiagnosis, and validity of early models was questioned. Design: Reliability and validity study. Methods: Temperature was measured with two infrared tympanic thermometers brands in both ears and compared with rectal temperature twice a day at the ward (n = 200). At the intensive care unit, patients (n = 42) underwent the same measurement procedures every fourth hour for 24 hours. In addition, core temperature was measured. Statistical analyses included descriptive and mixed models analyses. Results: Ward: Infrared tympanic thermometers measured the temperature lower than the rectal temperature. Descriptive statistics indicate higher variation in temperature measurements made in the ear. 
No statistically significant difference in temperature was found for left ear vs. right ear. Intensive care unit: The mean rectal temperature was higher than the mean core and ear temperature. Mixed models analyses of the temperatures at the ward and the intensive care unit showed the same overall trends, but with less discrepancy between the two infrared tympanic thermometers brands, compared with the rectal temperature. Only rectal temperature measurements differed significantly from the core temperature. Conclusion: Our study shows good reliability using the new generation of infrared tympanic thermometers. We found good agreement between core and infrared tympanic thermometers at the intensive care unit, but the measuring inaccuracy for infrared tympanic thermometers was greater than expected. Relevance To Clinical Practice: The new generation of infrared tympanic thermometers may be used in clinical practice, but it is important to do repeatedly measurements if there is discrepancy between the temperature and the observation of the patient condition. abstract_id: PUBMED:24127699 Accuracy of tympanic and forehead thermometers in private paediatric practice. Aim: To compare infrared tympanic and infrared contact forehead thermometer measurements with traditional rectal digital thermometers. Methods: A total of 254 children (137 girls) aged one to 24 months (median 7 months) consulting a private paediatric practice because of fever were prospectively recruited. Body temperature was measured using the three different devices. Results: The median and interquartile range for rectal, tympanic and forehead thermometers were 37.6 (37.1-38.4)°C, 37.5 (37.0-38.1)°C and 37.5 (37.1-37.9)°C, respectively (p < 0.01). The limits of agreement in the Bland-Altman plots were -0.73 to +1.04°C for the tympanic thermometer and -1.18 to +1.64°C for the forehead thermometer. The specificity of both the tympanic and forehead thermometers for detecting fever above 38°C was good, but sensitivity was low. Forehead measurements were susceptible to the use of a radiant warmer. Conclusion: Both the tympanic and forehead devices recorded lower temperatures than the rectal thermometers. The limits of agreement were particularly wide for the forehead thermometer and considerable for the tympanic thermometer. In the absence of valid alternatives, because of the ease to use and little degree of discomfort, tympanic thermometers can still be used with some reservations. Forehead thermometers should not be used in paediatric practice. abstract_id: PUBMED:9290138 An assessment of infrared tympanic thermometers for body temperature measurement. This article provides an experimental assessment of three commercially available clinical thermometers, using different thermal infrared sensors. This kind of thermometer measures body temperature by detecting infrared radiation from the tympanic membrane. These thermometers are growing in popularity thanks to their simplicity of use, rapid response and minimal distress to the patient. The purpose of the laboratory tests presented here was to assess the effect of varying ambient temperature and varying simulated patient temperature on the performance of the three infrared tympanic thermometers. abstract_id: PUBMED:8955971 A comparison of four infrared tympanic thermometers with tympanic membrane temperatures measured by thermocouples. 
Purpose: To compare measurements made with four infrared tympanic thermometers (Genius, Thermopit, Quickthermo, and Thermoscan) with those recorded from thermocouples positioned in the contralateral ear. Methods: Four tympanic thermometers were evaluated in 50 healthy volunteers (12 female and 38 male). Temperatures were measured, in random order, at the right tympanic membrane four times and the highest temperature was considered to be the true value measured by each thermometer. The control temperature was measured at the left tympanic membrane using Mon-a-Therm thermocouples. Results: The tympanic membrane temperature measured by Genius correlated best with the Mon-a-therm measurement (TM) (r = 0.74). The tympanic membrane temperatures measured by Thermopit, Quickthermo, and Thermoscan correlated moderately with TM (r = 0.56, 0.63, and 0.58, respectively). Mean differences between TM and each temperature (TG, TTP, TQ, and TTS) were -0.3, 0.73, 0.42, and -0.3 degrees C, respectively. Likewise standard deviations were 0.33, 0.37, 0.35, and 0.35. Conclusion: We conclude that all but the Thermopit (TTP) are similarly useful for the management of patients during anaesthesia. abstract_id: PUBMED:11044005 The accuracy of oral predictive and infrared emission detection tympanic thermometers in an emergency department setting. Objective: To assess the accuracy of an oral predictive thermometer and an infrared emission detection (IRED) tympanic thermometer in detecting fever in an adult emergency department (ED) population, using an oral glass mercury thermometer as the criterion standard. Methods: This was a single-center, nonrandomized trial performed in the ED of a metropolitan tertiary referral hospital with a convenience sample of 500 subjects. The temperature of each subject was taken by an oral predictive thermometer, an IRED tympanic thermometer set to "oral" equivalent, and an oral glass mercury thermometer (used as the criterion standard). A fever was defined as a temperature of 37.8 degrees C or higher. The subject's age, sex, triage category, and diagnostic group were also recorded. Sensitivity, specificity, positive and negative likelihood ratios, positive and negative predictive values, and corresponding 95% confidence intervals were calculated. Logistic regression was used to identify predictors of fever. Results: The sensitivities and specificities for detection of fever of the predictive and the IRED tympanic thermometers were similar (sensitivity 85.7%/88.1% and specificity 98.7%/95.8%, respectively). The predictive thermometer had a better positive predictive value (85.7%) compared with the IRED tympanic thermometer (66.1%). The positive and negative likelihood ratios for the predictive oral thermometer were 65 and 0.14, respectively, and for the IRED tympanic thermometer 21 and 0.12, respectively, indicating that the predictive thermometer will "miss" 1 in about 7 fevers and the IRED tympanic thermometer will "miss" 1 in about 8 fevers. Conclusions: Although quick and convenient, oral predictive and IRED tympanic thermometers give readings that cannot always be relied on in the detection of fever. If we are to continue using electronic thermometers in the ED setting, we need to recognize their limitations and maintain the importance of our clinical judgment. abstract_id: PUBMED:15373157 Reviewing the effectiveness of tympanic thermometers. Tympanic thermometers have become a popular alternative to glass mercury thermometers for recording patients' temperatures. 
This article reviews studies that have been performed to audit their use in clinical practice. It examines the accuracy of tympanic thermometers and discusses ways to improve the use of this instrument and the implications for nursing practice. Answer: Training does not appear to significantly improve the results with infrared tympanic thermometers (ITT). A study investigating the accuracy and effect of training in the use of ITT found that trained nurses did not achieve significantly better repeatability or sensitivity in detecting fever compared to untrained nurses. The mean rectal temperatures were significantly higher than those measured with ITT for both trained and untrained nurses, and the sensitivity of ITT for detecting fever was only 70% for trained and 54% for untrained nurses. The study concluded that training had little effect on the accuracy of the measurements and that ITT is often unreliable and should be used with caution (PUBMED:9311408). This finding is consistent with other studies that have evaluated the performance of ITT. For instance, a study evaluating the performance of two ITT brands found that calibration equations could improve the accuracy of ITTs, but these equations did not improve the uncertainty of ITTs (PUBMED:22319287). Another study demonstrated that tympanic, Beurer, and Thermofocus thermometers could not reliably predict rectal temperature, indicating that these devices should not replace rectal measurement as the gold standard for detecting fever in children (PUBMED:20736400). Moreover, a study assessing the reliability and validity of the new generation of ITTs found that while there was good reliability, the measuring inaccuracy for ITTs was greater than expected, suggesting that repeated measurements might be necessary when there is a discrepancy between the temperature and the patient's condition (PUBMED:22860884). In summary, the evidence suggests that while training may slightly improve the use of ITTs, it does not significantly enhance the accuracy of the measurements, and ITTs should be used with caution, especially when accurate body temperature measurement is critical (PUBMED:9311408).
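The thermometer studies above keep returning to the same screening statistics (sensitivity, specificity, predictive values, likelihood ratios) and to Bland-Altman limits of agreement. The Python sketch below shows the standard formulas; the 2x2 counts are hypothetical and chosen only so that the sensitivity of about 85.7% reproduces the "miss 1 in about 7 fevers" arithmetic of PUBMED:11044005.

import numpy as np

def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 screening metrics for a device against a reference method."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),        # positive predictive value
        "npv": tn / (tn + fn),        # negative predictive value
        "lr_pos": sens / (1 - spec),  # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,  # negative likelihood ratio
    }

def bland_altman_limits(a, b) -> tuple:
    """95% limits of agreement between two paired measurement series (PUBMED:24127699)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

# With sensitivity 36/42 = 0.857, the miss rate is 1 - 0.857 = 0.143,
# i.e. roughly 1 fever missed in every 7: the arithmetic behind the
# "miss 1 in about 7 fevers" statement for the oral predictive thermometer.
m = screening_metrics(tp=36, fp=8, fn=6, tn=450)
print(round(1 / (1 - m["sensitivity"])))  # ~7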
Instruction: Can factors related to mortality be used to predict the follow-up health-related quality of life (HRQoL) in cardiac surgery patients? Abstracts: abstract_id: PUBMED:23669053 Can factors related to mortality be used to predict the follow-up health-related quality of life (HRQoL) in cardiac surgery patients? Background: Optimal selection of patients and choice of treatment methods in cardiac surgery calls for methods to predict outcome both in terms of mortality and health-related quality of life (HRQoL). Our target was to evaluate whether indicators predicting mortality can also be used to predict follow-up HRQoL. Methods: Preoperative and intensive care-related data of 571 elective cardiac surgery patients treated in the Helsinki University Central Hospital were used to predict, in a stepwise (forward) binary logistic regression, the probability of being dead at six months after operation. Furthermore, Tobit regression models were employed to predict the follow-up HRQoL of patients, using also treatment complications and patients' experiences of pain and restlessness during treatment as explanatory variables. Results: The EuroSCORE, renal, respiratory and neurological complications as well as urgent sternotomy were all statistically significant predictors of mortality. By contrast, follow-up HRQoL was predicted by the baseline HRQoL, diabetes and male gender as well as experience of pain and restlessness during the ICU stay. Conclusion: Mortality and HRQoL after cardiac surgery appear to be explained by different factors. Pain and restlessness during ICU treatment affect follow-up HRQoL in a negative manner and, as potentially modifiable factors, need attention during treatment. abstract_id: PUBMED:34353087 Evaluating health-related quality of life in gastric cancer patients in Suzhou, China. Background: Health-related quality of life (HRQOL) has become an important part of the evaluation of clinical efficacy and prognosis in gastric cancer. This study aimed to assess the HRQOL of patients with gastric cancer using the five-level EuroQol five-dimensional questionnaire (EQ-5D-5L) and explore the factors influencing patients' perceived quality of life. Identifying these significant factors makes it possible to intervene appropriately to extend patient survival and improve quality of life. Methods: A cross-sectional questionnaire survey was administered to 243 patients with gastric cancer in the First Affiliated Hospital of Suzhou University from December 2018 to December 2020. HRQOL was measured by the Chinese version of the EQ-5D-5L. Nonparametric test analyses and a Tobit regression model were used to identify the independent variables associated with the EQ-5D-5L utility scores. Results: In this research, the mean score was 0.810, and the median was 0.893. Approximately 25% of patients reported no problems at all in any of the five dimensions. Problems in pain and discomfort were the most frequently reported (64.2%). Nonparametric test analyses showed that patients who did not have health insurance, or who had a history of alcohol use, a family history of cancer, had received surgery only, or had an interval of less than 1 week between taking this survey and their last treatment, demonstrated lower EQ-5D-5L scores. The Tobit regression model confirmed that health insurance, family history, and treatment were significantly associated with EQ-5D-5L scores.
Conclusions: The HRQOL of gastric cancer patients can be measured by EQ-5D-5L, and the results may provide a guide for choosing an appropriate individualized treatment plan. abstract_id: PUBMED:25432210 The effect of gender on health-related quality of life and related factors in post-lobectomy lung-cancer patients. Purpose: While studies have documented gender differences by histologic type among lung cancer patients, the effect of these differences on the health-related quality of life (HRQoL) of post-lobectomy lungcancer patients and related factors remain uncertain. This study examines gender-specific HRQoL and related factors in post-lobectomy lung-cancer patients. Methods: A cross-sectional study design was applied. A convenience sample of 231 post-lobectomy lungcancer patients was recruited from the thoracic surgery outpatient departments of two teaching hospitals in Taipei, Taiwan from March to December 2012. Patients performed a spirometry test and completed instruments that included a Beck Depression Inventory-II, an Interpersonal Support Evaluation List, and the symptom and function scales of the Quality of Life Questionnaire. Data analysis used descriptive statistics, including mean and standard deviations, frequency, and percentage values. Independent-sample Student's t-tests and multivariate analyses were used for comparative purposes. Results: This study confirmed a significant gender effect on HRQoL and HRQoL-related factors such as marital status, religious affiliation, smoking status, histologic type, symptoms, pulmonary function, depression, and family support. Moreover, multivariate analysis found gender to be a significant determinant of the HRQoL aspects of physical functioning, emotional functioning, and cognitive functioning. Finally, results indicated that factors other than gender were also significant determinants of HRQoL. Conclusion: Gender impacts the HRQoL and related factors of postoperative lung-cancer patients. Therefore, gender should be considered in assessing and addressing the individual care needs of these patients in order to attain optimal treatment outcomes. abstract_id: PUBMED:36923016 Effect of Psychosocial, Behavioral, and Disease Characteristics on Health-Related Quality of Life (HRQoL) After Breast Cancer Surgery: A Cross-Sectional Study of a Regional Australian Population. Background: Increasing long-term breast cancer survivorship has highlighted the importance of patient-reported outcomes such as health-related quality of life (HRQoL) in addition to traditional outcomes that were used to define successful operative management. This study aimed to describe HRQoL in patients who underwent breast cancer resection in a regional Australian setting and identify the psychosocial, demographic, and operative characteristics associated with poor HRQoL. Methods: Consecutive patients who underwent breast cancer resection between 2015 and 2022 were included. Patients were asked to complete a survey instrument that included validated measures of HRQoL, emotional distress, fear of cancer recurrence (FCR), and social support. Demographic, disease, and operative data were collected from the medical record of the respondents. Results: Forty-six patients completed the survey (100% female, mean age = 62.68 years). Most HRQoL domains were significantly lower than an Australian reference population. 
HRQoL was more strongly associated with psychosocial factors (emotional distress, FCR, and social support) but was also associated with socioeconomic status, stage of cancer at presentation, and surgical complications. HRQoL was not related to breast conservation, management of the Axilla, or time since operation. Conclusion: Long-term changes in HRQoL should be considered during the management and surveillance of breast cancer patients in regional Australia. abstract_id: PUBMED:25515950 Health-related quality of life, personality and choice of coping are related in renal cell carcinoma patients. Objective: To investigate whether health-related quality of life (HRQoL) depends on psychosocial factors, rather than on factors related to the cancer treatment, this study explored the associations between HRQoL, personality, choice of coping and clinical parameters in surgically treated renal cell carcinoma (RCC) patients. Materials And Methods: After exclusions (e.g. death, dementia), 260 patients were found to be eligible and invited to participate. The response rate was 71%. HRQoL was determined by the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ-C30), personality by the Eysenck Personality Inventory and coping by the COPE Questionnaire. Given tumour treatment, TNM stage and patient-reported comorbidity were also determined. The HRQoL indices were also summarized in general quality of life/health, functional sum and symptom sum scores. Results: EORTC C30 sum scores were negatively associated with the personality trait of neuroticism [common variance (CV) 19-36%]. Avoidant choice of coping inversely accounted for 9-18% of the total HRQoL variance, while reported coping by humour was to some extent negatively associated with HRQoL score (CVmax 4%). Indeed, all of the quality of life indices except for one were significantly negatively correlated with neuroticism and avoidance coping. Patients with low HRQoL due to treatment, secondary to flank or open surgery, reported a closer association between problem-focused choice of coping and HRQoL than the other patients. Moreover, present comorbidities were uniquely associated with a lowered HRQoL. Conclusions: HRQoL is related to treatment-related factors in RCC patients, but shown here to be more strongly associated with psychological factors and present comorbidity. These findings suggest that attention should be paid to supportive treatment of RCC patients. abstract_id: PUBMED:35737143 Anxiety, depression, health-related quality of life, and mortality among colorectal patients: 5-year follow-up. Purpose: Health-related quality of life (HRQoL) measurement represents an important outcome in cancer patients. We describe the evolution of HRQoL over a 5-year period in colorectal cancer patients, identifying predictors of change and how they relate to mortality. Methods: Prospective observational cohort study including colorectal cancer (CRC) patients having undergone surgery in nineteen public hospitals who were monitored from their diagnosis, intervention and at 1-, 2-, 3-, and 5-year periods thereafter by gathering HRQoL data using the EuroQol-5D-5L (EQ-5D-5L), European Organization for Research and Treatment of Cancer's Quality of Life Questionnaire-Core 30 (EORTC-QLQ-C30), and Hospital Anxiety and Depression Scale (HADS) questionnaires. Multivariable generalized linear mixed models were used. 
Results: Predictors of EuroQol-5D-5L (EQ-5D-5L) changes were having worse baseline HRQoL; being female; higher Charlson index score (more comorbidities); complications during admission and 1 month after surgery; having a stoma after surgery; and needing or being in receipt of social support at baseline. For EORTC-QLQ-C30, predictors of changes were worse baseline EORTC-QLQ-C30 score; being female; higher Charlson score; complications during admission and 1 month after admission; receiving adjuvant chemotherapy; and having a family history of CRC. Predictors of changes in HADS anxiety were being female and having received adjuvant chemotherapy. Greater depression was associated with greater baseline depression; being female; higher Charlson score; having complications 1 month after intervention; and having a stoma. A deterioration in all HRQoL questionnaires in the previous year was related to death in the following year. Conclusions: These findings should enable preventive follow-up programs to be established for such patients in order to reduce their psychological distress and improve their HRQoL to as great an extent as possible. ClinicalTrials.gov Identifier: NCT02488161. abstract_id: PUBMED:38105481 Management and health-related quality of life among patients with prostate cancer in a Kenyan tertiary health facility. Introduction: Advances made in the screening, diagnosis and management of prostate cancer have improved the survival rates of the patients. However, many of these treatments, including surgery, radiotherapy, and pharmacotherapy, have an impact on the subsequent health-related quality of life (HRQoL) of these patients. Since it is an important prognostic factor of survival, failure to evaluate the HRQoL and its predictors in these patients typically results in long-term deficits in their overall well-being, that is, their physical, social, emotional, and mental health. The objective of this study was to evaluate the management and HRQoL among patients with prostate cancer at Kenyatta National Hospital. Methods: This was a descriptive cross-sectional study. A sample of 62 patients who met the eligibility criteria was selected through simple random sampling on the respective clinic days of the cancer treatment centre and urology clinic. Data were collected through a pre-tested structured questionnaire and the EORTC-QLQ-C30 and EORTC-QLQ-PR25 HRQoL tools, and analysed using STATA version 13 software. Descriptive analysis was used to summarise the continuous and categorical variables. Spearman's rho (rs) correlation was used to determine the predictors of HRQoL based on the strength and significance of association at the 0.05 level of significance. Results: The mean age of the participants was 70.5 (±7.35) years. The majority (52, 83.9%) of the patients had a prostate specific antigen (PSA) above 20 ng/ml. Twenty-one (33.9%) were graded as Gleason group 5 and 41 (66.1%) had stage IV disease at diagnosis. Fifty (80.9%) participants were on hormonal therapy, with most of them being on combined androgen blockade. The overall HRQoL was 65.1. Fatigue, one of the major complaints among these patients, was negatively associated with physical functioning (p = 0.0005), role functioning (p = 0.0026), social functioning (p = 0.0001), financial difficulties (p = 0.0077) and quality of life (p = 0.0050). Conclusion: Fatigue was the most common predictor of poor HRQoL in several scales of measurement.
For those on management, frequent assessment of HRQoL should be carried out and interventions instituted immediately. abstract_id: PUBMED:36409664 Health-related quality of life of children treated for non-syndromic craniosynostosis. Health-related quality of life (HRQoL) allows the acquisition of the subjective perspective of patients regarding their health and function; yet very few studies have evaluated the HRQoL of patients treated for craniosynostosis (CS). In this retrospective, descriptive cohort study, school-aged children (7-16 years) treated for non-syndromic CS were assessed using the Pediatric Quality of Life Inventory (PedsQL) 4.0 Generic Core Scales. Seventy-three patients and their parents responded to the PedsQL (response rate: 80.2%). Patients generally estimated average HRQoL with no difference compared to the normal population sample. Further, no difference in HRQoL was found between treated sagittal (SS) or metopic synostosis. In the SS group, surgical methods involving spring-assisted surgery and pi-plasty were unrelated to HRQoL outcomes. Additionally, HRQoL was highly correlated with intelligence quotient (IQ, r = 0.42; p = 0.0004) and adaptive behavior skills (ABAS, r = 0.57; p = 0.0001). Furthermore, differences were observed in estimated physical function (p = 0.002) and school function (p = 0.012) between self- and proxy reports (i.e. parents estimated child HRQoL as higher than did the children). Children treated for CS have a generally average HRQoL, and neither CS type nor surgical method influenced HRQoL outcomes. Moreover, children and parents estimated HRQoL differently, suggesting the importance of using both self- and proxy reporting in patient-reported measures. HRQoL was strongly related to IQ and ABAS, indicating that the PedsQL can be used as a screening instrument to identify craniofacial patients in need of further psychological assessment. abstract_id: PUBMED:23279591 The impact of early postoperative pain on health-related quality of life. Objectives: To examine how the severity of postoperative pain affects patients' health-related quality of life (HRQoL) at 1 week following surgery and to compare two generic validated HRQoL instruments. Methods: Patients undergoing general or orthopaedic surgery at the Royal London Hospital were randomly sampled. The following patient outcome data were collected: EQ-5D (EuroQol) pre-operatively; the Revised American Pain Society Patient Outcome Questionnaire (APS-POQ-R) at 24 hours postoperation; and EQ-5D, Short-Form-12 (SF-12) and APS-POQ-R at 7 days postoperation. The degree of association between pain and HRQoL was assessed using Pearson's correlation coefficient and multivariate generalized linear regression models. Results: Of the 228 patients included, 166 patients provided data at 7 days. Sixteen percent reported severe pain ≥ 50% of the day at 7 days. The severity of pain on both the APS-POQ-R pain severity and interference and affective impairment domains at 7 days was highly correlated with a decrease in HRQoL as assessed by the SF-12 Physical Component Score (PCS), SF-12 Mental Component Score (MCS), and EQ-5D scores (r = -0.34 to -0.61, P < 0.0001). Multivariate regression analyses showed that, irrespective of confounding factors (e.g., age, gender, and pre-operative HRQoL), patients with severe postoperative pain experience important reductions in both physical and mental well-being domains of their HRQoL.
Conclusions: A proportion of patients continue to experience severe pain at 7 days postoperatively, even after minor surgery. HRQoL is strongly associated with the level of pain and provides additional data on the impact of postsurgery pain on patient's function and well-being. Additional studies are needed to elucidate the interaction between pain severity and HRQoL during the peri-operative period. abstract_id: PUBMED:32388708 Prospective Analysis Regarding Health-Related Quality of Life (HR-QOL) between Morbid Obese Patients Following Bariatric Surgery Versus on a Waiting List. Background: Major impairment of health-related quality of life (HRQoL) is one of the main reasons why obese patients request surgical treatment. Objective: To prospectively analyze the impact of HRQoL between obese patients who underwent surgery and those who were wait-listed. Methods: Between April 2017 and March 2018, 70 surgical and 69 wait-listed patients were interviewed twice, at baseline and at the 12-month follow-up. Quality of life was measured by the SF-12v2 and the Impact of Weight on Quality of Life-Lite (IWQoL-Lite) questionnaires. Sociodemographic-, clinical-, and surgical-related variables were collected. Results: One hundred thirty-nine patients were analyzed, showing similar baseline characteristics but differences in HRQoL. Performing more qualified work improved scores on some aspects of the SF-12 survey. In contrast, women scored worse on the self-esteem domain, and men scored worse on the mental health domain. By group, at the 12-month follow-up, statistically significant differences were found among all aspects of the questionnaires between both groups (P < 0.001) and between baseline and postoperative 12-month follow-up in the surgical group (P < 0.001). Furthermore, scores were lower in all domains in the evolution of wait-listed patients, with statistically significant differences among the Bodily Pain, Emotional Role, Mental Health, and Mental Component Summary Domains (P < 0.05). Conclusion: HRQoL is a multimodal concept that allows the identification of factors impacting obese patients' quality of life. It promotes the benefit of surgery against waiting list delays, which can take up to 4 years in our hospital. Therefore, HRQoL is an important pillar to justify more resources for reducing unacceptable surgical delays. Answer: No, factors related to mortality cannot necessarily be used to predict the follow-up health-related quality of life (HRQoL) in cardiac surgery patients. According to the study with PUBMED:23669053, mortality and HRQoL after cardiac surgery are explained by different factors. While the EuroSCORE, renal, respiratory, and neurological complications, as well as urgent sternotomy, were significant predictors of mortality, follow-up HRQoL was predicted by different factors. These included baseline HRQoL, diabetes, male gender, and the experience of pain and restlessness during the ICU stay. The study concluded that pain and restlessness during ICU treatment negatively affect follow-up HRQoL and, as potentially modifiable factors, need attention during treatment. Therefore, while some factors may be common to both mortality and HRQoL, they are not interchangeable, and each outcome has its unique predictors.
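Two of the abstracts in this record (PUBMED:23669053, PUBMED:34353087) model HRQoL with Tobit regression because EQ-5D utility scores pile up at the ceiling of 1.0 ("full health"). As a rough illustration of what such a model estimates, here is a minimal maximum-likelihood sketch in Python; the toy data and the binary "insured" covariate are invented, and the likelihood is written out by hand on the assumption that no ready-made Tobit estimator is available in the usual Python statistics packages.

import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, X, y, ceiling=1.0):
    """Negative log-likelihood of a Tobit model right-censored at `ceiling`."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)    # parameterize sigma so it stays positive
    mu = X @ beta
    at_ceiling = y >= ceiling    # observations stuck at the ceiling
    ll = np.empty_like(y)
    # Uncensored: ordinary normal density around the linear predictor.
    ll[~at_ceiling] = stats.norm.logpdf(y[~at_ceiling], mu[~at_ceiling], sigma)
    # Censored: probability mass of the latent utility exceeding the ceiling.
    ll[at_ceiling] = stats.norm.logsf(ceiling, mu[at_ceiling], sigma)
    return -ll.sum()

def fit_tobit(x, y, ceiling=1.0):
    X = np.column_stack([np.ones(len(y)), x])   # intercept + predictor(s)
    start = np.append(np.linalg.lstsq(X, y, rcond=None)[0], np.log(y.std()))
    res = optimize.minimize(tobit_negloglik, start, args=(X, y, ceiling))
    return res.x[:-1], np.exp(res.x[-1])        # coefficients, sigma

# Hypothetical toy data mimicking a ceiling effect in EQ-5D utilities.
rng = np.random.default_rng(0)
insured = rng.integers(0, 2, 300).astype(float)
latent = 0.85 + 0.10 * insured + rng.normal(0.0, 0.12, 300)
utility = np.minimum(latent, 1.0)               # observed scores capped at 1.0
print(fit_tobit(insured, utility))

Ordinary least squares on the capped scores would bias the covariate effect toward zero, which is the design reason these studies reach for Tobit rather than linear regression.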
Instruction: Levels of perfluorochemicals in water samples from Catalonia, Spain: is drinking water a significant contribution to human exposure? Abstracts: abstract_id: PUBMED:18763004 Levels of perfluorochemicals in water samples from Catalonia, Spain: is drinking water a significant contribution to human exposure? Background, Aim, And Scope: In recent years, due to a high persistence, biomagnification in food webs, presence in remote regions, and potential toxicity, perfluorochemicals (PFCs) have generated considerable interest. The present study aimed to determine the levels of perfluorooctane sulfonate (PFOS), perfluorooctanoic acid (PFOA), and other PFCs in drinking water (tap and bottled) and river water samples from Tarragona Province (Catalonia, Spain). Materials And Methods: Municipal drinking (tap) water samples were collected from the four most populated towns in the Tarragona Province, whereas samples of bottled waters were purchased from supermarkets. River water samples were collected from the Ebro (two samples), Cortiella, and Francolí Rivers. After pretreatment, PFC analyses were performed by HPLC-MS. Quantification was done using the internal standard method, with recoveries between 68% and 118%. Results: In tap water, PFOS and PFOA levels ranged between 0.39 and 0.87 ng/L (0.78 and 1.74 pmol/L) and between 0.32 and 6.28 ng/L (0.77 and 15.2 pmol/L), respectively. PFHpA, PFHxS, and PFNA were also detected. PFC levels were notably lower in bottled water, where PFOS could not be detected in any sample. Moreover, PFHpA, PFHxS, PFOA, PFNA, PFOS, PFOSA, and PFDA could be detected in the river water samples. PFOS and PFOA concentrations were between <0.24 and 5.88 ng/L (<0.48 and 11.8 pmol/L) and between <0.22 and 24.9 ng/L (<0.53 and 60.1 pmol/L), respectively. Discussion: Assuming a human water consumption of 2 L per day, the daily intake of PFOS and PFOA by the population of the area under evaluation was calculated (0.78-1.74 and 12.6 ng, respectively). It was found that drinking water might be a source of exposure to PFCs as important as the dietary intake of these pollutants. Conclusions: The contribution of drinking water (tap and bottled) to the human daily intake of various PFCs has been compared for the first time with data from dietary intake of these PFCs. It was noted that in certain cases, drinking water can be a source of exposure to PFCs as important as the dietary intake of these pollutants although the current concentrations were similar or lower than those reported in the literature for surface water samples from a number of regions and countries. Recommendations And Perspectives: Further studies should be carried out in order to increase the knowledge of the role of drinking water in human exposure to PFCs. abstract_id: PUBMED:22494245 Human exposure to perfluorinated compounds in Catalonia, Spain: contribution of drinking water and fish and shellfish. In this study, the concentrations of 15 perfluorinated compounds (PFCs) were analyzed in 30 water samples collected in Catalonia (Spain) at three stages of the drinking water treatment process in several water purification plants. In addition, the concentrations of 13 PFCs were determined in samples of fish and shellfish collected from coastal areas of Catalonia. The intake of PFCs through both pathways, drinking water intake and fish and shellfish consumption, was also estimated.
In water samples, the highest mean concentrations corresponded to perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) (1.81 and 2.40 ng/L, respectively), whereas perfluorodecanosulfonate (PFDS) and perfluorotetradecanoic acid (PFTDA) were under their respective limits of detection in all analyzed samples. The results show that although the current treatment processes caused slight reductions in PFC concentrations, these processes did not produce significant changes in the amounts of PFCs already contained in the raw water. Among the analyzed PFCs in fish and shellfish, only seven compounds could be detected in at least one composite sample. PFOS showed the highest mean concentration (2.70 ng/g fw), being detected in all species with the exception of mussels. With regard to PFOA (mean, 0.074 ng/g fw), the highest concentrations were detected in prawn and hake (0.098 and 0.091 ng/g fw, respectively). The current exposure to PFCs through consumption of fish and shellfish indicates that it should not be of concern for consumers. The amounts ingested are well below the recommended tolerable daily intakes, at least for those PFCs for which information is available. abstract_id: PUBMED:18547612 Perfluorochemicals in water reuse. Faced with freshwater shortages, water authorities are increasingly utilizing wastewater reclamation to augment supplies. However, concerns over emerging trace contaminants that persist through wastewater treatment need to be addressed to evaluate potential risks. In the present study, perfluorinated surfactant residues were characterized in recycled water from four California wastewater treatment plants that employ tertiary treatment and one that treats primary sewage in a wetland constructed for both treatment and wildlife habitat. Effluent concentrations were compared with surface and groundwater from a creek where recycled water was evaluated as a potential means to augment flow (Upper Silver and Coyote Creeks, San Jose, CA). In the recycled water, 90-470 ng/l perfluorochemicals were detected, predominantly perfluorooctanoate (PFOA; 10-190 ng/l) and perfluorooctanesulfonate (PFOS; 20-190 ng/l). No significant removal of perfluorochemicals was observed in the wetland (total concentration ranged 100-170 ng/l across various treatment stages); in this case, 2-(N-ethylperfluorooctanesulfonamido) acetic acid (N-EtFOSAA), perfluorodecanesulfonate (PFDS), and PFOS were dominant. Though there is currently no wastewater discharge into the creeks, perfluorochemicals were found in the surface water and underlying groundwater at a total of 20-150 ng/l with PFOS and PFOA again making the largest contribution. With respect to ecotoxicological effects, perfluorochemical release via recycled water into sensitive ecosystems requires evaluation. abstract_id: PUBMED:19685096 Levels of perfluorinated chemicals in municipal drinking water from Catalonia, Spain: public health implications. In this study, the concentrations of 13 perfluorinated compounds (PFCs) (PFBuS, PFHxS, PFOS, THPFOS, PFHxA, PFHpA, PFOA, PFNA, PFDA, PFUnDA, PFDoDA, PFTDA, and PFOSA) were analyzed in municipal drinking water samples collected at 40 different locations from 5 different zones of Catalonia, Spain. Detection limits ranged between 0.02 (PFHxS) and 0.85 ng/L (PFOA). The most frequent compounds were PFOS and PFHxS, which were detected in 35 and 31 samples, with maximum concentrations of 58.1 and 5.30 ng/L, respectively.
PFBuS, PFHxA, and PFOA were also frequently detected (29, 27, and 26 samples, respectively), with maximum levels of 69.4, 8.55, and 57.4 ng/L. In contrast, PFDoDA and PFTDA could not be detected in any sample. The most contaminated water samples were found in the Barcelona Province, whereas none of the analyzed PFCs could be detected in two samples (Banyoles and Lleida), and only one PFC could be detected in four of the samples. Assuming a human water consumption of 2 L/day, the maximum daily intake of PFOS and PFOA from municipal drinking water would be, for a subject of 70 kg of body weight, 1.7 and 1.6 ng/kg/day. This is clearly lower than the respective Tolerable Daily Intake set by the European Food Safety Authority. In all samples, PFOS and PFOA also showed lower levels than the short-term provisional health advisory limit for drinking water (200 ng PFOS/L and 400 ng PFOA/L) set by the US Environmental Protection Agency. Although PFOS and PFOA concentrations found in drinking water in Catalonia are not expected to pose human health risks, safety limits for exposure to the remaining PFCs are clearly necessary, as health-based drinking water concentration protective for lifetime exposure is set to 40 ng/L for PFOA. abstract_id: PUBMED:28800414 Global distribution of perfluorochemicals (PFCs) in potential human exposure source-A review. Human exposure to perfluorochemicals (PFCs) has attracted mounting attention due to their potential harmful effects. Breathing, dietary intake, and drinking are believed to be the main routes for PFC entering into human body. Thus, we profiled PFC compositions and concentrations in indoor air and dust, food, and drinking water with detailed analysis of literature data published after 2010. Concentrations of PFCs in air and dust samples collected from home, office, and vehicle were outlined. The results showed that neutral PFCs (e.g., fluorotelomer alcohols (FTOHs) and perfluorooctane sulfonamide ethanols (FOSEs)) should be given attention in addition to PFOS and PFOA. We summarized PFC concentrations in various food items, including vegetables, dairy products, beverages, eggs, meat products, fish, and shellfish. We showed that humans are subject to the dietary PFC exposure mostly through fish and shellfish consumption. Concentrations of PFCs in different drinking water samples collected from various countries were analyzed. Well water and tap water contained relatively higher PFC concentrations than other types of drinking water. Furthermore, PFC contamination in drinking water was influenced by the techniques for drinking water treatment and bottle-originating pollution. abstract_id: PUBMED:8085048 Manganese in drinking water and its contribution to human exposure Methylcyclopentadienyl manganese tricarbonyl (MMT) has been used in Canada since 1976 as an additive in unleaded gasoline. The combustion of MMT leads to the emission of Mn oxides to the environment and may represent a potential risk to public health. It therefore seems important to assess the associated Mn exposure. The present study is part of a broader research program on total human exposure to Mn and aims specifically at assessing the level of exposure to Mn and other metals via drinking water. A comparative study was performed between two groups of workers (garage mechanics and blue collar workers of the University of Montreal) differentiated by their exposure to inhaled Mn. 
For Pb, Cu and Zn in residential tap water, significant differences were observed between the first sample and the one taken after one minute of flow. A significant difference was also found between the two groups of workers (combined flow time) for Mn, Cu and Ca. The Mn contribution from water is estimated to be 1% of the total dose from ingested food. This low exposure may become important (17%) for persons drinking well water, especially if we consider interactions between metals following multimedia exposure. abstract_id: PUBMED:32836148 Photochemical decomposition of perfluorochemicals in contaminated water. Perfluorochemicals (PFCs) are a set of chemicals containing C-F bonds, which are of concern due to their bioaccumulation, persistence, and toxicity. Photocatalytic approaches have been widely studied for the effective removal of PFCs due to their mild operating conditions. This review aims to provide a comprehensive and up-to-date summary of the homogeneous and heterogeneous photocatalytic processes for PFC removal. Specifically, the homogeneous photocatalytic methods for remediating PFCs are firstly discussed, including the generation of hydrated electrons (eaq−) and their performance and mechanisms for photo-reductive destruction of PFCs, the active species responsible for photo-oxidative degradation of PFCs and the corresponding mechanisms, and metal-ion-mediated processes (mainly using Fe(III)) for the remediation of PFCs. The influences of the molecular structures of PFCs and of the water matrix (such as dissolved oxygen, humic acid, nitrate, and chloride) on the homogeneous photocatalytic degradation of PFCs are also discussed. For heterogeneous photocatalytic processes, various semiconductor photocatalysts used for the decomposition of perfluorooctanoic acid (PFOA) are then discussed in terms of their specific properties benefiting photocatalytic performance. The preparation methods for optimizing the performance of photocatalysts are also overviewed. Moreover, the photo-oxidative and photo-reductive pathways are summarized for remediating PFOA in the presence of different semiconductor photocatalysts, including the active species responsible for the degradation. We finally put forward several key perspectives for the photocatalytic removal of PFCs to promote their practical application in PFC-containing wastewater treatment, including the treatment of PFC degradation products such as fluoride ions, and the development of noble-metal-free photocatalysts that could efficiently remove PFCs under solar light irradiation. abstract_id: PUBMED:30776750 Relationship between perfluorooctanoate and perfluorooctane sulfonate blood concentrations in the general population and routine drinking water exposure. In regions with heavily contaminated drinking water, a significant contribution of drinking water to overall human perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) exposure has been well documented. However, the relationship of PFOA/PFOS blood concentrations in the general population to routine drinking water exposure is not well characterized. This study determined the PFOA and PFOS concentrations in 166 drinking water samples across 28 cities in China. For 13 of the studied cities, PFOA and PFOS concentrations were analyzed in 847 human blood samples which were collected in parallel with the drinking water samples.
The geometric mean PFOA and PFOS concentrations in drinking water were 2.5 ± 6.2 ng/L and 0.7 ± 11.7 ng/L, and population-weighted geometric mean blood concentrations were 2.1 ± 1.2 ng/mL and 2.6 ± 1.3 ng/mL, respectively. We found a significant correlation between the PFOA concentration in drinking water and blood (r = 0.87, n = 13, p < 0.001). The total daily intakes of PFOA (0.24-2.13 ng/kg/day) and PFOS (0.19-1.87 ng/kg/day) were back-calculated from the blood concentrations with a one-compartment toxicokinetic model. We estimated relative source contributions (RSCs) of drinking water to total daily intake in China of 23 ± 3% for PFOA and 12.7 ± 5.8% for PFOS. Using the mean RSCs, we derived the health advisory values of 85 ng/L for PFOA and 47 ng/L for PFOS in China. abstract_id: PUBMED:24262873 Human exposure to arsenic from drinking water in Vietnam. Vietnam is an agricultural country with a population of about 88 million, with some 18 million inhabitants living in the Red River Delta in Northern Vietnam. The present study reports the chemical analyses of 68 water and 213 biological (human hair and urine) samples conducted to investigate arsenic contamination in tube well water and human arsenic exposure in four districts (Tu Liem, Dan Phuong, Ly Nhan, and Hoai Duc) in the Red River Delta. Arsenic concentrations in groundwater in these areas were in the range of <1 to 632 μg/L, with severe contamination found in the communities Ly Nhan, Hoai Duc, and Dan Phuong. Arsenic concentrations were markedly lowered in water treated with sand filters, except for groundwater from Hoai Duc. Human hair samples had arsenic levels in the range of 0.07-7.51 μg/g, and among residents exposed to arsenic levels ≥50 μg/L, 64% of them had hair arsenic concentrations higher than 1 μg/g, which is a level that can cause skin lesions. Urinary arsenic concentrations were 4-435 μg/g creatinine. Concentrations of arsenic in hair and urine increased significantly with increasing arsenic content in drinking water, indicating that drinking water is a significant source of arsenic exposure for these residents. The percentage of inorganic arsenic (IA) in urine decreased with age, whereas the opposite trend was observed for monomethylarsonic acid (MMA) in urine. Significant co-interactions of age and arsenic exposure status were also detected for concentrations of arsenic in hair and the sum of IA, MMA, and dimethylarsinic acid (DMA) in urine and %MMA. In summary, this study demonstrates that a considerable proportion of the Vietnamese population is exposed to arsenic levels of chronic toxicity, even if sand filters reduce exposure in many households. Health problems caused by arsenic ingestion through drinking water are increasingly reported in Vietnam. abstract_id: PUBMED:26849047 Antibiotics in Drinking Water in Shanghai and Their Contribution to Antibiotic Exposure of School Children. A variety of antibiotics have been found in aquatic environments, but antibiotics in drinking water and their contribution to antibiotic exposure in humans are not well explored. For this, representative drinking water samples and 530 urine samples from schoolchildren were selected in Shanghai, and 21 common antibiotics (five macrolides, two β-lactams, three tetracyclines, four fluoroquinolones, four sulfonamides, and three phenicols) were measured in water and urine samples by isotope dilution two-dimensional ultraperformance liquid chromatography coupled with high-resolution quadrupole time-of-flight mass spectrometry.
Drinking water included 46 terminal tap water samples from different spots in the distribution system of the city, 45 bottled water samples from 14 common brands, and eight barreled water samples of different brands. Of 21 antibiotics, only florfenicol and thiamphenicol were found in tap water, with the median concentrations of 0.0089 ng/mL and 0.0064 ng/mL, respectively; only florfenicol was found in three bottled water samples from a same brand, with the concentrations ranging from 0.00060 to 0.0010 ng/mL; no antibiotics were found in barreled water. In contrast, besides florfenicol and thiamphenicol, an additional 17 antibiotics were detected in urine samples, and the total daily exposure doses and detection frequencies of florfenicol and thiamphenicol based on urine samples were significantly and substantially higher than their predicted daily exposure doses and detection frequencies from drinking water by Monte Carlo Simulation. These data indicated that drinking water was contaminated by some antibiotics in Shanghai, but played a limited role in antibiotic exposure of children. Answer: The study conducted in Catalonia, Spain, aimed to determine the levels of perfluorochemicals (PFCs) in drinking water (tap and bottled) and river water samples from Tarragona Province. The results showed that in tap water, levels of perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA) ranged between 0.39 and 0.87 ng/L and between 0.32 and 6.28 ng/L, respectively. Other PFCs such as PFHpA, PFHxS, and PFNA were also detected. Notably, PFC levels were lower in bottled water, with PFOS not detected in any sample. River water samples also contained various PFCs, including PFOS and PFOA, with concentrations between <0.24 and 5.88 ng/L for PFOS and <0.22 and 24.9 ng/L for PFOA. Based on an assumed human water consumption of 2 L per day, the daily intake of PFOS and PFOA was calculated to be 0.78-1.74 and 12.6 ng, respectively. The study concluded that drinking water could be a source of exposure to PFCs as important as dietary intake, although the concentrations were similar or lower than those reported for surface water samples from other regions and countries. Further studies were recommended to increase knowledge of the role of drinking water in human exposure to PFCs (PUBMED:18763004). In another study, the intake of PFCs through drinking water and fish and shellfish consumption was estimated. The highest mean concentrations in water samples corresponded to PFOS and PFOA (1.81 and 2.40 ng/L, respectively). The study found that current treatment processes caused slight reductions in PFC concentrations but did not significantly change the amounts of PFCs in the raw water. Among the analyzed PFCs in fish and shellfish, only seven compounds were detected, with PFOS showing the highest mean concentration. The exposure to PFCs through consumption of fish and shellfish was not considered a concern for consumers, as the ingested amounts were well below the recommended tolerable daily intakes (PUBMED:22494245). In summary, drinking water in Catalonia, Spain, does contribute to human exposure to PFCs, and in certain cases, it can be as significant as dietary intake. However, the levels found in drinking water are not expected to pose human health risks based on current safety limits and tolerable daily intakes.
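The intake arithmetic running through the abstracts above (PUBMED:18763004, PUBMED:19685096, PUBMED:30776750) reduces to a few lines of code. The Python sketch below mirrors the papers' stated 2 L/day consumption and 70 kg body-weight assumptions; the PFOA volume of distribution and half-life used in the one-compartment back-calculation are illustrative assumptions, not values taken from the cited studies.

import math

DAILY_WATER_L = 2.0    # adult water consumption assumed in the abstracts (L/day)
BODY_WEIGHT_KG = 70.0  # adult body weight assumed in PUBMED:19685096 (kg)

def intake_from_water(conc_ng_per_l):
    """Daily intake (ng/day) from drinking water at a given concentration."""
    return conc_ng_per_l * DAILY_WATER_L

def intake_per_kg(conc_ng_per_l):
    """Body-weight-normalized daily intake (ng/kg/day)."""
    return intake_from_water(conc_ng_per_l) / BODY_WEIGHT_KG

def intake_from_blood(c_blood_ng_per_ml, vd_ml_per_kg, half_life_days):
    """Back-calculate daily intake (ng/kg/day) from a steady-state blood
    concentration with a one-compartment model: dose = C_ss * Vd * ke."""
    ke = math.log(2) / half_life_days  # first-order elimination rate (1/day)
    return c_blood_ng_per_ml * vd_ml_per_kg * ke

print(intake_from_water(6.28))  # 12.56 ng/day, the ~12.6 ng PFOA figure of PUBMED:18763004
print(intake_per_kg(57.4))      # ~1.6 ng/kg/day, the PFOA maximum of PUBMED:19685096
# Hypothetical PFOA kinetics (Vd ~170 mL/kg, half-life ~3.5 years):
print(intake_from_blood(2.1, 170.0, 3.5 * 365))  # ~0.19 ng/kg/day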
Instruction: Can a sentinel node mapping algorithm detect all positive lymph nodes in cervical cancer? Abstracts: abstract_id: PUBMED:12722417 Lymph node mapping and sentinel node detection in gynecological oncology. The aim of this paper is the presentation of the latest opinions on lymph node mapping and sentinel node localization in female genital organ neoplasms. The current strategies of lymph node resection in gynecologic oncology have been presented. The methods of lymph node staining and detection have been expounded as well. The paper also contains the results of sentinel node localization in vulvar, cervical and endometrial cancers. abstract_id: PUBMED:37624147 Feasibility of Sentinel Lymph Node Mapping With Carbon Nanoparticles in Cervical Cancer: A Retrospective Study. Introduction: This retrospective study aims to investigate the feasibility of using carbon nanoparticles to detect sentinel lymph nodes (SLNs) in cervical cancer. Methods: This study involved 174 patients with cervical cancer. Cervix tissues adjacent to the cancer were injected with 1 mL of carbon nanoparticles (CNPs) at the 3 and 9 o'clock positions according to the instructions. The pelvic lymph nodes were then dissected, and the black-stained sentinel lymph nodes were sectioned for pathological examination. Results: Of 174 cases, 88.5% of patients (154/174) had at least 1 sentinel lymph node, and 131 patients (75.29%) had bilateral pelvic sentinel lymph nodes. The left pelvic lymph node was the most common sentinel lymph node (34.16%). At least 1 sentinel lymph node was observed in 285 out of 348 hemipelvises, for a side-specific sentinel lymph node detection rate of 81.89%. In total, 47 hemipelvises had metastasis of the lymph node, and 33 involved the sentinel lymph node, with a sensitivity of 70.21% and a false-negative rate of 29.79%. There were 238 hemipelvises with no metastasis of the lymph node, as well as negative sentinel lymph nodes, with a specificity of 100% and a negative predictive value of 94.44%. The univariate analysis demonstrated that risk factors included tumor size (OR .598, 95% CI: .369-.970) and deep stromal invasion (OR .381, 95% CI: .187-.779). Deep stromal invasion was the only variable associated with false-negative detection of a sentinel lymph node. Conclusion: Sentinel lymph node mapping with carbon nanoparticles might be applied to predict the metastasis of pelvic lymph nodes in cervical cancer. However, tumor size and deep stromal invasion might negatively influence the detection rate of SLNs. abstract_id: PUBMED:27792042 Sentinel Lymph Nodes Mapping in Cervical Cancer: a Comprehensive Review. Objective: A comprehensive literature search for more recent studies pertaining to sentinel lymph node mapping in the surveillance of cervical cancer to assess if sentinel lymph node mapping has sensitivity and specificity for evaluation of the disease; assessment of posttreatment response and disease recurrence in cervical cancer. Materials And Methods: The literature review has been constructed on a stepwise study design that includes 5 major steps. This includes search for relevant publications in various available databases, application of inclusion and exclusion criteria for the selection of relevant publications, assessment of quality of the studies included, extraction of the relevant data and coherent synthesis of the data.
Results: The search yielded numerous studies pertaining to sentinel lymph node mapping, especially on the recent trends, comparison between various modalities and evaluation of the technique. Evaluation studies have reported high sensitivity, high negative predictive values and low false-negative rates for metastasis detection using sentinel lymph node mapping. Comparative studies have established that of all the modalities for sentinel lymph node mapping, indocyanine green sentinel lymph node mapping has higher overall and bilateral detection rates. Corroboration of the deductions of these studies further establishes that the sentinel node detection rate and sensitivity are strongly correlated with the method or technique of mapping and the history of preoperative neoadjuvant chemotherapy. Conclusions: The review takes us to the strong conclusion that sentinel lymph node mapping is an ideal technique for detection of sentinel lymph nodes in cervical cancer patients with excellent detection rates and high sensitivity. The review also takes us to the supposition that a routine clinical evaluation of sentinel lymph nodes is feasible and that real-time fluorescence mapping with indocyanine green dye gives statistically significantly better overall and bilateral detection than methylene blue. abstract_id: PUBMED:18502488 Lymphatic mapping and sentinel lymph node detection in women with cervical cancer. Lymphatic mapping and sentinel node detection have been applied to almost every solid tumor and sentinel node status has become part of the American Joint Commission on Cancer (AJCC) staging criteria in both breast cancer and malignant melanoma. As the presence of metastatic disease in lymph nodes is the most important prognostic factor for survival in women with cervical cancer, the ability to reliably detect sentinel nodes might triage women to adjuvant radiotherapy without the need for full lymphadenectomies and their associated morbidity. To date, multiple international investigators have performed single institution investigations with promising results. Overall, 831 women have undergone lymphatic mapping and sentinel node detection as part of their cervical cancer therapy as reported in the literature. Combining results from all these studies, a sentinel node was identified in 90% of cases with an overall sensitivity of detecting metastatic disease of 92% with an 8% false negative rate. The overall negative predictive value was over 97%. There remain controversies in moving forward with accepting sentinel node biopsy as the standard in treating women with cervical cancer, including 1) determining an acceptable false-negative rate, 2) establishing the importance of micrometastatic disease or isolated tumor cells in sentinel nodes, and 3) discovering the minimum number of cases a surgeon needs to become proficient in mapping techniques. Large, multi-institutional studies in both Europe and the United States are nearing completion and their results should help guide the future direction for sentinel node technologies in the treatment of cervical cancer. abstract_id: PUBMED:24883119 Simultaneous mapping of pan and sentinel lymph nodes for real-time image-guided surgery. The resection of regional lymph nodes in the basin of a primary tumor is of paramount importance in surgical oncology. Although sentinel lymph node mapping is now the standard of care in breast cancer and melanoma, over 20% of patients require a completion lymphadenectomy.
Yet, there is currently no technology available that can image all lymph nodes in the body in real time, or assess both the sentinel node and all nodes simultaneously. In this study, we report an optical fluorescence technology that is capable of simultaneous mapping of pan lymph nodes (PLNs) and sentinel lymph nodes (SLNs) in the same subject. We developed near-infrared fluorophores, which have fluorescence emission maxima either at 700 nm or at 800 nm. One was injected intravenously for identification of all regional lymph nodes in a basin, and the other was injected locally for identification of the SLN. Using the dual-channel FLARE intraoperative imaging system, we could identify and resect all PLNs and SLNs simultaneously. The technology we describe enables simultaneous, real-time visualization of both PLNs and SLNs in the same subject. abstract_id: PUBMED:25454828 Sentinel node biopsy for lymph nodal staging of uterine cervix cancer: a systematic review and meta-analysis of the pertinent literature. Background: We reviewed the available literature on the accuracy of sentinel node mapping in the lymph nodal staging of uterine cervical cancers. Methods: MEDLINE and Scopus were searched by using "sentinel AND (cervix OR cervical)" as key words. Studies evaluating the accuracy of sentinel node mapping in the lymph nodal staging of uterine cervical cancers were included if enough data could be extracted for calculation of detection rate and/or sensitivity. Results: Sixty-seven studies were included in the systematic review. Pooled detection rate was 89.2% [95% CI: 86.3-91.6]. Pooled sensitivity was 90% [95% CI: 88-92]. Sentinel node detection rate and sensitivity were related to mapping method (blue dye, radiotracer, or both) and history of pre-operative neoadjuvant chemotherapy. Sensitivity was higher in patients with bilaterally detected pelvic sentinel nodes compared to those with unilateral sentinel nodes. Lymphatic mapping could identify sentinel nodes outside the routine lymphadenectomy limits. Conclusion: Sentinel node mapping is an accurate method for the assessment of lymph nodal involvement in uterine cervical cancers. Selection of a population with small tumor size and lower stage will ensure the lowest false negative rate. Lymphatic mapping can also detect sentinel nodes outside of routine lymphadenectomy areas providing additional histological information which can improve the staging. Further studies are needed to explore the impact of sentinel node mapping in fertility sparing surgery and in patients with history of neoadjuvant chemotherapy. abstract_id: PUBMED:12893189 Laparoscopic detection of sentinel lymph nodes followed by lymph node dissection in patients with early stage cervical cancer. Objective: The purpose of this study was to investigate the feasibility of sentinel node detection through laparoscopy in patients with early cervical cancer. Furthermore, the results of laparoscopic pelvic lymph node dissection were studied, validated by subsequent laparotomy. Methods: Twenty-five patients with early stage cervical cancer who planned to undergo a radical hysterectomy and pelvic lymph node dissection received an intracervical injection of technetium-99m colloidal albumin as well as blue dye. With a laparoscopic gamma probe and with visual detection of blue nodes, the sentinel nodes were identified and separately removed via laparoscopy. 
If frozen sections of the sentinel nodes were negative, a laparoscopic pelvic lymph node dissection, followed by radical hysterectomy via laparotomy, was performed. If the sentinel nodes showed malignant cells on frozen section, only a laparoscopic lymph node dissection was performed. Results: One or more sentinel nodes could be detected via laparoscopy in 25/25 patients (100%). A sentinel node was found bilaterally in 22/25 patients (88%). Histologically positive nodes were detected in 10/25 patients (40%). One patient (11%) had two false negative sentinel nodes in the obturator fossa, whereas a positive lymph node was found in the parametrium removed together with the primary tumor. In seven patients (28%), the planned laparotomy and radical hysterectomy were abandoned because of a positive sentinel node. Bulky lymph nodes were removed through laparotomy in one patient, and in six patients only laparoscopic lymph node dissection and transposition of the ovaries were performed. These patients were treated with chemoradiation. In two patients, a micrometastasis in the sentinel node was demonstrated after surgery. Ninety-two percent of all lymph nodes were retrieved via laparoscopy, confirmed by laparotomy. Detection and removal of the sentinel nodes took 55 ± 17 min. Together with the complete pelvic lymph node dissection, the procedure lasted 200 ± 53 min. Conclusion: Laparoscopic removal of sentinel nodes in cervical cancer is a feasible technique. If radical hysterectomy is aborted in the case of positive lymph nodes, sentinel node detection via laparoscopy, followed by laparoscopic lymph node dissection, prevents potentially harmful and unnecessary surgery. abstract_id: PUBMED:25404479 Can a sentinel node mapping algorithm detect all positive lymph nodes in cervical cancer? Objectives: The aims of this study were to determine the sensitivity and negative predictive value (NPV) of sentinel lymph node (SLN) detection in cervical cancer using a combination technique, and to test the SLN algorithm that was proposed by the Memorial Sloan Kettering Cancer Center (MSKCC). Methods: The study included 57 FIGO stage IA2-IIA patients who were treated at the Erasto Gaertner Hospital, Curitiba, from 2008 to 2010. The patients underwent SLN mapping by technetium lymphoscintigraphy and patent blue dye injection. Following SLN detection, standard radical hysterectomy, including parametrectomy and systematic bilateral pelvic lymphadenectomy, was performed. The SLNs were examined by immunohistochemistry (IHC) when the hematoxylin and eosin results were negative. Results: The median age of patients was 42 years (range 24-71), the median SLN count was 2 (range 1-4), and the median total lymph node (LN) count was 19 (range 11-28). At least one SLN was detected in 48 (84.2 %) patients, while bilateral pelvic detection of SLNs was noted in 28 (58.3 %) cases: one case had bilateral pelvic SLNs and a para-aortic SLN, 19 (39.6 %) had unilateral pelvic LNs, and one (2.1 %) had an SLN in the para-aortic area. Metastatic LNs were found in 9 of 57 (15.8 %) patients. Eight of nine patients with LN metastasis had a positive SLN, yielding an overall sensitivity of 88.9 % and NPV of 97.5 %. Of the 75 sides that were mapped, the SLN detection method predicted LN involvement in 74 (98.6 %) hemi-pelvises. A total of ten hemi-pelvises had LN metastasis, nine of which involved the SLN, resulting in a sensitivity of 90 %, NPV of 98.5 %, and a false-negative (FN) rate of 10 %.
In two cases (4.2 %), the SLN was positive only after IHC. Conclusions: Our SLN procedure is a safe and accurate technique that increases metastatic nodal detection rates by 4.2 % after IHC. The SLN method performed better when analyzing each side; however, one FN occurred, even after applying the MSKCC algorithm. abstract_id: PUBMED:28533155 Indications and techniques for robotic pelvic and para-aortic lymphadenectomy with sentinel lymph node mapping in gynecologic oncology. Robotic-assisted laparoscopic surgery is the most common approach for the treatment of early-stage endometrial and cervical cancers in the US. Surgical staging requires pelvic and often aortic lymphadenectomy, depending on the primary tumor characteristics. Pelvic and aortic lymphadenectomy procedures may also be indicated for debulking of larger metastases to improve disease control. The infra-renal basin is an important anatomic site of metastasis from pelvic tumors, and robotic dissection techniques for this site have been described. In recent years, sentinel lymph node (SLN) mapping has been adopted into the National Comprehensive Cancer Network guidelines' surgical algorithm for uterine and cervical cancers. SLN mapping has higher sensitivity for the detection of nodal metastasis, despite removing fewer lymph nodes, and potentially reduces morbidities such as lower extremity lymphedema. This article reviews current robotic pelvic and para-aortic lymphadenectomy dissection techniques for endometrial and cervical cancers and discusses the recent integration of pelvic SLN mapping in the surgical algorithm. abstract_id: PUBMED:31474589 Feasibility of sentinel lymph node mapping of the ovary: a systematic review. Pelvic and para-aortic lymphadenectomy is routinely performed in early ovarian cancer to define the stage of the disease. However, it may be associated with increased blood loss, operative time, and length of hospitalization. The sentinel lymph node technique has been shown to be safe and feasible in vulvar, uterine, and cervical cancer. Data detailing feasibility and outcomes of sentinel lymph node mapping in ovarian cancer are scarce. To summarize the studies evaluating the feasibility of sentinel lymph node detection from the ovary, examining the technique and detection rate. A systematic search of the literature was performed using PubMed and Embase from June 1991 to February 2019. Studies describing the sentinel lymph node technique and lymphatic drainage of the ovaries were incorporated in this review. Ten articles were selected, comprising a total of 145 patients. A variety of agents were used, but the primary markers were technetium-99m radiocolloid (Tc-99m), patent blue, or indocyanine green, and the most common injection site was the ovarian ligaments. The overall sentinel lymph node detection rate was 90.3%. We propose a standardized technique for sentinel lymph node mapping in ovarian cancer, using indocyanine green, or Tc-99m and blue dye as alternative tracers, injected in both the suspensory and the infundibulopelvic ligament of the ovary. Answer: The sentinel lymph node (SLN) mapping algorithm has been shown to be a sensitive and accurate technique for detecting lymph node metastasis in cervical cancer, but it may not detect all positive lymph nodes. According to a study by PUBMED:25404479, the sensitivity and negative predictive value (NPV) of SLN detection in cervical cancer using a combination technique were high, with an overall sensitivity of 88.9% and NPV of 97.5%.
However, there was a false negative rate of 10%, indicating that not all positive lymph nodes were detected. The study also noted that the SLN detection method predicted lymph node involvement in 98.6% of hemi-pelvises, but there was still one false negative occurrence even after applying the Memorial Sloan Kettering Cancer Center (MSKCC) algorithm. Other studies have reported similar findings, with high detection rates and sensitivity for SLN mapping. PUBMED:12722417 discusses the results of SLN localization in various gynecological cancers, including cervical cancer, suggesting that SLN mapping is a valuable technique in this context. PUBMED:37624147 found that SLN mapping with carbon nanoparticles in cervical cancer had a sensitivity of 70.21% and a false-negative rate of 29.79%, indicating that while the technique is feasible, it may not detect all positive lymph nodes. PUBMED:27792042 highlights that the detection rate and sensitivity of SLN mapping are strongly correlated with the method or technique of mapping and the patient's history of preoperative neoadjuvant chemotherapy. PUBMED:18502488 reports an overall sensitivity of 92% with an 8% false negative rate in detecting metastatic disease using SLN mapping in cervical cancer, while PUBMED:25454828 suggests that SLN mapping is accurate for lymph nodal staging in cervical cancer, with a pooled sensitivity of 90%. However, the selection of patients with small tumor size and lower stage can ensure the lowest false negative rate. In conclusion, while SLN mapping algorithms have high sensitivity and can detect the majority of positive lymph nodes in cervical cancer, they may not detect all positive nodes, as evidenced by the presence of false negatives in various studies. Therefore, it is important to recognize that while SLN mapping is a valuable tool in the staging and treatment of cervical cancer, it may not be infallible in detecting all metastatic lymph nodes (PUBMED:25404479, PUBMED:37624147, PUBMED:27792042, PUBMED:18502488, PUBMED:25454828).
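The sensitivity, specificity, NPV, and false-negative figures quoted throughout this entry are simple ratios over a 2x2 confusion table. A minimal Python sketch follows, with counts reconstructed from the numbers reported in PUBMED:37624147 and PUBMED:25404479:

def diagnostics(tp, fn, tn, fp=0):
    """Standard test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
        "false_negative_rate": fn / (tp + fn),
    }

# Hemipelvis-level counts from the carbon-nanoparticle study (PUBMED:37624147):
# 47 node-positive sides, 33 with a positive SLN; 238 node-negative sides.
print(diagnostics(tp=33, fn=14, tn=238))
# -> sensitivity ~0.702, specificity 1.0, NPV ~0.944, FN rate ~0.298

# Patient-level counts from PUBMED:25404479: 9 node-positive patients, 8 with a
# positive SLN; 39 true negatives among the 48 patients with a mapped SLN.
print(diagnostics(tp=8, fn=1, tn=39))
# -> sensitivity ~0.889 and NPV ~0.975, matching the reported 88.9% / 97.5%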
Instruction: Can Early Rehabilitation after Total Hip Arthroplasty Reduce Its Major Complications and Medical Expenses? Abstracts: abstract_id: PUBMED:26146625 Can Early Rehabilitation after Total Hip Arthroplasty Reduce Its Major Complications and Medical Expenses? Report from a Nationally Representative Cohort. Objective: To investigate whether early rehabilitation reduces the occurrence of post-total hip arthroplasty (THA) complications, adverse events, and medical expenses within one postoperative year. Method: We retrospectively retrieved data from Taiwan's National Health Insurance Research Database. Patients who had undergone THA during the period from 1998 to 2010 were recruited, matched for propensity scores, and divided into 2 groups: early rehabilitation (Early Rehab) and delayed rehabilitation (Delayed Rehab). Results: Eight hundred twenty of 999 THA patients given early rehabilitation treatments were matched to 205 of 233 THA patients given delayed rehabilitation treatments. The Delayed Rehab group had significantly (all p < 0.001) higher medical and rehabilitation expenses and more outpatient department (OPD) visits than the Early Rehab group. In addition, the Delayed Rehab group was associated with a higher rate of prosthetic infection (odds ratio (OR): 3.152; 95% confidence interval (CI): 1.211-8.203; p < 0.05) than the Early Rehab group. Conclusions: Early rehabilitation can significantly reduce the incidence of prosthetic infection, total rehabilitation expense, total medical expenses, and number of OPD visits within the first year after THA. abstract_id: PUBMED:28561255 Total Hip and Knee Arthroplasty - Utilization of Postoperative Rehabilitation. Background: After total hip and knee arthroplasty, patients have different options for subsequent treatment: an early postoperative rehabilitation, with or without a period at home, or only outpatient services. The aim of this study was to identify factors predicting the utilization of an early postoperative rehabilitation. Methods: This cross-sectoral analysis is based on claims data of AOK Baden-Württemberg (Statutory Health Insurance), Deutsche Rentenversicherung Bund and Deutsche Rentenversicherung Baden-Württemberg (German Pension Insurance). Predictors for participation in an early postoperative rehabilitation and for an interim period were determined using logistic regression analysis. Results: 82.6% of 9,232 patients went on to early postoperative rehabilitation after total hip arthroplasty. After total knee arthroplasty, 83.9% of 7,656 patients utilized postoperative rehabilitation. Moreover, there was less utilization of postoperative rehabilitation in young, male and foreign patients. The analysis shows that the utilization of post-acute rehabilitation was significantly predicted by sociodemographic variables (age, sex, nationality) as well as comorbidity, outpatient treatment and medication. Conclusion: The results indicate a higher severity of illness among patients in the group "postoperative rehabilitation without a period at home". Nevertheless, there are some indications of under-utilization among certain patient groups. abstract_id: PUBMED:24572057 Rehabilitation following total hip arthroplasty. Rehabilitation professionals play an important role in the comprehensive postoperative management of the patient who has undergone a total hip replacement. Understanding the general surgical considerations that eventually impact the rehabilitation process is essential.
Coordination of physicians, physical and occupational therapists, social services, and family members results in better quality of care. The technology and design of hip prostheses and fixation methods impact the functional outcome of total hip arthroplasty. Professionals involved in total hip arthroplasty rehabilitation should also understand the potential complications following total hip arthroplasty that oftentimes cause delays or revisions in the rehabilitation program. When these are combined with appropriate preoperative patient selection and education, as well as postoperative physical and occupational therapy programs, most patients are able to achieve a satisfactory functional outcome, including independence in basic activities of daily living and independent ambulation with an assistive device. abstract_id: PUBMED:37170240 Comparisons of in-hospital complications between total hip arthroplasty and hip resurfacing arthroplasty. Background: Hip resurfacing arthroplasty (HRA) is a less common but effective alternative method to total hip arthroplasty (THA) for hip reconstruction. In this study, we investigated the incidences of in-hospital complications between patients who had been subjected to THA and HRA. Methods: The National Inpatient Sample data that had been recorded from 2005 to 2014 were used in this study. Based on the International Classification of Disease, Ninth Revision, Clinical Modification, patients who underwent THA or HRA were included. Data on demographics, preoperative comorbidities, length of hospital stay, total charges, and in-hospital mortality and complications were compared. Multiple logistic regression analysis was used to determine whether different surgical options are independent risk factors for postoperative complications. Results: A total of 537,506 THAs and 9,744 HRAs were obtained from the NIS database. Patients who had been subjected to HRA exhibited lower preoperative comorbidity rates, shorter lengths of stay, and higher hospital charges. Moreover, HRA was associated with more in-hospital prosthesis loosening. Notably, patients who underwent HRA were younger and presented fewer preoperative comorbidities but did not show lower incidences of most complications. Conclusions: The popularity of HRA gradually declined from the year 2005 to 2014. Patients who underwent HRA were more likely to be younger, male, have fewer comorbidities and spend more money on medical costs. The risk of in-hospital prosthesis loosening after HRA was higher. The HRA-associated advantages with regard to most in-hospital complications were not markedly different from those of THA. In-hospital complications of HRA deserve more attention from surgeons. abstract_id: PUBMED:38467458 Direct anterior approach complications for total hip arthroplasty. The direct anterior approach (DAA) for total hip arthroplasty has been popularized in the last decade as a minimally invasive approach used by many surgeons, including the authors, to preserve the integrity of muscle groups and their insertions and the dynamic hip stability, resulting in less surgical trauma and a faster recovery with decreased postoperative pain. This surgical approach is not without a variety of complications and pitfalls. This review aims to identify any potential drawbacks and challenges associated with the DAA in THA and guide surgeons on minimizing and avoiding them. abstract_id: PUBMED:25432684 Physical rehabilitation after total joint arthroplasty in companion animals.
Patients who have total joint arthroplasty have varying needs related to rehabilitation. In the short term, rehabilitation should be used in all dogs to identify high-risk patients and to minimize the likelihood of postoperative complications. Many patients undergoing total hip replacement recover uneventfully without needing long-term physiotherapy. All patients undergoing total knee replacement and total elbow replacement need rehabilitation to restore limb use and maximize their functional recovery. This article presents rehabilitation considerations for companion animals undergoing total hip replacement, total knee replacement, and total elbow replacement; postoperative complications and how to mitigate risks; and anticipated patient outcomes. abstract_id: PUBMED:27536571 Clinical Implication of Diabetes Mellitus in Primary Total Hip Arthroplasty. Purpose: The purpose of this study was to investigate the effect of diabetes mellitus on primary total hip arthroplasty by comparing the clinical outcomes of patients diagnosed with diabetes mellitus before the operation with those without diabetes. Materials And Methods: A total of 413 patients who underwent unilateral cementless total hip arthroplasty from June 2006 to May 2009 were recruited and divided into diabetic and non-diabetic groups. A comparative analysis between the two groups was performed. We evaluated Harris hip score, postoperative complications such as wound problems, surgical site infection, other medical complications, and length of hospital stay as clinical parameters. Radiographic evaluations were also included to determine loosening, dislocation and osteolysis. Results: Patients with diabetes had an increased incidence of orthopaedic complications including surgical site infection and mortality, but other medical complications were not increased in diabetic patients. All complications after primary total hip arthroplasty were associated with diabetes mellitus, but the degree of diabetes was not associated with complications. Conclusion: Diabetes mellitus increases the incidence of orthopaedic complications, particularly deep infection, after cementless primary total hip arthroplasty. abstract_id: PUBMED:15614648 Advantages of minimally invasive total hip replacement in the early phase of rehabilitation. Unlabelled: In arthroplasty the term "minimal invasive" not only refers to the length of the skin incision but more so to its soft tissue and thereby muscle-protecting features. Study Aim: The aim of this study is to compare the early postoperative mobilisation and rehabilitation of the different surgical approaches in cementless total hip arthroplasty. Methods: 27 patients underwent a total hip replacement (Trilogy cup, MAYO stem) via a ventral minimally invasive approach (one incision technique) (MIS group). 23 patients underwent a total hip replacement with the same implant via an anterolateral transgluteal approach (standard group). We evaluated the Harris Hip Score (HHS), the visual analogue scale (VAS) for pain and patient satisfaction preoperatively as well as 3 days, 10 days, 6 weeks and 3 months postoperatively. Results: After 3 and 10 days the MIS group showed better scores for pain, gait and mobilisation as well as for the overall HHS compared to the standard group. These differences could not be shown 6 weeks postoperatively. The MIS group had a significantly higher rate of complications with 22 % transient impairment of the lateral cutaneous nerve.
Conclusion: The patients in the MIS group showed better mobilisation and rehabilitation during the early postoperative period. This can be attributed to the reduced intraoperative damage to soft tissue and especially muscle. Due to the increased rate of nerve irritations, we modified our surgical approach. The minimally invasive approach to modern hip joint arthroplasty remains a non-standard technique. Compared to the standard approach it carries additional risks (like nerve damage and malpositioning of the implants) and thus should remain in the hands of the experienced orthopaedic surgeon in specialised orthopaedic centres. abstract_id: PUBMED:38314005 Total Hip Arthroplasty in Ankylosing Spondylitis: A Case Report of Ankylosed Hip. Ankylosing spondylitis (AS) is a chronic inflammatory arthritic disease that primarily affects the axial skeleton, and its association with the secondary development of osteoarthritis (OA) in peripheral joints, particularly the hips, is increasingly recognized. This case report elucidates the diagnostic and therapeutic challenges encountered in a patient with bilateral hip osteoarthritis secondary to AS. The patient's medical history included AS and a failed attempt at core decompression of the left hip joint. The patient was managed with total hip arthroplasty (THA) on the left side due to persistent symptoms. Total hip arthroplasty on the left side involved a meticulous surgical approach, addressing the unique challenges posed by underlying ankylosis. The procedure was conducted uneventfully, with the implantation of a modular femoral head, uncemented femoral stem, and modular shell. Postoperatively, the patient experienced significant pain relief and improved functionality. Successful rehabilitation and management were integral to the overall positive outcome. This case report highlights the complex interplay between AS and hip osteoarthritis, emphasizing the importance of tailored diagnostic and therapeutic strategies. Successful total hip arthroplasty in the setting of AS-related hip osteoarthritis suggests that joint replacement can be effective, but ongoing research is necessary to optimize surgical planning and long-term outcomes in this patient population. abstract_id: PUBMED:38357069 Early Recovery of Physical Function After Total Hip Arthroplasty in a Patient With Osteopetrosis: A Case Report. Osteopetrosis is an uncommon and inherited disorder. Some disease-specific characteristics, such as diffuse osteosclerosis and a high incidence of fractures, may potentially affect postoperative rehabilitation. This report presents a case of successful rehabilitation early after total hip arthroplasty for osteopetrosis. A 56-year-old Japanese man, who was diagnosed with osteopetrosis at the age of 11, underwent total hip arthroplasty in the right hip. Full weight-bearing was allowed on the day after the operation; the postoperative rehabilitation program proceeded based on a standard program, as done after total hip arthroplasty for osteoarthritis. A shoe lift on the left leg was used in supervised walking training to correct the imbalanced alignment due to leg length discrepancy. The patient could walk independently with a cane 17 days after the operation. Three weeks after the operation, the patient demonstrated comfortable and maximal walking speed of 1.11 and 1.34 m/s, respectively, and maximal hip abductor muscle strength of 3.96 kgf·m, both of which were better than those before the operation.
There were no adverse events during the postoperative rehabilitation course. These findings suggest the safety and efficacy of standard rehabilitation programs after total hip arthroplasty even in individuals with osteopetrosis. In addition, it may be important to consider the whole-body condition in the rehabilitation of individuals with osteopetrosis. Answer: Yes, early rehabilitation after total hip arthroplasty (THA) can reduce major complications and medical expenses. A study investigating the impact of early rehabilitation on post-THA complications and expenses found that patients who received early rehabilitation treatments had significantly lower medical and rehabilitation expenses, fewer outpatient department visits, and a reduced incidence of prosthetic infection compared to those who received delayed rehabilitation treatments (PUBMED:26146625). This suggests that early rehabilitation is beneficial in reducing the occurrence of postoperative complications and in minimizing the overall medical costs associated with THA within the first postoperative year.
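The headline result of PUBMED:26146625 is an odds ratio with a 95% confidence interval (prosthetic infection, OR 3.152, CI 1.211-8.203). The Python sketch below shows the standard Wald computation from a 2x2 table; the cell counts are hypothetical stand-ins (the abstract does not report them), chosen only to land near the published estimate.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: events/non-events in the exposed group; c/d: in the reference group."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: 9 infections among 205 Delayed Rehab patients vs.
# 12 among 820 Early Rehab patients (group sizes as in the abstract).
print(odds_ratio_ci(a=9, b=196, c=12, d=808))  # OR ~3.1, CI roughly (1.3, 7.4)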
Instruction: "Does skinny mean healthy? Abstracts: abstract_id: PUBMED:32131487 SKINNY-Based RFID Lightweight Authentication Protocol. With the rapid development of the Internet of Things and the popularization of 5G communication technology, the security of resource-constrained IoT devices such as Radio Frequency Identification (RFID)-based applications have received extensive attention. In traditional RFID systems, the communication channel between the tag and the reader is vulnerable to various threats, including denial of service, spoofing, and desynchronization. Thus, the confidentiality and integrity of the transmitted data cannot be guaranteed. In order to solve these security problems, in this paper, we propose a new RFID authentication protocol based on a lightweight block cipher algorithm, SKINNY, (short for LRSAS). Security analysis shows that the LRSAS protocol guarantees mutual authentication and is resistant to various attacks, such as desynchronization attacks, replay attacks, and tracing attacks. Performance evaluations show that the proposed solution is suitable for low-cost tags while meeting security requirements. This protocol reaches a balance between security requirements and costs. abstract_id: PUBMED:31562703 Ocular findings and selected ophthalmic diagnostic tests in a group of young commercially available Guinea and Skinny pigs (Cavia porcellus). Objective: The purpose of this study is to evaluate a group of young commercially available Skinny pigs, to gain information regarding ocular findings in this breed of guinea pig. Comparisons of ocular findings are to be made between Skinny pigs and haired guinea pigs. Animal Studied: Ten haired guinea pigs and ten Skinny pigs were examined. Procedure: A complete ophthalmic examination including Schirmer tear test-II (STT-II), phenol red thread test (PRTT), rebound tonometry with TonoVet PLUS, Fluorescein and Rose Bengal stain was performed. Microbiology swabs for aerobic bacterial growth were collected from conjunctiva of both eyes prior to the ophthalmic examination. Results: The ophthalmic examination revealed seven abnormal ocular findings: trichiasis, mucopurulent discharge, hyperemia/chemosis of the conjunctiva, corneal fibrosis, corneal vascularization, and foreign body on the cornea or conjunctiva. Skinny pigs had a significantly higher amount of mucopurulent discharge (P = .0133) and a significantly higher STT-II (P &lt; .001) than haired guinea pigs. Although not significant, trichiasis, keratitis with corneal vascularization, and foreign body presence were more common in Skinny pigs. Significantly more Skinny pigs had Pasteurellaceae isolated from their conjunctiva than haired guinea pigs (P = .0112). Antimicrobial susceptibility for the five Pasteurellaceae organisms isolated revealed susceptibility toward oxytetracycline, tobramycin, ciprofloxacin, and ofloxacin, whereas resistance was found toward erythromycin, trimethoprim-sulfamethoxazole, and moxifloxacin. Conclusion: Young Skinny pigs have a higher risk of Pasteurellaceae-associated conjunctivitis. Oxytetracycline, tobramycin, ciprofloxacin, and ofloxacin were identified as topical antibiotics that may be useful for Pasteurellaceae-associated conjunctivitis in Skinny pigs. abstract_id: PUBMED:37180941 Fat Cantor sets and their skinny companions. The terms fat and skinny in the title are vernacular references to Cantor sets of positive and zero measure respectively. 
The paper demonstrates that a fat Cantor subset of [0,L], L > 0, possesses a skinny companion that forms a Cantor subset of [0,G], where G < L is the total length of all the gaps associated with the ternary construction of the fat Cantor set. Moreover, elements of the fat Cantor set can be decomposed and expressed as the sum of two components. One of the components is an element of [0,L-G]. The other component is an element of the skinny companion contained in [0,G]. abstract_id: PUBMED:31177096 Acyltransferase skinny hedgehog regulates TGFβ-dependent fibroblast activation in SSc. Objectives: Systemic sclerosis (SSc) is characterised by aberrant hedgehog signalling in fibrotic tissues. The hedgehog acyltransferase (HHAT) skinny hedgehog catalyses the attachment of palmitate onto sonic hedgehog (SHH). Palmitoylation of SHH is required for multimerisation of SHH proteins, which is thought to promote long-range, endocrine hedgehog signalling. The aim of this study was to evaluate the role of HHAT in the pathogenesis of SSc. Methods: Expression of HHAT was analysed by real-time polymerase chain reaction (RT-PCR), immunofluorescence and histomorphometry. The effects of HHAT knockdown were analysed by reporter assays, target gene studies and quantification of collagen release and myofibroblast differentiation in cultured human fibroblasts and in two mouse models. Results: The expression of HHAT was upregulated in dermal fibroblasts of patients with SSc in a transforming growth factor-β (TGFβ)/SMAD-dependent manner. Knockdown of HHAT reduced TGFβ-induced hedgehog signalling as well as myofibroblast differentiation and collagen release in human dermal fibroblasts. Knockdown of HHAT in the skin of mice ameliorated bleomycin-induced and topoisomerase-induced skin fibrosis. Conclusion: HHAT is regulated in SSc in a TGFβ-dependent manner and in turn stimulates TGFβ-induced long-range hedgehog signalling to promote fibroblast activation and tissue fibrosis. Targeting of HHAT might be a novel approach to more selectively interfere with the profibrotic effects of long-range hedgehog signalling. abstract_id: PUBMED:11158938 Decreased triglyceride-rich lipoproteins in transgenic skinny mice overexpressing leptin. Leptin is an adipocyte-derived circulating satiety factor with a variety of biological effects. Evidence has accumulated suggesting that leptin may modulate glucose and lipid metabolism. In the present study, we examined lipid metabolism in transgenic skinny mice with elevated plasma leptin concentrations. The plasma concentrations of triglycerides and free fatty acids in transgenic skinny mice were 71.5% (P < 0.01) and 89.1% (P < 0.05) of those in their nontransgenic littermates, respectively. Separation of plasma into lipoprotein classes by ultracentrifugation revealed that very low density lipoprotein-triglyceride concentrations were markedly reduced in transgenic skinny mice relative to the controls. The clearance of triglycerides estimated by a fat-loading test was enhanced in transgenic skinny mice; the triglyceride concentration in transgenic skinny mice 3 h after fat loading was 39.7% (P < 0.05) of that of their nontransgenic littermates. Postheparin plasma lipoprotein lipase activity increased 1.4-fold (P < 0.05) in transgenic skinny mice.
Our data demonstrated a significant reduction in plasma triglyceride concentrations, accompanied by increased lipoprotein lipase activity, in transgenic skinny mice overexpressing leptin, suggesting that leptin plays a role in long-term triglyceride metabolism. abstract_id: PUBMED:27375462 Change in Mean Frequency of Resting-State Electroencephalography after Transcranial Direct Current Stimulation. Transcranial direct current stimulation (tDCS) is proposed as a tool to investigate cognitive functioning in healthy people and as a treatment for various neuropathological disorders. However, the underlying cortical mechanisms remain poorly understood. We aim to investigate whether resting-state electroencephalography (EEG) can be used to monitor the effects of tDCS on cortical activity. To this end, we tested whether the spectral content of ongoing EEG activity is significantly different after a single session of active tDCS compared to sham stimulation. Twenty participants were tested in a sham-controlled, randomized, crossover design. Resting-state EEG was acquired before, during and after active tDCS to the left dorsolateral prefrontal cortex (15 min of 2 mA tDCS) and sham stimulation. Electrodes with an area of 3.14 cm² were used for EEG and tDCS. Partial least squares (PLS) analysis was used to examine differences in power spectral density (PSD), and the EEG mean frequency was used to quantify the slowing of EEG activity after stimulation. PLS revealed a significant increase in spectral power at frequencies below 15 Hz and a decrease at frequencies above 15 Hz after active tDCS (P = 0.001). The EEG mean frequency was significantly reduced after both active tDCS (P < 0.0005) and sham tDCS (P = 0.001), though the decrease in mean frequency was smaller after sham tDCS than after active tDCS (P = 0.073). Anodal tDCS of the left DLPFC using a high current density bi-frontal electrode montage resulted in general slowing of resting-state EEG. The similar findings observed following sham stimulation question whether the standard sham protocol is an appropriate control condition for tDCS. abstract_id: PUBMED:31527611 Audio-Tactile Skinny Buttons for Touch User Interfaces. This study proposes a novel skinny button with multimodal audio and haptic feedback to enhance the touch user interface of electronic devices. The active material in the film-type actuator is relaxor ferroelectric polymer (RFP) poly(vinylidene fluoride-trifluoroethylene-chlorofluoroethylene) [P(VDF-TrFE-CFE)] blended with poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)], which produces mechanical vibrations via the fretting vibration phenomenon. Normal pressure applied by a human fingertip on the film-type skinny button mechanically activates the locally concentrated electric field under the contact area, thereby producing a large electrostrictive strain in the blended RFP film. Multimodal audio and haptic feedback is obtained by simultaneously applying various electric signals to the pairs of ribbon-shaped top and bottom electrodes. The fretting vibration provides tactile feedback at frequencies of 50-300 Hz and audible sounds at higher frequencies of 500 Hz to 1 kHz through a simple on-off mechanism. The advantage of the proposed audio-tactile skinny button is that it restores the "click" sensation to the popular virtual touch buttons employed in contemporary electronic devices.
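The EEG mean frequency reported in the tDCS abstract above is a power-weighted average of frequency over the power spectral density. The abstract does not give the computation, so the following is only an illustrative Python sketch; the sampling rate, band limits, and window length are assumptions, not values reported in the study.

    import numpy as np
    from scipy.signal import welch

    def eeg_mean_frequency(signal, fs=250.0, fmin=1.0, fmax=30.0):
        # Welch PSD with 4-second windows; fs, fmin and fmax are
        # illustrative assumptions, not parameters from the study.
        freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))
        band = (freqs >= fmin) & (freqs <= fmax)
        # Power-weighted mean frequency over the selected band.
        return np.sum(freqs[band] * psd[band]) / np.sum(psd[band])

    # Toy usage on synthetic data: post-tDCS "slowing" would appear as a
    # lower value than in the pre-stimulation recording.
    rng = np.random.default_rng(0)
    print(eeg_mean_frequency(rng.standard_normal(60 * 250)))

A drop in this single number summarizes a shift of spectral power toward frequencies below 15 Hz, which is the pattern the PLS analysis detected.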
abstract_id: PUBMED:37588629 Skinny wire and locking plate fixation for comminuted intra-articular distal humerus fractures: a technical trick and case series. Introduction: Intra-articular distal humerus fractures present a challenge to orthopedic surgeons. Stable fixation is difficult to achieve in fractures with articular and metaphyseal comminution and osteoporotic bone. Hence, these fractures are more commonly being managed with total elbow arthroplasty. We describe a novel surgical technique that confers stable fixation, allowing for early range of motion and resulting in a high rate of union, a functional range of motion, and excellent patient-reported outcome scores without the activity restrictions of total elbow arthroplasty. Methods: Retrospective case series of 30 patients with AO/OTA type B and C intra-articular distal humerus fractures who underwent ORIF from 2014-2019 utilizing a novel surgical technique that focuses on reconstructing a comminuted articular surface through meticulous, transverse fixation of the tiny articular fragments with long, thin Kirschner wires, which are then bent over and trapped under locking compression plates to create a fixed-angle support to the metadiaphysis. Results: Mean patient age was 59 (range 19-90) years, and 61% were female. Median follow-up was 1.2 years. Twenty-seven (87%) were type C fractures and 3 (13%) were type B. Five patients (16%) suffered a concurrent ipsilateral upper extremity injury and four (13%) had an open fracture. Two were polytrauma patients. All fractures healed, with an average time to union of 11 weeks. Over 80% of patients reported no or mild pain at final follow-up. Mean arc of elbow motion was 102 degrees; mean QuickDASH score was 25.2. Post-operative complications included ulnar nerve paresthesias (38%), wound infection (3.2%), heterotopic ossification (3.2%), and olecranon nonunion (3.2%). Eight patients underwent secondary procedures: 7 (23%) hardware removal, 3 (9.6%) capsular release, 2 (6.4%) ulnar nerve transpositions, and 1 (3.2%) total elbow arthroplasty. Conclusion: We describe a novel surgical technique that we believe results in strong, stable fixation of complex intra-articular distal humerus fractures irrespective of bone quality. In our series, all fractures healed, and post-operatively patients reported low levels of pain, achieved an excellent elbow range of motion, and had high patient-reported outcome scores. Patients should be counseled about high rates of post-operative ulnar nerve paresthesias, which can be expected to improve over time, and high reoperation rates for symptomatic hardware.
To the best of our knowledge, our proposed attack scheme requires fewer faults, is faster to solve, and has a higher success rate than other existing attack methods. abstract_id: PUBMED:37255829 HOMA-IR mean values in healthy individuals: a population-based study in Iranian subjects. Purpose: Homeostasis Model Assessment-estimated Insulin Resistance (HOMA-IR) is an important indicator of insulin resistance. In this population-based investigation, we sought to report the mean value of HOMA-IR in different subgroups of a large population-based database of Iranian healthy subjects. Methods: This study recruited healthy adult individuals between 18 and 70 years of age at Massoud Medical Laboratory, Tehran, Iran. Fasting insulin was measured by the electrochemiluminescence method using a Roche Cobas 6000 e601/602 instrument. Results: The mean ± SD value of the HOMA-IR index in the studied population was 2.11 ± 0.99 (2.5-97.5% percentiles: 0.66-4.50). In addition, the mean ± SD of the HOMA-IR index in the male and female groups were 2.35 ± 1.0 (2.5-97.5 percentiles: 0.57-4.37) and 2.05 ± 1.0 (2.5-97.5 percentiles: 0.53-4.35), respectively. Interestingly, a significant increment in the HOMA-IR index was observed in the male group compared with the female group in all age subgroups (P < 0.01). Conclusions: Our findings showed a mean HOMA-IR value of 2.11 ± 0.99 in the Iranian healthy population. Considering the large sample size in our study, more clinical investigations in terms of ethnicity should be done to provide a precise standardized HOMA-IR index in the Iranian population. Supplementary Information: The online version contains supplementary material available at 10.1007/s40200-022-01099-9. Answer: The term "skinny" does not necessarily equate to "healthy." While being skinny might be perceived as a sign of health by some, it is not a comprehensive indicator of overall well-being. For instance, transgenic skinny mice overexpressing leptin showed decreased triglyceride-rich lipoproteins, suggesting a role for leptin in long-term triglyceride metabolism (PUBMED:11158938). However, this is a specific physiological context and does not generalize to human health. In humans, health is determined by a range of factors, including metabolic function, cardiovascular health, muscle strength, and the absence of disease, among others. For example, the Homeostasis Model Assessment-estimated Insulin Resistance (HOMA-IR) is an important indicator of insulin resistance, a condition that can occur regardless of body weight and is associated with metabolic disorders such as type 2 diabetes (PUBMED:37255829). The mean HOMA-IR values in a study of Iranian healthy subjects were reported, indicating that insulin resistance can be measured and is relevant to health assessments. Moreover, the term "skinny" can sometimes refer to individuals who are underweight, which can be associated with its own health risks, such as malnutrition, decreased immune function, osteoporosis, and fertility issues. Therefore, being skinny is not a definitive measure of health, and a comprehensive health assessment should consider multiple factors, including body composition, metabolic health, and the presence or absence of disease.
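For readers of the answer above, the HOMA-IR index cited from PUBMED:37255829 is a simple arithmetic index; the standard formulation (a well-known fact, not restated in the abstract) is

    HOMA-IR = [fasting insulin (µU/mL) × fasting glucose (mmol/L)] / 22.5

with the denominator becoming 405 when glucose is expressed in mg/dL. Whether the cited laboratory used these exact units is an assumption; the constant normalizes the index so that a healthy reference subject scores approximately 1.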
Instruction: Risk modification for diabetic patients. Are other risk factors treated as diligently as glycemia? Abstracts: abstract_id: PUBMED:15557673 Risk modification for diabetic patients. Are other risk factors treated as diligently as glycemia? Background: The importance of glucose control is recognized both by patients with diabetes and their physicians. However, other preventative interventions, such as using medications to manage lipid and blood pressure levels, are underused for diabetic patients. Objectives: To determine whether patients with diligent glucose management are more likely to use medications that treat lipids and blood pressure. Methods: Administrative data records were evaluated for all diabetic patients aged 65 or older residing in Ontario in 1999 without pre-existing coronary artery disease (n=161,553). Measures of diligent glucose management were insulin use and frequent capillary glucose testing (≥2 per day). Outcomes were prescription of a lipid-lowering drug or antihypertensive drug. Using multivariate modeling, odds ratios for each diligence measure were determined for each outcome, adjusting for age, sex, comorbidities, and other covariates. Results: Patients using insulin did not have a clinically important difference in lipid-lowering drug use (adjusted odds ratio 0.9, 99% confidence interval 0.9-1.0, P=0.002) or antihypertensive drug use (adjusted odds ratio 1.1, 99% confidence interval 1.0-1.1, P<0.001) versus non-users. Adjusted odds ratios for frequent glucose testing were not significantly different from unity for either lipid-lowering or antihypertensive drug use. Conclusions: Patients who required and were capable of diligent glucose management, which is invasive, expensive and time-consuming, were no more likely to use medications to control lipids or blood pressure. Preventative care for patients with diabetes may be too focused on glycemic control, and may be neglecting the management of other cardiovascular risk factors. abstract_id: PUBMED:10684225 Perinatal factors can be risk factors of diabetic nephropathy This article discusses the association of perinatal risk determinants and the future development of diabetic nephropathy. A low birth-weight seems to increase the risk for future cardiovascular disease, hypertension and insulin resistance, all of which are features of diabetic nephropathy. In a nation-wide case-controlled study we found that smoking during pregnancy and low maternal education, rather than low birth weight per se, increase the risk of developing incipient nephropathy in offspring with type-1 diabetes. These factors are in addition to, and independent of, a familial disposition for cardiovascular disease and hypertension. Persistent hyperglycaemia is a prerequisite for the influence of these factors. Our findings support the hypothesis of a multifactorial aetiology of diabetic nephropathy. abstract_id: PUBMED:24780453 Epidemiology and risk factors for diabetic kidney disease. Prevalence rates of diabetic kidney disease (DKD) are increasing in parallel with the incidence rates of diabetes mellitus. DKD has already become a significant health problem worldwide. Without radical improvements in prevention and treatment, DKD prevalence will continue to climb. The pathogenesis of DKD is complex and multifactorial, with genetic and environmental factors involved. Several nonmodifiable risk factors contribute to DKD, including genetics, sex, age, age at onset, and duration of diabetes.
However, there are also several modifiable risk factors that have a strong effect on the risk of DKD. Traditional modifiable factors include glycemic control, blood pressure, lipids, and smoking. Other recently discovered modifiable risk factors include chronic low-grade inflammation, advanced glycation end products, and lack of physical activity. Efficient management of these modifiable risk factors may improve the prognosis of diabetic patients at risk of DKD. abstract_id: PUBMED:11688065 Prevalence and therapy of vascular risk factors in hospitalized type 2 diabetic patients Type 2 diabetes mellitus is often associated with other risk factors for atherosclerotic disease, resulting in a marked increase in cardiovascular events and deaths. Combined treatment of hyperglycaemia, dyslipidaemia and hypertension significantly decreases the frequency and severity of diabetic microvascular and macrovascular complications. In a prospective cohort study including 356 type 2 diabetic patients (= 14% of all in-patients during a 6 months' period) the prevalence and treatment of cardiovascular risk factors were determined. Hypertension was diagnosed in 54% of the diabetic patients, albuminuria in 53% and dyslipidaemia in 47%; there were 40 smokers (17%). On admission the mean HbA1c was 7.7 ± 2.0%, the mean fasting plasma glucose 10.0 ± 4.2 mmol/l (and 8.9 ± 3.9 mmol/l, p = 0.03, when discharged), the mean systolic blood pressure was 144 ± 28 mm Hg (and 131 ± 20, p < 0.0001, when discharged), and the triglycerides were 2.6 ± 0.4 mmol/l. 34% of the hypertensive diabetic patients were treated with a combination of anti-hypertensive drugs, 44% of the dyslipidaemic diabetic patients were treated with statins, and 58% of all diabetic patients received aspirin or oral anticoagulation. 23% of the diabetic patients were treated by diet alone, 36% with insulin, 25% with sulfonylureas and 5% with metformin, while 11% were given a combination of antihyperglycaemic medication. In-hospital mortality was 11%. The diabetic patients were discharged on 2.9 ± 1.7 different drugs. The prevalence of associated cardiovascular risk factors is high in type 2 diabetic patients, and thus a combination of drugs is often warranted. The rate of admissions and in-hospital mortality is high in type 2 diabetic patients. abstract_id: PUBMED:12643183 Cardiovascular risk factors in diabetic patients with hypertension. Individuals with diabetes mellitus have cardiovascular disease (CVD) mortality comparable to nondiabetics who have suffered a myocardial infarction or stroke. Aggressive management of risk factors such as hypertension, dyslipidemia, and platelet dysfunction in persons with diabetes has been shown to reduce morbidity and mortality in prospective randomized controlled clinical trials. Accordingly, there are national mandates to lower blood pressure to less than 130/85 mm Hg, reduce low-density lipoprotein cholesterol to less than 100 mg/dL, and institute aspirin therapy in adult patients with diabetes. Although not definitively shown to reduce CVD, there are also recommendations to control the level of glycemia, as well. This article discusses CVD risk factors in the diabetic patient with hypertension. abstract_id: PUBMED:36568108 Risk factors of chronic kidney disease among type 2 diabetic patients with longer duration of diabetes.
Background: Chronic kidney disease (CKD) in patients with type 2 diabetes mellitus (T2DM) is the major cause of end-stage renal disease, characterized by proteinuria with a subsequent decline in glomerular filtration rate. Although hyperglycemia is the major risk factor for the development and progression of kidney disease among diabetic patients, many other risk factors also contribute to structural and functional changes in the kidneys. As recommended by Kidney Disease Improving Global Outcomes (KDIGO), CKD classification based on cause and severity links to the risk of adverse outcomes, including mortality and kidney outcomes. Objective: The aim of this study is to investigate the involvement of risk factors associated with the severity of CKD among participants with a longer duration of diabetes. This study also aims to find whether the number of risk factors varies among the risk-of-CKD-progression categories based on the KDIGO classification. Material And Methods: This cross-sectional study retrospectively selected 424 participants from a type 2 diabetic cohort and categorized them based on the classifications for the diagnosis of kidney diseases in patients with diabetes, according to the KDIGO guidelines. Odds ratios and 95% CIs of each risk factor according to severity of renal disease were determined. Results: Based on the KDIGO classification, participants with type 2 diabetes (T2D) were categorized into low risk (n=174); moderately increased risk (n=98); and high/very high risk (n=152). Type 2 diabetic participants with risk factors such as hyperlipidemia, hypertension, DM duration ≥15 years and diabetic retinopathy showed a high/very high risk of CKD progression when compared with the low-risk category, while T2D participants with risk factors such as lack of exercise, hypertension, and diabetic retinopathy showed a moderately increased risk of CKD progression. In addition, participants with the highest number of risk factors were significantly distributed among the high/very high risk of CKD progression category. Conclusion: The findings of this study indicate that patients with T2DM and a duration of ≥15 years, hyperlipidemia, hypertension and diabetic retinopathy have an increased prevalence of advanced CKD. In addition to this, an increased number of risk factors could be an indicator of the severity of CKD in T2D.
Furthermore, about 40% of newly diagnosed diabetic patients are also hypertensive. Elevated BP is related to the presence of left ventricular hypertrophy (LVH) and, indeed, LVH is observed in more than 70% of diabetic patients with hypertension. Several studies in diabetes have proven treatment benefits when different risk factors are addressed. The need for tighter control of cardiovascular risk factors in diabetic patients is clear. This may include better control of raised BP, hyperlipidaemia and hyperglycaemia as well as closer monitoring for the appearance of LVH and microalbuminuria. There is a clear need to translate the results of clinical trials into everyday clinical practice. abstract_id: PUBMED:22634917 Risk factors and diabetic retinopathy. The aim of the study was to determine the correlation between risk factors and diabetic retinopathy, which is the leading cause of blindness in developed countries for patients aged 20 to 65. We compared risk factors between patients without retinopathy, with non-proliferative and with proliferative retinopathy (p<0.05). Duration of diabetes is most important for the development of retinopathy. Hyperglycaemia and high blood pressure are important for progression. Better control of blood sugar and elevated blood pressure can reduce progression of retinopathy and risk of vision loss. abstract_id: PUBMED:24231493 Risk factors and management of diabetic nephropathy. To determine the risk factors for nephropathy in diabetic patients and to study the management of diabetic nephropathy (DN), we conducted a hospital-based prospective study in the Internal Medicine department of our hospital on 60 patients with DN and 60 diabetic patients without DN. An odds ratio (OR) analysis disclosed the following risk factors: hypertension (OR = 2.06), family history of diabetes (OR = 1.23), family history of DN (OR = 2.86), uncontrolled hyperglycemia (OR = 11.80), obesity (OR = 1.07), duration of diabetes between 11 and 20 years (OR = 4.69), smoking (OR = 2.79), alcohol consumption (OR = 3.75), other complications (OR = 2.03), lack of physical activity (OR = 1.51) and anemia (OR = 2.29). Based on these risk factors, we suggest that improving patients' knowledge of diabetes and its treatment, lifestyle modifications and aggressive management of the disease may delay the progression of disease to advanced stages. abstract_id: PUBMED:16395194 Risk factors for diabetic macular edema Aim: To determine the risk factors involved in the development of macular edema. Patients And Methods: A retrospective, case-control study conducted on 200 diabetic patients with diabetic retinopathy: 100 patients with macular edema and 100 control patients. All patients were examined using slit-lamp biomicroscopy completed by fluorescein angiography. A physical examination and the appropriate biological investigations were also carried out. Statistical significance was determined by the chi-square test and Fisher's exact test. A value of p<0.05 was considered statistically significant. Results: The incidence of macular edema was significantly associated with longer duration of diabetes, a higher level of glycemia, the type of diabetes in the older-onset group using insulin, the severity of diabetic retinopathy at baseline, and old age. Panretinal photocoagulation was also found to be a risk factor for macular edema. Conclusion: These data suggest that laser photocoagulation, reduction of hyperglycemia and better control of other factors may result in a beneficial decrease in macular edema.
Answer: The management of diabetes involves addressing multiple risk factors to prevent complications such as cardiovascular disease and diabetic nephropathy. However, evidence suggests that other risk factors may not be treated as diligently as glycemia in diabetic patients. A study examining administrative data records for diabetic patients aged 65 or older in Ontario in 1999 found that patients who were diligent in glucose management, as indicated by insulin use and frequent capillary glucose testing, were no more likely to use medications to control lipids or blood pressure than those who were not as diligent in glucose management. This suggests that preventative care for patients with diabetes may be too focused on glycemic control and may be neglecting the management of other cardiovascular risk factors (PUBMED:15557673). Furthermore, the prevalence of cardiovascular risk factors is high in type 2 diabetic patients, and a combination of drugs is often warranted to manage these risks. Despite this, the rate of admissions and in-hospital mortality remains high in type 2 diabetic patients, indicating a potential gap in the comprehensive management of risk factors (PUBMED:11688065). National mandates recommend aggressive management of risk factors such as hypertension, dyslipidemia, and platelet dysfunction in persons with diabetes to reduce morbidity and mortality. However, there is a need for tighter control of these cardiovascular risk factors in diabetic patients, which may include better control of raised blood pressure, hyperlipidemia, and hyperglycemia, as well as closer monitoring for the appearance of left ventricular hypertrophy and microalbuminuria (PUBMED:12769160). In conclusion, while glycemic control is a critical aspect of diabetes management, other risk factors such as lipid levels and blood pressure may not be managed with the same level of diligence. This could lead to an increased risk of complications, highlighting the need for a more balanced approach to diabetes care that addresses all relevant risk factors.
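Several abstracts in this item report odds ratios (e.g., OR = 11.80 for uncontrolled hyperglycemia in PUBMED:24231493). As a reminder of what that measure is, here is a minimal Python sketch computing an odds ratio and its Woolf 95% confidence interval from a 2x2 exposure-outcome table; all counts are invented for illustration and are not taken from any of the studies.

    import math

    # Hypothetical counts: rows = exposed/unexposed, columns = nephropathy yes/no.
    a, b = 30, 20   # exposed:   with / without nephropathy (invented)
    c, d = 10, 40   # unexposed: with / without nephropathy (invented)

    odds_ratio = (a * d) / (b * c)            # cross-product ratio -> 6.0
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
    low = math.exp(math.log(odds_ratio) - 1.96 * se)
    high = math.exp(math.log(odds_ratio) + 1.96 * se)
    print(odds_ratio, (low, high))

An OR above 1 with a confidence interval excluding 1 indicates that the exposure is associated with higher odds of the outcome, which is how the nephropathy risk factors above should be read.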
Instruction: Do patients' comfort levels and attitudes regarding medical student involvement vary across specialties? Abstracts: abstract_id: PUBMED:18278651 Do patients' comfort levels and attitudes regarding medical student involvement vary across specialties? Background: Studies on patient comfort with medical student involvement have been conducted within several specialties and have consistently reported positive results. However, it is unknown whether the intrinsic differences between specialties may influence the degree to which patients are comfortable with student involvement in their care. Aim: This is the first study to investigate whether patient comfort varies across specialties. Methods: A total of 625 patients were surveyed in teaching clinics in Family Medicine, Obstetrics/Gynaecology, Urology, General Surgery, and Paediatrics. Seven patient attitudes and patients' comfort levels based on student gender, level of training, and type of clinical involvement were assessed. Results: Patients in all specialties shared similar comfort levels and attitudes regarding medical student involvement for the majority of parameters assessed, suggesting that findings in this area may be generalised between specialties. Most of the inter-specialty variation found pertained to patient preference for student gender and the genitourinary specialties. Conclusion: As there are numerous specialties that have never undergone a similar investigation of their patients, this study has important implications for medical educators in those specialties by supporting their ability to apply the results and recommendations of studies conducted in other specialties to their own. abstract_id: PUBMED:11031149 Patients' attitudes and comfort levels regarding medical students' involvement in obstetrics-gynecology outpatient clinics. Purpose: To identify patients' attitudes toward the role of medical students, their preferences regarding medical student involvement, and their comfort level with a medical student's presence during common clinical situations in obstetrics-gynecology. Method: A self-administered questionnaire was distributed to patients waiting for an office visit with the obstetricians or gynecologists who served as preceptors for both male and female medical students. The questionnaire asked patients about their comfort levels with having medical students present during commonly encountered clinical situations. A random subsample of these patients was also asked whether they would allow a medical student to be present during future visits, and why or why not. Results: A total of 229 patients completed the survey and 124 responded to the supplemental survey. Sixteen respondents were excluded due to missing data or a lack of an adequate comparison group. A majority responded they would feel comfortable having a medical student present during most clinical situations. Almost half of the patients preferred to see the doctor and medical student together, while less than a quarter wanted to see just the physician. Patients with more experience with medical students were more likely to favor medical student involvement and would feel more comfortable having a medical student present during obstetrics or gynecology clinical situations. Conclusion: Patients are willing to involve medical students and feel comfortable with them in the obstetrics-gynecology clinic.
However, physicians and clinics need to take steps to ensure that patient willingness and comfort are maintained by asking patients about their comfort with medical student involvement, clearly outlining the roles and responsibilities of participating medical students, and gradually increasing medical students' responsibilities as patients gain more experience with them. abstract_id: PUBMED:36082526 Patients' Attitudes Towards Medical Student Presence in Psychiatric Consultations. Objectives: Studies on patient-student relationships have to date largely focused on student attitudes. This study explores attitudes of patients with psychiatric illness in Ireland towards medical students. Patients' experience of consent for student involvement is an area of concern in previous studies and is also quantified here. Methods: This was a mixed-methods cross-sectional survey of Irish adult psychiatric patients. Quantitative analysis was carried out using SPSS 22 (Statistical Product and Service Solutions, Version 22, IBM). Differences in Likert scores between groups (male/female, hospital site, past experience with students/no experience) were analysed using ordinal logistic regression, with a p-value below 0.05 being significant. Qualitative data were analysed by thematic analysis using OpenCode 4.03. Results: A total of 340 patients completed the survey. The mean age (SD) was 44.8 (16.3). 52.8% were female, 75.2% were outpatients. 24.3% had never met a medical student. Most patients were comfortable seeing students, but preferred students being passive observers. Patients with previous student experience had higher comfort levels and more positive attitudes. Although most patients (63.7%) strongly agreed they had been asked for consent, only 49.3% felt they had been given sufficient information. Qualitative data revealed preference for adequate information and notice of involvement. Patients felt pressured by student presence in certain circumstances. Conclusions: Psychiatric patients are comfortable with students but many feel inadequately informed. Patients recognise the benefits of interacting with students. More information is needed regarding circumstances in which patients give consent to involvement with students. abstract_id: PUBMED:24715936 Obstetric and gynecologic patients' attitudes and perceptions toward medical students in Saudi Arabia. Objective: To identify patients' attitudes, preferences and comfort levels regarding the presence and involvement of medical students during consultations and examinations. Methods: A cross-sectional descriptive study was conducted from September 2011 to December 2011 at King Abdulaziz University Hospital in Jeddah, Saudi Arabia. Participants were randomly selected from the outpatient and inpatient clinics at the Department of Obstetrics and Gynecology and the Emergency Department, provided they were admitted for obstetric or gynecology-related conditions. Data were collected using a structured questionnaire, and data analysis was performed using the Statistical Package for Social Sciences. Results: Of the 327 patients who were recruited, 272 (83%) were elective patients who were seen at the outpatient and inpatient clinics of the Department of Obstetrics and Gynecology (group I). The other 55 (16.8%) were seen at the Emergency Department or the Labor and Delivery Ward (group II).
One hundred seventy-nine participants (160 [58.8%] in group I and 19 [34.5%] in group II) reported positive attitudes about the presence of female medical students during consultations. Fewer participants (115 [42.3%] in group I and 17 [30.9%] in group II) reported positive attitudes regarding the presence of male medical students during consultations (p=0.095). The gender of the medical student was the primary factor that influenced patients' decision to accept or decline medical student involvement. No significant associations were observed between patients' attitudes and perceptions toward medical students and the patients' age, educational level, nationality or the gender of the consultant. Conclusion: Obstetrics and Gynecology patients are typically accepting of female medical student involvement during examinations. Student gender is the primary factor that influences patient attitudes regarding student involvement during physical examinations. abstract_id: PUBMED:26158326 Patients' Attitudes Toward Medical Student Participation Across Specialties: A Systematic Review. Phenomenon: Medical students commonly participate in patient care in a variety of different settings. However, a systematic review of patients' attitudes toward medical student participation across specialties has not been performed. Approach: The authors searched 7 databases (CINAHL, Cochrane Library, ERIC, MEDLINE, PsycINFO, Scopus, and Web of Science) between January 1, 1999, and August 5, 2014. Two authors independently screened the results and selected articles that were written in English, were published in a peer-reviewed journal, and used a structured or semistructured survey or interview to determine patients' attitudes toward medical student participation in their care. Study quality was assessed using the Medical Education Research Study Quality Instrument. Findings: Fifty-nine studies were included. Average study quality was low. Sixty-one unique evaluation instruments were used, and 34 instruments (56%) lacked validity data. Patient satisfaction was not significantly affected by medical student participation. However, patients' acceptance of medical student participation varied widely between studies and depended on the type of participation. The most common reason for acceptance was a desire to contribute to the education of others, and the most common reason for refusal was concerns about privacy. Minorities were more likely to refuse medical student participation. Patients preferred to be informed before medical students participated in their care. Insights: Patient satisfaction is not significantly affected by medical student participation. However, patient satisfaction may be a poor surrogate marker of patients' acceptance of medical students. Future research should employ validated evaluation instruments to further explore patients' attitudes toward medical student participation. abstract_id: PUBMED:34457724 Medical Student Involvement and Perceptions of the Admissions Process. Minimal attention has been given to student involvement in medical school admissions practices. This study explores the role of medical students and perceptions of their involvement on admissions committees. Survey responses from US medical schools were varied regarding student role, service on the committee, and voting privileges. Medical student admissions committee members surveyed at our institution felt they were able to offer different perspectives to applicant evaluation than faculty.
Findings suggest that medical students may be able to contribute to the admissions process in a variety of ways depending on institution-specific missions and goals. abstract_id: PUBMED:33527055 Medical Student Comfort With Procedural Skills Performance Based on Elective Experience and Career Interest. Introduction Despite increased efforts, studies suggest that exposure to procedural skills in undergraduate medical training is insufficient. As medical students have low self-reported competence in many skills, a significant concern is that medical students are underprepared for a clerkship. Furthermore, pre-clerkship electives selected based on student career interests can provide students with additional skills learning opportunities. The impact of career interest and elective choice on student comfort with procedural skills is unclear. This study examines the relationship between student procedural skills comfort, career interest, and elective choices. Materials and methods An evidence-based questionnaire was synthesized following a literature search using PubMed, Embase, and Google Scholar. Surveys were completed by second-year medical students. A Likert scale was used to evaluate students' exposure, comfort, and motivation to learn common procedural skills. Descriptive, Pearson's chi-square and Spearman's rho correlation coefficient analyses were performed to evaluate the relationship between career interests, elective exposure, and procedural skills. Results Medical students (&gt;60%) reported poor comfort levels for most skills, despite &gt;80% of students displaying high motivation to learn. Elective choice impacted student comfort levels as students who completed electives in anesthesiology were more comfortable with performing intubation (23% vs 10%, p = 0.026) and IV insertion (38% vs 13%, p = 0.002). Those with surgical career interests were less comfortable performing Foley catheter insertion in males (7% vs 5%, p = 0.033) and in females (7% vs 5%, p = 0.008). Conclusions This study supports that medical students feel low levels of comfort with performing procedural skills despite high motivation for learning. Comfort was influenced by both career interest and elective experience. Programs aiming to increase students' comfort levels in performing procedural skills should adapt curricula toward increasing early exposure to these skills. abstract_id: PUBMED:31809635 An educational intervention to improve attitudes regarding HPV vaccination and comfort with counseling among US medical students. Many medical students are not comfortable recommending the human papillomavirus (HPV) vaccine because they do not feel prepared to discuss it with their patients. A prior study demonstrated that this is particularly a problem among unvaccinated students. Our purpose was to determine if medical student attitudes and comfort with counseling could be improved by attending a single lecture delivered by an expert on the topic. To assess the effects of the educational program, we conducted pre- and posttests on medical students before and after a single lecture on HPV vaccination. Changes in items related to attitude and comfort were examined. Student characteristics associated with changes in scores were also examined and compared. A total of 256 medical students participated in the pre- and posttests. Before the lecture, students demonstrated low knowledge of HPV vaccination and did not feel comfortable counseling parents of younger patients. 
However, students <30 years of age demonstrated significant improvements in comfort after the lecture. Asian and Hispanic students showed the greatest improvement in comfort with counseling, as did students who reported they had not received the HPV vaccine. Attending a single lecture given by an expert can improve medical students' attitudes and comfort with HPV vaccine counseling, especially if the students were not vaccinated themselves. This study suggests that including material on HPV vaccination in the standard medical student curriculum could help increase physician recommendation for the HPV vaccine. abstract_id: PUBMED:25010234 Comparison of patient attitudes and provider perceptions regarding medical student involvement in obstetric/gynecologic care. Background: Community physicians are becoming increasingly involved in clinical medical education. Some obstetrician/gynecologists have expressed reluctance to participate as clinical preceptors for medical students due to the sensitive nature of many of their patient encounters and concern for diminished patient satisfaction. Purposes: The purpose was to evaluate the willingness of community ob/gyn patients to participate in clinical medical education and to determine the accuracy of provider perceptions regarding this issue. Methods: Surveys were distributed to women seeking ob/gyn care at 4 private practice sites in Tucson, Arizona. The surveys explored patient attitudes toward community physician involvement in clinical medical education as well as factors influencing personal willingness to include students as part of their healthcare team. Similar surveys were administered to the ob/gyn providers in those sites and evaluated their expectations of aggregate patient responses. Results: Of 234 patient respondents, 87.6% believed that physicians have a responsibility to participate in medical education. Providers underestimated the number of patients for whom such participation would positively influence their personal provider choice (12.7% vs. 30.8%, p<.01) and overestimated negative (16.7% vs. 6.8%, p<.01) influence. Providers also underestimated acceptance rates of student pelvic examinations based on learner gender (13.8% vs. 24.3% male students, p=.01; 28.1% vs. 44.4% female students, p<.01). Conclusions: Patients in southern Arizona recognize and appreciate physicians' responsibility to educate future providers of women's healthcare. Providers may underestimate patient acceptance and value of students as part of their healthcare team. This bias may unnecessarily limit student exposure to clinical learning opportunities.
Curricular elements were presented in several different formats aimed at increasing student knowledge about vaccinations and student capacity to work with vaccine-hesitant patients and families (problem-based learning, didactic lecture and interactive panel discussion). In the pre-survey, the majority of students reported having already encountered people who decline vaccinations (78.8%). Additionally, in the pre-survey the majority of medical students expressed strong support for vaccination (99.6% agreed with the statement that "Vaccinations are a great public health accomplishment") and strong disagreement with both of the assertions that childhood vaccines cause autism and vaccine preservatives cause adverse health effects. In response to questions about comfort level while talking with patients about vaccine choices, baseline responses demonstrated a moderately high level of comfort. Post-course surveys revealed a statistically significant increase in student-rated comfort level in talking with patients about vaccine choices (pre-survey 79.2% report comfort versus post 97.8%; p < 0.001). Though this study is limited to student self-reporting, survey responses suggest that targeted curricular elements can improve medical student confidence in counseling patients about vaccinations. Future curricular elements designed to directly observe student performance could provide verification of counseling skill acquisition. Answer: Patients' comfort levels and attitudes regarding medical student involvement do not significantly vary across a wide range of specialties, as suggested by the studies. A study surveying 625 patients across Family Medicine, Obstetrics/Gynaecology, Urology, General Surgery, and Paediatrics found that patients in all specialties shared similar comfort levels and attitudes regarding medical student involvement for the majority of parameters assessed. This suggests that findings in this area may be generalized between specialties, although some variation was noted pertaining to patient preference for student gender in genitourinary specialties (PUBMED:18278651). In obstetrics-gynecology outpatient clinics, a majority of patients felt comfortable having medical students present during most clinical situations, and patients with more experience with medical students were more likely to favor their involvement (PUBMED:11031149). Similarly, in psychiatry, most patients were comfortable seeing students, especially if they had previous experience with students, but many felt inadequately informed about the students' roles (PUBMED:36082526). In Saudi Arabia, obstetric and gynecologic patients generally accepted female medical student involvement during examinations, with student gender being a primary factor influencing patient attitudes (PUBMED:24715936). A systematic review also indicated that patient satisfaction was not significantly affected by medical student participation, but acceptance varied depending on the type of participation and patients preferred to be informed before medical students participated in their care (PUBMED:26158326). Overall, while there may be some differences in patient comfort and attitudes based on specific circumstances, such as the type of clinical involvement or the gender of the medical student, the general trend across various specialties indicates a level of acceptance and comfort with medical student involvement in patient care.
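Several of the abstracts above compare comfort proportions between patient or student groups with chi-square tests. As an illustration of that kind of comparison, a minimal Python sketch using SciPy follows; the counts are made up and do not come from the surveys cited.

    from scipy.stats import chi2_contingency

    # Invented 2x2 table: comfort with a student present, by prior exposure.
    table = [[80, 20],   # prior experience with students: comfortable / not
             [50, 50]]   # no prior experience:            comfortable / not

    chi2, p, dof, expected = chi2_contingency(table)
    print(chi2, p)  # a small p (e.g. < .05) suggests comfort differs by exposure

This is the same style of test behind findings such as patients with more student experience being more likely to favor medical student involvement.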
Instruction: Are religiousness and death attitudes associated with the wish to die in older people? Abstracts: abstract_id: PUBMED:26300555 Are religiousness and death attitudes associated with the wish to die in older people? Background: A wish to die is common in older persons and is associated with increased mortality. Several risk factors have been identified, but the association between religiousness and a wish to die in older adults has been underexplored, and the association between death attitudes and the presence of a wish to die has not been investigated yet. The aim of this study is to explore the relationship between religiousness and death attitudes on the one hand and wish to die on the other hand, adjusting for clinical factors such as the presence of depression or somatic disorder. Methods: The sample comprised 113 older inpatients (from a psychiatric and somatic ward) with a mean age of 74 years. Psychiatric diagnoses were assessed by the Structured Clinical Interview for DSM-IV Disorders, and logistic regression analyses estimated the unique contribution of religiousness and death attitudes to the wish to die, controlling for socio-demographic variables, depressive disorder, and somatic symptoms. Results: Both religiousness and death attitudes were associated with a wish to die in univariate models. Adding these variables in a multivariate logistic hierarchical model, death attitudes remained significant predictors but religiousness did not; 55% of the pseudovariance of the wish to die was explained by these variables, with an effect size of 0.89. Major depressive episode, somatic symptoms, Fear of Death, and Escape Acceptance were the most important predictors of the wish to die. Conclusions: This study suggests that how older adults perceive death partly determines whether they have a wish to die. There may be a clinical, patient-oriented benefit in discussing with older patients how they perceive death, as this can play a role in the early detection (and prevention) of death or suicide ideation and associated behaviors in older adults. abstract_id: PUBMED:32112569 Wish to Die in Older Patients: Development and Validation of Two Assessment Instruments. Objectives: The wish to die may be different in geriatric patients than in younger terminally ill patients. This study aimed to develop and validate instruments for assessing the wish to die in geriatric patients. Design: Cross-sectional study. Setting: Geriatric rehabilitation unit of a university hospital. Participants: Patients (N = 101) aged 65 years or older with a Mini-Mental State Examination score of 20 or higher, admitted consecutively over a 5-month period. Measurements: The Schedule of Attitudes Toward Hastened Death (SAHD) was adapted to the older population (SAHD-Senior). A second tool was developed based on qualitative literature, the Categories of Attitudes Toward Death Occurrence (CADO). After cognitive pretesting, these instruments were validated in a sample of patients admitted to a geriatric rehabilitation unit. Results: The SAHD-Senior showed good psychometric properties and a unifactorial structure. In the studied sample, 12.9% had a SAHD-Senior score of 10 or higher, suggesting a significant wish to die. Associations were observed between high levels of the SAHD-Senior and advanced age, high levels of depressive symptoms, lower quality of life, and lower cognitive function. The CADO allowed for passive death wishes to be distinguished from wishes to actively hasten death.
According to the CADO, 14.9% of the sample had a wish to die. The two instruments showed a concordance rate of 90.1%. Conclusion: The wish to die in older patients admitted to rehabilitation can be validly assessed with two novel instruments. The considerable proportion with a wish to die warrants investigation into the concept, determinants, and management of the wish to die. abstract_id: PUBMED:33570600 The 'Wish to Die' in later life: prevalence, longitudinal course and mortality. Data from TILDA. Background: 'Wish to Die' (WTD) involves thoughts of or wishes for one's own death or that one would be better off dead. Objective: To examine the prevalence, longitudinal course and mortality-risk of WTD in community-dwelling older people. Design: Observational study with 6-year follow-up. Setting: The Irish Longitudinal Study on Ageing, a nationally representative cohort of older adults. Subjects: In total, 8,174 community-dwelling adults aged ≥50 years. Methods: To define WTD, participants were asked: 'In the last month, have you felt that you would rather be dead?' Depressive symptoms were measured using the CES-D. Mortality data were compiled by linking administrative death records to individual-level survey data from the study. Results: At Wave 1, 3.5% of participants (279/8,174) reported WTD. Both persistent loneliness (OR 5.73 (95% CI 3.41-9.64)) and depressive symptoms (OR 6.12 (95% CI 4.33-8.67)) were independently associated with WTD. Of participants who first reported WTD at Wave 1 or 2, 72% did not report WTD when reassessed after 2 years, and the prevalence of depressive symptoms (-44%) and loneliness (-19%) was more likely to decline in this group at follow-up. Fifteen per cent of participants expressing WTD at Wave 1 died during a 6-year follow-up. Conclusions: WTD amongst community-dwelling older people is frequently transient and is strongly linked with the course of depressive symptoms and loneliness. An enhanced focus on improving access to mental health care and addressing social isolation in older people should therefore be a public health priority, particularly in the current context of the Covid-19 pandemic.
Multivariable analysis showed that increased age was positively (odds ratio [OR] for a 5-year increase: 1.43, 95% CI 0.99-2.04, P = .048) and quality of life negatively (OR: 0.54, 95% CI 0.39-0.75, P < 0.001) associated with the likelihood of wishing to die. Participants did not experience stress during the interview. Conclusions: Prevalence of the wish to die among elderly patients admitted to an acute hospital setting is low, but highly relevant for clinical practice. Older age increases and better quality of life decreases the likelihood of wishing to die. Discussion of death appears to be well tolerated by patients. abstract_id: PUBMED:28933658 A Captive, a Wreck, a Piece of Dirt: Aging Anxieties Embodied in Older People With a Death Wish. The aims of the present study were to explore the use and meaning of metaphors and images about aging in older people with a death wish and to elucidate what these metaphors and images tell us about their self-understanding and imagined feared future. Twenty-five in-depth interviews with Dutch older people with a death wish (median 82 years) were analyzed by making use of a phenomenological-hermeneutical metaphor analysis approach. We found 10 central metaphorical concepts: (a) struggle, (b) victimhood, (c) void, (d) stagnation, (e) captivity, (f) breakdown, (g) redundancy, (h) subhumanization, (i) burden, and (j) childhood. It appears that the group under research does have profound negative impressions of old age and about themselves being or becoming old. The discourse used reveals a strong sense of distance, disengagement, and nonbelonging associated with their wish to die. This study empirically supports the theory of stereotype embodiment. abstract_id: PUBMED:24989084 Are gender and life attitudes associated with the wish to die in older psychiatric and somatic inpatients? An explorative study. Background: Death wishes are not uncommon in older persons, and to date, several risk factors have been identified. The presence of these risk factors is insufficient to fully understand why some older people, who are exposed to them, develop a wish to die and why others do not. The purpose of the study was to explore whether Purpose in Life as well as other life attitudes are associated with a death wish in older males and females. Methods: The sample comprised 113 older inpatients (from a psychiatric and somatic ward) with a mean age of 74 years. Psychiatric diagnoses were assessed by the SCID-II. Logistic regression analyses estimated the unique contribution of (the interaction between) life attitudes and gender to the wish to die, controlling for sociodemographic variables, depressive disorder, and somatic symptoms. Results: We observed a statistically significant relationship between life attitudes and the wish to die. Purpose in Life and the Purpose in Life*Gender interaction explained significant additional variance in the prediction of the wish to die. Purposelessness in life might therefore be an important correlate of a wish to die, especially in older men, independently from sociodemographic and clinical features. Conclusions: In assessing a wish to die in older adults, life attitudes need to be taken into account, besides the presence of a depressive disorder and/or somatic health. More specifically, finding or maintaining a purpose in later life might be an important feature in the prevention of the wish to die, especially in male persons.
abstract_id: PUBMED:27237707 Factors determining the balance between the wish to die and the wish to live in older adults. Background: The "Internal Struggle Hypothesis" (Kovacs and Beck) suggests that suicidal persons may have both a wish to live (WTL) and a wish to die (WTD). The current study investigates whether the three-group typology - "WTL", "ambivalent (AMB)", and "WTD" - is determined by common correlates of suicidality and whether these groups can be ordinally ranked. Methods: The sample comprised 113 older inpatients. Discriminant analysis was used to create two functions (combining social, psychiatric, psychological, and somatic variables) to predict the assignment of older inpatients into the groups WTL, AMB, and WTD. Results: The functions "Subjective Well-being" and "Social Support" allowed us to assign patients into these three distinct groups with good accuracy (66.1%). "Subjective Well-being" contrasted the groups WTD and WTL and "Social Support" discriminated between the groups WTD and AMB. "Social Support" was highest in the AMB group. Conclusions: Our results suggest a simultaneous presence of a WTL and a WTD in older inpatients, and also that the balance between them is determined by "Subjective Well-being" and "Social Support". Unexpectedly, the AMB group showed the highest scores on "Social Support". We hypothesize that higher social support might function as an important determinant of a remaining WTL when a WTD is present because of a lower sense of well-being. The study suggests that the groups WTL-AMB-WTD cannot be situated on a one-dimensional continuum. abstract_id: PUBMED:33807000 Assessing the Determinants of the Wish to Die among the Elderly Population in Ghana. Background: A wish to die is common in elderly people. Concerns about death wishes among the elderly have risen in Ghana, where the ageing transition is comparable to other low- and middle-income countries. However, nationally representative research on death wishes in the elderly in the country is not readily available. Our study aimed to assess the determinants of the wish to die among the elderly in Ghana. Methods: We analysed data from the World Health Organisation Global Ageing and Adult Health Survey, Wave 1 (2007-2008) for Ghana. Data on the wish to die, socio-demographic profiles, health factors and substance abuse were retrieved from 2147 respondents aged 65 and above. Ages of respondents were categorised as 65-74 years; 75-84 years; 85+ to reflect the main stages of ageing. Logistic regression models were fitted to assess the association between these factors and the wish to die. Results: Age, sex, place of residence, education, body mass index, hypertension, stroke, alcohol consumption, tobacco use, income, diabetes, visual impairment, hopelessness and depression had statistically significant associations with a wish to die. Older age cohorts (75-84 and 85+) were more likely to have the wish to die (AOR = 1.05, CI = 1.02-1.16; AOR = 1.48, CI = 1.22-1.94), compared to younger age cohorts (65-74 years). Persons who felt hopeless had higher odds (AOR = 2.15, CI = 2.11-2.20) of experiencing the wish to die as compared to those who were hopeful.
Conclusions: In view of the relationships between socio-demographic factors (age, sex, education and employment), hopelessness, anthropometric measures (body mass index), other health factors and the wish to die among the elderly in Ghana, specific biopsychosocial health promotion programmes are needed, including timely identification of persons at risk and appropriate interventions (e.g., psychotherapy, interpersonal support, alcohol-tobacco cessation therapy, clinical help) to promote their wish for a longer life. abstract_id: PUBMED:34020628 Current wishes to die; characteristics of middle-aged and older Dutch adults who are ready to give up on life: a cross-sectional study. Background: Literature shows that middle-aged and older adults sometimes experience a wish to die. Reasons for these wishes may be complex and involve multiple factors. One important question is to what extent people with a wish to die have medically classifiable conditions. Aim: (1) Estimate the prevalence of a current wish to die among middle-aged and older adults in The Netherlands; (2) explore which factors within domains of vulnerability (physical, cognitive, social and psychological) are associated with a current wish to die; (3) assess how many middle-aged and older adults with a current wish to die do not have a medically classifiable condition and/or an accumulation of age-related health problems. Methods: Data of 2015/16 from the Longitudinal Aging Study Amsterdam were used for this cross-sectional study (1563 Dutch middle-aged and older adults aged between 57 and 99 years), obtained through structured medical interviews and self-reported questionnaires. Three experienced physicians assessed whether the participants with a current wish to die could be classified as having a medically classifiable condition and/or an accumulation of age-related health problems. Results: N = 62 participants (4.0%) had a current wish to die. Having a current wish to die was associated with multiple characteristics across four domains of vulnerability, among which: self-perceived health, problems with memory, self-perceived quality of life and meaningfulness of life. Fifty-four participants with a current wish to die were assessed as having a medically classifiable condition, of whom one was also assessed as having an accumulation of age-related health problems. Six people were assessed to have neither, and for two people it was unclear. Conclusion: A small minority of middle-aged and older adults in the Netherlands have a current wish to die. Most of them can be classified with a medical condition and one person with an accumulation of age-related health problems. Furthermore, the findings show that having a current wish to die is multi-faceted. There is still a need for more knowledge, such as insight into the extent to which suffering stemming from the medically classifiable disease contributes to the development of the wish to die. abstract_id: PUBMED:31466508 Wish to Die: Suicide and Its Link to Euthanasia. This article analyzes suicidal behavior and how its inherent processes of death ideation can overlap with those of people seeking euthanasia. We present a literature review of three main events in suicide (suicidal ideation, suicide attempt, and suicide) in different populations and evaluate implications for health-care practice and risk assessment, taking into account the context of euthanasia. We ponder upon the motives behind suicide and its link with wish-to-die requests to hasten death.
We discuss the possibility of the reversal of a wish to die as well as a potential process of differentiating between individuals who would maintain their wish and benefit from termination of life and others who would later change their minds. Answer: Yes, religiousness and death attitudes are associated with the wish to die in older people. A study found that both religiousness and death attitudes were linked to a wish to die in univariate models. However, when these variables were included in a multivariate hierarchical logistic model, death attitudes remained significant predictors, but religiousness did not. Major depressive episode, somatic symptoms, Fear of Death, and Escape Acceptance were identified as the most important predictors of the wish to die (PUBMED:26300555). This suggests that how older adults perceive death partly determines whether they have a wish to die, and discussing death perceptions with older patients could support the early detection (and prevention) of death or suicide ideation and associated behaviors in older adults. A minimal sketch of the kind of adjusted logistic model these studies fit follows below.
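The studies in this item all rely on the same statistical machinery: a binary wish-to-die indicator regressed on clinical and demographic predictors, with adjusted odds ratios (ORs) and 95% confidence intervals read off the fitted logistic model (e.g., the OR of 1.43 per 5-year age increase and 0.54 per quality-of-life point in PUBMED:32928145). The sketch below illustrates that workflow on synthetic data; it is not code from any cited paper, and the predictor names and simulated effect sizes are assumptions chosen only to mirror the direction of the reported associations.

```python
# Illustrative sketch only: how studies like PUBMED:26300555 and
# PUBMED:32928145 derive adjusted odds ratios (ORs) for a wish to die from
# logistic regression. All data are synthetic; "age", "qol" and "depression"
# are hypothetical stand-ins for the predictors used in those studies.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 232  # sample size borrowed from PUBMED:32928145 for flavor

df = pd.DataFrame({
    "age": rng.normal(79, 8, n),           # years
    "qol": rng.normal(5, 1.5, n),          # quality-of-life score (higher = better)
    "depression": rng.binomial(1, 0.3, n), # depressive symptoms present (0/1)
})

# Simulate the binary outcome from an assumed "true" logistic model: older
# age and depression raise the odds; better quality of life lowers them.
lin_pred = -2.0 + 0.07 * (df["age"] - 79) - 0.6 * (df["qol"] - 5) + 1.0 * df["depression"]
df["wish_to_die"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_pred)))

# Fit the multivariate logistic model, as in the cited multivariable analyses.
X = sm.add_constant(df[["age", "qol", "depression"]])
fit = sm.Logit(df["wish_to_die"], X).fit(disp=False)

# Exponentiated coefficients are the adjusted ORs; exponentiated confidence
# bounds give the 95% CIs, i.e. the "OR (95% CI)" format used in the abstracts.
# (Dividing age by 5 before fitting would yield a per-5-year OR, as reported.)
ors = np.exp(fit.params).rename("OR")
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([ors, ci], axis=1))
```

The hierarchical aspect described in PUBMED:26300555 amounts to fitting this model in blocks (socio-demographics first, then clinical factors, then death attitudes) and checking how much additional variance each block explains.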
Instruction: Marital status and colon cancer outcomes in US Surveillance, Epidemiology and End Results registries: does marriage affect cancer survival by gender and stage? Abstracts: abstract_id: PUBMED:21466984 Marital status and colon cancer outcomes in US Surveillance, Epidemiology and End Results registries: does marriage affect cancer survival by gender and stage? Background: Marital status has been associated with outcomes in several cancer sites, including breast cancer, but little is known about colon cancer, the fourth most common cancer in the US. Methods: A total of 127,753 patients with colon cancer were identified who were diagnosed between 1992 and 2006 in the US Surveillance, Epidemiology and End Results (SEER) Program. Marital status consisted of married, single, separated/divorced and widowed. Chi-square tests were used to examine the association between marital status and other variables. The Kaplan-Meier method was used to estimate survival curves. Cox proportional hazards models were fit to estimate the effect of marital status on survival. Results: Married patients were more likely to be diagnosed at an earlier stage (and for men also at an older age) compared with single and separated/divorced patients, and more likely to receive surgical treatment than all other marital groups (all p < 0.0001). The five-year survival rate for single patients was six percentage points lower than for married patients, for both men and women. After controlling for age, race, cancer stage and surgery receipt, married patients had a significantly lower risk of death from cancer (for men, HR: 0.86, CI: 0.82-0.90; for women, HR: 0.87, CI: 0.83-0.91) compared with single patients. Within the same cancer stage, the survival differences between single and married patients were strongest for localized and regional stages, which had overall middle-range survival rates compared to in situ or distant stages, so that support from marriage could make a substantial difference. Conclusions: Marriage was associated with better outcomes of colon cancer for both men and women, and being single was associated with a lower survival rate from colon cancer. abstract_id: PUBMED:25749515 The influence of marital status on stage at diagnosis and survival of patients with colorectal cancer. Marital status was found to be an independent prognostic factor for survival in various cancer types, but it has not been fully studied in colorectal cancer (CRC). The Surveillance, Epidemiology and End Results database was used to compare survival outcomes with marital status in each stage. In total, 112,776 eligible patients were identified. Patients in the widowed group were more frequently elderly women, more often had colon cancer, and more often had stage I/II tumors (P < 0.001), but the surgery rate was comparable to that for the married group (94.72% vs 94.10%). Married CRC patients had better 5-year cause-specific survival (CSS) than those unmarried (P < 0.05). Further analysis showed that widowed patients always presented the lowest CSS compared with the other groups. Widowed patients had a 5% reduction in 5-year CSS compared with married patients at stage I (94.8% vs 89.8%, P < 0.001), a 9.4% reduction at stage II (85.9% vs 76.5%, P < 0.001), a 16.7% reduction at stage III (70.6% vs 53.9%, P < 0.001) and a 6.2% reduction at stage IV (14.4% vs 8.2%, P < 0.001). These results showed that unmarried patients were at greater risk of cancer-specific mortality.
Despite favorable clinicopathological characteristics, widowed patients were at the highest risk of death compared with other groups. abstract_id: PUBMED:34353300 The effect of marital and insurance status on the survival of elderly patients with stage M1b colon cancer: a SEER-based study. Background: Colon cancer largely affects elderly patients (age ≥ 60 years). The prognosis of patients diagnosed with the M1b stage is very poor. Marital and insurance status have been considered important prognostic factors in various cancer types. However, how these factors influence elderly patients with stage M1b colon cancer remains to be explored. This study aims to uncover the role of marital and insurance status in the survival of elderly patients with stage M1b colon cancer. Methods: We retrieved data for patients diagnosed with stage M1b colon cancer between 2010 and 2016 from the Surveillance, Epidemiology, and End Results (SEER) database. Our analysis of the clinicopathological features, overall survival (OS), and cancer-specific survival (CSS) was based on the marital and insurance status, respectively. Results: In sum, 5709 stage M1b colon cancer patients with complete information from SEER were enrolled for analysis. The OS and CSS of the Non-married group were poorer than those of the Married group. The OS and CSS of the Uninsured group were poorer than those of both the Insured and Medicaid groups. However, OS was comparable between the Uninsured and Medicaid groups. The findings suggest that marital and insurance status potentially impact the long-term survival of elderly patients with M1b colon cancer. The subgroup survival analyses revealed the lowest risk of death among the Insured Married group based on the comparison of OS and CSS across all other groups. Moreover, univariate and multivariate analyses revealed race, marital status, surgery, and chemotherapy as independent predictors of OS, whereas insurance status, surgery, and chemotherapy were independent predictors of CSS in elderly patients with M1b colon cancer. Conclusion: Marital and insurance status greatly impact the survival of elderly patients with M1b colon cancer. Therefore, it is imperative to provide more support to this vulnerable patient group who are lonely and uninsured, particularly in the psychological and health insurance aspects. abstract_id: PUBMED:36388692 Development and validation of a survival prediction model for 113,239 patients with colon cancer: a retrospective cohort study. Background: Colon cancer (CC) is the third most commonly diagnosed malignant tumor and remains the second leading cause of cancer-related deaths worldwide. However, the risk assessment of poor prognosis of CC has been limited in previous studies. This study aimed to develop a predictive nomogram for the survival of CC patients. Methods: In this retrospective cohort study, 113,239 CC patients from the Surveillance, Epidemiology, and End Results (SEER) database were randomly divided into training (n=56,619) and testing (n=56,620) sets with a ratio of 1:1. Demographic and clinical data and survival status of patients were extracted. The outcomes were 3- and 5-year survival of CC. Univariate and multivariate Cox regression analyses were used to screen the predictors to develop the predictive nomogram. Internal validation and stratified analyses were further used to assess the nomogram.
The C-index and area under the curve (AUC) were calculated to estimate the model's predictive capacity, and calibration curves were adopted to estimate the model fit. Results: In total, 38,522 (34.02%) patients died during the 5-year follow-up. The nomogram incorporated variables associated with the prognosis of CC patients, including age, gender, marital status, insurance status, tumor grade, stage (T/N/M), surgery, and number of nodes examined, with a C-index of 0.775 in the training set and 0.774 in the testing set. The AUCs of the nomogram for 3- and 5-year survival prediction in the training set were 0.817 and 0.808, with sensitivities of 0.688 and 0.716 and specificities of 0.785 and 0.740, respectively. Similar results were found in the testing set. The C-index of the predictive nomogram for male, female, White, Black, and other races was 0.769, 0.779, 0.773, 0.770, and 0.770, respectively. The calibration curves for the nomogram in the above five cohorts showed a good agreement between actual and predicted values. Conclusions: The nomogram may exhibit a certain predictive performance based on the SEER database, which may provide individual survival predictions for CC patients. abstract_id: PUBMED:35641198 Marital Status, Living Arrangement, and Cancer Recurrence and Survival in Patients with Stage III Colon Cancer: Findings from CALGB 89803 (Alliance). Background: Limited and conflicting findings have been reported regarding the association between social support and colorectal cancer (CRC) outcomes. We sought to assess the influences of marital status and living arrangement on survival outcomes among patients with stage III colon cancer. Patients And Methods: We conducted a secondary analysis of 1082 patients with stage III colon cancer prospectively followed in the CALGB 89803 randomized adjuvant chemotherapy trial. Marital status and living arrangement were both self-reported at the time of enrollment as, respectively, married, divorced, separated, widowed, or never-married, and living alone, with a spouse or partner, with other family, in a nursing home, or other. Results: Over a median follow-up of 7.6 years, divorced/separated/widowed patients experienced worse outcomes relative to those married regarding disease-free survival (DFS) (hazard ratio (HR), 1.44 (95% CI, 1.14-1.81); P = .002), recurrence-free survival (RFS) (HR, 1.35 (95% CI, 1.05-1.73); P = .02), and overall survival (OS) (HR, 1.40 (95% CI, 1.08-1.82); P = .01); outcomes were not significantly different for never-married patients. Compared to patients living with a spouse/partner, those living with other family experienced a DFS HR of 1.47 (95% CI, 1.02-2.11; P = .04), an RFS HR of 1.34 (95% CI, 0.91-1.98; P = .14), and an OS HR of 1.50 (95% CI, 1.00-2.25; P = .05); patients living alone did not experience significantly different outcomes. Conclusion: Among patients with stage III colon cancer who received uniform treatment and follow-up within a nationwide randomized clinical trial, being divorced/separated/widowed and living with other family were significantly associated with greater colon cancer mortality. Interventions enhancing social support services may be clinically relevant for this patient population. Trial Registration: ClinicalTrials.gov Identifier: NCT00003835. abstract_id: PUBMED:36644182 Chemotherapy exacerbates the survival paradox of colon cancer: a propensity score matching analysis. Background: Colon cancer is one of the most common tumor diseases in the world.
Currently, clinicians usually evaluate the survival and prognosis of patients according to their tumor-node-metastasis (TNM) stage. However, current studies have found that there is a certain survival paradox in TNM staging. Methods: In the Surveillance, Epidemiology, and End Results (SEER) database, patients diagnosed with colon cancer by surgical pathology from 2004 to 2011 were selected for analysis of 5-year overall survival (OS). Propensity score matching (PSM) was performed to analyze the difference in survival between different stages and the effect of chemotherapy on prognosis. Results: The OS of stage IIIA colon cancer patients was significantly superior to that of stage IIB/IIC patients, and of stage IIB or IIC patients separately, both before and after PSM analysis (P < 0.05 for all). Moreover, the difference in survival was more significant when stage IIB/IIC patients were compared with stage IIIA patients who received chemotherapy. Conclusions: The survival paradox existed both when all stage IIB/IIC patients and when stage IIB or IIC patients individually were compared with stage IIIA patients, and the survival paradox between stage IIIA and stage IIC was more obvious. Moreover, chemotherapy had a positive effect on the prognosis of patients with stage IIIA, IIC and IIB in this study. Chemotherapy exacerbates the survival paradox of colon cancer, even if it is not the cause of the survival paradox. abstract_id: PUBMED:37428251 Prognostic nomogram for colorectal cancer patients with multi-organ metastases: a Surveillance, Epidemiology, and End Results program database analysis. Background: A nomogram that integrates risk models and clinical characteristics can accurately predict the prognosis of individual patients. We aimed to identify the prognostic factors and establish nomograms for predicting overall survival (OS) and cause-specific survival (CSS) in patients with multi-organ metastatic colorectal cancer (CRC). Methods: Demographic and clinical information on multi-organ metastases from 2010 to 2019 was extracted from the Surveillance, Epidemiology, and End Results (SEER) Program. Univariate and multivariate Cox analyses were used to identify independent prognostic factors that were used to develop nomograms to predict CSS and OS, and to assess the concordance index (C-index), area under the curve (AUC), and calibration curve. Results: The patients were randomly assigned to the training and validation groups at a 7:3 ratio. A Cox proportional hazards model was fit for CRC patients to identify independent prognostic factors, including age, sex, tumor size, metastases, degree of differentiation, stage T, stage N, and primary and metastasis surgery. The Fine and Gray competing-risks model was used to identify the risk factors for CRC: death from other causes was treated as a competing event, and Cox models were used to identify the independent factors of CSS. By incorporating the corresponding independent prognostic factors, we established prognostic nomograms for OS and CSS. Finally, we used the C-index, ROC curve, and calibration plots to assess the utility of the nomogram. Conclusions: Using the SEER database, we constructed a predictive model for CRC patients with multi-organ metastases. Nomograms provide clinicians with 1-, 3-, and 5-year OS and CSS predictions for CRC, allowing them to formulate appropriate treatment plans. abstract_id: PUBMED:30882684 The impact of marital status on survival in patients with surgically treated colon cancer.
The aim of this study was to investigate the relationship between marital status and disease outcome in patients with surgically treated colon cancer. Between June 2010 and December 2015, a total of 925 patients with newly diagnosed colon cancer receiving curative resection were enrolled. The effect of marital status on 5-year disease-specific survival (DSS) was calculated using the Kaplan-Meier method, and was compared by log-rank tests. A Cox regression model was used to find significant independent variables and determine whether marriage had a survival benefit in patients with colon cancer, using stratified analysis. Among these patients, 749 (80.9%) were married, and 176 (19.1%) were unmarried, including 42 (4.5%) never-married, 42 (4.5%) divorced/separated, and 93 (10.1%) widowed. There was no significant difference between the married and unmarried groups in cancer stage or adjuvant treatment. Married patients had better 5-year DSS compared with unmarried patients (69.1% vs 55.9%, P < .001). Uni- and multivariate analyses also indicated that unmarried patients had worse 5-year DSS after adjusting for various confounders (adjusted HR [aHR], 1.66; 95% CI, 1.24-2.22). Further stratified analysis according to demographic variables revealed that unmarried status was a significant negative factor in patients with the following characteristics: age >65 years, female sex, well/moderately differentiated tumor, and advanced tumor-node-metastasis (TNM) stage disease (III-IV). Thus, marriage has a protective effect and contributes to better survival in patients with surgically treated colon cancer. Additional social support for unmarried colon cancer patients may lead to improved outcomes. abstract_id: PUBMED:31630307 Disparities in surgery for early-stage cancer: the impact of refusal. Background: For early-stage cancer, surgery is often curative, yet refusal of recommended surgical interventions may be contributing to disparities in patient treatment. This study aims to assess predictors of surgery refusal in early-stage cancers, and the impact on survival. Methods: Patients with primary stage I and II lung, prostate, breast, and colon cancers who were recommended surgery, diagnosed between 2007 and 2014, were identified in the Surveillance, Epidemiology and End Results database (n = 498,927). Surgery refusal was reported for 5,757 (1.2%) patients. Associations between sociodemographic variables and surgery refusal by cancer type were assessed in adjusted multivariable logistic regression models. The impact of refusal on survival was investigated using adjusted Cox proportional hazards regression in a propensity score-matched cohort. Results: Increasing age (p < 0.0001 for all four cancer types), non-Hispanic Black race/ethnicity (ORadjBREAST 2.00, 95% CI 1.68-2.39; ORadjCOLON 3.04, 95% CI 2.17-4.26; ORadjLUNG 2.19, 95% CI 1.77-2.71; ORadjPROSTATE 2.02, 95% CI 1.86-2.20; vs non-Hispanic White), insurance status (uninsured: ORadjBREAST 2.75, 95% CI 1.89-3.99; ORadjPROSTATE 2.10, 95% CI 1.72-2.56; vs insured), marital status (ORadjBREAST 2.16, 95% CI 1.85-2.51; ORadjCOLON 1.56, 95% CI 1.16-2.10; ORadjLUNG 2.11, 95% CI 1.80-2.47; ORadjPROSTATE 1.94, 95% CI 1.81-2.09), and stage (ORadjBREAST 1.94, 95% CI 1.70-2.22; ORadjCOLON 0.13, 95% CI 0.09-0.18; ORadjLUNG 0.71, 95% CI 0.52-0.96) were all associated with refusal; patients refusing surgery were at increased risk of death compared to patients who underwent surgery.
Conclusions: More vulnerable patients are at higher risk of refusing recommended surgery, and this decision negatively impacts their survival. abstract_id: PUBMED:38111780 Advantage of log odds of positive lymph nodes in prognostic evaluation of patients with early-onset colon cancer. Background: Colon cancer (CC) is one of the most common cancers of the digestive tract, the third most common cancer worldwide, and the second most common cause of cancer-related deaths. Previous studies have demonstrated a higher risk of lymph node metastasis (LNM) in young patients with CC. It might be reasonable to treat patients with early-onset locally advanced CC with extended lymph node dissection. However, few studies have focused on early-onset CC (ECC) patients with LNM. At present, the methods of predicting and evaluating the prognosis of ECC patients with LNM are controversial. Aim: To compare the prognostic values of four lymph node staging indices and establish the best nomogram for patients with ECC. Methods: From the data of patients with CC obtained from the Surveillance, Epidemiology, and End Results (SEER) database, data on young patients with ECC (≤ 50 years old) were screened. Patients with unknown data were excluded from the study, while the remaining patients were included. The patients were randomly divided into a training group and a testing group in a 7:3 ratio for model building; the model was constructed on the training group and verified on the testing group. Using multiple Cox regression models to compare the prediction efficiency of LNM indicators, nomograms were built based on the best model selected for overall survival (OS) and cause-specific survival (CSS). In the two groups, the performance of the nomogram was evaluated by constructing a calibration plot, time-dependent area under the curve (AUC), and decision curve analysis. Finally, the patients were grouped based on the risk score predicted by the prognosis model, and the survival curve was constructed after comparing the survival status of the high- and low-risk groups. Results: Records of 26,922 ECC patients were screened from the SEER database. N classification, positive lymph nodes (PLN), lymph node ratio (LNR) and log odds of PLN (LODDS) were considered to be independent predictors of OS and CSS. In addition, independent risk factors for OS included gender, race, marital status, primary site, histology, grade, T, and M classification, while the independent prognostic factors for CSS included race, marital status, primary site, grade, T, and M classification. The prediction model including LODDS had the lowest Akaike information criterion and the highest concordance indexes and AUCs. Factors including gender, race, marital status, primary site, histology, grade, T, M classification, and LODDS were integrated into the OS nomogram, while race, marital status, primary site, grade, T, M classification, and LODDS were included in the CSS nomogram. The nomogram for both cohorts was successfully verified in terms of prediction accuracy and clinical practicability. Conclusion: LODDS is superior to N classification, PLN, and LNR in ECC. The nomogram containing LODDS might be helpful in tumor evaluation and clinical decision-making, since it provides an appropriate prediction of ECC. Answer: Marital status has been found to significantly affect colon cancer outcomes, with several studies indicating that married patients generally have better survival rates compared to unmarried patients.
Married patients with colon cancer were more likely to be diagnosed at an earlier stage and to receive surgical treatment than their unmarried counterparts, which contributed to a higher five-year survival rate for married individuals (PUBMED:21466984). This survival advantage for married patients was observed for both men and women and was particularly strong for localized and regional stages of cancer (PUBMED:21466984). Further research has shown that widowed patients had the lowest cause-specific survival (CSS) rates compared to other marital statuses, with a notable reduction in CSS across all stages of colorectal cancer (CRC) when compared to married patients (PUBMED:25749515). Additionally, the survival of elderly patients with stage M1b colon cancer was also influenced by marital status, with non-married and uninsured patients experiencing poorer overall survival (OS) and CSS (PUBMED:34353300). A study that developed a survival prediction model for colon cancer patients also identified marital status as one of the variables associated with the prognosis of colon cancer patients (PUBMED:36388692). Similarly, among patients with stage III colon cancer, those who were divorced, separated, or widowed had worse disease-free survival (DFS), recurrence-free survival (RFS), and overall survival (OS) compared to married patients (PUBMED:35641198). Moreover, unmarried status was identified as a significant negative factor in patients with surgically treated colon cancer, with unmarried patients having worse five-year disease-specific survival (DSS) after adjusting for various confounders (PUBMED:30882684). Refusal of recommended surgery, which was more common among more vulnerable patients, including those who were unmarried, also negatively impacted survival (PUBMED:31630307). In summary, marital status appears to be an independent prognostic factor for survival in patients with colon cancer, with marriage being associated with better outcomes across various stages of the disease and for both genders. The reasons for this may include better social support, earlier detection, and more aggressive treatment among married individuals.
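Methodologically, most of the studies cited in this answer report Kaplan-Meier survival curves compared by log-rank tests plus multivariate Cox proportional hazards models, with the marriage effect expressed as a hazard ratio (HR around 0.86-0.87 for married versus single in PUBMED:21466984). The following sketch reproduces that workflow on synthetic data using the lifelines package; the column names, simulated cohort, and effect sizes are assumptions for illustration, not data from any of the cited studies.

```python
# Illustrative sketch only: the Kaplan-Meier / Cox proportional hazards
# workflow used throughout the cited SEER analyses, run on synthetic data.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
married = rng.binomial(1, 0.6, n)
age = rng.normal(68, 10, n)
stage = rng.integers(1, 5, n)  # crude stand-in for tumor stage I-IV

# Simulate exponential survival times (months) with a protective "married"
# effect of roughly HR 0.87, the order of magnitude reported in PUBMED:21466984.
hazard = 0.02 * np.exp(0.03 * (age - 68) + 0.5 * (stage - 1) - 0.14 * married)
time = rng.exponential(1.0 / hazard)
event = (time < 60).astype(int)   # administrative censoring at 5 years
time = np.minimum(time, 60.0)

df = pd.DataFrame({"time": time, "event": event,
                   "married": married, "age": age, "stage": stage})

# Kaplan-Meier curves by marital status; the "5-year survival" figures in
# the abstracts are these curves evaluated at t = 60 months.
km = KaplanMeierFitter()
for label, grp in df.groupby("married"):
    km.fit(grp["time"], grp["event"], label=f"married={label}")
    print(f"married={label}: S(60 months) = {km.predict(60.0):.3f}")

# Cox model adjusting for age and stage, as in the cited multivariate
# analyses; exp(coef) for "married" is the adjusted hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```

The propensity score matching used in PUBMED:36644182 is a complementary design choice: instead of adjusting for confounders inside the Cox model, comparable patients are matched on their estimated probability of exposure before the survival comparison is made.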
Instruction: Does laparoscopy increase the bacteriological risk of appendectomy? Abstracts: abstract_id: PUBMED:9297886 Does laparoscopy increase the bacteriological risk of appendectomy? Results of a randomized prospective study. Objective: The authors compare the risk of bacteraemia in open and laparoscopic appendectomy in a prospective randomized study. Methods: 35 patients with a presumptive diagnosis of acute appendicitis were randomized to have conventional open or laparoscopic surgical procedures. Before randomization, patients signed a consent form to participate in the study. Patients who were converted from laparoscopic to open appendectomy (3 cases), HIV+, allergic to Augmentin or who had contraindications to laparoscopic surgery were excluded from the study. A total of 32 patients were randomized: 17 to open (group I) and 15 to laparoscopic surgery (group II). There were no significant differences with regard to age, ASA score, symptoms or macroscopic aspect of the appendix. Two patients had a normal appendix, 12 had acute appendicitis, 14 gangrenous appendicitis and 4 ruptured or abscessed appendicitis. All patients received preoperative antibiotic prophylaxis (Augmentin) after blood cultures (H1) were drawn. Five other blood cultures were performed in standard medium and medium neutralizing Augmentin: at the time of opening the peritoneum (H2), after appendectomy (H3), after closure of the abdomen (H4), and at 6 (H5) and 12 hours (H6) after the operation. Bacterial cultures from the appendix site were performed before (P1) and after (P2) appendectomy. Results: The operative mortality rate after conventional or laparoscopic appendectomy was nil. The incidence of post-operative morbidity was 4 cases in group I and 2 cases in group II. No positive bacterial culture was obtained in 17 patients. The distribution of these patients was similar in groups I and II. Samples P1 and P2 were positive in 5 cases. Nine of 27 cases with negative P1 became positive in P2 (33%). There was no significant difference between the two groups with regard to the appearance of the appendix. Only two patients had positive blood cultures at H1. One of them had blood cultures at H3 and H4 positive for a second organism. Conclusion: A low risk of bacteraemia exists for both open and laparoscopic appendectomy. This risk did not appear to increase with laparoscopy. Conventional and laparoscopic surgical procedures led to positive peritoneal bacterial cultures after appendectomy in 33% of cases. abstract_id: PUBMED:37772242 Risk Factors for Post-appendectomy Surgical Site Infection in Laparoscopy and Laparotomy - Retrospective Cohort Study. Background Appendicitis is a frequent emergency condition. Surgical site infections (SSI) are a common complication of appendectomy. Despite improvements in infection control, SSIs continue to cause harm, prolonged hospital stays, and even death. Objective The objective of this study is to compare the risk of developing surgical site infections (SSIs) between open laparotomy and laparoscopic appendectomies in Al-Baha, Saudi Arabia. Methods This retrospective cohort study compared laparotomy and laparoscopy for post-operative surgical site infection among patients who underwent an appendectomy at King Fahad Hospital (KFH) in Albaha, Saudi Arabia. Medical record numbers (MRNs) of patients who met the inclusion criteria were collected to build the sampling frame.
From the final sampling frame, simple random sampling using a random number generator was used to draw a representative sample. Data were collected from the surgical health records of the patients. The collected data included patients' demographics, comorbidities, presenting symptoms, ordered imaging studies, pre-operative shaving, type and duration of surgery, intraoperative findings, and signs of wound inflammation. Results A total of 256 patients who underwent surgery for acute appendicitis were included in the analysis. Among those who underwent laparoscopy, 5.7% had to be converted to open laparotomy. Signs of surgical wound inflammation were found in 10.2% of the patients. Patients who underwent open laparotomy had a significantly higher risk of wound infection (RR=3.1, p-value=0.001). Further analysis revealed an effect modification by pre-operative shaving: open laparotomy carried a higher risk of wound infection among patients who had not had pre-operative shaving (RR=4.1 vs. RR=2.6), and both risks were statistically significant (p-value=0.033 and p-value=0.035, respectively). Cases found to be complicated on intra-operative findings had a higher risk of post-appendectomy SSI. Conclusion This study demonstrates that laparoscopic appendectomy carries a lower risk of surgical site infection (SSI) compared to open laparotomy. Additionally, pre-operative shaving of the surgical site was found to increase the incidence of SSI. Healthcare providers can use this information to enhance their practice and reduce the occurrence of surgical site infections. Whenever possible, laparoscopic appendectomy should be preferred over open laparotomy due to its substantially lower SSI risk. We also recommend vigilant monitoring of complicated appendectomy, particularly in cases of ruptured appendicitis, for signs of SSI. abstract_id: PUBMED:24095022 Pure NOTES transvaginal appendectomy with gasless laparoscopy. Background: The vagina is the most widely used approach to natural orifice transluminal endoscopic surgery. However, a gas leak can significantly affect transvaginal operations during pneumoperitoneum laparoscopy. We tried to establish the proper technique for transvaginal appendectomy under gasless laparoscopy. Materials And Methods: Five patients with chronic appendicitis were selected to receive gasless laparoscopic transvaginal appendectomy with concurrent vaginal hysterectomy. An abdominal wall-lifting device was applied after removal of the uterus, and the appendix was removed transvaginally. Clinical data such as operative duration, bleeding volume, morbidity, and hospital stay duration were analyzed. Results: All procedures were performed successfully, without intraoperative or major postoperative complications. The appendectomy portion of the procedure took approximately 20-30 minutes, with minimal blood loss. All patients were discharged, scar-free, 3 days after surgery. Conclusions: Transvaginal appendectomy with gasless laparoscopy after vaginal hysterectomy appears to be a feasible and safe modification of established techniques, with acceptable outcomes. abstract_id: PUBMED:9931809 Effect of ultrasonic diagnosis on incidence of appendectomy and laparoscopy. A total of 330 of 409 patients with suspected acute appendicitis were examined by ultrasound, and an appendectomy was performed in 146 patients. The negative appendectomy rate was 7% with preoperative ultrasound (n = 72) compared with 31% without (n = 74).
Laparoscopy did not reduce the negative appendectomy rate, but was useful in patients with conflicting clinical and sonographic findings. abstract_id: PUBMED:32054456 Appendectomy, cholecystectomy and diagnostic laparoscopy conducted before pregnancy and risk of adverse birth outcomes: a nationwide registry-based prevalence study 1996-2015. Background: Non-obstetric surgery conducted during pregnancy may increase the risk of adverse birth outcomes like small for gestational age, preterm birth, and miscarriage. Mechanisms are unclear but may be long-lasting. We examined whether appendectomy, cholecystectomy and diagnostic laparoscopy conducted before pregnancy affect these outcomes. Methods: This nationwide Danish prevalence study included all pregnancies during 1996-2015 that had an appendectomy, cholecystectomy or diagnostic laparoscopy registered before the last menstrual period in the years 1992-2015. We excluded pregnancies with surgery during pregnancy and categorized pre-pregnancy surgery according to timing (0-11, 12-23, and 24+ months before last menstrual period). Outcomes were small for gestational age, late preterm birth (32-37 weeks), early preterm birth (22-31 weeks) and miscarriage (7-21 weeks). We computed absolute risks and used logistic regression comparing pregnancies with surgery 0-11 or 12-23 to 24+ months before last menstrual period, computing odds ratios for each outcome, adjusting for maternal age and smoking. Results: We identified 15,939 pregnancies with appendectomy, 12,869 pregnancies with cholecystectomy and 19,330 pregnancies with diagnostic laparoscopy. The absolute risk of small for gestational age was 2.2% for patients with appendectomy 0-11 months before last menstrual period and 3.2% 12-23 months before, compared with 2.2% when appendectomy was conducted more than 24 months before (adjusted OR 0.95 (95% CI, 0.65 to 1.31) and 1.37 (95% CI, 1.00 to 1.86)). For early preterm birth, the absolute risks were 0.7, 0.5 and 0.8%, for late preterm birth 4.8, 4.4 and 4.7% and for miscarriage 5.7, 6.2 and 5.4%. We observed similar results for cholecystectomy. For diagnostic laparoscopy 0-11 months before pregnancy we found increased risks of small for gestational age (4.0, 2.8 and 2.6%) and late preterm birth (5.9, 5.0 and 4.8%). Conclusions: We found no increased risk of adverse birth outcomes among pregnancies with appendectomy or cholecystectomy conducted within 2 years before pregnancy compared to more than 2 years before pregnancy. The increased risks 0-11 months after diagnostic laparoscopy are likely explained by confounding by underlying indication. It appears safe to become pregnant any time following appendectomy and cholecystectomy, but, probably depending on indication, attention should be paid 0-11 months after diagnostic laparoscopy. abstract_id: PUBMED:22776168 Laparoscopy-assisted appendectomy through an umbilical port in children. Introduction: We report surgical techniques for single-incision laparoscopy-assisted surgery (SILAS) in the treatment of pediatric acute appendicitis. Methods: We performed SILAS in 15 cases of acute appendicitis between January and September of 2009. SILAS is a surgical method that involves making the incision at the umbilicus, inserting a wound retractor XS, suspending the abdominal wall with a hook, and performing the appendectomy with the same steps as conventional appendectomy. Results: SILAS appendectomy was performed in all 15 cases with the exception of one case where one 3-mm port was added.
Compared to open appendectomy, blood loss was significantly lower and postoperative hospitalization time was shorter, although there was no significant decrease in operative time or postoperative fasting time. No postoperative complications, such as wound infection, intestinal obstruction, intra-abdominal abscess, or bleeding, were encountered. Conclusion: SILAS was safely performed and is superior to open appendectomy with regard to cosmetic outcome. abstract_id: PUBMED:17618811 Our experience with selective laparoscopy through an open appendectomy incision in the management of suspected appendicitis. Background: An accurate preoperative diagnosis of suspected appendicitis at times can be extremely difficult. We report our experience with a simple strategy of selective laparoscopy through an open appendectomy incision after finding a noninflamed appendix in the management of suspected appendicitis. Methods: Patients presenting with suspected appendicitis after regular office hours (6 pm to 8 am weekdays and weekends) were recruited prospectively from January 2002 to December 2003. Laparoscopy through an open appendectomy incision was performed only when the appendix was found to be normal. Results: Twenty-five (18.5%) of 135 patients underwent laparoscopy through an open appendectomy incision because of a normal-looking appendix. Laparoscopy through an open appendectomy incision helped to identify additional intra-abdominal pathology in 13 (52%) of the 25 patients, thus improving the overall detection rate of underlying pathology from 81.5% (110 of 135) to 91.2% (123 of 135). Conclusions: Selective laparoscopy through an open appendectomy incision in patients with a noninflamed appendix is a simple technique that can identify potentially fatal pathology and also maintains a valuable training opportunity for young surgeons to perform open abdominal surgery. We recommend using this technique in the management of suspected appendicitis. abstract_id: PUBMED:31138196 Laparoscopy versus open appendectomy for elderly patients, a meta-analysis and systematic review. Background: Appendicitis in elderly patients is associated with increased risk of postoperative complications. The choice between laparoscopy and open appendectomy remains controversial in treating elderly patients with appendicitis. Methods: A comprehensive literature search of MEDLINE, Embase, the Cochrane Library and ClinicalTrials was done in January 2019. Studies comparing laparoscopy and open appendectomy for elderly patients with appendicitis were screened and selected. Postoperative mortality, complications, wound infection, intra-abdominal abscess, operating time, and length of hospital stay were extracted and analyzed. Review Manager 5.3 was used for data analysis. Results: Twelve studies were included, with 126,237 patients in the laparoscopy group and 213,201 patients in the open group. Postoperative mortality was significantly lower following laparoscopy (OR, 0.33; 95% CI, 0.28 to 0.39). Postoperative complications and wound infection were reduced following laparoscopy (OR, 0.65, 95% CI, 0.62 to 0.67; OR, 0.27, 95% CI, 0.22 to 0.32). Intra-abdominal abscess was similar between LA and OA (OR, 0.44; 95% CI, 0.19 to 1.03). Duration of surgery was longer following laparoscopy and length of hospital stay was shorter following laparoscopy (MD, 7.25, 95% CI, 3.13 to 11.36; MD, -2.72, 95% CI, -3.31 to -2.13).
Conclusions: Not only is laparoscopy safe and feasible, but it is also associated with decreased rates of mortality and post-operative morbidity and with shorter hospitalization. abstract_id: PUBMED:9918613 Diagnostic laparoscopy through the right lower abdominal incision following open appendectomy. Background: Diagnostic laparoscopy through the right lower abdominal incision following open appendectomy for suspected acute appendicitis may help in making the correct diagnosis in the absence of pathology of the appendix. Methods: Fourteen patients with a clinical diagnosis of acute appendicitis underwent diagnostic laparoscopy through the right lower quadrant incision after open appendectomy to exclude further pathology in the case of a noninflamed appendix. Results: In 10 of the 14 patients, laparoscopy helped to correct the diagnosis. In two patients, the etiology of the acute right lower abdominal pain remained unclear. In two others, histological examination showed acute appendicitis despite a normal macroscopic appearance. Conclusions: Diagnostic laparoscopy through the right lower quadrant incision may help to correct the diagnosis in patients who are operated on for clinically acute appendicitis but in whom no acute appendicitis or other pathological findings are seen. abstract_id: PUBMED:36625452 Validity of the coding for appendicitis, appendectomy, and diagnostic laparoscopy in the Danish National Patient Registry. Background And Objective: We aimed to validate the diagnostic accuracy of appendicitis, its severity, its description, and the surgical approach, including open or laparoscopic appendectomy and diagnostic laparoscopy, in the Danish National Patient Registry (DNPR) against information from the electronic medical records. Methods: A random sample of 1046 patients of all ages and sexes recorded in the DNPR from the Capital Region of Denmark during 2010-2015 was investigated. Each patient's admission had to include a discharge code for appendicitis (K35-K379) according to the International Classification of Diseases version 10 (ICD-10) alone or in combination with a surgical code for appendectomy or the surgical code for a diagnostic laparoscopy. We calculated the positive predictive values (PPVs) with 95% confidence intervals. Results: Data from a total of 1018 patients were available for data analysis. The ICD-10 codes for appendicitis resulted in a good PPV of >95% when combined with a surgical code. ICD-10 codes combined with the surgical codes could discriminate between uncomplicated and complicated appendicitis with some caution, as the PPVs ranged from 68% to 93%. Only the surgical code for laparoscopic appendectomy yielded a good PPV (99%) for the actual surgical approach. The surgical code for a diagnostic laparoscopy alone did not cover a normal diagnostic laparoscopy. Conclusions: The diagnostic accuracy of appendicitis and appendectomy in routinely collected administrative health data is good for ICD-10 codes for appendicitis in combination with a surgical code, and especially for laparoscopic appendectomy. Uncomplicated and complicated appendicitis can be distinguished with some caution. Answer: No, laparoscopy does not increase the bacteriological risk of appendectomy. A prospective randomized study compared the risk of bacteremia in open and laparoscopic appendectomy and found that a low risk of bacteremia exists for both surgical procedures, with no increased risk associated with laparoscopy.
Both conventional and laparoscopic surgical procedures led to positive peritoneal bacterial cultures after appendectomy in 33% of cases, indicating that the bacteriological risk is similar for both methods (PUBMED:9297886).
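For context on the effect measures quoted in this item: cohort studies such as PUBMED:37772242 summarize infection risk as a relative risk (RR) with a 95% confidence interval computed on the log scale from a 2x2 table. The sketch below shows that standard calculation; the cell counts are hypothetical, since the abstract reports only the summary RR of 3.1, not the underlying table.

```python
# Illustrative sketch only: the standard 2x2-table relative-risk calculation
# behind summary figures like "RR = 3.1" in PUBMED:37772242. The cell counts
# below are hypothetical, chosen only to land near the reported RR.
import math

def relative_risk(a, b, c, d):
    """a/b = infections / no infections among exposed (open laparotomy);
    c/d = infections / no infections among unexposed (laparoscopy)."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR), then a 95% CI back-transformed from the log scale.
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

# Hypothetical split of 256 appendectomy patients: 18 SSIs among 100 open
# cases vs. 9 among 156 laparoscopic cases.
rr, (lo, hi) = relative_risk(18, 82, 9, 147)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

The pooled odds ratios in the meta-analysis (PUBMED:31138196) follow the same log-scale logic, except that study-level estimates are combined with inverse-variance weights rather than computed from a single table.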
Instruction: Can falls be predicted with gait analytical and posturographic measurement systems? Abstracts: abstract_id: PUBMED:22843355 Can falls be predicted with gait analytical and posturographic measurement systems? A prospective follow-up study in a nursing home population. Objective: To validate previously proposed findings and to develop an objective, feasible and efficient bifactorial (risk factors: gait impairment and balance disorders) fall risk assessment. Design: Prospective follow-up study. Setting: Nursing homes (Halle/Saale, Germany). Subjects: One hundred and forty-six nursing home residents (aged 62-101 years) were recruited. Methods: Gait data were collected using a mobile inertial sensor-based system (RehaWatch). Postural regulation data were measured with the Interactive Balance System. Falls were recorded in standardized protocols over a follow-up period of 12 months. Main Measures: Gait parameters (e.g. spatial-temporal parameters), posturographic parameters (e.g. postural subsystems), number of falls. Results: Seventeen (12%) of the participants had more than two falls per year. The predictive validity of the previously selected posturographic parameters was inadequate (sensitivity: 47%). The new measurement tool defined 67 participants as showing an increased risk of falls. In reality, only 8 of these participants actually fell more than twice during the follow-up period (positive predictive value (PPV): 12%). The negative predictive value (NPV) was 88%. The posturographic frequency range F2-4 (peripheral-vestibular system), stride time and standard deviation of landing phase were the most powerful parameters for fall prediction. Gait and postural variability were larger in the high-risk group (e.g. gait speed; confidence interval (CI)(high): 0.57-0.79 vs. CI(low): 0.72-0.81 m/s). Conclusion: RehaWatch and the Interactive Balance System are able to measure two of the most important fall risk factors, but their current predictive ability is not yet satisfactory. The correlation with physiological mechanisms is only shown by the Interactive Balance System. abstract_id: PUBMED:34805392 Objective falls-risk prediction using wearable technologies amongst patients with and without neurogenic gait alterations: a narrative review of clinical feasibility. Objectives: The present narrative review aims to collate the literature regarding the current use of wearable gait measurement devices for falls-risk assessment in neurological and non-neurological populations. Thereby, this review seeks to determine the extent to which the aforementioned barriers inhibit clinical use. Background: Falls contribute a significant disease burden in most western countries, resulting in increased morbidity and mortality with substantial therapeutic costs. The recent development of gait analysis sensor technologies has enabled quantitative measurement of several gait features related to falls risk. However, three main barriers to implementation exist: accurately measuring gait features associated with falls, differentiating between fallers and non-fallers using these gait features, and the accuracy of falls predictive algorithms developed using these gait measurements. Methods: Searches of Medline, PubMed, Embase and Scopus were screened to identify 46 articles relevant to the present study. Studies performing gait assessment using any wearable gait assessment device and analysing correlation with the occurrence of falls during a retrospective or prospective study period were included.
Risk of bias was assessed using the Centre for Evidence Based Medicine (CEBM) criteria. Conclusions: Falls prediction algorithms based entirely or in part on gait data have shown comparable or greater success in predicting falls than existing stratification scoring systems such as the 10-meter walk test or the timed-up-and-go test. However, data are lacking regarding their accuracy in neurological patient populations. Inertial measurement units (IMUs) have displayed competency in obtaining and interpreting gait metrics relevant to falls risk. They have the potential to enhance the accuracy and efficiency of falls risk assessment in inpatient and outpatient settings. abstract_id: PUBMED:30796007 Overview of the cholinergic contribution to gait, balance and falls in Parkinson's disease. Mobility deficits, including gait disturbance, balance impairments and falls, are common features of Parkinson's disease (PD) that negatively impact quality of life. Mobility deficits respond poorly to dopaminergic medications, indicating a role for additional neurotransmitters. Due to the critical role of cortical input to gait and balance, acetylcholine, an essential neurotransmitter system for attention, has become an area of interest for mobility. This review aimed to identify the role of cholinergic function on gait, balance, and falls in PD using three techniques: pharmacological, imaging, and electrophysiological. Studies supported the role of the cholinergic system for mobility in PD, with the most promising evidence indicating a role in falls. Imaging studies demonstrated involvement of anterior cholinergic (basal forebrain) systems in gait, and posterior (brainstem) systems in balance. However, this review identified a small number of studies which used varying protocols, making comparisons difficult. Further studies are warranted, measuring comprehensive gait and balance characteristics as well as gold standard falls detection to further quantify the relationship between ACh and mobility in PD. abstract_id: PUBMED:30704677 Gait Disorders and Falls in the Elderly. Gait disorders in the elderly may be based on a neurologic deficit at multiple levels, or may be secondary to nonneurologic causes. The physiology and pathophysiology of gait problems are reviewed and bedside examination and investigative tools are discussed. The reader will have an excellent working knowledge of the subject and will know how to diagnose and treat gait disorders and falls. abstract_id: PUBMED:10815041 Assessment of unexplained falls and gait unsteadiness: the impact of age. When a patient with a balance disorder reports rotational vertigo, the clinician rightly focuses his or her attention on the vestibular system. This article reviews the possible diagnoses in the many patients who primarily report falls or gait disorder. Falls can be caused by predisposing neurologic conditions impairing gait, cardiovascular conditions, or epileptic episodes. The proportion of idiopathic falls, however, remains high. In the elderly, environmental circumstances, visual defects, psychotropic medication, and poor general health are additional risk factors. Clinical assessment of gait is more revealing and less expensive than computerized posture/gait systems. The diagnosis of orthostatic tremor, however, requires either Fourier analysis of sway platform signals or electromyography. abstract_id: PUBMED:36086546 Predicting risk of falls in elderly using a single Inertial Measurement Unit on the lower-back by estimating spatio-temporal gait parameters.
One of the consequences of aging is the increased risk of falls, especially when someone walks in unknown or uncontrolled environments. Usually, gait is evaluated through observation and clinical assessment scales to identify the state and deterioration of the patient's postural control. Lately, technological systems for biomechanical analysis have been used to determine abnormal gait states, but these are expensive, difficult to use, and impossible to apply in real conditions. In this article, we explore the ability of a system based on a single inertial measurement unit located on the lower back to estimate spatio-temporal gait parameters by analyzing the signals available in the Physionet repository "Long Term Movement Monitoring Database"; together with automatic classification algorithms, these parameters allow prediction of the risk of falls in the elderly population. Different classification algorithms were trained and evaluated, with the Support Vector Machine classifier (third-degree polynomial kernel, cost parameter C = 2) showing the best performance: Accuracy = 59%, Recall = 91%, and F1-score = 71%. These are promising results for a quantitative, online and realistic evaluation of gait during activities of daily living, which is where falls actually occur in the target population. Clinical Relevance - This work proposes an early falls-risk detection tool, essential for starting preventive treatment strategies to maintain the independence of the elderly, through a non-invasive, simple, and low-cost alternative. abstract_id: PUBMED:24577502 Freezing of gait and falls in Parkinson's disease. Freezing of gait (FOG) and falls are common and disabling phenomena in Parkinson's disease (PD) and related disorders as they may lead to loss of independence. Both are usually observed in the advanced stage of the disease, although they can also be seen in the early stage. FOG and falls have similar risk factors, such as axial motor disability and cognitive impairment, and FOG is one of the most common causes of falls. The objective of this review is to address recent ideas about the underlying pathophysiology of FOG and falls, and discuss the similarities, differences, and relationships between FOG and falls. Recent advances in studies that elucidate physical and cognitive risk factors to predict future falls are also reviewed. In addition to the history of prior falls and disease severity, the presence of FOG and cognitive dysfunction are associated with falls in PD. abstract_id: PUBMED:27341531 Gait characteristics, balance performance and falls in ambulant adults with cerebral palsy: An observational study. The relationship between spatiotemporal gait parameters, balance performance and falls history was investigated in ambulant adults with cerebral palsy (CP). Participants completed a single assessment of gait using an instrumented walkway at preferred and fast speeds, balance testing (Balance Evaluation Systems Test; BESTest), and reported falls history. Seventeen ambulatory adults with CP, mean age 37 years, participated. Gait speed was typically slow at both preferred and fast speeds (mean 0.97 and 1.21 m/s, respectively), with short stride length and high cadence relative to speed. There was a significant, large positive relationship between preferred gait speed and BESTest total score (ρ=0.573; p<0.05) and between fast gait speed and BESTest total score (ρ=0.647, p<0.01).
The stride lengths of fallers at both preferred and fast speeds differed significantly from those of non-fallers (p=0.032 and p=0.025, respectively), with those with a prior history of falls taking shorter strides. Faster gait speed was associated with better performance on tests of anticipatory and postural response components of the BESTest, suggesting potential therapeutic training targets to address either gait speed or balance performance. Future exploration of the implications of slow walking speed and reduced stride length on falls and community engagement, and of the potential prognostic value of stride length in identifying falls risk, is recommended. abstract_id: PUBMED:31428758 Vision-Based Assessment of Gait Features Associated With Falls in People With Dementia. Background: Gait impairments contribute to falls in people with dementia. In this study, we used a vision-based system to record episodes of walking over a 2-week period as participants moved naturally around their environment, and from these calculated spatiotemporal, stability, symmetry, and acceleration gait features. The aim of this study was to determine whether features of gait extracted from a vision-based system are associated with falls, and which of these features are most strongly associated with falling. Methods: Fifty-two people with dementia admitted to a specialized dementia unit participated in this study. Thirty different features describing baseline gait were extracted from Kinect recordings of natural gait over a 2-week period. Baseline clinical and demographic measures were collected, and falls were tracked throughout the participants' admission. Results: A total of 1,744 gait episodes were recorded (mean 33.5 ± 23.0 per participant) over a 2-week baseline period. There were a total of 78 falls during the study period (range 0-10). In single variable analyses, the estimated lateral margin of stability, step width, and step time variability were significantly associated with the number of falls during admission. In a multivariate model controlling for clinical and demographic variables, the estimated lateral margin of stability (p = .01) remained associated with the number of falls. Conclusions: Information about gait can be extracted from vision-based recordings of natural walking. In particular, the lateral margin of stability, a measure of lateral gait stability, is an important marker of short-term falls risk. abstract_id: PUBMED:7782573 Gait training and falls in the elderly. 1. Patients with gait and balance disorder, as measured by the Tinetti Mobility Scale, can benefit from physical therapist-assisted gait training. 2. The score on the Tinetti Mobility Scale correlates negatively with the number of recurrences of falls. 3. The nurse's role includes identification of those who are at risk for falls, assessment of their response to training in preventing recurrence of falls, and assessment of the effect of training on their mobility and independence. Answer: Falls can potentially be predicted with gait analytical and posturographic measurement systems, but the predictive ability of these systems is not yet fully satisfactory. A study conducted in nursing homes using a mobile inertial sensor-based system (RehaWatch) and the Interactive Balance System found that while these systems could measure important fall risk factors such as gait impairment and balance disorders, their predictive validity was inadequate, with a positive predictive value (PPV) of only 12% and a negative predictive value (NPV) of 88% (PUBMED:22843355).
Wearable technologies, including inertial measurement units (IMUs), have shown promise in obtaining and interpreting gait metrics relevant to falls risk, potentially enhancing the accuracy and efficiency of falls risk assessment in both inpatient and outpatient settings (PUBMED:34805392). However, there is a lack of data regarding their accuracy in neurological patient populations. In Parkinson's disease (PD), mobility deficits such as gait disturbance and balance impairments are common and respond poorly to dopaminergic medications, suggesting a role for additional neurotransmitters like acetylcholine in falls (PUBMED:30796007). Gait disorders in the elderly can be based on multiple neurologic or nonneurologic causes, and while clinical assessment of gait is revealing, the use of technological systems for biomechanical analysis is emerging as a tool to determine abnormal gait states (PUBMED:10815041, PUBMED:30704677). A study using a single inertial measurement unit on the lower back to estimate spatio-temporal gait parameters showed promising results for predicting the risk of falls in the elderly with an accuracy of 59%, although the recall was high at 91% (PUBMED:36086546). In people with dementia, a vision-based system was able to identify gait features associated with falls, with the lateral margin of stability being an important marker of short-term falls risk (PUBMED:31428758). Overall, while there is evidence that gait analytical and posturographic measurement systems can be used to predict falls, the current predictive ability of these systems requires improvement, and further research is needed to enhance their accuracy and clinical feasibility (PUBMED:22843355, PUBMED:34805392).
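As an aside for readers who want to see what the falls-classification pipelines summarized above look like in practice, here is a minimal, hypothetical Python sketch modeled on the approach reported in PUBMED:36086546: a Support Vector Machine with a third-degree polynomial kernel and cost parameter C = 2, trained on spatio-temporal gait features and scored with accuracy, recall, and F1. The feature matrix and faller/non-faller labels below are synthetic stand-ins, not the study's data, so the printed metrics will not reproduce the reported 59%/91%/71%.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, f1_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows = subjects, columns = spatio-temporal gait
# parameters (e.g., stride time, stride length, gait speed, variability).
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)  # 1 = faller, 0 = non-faller (illustrative labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Classifier configuration reported in the study: polynomial kernel of
# degree 3 with cost parameter C = 2.
clf = SVC(kernel="poly", degree=3, C=2.0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("recall:  ", recall_score(y_test, pred))
print("F1:      ", f1_score(y_test, pred))

The high recall and modest accuracy reported in the study reflect a screening-oriented trade-off: for falls risk it is usually preferable to flag most true fallers (high recall), even at the cost of more false positives.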
Instruction: Do negative 124I pretherapy positron emission tomography scans in patients with elevated serum thyroglobulin levels predict negative 131I posttherapy scans? Abstracts: abstract_id: PUBMED:24820222 Do negative 124I pretherapy positron emission tomography scans in patients with elevated serum thyroglobulin levels predict negative 131I posttherapy scans? Background: The management of patients with differentiated thyroid cancer (DTC) who have elevated serum thyroglobulin (Tg) levels and negative (131)I or (123)I scans is problematic, and the decision regarding whether or not to administer (131)I therapy (a "blind" therapy) is also problematic. While (124)I positron emission tomography (PET) imaging has been shown to detect more foci of residual thyroid tissue and/or metastases secondary to DTC than planar (131)I images, the utility of a negative (124)I PET scan in deciding whether or not to consider performing blind (131)I therapy is unknown. The objective of this study was to determine whether a negative (124)I pretherapy PET scan in patients with elevated serum Tg levels and negative (131)I or (123)I scans predicts a negative (131)I posttherapy scan. Methods: Several prospective studies have been performed to compare the radiopharmacokinetics of (124)I PET versus (131)I planar imaging in patients who 1) had histologically proven DTC, 2) were suspected to have metastatic DTC (e.g., elevated Tg, positive recent fine-needle aspiration cytology, suspicious enlarging mass), and 3) had (131)I planar and (124)I PET imaging performed. Using these criteria, we retrospectively identified patients who had an elevated Tg, a negative diagnostic (131)I/(123)I scan, a negative diagnostic (124)I PET scan, therapy with (131)I, a post-therapy (131)I scan, and a prior (131)I therapy with a subsequent positive post-(131)I therapy scan. For each scan, two readers categorized every focus of (131)I and (124)I uptake as positive for thyroid tissue/metastases or physiological. Results: Twelve patients met the above criteria. Ten of these 12 patients (83%) had positive foci on the (131)I posttherapy scan. Conclusion: In our selected patient population, (131)I posttherapy scans are frequently positive in patients with elevated serum Tg levels, a negative diagnostic (131)I or (123)I scan, and a negative (124)I PET scan. Thus, for a patient with an elevated serum Tg level, negative diagnostic (131)I planar scan, and a prior post-(131)I therapy scan that was positive, a negative (124)I PET scan will have a low predictive value for a negative post-(131)I therapy scan and should not be used to exclude the option of blind (131)I therapy. abstract_id: PUBMED:10404792 [18F]-2-fluoro-2-deoxy-D-glucose positron emission tomography localizes residual thyroid cancer in patients with negative diagnostic (131)I whole body scans and elevated serum thyroglobulin levels. Progressive dedifferentiation of thyroid cancer cells leads to a loss of iodine-concentrating ability, with resultant false-negative whole body radioactive iodine scans in approximately 20% of all differentiated metastatic thyroid cancer lesions. We tested the hypothesis that all metastatic thyroid cancer lesions that did not concentrate iodine, but did produce thyroglobulin (Tg), could be localized by [18F]2-fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET). We performed FDG-PET on 37 patients with differentiated thyroid cancer after surgery and radioiodine ablation who had negative diagnostic 131I whole body scans during routine follow-up.
Serum Tg, Tg autoantibodies, neck ultrasounds, and other clinically indicated imaging procedures were performed to detect residual disease. In those with elevated Tg levels, FDG-PET localized occult disease in 71%, was false positive in one, and was false negative in five patients. The majority of false-negative FDG-PET results occurred in patients with minimal cervical adenopathy. Surgical resections, biopsies, 131I therapy, and differentiation therapy were performed based on the PET results. The FDG-PET result changed the clinical management in 19 of the 37 patients. In patients with elevated Tg levels, FDG-PET had a positive predictive value of 92%. In patients with low Tg levels, FDG-PET had a negative predictive value of 93%. No FDG-PET scans were positive in stage I patients; however, they were always positive in stage IV patients with elevated Tg levels. An elevated TSH level (i.e. hypothyroidism) did not increase the ability to detect lesions. FDG-PET is able to localize residual thyroid cancer lesions in patients who have negative diagnostic 131I whole body scans and elevated Tg levels, although it was not sensitive enough to detect minimal residual disease in cervical nodes. abstract_id: PUBMED:26097420 Utility of (99m)Tc-Hynic-TOC in 131I Whole-Body Scan Negative Thyroid Cancer Patients with Elevated Serum Thyroglobulin Levels. Several studies have reported on the expression of somatostatin receptors (SSTRs) in patients with differentiated thyroid cancer (DTC). The aim of this study was to evaluate the imaging abilities of a recently developed Technetium-99m labeled somatostatin analog, (99m)Tc-Hynic-TOC, in terms of precise localization of the disease. The study population consisted of 28 patients (16 men, 12 women; age range: 39-72 years) with histologically confirmed DTC, who presented with recurrent or persistent disease as indicated by elevated serum thyroglobulin (Tg) levels after initial treatment (serum Tg > 10 ng/ml off T4 suppression for 4-6 weeks). All patients were negative on the Iodine-131 posttherapy whole-body scans. Fluorine-18 fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) was performed in all patients. SSTR scintigraphy was true positive in 23 cases (82.1%), true negative in two cases (7.1%) and false negative in three cases (10.7%), which resulted in a sensitivity of 88.46%, a specificity of 100% and an accuracy of 89.2%. Sensitivity of the (99m)Tc-Hynic-TOC scan was higher (93.7%) for patients with advanced stages, that is, stages III and IV. (18)F-FDG showed a sensitivity of 93.7%, a specificity of 50% and an accuracy of 89.3%. (18)F-FDG PET was found to be more sensitive, with lower specificity due to false positive results in 2 patients. Analysis on a lesion basis demonstrated substantial agreement between the two imaging techniques, with a Cohen's kappa of 0.66. Scintigraphy with (99m)Tc-Hynic-TOC might be a promising tool for treatment planning; it is easy to perform and showed sufficient accuracy for localization diagnostics in thyroid cancer patients with recurrent or metastatic disease. abstract_id: PUBMED:15902358 Initial experience in use of fluorine-18-fluorodeoxyglucose positron emission tomography/computed tomography in thyroid carcinoma patients with elevated serum thyroglobulin but negative iodine-131 whole body scans.
Introduction: This study aims to examine the usefulness of fluorine-18-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) in thyroid carcinoma patients with elevated serum thyroglobulin (Tg) but negative iodine-131 (I-131) whole body scans. Methods: 17 patients with differentiated thyroid carcinoma who underwent FDG PET/CT scans were reviewed retrospectively over a period of one year from July 2003 to June 2004. All these patients had completion thyroidectomy and subsequently presented with elevated serum Tg but negative post-therapy I-131 whole body scans. Nine of these patients underwent FDG PET/CT in a hypothyroid state, while the remainder underwent FDG PET/CT while on thyroxine replacement. Results: 15 out of 17 PET/CT scans revealed lesions consistent with metastases, giving a sensitivity of 88.2 percent. Four of these patients were amenable to surgical treatment. Two scans were negative. Conclusion: FDG PET/CT is a sensitive diagnostic tool to detect radioiodine-negative recurrences/metastases in patients with thyroid carcinoma. Our preliminary results are comparable with published results based on PET. abstract_id: PUBMED:11004330 Utility of fluorine-18-fluorodeoxyglucose positron emission tomography in differentiated thyroid carcinoma with negative radioiodine scans and elevated serum thyroglobulin levels. Background: This study aimed to determine the role of fluorine-18-fluorodeoxyglucose positron emission tomography (FDG-PET) in the follow-up of patients who underwent total thyroidectomy and iodine-131 ((131)I) ablation therapy for differentiated thyroid cancer and presented increased thyroglobulin levels with negative (131)I and thallium-201 ((201)Tl) scans. Methods: Two patients with follicular carcinoma and eight with papillary tumors underwent total thyroidectomy and (131)I therapy until the (131)I scan was negative. (131)I and (201)Tl scans were performed with negative results in all cases, while serum thyroglobulin measurements were all positive with negative thyroglobulin autoantibodies. One week after the (131)I scans, all the patients underwent FDG-PET whole-body scans. Results: The FDG-PET scan detected, in 4 patients, a single focal increase of FDG uptake in one lymph node metastasis (subsequently confirmed histologically); in 1 patient, multiple pathological focal uptakes in brain, neck, and chest; and in 1 patient, two mild focal uptakes in the mediastinum, close to the tracheal branch. In 2 other patients, pathological FDG uptakes in cervical spine and mediastinum were not confirmed by other imaging techniques, and in the 2 remaining patients the scan results were inconclusive. The sensitivity of FDG-PET whole-body scan for detecting metastatic thyroid cancer was 60%. Conclusions: This study indicates that the FDG-PET whole-body scan is a useful tool in the follow-up of patients with differentiated thyroid cancer, negative (131)I and (201)Tl scans and elevated serum thyroglobulin levels. The FDG-PET scan detects metastatic disease in 60% of patients with differentiated thyroid cancer, enabling surgical therapy to be performed on accessible lesions.
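As a quick arithmetic check on the diagnostic metrics quoted in the abstracts above, the following sketch recomputes sensitivity, specificity, and accuracy from the counts reported for (99m)Tc-Hynic-TOC scintigraphy in PUBMED:26097420 (23 true positives, 2 true negatives, 3 false negatives, and, by implication, 0 false positives among 28 patients). The PPV/NPV formulas are included because predictive values are the quantities emphasized in PUBMED:10404792; they are shown for illustration and are not reported in the (99m)Tc study itself.

# Counts taken from the PUBMED:26097420 abstract; FP = 0 is inferred
# from the 28-patient total (23 + 2 + 3 = 28).
TP, TN, FN, FP = 23, 2, 3, 0

sensitivity = TP / (TP + FN)                # 23/26 = 0.8846 -> 88.46%
specificity = TN / (TN + FP)                # 2/2 = 1.00 -> 100%
accuracy = (TP + TN) / (TP + TN + FP + FN)  # 25/28 = 0.8929, reported as 89.2%
ppv = TP / (TP + FP)                        # positive predictive value
npv = TN / (TN + FN)                        # negative predictive value

print(f"sensitivity = {sensitivity:.2%}")
print(f"specificity = {specificity:.2%}")
print(f"accuracy = {accuracy:.2%}")
print(f"PPV = {ppv:.2%}, NPV = {npv:.2%}")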
abstract_id: PUBMED:15701340 F-18-fluorodeoxyglucose positron emission tomography in patients with differentiated thyroid cancer who present with elevated human serum thyroglobulin levels and negative I-131 whole body scan Unlabelled: This study aimed to evaluate the role of Fluorine-18-fluorodeoxyglucose positron emission tomography (PET-FDG) in patients with elevated serum thyroglobulin (hTg) levels where thyroid cancer tissue does not concentrate radioiodine, rendering false-negative results on I-131 scanning. Material And Methods: Whole-body PET imaging using FDG was performed in 54 patients (37 female, 17 male) aged 17-88 years: 45 with papillary tumors and 9 with follicular tumors who were suspected of having recurrent thyroid carcinoma due to elevated thyroglobulin levels (hTg > 2 ng/ml) under thyroid-stimulating hormone (TSH ≥ 30 microIU/ml) in whom the iodine scan was negative. All whole body scans were obtained with diagnostic doses (185 MBq). Whole body PET imaging was performed in fasting patients following i.v. administration of 370 MBq FDG while the patients were receiving full thyroid hormone replacement. Before PET, 99mTc methoxyisobutylisonitrile scintigraphy (99mTc-MIBI) was done in 14 patients and morphologic imaging in 26 by CT scan. Results: Positive PET results confirmed the presence of hypermetabolic foci in 25/54 patients (46.29%). Positive PET-FDG findings were obtained in patients with hTg levels higher than 10 ng/ml receiving full thyroid hormone replacement. 99mTc-MIBI demonstrated lesions in 7/14 patients (50%). PET-FDG and 99mTc-MIBI had congruent positive results in 4/7 patients. All the lesions found by CT were detected by PET-FDG, while recurrent disease was found in 12/21 patients with previously negative CT. Conclusions: These results suggest that PET-FDG is a promising tool in the follow-up of thyroid cancer and should be considered in patients suffering from differentiated thyroid cancer with recurrence and/or metastases suspected on the basis of elevated thyroglobulin levels and negative I-131 whole body scans. PET-FDG might be more useful at hTg levels > 10 ng/ml. abstract_id: PUBMED:9641890 Positron emission tomography with F-18-deoxyglucose in patients with differentiated thyroid carcinoma, elevated thyroglobulin levels, and negative iodine scans. Introduction: In patients with differentiated thyroid carcinoma, elevated serum levels of thyroglobulin (hTg) may occur in spite of otherwise negative diagnostic procedures and in particular in spite of a negative iodine-131 scan. Positron emission tomography with F-18-deoxyglucose (FDG-PET) is a potentially useful method for the detection of metastatic lesions or the recurrence of thyroid cancer. We aimed to investigate whether FDG-PET is capable of detecting metastatic lesions or recurrence in patients with differentiated thyroid carcinoma, elevated serum levels of thyroglobulin, and otherwise negative diagnostic procedures, including the iodine-131 scan. Methods: From a group of 500 patients with differentiated thyroid carcinoma, a subgroup of 32 patients had elevated serum hTg levels, negative iodine-131 scans, negative cervical and abdominal ultrasound, and negative X-ray of the chest. In 12 of these patients (hTg 77.8 ± 94.3 ng/ml, range 1.5-277 ng/ml, median 20 ng/ml), FDG-PET was performed. All but one FDG-PET study was performed in a state of hypothyroidism (TSH 75.8 ± 32.2 microIU/ml, range 31-116 microIU/ml, median 74.6 microIU/ml).
Results: In 6 of the 12 patients investigated, the FDG-PET was positive. In three of the patients, the diagnosis was confirmed by computed tomography or magnetic resonance imaging. In patients with a positive FDG-PET finding, the hTg level was 146.7 ± 90.1 ng/ml (23-277 ng/ml, median 144.5 ng/ml). In contrast, in patients with a negative finding the hTg level was only 9.0 ± 7.6 ng/ml (range 1.5-17 ng/ml, median 8.1 ng/ml), P=0.01. Conclusion: These preliminary results show that in patients with differentiated thyroid carcinoma, elevated hTg levels, and otherwise negative "conventional" diagnostic procedures, FDG-PET is helpful in detecting metastatic lesions. abstract_id: PUBMED:17983080 The role of positron emission tomography scanning in patients with radioactive iodine scan-negative, recurrent differentiated thyroid cancer. An elevated thyroglobulin (Tg) level after total thyroidectomy for differentiated thyroid cancer is often associated with disease recurrence. 131I-whole body scans (131I-WBS) and cross-sectional imaging are commonly used to localize occult metastases in these patients. Localizing disease when 131I-WBS are negative and cross-section imaging is equivocal remains a challenge. The medical records of 12 patients with thyroid cancer undergoing positive positron emission tomography (PET) scans for 131I-WBS-negative Tg elevations or the presence of anti-Tg antibodies were identified and charts were reviewed in a retrospective fashion. All had been treated with total thyroidectomy and 131I ablation in the past. Computed tomography, magnetic resonance imaging, or ultrasound studies revealed suspicious lesions in eight patients. All 12 patients underwent resection of the PET-positive lesions. All resections were positive for thyroid cancer in the regions predicted by the positive PET scan. All nine (100%) patients with elevated preoperative Tg levels experienced a reduction in Tg level after resection. PET scans accurately predict the presence of recurrent thyroid cancer when 131I-WBS are negative. PET scans should be considered in the follow up of 131I-WBS-negative patients with thyroid cancer who are suspected of having recurrent disease. abstract_id: PUBMED:19607742 Evaluation of 18fluoro-2-deoxyglucose positron emission tomography in iodine scan negative, differentiated thyroid cancer recurrence. Background: Follow up of patients with differentiated thyroid cancer is based upon anatomical imaging, thyroglobulin assay and functional imaging in the form of iodine uptake scanning. A significant cohort of such patients has rising thyroglobulin levels but negative iodine scans. In this group, 18fluoro-2-deoxyglucose positron emission tomography scans have been commonly employed. The aim of this study was to assess the usefulness of such investigation. Methods: The sensitivity of 18fluoro-2-deoxyglucose positron emission tomography for detecting recurrence of differentiated thyroid cancer was calculated from a retrospective review of scan results from patients with iodine scan negative recurrence. Results: Eighteen patients with rising thyroglobulin levels underwent 18fluoro-2-deoxyglucose positron emission tomography scanning. Fourteen patients had negative (and four equivocal) whole body iodine scintigraphy scans. Of these 14, six patients had a positive 18fluoro-2-deoxyglucose positron emission tomography scan, giving a sensitivity of 42.9 per cent.
Conclusions: When assessed in the clinical setting and restricted to patients with negative iodine scans, the sensitivity of 18fluoro-2-deoxyglucose positron emission tomography was found to be lower than in previous case series. abstract_id: PUBMED:25246834 Clinical determinants of fluorodeoxyglucose positron emission tomography/computed tomography in differentiated thyroid cancer patients with elevated thyroglobulin and negative (131)iodine whole body scans after (131)iodine therapy. Background: A cross-sectional prospective study has been conducted on differentiated thyroid cancer (DTC) patients with negative (131)Iodine ((131)I) whole body scans and elevated thyroglobulin (Tg) levels. The main objective of this research was to determine the prevalence of the conversion of differentiated to dedifferentiated thyroid cancer among patients during follow-up at the Hospital Kuala Lumpur. It has been demonstrated that fluorodeoxyglucose (FDG) uptake is inversely proportional to the iodine concentration and to differentiation of the cells. Methods: Thirty-five patients with histologically proven DTC who had undergone total or near total thyroidectomy, and post (131)I radioactive iodine ablation therapy, were selected and prospectively analysed. The patients also had to show at least one negative whole body scan and Tg levels of 10 μg/L and above. The results of the FDG-Positron Emission Tomography/Computed Tomography (PET/CT) were then studied to determine the association and the predictors influencing the outcome by using univariable and multivariable analyses. Results: Out of the thirty-five patients, 60% of them (twenty-one) showed positive results and 40% (fourteen) showed negative results. Age, gender, and type of histopathology (HPE) showed significant associations with the positive results of the FDG-PET/CT. The results also showed no correlation between the Tg levels and the maximum standardised uptake value (SUVmax) in the DTC patients with positive disease findings on the FDG-PET/CT. The predictor identified in this study was age. Conclusion: The prevalence of the conversion of differentiated to dedifferentiated thyroid cancer among patients with negative (131)I and elevated Tg was 60%, with age as the predictor. DTC patients aged 45 years and older were seven times more likely to have positive results on FDG-PET/CT imaging. Answer: No, negative (124)I pretherapy positron emission tomography (PET) scans in patients with elevated serum thyroglobulin levels do not predict negative (131)I posttherapy scans. A study found that in a selected patient population, (131)I posttherapy scans were frequently positive in patients with elevated serum thyroglobulin levels, a negative diagnostic (131)I or (123)I scan, and a negative (124)I PET scan. Therefore, a negative (124)I PET scan has a low predictive value for a negative post-(131)I therapy scan and should not be used to exclude the option of blind (131)I therapy (PUBMED:24820222).
Instruction: II. Do chronic pain patients' perceptions about their preinjury jobs differ as a function of worker compensation and non-worker compensation status? Abstracts: abstract_id: PUBMED:8788575 II. Do chronic pain patients' perceptions about their preinjury jobs differ as a function of worker compensation and non-worker compensation status? Objectives: (1) To demonstrate a relationship between intent to return to preinjury job and preinjury job perceptions about that job; and (2) to demonstrate that worker compensation chronic pain patients (WC CPPs) would be more likely than non-worker compensation chronic pain patients (NWC CPPs) not to intend to return to a preinjury type of job because of preinjury job perceptions. Study Design: The relationship between preinjury job perceptions and intent to return to the preinjury job was investigated and compared between worker compensation (WC) and nonworker compensation (NWC) chronic pain patients (CPPs). Within the WC and NWC groups, CPPs not intending to return to their preinjury type of work were compared, on preinjury job perceptions, to those CPPs intending to return. Background Data: Compensation status, being a WC CPP or a non-WC CPP, has been claimed to be predictive or not predictive of return to work post pain treatment. These studies have, however, ignored the preinjury job stress perception variable as an area of research. Methods: WC CPPs were age- and sex-matched to NWC CPPs and statistically compared on their responses to rating scale and yes/no questionnaires for intent to return to work and perceived preinjury job stress. In a second analysis, both the WC and NWC groups were divided according to their intent to return to work and statistically compared on their responses to these questionnaires. Results: Both male and female WC CPPs were less likely than their counterparts to intend to return to their preinjury job. Both WC and NWC CPPs were found to report preinjury job complaints, and these complaints were found to differ between WC and NWC CPPs. An association between intent not to return to work and the perceptions of preinjury job dissatisfaction and job dislike was found for male and female WC CPPs and for male and female NWC CPPs. Conclusions: There may be a relationship between some preinjury job perceptions and intent to return to the preinjury type of work in some groups of CPPs. However, a specific relationship between WC status, intent not to return to the preinjury type of work, and preinjury job perceptions in comparison to NWC CPPs could not be demonstrated. abstract_id: PUBMED:2975163 Compensation status and symptoms reported by patients with chronic pain. This study examined the initial symptoms of patients with chronic pain who were (n = 70) or were not (n = 52) involved in some aspect of the compensation system--worker's compensation, litigation, or Social Security Disability Insurance. Analyses indicated that compensation patients were discriminable from noncompensation patients (p < 0.0001). Compensation patients were younger and less likely to be female; they also tended to report fewer surgeries, shorter pain durations, and more vocational and sexual disability. Finally, they perceived their medical conditions to be more severe than had been diagnosed by physicians. The groups did not seem to differ in severity of pain or psychologic distress.
These data are consistent with studies indicating that compensation patients are not "symptom magnifiers," although the data do indicate that the life disruptions reported by these patients may be greater than those reported by patients not involved in compensation systems. abstract_id: PUBMED:37400542 Effects of the 2016 CDC opioid prescription guidelines on opioid use and worker compensation case length in patients with back pain. Background: Narcotic consumption in the workers' compensation population contributes to prolonged case duration, worse clinical outcomes, and opioid dependence. In 2016, the CDC provided recommendations guiding clinicians on prescribing opioids to adult patients with chronic pain. The objective of our study was to evaluate a cause-and-effect relationship between narcotic consumption and worker compensation claim length before and following guideline revision. Methods: An administration database was retrospectively queried to identify patients evaluated for spine-related workers' compensation claims from 2011 to 2021. Data were recorded for age, sex, BMI, case length, narcotic usage, and injury location. Cases were grouped together by exam date before (2011-2016) and after (2017-2021) the 2016 CDC opioid guideline revision. Results: Six hundred twenty-five patients were evaluated. Males composed 58% of the study population. From 2011 to 2016, narcotic consumption was reported in 54% of subjects versus no narcotic consumption in 46% of subjects (135 cases). From 2017 to 2021, narcotic consumption decreased to 37% (P = 0.00298). Prior to the guideline revision, mean case length was 635 days. Following the CDC guideline revision, there was a significant decline in mean case length to 438 days (31% reduction) (P = 0.000868). Conclusion: This study demonstrates that following revised opioid prescription recommendations by the CDC in 2016, there was a statistically significant decline in opioid consumption and workers' compensation case length. Opioid use may influence prolonged worker disability and delayed return to work. abstract_id: PUBMED:2148975 Litigation and employment status: effects on patients with chronic pain. In order to study the effects of compensation and litigation, 201 chronic pain patients were selected from a sample of 444: 99 were working, 15 were working and litigating, 53 were receiving Worker's Compensation, and 34 were receiving Worker's Compensation and litigating. Employment (working vs. Worker's Compensation) and litigation status (litigating vs. not litigating) were analyzed in a 2 x 2 factorial design with measures of pain, disability, psychological distress, and selected demographics as dependent variables. Compared to Worker's Compensation patients, working patients reported significantly less disability (down-time, days spent in bed, interference of pain in daily activities) and pain of a longer duration. Compared to litigating patients, non-litigating patients reported less pain (on the McGill Pain Questionnaire) and less disability (stopping activity, interference of pain in daily activities). On two measures of psychological distress (depression, anxiety), there were significant interactions: Worker's Compensation patients who were litigating reported less distress than non-litigants, while working patients who were litigating reported more distress than non-litigants. The results indicate clear differences in self-reports of disability associated with both employment and litigation status.
They also suggest that litigation may function as a coping response for patients who are distressed by the adversarial nature of the Worker's Compensation system. Limitations of the study as well as suggestions for further research also are discussed. abstract_id: PUBMED:2972831 Effects of time-limited vs unlimited compensation on pain behavior and treatment outcome in low back pain patients. A common theme in the pain literature is that worker's compensation reinforces pain behavior and adversely influences treatment outcome of chronic pain patients. This study compared 110 chronic low back pain males divided into three groups: 44 receiving no compensation, 27 receiving time-limited worker's compensation, and 39 receiving unlimited social security disability benefits. All patients participated in a multimodal treatment program (e.g. nerve blocks, transcutaneous electrical nerve stimulation, relaxation training, biofeedback). Physician ratings of pain behavior and self-report measures of pain characteristics, activity level, and medication intake were gathered pretreatment; self-report measures were collected again approximately one year following treatment. The results showed disability patients to have a higher percentage of physician-rated symptom dramatization and pain behavior and a greater usage of medication compared with the non-compensation and time-limited worker's compensation patients. At follow-up, no between-group differences were found on measures of pain intensity, medication usage and activity. In general, however, more worker's compensation and non-compensation patients who were initially not working had returned to work at the time of follow-up compared with the disability patients. These results suggest that time-limited compensation may not affect treatment outcome or interfere with return-to-work chances while unlimited compensation may adversely influence the probability that patients will return to work. These findings support the notion that worker's compensation patients receiving time-limited financial benefits do not necessarily represent a 'problem' subgroup of chronic pain patients. abstract_id: PUBMED:2933623 The role of compensation in chronic pain: analysis using a new method of scoring the McGill Pain Questionnaire. Patients who receive worker's compensation or are awaiting litigation after an accident have long been regarded as neurotics or malingerers who are exaggerating their pain for financial gain. However, there is a growing body of evidence that patients who receive worker's compensation are no different from patients who do not. In particular, a recent study found no differences between compensation and non-compensation patients based on pain scores obtained with the McGill Pain Questionnaire (MPQ). Since the MPQ is usually scored by using rank values rather than more complex scale values, the negative finding might be attributable to the loss of information by using rank values. Consequently, a simple technique was developed to convert rank values to weighted-rank values which are equivalent to scale values. A study of 145 patients suffering low-back and musculoskeletal pain revealed that compensation and non-compensation patients had virtually identical pain scores and pain descriptor patterns. They were also similar on the MMPI pain triad (depression, hysteria, hypochondriasis) and on several other personal variables that were examined.
The only differences were significantly lower affective or evaluative MPQ scores and fewer visits to health professionals by compensation patients compared to non-compensation patients. These results suggest that the financial security provided by compensation decreases anxiety, which is reflected in the lower affective or evaluative ratings but not the sensory or total MPQ scores. Compensation patients, contrary to traditional opinion, appear not to differ from people who do not receive compensation. Accidents which produce injury and pain should be considered as potentially psychologically traumatic as well as conducive to the development of subtle physiological changes such as trigger points. Patients on compensation or awaiting litigation deserve the same concern and compassion as all other patients who suffer chronic pain. abstract_id: PUBMED:2975372 Treatment outcome in low back pain patients: do compensation benefits make a difference? Some evidence suggests that chronic pain patients who receive worker's compensation benefits have a tendency to exaggerate their symptoms and not benefit from treatment. This study compared 110 male chronic low back pain patients receiving either no compensation, time-limited compensation, or unlimited compensation on pretreatment and follow-up variables. The patients who received unlimited compensation tended to have a higher percentage of physician-rated symptom dramatization, to have more pain behavior, and to use more medication than the no-compensation and time-limited compensation patients. At follow-up, fewer patients with unlimited compensation had returned to work as compared with the other groups. These results suggest that time-limited compensation may not affect treatment outcome or interfere with return to work, while unlimited compensation may adversely influence overall treatment outcome and the probability that patients will return to work. abstract_id: PUBMED:2966332 Compensation and non-compensation chronic pain patients compared for DSM-III operational diagnoses. Two hundred and eighty-three mixed chronic pain patients, consecutive admissions, were diagnostically evaluated as per DSM-III, Axis I, Axis II or personality type psychiatric operational criteria. Controlling for primary organic treatment diagnosis, age and race, statistical comparisons were made between male compensation patients (n = 93) and male non-compensation patients (n = 23) and between female compensation patients (n = 38) and female non-compensation patients (n = 28) for all DSM-III diagnoses. Male compensation patients were significantly overrepresented for these diagnostic groups: conversion disorder (somatosensory type); combined personality disorders; and passive-aggressive personality disorder. Male non-compensation patients were significantly overrepresented for these diagnostic groups: no diagnosis on Axis I; combined personality types; and compulsive personality type. Female compensation patients were significantly overrepresented for conversion disorder (somatosensory) only. Female non-compensation patients were significantly overrepresented for generalized anxiety disorder and combined anxiety syndromes. Compensation chronic pain patients may be at risk for some psychiatric disorders not previously identified: conversion disorder (somatosensory), and personality disorders. abstract_id: PUBMED:11783831 Compensation and chronic pain. Background: The literature contains many different viewpoints on the impact of compensation on recovery from chronic pain. 
Objective: What is the role of compensation in chronic pain and/or chronic pain disability? Methodology: The literature search identified 11 observational studies to provide evidence about this question. Results: There is a paucity of high-quality data on the subject of the impact of compensation on chronic pain. This subject was reviewed under the headings of (1) injury claim rate and duration; (2) recovery; and (3) rehabilitation treatment programs. The studies were of subjects with musculoskeletal pain, mainly low back pain. Conclusions: Filing a compensation claim for costs, retaining a lawyer, or higher pain intensities were limited predictors of longer claims (level 3). As the ratio of compensation to preinjury wage increases, there is moderate evidence (level 2) that the duration of the claim increases and that disability is more likely. Compensation status, particularly combined with higher pain intensities, is associated with poorer prognosis after rehabilitation treatment programs (level 3). abstract_id: PUBMED:30499587 Effect of an opioid management program for Colorado workers' compensation providers on adherence to treatment guidelines for chronic pain. Objective: The aim of this study was to examine adherence to state guidelines by Colorado workers' compensation physicians/providers treating injured workers with chronic pain after initiation of an opioid management program and provider incentives. Methods: A retrospective cohort of chronic, non-cancer pain claims was constructed from Colorado's workers' compensation database. Adherence to treatment guidelines and opioid prescribing practices were evaluated during implementation of a new billing code to incentivize adherence. Results: Overall, less than 33% of claims showed evidence of opioid management. Comprehensive opioid management was observed in only 4.4% of claims. In 2010, after implementing the new billing code, the ratio of long-acting to short-acting opioids decreased from 0.2 to 0.13, returning to 0.2 within one year. Similarly, morphine equivalent doses declined for a short period. Conclusions: Incentivizing physicians to adhere to chronic pain management guidelines only temporarily improves prescribing practices. Answer: Yes, chronic pain patients' perceptions about their preinjury jobs can differ as a function of worker compensation (WC) and non-worker compensation (NWC) status. The study by Tait et al. (PUBMED:8788575) aimed to demonstrate a relationship between intent to return to preinjury job and preinjury job perceptions, and to show that WC chronic pain patients (CPPs) would be less likely than NWC CPPs to intend to return to a preinjury type of job due to preinjury job perceptions. The results indicated that both male and female WC CPPs were less likely than their counterparts to intend to return to their preinjury job. An association was found between intent not to return to work and the perceptions of preinjury job dissatisfaction and job dislike for both male and female WC CPPs and for male and female NWC CPPs. However, a specific relationship between WC status, intent not to return to the preinjury type of work, and preinjury job perceptions in comparison to NWC CPPs could not be conclusively demonstrated. Other studies have also explored the relationship between compensation status and various aspects of chronic pain. For instance, a study by Polatin et al.
(PUBMED:2975163) found that compensation patients perceived their medical conditions to be more severe than diagnosed by physicians and reported greater vocational and sexual disability, although they did not differ in severity of pain or psychological distress compared to noncompensation patients. Similarly, a study by Rainville et al. (PUBMED:2972831) suggested that unlimited compensation might adversely influence the probability that patients will return to work, while time-limited compensation may not affect treatment outcome or interfere with return-to-work chances. In summary, chronic pain patients' perceptions about their preinjury jobs and their intent to return to work can be influenced by their compensation status, with some evidence suggesting that WC CPPs may have different perceptions and intentions compared to NWC CPPs. However, the relationship is complex and may not be solely determined by compensation status.
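For readers unfamiliar with the 2 x 2 factorial analysis used in the litigation study above (PUBMED:2148975), the following hypothetical Python sketch shows the general shape of such an analysis: employment status and litigation status as crossed factors, a distress or disability score as the dependent variable, and a two-way ANOVA testing the main effects and the interaction the study reports. The data frame and column names are invented for illustration; this is not the study's code or data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "employment": rng.choice(["working", "compensation"], size=n),
    "litigating": rng.choice(["yes", "no"], size=n),
    "distress": rng.normal(50, 10, size=n),  # synthetic outcome score
})

# Two-way ANOVA with interaction: main effects of employment and litigation
# plus the employment x litigation interaction term.
model = ols("distress ~ C(employment) * C(litigating)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))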
Instruction: Can targeting nondependent problem drinkers and providing internet-based services expand access to assistance for alcohol problems? Abstracts: abstract_id: PUBMED:11513231 Can targeting nondependent problem drinkers and providing internet-based services expand access to assistance for alcohol problems? A study of the moderation management self-help/mutual aid organization. Objective: Moderation Management (MM) is the only alcohol self-help organization to target nondependent problem drinkers and to allow moderate drinking goals. This study evaluated whether MM drew into assistance an untapped segment of the population with nondependent alcohol problems. It also examined how access to the organization was influenced by the provision of Internet-based resources. Method: A survey was distributed to participants in MM face-to-face and Internet-based self-help groups. MM participants (N = 177, 50.9% male) reported on their demographic characteristics, alcohol consumption, alcohol problems and utilization of professional and peer-run helping resources. Results: MM appears to attract women and young people, especially those who are nondependent problem drinkers. It was also found that a significant minority of members experienced multiple alcohol dependence symptoms and therefore may have been poorly suited to a moderate drinking program. Conclusions: Tailoring services to nondependent drinkers and offering assistance over the Internet are two valuable methods of broadening the base of treatment for alcohol problems. Although interventions like MM are unlikely to benefit all individuals who access them, they do attract problem drinkers who are otherwise unlikely to use existing alcohol-related services. abstract_id: PUBMED:22954459 Comparison of two internet-based interventions for problem drinkers: randomized controlled trial. Background: Alcohol problems are a serious public health concern, and few problem drinkers ever seek treatment. The Internet is one means of promoting access to care, but more research is needed to test the best types of interventions to employ. Evaluation of Internet-based interventions that contain a variety of research-validated cognitive-behavioral tools, which have been shown to be helpful to those with more severe alcohol concerns, should be a priority. Objective: To evaluate whether providing access to an extended Internet intervention for alcohol problems offers additional benefits in promoting reductions in alcohol consumption compared with a brief Internet intervention. The hypothesis for the current trial was that respondents who were provided with access to an extended Internet intervention (the Alcohol Help Center [AHC]) would display significantly improved drinking outcomes at 6-month follow-up, compared with respondents who were provided with access to a brief Internet intervention (the Check Your Drinking [CYD] screener). Methods: A single-blinded randomized controlled trial with a 6-month follow-up. A general population sample of problem drinkers was recruited through newspaper advertisements in a large metropolitan city. Baseline and follow-up data were collected by postal mail. Results: A volunteer sample of problem drinkers of legal drinking age with home access to the Internet were recruited for the trial. Of 239 potential respondents recruited in 2010, 170 met inclusion criteria (average age 45 years; 101/170, 59.4% male; average Alcohol Use Disorders Identification Test [AUDIT] score of 22). 
Follow-up rates were 90.0% (153/170) with no adverse effects of the interventions reported. A repeated-measures multivariate analysis of variance of the outcome measures using an intent-to-treat approach found a significantly greater reduction in amount of drinking among participants provided access to the AHC than among participants provided access to the CYD (P = .046). Conclusions: The provision of the AHC gave additional benefit in the short term to problem drinkers over that seen from the research-validated CYD, indicating the benefits of promoting access to these interventions as one means of helping people with problem drinking concerns. Trial Registration: ClinicalTrials.gov NCT01114919; http://clinicaltrials.gov/ct2/show/NCT01114919 (Archived by WebCite at http://www.webcitation.org/68t1dCkRZ). abstract_id: PUBMED:20973846 Internet-based interventions for problem drinkers: From efficacy trials to implementation. Aims: Internet-based interventions (IBIs) for problem drinkers have been in existence for over a decade. In that time, IBIs have increased in sophistication and there is the beginning of a solid research base suggesting their efficacy. A growing number of problem drinkers are using IBIs and attempts have been made to explore how IBIs can be integrated within primary care and other health-care settings. This symposium provided an overview of IBIs for problem drinkers and highlighted some of the important issues in their development and implementation. Rationale: IBIs appear to be at a 'cusp' as technology and intervention practices are merged together in an attempt to provide better health care for problem drinkers. The timing of the 2009 International Network on Brief Interventions for Alcohol Problems Conference was ideal for a presentation and discussion of the role that IBIs play now that IBIs have started to shift into the mainstream of services for problem drinkers. Summary: The presentations in this symposium covered the 'bench to bedside' aspects of the development and evaluation of IBIs. They included a systematic review of the research to date in this field, a report on the results from a just-completed randomised controlled trial, a report on an effectiveness trial of implementing IBIs in multiple university settings and a consideration of the cost-effectiveness of IBIs. abstract_id: PUBMED:25604206 Randomized controlled trial of a minimal versus extended Internet-based intervention for problem drinkers: study protocol. Background: Problem drinking causes great harm to the person and to society. Most problem drinkers will never seek treatment. The current trial will test the efficacy of two Internet interventions for problem drinking - one minimal and the other extended - as an alternate means of providing help to those in need. Methods/design: A double-blinded, four-wave panel design with random assignment to two experimental conditions will be used in this study. Participants will be recruited through a comprehensive recruitment strategy consisting of online and print advertisements asking for people who are 'interested in helping us develop and evaluate Internet-based interventions for problem drinkers.' Potential participants will be screened to select problem drinkers who have home access to the Internet.
Participants will be sent to a password-protected Internet site and, upon signing in, will be randomized to be provided access to the minimal or extended Internet-based intervention. Six-month, twelve-month, and two-year drinking outcomes will be compared between experimental conditions. The primary hypothesis is that participants in the extended Internet intervention condition will display significantly improved drinking outcomes at twelve months compared to participants in the minimal intervention. Discussion: The findings of this trial will contribute to the growing literature on Internet interventions for problem drinkers. In addition, findings from this trial will contribute to the scarce literature available evaluating the long-term efficacy of brief interventions for alcohol problems. Trial Registration: ClinicalTrials.gov #NCT01874509; first submitted June 17, 2013. abstract_id: PUBMED:34305681 A Deep Learning Algorithm to Predict Hazardous Drinkers and the Severity of Alcohol-Related Problems Using K-NHANES. Purpose: The number of patients with alcohol-related problems is steadily increasing. A large-scale survey of alcohol-related problems has been conducted. However, studies that predict hazardous drinkers and identify which factors contribute to the prediction are limited. Thus, the purpose of this study was to predict hazardous drinkers and the severity of alcohol-related problems of patients using a deep learning algorithm based on large-scale survey data. Materials and Methods: Datasets of the National Health and Nutrition Examination Survey of South Korea (K-NHANES), a nationally representative survey for the entire South Korean population, were used to train deep learning and conventional machine learning algorithms. Datasets from 69,187 and 45,672 participants were used to predict hazardous drinkers and the severity of alcohol-related problems, respectively. Based on the degree of contribution of each variable to deep learning, it was possible to determine which variables contributed significantly to the prediction of hazardous drinkers. Results: Deep learning showed higher performance than conventional machine learning algorithms. It predicted hazardous drinkers with an AUC (area under the receiver operating characteristic curve) of 0.870 (logistic regression: 0.858, linear SVM: 0.849, random forest classifier: 0.810, K-nearest neighbors: 0.740). Among 325 variables for predicting hazardous drinkers, energy intake was the factor showing the greatest contribution to the prediction, followed by carbohydrate intake. Participants were classified into Zone I, Zone II, Zone III, and Zone IV based on the degree of alcohol-related problems, showing AUCs of 0.881, 0.774, 0.853, and 0.879, respectively. Conclusion: Hazardous drinking groups could be effectively predicted and individuals could be classified according to the degree of alcohol-related problems using a deep learning algorithm. This algorithm could be used to screen people who need treatment for alcohol-related problems among the general population or hospital visitors.
Methods: Data were retrieved from a nationally representative survey from 2013 on use of and problems related to alcohol, tobacco, illicit drugs and non-prescribed use of analgesics and sedatives with 15,576 respondents. Alcohol users were divided into different groups on the basis of frequency of drinking overall and binge drinking. Tobacco use was measured in terms of daily use and use of illicit drugs and non-prescribed use of analgesics and sedatives were measured in terms of last 12 months prevalence. A dichotomous indicator of a DSM-IV dependence or abuse diagnosis was used. Logistic regression models were estimated to examine the relationship between various patterns of drinking in combination with other substance use and risk of alcohol abuse and/or dependence. Results: People who drink alcohol in Sweden were more likely to use other addictive substances than non-drinkers and such concurrent use becomes more common the more alcohol is consumed. Alcohol drinkers using other substances have a higher prevalence of alcohol abuse and dependence at all frequencies of drinking. Multivariate models controlling for sex, age and drinking frequency found that an elevated risk of harm remained for drinkers using addictive substances other than snuff. Conclusion: A large group of drinkers in the Swedish general population have an accumulation of risks as a result of using both alcohol and other addictive substances. Concurrent use of cigarettes, illicit drugs and non-prescribed use of analgesics and sedatives adds an independent risk of alcohol abuse/dependence in this group in addition to their drinking. The findings point at the importance of taking multiple substance-use patterns into account when combating drinking problems. Screening for concurrent use of other addictive substances could help healthcare providers to identify patients in need of treatment for alcohol problems. abstract_id: PUBMED:19922569 A randomized controlled trial of an internet-based intervention for alcohol abusers. Objective: Misuse of alcohol imposes a major public health cost, yet few problem drinkers are willing to access in-person services for alcohol abuse. The development of brief, easily accessible ways to help problem drinkers who are unwilling or unable to seek traditional treatment services could therefore have significant public health benefit. The objective of this project is to conduct a randomized controlled evaluation of the internet-based Check Your Drinking (CYD) screener ( http://www.CheckYourDrinking.net). Method: Participants (n = 185) recruited through a general telephone population survey were assigned randomly to receive access to the CYD, or to a no-intervention control group. Results: Follow-up rates were excellent (92%). Problem drinkers provided access to the CYD displayed a six to seven drinks reduction in their weekly alcohol consumption (a 30% reduction in typical weekly drinking) at both the 3- and 6-month follow-ups compared to a one drink per week reduction among control group respondents. Conclusions: The CYD is one of a growing number of internet-based interventions with research evidence supporting its efficacy to reduce alcohol consumption. The internet could increase the range of help-seeking options available because it takes treatment to the problem drinker rather than making the problem drinker come to treatment. 
abstract_id: PUBMED:16754367 Access to the Internet among drinkers, smokers and illicit drug users: is it a barrier to the provision of interventions on the World Wide Web? Background: Expanding Internet-based interventions for substance use will have little benefit if heavy substance users are unlikely to have Internet access. This paper explored whether access to the Internet was a potential barrier to the provision of services for smokers, drinkers and illicit drug users. Methods: As part of a general population telephone survey of adults in Ontario, Canada, respondents were asked about their use of different drugs and also about their use of the Internet. Results: Pack-a-day smokers were less likely (48%) to have home Internet access than non-smokers (69%), and current drinkers (73%) were more likely to have home access than abstainers (50%). These relationships remained true even after controlling for demographic characteristics. Internet access was less clearly associated with cannabis or cocaine use. Conclusions: Even though there is variation in access among smokers, drinkers and illicit drug users, the World Wide Web remains an excellent opportunity to potentially provide services for substance abusers who might never access treatment in person because, in absolute terms, the majority of substance abusers do use the Internet. abstract_id: PUBMED:27770293 Randomized Controlled Trial of a Brief Versus Extended Internet Intervention for Problem Drinkers. Purpose: Brief Internet interventions have been shown to reduce alcohol consumption. This trial intended to compare the effects of one such brief intervention to an extended Internet intervention for problem drinkers. Method: Using online advertising, 490 participants, 18 years or older, were recruited and randomized to receive a brief (CheckYourDrinking.net) versus an extended (AlcoholHelpCentre.net) Internet intervention and were followed up at 6, 12, and 24 months. The per protocol primary analysis assessed difference between condition at the 12-month follow-up. Results: The follow-up rate at 12 months was 83.3%. ANCOVAs of the primary (Alcohol Use Disorder Identification Test (AUDIT)-C) and secondary outcome variables (drinks in a typical week, highest number of drinks on one occasion; baseline drinking as covariate) revealed no significant (p > 0.05) differences between the interventions. Similarly, combined analyses of the 6-, 12-, and 24-month follow-up revealed no significant differences between interventions at all time points. Conclusion: The present study does not provide support for the added benefit of an extended Internet intervention for problem drinkers over a brief Internet intervention. abstract_id: PUBMED:21584037 A study of problem drinkers in a general hospital. 349 new admissions in the wards of Medicine, General Surgery & Orthopedics in a general hospital were screened with MAST & AUDIT for problem use of alcohol. Problem drinking was present in 14.6% of the inpatients. The severity and the need for additional treatment were measured with Addiction Severity Index (ASI). Majority of the patients had problems in more than one area. Nevertheless, only one fourth of the patients were referred for psychiatric treatment. The findings indicate the need to develop services towards the recognition and referrals of the problem drinkers in general hospitals. Answer: Yes, targeting nondependent problem drinkers and providing internet-based services can expand access to assistance for alcohol problems.
Moderation Management (MM), a self-help organization that targets nondependent problem drinkers and allows moderate drinking goals, appears to attract an untapped segment of the population with nondependent alcohol problems, including women and young people. This suggests that tailoring services to nondependent drinkers and offering assistance over the Internet are valuable methods for broadening the base of treatment for alcohol problems (PUBMED:11513231). Internet-based interventions (IBIs) for problem drinkers have increased in sophistication and have a solid research base suggesting their efficacy. They have been shown to be helpful to those with more severe alcohol concerns and can be integrated within primary care and other health-care settings. IBIs are now shifting into the mainstream of services for problem drinkers, indicating their potential to expand access to assistance (PUBMED:20973846). Moreover, a randomized controlled trial comparing two internet-based interventions for problem drinkers found that providing access to an extended Internet intervention offered additional benefits in promoting reductions in alcohol consumption compared with a brief Internet intervention. This indicates the benefits of promoting access to these interventions as a means of helping people with problem drinking concerns (PUBMED:22954459). Additionally, the internet could increase the range of help-seeking options available because it takes treatment to the problem drinker rather than making the problem drinker come to treatment. The Check Your Drinking (CYD) screener, an internet-based intervention, has been shown to reduce alcohol consumption among problem drinkers (PUBMED:19922569). Lastly, access to the Internet among drinkers, smokers, and illicit drug users is not a significant barrier to the provision of interventions on the World Wide Web. The majority of substance abusers do use the Internet, making it an excellent opportunity to potentially provide services for those who might never access treatment in person (PUBMED:16754367).
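Editorial note on the AUC figures cited from PUBMED:34305681 above (deep learning 0.870 vs. logistic regression 0.858, linear SVM 0.849, random forest 0.810, k-nearest neighbors 0.740): the area under the receiver operating characteristic curve equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal Python sketch of that rank-based (Mann-Whitney) definition follows; the labels and scores are invented toy values, not data from the study.

# Illustrative only: rank-based computation of the AUC metric used in
# PUBMED:34305681 to compare classifiers. Labels/scores below are invented.
def roc_auc(labels, scores):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = hazardous drinker, 0 = not; scores are model outputs.
y = [1, 1, 1, 0, 0, 0, 0, 1]
s = [0.9, 0.8, 0.4, 0.3, 0.2, 0.5, 0.1, 0.7]
print(f"AUC = {roc_auc(y, s):.3f}")  # 0.938 for these toy values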
Instruction: Should we allow a trial of labor after a previous cesarean for dystocia in the second stage of labor? Abstracts: abstract_id: PUBMED:11576583 Should we allow a trial of labor after a previous cesarean for dystocia in the second stage of labor? Objective: To estimate the rate of successful vaginal birth including operative vaginal delivery in patients with a previous cesarean for cephalopelvic disproportion in the second stage of labor. Methods: Data from all patients who underwent trial of labor after a previous cesarean between 1990 and 2000 at our tertiary care institution were analyzed. Medical records were reviewed and data collected for the following variables: indication for the previous cesarean, birth weight and cervical dilatation at previous cesarean delivery, as well as the mode of delivery (spontaneous, vacuum, forceps, cesarean) and the birth weight for the subsequent pregnancy. Pearson's chi(2) test and one-way analysis of variance were used for statistical analyses. Results: There were 2002 patients included in the study. Two hundred fourteen (11%) had their previous cesarean for dystocia in the second stage of labor, 654 (33%) for dystocia in the first stage of labor, and 1134 (57%) for other indications. The vaginal birth after cesarean success rate was 75.2% (P = .015 vs other indications), 65.6% (P < .001 vs other indications), and 82.5%, respectively. The rate of operative vaginal delivery was 15%, 12%, and 10% (P = .109). Conclusion: A trial of labor is reasonable in women whose previous cesarean was for dystocia in the second stage of labor. In this series, patients who underwent a trial of labor after a previous cesarean for dystocia in the second stage had 75.2% (95% confidence interval 69.5, 81.0) chance of achieving vaginal delivery. abstract_id: PUBMED:22836821 Adverse obstetric outcomes in women with previous cesarean for dystocia in second stage of labor. Objective: To evaluate obstetric outcomes in women undergoing a trial of labor (TOL) after a previous cesarean for dystocia in second stage of labor. Methods: A retrospective cohort study of women with one previous low transverse cesarean undergoing a first TOL was performed. Women with previous cesarean for dystocia in first stage and those with previous dystocia in second stage were compared with those with previous cesarean for nonrecurrent reasons (controls). Multivariable regression analyses were performed. Results: Of 1655 women, those with previous dystocia in second stage of labor (n = 204) had greater risks than controls (n = 880) to have an operative delivery [odds ratio (OR): 1.5; 95% confidence intervals (CI) 1.1 to 2.2], shoulder dystocia (OR: 2.9; 95% CI 1.1 to 8.0), and uterine rupture in the second stage of labor (OR: 4.9; 95% CI 1.1 to 23), and especially in case of fetal macrosomia (OR: 29.6; 95% CI 4.4 to 202). The median second stage of labor duration before uterine rupture was 2.5 hours (interquartile range: 1.5 to 3.2 hours) in these women. Conclusion: Previous cesarean for dystocia in the second stage of labor is associated with second-stage uterine rupture at next delivery, especially in cases of suspected fetal macrosomia and prolonged second stage of labor. abstract_id: PUBMED:34030598 The continuum of a prolonged labor and a second stage cesarean delivery. Objective: To investigate the association of the timing of primary cesarean delivery with the progress of labor and the operative delivery rate at the subsequent successful trial of labor.
Methods: A retrospective study of women with a primary cesarean and subsequent term cephalic vaginal delivery in two medical centers. Cesarean deliveries were classified as planned, intrapartum first stage or intrapartum second stage. The second stage duration and the operative delivery rate, adjusted to epidural analgesia and oxytocin use, were compared between the groups. χ2 and Kruskal-Wallis tests were used for analysis of categorical and continuous variables, respectively. Results: The study population included 1166 women. The second stage of labor was longer when the previous cesarean delivery occurred during the second stage compared to planned or first stage (1.7 h vs 1.3 h vs 1.3 h, p = 0.005). The proportion of operative deliveries was greater among women with previous cesarean in the second stage of labor (39.6%), compared to planned (26.9%) or first stage (28.8%), p = 0.006. Conclusion: Cesarean delivery at the second stage of labor is associated with a longer second stage and an increased operative delivery rate at the subsequent vaginal birth. Our findings attest to the delicate passenger-passage relations that can exist in some parent-couples. abstract_id: PUBMED:15804787 Cesarean delivery during second-stage labor: characteristics and diagnostic accuracy. Objective: To characterize dysfunctional labors that lead to cesarean delivery in the second stage and to assess the accuracy of diagnoses of abnormal fetal descent. Methods: Thirty-one patients delivered by cesarean during the second stage because of abnormal labor or presumed cephalopelvic disproportion were studied and compared to 62 control cesarean cases delivered for the same indications in the first stage. The clinical diagnosis of dysfunctional labor that led to the cesarean was compared to the diagnosis made by retrospective analysis of the labor curves. Results: Cases did not differ from controls delivered in the first stage in maternal age, race, parity, gestational age, weight gain, or the frequency of associated medical complications. The newborns were not significantly different in birth weight, ponderal index, sex, or the incidence of low Apgar scores. Among study patients, 94% had a second stage labor dysfunction determined by graphic labor analysis, predominantly arrest of descent (69%) and failure of descent (28%). In 79% of cases a dysfunctional first stage preceded the abnormal second stage. Among these first stage labor abnormalities, 68% were not recognized during the labor. Conclusion: Characteristics of patients delivered by cesarean during the second stage were similar to those delivered before full cervical dilatation. Second stage labor abnormalities were usually preceded by an abnormal first stage. There was considerable inaccuracy in the diagnosis of second stage labor dysfunction. abstract_id: PUBMED:33202319 Successful vaginal birth after cesarean in the second delivery is not associated with the stage of labor of the primary unplanned cesarean delivery. Background: Candidates for trial of labor after cesarean must be carefully screened to maximize success and minimize morbidity. Demographic and obstetric characteristics affecting success rates must be delineated. Objective: We examined whether the labor stage of the primary delivery in which a woman underwent an unplanned cesarean delivery would affect the likelihood that she could achieve a subsequent vaginal birth. Study Design: Electronic medical records-based study of 676 parturients.
Trial of labor rates and outcomes were compared between women whose primary cesarean delivery was performed in the first vs. the second stage of labor. Setting: Hadassah Medical Center, Israel. Population: Women in their second pregnancies, with singleton fetuses, who underwent unplanned cesarean delivery in their first pregnancy and elected trial of labor in the second delivery. The main outcome measures were maternal and neonatal complications and vaginal birth rates in first vs. second stage of labor groups. Results: In our population, 76% of women attempt trial of labor after cesarean. Rates of successful vaginal delivery did not differ significantly between those who underwent primary cesarean in the first vs. second stage of labor: 67.4% vs. 70.2%, p = 0.483, respectively. Among women whose primary UCD was in the second stage, only 18.2% (35/192) required a UCD in the second stage in the subsequent delivery, while 58.9% (113/192) underwent UCD in the first stage in both deliveries. Conclusion: Labor stage of the primary unplanned cesarean delivery should not dissuade women from a trial of labor after cesarean in their second delivery. abstract_id: PUBMED:26348381 Effect of stage of initial labor dystocia on vaginal birth after cesarean success. Objective: The objective of the study was to examine whether the stage of labor dystocia causing a primary cesarean delivery (CD) affects a trial of labor after cesarean (TOLAC) success. Study Design: This was a retrospective cohort study of women who had primary CD of singleton pregnancies for first- or second-stage labor dystocia and attempted TOLAC at a single hospital between 2002 and 2014. We compared TOLAC success rates between women whose primary CD was for first- vs second-stage labor dystocia and investigated whether the effect of prior dystocia stage on TOLAC success was modified by previous vaginal delivery (VD). Results: A total of 238 women were included; nearly half (49%) achieved vaginal birth after cesarean (VBAC). Women with a history of second-stage labor dystocia were more likely to have VBAC compared with those with first-stage dystocia, although this trend was not statistically significant among the general population (55% vs 45%, adjusted odds ratio, 1.4 [95% confidence interval, 0.8-2.5]). However, among women without a prior VD, those with a history of second-stage dystocia did have statistically higher odds of achieving VBAC than those with prior first-stage dystocia (54% vs 38%, adjusted odds ratio, 1.8 [95% confidence interval, 1.0-3.3], P for interaction = .043). Conclusion: Nearly half of women with a history of primary CD for labor dystocia will achieve VBAC. Women with a history of second-stage labor dystocia have a slightly higher VBAC rate, seen to a statistically significant degree in those without a history of prior VD. TOLAC should be offered to all eligible women and should not be discouraged in women with a prior second-stage arrest. abstract_id: PUBMED:29078938 Defining and Managing Normal and Abnormal Second Stage of Labor. The American College of Obstetricians and Gynecologists (ACOG) Practice Bulletin No. 49 on Dystocia and Augmentation of Labor defines a prolonged second stage as more than 2 hours without or 3 hours with epidural analgesia in nulliparous women, and 1 hour without or 2 hours with epidural in multiparous women. This definition diagnoses 10% to 14% of nulliparous and 3% to 3.5% of multiparous women as having a prolonged second stage.
Although current labor norms remain largely based on data established by Friedman in the 1950s, modern obstetric populations and practice have evolved with time. abstract_id: PUBMED:31109302 Success of trial of labor in women with a history of previous cesarean section for failed labor induction or labor dystocia: a retrospective cohort study. Background: The rates of cesarean section (CS) are increasing worldwide leading to an increased risk for maternal and neonatal complications in the subsequent pregnancy and labor. Previous studies have demonstrated that successful trial of labor after cesarean (TOLAC) is associated with the least maternal morbidity, but the risks of unsuccessful TOLAC exceed the risks of scheduled repeat CS. However, prediction of successful TOLAC is difficult, and only limited data on TOLAC in women with previous failed labor induction or labor dystocia exists. Our aim was to evaluate the success of TOLAC in women with a history of failed labor induction or labor dystocia, to compare the delivery outcomes according to stage of labor at time of previous CS, and to assess the risk factors for recurrent failed labor induction or labor dystocia. Methods: This retrospective cohort study of 660 women with a prior CS for failed labor induction or labor dystocia undergoing TOLAC was carried out in Helsinki University Hospital, Finland, between 2013 and 2015. Data on the study population was obtained from the hospital database and analyzed using SPSS. Results: The rate of vaginal delivery was 72.9% and the rate of repeat CS for failed induction or labor dystocia was 17.7%. The rate of successful TOLAC was 75.6% in women with a history of labor arrest in the first stage of labor, 73.1% in women with a history of labor arrest in the second stage of labor, and 59.0% in women with previous failed induction. The adjusted risk factors for recurrent failed induction or labor dystocia were maternal height < 160 cm (OR 1.9, 95% CI 1.1-3.1), no prior vaginal delivery (OR 8.3, 95% CI 3.5-19.8), type 1 or gestational diabetes (OR 1.8, 95% CI 1.0-3.0), IOL for suspected non-diabetic fetal macrosomia (OR 10.8, 95% CI 2.1-55.9) and birthweight ≥4500 g (OR 3.3, 95% CI 1.3-7.9). Conclusions: TOLAC is a feasible option to scheduled repeat CS in women with a history of failed induction or labor dystocia. However, women with no previous vaginal delivery, maternal height < 160 cm, diabetes or suspected neonatal macrosomia (≥4500 g) may be at increased risk for failed TOLAC. abstract_id: PUBMED:33278288 Fetal Head Station at Second-Stage Dystocia and Subsequent Trial of Labor After Cesarean Delivery Success Rate. Objective: To investigate whether fetal head station at the index cesarean delivery is associated with a subsequent trial of labor success rate among primiparous women. Methods: A retrospective cohort study conducted at two tertiary medical centers included all primiparous women with subsequent delivery after cesarean delivery for second-stage dystocia during 2009-2019, identified from the electronic medical record databases. Univariate and multivariate analyses were performed to assess the factors associated with successful trial of labor after cesarean (TOLAC) (primary outcome). Additionally, all women with failed TOLAC were matched one-to-one to women with successful TOLAC, according to factors identified in the univariate analysis.
Results: Of 481 primiparous women with prior cesarean delivery for second-stage dystocia, 64.4% (n=310) attempted TOLAC, and 222 (71.6%) successfully delivered vaginally. The rate of successful TOLAC was significantly higher in those with fetal head station below the ischial spines at the index cesarean delivery, as compared with those with higher head station (79.0% vs 60.5%, odds ratio [OR] 2.46, 95% CI 1.49-4.08). The proportion of neonates weighing more than 3,500 g in the subsequent delivery was lower in those with successful TOLAC compared with failed TOLAC (29.7% vs 43.2%, OR 0.56, 95% CI 0.33-0.93). In a multivariable analysis, lower fetal head station at the index cesarean delivery was the only independent factor associated with TOLAC success (adjusted OR 2.38, 95% CI 1.43-3.96). Matching all women with failed TOLAC one-to-one to women with successful TOLAC, according to birth weight and second-stage duration at the subsequent delivery, lower fetal head station at the index cesarean delivery remained the only factor associated with successful TOLAC. Conclusion: Lower fetal head station at the index cesarean delivery for second-stage dystocia was independently associated with a higher vaginal birth after cesarean rate, with an overall acceptable success rate. These findings should improve patient counseling and reassure those who wish to deliver vaginally after prior second-stage arrest. abstract_id: PUBMED:38398380 Ultrasonographic Evaluation of the Second Stage of Labor according to the Mode of Delivery: A Prospective Study in Greece. Background And Objectives: Accurate diagnosis of labor progress is crucial for making well-informed decisions regarding timely and appropriate interventions to optimize outcomes for both the mother and the fetus. The aim of this study was to assess the progress of the second stage of labor using intrapartum ultrasound. Material And Methods: This was a prospective study (December 2022-December 2023) conducted at the Third Department of Obstetrics and Gynecology, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece. Maternal-fetal and labor characteristics were recorded, and two ultrasound parameters were measured: the angle of progression (AoP) and the head-perineum distance (HPD). The correlation between the two ultrasonographic values and the maternal-fetal characteristics was investigated. Multinomial regression analysis was also conducted to investigate any potential predictors of the mode of delivery. Results: A total of 82 women at the second stage of labor were clinically and sonographically assessed. The mean duration of the second stage of labor differed between vaginal and cesarean deliveries (65.3 vs. 160 min; p-value < 0.001) and between cesarean and operative vaginal deliveries (160 vs. 88.6 min; p-value = 0.015). The occiput anterior position was associated with an increased likelihood of vaginal delivery (OR: 24.167; 95% CI: 3.8-152.5; p-value < 0.001). No significant differences were identified in the AoP among the three different modes of delivery (vaginal: 145.7° vs. operative vaginal: 139.9° vs. cesarean: 132.1°; p-value = 0.289). The mean HPD differed significantly between vaginal and cesarean deliveries (28.6 vs. 41.4 mm; p-value < 0.001) and between cesarean and operative vaginal deliveries (41.4 vs. 26.9 mm; p-value = 0.002); it was correlated significantly with maternal BMI (r = 0.268; p-value = 0.024) and the duration of the second stage of labor (r = 0.256; p-value = 0.031).
Low parity (OR: 12.024; 95% CI: 6.320-22.876; p-value < 0.001) and high HPD (OR: 1.23; 95% CI: 1.05-1.43; p-value = 0.007) were found to be significant predictors of cesarean delivery. Conclusions: The use of intrapartum ultrasound as an adjunctive technique to the standard clinical evaluation may enhance the diagnostic approach to an abnormal labor progress and predict the need for operative vaginal or cesarean delivery. Answer: Based on the information provided in the abstracts, it appears that allowing a trial of labor after a previous cesarean for dystocia in the second stage of labor is a reasonable option. A study found that women whose previous cesarean was for dystocia in the second stage had a 75.2% chance of achieving vaginal delivery in a subsequent trial of labor (PUBMED:11576583). Another study indicated that while there are greater risks associated with a trial of labor after a previous cesarean for dystocia in the second stage, such as operative delivery and uterine rupture, especially in cases of fetal macrosomia and prolonged second stage of labor, it does not preclude the possibility of a successful vaginal birth (PUBMED:22836821). Further research showed that cesarean delivery during the second stage of labor is associated with a longer second stage and an increased operative delivery rate at the subsequent vaginal birth, suggesting that the decision should be made carefully considering the individual circumstances (PUBMED:34030598). Additionally, it was found that the stage of labor during the primary cesarean delivery does not significantly affect the likelihood of achieving a subsequent vaginal birth (PUBMED:33202319). Moreover, nearly half of the women with a history of primary cesarean for labor dystocia will achieve a vaginal birth after cesarean (VBAC), and those with a history of second-stage labor dystocia have a slightly higher VBAC rate, especially in those without a history of prior vaginal delivery (PUBMED:26348381). Another study found that the success rate of trial of labor after cesarean was similar regardless of whether the previous cesarean was due to labor arrest in the first or second stage of labor (PUBMED:31109302). Lastly, a study showed that a lower fetal head station at the index cesarean delivery for second-stage dystocia was independently associated with a higher vaginal birth after cesarean rate (PUBMED:33278288). This suggests that certain factors, such as fetal head station, can influence the success of a trial of labor after a previous cesarean for dystocia in the second stage of labor. In conclusion, while there are risks associated with a trial of labor after a previous cesarean for dystocia in the second stage of labor, the evidence suggests that it is a reasonable option for many women, with a significant proportion achieving successful vaginal delivery. Each case should be evaluated individually, taking into account the specific circumstances and potential risk factors.
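Editorial note: several of the abstracts in this record (e.g., PUBMED:22836821, PUBMED:33278288) summarize group comparisons as odds ratios with 95% confidence intervals. As a worked illustration of that arithmetic only, the Python sketch below computes an odds ratio and its Woolf (log-scale) confidence interval from a 2x2 table; the counts are hypothetical and are not taken from any of the cited studies.

import math

# Illustrative only: odds ratio with a Woolf (log-scale) 95% CI, the summary
# form reported in the TOLAC abstracts above. Counts below are hypothetical.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a=exposed events, b=exposed non-events,
    c=unexposed events, d=unexposed non-events."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: VBAC success/failure in two exposure groups.
or_, lo, hi = odds_ratio_ci(60, 20, 45, 35)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 2.33 (95% CI 1.19-4.57)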
Instruction: Is Laparoscopic Bariatric Surgery a Safe Option in Extremely High-Risk Morbidly Obese Patients? Abstracts: abstract_id: PUBMED:26301769 Is Laparoscopic Bariatric Surgery a Safe Option in Extremely High-Risk Morbidly Obese Patients? Introduction: Age, superobesity, and cardiopulmonary comorbidities define patients as high risk for bariatric surgery. We evaluated the outcomes following bariatric surgery in extremely high-risk patients. Materials And Methods: Among 3240 patients who underwent laparoscopic bariatric surgery at a single academic center from January 2006 through June 2012, extremely high-risk patients were identified using the following criteria: age ≥ 65 years, body mass index (BMI) ≥ 50 kg/m(2), and presence of at least two of six cardiopulmonary comorbidities, including hypertension, ischemic heart disease, congestive heart failure, chronic obstructive pulmonary disease, obstructive sleep apnea, and history of venous thromboembolism. Perioperative and intermediate-term outcomes were assessed. Results: Forty-four extremely high-risk patients underwent laparoscopic Roux-en-Y gastric bypass (n = 23), adjustable gastric banding (n = 11), or sleeve gastrectomy (n = 10). Patients had a mean age of 67.9 ± 2.7 years, a mean BMI of 54.8 ± 5.5 kg/m(2), and a median of two (range, two to five) cardiopulmonary comorbidities. There was no conversion to laparotomy. Thirteen (29.5%) 30-day postoperative complications occurred; only six were major complications. Thirty-day postoperative re-admission, re-operation, and mortality rates were 15.9%, 2.3%, and 0%, respectively. Within a mean follow-up time of 24.0 ± 18.4 months, late morbidity and mortality rates were 18.2% and 2.3%, respectively. The mean percentage total weight and excess weight losses after at least 1 year of follow-up were 26.7 ± 12.0% and 44.1 ± 20.6%, respectively. Conclusions: Laparoscopic bariatric surgery is safe and can be performed with acceptable perioperative outcomes in extremely high-risk patients. Advanced age, BMI, and severe cardiopulmonary comorbidities should not exclude patients from consideration for bariatric surgery. abstract_id: PUBMED:25547057 The role of bariatric surgery in morbidly obese patients with inflammatory bowel disease. Background: Bariatric surgery is considered as being contraindicated for morbidly obese patients who also have inflammatory bowel disease (IBD). The aim of our study was to report the outcomes of bariatric surgery in morbidly obese IBD patients. Methods: The prospectively collected data of all the patients diagnosed as having IBD who underwent bariatric operations in 2 medical centers between October 2006 and January 2014 were retrieved and analyzed. Results: One male and 9 female morbidly obese IBD patients (8 with Crohn's disease and 2 with ulcerative colitis) underwent bariatric surgery. Their mean age was 40 years, and their mean body mass index was 42.6 kg/m2. Nine of them underwent a laparoscopic sleeve gastrectomy and 1 underwent a laparoscopic adjustable gastric band. Eight patients had obesity-related co-morbidities, including type 2 diabetes, hypertension, sleep apnea, osteoarthropathy, etc. After a median follow-up of 46 months (range 9-67), all of the patients lost weight, with an excess weight loss of 71%, and 10 out of 16 obesity-related co-morbidities were resolved. There was 1 complication not related to IBD, and no IBD exacerbation. Conclusion: Bariatric surgery was safe and effective in our morbidly obese IBD patients. 
The surgical outcome in this selected patient group was similar to that of comparable non-IBD patients. abstract_id: PUBMED:32317872 Characteristics of morbid obese patients with high-risk cardiac disease undergoing laparoscopic sleeve gastrectomy surgery. Introduction: Bariatric surgery is an efficient and safe method of weight reduction among patients who have morbid obesity which cannot be treated by the conservative approach. Safety and feasibility of bariatric surgery among high-risk patients are understudied. Therefore, we aimed to report the patient-level characteristics and outcome among high-risk obese patients undergoing laparoscopic sleeve gastrectomy surgery in Saudi Arabia. Methods: A retrospective analysis was performed among 13 morbidly obese (BMI >39 kg/m2) patients with high-risk cardiac disease, who were referred to Upper Gastro-Intestinal Surgery Clinic at King Khalid University Hospital, which is a center of excellence in bariatric surgery, for consideration for weight loss surgery. Retrospective data on preoperative weight, height, and BMI, operative details, perioperative complications, length of stay, and information on comorbidities and endocrinal disease were collected for analysis and reporting. Results: A total of 13 patients were included in the analysis. Of the total, 61.5% were males with a mean age 40.38 (SD: 16.28) and a mean BMI 51.87 (SD: 7.69). The mean duration of surgery was 33.30 min (SD: 31.01), while the mean duration of anesthesia was 83.61 min (SD: 34.73). The mean length of stay was 6.76 days (SD: 3.89). Three patients required postoperative HDU admission with a mean length of stay of 1 day, while 5 patients required postoperative ICU admission with a length of stay ranging from 1 to 3 days. Within 30 days after discharge, only 1 patient required an ER visit and none of the patients reported any postoperative morbidity and mortality. Conclusion: Through this study, we can conclude that laparoscopic sleeve gastrectomy surgery can be considered a safe procedure. However, further studies with a large sample size and a more robust methodology are needed to build upon the findings of this study. abstract_id: PUBMED:25282193 Can bariatric surgery improve cardiovascular risk factors in the metabolically healthy but morbidly obese patient? Background: Bariatric surgery has been shown to be effective in resolving co-morbid conditions even in patients with a body mass index (BMI) <35 kg/m(2). A question arises regarding the metabolic benefits of bariatric surgery in metabolically healthy but morbidly obese (MHMO) patients, characterized by a low cardiometabolic risk. The objective of this study was to assess the effects of bariatric surgery on cardiometabolic risk factors among MHMO and metabolically unhealthy morbidly obese (MUMO) adults. Methods: A nonrandomized, prospective cohort study was conducted on 222 severely obese patients (BMI >40 kg/m(2)) undergoing either laparoscopic roux-en-Y gastric bypass or laparoscopic sleeve gastrectomy. Patients were classified as MHMO if only 1 or no cardiometabolic factors were present: high blood pressure, triglycerides, blood glucose (or use of medication for any of these conditions), decreased high-density lipoprotein-cholesterol (HDL-C) levels, and insulin resistance defined as homeostasis model assessment for insulin-resistance (HOMA-IR) > 3.29. Results: Forty-two (18.9%) patients fulfilled the criteria for MHMO. They were younger and more frequently female than MUMO patients.
No differences between groups were observed for weight, BMI, waist and hip circumference, total and LDL-C. MHMO patients showed a significant decrease in blood pressure, plasma glucose, HOMA-IR, total cholesterol, LDL-C and triglycerides and an increase in HDL-C 1 year after bariatric surgery. Weight loss 1 year after bariatric surgery was similar in both groups. Conclusion: Eighteen percent of patients with morbid obesity fulfilled the criteria for MHMO. Although cardiovascular risk factors in these patients were within normal range, an improvement in all these factors was observed 1 year after bariatric surgery. Thus, from a metabolic point of view, MHMO patients benefited from bariatric surgery. abstract_id: PUBMED:27834081 Current Status of Bariatric and Metabolic Surgery in Korea. Bariatric surgery is considered to be the most effective treatment modality in maintaining long-term weight reduction and improving obesity-related conditions in morbidly obese patients. In Korea, surgery for morbid obesity, laparoscopic sleeve gastrectomy, was first performed in 2003. Since 2003, the annual number of bariatric surgeries has markedly increased, including adjustable gastric banding (AGB), Roux-en-Y gastric bypass, sleeve gastrectomy, mini-gastric bypass, and others. In Korea, AGB is much more common than in other countries. A large proportion of doctors, the public, and government misunderstand the necessity and effectiveness of bariatric surgery, believing that bariatric surgery has an unacceptably high morbidity, and that it is not superior to non-surgical treatments to improve obesity and obesity-related diseases. The effectiveness, safety, and cost-effectiveness of bariatric surgery have been well demonstrated. The Korean Society of Metabolic and Bariatric Surgery recommends confining bariatric surgery to morbidly obese patients (body mass index ≥40 or >35 in the presence of significant comorbidities). abstract_id: PUBMED:21608276 Perioperative anesthetic management of 300 morbidly obese patients undergoing laparoscopic bariatric surgery and a brief review of relevant pathophysiology. Objectives: Laparoscopic bariatric surgery is a challenge for anesthesiologists because morbidly obese patients are at high risk and laparoscopy may complicate respiratory and hemodynamic management. The aim of this study was to analyze the perioperative anesthetic management of morbidly obese patents undergoing laparoscopic bariatric surgery. Material And Methods: Prospective study of 300 consecutive patients diagnosed with morbid obesity and scheduled for laparoscopic bariatric surgery. Patients were positioned with a wedge cushion under the head and shoulders. A rapid sequence induction of anesthesia was carried out. A short-handled, articulated-blade McCoy laryngoscope was used for intubation; an intubation laryngeal mask airway (Fastrach) was on hand as a rescue device. Propofol and remifentanil were used for maintenance of anesthesia and morphine was administered at the end of surgery. Incentive spirometry was initiated in the postanesthetic recovery unit. Results: Eighty percent of the patients were women with a mean (SD) body mass index (kg/m2) of 46 (5). The first choice of direct laryngoscopic intubation was successful in 98.6% of cases. All patients were successfully intubated. Only 5 patients required intensive care. Postoperative complications (mainly respiratory problems, bleeding, and infections) were observed in 17%. No patient died.
Conclusions: Perianesthetic management of morbidly obese patients who undergo laparoscopic surgery is safe. To minimize pulmonary complications, preoxygenation and rapid sequence induction should be performed correctly and incentive spirometry should be initiated in the immediate postoperative period. The McCoy laryngoscope ensures intubation in most cases. abstract_id: PUBMED:27639986 A percutaneous technique of liver retraction in laparoscopic bariatric & upper abdominal surgery. Background: Laparoscopic bariatric surgery requires retraction of the left lobe of the liver to provide adequate exposure of the hiatus and the stomach. Currently used approaches utilize retractors that require additional incisions and prolong operative time. Objectives: A retrospective evaluation of the efficacy and safety of a percutaneous liver retractor in a large series of patients undergoing laparoscopic bariatric surgery. Setting: Private practice, United States. Methods: A retrospective chart review was performed on 2601 patients undergoing bariatric surgery from January 2011 to September 2015. A percutaneously introduced grasper (Teleflex MiniLap Percutaneous Surgical System, Morrisville, NC) was used to retract the left lobe of the liver in all cases. The retractor could be repositioned as necessary by releasing and regrasping the diaphragm at different locations. Results: This technique was used in 2601 patients from January 2011 until September 2015. The average body mass index was 43.1 (range: 20.6-80.3). In all patients, this new method was found to be satisfactory to complete the bariatric procedure. The majority of procedures included laparoscopic Roux-en-Y gastric bypass, sleeve gastrectomy, and gastric band placement. No intraoperative liver injuries occurred with use of the Teleflex retractor. Conclusion: Percutaneous retraction of the liver using the Teleflex MiniLap Percutaneous Surgical System was found to be safe and effective in this large series of morbidly obese patients. The rate of complications involving this technique is extremely low. This novel method provides safe and effective retraction with less trauma and better cosmesis than conventional technique. abstract_id: PUBMED:21348922 Cardiovascular benefits of bariatric surgery in morbidly obese patients. Morbid obesity is associated with increased morbidity and represents a major healthcare problem with increasing incidence worldwide. Bariatric surgery is considered an effective option for the management of morbid obesity. We searched MEDLINE, Current Contents and the Cochrane Library for papers published on bariatric surgery in English from 1 January 1990 to 20 July 2010. We also manually checked the references of retrieved articles for any pertinent material. Bariatric surgery results in resolution of major comorbidities including type 2 diabetes mellitus, hypertension, dyslipidemia, metabolic syndrome, non-alcoholic fatty liver disease, nephropathy, left ventricular hypertrophy and obstructive sleep apnea in the majority of morbidly obese patients. Through these effects and possibly other independent mechanisms bariatric surgery appears to reduce cardiovascular morbidity and mortality. Laparoscopic Roux-en-Y gastric bypass (LRYGB) appears to be more effective than laparoscopic adjustable gastric banding (LAGB) in terms of weight loss and resolution of comorbidities. Operation-associated mortality rates after bariatric surgery are low and LAGB is safer than LRYGB.
In morbidly obese patients bariatric surgery is safe and appears to reduce cardiovascular morbidity and mortality. abstract_id: PUBMED:24462310 Bariatric surgery: a safe and effective conduit to cardiac transplantation. Background: Obesity and obesity-related co-morbidities, including advanced heart failure, are epidemic. Some of these patients will progress to require cardiac allografts as the only means of long-term survival. Unfortunately, without adequate weight loss, they may never be deemed acceptable transplant candidates. Often surgical weight loss may be the only effective and durable option for these complex patients. The objective of this study was to assess whether bariatric surgery is feasible and safe in patients with severe heart failure, which in turn, after adequate weight loss, would allow these patients to be listed for a heart transplant. Methods: Four patients who underwent bariatric procedures, such as laparoscopic Roux-en-Y gastric bypass (LRYGB) and laparoscopic sleeve gastrectomy (SG), for the purpose of attaining adequate weight loss with the goal to improve their eligibility for orthotopic heart transplants are presented. Results: All patients did well around the time of surgery, and 3 of the 4 progressed to receiving a heart transplant. The fourth patient will be listed pending attaining adequate weight loss. Conclusion: Bariatric surgery may be an important bridge to transplantation for morbidly obese patients with severe heart failure. With the appropriate infrastructure, bariatric surgery is a feasible and effective weight loss method in this population. abstract_id: PUBMED:35111452 Comparative Effectiveness of Laparoscopic Sleeve Gastrectomy in Morbidly Obese and Super Obese Patients. Background Laparoscopic sleeve gastrectomy (LSG) is a modified procedure derived from a biliopancreatic diversion (BPD)-duodenal switch. The present study evaluated the role of LSG in morbidly and super obese patients and compare its efficacy between the two groups. Methodology A retrospective review was conducted in Dr. Sulaiman Al Habib Specialist Hospital, Riyadh, KSA, from January 2020 to April 2021. Patients' records were divided into two groups, morbidly obese (body mass index (BMI): 40-49 kg/m2) and super obese (BMI: 50-59 kg/m2), who were admitted to the department for laparoscopic sleeve gastrectomy during the study duration. However, patients with a history of gut surgery, hernias, comorbid use of illicit substances, and psychiatric disorders were excluded. For all patients, a routine preoperative investigation protocol was conducted. Postoperative surgical complications were also recorded. The Clavien-Dindo classification (CDC) score was applied to record surgical complications. Data collection was done using a semi-structured questionnaire. The Statistical Package for Social Sciences (SPSS) version 26 (IBM, Chicago, USA) was used to perform data analysis. Results A total of 176 patient records were included in this analysis, of which 126 (71.6%) were females. There were 101 (57.1%) patients who were morbidly obese and 76 (42.9%) who were super obese. The mean duration of follow-up records in this study was 23.2 ± 3.6 weeks, which was slightly longer in the morbidly obese group. Change in BMI was higher in the super obese patients (18.6 ± 3.1 versus 10.5 ± 1.9). Final body weight was still lower in the morbidly obese group as they were relatively slimmer even before the procedure. A higher reduction in excess weight loss (EWL) is seen in the morbidly obese group. 
Comorbidity resolution status was also remarkable with the procedure. Overall, there were procedure-associated complications in 11 (10.9%) patients in the morbidly obese group and 10 (13.2%) in the super obese group. Conclusion Laparoscopic sleeve gastrectomy is a safe procedure in morbidly and super obese patients. It is effective in sustainable total and excess weight loss over time. It is also effective in comorbidity resolution. Complications with LSG are minimal and nonserious. LSG should be the recommended procedure in morbidly and super obese patients with adverse health consequences to improve their morbidity, mortality, and overall quality of life. Answer: Yes, laparoscopic bariatric surgery is considered a safe option in extremely high-risk morbidly obese patients. A study evaluating the outcomes following bariatric surgery in extremely high-risk patients, defined by advanced age, superobesity, and the presence of multiple cardiopulmonary comorbidities, found that laparoscopic bariatric surgery can be performed with acceptable perioperative outcomes in this patient population. The study reported no conversions to laparotomy, a low rate of major complications, and no mortality within 30 days postoperatively. Additionally, the study observed significant weight loss and improvement in comorbidities over a mean follow-up time of 24 months (PUBMED:26301769). Furthermore, other studies have shown that bariatric surgery is safe and effective in morbidly obese patients with specific comorbidities such as inflammatory bowel disease (IBD) and high-risk cardiac disease, with minimal complications and no exacerbation of IBD or postoperative morbidity and mortality reported (PUBMED:25547057; PUBMED:32317872). Even in metabolically healthy but morbidly obese patients, bariatric surgery has been shown to improve cardiometabolic risk factors (PUBMED:25282193). The safety and effectiveness of bariatric surgery have been well demonstrated, and it is recommended for morbidly obese patients with a body mass index (BMI) ≥40 or >35 in the presence of significant comorbidities (PUBMED:27834081). Perioperative anesthetic management of these patients is also considered safe when proper techniques are used (PUBMED:21608276). Additionally, novel percutaneous techniques for liver retraction have been found to be safe and effective in laparoscopic bariatric surgery, further supporting the safety of these procedures (PUBMED:27639986). In summary, laparoscopic bariatric surgery is a safe and viable option for extremely high-risk morbidly obese patients, with studies showing low rates of complications and significant improvements in weight loss and comorbidities.
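Editorial note on the weight-loss metrics quoted in this record (e.g., 26.7% total weight loss and 44.1% excess weight loss in PUBMED:26301769, and the %EWL figures in PUBMED:25547057 and PUBMED:35111452): the Python sketch below shows the usual arithmetic. It assumes the common convention that excess weight is measured against an ideal weight at BMI 25 kg/m2, which the abstracts themselves do not spell out; the patient values are hypothetical.

# Illustrative only: %TWL and %EWL as used in the bariatric abstracts above.
# Assumption (not stated in the abstracts): "ideal" weight is the weight at
# BMI 25 kg/m2, a common convention for computing excess weight.
def weight_loss_metrics(initial_kg, current_kg, height_m):
    ideal_kg = 25.0 * height_m ** 2            # weight at BMI 25
    twl = 100.0 * (initial_kg - current_kg) / initial_kg
    ewl = 100.0 * (initial_kg - current_kg) / (initial_kg - ideal_kg)
    return twl, ewl

# Hypothetical patient: 150 kg at 1.70 m (BMI about 51.9), 110 kg at follow-up.
twl, ewl = weight_loss_metrics(150.0, 110.0, 1.70)
print(f"%TWL = {twl:.1f}, %EWL = {ewl:.1f}")   # %TWL = 26.7, %EWL = 51.4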
Instruction: 13C-urea breath test for the diagnosis of Helicobacter pylori infection: are basal samples necessary? Abstracts: abstract_id: PUBMED:10201464 Comparison of [13C]urea blood test to [13C]urea breath test for the diagnosis of Helicobacter pylori. Objective: It has been determined that the [13C]urea breath test (UBT) is a safe and effective way of detecting Helicobacter pylori (H. pylori) infection. Some individuals may have difficulty performing the exhalation component of the test, possibly due to age, or mental or physical compromise. Our aim was to determine if a commercially developed [13C]urea blood test could be utilized as a substitute for the UBT to detect H. pylori infection. Methods: Patients who were referred by their physicians for UBT were offered study inclusion. Patients underwent baseline and 30-min UBT. A simultaneous blood sample of 3 cc was drawn into a heparinized vacutainer at the 30-min period of the UBT. [13C]urea levels in both blood and breath samples were analyzed using isotope ratio mass spectrometry. UBT ≥ 6 delta per mil over baseline and urea blood tests > (-17 delta per mil) were considered positive. Results: One hundred sixty-one patients (68 men/93 women) with average age of 47.0 +/- 14.2 yr were tested. Agreement between breath and blood test results occurred in 153/161 (95%) cases. Using the UBT as the diagnostic standard, the urea blood test resulted in 44 true positive, 109 true negative, four false positive, and four false negative results, giving a sensitivity of 92%, specificity of 96%, positive predictive value of 92%, and negative predictive value of 96%. Conclusions: The urea blood test was found to be comparable to the urea breath test in the detection of H. pylori infection. The urea blood test will be accurate in the diagnosis of active H. pylori infection. abstract_id: PUBMED:10457031 The 13C urea breath test in the diagnosis of Helicobacter pylori infection. The urea breath test (UBT) is one of the most important non-invasive methods for detecting Helicobacter pylori infection. The test exploits the hydrolysis of orally administered urea by the enzyme urease, which H pylori produces in large quantities. Urea is hydrolysed to ammonia and carbon dioxide, which diffuses into the blood and is excreted by the lungs. Isotopically labelled CO2 can be detected in breath using various methods. Labelling urea with 13C is becoming increasingly popular because this non-radioactive isotope is innocuous and can be safely used in children and women of childbearing age. Breath samples can also be sent by post or courier to remote analysis centres. The test is easy to perform and can be repeated as often as required in the same patient. A meal must be given to increase the contact time between the tracer and the H pylori urease inside the stomach. The test has been simplified to the point that two breath samples collected before and 30 minutes after the ingestion of urea in a liquid form suffice to provide reliable diagnostic information. The cost of producing 13C-urea is high, but it may be possible to reduce the dosage further by administering it in capsule form. An isotope ratio mass spectrometer (IRMS) is generally used to measure 13C enrichment in breath samples, but this machine is expensive. In order to reduce this cost, new and cheaper equipment based on non-dispersive, isotope selective, infrared spectroscopy (NDIRS) and laser assisted ratio analysis (LARA) have recently been developed.
These are valid alternatives to IRMS although they cannot process the same large number of breath samples simultaneously. These promising advances will certainly promote the wider use of the 13C-UBT, which is especially useful for epidemiological studies in children and adults, for screening patients before endoscopy, and for assessing the efficacy of eradication regimens. abstract_id: PUBMED:7847290 [13C]urea breath test to confirm eradication of Helicobacter pylori. Objective: To determine the utility of the [13C]urea breath test in confirming the eradication of Helicobacter pylori. Methods: We reviewed our H. pylori database for patients who underwent [13C]urea breath test at baseline and 6 wk after triple therapy with tetracycline, metronidazole, and bismuth subsalicylate. Baseline infection was defined by the identification of the organism on antral biopsies or a reactive CLO test. Eradication was defined as a negative Warthin-Starry stain and a non-reactive CLO test at 24 h. All patients had a positive baseline [13C]urea breath test defined as [13C] enrichment > 6% at 60 min. Results: One hundred eighteen H. pylori-infected patients (mean age 58.3 +/- 13.9 yr) met the review criteria (61 duodenal ulcers, 24 gastric ulcers, 33 non-ulcer dyspepsia). In 101/118 patients (86%), H. pylori was successfully eradicated (mean baseline breath test value 25.8 +/- 1.6). Of 101 patients, 95 had a negative 6-wk follow-up breath test (mean 2.2 +/- 0.2, p < 0.001). Of the 6/101 patients in whom treatment was successful, and who remained breath test positive at 6 wk, 4/6 were breath test negative when retested at 3 months. The remaining two patients were lost to follow-up. In 17/118 (14%) patients, H. pylori failed to be eradicated (mean baseline breath test 22.4 +/- 3.6). Fifteen of 17 patients had a positive breath test at 6 wk (mean 19.9 +/- 3.7). Two of 17 with a negative breath test at 6 wk tested positive when the breath test was repeated at 3 months. The sensitivity and specificity of [13C]urea breath test at 6 wk posttreatment are 97% and 71%, respectively. The positive and negative predictive values are 94% and 88%, respectively. Conclusions: [13C]urea breath test is a sensitive indicator of H. pylori eradication 6 wk after treatment. Antral biopsies are unnecessary to confirm eradication of H. pylori after completion of treatment. abstract_id: PUBMED:9835321 Usefulness of the [13C]-urea breath test for detection of Helicobacter pylori infection in fasting patients. Most of the reported [13C]-urea breath test procedures use a test meal, which is believed to assist in the spread of the [13C]-urea solution into the entire stomach, as results without a test meal may mainly reflect urease activity in the antrum. Yet, procedures for the [13C]-urea breath test and interpretation of the obtained 13C excess value have not been well established. We carried out the present study to validate the usefulness of the [13C]-urea breath test in fasting subjects and to establish cut-off values. [13C]-Urea breath tests were performed on 258 Helicobacter pylori-positive and 151 -negative subjects (247 H. pylori positive and 26 negative prior to any H. pylori cure treatment and 125 H. pylori negative and 11 positive after undergoing H. pylori cure treatment).
The breath test procedure was performed under the following conditions: an 8 h fast, mouth washing before and after dosing, administration of 100 mg [13C]-urea, collection of breath sample in a plastic bag, a baseline and a 20 min sampling point and subject in a sitting position. Delta-13C at the 20 min sampling point in H. pylori-positive and -negative subjects was 31.0+/-1.25 and 1.6+/-0.11%, respectively. Although the mean delta13C value was greatest in duodenal ulcer or ulcer scar patients, there were no significant differences among mean delta13C values in the various diseases. From Receiver Operator Characteristic curves and calculation of accuracy of the test, a cut-off value of 5.0% is considered to be appropriate for diagnosis of H. pylori infection, which provides 96.7% specificity and 96.5% sensitivity, suggesting that the [13C]-urea breath test in the fasting state is as effective in detecting the presence of H. pylori as other reported methods. abstract_id: PUBMED:8677930 Noninvasive detection of Helicobacter pylori infection in clinical practice: the 13C urea breath test. Objectives: To validate the 13C urea breath test for the detection of Helicobacter pylori infection both before and after treatment. Methods: 13C urea breath tests with 125-mg and 250-mg doses were carried out on each of 60 infected and 60 noninfected subjects. Results were compared with histological examination of gastric biopsies to establish detection limits. The best cut-off point was used in a clinical trial of the efficacy of the breath test in duodenal ulcer patients before and after antimicrobial therapy. The incremental increase (percentage, delta over baseline in U of delta/mil) in respiratory 13CO2 abundance was associated with histological evidence of H. pylori. Outpatient, tertiary care medical center, and secondary and primary care facilities were included. One hundred twenty healthy asymptomatic subjects and 465 patients with duodenal ulcer disease were studied. The test kit assessed repeatability of breath sample collection and storage and stability of stored samples. Test performance was analyzed by comparison of 125-mg and 250-mg 13C urea with measurements at 30 and 40 min postdose. The test was used to diagnose active H. pylori infection and gauge success of antimicrobial therapy. Results: The test kit results were highly reproducible. The cut-off values were higher with 250-mg compared with 125-mg doses of 13C urea and 40 min compared with 30 min. Using a 125-mg 13C urea and test detection limit of 2.4% at 30 min, the accuracy was 94.8 (95% confidence interval = 92-97%) before antimicrobial therapy and 95.4% (95% confidence interval = 91-98%) after. An increase of 2.4% in the abundance of breath 13CO2 measured 30 min after a 125-mg dose of 13C urea reliably indicated the presence of active H. pylori infection either before or after antimicrobial therapy. The 13C urea breath test provides a simple, reliable, and noninvasive method of assessing H. pylori status. abstract_id: PUBMED:10610215 13C urea breath testing to diagnose Helicobacter pylori infection in children. The causal relationship between Helicobacter pylori colonization of the gastric mucosa and gastritis has been proven. Endoscopy and subsequent histological examination of antral biopsies have been regarded as the gold standard for diagnosing H pylori gastritis. The 13C urea breath test is a noninvasive test with a high specificity and sensitivity for H pylori colonization.
Increasingly, it is becoming an important tool for use in diagnosing H pylori infection in pediatric populations. This test is particularly well suited for epidemiological studies evaluating reinfection rates, spontaneous clearance of infection and eradication rates after therapy. However, few groups have validated the test in the pediatric age group. The testing protocol has not yet been standardized. Variables include fasting state, dose of urea labelled with 13C, delta cutoff level of 13C carbon dioxide, choice of test meal and timing of collection of expired breath samples. Further studies are urgently needed to evaluate critically the impact of H pylori infection in children. The 13C urea breath test should prove very useful in such prospective studies. abstract_id: PUBMED:37366973 Raman Spectroscopy for Urea Breath Test. The urea breath test is a non-invasive diagnostic method for Helicobacter pylori infections, which relies on the change in the proportion of 13CO2 in exhaled air. Nondispersive infrared sensors are commonly used for the urea breath test in laboratory equipment, but Raman spectroscopy has demonstrated potential for more accurate measurements. The accuracy of Helicobacter pylori detection via the urea breath test using 13CO2 as a biomarker is affected by measurement errors, including equipment error and δ13C measurement uncertainty. We present a Raman scattering-based gas analyzer capable of δ13C measurements in exhaled air. The technical details of the various measurement conditions have been discussed. Standard gas samples were measured. 12CO2 and 13CO2 calibration coefficients were determined. The Raman spectrum of the exhaled air was measured and the δ13C change (in the process of the urea breath test) was calculated. The total error measured was 6% and does not exceed the limit of 10% that was analytically calculated. abstract_id: PUBMED:10735540 Endoscopic [13C]-urea breath test for quantification of Helicobacter pylori infection. Background: We previously developed a new diagnostic method for Helicobacter pylori infection and called it the endoscopic [13C]-urea breath test (EUBT). Here we evaluate the relationship between the EUBT results and the histological findings. Methods: The EUBT was performed on 137 patients with gastroduodenal diseases. After the collection of a baseline breath sample, gastroduodenal endoscopy was performed. Twenty milliliters of 0.05% phenol red solution containing 100 mg of [13C]-urea was sprayed over the entire gastric mucosa under endoscopic observation. A breath sample was collected 15 min after spraying. The content of 13CO2 in the breath samples was measured by ratio mass spectrometry. Two biopsy specimens each from the antrum and the middle corpus were obtained for culture and histology. Helicobacter pylori colonization, activity, inflammation, atrophy and intestinal metaplasia were classified on a four-point scale according to the Updated Sydney System. Results: We found positive correlations between the EUBT values and the H. pylori colonization and activity score in the antrum and corpus, and negative correlations between the EUBT values and the atrophy and intestinal metaplasia scores in the corpus. Conclusions: The EUBT can be an indicator of the intragastric bacterial load and the histological findings for H. pylori. abstract_id: PUBMED:11111776 13C-urea breath test for the diagnosis of Helicobacter pylori infection: are basal samples necessary?
Aim: The 13C-urea breath test (13C-UBT) is one of the best methods for the diagnosis of Helicobacter pylori infection. Basal breath samples are usually obtained, in addition to those obtained after urea intake, as it has been suggested that basal values may oscillate among a population (e.g. depending on diet). However, the superiority of this strategy has not been sufficiently demonstrated. The elimination of basal samples in the 13C-UBT protocol would have the advantages of higher simplicity and speed. Methods: The 13C-UBT was performed in 714 consecutive patients. Mean age was 48 +/- 16 years, 49% were males, and in 48% of the patients previous H. pylori eradication therapy had been administered. Basal samples (13C-basal) and samples taken 30 min after ingestion of 100 mg of urea labelled with 13C (13C-post-urea) were obtained, delta over baseline (13C-DOB) being the algebraic difference between the ratio 13C/12C at these two points (which is the parameter usually given in studies, being considered positive when > 5%). A citric acid solution was used prior to urea intake. Results: The prevalence of H. pylori infection was 48%. Mean values of 13C-basal, 13C-post-urea, and 13C-DOB were, respectively, -19 +/- 2, 5.9 +/- 33, and 25 +/- 33. 13C-basal values oscillated between -25 and -14, being between -21 and -16 in 90% of the cases. The linear correlation coefficient for 13C-post-urea and 13C-DOB was 0.999 (determination coefficient, 0.998; P < 0.0001). The area under the receiver operating characteristic (ROC) curve for the diagnosis of infection when only 13C-post-urea was used (taking 13C-DOB >5% as the 'gold standard') was 0.997. Best results were obtained with the 13C-post-urea cut-off point set at -13, with sensitivity of 97.4% (95% CI, 95-99%), specificity of 99.5% (98-100%), and positive likelihood ratio of 180. The concordance kappa value for both tests (13C-post-urea and 13C-DOB) using the aforementioned cut-off point was 0.97. Conclusion: When performing the 13C-urea breath test it is sufficient to obtain samples 30 min after urea intake. Basal breath samples would not be necessary, which further simplifies this diagnostic method. abstract_id: PUBMED:28280129 Comparison of Accuracy Between 13C- and 14C-Urea Breath Testing: Is an Indeterminate-Results Category Still Needed? Helicobacter pylori infection is the leading cause of peptic ulcer disease. The purpose of this study was, first, to assess the difference in the distribution of negative versus positive results between the older 14C-urea breath test and the newer 13C-urea breath test and, second, to determine whether use of an indeterminate-results category is still meaningful and what type of results should trigger repeated testing. Methods: A retrospective survey was performed of all consecutive patients referred to our service for urea breath testing. We analyzed 562 patients who had undergone testing with 14C-urea and 454 patients who had undergone testing with 13C-urea. Results: In comparison with the wide distribution of negative 14C results, negative 13C results were distributed farther from the cutoff and were grouped more tightly around the mean negative value. Distribution analysis of the negative results for 13C testing, compared with those for 14C testing, revealed a statistically significant difference between the two. Within the 13C group, only 1 patient could have been classified as having indeterminate results using the same indeterminate zone as was used for the 14C group.
This is significantly less frequent than what was found for the 14C group. Discussion: Borderline-negative results do occur with 13C-urea breath testing, although less frequently than with 14C-urea breath testing, and we will be carefully monitoring differences falling between 3.0 and 3.5 %Δ. 13C-urea breath testing is safe and simple for the patient and, in most cases, provides clearer positive or negative results for the clinician. Answer: The necessity of basal samples in the 13C urea breath test (13C-UBT) for the diagnosis of Helicobacter pylori infection has been questioned. A study aimed to determine if the elimination of basal samples from the 13C-UBT protocol could simplify the process without compromising accuracy. The study involved 714 consecutive patients, in whom basal samples and samples taken 30 minutes after ingestion of 100 mg of 13C-labeled urea were collected. The results showed that the mean values of basal samples oscillated within a narrow range in most cases, and there was a high linear correlation coefficient between the post-urea samples and the delta over baseline (DOB) values. The study concluded that obtaining samples 30 minutes after urea intake was sufficient, and basal breath samples were not necessary, which simplifies this diagnostic method (PUBMED:11111776).
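For readers unfamiliar with the delta-over-baseline arithmetic used throughout this record, the following minimal Python sketch illustrates it. The cut-offs mirror those reported in PUBMED:11111776; the function names and example values are illustrative assumptions, not code or data from any of the studies.

def dob(delta_baseline, delta_post_urea):
    # Delta over baseline: algebraic difference of the 13C/12C delta values
    # measured before and 30 min after urea intake.
    return delta_post_urea - delta_baseline

def positive_by_dob(delta_baseline, delta_post_urea, cutoff=5.0):
    # Conventional criterion: DOB > 5 indicates H. pylori infection.
    return dob(delta_baseline, delta_post_urea) > cutoff

def positive_by_post_urea_only(delta_post_urea, cutoff=-13.0):
    # Basal-sample-free strategy evaluated in the study: classify on the
    # 30-min sample alone, using the -13 cut-off reported to give 97.4%
    # sensitivity and 99.5% specificity.
    return delta_post_urea > cutoff

# A typical basal value of -19 with a post-urea value of +6 gives DOB = 25,
# positive by both criteria.
print(dob(-19.0, 6.0))                  # 25.0
print(positive_by_post_urea_only(6.0))  # True

Because the basal values cluster tightly (90% fall between -21 and -16), the post-urea value alone carries nearly all of the diagnostic information, which is consistent with the reported kappa of 0.97 between the two classifications.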
Instruction: Sex-associated differences in non-small cell lung cancer in the new era: is gender an independent prognostic factor? Abstracts: abstract_id: PUBMED:38213660 Albumin-bilirubin grade is an independent prognostic factor for small cell lung cancer. Albumin-bilirubin (ALBI) grade was first described in 2015 as an indicator of liver dysfunction in patients with hepatocellular carcinoma. ALBI grade has been reported to have prognostic value in several malignancies including non-small cell lung cancer (NSCLC). The present study aimed to explore the prognostic impact of ALBI grade in patients with small cell lung cancer (SCLC). It retrospectively analyzed 135 patients with SCLC treated at Hebei General Hospital between April 2015 and August 2021. Patients were divided into two groups according to the cutoff point of ALBI grade determined by the receiver operating characteristic (ROC) curve: Group 1 with pre-treatment ALBI grade ≤-2.55 for an improved hepatic reserve and group 2 with ALBI grade >-2.55. Kaplan-Meier and Cox regression analysis were performed to assess the potential prognostic factors associated with progression free survival (PFS) and overall survival (OS). Propensity score matching (PSM) was applied to eliminate the influence of confounding factors. PFS and OS (P<0.001) were significantly improved in group 1 compared with group 2. Multivariate analysis revealed that sex (P=0.024), surgery (P=0.050), lactate dehydrogenase (LDH; P=0.038), chemotherapy (P=0.038) and ALBI grade (P=0.028) are independent risk factors for PFS and that surgery (P=0.013), LDH (P=0.039), chemotherapy (P=0.009) and ALBI grade (P=0.013) are independent risk factors for OS. After PSM, ALBI grade is an independent prognostic factor of PFS (P=0.039) and OS (P=0.007). It was concluded that ALBI grade was an independent prognostic factor in SCLC. abstract_id: PUBMED:19299032 Sex-associated differences in non-small cell lung cancer in the new era: is gender an independent prognostic factor? Background: Women with non-small cell lung cancer (NSCLC) appear to have better survival. This study aimed to evaluate sex differences in NSCLC in recent years. The true effect of gender on the overall survival was analyzed taking other prognostic factors into account. Methods: A cohort of consecutive NSCLC patients was prospectively enrolled from January 2002 to December 2005, and followed up until December 2006. They were clinically and pathologically staged and underwent homogenous treatment algorithms. Demographics, histology, and disease stage between sexes were compared. The clinical prognostic factors to be analyzed in addition to gender included stage, age, smoking history and histology. The overall survival of females and males within relevant subgroups defined by smoking history and histology was also compared. Results: Of the 738 patients, 695 were analyzed with a definite stage (94.2%; 315 females and 380 males), which was similar in both sexes. Females were younger (median age: 59.5 years vs. 65.0 years; P<0.001) and more likely to have adenocarcinoma (81% vs. 60.5%; P<0.001). Patients with earlier stage, younger patients, never-smokers and females had better overall survival in univariate analyses and no significant survival difference was noted between adenocarcinoma and squamous cell carcinoma.
Multivariate analyses demonstrated age, smoking history and gender to have hazard ratios of 1.46 (95% confidence interval, CI 1.21-1.76; P<0.001), 1.27 (95% CI 0.97-1.65; P=0.082), and 1.18 (95% CI 0.90-1.55; P=0.226), respectively. Subgroup analyses revealed that the survival of never-smoker males with adenocarcinoma was similar to that of females. Conclusions: There are sex-related differences in the clinico-pathologic characteristics and survival of NSCLC patients. The survival advantages of females could be attributed to the younger age and lower smoking prevalence. Never-smokers with adenocarcinoma should be given special attention regardless of sex as they imply better survival with different treatment outcomes. abstract_id: PUBMED:2166143 Sex-associated differences in presentation and survival in patients with lung cancer. A retrospective study of 478 men and 294 women with primary lung cancer was conducted to characterize sex-associated differences in their presentation and survival. At the time of diagnosis, women were younger than men (mean age, 57.4 +/- 10.4 v 60.2 +/- 9.9 years, respectively; P = .0007). Men were more likely to be current or previous smokers (94% v 84%; P < .005), and in patients with a positive smoking history, cigarette consumption was greater in men (52.2 v 40.2 pack years; P = .0001). The proportion of adenocarcinomas compared with squamous cancers was high in women (45% v 23%), while these cell types were equally represented in men. The majority of patients in both sex groups had regionally advanced or metastatic disease at diagnosis. Survival was related to age, stage at presentation and cell type. In addition, sex was found to be an independent prognostic factor for survival. Women with tumors of all cell types lived longer than their male counterparts (P < .0001), and survival by stage in patients with nonsmall-cell cancers was greater for women than it was for men. These data demonstrate that important sex-associated differences exist in presentation and survival from lung cancer. Such differences should be considered when planning and analyzing clinical trials. abstract_id: PUBMED:16640805 Sex differences in the predictive power of the molecular prognostic factor HER2/neu in patients with non-small-cell lung cancer. Background: Recent studies imply that HER2/neu is a potential prognostic factor in patients with non-small-cell lung cancer (NSCLC). Whereas considerable evidence indicates sex differences in epidemiologic, hormonal, biologic, and genetic factors in this disease, it has remained unknown whether HER2/neu has a diverse function as a prognostic factor in men and women. Patients And Methods: We investigated the association between gene expression levels of HER2/neu in the primary tumors of 90 patients with curable resected NSCLC and survival, especially analyzing whether there is a different potential of this molecular factor in its prognostic impact between men and women. Results: High HER2/neu gene expression levels were found in 62 patients (68.9%), and low HER2/neu gene expression levels were found in 28 patients (31.1%). High HER2/neu messenger RNA expression levels were associated with inferior survival (P = 0.09) compared with lower HER2/neu expression. Survival analysis was then carried out separately for men and women in this group of patients. An HER2/neu gene expression cutoff point was identified that separated women, but not men, into good and poor prognostic groups.
Conclusion: These findings suggest that HER2/neu as a prognostic factor is strongly sex-specific, indicating that it is not useful for men but highly predictive for women. abstract_id: PUBMED:32286017 Lung immune prognostic index as a prognostic factor in patients with small cell lung cancer. Background: The lung immune prognostic index (LIPI) is a marker that combines the derived neutrophil-to-lymphocyte ratio (dNLR) and serum lactate dehydrogenase (LDH) level and is a recently reported prognostic factor of immune checkpoint inhibitor therapy for non-small cell lung cancer (NSCLC). However, there are no reports regarding the prognostic value of LIPI in small cell lung cancer (SCLC). Methods: We retrospectively enrolled 171 patients diagnosed with SCLC and treated at Shinshu University School of Medicine between January 2003 and November 2019. Progression-free survival (PFS) and overall survival (OS) were compared according to LIPI, and we investigated whether LIPI could be a prognostic factor in SCLC using the Kaplan-Meier method and univariate and multivariate Cox models. Results: The median OS of the LIPI 0 group was significantly longer than that of the LIPI 1 plus 2 group (21.0 vs. 11.6 months, P < 0.001). The multivariate analysis associated with OS indicated that LIPI 1 plus 2 was an independent unfavorable prognostic factor in addition to poor performance status (2-3), old age (≥ 75 years) and stage (extensive disease [ED]). However, PFS of the LIPI 0 group was not significantly different from that of the LIPI 1 plus 2 group. In ED-SCLC patients, the median PFS and OS of the LIPI 0 group were significantly longer than those of the LIPI 2 group (6.6 vs. 4.0 months, P = 0.006 and 17.1 vs. 5.9 months, P < 0.001, respectively). Conclusions: We confirmed the prognostic value of LIPI in SCLC, especially ED-SCLC. Key Points: Significant findings of the study: The present study is the first to demonstrate that pretreatment lung immune prognostic index is an independent prognostic factor associated with overall survival for small cell lung cancer. What This Study Adds: The utility of the lung immune prognostic index as a prognostic factor for small cell lung cancer. abstract_id: PUBMED:30403900 FGFR genes mutation is an independent prognostic factor and associated with lymph node metastasis in squamous non-small cell lung cancer. Targeting FGFRs is one of the most promising therapeutic strategies in squamous non-small cell lung cancer (SQCC). However, different FGFR genomic aberrations can be associated with distinct biological characteristics that result in different clinical outcomes or therapeutic consequences. Currently, the full spectrum of FGFR gene aberrations and their clinical significance in SQCC have not been comprehensively studied. Here, we used next-generation sequencing to investigate the presence of FGFR gene mutations in 143 tumors from patients with stage I, II or III SQCC and who had not been treated with chemotherapy or radiotherapy prior to surgery. FGFR gene mutations were identified in 24 cases, resulting in an overall frequency of 16.9%. Among the mutations, 7% (10/143) were somatic mutations, and 9.8% (14/143) germline mutations. FGFR mutations were significantly associated with an increased risk of lymph node metastasis. SQCC patients with an FGFR somatic mutation had shorter OS (overall survival, log rank P = 0.005) and DFS (disease-free survival, log rank P = 0.004) compared with those without an FGFR mutation.
The multivariate analysis confirmed that a somatic mutation was an independent poor prognostic factor for OS (HR: 4.26, 95% CI: 1.49-12.16, P = 0.007) and DFS (HR: 3.16, 95% CI: 1.20-8.35, P = 0.020). Our data indicate that FGFR gene mutation is an independent prognostic factor and associated with lymph node metastasis in stage I to III Chinese SQCC patients. abstract_id: PUBMED:20506778 Sex as a prognostic factor for the patients with non-small cell lung cancer Unlabelled: Lung cancer is the leading cause of cancer mortality in men and has an increasing prevalence in women worldwide. There is growing evidence that sex is an important prognostic factor for patients with lung cancer. Material And Methods: We present 440 radically operated patients with non-small cell lung cancer between 1997 and 2004, age range: 23-82 years (378 male - 85.91% and 62 female - 14.09%). Results: The 5-year survival rate was 26.19% for the men (99 patients) and 51.61% for the female patients (32 patients). Conclusions: Sex is a significant prognostic factor for patients with non-small cell lung cancer. Female sex is a good prognostic factor for survival. abstract_id: PUBMED:31471631 Is the prognostic nutritional index a prognostic and predictive factor in metastatic non-small cell lung cancer patients treated with first-line chemotherapy? Purpose: We aimed to assess the prognostic and predictive significance of pretreatment Onodera's prognostic nutritional index (OPNI) in metastatic non-small cell lung cancer (NSCLC) patients treated with first-line chemotherapy. Materials And Methods: Patients with metastatic NSCLC who attended five different medical oncology clinics between December 2008 and January 2018 were retrospectively analyzed. The optimal cut-off point for OPNI was determined by a receiver operating characteristic (ROC) curve analysis. Patients were assigned to either the low OPNI group or high OPNI group. Results: A total of 333 patients were included in the study. Significant differences between the low and high OPNI groups were found regarding the rates of response to chemotherapy, sex, and hemoglobin level (p < 0.05). The patients in the high OPNI group had a longer overall survival (OS) (15.3 vs. 10.6 months, p < 0.001) and progression-free survival (PFS) (6.7 vs. 5.3 months, p < 0.001) compared to the patients in the low OPNI group. A multivariate analysis using a Cox regression model revealed that a high OPNI score was an independent prognostic factor of OS (HR = 1.535, p = 0.002) and PFS (HR = 1.336, p = 0.014), but failed to demonstrate a statistical significance of pretreatment OPNI scores in predicting treatment response (p = 0.56). Conclusions: Pretreatment OPNI is an independent prognostic factor for OS and PFS in metastatic NSCLC patients treated with first-line chemotherapy. Thus, it may be used as an easily calculated and low-cost prognostic tool in routine clinical practice in this patient group. abstract_id: PUBMED:35454799 Lymph but Not Blood Vessel Invasion Is Independent Prognostic in Lung Cancer Patients Treated by VATS-Lobectomy and Might Represent a Future Upstaging Factor for Early Stages. Lung cancer is the most frequent cause of cancer-related death worldwide. The patient’s outcome depends on tumor size, lymph node involvement and metastatic spread at the time of diagnosis. The prognostic value of lymph and blood vessel invasion, however, is still insufficiently investigated.
We retrospectively examined the invasion of lymph vessels and blood vessels separately as two possible prognostic factors in 160 patients who underwent a video-assisted thoracoscopic lobectomy for non-small-cell lung cancer at our institution between 2014 and 2019. Lymph vessel invasion was significantly associated with the UICC stage, lymph node involvement, tumor dedifferentiation, blood vessel invasion and recurrence. Blood vessel invasion tended to be negatively prognostic but missed the level of significance (p = 0.108). Lymph vessel invasion, on the other hand, proved to be a prognostic factor for both histological subtypes, adenocarcinoma (p < 0.001) as well as squamous cell carcinoma (p = 0.018). In multivariate analysis, apart from the UICC stage, only lymph vessel invasion remained independently prognostic (p = 0.018). Remarkably, we found analogous survival curve progressions when comparing stage I patients with lymph vessel invasion to stage II non-small-cell lung cancer patients. After further validation in prospective studies, lymph vessel invasion might be considered as an upstaging factor in resectable lung cancer. Especially in the early stage of the disease, it might represent an additional risk factor to consider adjuvant therapy after surgical resection. abstract_id: PUBMED:34804043 Single-Cell RNA Sequencing Reveals the Heterogeneity of Tumor-Associated Macrophage in Non-Small Cell Lung Cancer and Differences Between Sexes. Non-Small Cell Lung Cancer (NSCLC) is a disease with high morbidity and mortality, which has sex-related differences in prognosis and immunotherapy efficacy. However, the difference in the mechanisms remains unclear. Macrophages, characterized by high plasticity and heterogeneity, act as one of the key cells that exert anti-tumor effects in the tumor microenvironment (TME) and play a complicated role in the process of tumor progression. To elucidate the subtype composition and functional heterogeneity of tumor-associated macrophages (TAMs) in NSCLC and further compare the sex-mediated differences, we conducted a single-cell level analysis in early-stage smoking NSCLC patients, combined with ssGSEA analysis, pseudotime ordering, and SCENIC analysis. We found two universally presented immune-suppressive TAMs with different functional and metabolic characteristics in the TME of NSCLC. Specifically, CCL18+ macrophages exerted immune-suppressive effects by inhibiting the production of inflammatory factors and manifested high levels of fatty acid oxidative phosphorylation metabolism. Conversely, the main metabolic pathway for SPP1+ macrophages was glycolysis, which contributed to tumor metastasis by promoting angiogenesis and matrix remodeling. In terms of the differentially expressed genes, the complement gene C1QC and the matrix-remodeling-relevant genes FN1 and SPP1 were differentially expressed in the TAMs between sexes, of which the male-upregulated SPP1 showed potential as an ideal target for adjuvant immunotherapy and improving the efficacy of immunotherapy. According to the early-stage TCGA-NSCLC cohort, high expression of the above three genes in immune cells was associated with poor prognosis, and each acted as an independent prognostic factor.
Moreover, through verification at the transcription factor, transcriptome, and protein levels, we found that TAMs from women showed stronger immunogenicity with higher interferon-producing and antigen-presenting ability, while men-derived TAMs upregulated PPAR- and matrix-remodeling-related pathways and were thus more inclined to be immunosuppressive. Deconstruction of the TAMs at the single-cell level deepens our understanding of the mechanisms of tumor occurrence and progression, which could help achieve precise sex-specific tumor treatment sooner. Answer: The question of whether sex is an independent prognostic factor in non-small cell lung cancer (NSCLC) has been explored in several studies. According to the abstract from PUBMED:19299032, there are sex-related differences in the clinicopathologic characteristics and survival of NSCLC patients. The study found that women with NSCLC had better overall survival compared to men, which could be attributed to younger age and lower smoking prevalence among women. However, in multivariate analyses, gender did not emerge as an independent prognostic factor, with a hazard ratio of 1.18 (95% CI 0.90-1.55; P=0.226). Subgroup analyses revealed that never-smoker males with adenocarcinoma had similar survival to females, suggesting that the survival advantage of females may be related to factors such as smoking history rather than sex itself. Another study, PUBMED:2166143, found that sex was an independent prognostic factor for survival, with women living longer than men across all cell types of lung cancer. This study suggests that sex-associated differences should be considered when planning and analyzing clinical trials. PUBMED:16640805 explored the predictive power of the molecular prognostic factor HER2/neu and found that it was strongly sex-specific, being highly predictive for women but not useful for men. This indicates that there may be sex-specific molecular differences that could influence prognosis. PUBMED:20506778 presented data showing that sex is a significant prognostic factor for patients with NSCLC, with female sex being a good prognostic factor for survival. In summary, while some studies have found that sex can be an independent prognostic factor in NSCLC, with women generally having better survival outcomes, other studies suggest that the observed survival differences may be due to other factors correlated with sex, such as smoking history and age. The role of sex as an independent prognostic factor in NSCLC remains a subject of ongoing research, and it may interact with other clinical and molecular factors to influence patient outcomes.
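To make the multivariate reasoning above concrete (how an unadjusted survival advantage for one sex can shrink once age and smoking are accounted for), here is a hedged Python sketch using the lifelines library. The column names and toy data are illustrative assumptions, not the studies' actual code or data.

import pandas as pd
from lifelines import CoxPHFitter

# Toy survival data: follow-up in months, death indicator, and covariates.
df = pd.DataFrame({
    "months": [12, 30, 8, 24, 40, 16, 28, 10],
    "death":  [1, 0, 1, 1, 0, 1, 0, 1],
    "male":   [1, 0, 1, 0, 1, 0, 0, 1],
    "age":    [65, 59, 70, 58, 66, 61, 57, 68],
    "smoker": [1, 0, 0, 1, 1, 0, 0, 1],
})

# A small penalizer keeps the fit stable on tiny toy data.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()  # exp(coef) for "male" is the sex hazard ratio adjusted
                     # for age and smoking, analogous to the reported HR 1.18

In a model like this, a univariate sex difference can lose significance when, as in PUBMED:19299032, the women in the cohort are younger and smoke less than the men.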
Instruction: Diabetic striatopathy-Does it exist in non-Asian subjects? Abstracts: abstract_id: PUBMED:27296589 Diabetic striatopathy-Does it exist in non-Asian subjects? Background: Diabetic striatopathy (DS) is a rare complication of diabetes mellitus (DM). The syndrome appears in patients with uncontrolled DM and is characterized by the abrupt onset of a movement disorder, mainly hemichorea, accompanied by specific findings on brain imaging. It is believed that DS is unique to the Asian population and affects mainly elderly women with uncontrolled DM. Methods: To define the existence and characteristics of DS in the Western population, we reviewed the medical records of all patients admitted to the Chaim Sheba Medical Center between 2004 and 2014 and identified those with documented elevated HbA1c (>10%). The charts and imaging studies of those with elevated HbA1c and undiagnosed neurological symptoms were reviewed to diagnose DS. Results: Out of 697 patients with HbA1c >10%, 328 patients had an unknown neurological diagnosis. Among them, we identified 4 patients (3 women, mean age 73 years, mean HbA1c 14.8%) with hemichorea or choreoathetosis and brain imaging findings compatible with the diagnosis of DS. Only one out of the 4 patients was diagnosed with DS during hospitalization. All patients were treated with insulin with improvement of their symptoms during hospitalization. However, there was a recurrence in 2 of them and 1 died during the second episode. Conclusion: Diabetic striatopathy exists but is underdiagnosed in the Western population. It is important to increase awareness of this clinical syndrome in order to treat those patients properly. abstract_id: PUBMED:36381706 Delayed Presentation of Hemichorea in Diabetic Striatopathy. Diabetic striatopathy is a rare condition associated with poorly controlled diabetes that can present as hyperkinetic movements. A 70-year-old Asian female was newly diagnosed with type 2 diabetes mellitus complicated by diabetic ketoacidosis when she presented with lethargy and confusion. Computed tomography and magnetic resonance imaging of the brain performed for the patient showed incidental isolated radiological features of diabetic striatopathy, even though she did not have any hyperkinetic movements. After intensive glycemic control, the patient paradoxically developed a delayed presentation of hemichorea two weeks later. Pathological findings in diabetic striatopathy suggest the contributing role of vascular microangiopathy, similar to the changes seen in proliferative diabetic retinopathy. In order to avoid precipitating hyperkinetic movements, a less intensive diabetic control could be considered for asymptomatic patients with isolated radiological features of diabetic striatopathy. This is especially important in patients at higher risk of the condition. abstract_id: PUBMED:38347980 Characterization of Diabetic Striatopathy With Repeated Follow-Up Using Multiple Imaging Studies. Diabetic striatopathy is a rare condition with a prevalence of less than one in 100,000. Herein, we report a case of diabetic striatopathy exacerbated by hyperglycemia and hypoglycemia, with repeated follow-up with multiple imaging studies. This case suggested that putamen neuronal loss and dysfunction, gliosis, and ischemia are associated with diabetic striatopathy pathophysiology. In addition, striatal hyperintensity on T1-weighted MRI images was more pronounced after symptom remission when evaluated several times over a short period.
Therefore, clinicians should be aware that even if MRI findings are normal in the very early stages of the onset of diabetic striatopathy, repeating MRIs at intervals may reveal typical findings. abstract_id: PUBMED:38367007 Diabetic striatopathy: a case report. Diabetic striatopathy, a rare condition also known as hyperglycemic nonketotic hemichorea, is characterized by chorea or hemiballismus and distinctive basal ganglia abnormalities visible on neuroimaging. We present the case of an 86-year-old woman with diabetic striatopathy exhibiting hemichorea. She had a history of poorly controlled type 2 diabetes and presented with involuntary movements of her left limb along with facial expressions suggestive of chorea. Laboratory tests confirmed hyperglycemia, with an elevated hemoglobin A1c level. Neuroimaging revealed T1-hyperintensity in the right basal ganglia. The patient was diagnosed with diabetic striatopathy and responded well to intensive insulin therapy with a rapid resolution of symptoms. abstract_id: PUBMED:28202297 Delayed onset diabetic striatopathy: Hemichorea-hemiballism one month after a hyperglycemic episode. Diabetic striatopathy is an uncommon and life-threatening manifestation of diabetes mellitus. It has a tendency to occur in the elderly, in females, and in people of Asian descent. Patients usually present with hemichorea-hemiballism caused by non-ketotic hyperglycemia. However, patients could develop diabetic striatopathy weeks after the hyperglycemic event, even when blood sugar has been well controlled. Herein, we report a case of delayed onset diabetic striatopathy and discuss the importance of detailed history and brain magnetic resonance imaging for making a prompt and accurate diagnosis. abstract_id: PUBMED:28431304 "Diabetic striatopathy" and ketoacidosis: Report of two cases and review of literature. "Diabetic striatopathy" is characterized by dyskinesias with basal ganglia hyperintensities on neuroimaging. It is usually reported in elderly females with a hyperglycemic hyperosmolar state and is rare in patients with diabetic ketoacidosis. Here, we report two young males with diabetic ketoacidosis presenting as striatopathy, along with a review of the literature. abstract_id: PUBMED:38187814 A rare neurological manifestation of diabetes mellitus-Hemichorea-hemiballismus in a patient with diabetic striatopathy: A case report. Diabetic striatopathy is a rare neurological complication of diabetes mellitus that presents with sudden onset hemichorea or hemiballismus and is associated with hyperglycemia and striatal abnormality, either by hyperdensity on non-contrast computed tomography or hyperintensity on T1-weighted magnetic resonance imaging. Here we report a 55-year-old female from Sri Lanka who presented with involuntary movements of the left upper and lower limbs. Her past medical history included diabetes mellitus and she was on warfarin 5 mg daily for a mechanical mitral and tricuspid valve replacement. The random blood sugar on admission was 462 mg/dL and the last INR was 3.03. While hemiballismus has multiple etiologies, intracranial hemorrhage would be the main differential in a patient on anticoagulation. Other differentials include drug-induced dyskinesia, metabolic abnormalities, and autoimmune etiologies. Hemiballismus in the presence of high blood glucose should always raise the suspicion of diabetic striatopathy.
The non-contrast computed tomography of the brain showed hyperdensity in the right-sided caudate nucleus, lentiform nucleus, and globus pallidus, which is characteristic of diabetic striatopathy but could have been mistaken for an intracranial hemorrhage. The involuntary movements improved with glucose control and treatment with clonazepam and tetrabenazine. This case highlights the potential for misdiagnosis of diabetic striatopathy as an intracranial hemorrhage in a patient on warfarin, which can lead to delays in appropriate management and erroneous omission of warfarin. Early recognition and treatment of diabetic striatopathy can lead to significant improvement in quality of life. abstract_id: PUBMED:29422853 Persistent Hemichorea and Caudate Atrophy in Untreated Diabetic Striatopathy: A Case Report. Background: Neurological complications of diabetes and hyperglycemia are relatively common but the specific manifestations can vary widely. Diabetic striatal disease or "diabetic striatopathy" is an uncommon condition usually thought to result from hyperglycemic injury to the basal ganglia, producing a hyperkinetic movement disorder, usually choreiform in nature. Symptoms are generally reversible with treatment of the hyperglycemia. Case Description: We report the case of a 57-year-old woman presenting with a unilateral choreoathetosis of the left upper extremity, persistent for 4 years. Contemporaneous imaging demonstrated severe atrophy of the right caudate nucleus, while imaging obtained at the onset of symptoms was consistent with a right diabetic striatopathy. Symptoms improved with the use of dopamine antagonists and benzodiazepines. Conclusion: Although generally considered to be fully reversible, this case demonstrates that diabetic striatopathy can result in permanent structural lesions with persistent symptoms if left untreated. abstract_id: PUBMED:35922720 Prevalence of diabetic striatopathy and predictive role of glycated hemoglobin level. Background: Diabetic striatopathy is defined as a state of hyperglycemia associated with chorea/ballism, striatal hyperdensity at CT, or hyperintensity at T1-weighted MRI. It is considered a rare complication of uncontrolled diabetes but prevalence data are scarce. Objectives: To characterize diabetic striatopathy prevalence in the population afferent to the largest teaching hospital in Genova (Liguria, Italy) and investigate the role of glycated hemoglobin level in predicting the risk. Methods: Data were retrospectively obtained from the general population undergoing blood sampling for glycated hemoglobin whose resulting HbA1c values were ≥ 8%, from January 2014 to June 2017. Brain neuroimaging of those who underwent at least a brain CT or MRI was examined in search of findings compatible with diabetic striatopathy and clinical information was collected. Logistic regression was used to predict the risk of diabetic striatopathy based on age and HbA1c values. Results: There were 4603 subjects with uncontrolled diabetes. Brain neuroimaging was available in 1806 subjects and three patients with diabetic striatopathy were identified, all of them reporting choreic movements. The prevalence of hemichorea due to diabetic striatopathy was therefore 3 cases out of 1806 (0.16%) in our population. Hepatic and hypoxic encephalopathies were the conditions most frequently mimicking diabetic striatopathy. The odds of diabetic striatopathy were significantly correlated with the HbA1c level (p = 0.0009).
Conclusions: To the best of our knowledge, this study is the first to evaluate the prevalence of diabetic striatopathy in Italy. High HbA1c values may have a role in predicting diabetic striatopathy. abstract_id: PUBMED:37153487 Diabetic striatopathy unusual presentation with ischemic stroke-A case report and literature review. Diabetic striatopathy (DS) is a rare condition presenting as a combination of hemichorea or hemiballismus with striatal computed tomography (CT) or magnetic resonance imaging (MRI) changes in the presence of a severely hyperglycemic state. Varying severity and manifestations of DS were reported throughout the literature. However, the exact pathogenesis and mechanism remain unclear. In this case report, we present an unusual case of DS combined with acute ischemic stroke. A 74-year-old male presented with the main complaint of acute left arm and leg weakness, and was found to have significantly high blood sugar and evidence of DS in combination with stroke on CT and MRI. Eventually, he was labeled as a case of combined DS and ischemic stroke. Answer: Yes, diabetic striatopathy does exist in non-Asian subjects. A study conducted at the Chaim Sheba Medical Center reviewed medical records of patients admitted between 2004 and 2014 with elevated HbA1c levels (>10%) and identified four patients (three women, mean age 73) with hemichorea or choreoathetosis and brain imaging findings compatible with the diagnosis of diabetic striatopathy. This study concluded that diabetic striatopathy exists but is underdiagnosed in the Western population, indicating that it is not unique to the Asian population (PUBMED:27296589).
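The prevalence and risk-model arithmetic in PUBMED:35922720 can be made concrete with a short, hedged Python sketch. The simulated cohort and coefficients below are assumptions for illustration; only the modelling approach (logistic regression of striatopathy risk on age and HbA1c) mirrors the study.

import numpy as np
import statsmodels.api as sm

# Prevalence as a simple proportion: 3 cases among 1806 imaged subjects.
cases, imaged = 3, 1806
print(round(100 * cases / imaged, 2), "%")  # ~0.17%, reported as 0.16%

# Synthetic cohort restricted to HbA1c >= 8%, as in the study population.
rng = np.random.default_rng(0)
hba1c = rng.uniform(8, 16, 500)
age = rng.uniform(40, 90, 500)
true_logit = -12 + 0.6 * hba1c + 0.02 * age  # assumed effects, for simulation only
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([hba1c, age]))
fit = sm.Logit(y, X).fit(disp=False)
print(np.exp(fit.params[1]))  # fitted odds ratio per 1-point rise in HbA1c

A significantly positive HbA1c coefficient in such a model is what underlies the paper's claim that high HbA1c values may help predict diabetic striatopathy.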
Instruction: Are investigations anxiolytic or anxiogenic? Abstracts: abstract_id: PUBMED:33008420 Cannabis, a cause for anxiety? A critical appraisal of the anxiogenic and anxiolytic properties. Background: Cannabis has been documented for use in alleviating anxiety. However, certain research has also shown that it can produce feelings of anxiety, panic, paranoia and psychosis. In humans, Δ9-tetrahydrocannabinol (THC) has been associated with an anxiogenic response, while anxiolytic activity has been attributed mainly to cannabidiol (CBD). In animal studies, the effects of THC are highly dose-dependent, and biphasic effects of cannabinoids on anxiety-related responses have been extensively documented. A more precise assessment is required of both the anxiolytic and anxiogenic potentials of phytocannabinoids, with an aim towards the development of the 'holy grail' in cannabis research, a medicinally active formulation which may assist in the treatment of anxiety or mood disorders without eliciting any anxiogenic effects. Objectives: To systematically review studies assessing cannabinoid interventions (e.g. THC or CBD or whole cannabis interventions) both in animals and humans, as well as recent epidemiological studies reporting on anxiolytic or anxiogenic effects from cannabis consumption. Method: The articles selected for this review were identified up to January 2020 through searches in the electronic databases OVID MEDLINE, Cochrane Central Register of Controlled Trials, PubMed, and PsycINFO. Results: Acute doses of CBD were found to reduce anxiety both in animals and humans, without having an anxiogenic effect at higher doses. Epidemiological studies tend to support an anxiolytic effect from the consumption of either CBD or THC, as well as whole plant cannabis. Conversely, the available human clinical studies demonstrate a common anxiogenic response to THC (especially at higher doses). Conclusion: Based on current data, cannabinoid therapies (containing primarily CBD) may provide a more suitable treatment for people with pre-existing anxiety or serve in a potential adjunctive role in managing anxiety or stress-related disorders. However, further research is needed to explore other cannabinoids and phytochemical constituents present in cannabis (e.g. terpenes) as anxiolytic interventions. Future clinical trials involving patients with anxiety disorders are warranted due to the small number of available human studies. abstract_id: PUBMED:2876714 Aversive and appetitive properties of anxiogenic and anxiolytic agents. The place-conditioning paradigm was used to assess the appetitive or aversive nature of anxiolytic and anxiogenic drugs. Rats were given a pre-conditioning preference test in which the time that they spent on each side of a two-compartment chamber was measured; each side was visually distinct. On two days they were confined to one side immediately after drug injection, and on alternate days they were confined to the other side after vehicle injection. The rats were then given a postconditioning preference test. The change in preference was used as the measure of the reinforcing properties of the drugs. The anxiolytic drugs diazepam, lorazepam, alprazolam, adinazolam, U-43,465 and tracazolate produced a clear preference for the drug-associated side, indicating the appetitive qualities of these drugs. Preference was less clear for the anxiolytics chlordiazepoxide and buspirone. This suggests that it is possible to dissociate the rewarding and anxiolytic properties of drugs.
All the anxiogenic drugs tested (CGS 8216, picrotoxin and yohimbine) produced conditioned aversion. abstract_id: PUBMED:16185722 Immediate-early gene expression in the central nucleus of the amygdala is not specific for anxiolytic or anxiogenic drugs. The lateral, basal, and central nuclei of the amygdala are part of a circuitry that instantiates many fear and anxious behaviors. One line of support indicates that immediate-early gene (IEG) expression (e.g., c-fos and egr-1 (zif268)) is increased in these nuclei following fear conditioning. Other research finds that anxiogenic drugs working through various mechanisms induce IEG expression in the central nucleus of the amygdala (CeA) suggesting that expression is a neural marker for fear and anxiety. However, several studies have also found that anxiolytic drugs induce IEG expression in the CeA. Expression of egr-1 in the CeA and lateral nucleus of the amygdala following administration of anxiolytic and anxiogenic benzodiazepine and serotonin agonists and antagonists was investigated. The first experiment determined behaviorally active anxiolytic and anxiogenic doses for two anxiogenic drugs (FG 7142 and mCPP) and two anxiolytic drugs (diazepam and buspirone). The effects of anxiogenic and anxiolytic doses of these drugs on egr-1 expression in the amygdala were then tested in a second experiment. All four drugs increased egr-1 in the CeA indicating that increased egr-1 mRNA expression in the CeA is not specific to anxiolytic or anxiogenic effects of the drugs. We suggest that IEG expression in the CeA may be due to activation of circuits that are associated with systemic physiological homeostasis perturbed by a number of drugs including anxiogenic and anxiolytic compounds. abstract_id: PUBMED:33991579 Unconventional anxiety pharmacology in zebrafish: Drugs beyond traditional anxiogenic and anxiolytic spectra. Anxiety is the most prevalent brain disorder and a common cause of human disability. Animal models are critical for understanding anxiety pathogenesis and its pharmacotherapy. The zebrafish (Danio rerio) is increasingly utilized as a powerful model organism in anxiety research and anxiolytic drug screening. High similarity between human, rodent and zebrafish molecular targets implies shared signaling pathways involved in anxiety pathogenesis. However, mounting evidence shows that zebrafish behavior can be modulated by drugs beyond conventional anxiolytics or anxiogenics. Furthermore, these effects may differ from human and/or rodent responses, as such 'unconventional' drugs may affect zebrafish behavior despite having no such profiles (or exerting opposite effects) in humans or rodents. Here, we discuss the effects of several putative unconventional anxiotropic drugs (aspirin, lysergic acid diethylamide (LSD), nicotine, naloxone and naltrexone) and their potential mechanisms of action in zebrafish. Emphasizing the growing utility of zebrafish models in CNS drug discovery, such unconventional anxiety pharmacology may provide important, evolutionarily relevant insights into complex regulation of anxiety in biological systems. Albeit seemingly complicating direct translation from zebrafish into clinical phenotypes, this knowledge may instead foster the development of novel CNS drugs, eventually facilitating innovative treatment of patients based on novel 'unconventional' targets identified in fish models. abstract_id: PUBMED:37299 Corticosterone -- an anxiogenic or an anxiolytic agent? 
Corticosterone (3-12 mg/kg, i.p., giving rise to plasma corticosterone concentrations from 26.7 to 89.0 micrograms/100 ml) failed to have a significant anxiogenic action. Instead, corticosterone (3 mg/kg) had a significant anxiolytic effect in the social interaction test of anxiety. Adrenalectomized rats had very low levels of social interaction, but adrenalectomized rats that had been given replacement corticosterone therapy did not differ from the sham-operated controls. Thus, corticosterone appears to have the opposite effect to that previously reported for ACTH. Possible mechanisms for the observed results are discussed. abstract_id: PUBMED:7911574 An animal model for measuring behavioral responses to anxiogenic and anxiolytic manipulations. A method for measuring behavioral responses of rats to both anxiolytic and anxiogenic manipulations, the open field drink test (OFDT), is described. This method utilizes the concept that in the open field, appetitive behavior is reduced because of the ambient level of fear experienced in such an environment. For the OFDT, rats were given restricted access to water for 1 h per day for 3 days, and then their behavior was assessed in an open field that contained a water spout at its center. Use of the open field permitted a number of measures to be taken; of these, "time spent drinking" was most sensitive in detecting differences. Three experiments showed that the OFDT: a) permitted dissociation between behavioral responses to an anxiolytic (diazepam) and an anxiogenic (FG7142) drug, b) detected a dose-response relationship for an anxiolytic drug (diazepam), and c) detected behavioral responses to environmental manipulations designed to increase fear (presence of an olfactory cue from rats that had received foot shock). Advantages of this test over previously described methods are outlined, and several guidelines are provided to aid investigators in using this behavioral test. abstract_id: PUBMED:27436722 Nucleus incertus contributes to an anxiogenic effect of buspirone in rats: Involvement of 5-HT1A receptors. The nucleus incertus (NI), a brainstem structure with diverse anatomical connections, is implicated in anxiety, arousal, hippocampal theta modulation, and stress responses. It expresses a variety of neurotransmitters, neuropeptides and receptors such as 5-HT1A, D2 and CRF1 receptors. We hypothesized that the NI may play a role in the neuropharmacology of buspirone, a clinical anxiolytic which is a 5-HT1A receptor partial agonist and a D2 receptor antagonist. Several preclinical studies have reported a biphasic anxiety-modulating effect of buspirone but the precise mechanism and structures underlying this effect are not well understood. The present study implicates the NI in the anxiogenic effects of a high dose of buspirone. Systemic buspirone (3 mg/kg) induced anxiogenic effects in elevated plus maze, light-dark box and open field exploration paradigms in rats and strongly activated the NI, as reflected by c-Fos expression. This anxiogenic effect was reproduced by direct infusion of buspirone (5 μg) into the NI, but was abolished in NI-CRF-saporin-lesioned rats, indicating that the NI is present in neural circuits driving anxiogenic behaviour. Pharmacological studies with NAD 299, a selective 5-HT1A antagonist, or quinpirole, a D2/D3 agonist, were conducted to examine the receptor system in the NI involved in this anxiogenic effect.
Opposing the 5-HT1A agonism but not the D2 antagonism of buspirone in the NI attenuated the anxiogenic effects of systemic buspirone. In conclusion, 5-HT1A receptors in the NI contribute to the anxiogenic effect of an acute high dose of buspirone in rats and may be functionally relevant to physiological anxiety. abstract_id: PUBMED:33175328 Identification of Anxiolytic Potential of Niranthin: In-vivo and Computational Investigations. Anxiety is an unpleasant state which can critically decrease the quality of life and is often accompanied by nervous behaviour and rumination. Niranthin is a lignan isolated from various Phyllanthus sources. A literature survey on niranthin highlights a wide range of therapeutic potentials. In the present study, based on our previous investigations, we evaluated pure, isolated and characterized niranthin as an anxiolytic agent. The niranthin [6-[(2R,3R)-3-[(3,4-dimethoxyphenyl)methyl]-4-methoxy-2-(methoxymethyl)butyl]-4-methoxy-1,3-benzodioxole] was purchased from a commercial source and further subjected to assessment of its anxiolytic potential using popular animal models including the Elevated plus-maze model/test (EPM) and Light & Dark Exploration test (L&D). GABA-A receptor mediation was evaluated by pretreating the mice with the GABA-A receptor antagonist Flumazenil before the EPM task. Molecular docking simulation studies (pdb id: 4COF) carried out by Vlife QSAR software showed that niranthin (docking score: -62.1714 kcal/mol) had a docking score comparable to that of the standard drug diazepam (docking score: -63.1568 kcal/mol). To conclude, niranthin has probable potential in the management of anxiety disorder. Our in-silico and in-vivo analyses indirectly indicated a plausible role of GABA mediation in the anxiolytic activity. Although these studies are preliminary, future in-depth experimental exploration will be required before niranthin can be used as an anti-anxiety drug. abstract_id: PUBMED:2574443 A two-compartment exploratory model to study anxiolytic/anxiogenic effects of drugs in the rat. The response of a recently described light/dark choice novelty situation to anxiolytic and non-anxiolytic agents as well as to putative anxiogenic drugs was assessed in rats. Diazepam (1.0-10.0 mg/kg, i.p.), chlordiazepoxide (2.5-10.0 mg/kg, i.p.), and pentobarbital (pentobarbitone) (7.5-15.0 mg/kg, i.p.) enhanced rats' activity in the dark and brightly lit compartments as well as crossings between the two, while imipramine (5-20 mg/kg, i.p.) had no effects. None of these drugs changed animal locomotion in activity cages. d-Amphetamine (1.5 mg/kg, i.p.) caused a significant increase in the three parameters used to measure rats' exploratory activity, but the effect was due to an increase in the general activity of the animal. No tolerance to the effects of diazepam developed after daily treatment with 5 mg/kg i.p. for 15 days. Non-sedative and non-convulsant doses of putative anxiogenic drugs such as yohimbine (2.5-5.0 mg/kg, i.p.), picrotoxin (2.0-4.0 mg/ml, i.p.) and ethyl-beta-carboline-3-carboxylate (2.5-5 mg/kg, i.p.) reduced the exploratory activity of rats in the dark compartment. The advantages and problems of using this test to identify anxiolytic and anxiogenic drugs are discussed. abstract_id: PUBMED:25183117 Sedative and anxiolytic effects of ethanolic extract of Calotropis gigantea (Asclepiadaceae) leaves.
Objective: To evaluate the possible anxiogenic activity, sedative property and anxiolytic potential of the crude ethanolic extract of Calotropis gigantea leaves. Methods: The anxiogenic activity of the crude ethanolic extract of Calotropis gigantea leaves was evaluated using standard animal behavioral models, such as hole cross and open field; the sedative property and anxiolytic potential were evaluated by conducting thiopental sodium-induced sleeping time tests and the elevated plus-maze test. Results: The crude ethanolic extract exhibited a significant (P<0.05, P<0.001) decrease in motor activity and exploratory behavior in hole cross and open field tests. The extract also markedly increased both the number of visits to and time spent in the corners of the open field. The extract-treated rats spent more time in the open arms of the elevated plus-maze, showing its antianxiety activity. There was a decrease in the locomotor activity. Conclusions: The obtained results provide support for the use of this species in traditional medicine and warrant further investigation to isolate the specific components that are responsible for the sedative and anxiolytic effects. Components from this plant may have a great potential value as medicinal agents, as leads or model compounds for synthetic or semi-synthetic structure modifications and optimization, and as neuropharmacological probes. Answer: The investigations referenced in the provided abstracts explore both anxiolytic and anxiogenic properties of various substances.

Anxiolytic investigations:
- Studies have found that acute doses of cannabidiol (CBD) reduce anxiety in both animals and humans without anxiogenic effects at higher doses (PUBMED:33008420).
- The anxiolytic drugs diazepam, lorazepam, alprazolam, adinazolam, U-43,465 and tracazolate produced a clear preference for the drug-associated side in rats, indicating their appetitive qualities (PUBMED:2876714).
- The open field drink test (OFDT) was used to measure behavioral responses to an anxiolytic drug, diazepam, and detected a dose-response relationship (PUBMED:7911574).
- The ethanolic extract of Calotropis gigantea leaves showed anxiolytic potential in the elevated plus-maze test (PUBMED:25183117).
- Niranthin, a lignan isolated from various Phyllanthus sources, was evaluated as an anxiolytic agent and showed potential in the management of anxiety disorder (PUBMED:33175328).

Anxiogenic investigations:
- Certain studies have shown that cannabis can produce feelings of anxiety, panic, paranoia, and psychosis, with THC associated with an anxiogenic response (PUBMED:33008420).
- Anxiogenic drugs such as CGS 8216, picrotoxin, and yohimbine produced conditioned aversion in rats (PUBMED:2876714).
- The nucleus incertus (NI) was implicated in the anxiogenic effects of a high dose of buspirone, a clinical anxiolytic that can have a biphasic anxiety-modulating effect (PUBMED:27436722).
- Putative anxiogenic drugs like yohimbine, picrotoxin, and ethyl-beta-carboline-3-carboxylate reduced the exploratory activity of rats in a light/dark choice novelty situation (PUBMED:2574443).

Overall, the investigations cover a spectrum of substances and their effects on anxiety, with some studies focusing on the anxiolytic potential of certain compounds and others examining the anxiogenic effects.
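To make the behavioural readouts in the answer above concrete, here is a small, hedged Python sketch of how two of the cited paradigms are typically quantified. The function names and numbers are illustrative assumptions, not values from the studies.

def place_preference_shift(pre_drug_side_s, post_drug_side_s):
    # Place-conditioning readout (PUBMED:2876714): change in time spent on
    # the drug-paired side. Positive shifts indicate appetitive effects
    # (seen with the anxiolytics); negative shifts indicate conditioned
    # aversion (seen with the anxiogenic drugs).
    return post_drug_side_s - pre_drug_side_s

def open_arm_fraction(open_arm_s, closed_arm_s):
    # Elevated plus-maze readout: fraction of arm time spent in the open
    # arms; higher values are conventionally read as lower anxiety.
    return open_arm_s / (open_arm_s + closed_arm_s)

print(place_preference_shift(300, 420))  # +120 s: preference (appetitive)
print(place_preference_shift(300, 180))  # -120 s: conditioned aversion
print(open_arm_fraction(90, 210))        # 0.3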
Instruction: Aortic valve replacement in octogenarians: is biologic valve the unique solution? Abstracts: abstract_id: PUBMED:33586247 Transcatheter aortic valve replacement after valve-sparing aortic root surgery. The use of transcatheter aortic valves for aortic regurgitation presents unique challenges. Although studies describe their successful off-label use, there is a paucity of literature on transcatheter aortic valve replacement after valve-sparing aortic root surgery. We present a patient with severe aortic regurgitation following valve-sparing aortic root replacement that was treated with an oversized transcatheter aortic valve. abstract_id: PUBMED:37228546 Surgical Aortic Valve Replacement to Treat Prosthetic Valve Endocarditis After Valve-in-Valve Transcatheter Aortic Valve Replacement. Prosthetic valve endocarditis (PVE) is an uncommon complication after heart valve replacement surgery that can result in increased morbidity and mortality. Current guidelines for management of PVE recommend antibiotic therapy followed by surgical valve replacement. The number of aortic valve replacements is expected to rise in the coming years with the expanded indications for use of transcatheter aortic valve replacement (TAVR) in patients with low, intermediate, and high surgical risk, as well as in patients with a failed aortic bioprosthetic valve. Current guidelines do not address the use of valve-in-valve (ViV) TAVR for management of PVE in patients who are at high risk for surgical intervention. The authors present a case of a patient with aortic valve PVE after surgical aortic valve replacement (SAVR); he was treated with valve-in-valve (ViV) TAVR due to the high surgical risk. The patient was discharged, but he returned to the hospital with PVE and valve dehiscence 14 months after ViV TAVR, after which he successfully underwent re-operative SAVR. abstract_id: PUBMED:29588674 Transcatheter Aortic Valve Replacement for Native Aortic Valve Regurgitation. Transcatheter aortic valve replacement with either the balloon-expandable Edwards SAPIEN XT valve, or the self-expandable CoreValve prosthesis has become the established therapeutic modality for severe aortic valve stenosis in patients who are not deemed suitable for surgical intervention due to excessively high operative risk. Native aortic valve regurgitation, defined as primary aortic incompetence not associated with aortic stenosis or failed valve replacement, on the other hand, is still considered a relative contraindication for transcatheter aortic valve therapies, because of the absence of annular or leaflet calcification required for secure anchoring of the transcatheter heart valve. In addition, severe aortic regurgitation often coexists with aortic root or ascending aorta dilatation, the treatment of which mandates operative intervention. For these reasons, transcatheter aortic valve replacement has been only sporadically used to treat pure aortic incompetence, typically on a compassionate basis and in surgically inoperable patients. More recently, however, transcatheter aortic valve replacement for native aortic valve regurgitation has been trialled with newer-generation heart valves, with encouraging results, and new ancillary devices have emerged that are designed to stabilize the annulus-root complex. In this paper we review the clinical context, technical characteristics and outcomes associated with transcatheter treatment of native aortic valve regurgitation. 
abstract_id: PUBMED:34917957 Transcatheter Aortic Valve Replacement for Bicuspid Aortic Insufficiency After Valve-Sparing Aortic Root Replacement. Bicuspid aortic insufficiency (BAI) patients with root aneurysm often require aortic valve and root replacement in a composite procedure. The valve-sparing root replacement (VSARR) procedure is aimed at preserving the native valve when possible. This case highlights a successful transcatheter aortic valve replacement procedure in a BAI patient previously treated with VSARR. abstract_id: PUBMED:32983472 Reoperative aortic valve replacement in the era of valve-in-valve procedures. Current evidence suggests that the choice between valve-in-valve transcatheter aortic valve implantation and reoperative aortic valve replacement should be based on multiple factors. abstract_id: PUBMED:35711205 Use of a sutureless aortic valve in reoperative aortic valve replacement. Objectives: Management of degenerated bioprosthetic aortic valves remains a challenge. Valve-in-valve transcatheter aortic valve replacement (AVR) has limited utility in the presence of small annuli/prosthetic valves. Sutureless valves may offer an advantage over traditional redo AVR by maximizing effective orifice area due to their unique design as well as ease of implant. Methods: Twenty-two patients undergoing redo AVR received a sutureless valve in our institution over the past 5 years. All patients were determined to be poor candidates for valve-in-valve transcatheter AVR due to a combination of small annulus size, low coronary heights, and/or underlying valve characteristics (ie, mechanical valves). Results: Median time from implant to redo AVR was 8 years. One patient died within 30 days. In the 13 patients who had a 21 mm or smaller valve explanted, 5 small, 7 medium, and 1 large Perceval valves were implanted (all with larger internal diameters than the explanted valve). The average postoperative gradient of the cohort valves was 14.8 mm Hg compared with 38.8 mm Hg preoperatively. Conclusions: In addition to their ease of use and rapid deployment, sutureless bioprosthetic aortic valves offer significant physiological advantages in patients with degenerated prosthetic aortic valves and small anatomical annuli. They can also simplify the surgical approach to redo AVR following a Bentall procedure. If long-term durability is confirmed, sutureless valves should be considered in a broader population of patients for both redo and primary aortic valve replacement surgery. abstract_id: PUBMED:31362540 Transcatheter Aortic Valve Replacement With the HLT Meridian Valve. Background: While most self-expanding transcatheter valves are repositionable, only one fully retrievable valve is currently available. The Meridian valve is a new self-expanding valve with full retrievability properties. The objective of our study was to evaluate the early feasibility, preliminary safety, and efficacy of transcatheter aortic valve replacement with the HLT Meridian valve (HLT, Inc). Methods: This was a multicenter early feasibility study including patients with severe aortic stenosis at high surgical risk undergoing transfemoral transcatheter aortic valve replacement with the 25-mm Meridian valve. All serious adverse events were adjudicated by an independent clinical events committee according to Valve Academic Research Consortium-2 criteria. Echocardiography data were assessed by an independent echocardiography core laboratory.
Results: A total of 25 patients (mean age, 85±6 years; 80% men) were included. The valve was successfully implanted in 22 (88%) patients (annulus too large and extreme horizontal aorta in 2 and 1 unsuccessful cases, respectively). Valve retrieval because of an initially inadequate positioning was attempted and successfully performed in 10 (40%) patients. Echocardiography post-transcatheter aortic valve replacement showed a low mean residual gradient (10±4 mm Hg) and the absence of moderate-severe aortic regurgitation (none-trace and mild aortic regurgitation in 76% and 24% of patients, respectively). Mortality at 30 days was 8%, with no cases of disabling stroke, valve embolization, or major/life-threatening bleeding complications. At 6-month follow-up, the cumulative mortality rate was 12%, with no changes in echocardiographic parameters and no cases of valve dysfunction. The majority of patients (89%) were in New York Heart Association class I-II at 6 months. Conclusions: Transcatheter aortic valve replacement with the Meridian valve was feasible and associated with acceptable early and 6-month clinical results. Valve retrieval after full valve deployment was successfully performed in all attempted cases, and valve performance was excellent, with low residual gradients, no cases of moderate-severe aortic regurgitation, and none-trace residual aortic regurgitation in the majority of patients. Clinical Trial Registration: URL: https://www.clinicaltrials.gov. Unique identifier: NCT02838680 (RADIANT-Canada); NCT02799823 (RADIANT-US). abstract_id: PUBMED:34548439 Redo Aortic Valve Replacement. With an increasing number of patients undergoing aortic valve replacement, many patients are at risk for redo aortic valve surgery. It has been reported that 56.2% of the patients receiving a bioprosthesis and 7.4% of the patients receiving a mechanical valve need reoperation 20 years after the primary surgery. Although valve-in-valve transcatheter aortic valve implantation (TAVI) is a less invasive approach, redo aortic valve replacement is preferred for patients with prosthetic valve endocarditis, a small aortic valve prosthesis and poor access for TAVI. Careful preparation is required for safe re-sternotomy, cardiopulmonary bypass management and the cardioplegia strategy. As reported from high-volume centers, redo aortic valve replacement can be performed at a mortality rate similar to that of the primary surgery. New prostheses such as sutureless valves and rapid deployment valves could be useful, as could a minimally invasive cardiac surgery approach, which may prevent tissue injury. However, redo aortic valve replacement via re-sternotomy remains the gold standard. Techniques and strategy for redo aortic valve replacement are reviewed. abstract_id: PUBMED:30665760 Repeat aortic valve replacement for failing aortic root homograft. Objective: Published data comparing transcatheter aortic valve replacement with surgical aortic valve replacement for the failing aortic root homograft are limited. We reviewed our experience with repeat aortic valve replacement in failing aortic root homografts to compare outcomes of transcatheter aortic valve replacement and surgical aortic valve replacement. Methods: We retrospectively reviewed the records of 51 patients with failing aortic root homografts who received repeat aortic valve replacement between October 2000 and May 2018. Operation included transcatheter aortic valve replacement in 11 patients between June 2014 and May 2018.
Surgical aortic valve replacement was performed in 40 patients between October 2000 and January 2018, and operation included repeat composite aortic valve/root replacement in 30 patients (75%). Results: Patient age was 59 years (interquartile range, 50-72 years), sex was female in 9 patients (18%), and time to repeat aortic valve replacement was 12 years (interquartile range, 8-13). Procedure-related complications occurred in 37 patients (73%): vascular injury (any) more commonly in the transcatheter aortic valve replacement group (36% vs 5%; P = .015), bleeding (major or life-threatening) more commonly in the surgical aortic valve replacement group (58% vs 0%; P < .001), and sternal reentry injury only in the surgical aortic valve replacement group (n = 6, 15%). There were 3 procedure-related deaths in the surgical aortic valve replacement group (8%) and 1 (9%) in the transcatheter aortic valve replacement group (P = 1.000). Subsequent cardiac operation occurred in no patients in the transcatheter aortic valve replacement group and in 5 patients in the surgical aortic valve replacement group. Conclusions: Repeat aortic valve replacement for failing aortic root homograft is associated with notable risk of morbidity and mortality regardless of replacement technique. Avoidance of vascular injury could lead to improved outcomes in the transcatheter aortic valve replacement group. abstract_id: PUBMED:35493797 Strut Inversion During Valve-in-Valve Transcatheter Aortic Valve Replacement: An Unknown Complication? A 74-year-old man presented with failure of a bioprosthetic aortic valve implanted 7 years earlier, with a mean gradient of 44 mm Hg across the aortic valve. During valve-in-valve transcatheter aortic valve replacement, we came across an unusual complication of strut inversion at the lower end of the valve. Answer: The abstracts provided do not directly address whether a biologic valve is the unique solution for aortic valve replacement in octogenarians. However, they do discuss various aspects of aortic valve replacement, including transcatheter aortic valve replacement (TAVR), surgical aortic valve replacement (SAVR), and valve-in-valve (ViV) procedures, which can be relevant to the management of aortic valve disease in older patients. For instance, TAVR has become an established therapeutic modality for severe aortic valve stenosis in patients who are not suitable for surgical intervention due to high operative risk, which may include octogenarians (PUBMED:29588674). The use of transcatheter aortic valves for aortic regurgitation also presents unique challenges, but studies describe their successful off-label use (PUBMED:33586247). Additionally, the Meridian valve, a self-expanding valve with full retrievability properties, has shown acceptable early and 6-month clinical results in high surgical risk patients (PUBMED:31362540). In cases of prosthetic valve endocarditis (PVE) after TAVR, current guidelines recommend antibiotic therapy followed by surgical valve replacement, but ViV TAVR has been used in high-risk patients (PUBMED:37228546). Moreover, sutureless valves may offer an advantage over traditional redo AVR by maximizing effective orifice area and simplifying the surgical approach (PUBMED:35711205). For patients with failing aortic root homografts, both TAVR and SAVR have been used, with each approach associated with its own risks and complications (PUBMED:30665760).
Redo aortic valve replacement remains the gold standard, and new prostheses such as sutureless valves and rapid deployment valves could be useful (PUBMED:34548439). In summary, while biologic valves are a common solution for aortic valve replacement in older patients, they are not the only option. TAVR and various surgical techniques are also viable, depending on the patient's specific condition, risk factors, and the presence of complications such as PVE or valve degeneration. The choice of treatment should be based on a comprehensive evaluation of the patient's health status, the risks and benefits of each procedure, and the patient's preferences.
Instruction: Can questions about social interaction correctly identify preschool aged children with autism? Abstracts: abstract_id: PUBMED:23350819 Can questions about social interaction correctly identify preschool aged children with autism? Aim: We developed a questionnaire to assess social development (SIQ) in preschool children. Social development is often not included in medical assessment, though it may assist in early identification of autism spectrum disorder (ASD). Methods: Parents of 108 children with ASD, speech and language disorders, or 'developmental concerns', recruited from a clinical developmental assessment and community child health service, completed the SIQ, and also a Childhood Autism Rating Scale (CARS) assessment. Receiver Operating Characteristic (ROC) curves were generated to assess the performance of different questionnaire score thresholds in correctly identifying children with a CARS score of 30 or more. Logistic regression models were used to identify the questions which had the most predictive value for a CARS score of 30 or more. Results: An SIQ score of 14 or more correctly identified children with a CARS ≥ 30 with a sensitivity of 85%, specificity 85%, positive likelihood ratio (LR) 8.3 and negative LR 0.2. Two questions were identified as most predictive of ASD. Conclusions: The SIQ may assist clinicians in assessing social development and in making decisions about referral for autism assessment. Evaluation of the SIQ at the point of entry to a clinical service is needed. abstract_id: PUBMED:32774558 Mini-Basketball Training Program Improves Physical Fitness and Social Communication in Preschool Children with Autism Spectrum Disorders. This investigation examined the effects of a 12-week mini-basketball training program (MBTP) on physical fitness and social communication in preschool children with autism spectrum disorders (ASD). The study applied a quasi-experimental design. Fifty-nine preschool children aged 3-6 years with ASD were assigned to either an MBTP group (n = 30) or a control group (n = 29). Participants in the MBTP group received a scheduled mini-basketball training program (5 sessions per week, forty minutes per session) for twelve consecutive weeks, while the control group was instructed to maintain their daily activities. The physical fitness test and the parent-reported Social Responsiveness Scale Second Edition (SRS-2) test were performed before and after the intervention. Results indicated that the 12-week MBTP facilitated performance in the physical fitness test, particularly in speed-agility and muscular strength abilities. Additionally, children in the MBTP group demonstrated improvement in SRS-2 performance in social awareness, social cognition, social communication, and autistic mannerisms, whereas no such changes were found in the control group. It may be concluded that the 12-week MBTP could improve physical fitness and social communication in preschool children with ASD, and thus the use of physical exercise intervention as a therapeutic tool for preschoolers with ASD is recommended. abstract_id: PUBMED:25748026 The questions verbal children with autism spectrum disorder encounter in the inclusive preschool classroom. This study investigated questions adults asked to children with autism spectrum disorder in inclusive pre-kindergarten classrooms, and whether child (e.g. autism severity) and setting (i.e. adult-to-child ratio) characteristics were related to questions asked during center-time.
Videos of verbal children with autism spectrum disorder (n = 42) were coded based on the following question categories adapted from the work of Massey et al.: management, less cognitively challenging, or cognitively challenging. Results indicated that management questions (mean = 19.97, standard deviation = 12.71) were asked more than less cognitively challenging questions (mean = 14.22, standard deviation = 8.98) and less cognitively challenging questions were asked more than cognitively challenging questions (mean = 10.00, standard deviation = 6.9). Children with higher language levels had a greater likelihood of receiving cognitively challenging questions (odds ratio = 1.025; p = 0.007). Cognitively challenging questions had a greater likelihood of being asked in classrooms with more adults relative to children (odds ratio = 1.176; p = 0.037). The findings present a first step in identifying the questions directed at preschoolers with autism spectrum disorder in inclusive classrooms. abstract_id: PUBMED:33749170 Predictors of adaptive functioning in preschool aged children with autism spectrum disorder. Difficulties in adaptive functioning are common in autism spectrum disorder (ASD) and contribute to negative outcomes across the lifespan. Research indicates that cognitive ability is related to degree of adaptive functioning impairments, particularly in young children with ASD. However, the extent to which other factors, such as socioeconomic status (SES) and ASD symptom severity, predict impairments in adaptive functioning remains unclear. The goal of this study was to determine the extent to which SES, ASD symptom severity, and cognitive ability contribute to variability in domain-specific and global components of adaptive functioning in preschool-aged children with ASD. Participants were 99 preschool-aged children (2-6 years) with ASD who attended a tertiary diagnostic service. Results demonstrate that cognitive ability accounted for a significant proportion of variance in domain-specific and global components of adaptive functioning, with higher cognitive ability predicting better adaptive functioning. Results also demonstrate that SES accounted for some variability in domain-specific communication skills and global adaptive functioning when compared to basic demographic factors alone (age and gender). By contrast, ASD symptom severity did not predict variability in domain-specific or global components of adaptive functioning. These findings provide support for a relationship between cognitive ability and adaptive functioning in preschool-aged children with ASD and help to explain specific contributions of verbal and nonverbal ability to adaptive functioning; from this, we can better understand which children are likely to show the greatest degree of impairments across components of adaptive functioning early in development. LAY SUMMARY: People with autism often have difficulties with everyday communication, daily living, and social skills, which are also called adaptive functioning skills. This study investigated factors that might be related to these difficulties in preschoolers with autism. We found that better cognitive ability, but not autism symptoms, was associated with better adaptive functioning. This suggests that interventions for young children with autism should take into account cognitive ability to better understand which children are likely to have difficulties with adaptive functioning.
abstract_id: PUBMED:1582962 Effects of self-evaluation on preschool children's use of social interaction strategies with their classmates with autism. This study investigated effects of a self-evaluation procedure on preschool children's use of social interaction strategies among their classmates with autism. Three triads of children (comprised of 1 trained normally developing peer, 1 untrained peer, and 1 child with autism) participated. A multiple baseline design across subjects was used to demonstrate that peers who were taught facilitative strategies increased their use of strategies only after the self-evaluation intervention was introduced. Improvements in the social behavior of children with autism were associated with peers' increased strategy use. Untrained peers demonstrated little change in their social behavior. Treatment effects were replicated when trained peers were asked to use self-evaluation with other children with autism during other play times. Self-evaluation procedures enhanced the use of social interaction strategies on the part of normally developing peers during social skills interventions. abstract_id: PUBMED:26304031 Children with Autism Spectrum Disorders Make a Fruit Salad with Probo, the Social Robot: An Interaction Study. Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included using a repeated measurements design. It was investigated whether the children's interaction with a human differed from their interaction with a social robot during a play task. Also, it was examined whether the two conditions differed in their ability to elicit interaction with a human accompanying the child during the task. Interaction of the children with both partners did not differ apart from eye-contact. Participants made more eye-contact with the social robot than with the human. The conditions did not differ regarding the interaction elicited with the human accompanying the child. abstract_id: PUBMED:28256099 Assessment of Autistic Traits in Children Aged 2 to 4½ Years With the Preschool Version of the Social Responsiveness Scale (SRS-P): Findings from Japan. The recent development and use of autism measures for the general population has led to a growing body of evidence which suggests that autistic traits are distributed along a continuum. However, as most existing autism measures were designed for use in children older than age 4, to date, little is known about the autistic continuum in children younger than age 4. As autistic symptoms are evident in the first few years, to address this research gap, the current study tested the preschool version of the Social Responsiveness Scale (SRS-P) in children aged 2 to 4½ years in clinical (N = 74, average age 40 months, 26-51 months) and community settings (N = 357, average age 39 months, 25-50 months) in Japan. Using information obtained from different raters (mothers, other caregivers, and teachers) it was found that the scale demonstrated a good degree of internal consistency, inter-rater reliability and test-retest reliability, and a satisfactory degree of convergent validity for the clinical sample when compared with scores from diagnostic "gold standard" autism measures. Receiver operating characteristic analyses and the group comparisons also showed that the SRS-P total score discriminated well between children with autism spectrum disorder (ASD) and those without ASD.
Importantly, this scale could identify autistic symptoms or traits distributed continuously across the child population at this age irrespective of the presence of an ASD diagnosis. These findings suggest that the SRS-P might be a sensitive instrument for case identification including subthreshold ASD, as well as a potentially useful research tool for exploring ASD endophenotypes. abstract_id: PUBMED:32781551 Exploring the Participation Patterns and Impact of Environment in Preschool Children with ASD. Participation in everyday activities at home and in the community is essential for children's development and well-being. Limited information exists about participation patterns of preschool children with autism spectrum disorder (ASD). This study examines these participation patterns in both the home and community, and the extent to which environmental factors and social communication abilities are associated with participation. Fifty-four parents of preschool-aged children with ASD completed the Participation and Environment Measure for Young Children and the Autism Classification System of Functioning: Social Communication. The children had a mean age of 48.9 (8.4) months. Patterns of participation were studied using descriptive statistics, radar graphs, and Spearman correlations. Children with ASD participated in a variety of activities at home and in the community, but showed a higher participation frequency at home. Parents identified different barriers (e.g., social demands) and supports (e.g., attitudes) in both settings. There was a moderate positive association between children's social communication abilities and their levels of involvement during participation and the diversity of activities. This study highlights the importance of social communication abilities in the participation of preschool children with ASD, and the need to support parents while they work to improve their child's participation, especially within their communities. abstract_id: PUBMED:30825081 Needs of Grandparents of Preschool-Aged Children with ASD in Sweden. Little is known about needs of grandparents of young children with autism in family and community settings. This study investigated perceived needs of grandparents of preschool-aged children diagnosed with ASD in the cultural context of Sweden. Participants were 120 grandparents of children enrolled into autism intervention programs provided by the public disability services in Stockholm. The Grandparents' Needs Survey and the SDQ Impact supplement were used to collect data. Grandparents expressed most needs in topic areas of information and childcare. No significant relations were found between grandparents' demographics and perceptions of needs; grandparents' needs were predicted by their perceived burden. The findings provide insight into understanding of grandparents' needs essential for planning and provision of quality family-centered early intervention services. abstract_id: PUBMED:28344564 Investigating the Grammatical and Pragmatic Origins of Wh-Questions in Children with Autism Spectrum Disorders. Compared to typically developing children, children with autism (ASD) show delayed production of wh-questions. The degree to which such deficits derive from social-pragmatic requirements and/or from the grammatical complexity of these structures is currently controversial.
The current study employed the intermodal preferential looking (IPL) paradigm, which reduces social-pragmatic demands. The IPL paradigm can help distinguish these proposals, as successful comprehension promotes the "pragmatics-origins" argument whereas comprehension difficulties would implicate a "grammatical-origins" argument. Additionally, we tested both the linguistic and social explanations by assessing the contributions of children's early grammatical knowledge (i.e., SVO word order) and their social-pragmatic scores on the Vineland to their later wh-question comprehension. Fourteen children with ASD and 17 TD children, matched on language level, were visited in their homes at 4-month intervals. Comprehension of wh-questions and SVO word order were tested via IPL: the wh-question video showed a costumed horse and bird serving as agents or patients of familiar transitive actions. During the test trials, they were displayed side by side with directing audios (e.g., "What did the horse tickle?", "What hugged the bird?", "Where is the horse/bird?"). Children's eye movements were coded offline; the DV was their percent looking to the named item during test. To show comprehension, children should look longer at the named item during a where-question than during a subject-wh or object-wh question. Results indicated that TD children comprehended both subject and object wh-questions at 32 months of age. Comprehension of object-wh questions emerged chronologically later in children with ASD compared to their TD peers, but at similar levels of language. Moreover, performance on word order and social-pragmatic scores independently predicted both groups' later performance on wh-question comprehension. Our findings indicate that both grammar and social-pragmatics are implicated in the comprehension of wh-questions. The "grammatical-origins" argument is supported because the ASD group did not reveal earlier and stable comprehension of wh-questions; furthermore, their performance on SVO word order predicted their later success in linguistic processing of wh-questions. The "pragmatic-origins" argument is also supported because children's earlier socialization and communication scores strongly predicted their successful performance on wh-question comprehension. Answer: Yes, questions about social interaction can correctly identify preschool-aged children with autism. A study developed a questionnaire to assess social development (Social Interaction Questionnaire or SIQ) in preschool children and found that an SIQ score of 14 or more correctly identified children with a Childhood Autism Rating Scale (CARS) score of 30 or more with a sensitivity of 85% and specificity of 85%. Two questions from the SIQ were identified as most predictive of Autism Spectrum Disorder (ASD) (PUBMED:23350819). This suggests that questions focusing on social development can be an effective tool in the early identification of ASD in preschool-aged children.
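The headline SIQ numbers (sensitivity 85%, specificity 85%, LR+ 8.3, LR- 0.2) are tied together by the standard definitions LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. The sketch below is a generic illustration of those formulas, not a reconstruction of the study's analysis; note that plugging in the rounded 85%/85% figures yields an LR+ of about 5.7 rather than the reported 8.3, so the published ratio was presumably computed from unrounded estimates, while the LR- of about 0.18 rounds to the reported 0.2:

    def likelihood_ratios(sensitivity, specificity):
        # LR+: how much a positive screen multiplies the pre-test odds.
        lr_positive = sensitivity / (1.0 - specificity)
        # LR-: how much a negative screen shrinks the pre-test odds.
        lr_negative = (1.0 - sensitivity) / specificity
        return lr_positive, lr_negative

    lr_pos, lr_neg = likelihood_ratios(0.85, 0.85)
    print(round(lr_pos, 1), round(lr_neg, 2))  # -> 5.7 0.18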
Instruction: Effect of alcohol in traumatic brain injury: is it really protective? Abstracts: abstract_id: PUBMED:33954580 Biomechanical Analysis of Head Subjected to Blast Waves and the Role of Combat Protective Headgear Under Blast Loading: A Review. Blast-induced traumatic brain injury (bTBI) is a rising health concern of soldiers deployed in modern-day military conflicts. For bTBI, blast wave loading is a cause, and damage incurred to brain tissue is the effect. There are several proposed mechanisms for bTBI, such as direct cranial entry, skull flexure, thoracic compression, blast-induced acceleration, and cavitation, which are not mutually exclusive. So the cause-effect relationship is not straightforward. The efficiency of protective headgear against blast waves is relatively unknown compared with other threats. Proper knowledge about standard problem space, underlying mechanisms, blast reconstruction techniques, and biomechanical models is essential for protective headgear design and evaluation. Various researchers from cross disciplines analyze bTBI from different perspectives. From the biomedical perspective, the physiological response, neuropathology, injury scales, and even the molecular-level and cellular-level changes incurred during injury are essential. From a combat protective gear designer perspective, the spatial and temporal variation of mechanical correlates of brain injury such as surface overpressure, acceleration, tissue-level stresses, and strains are essential. This paper outlines the key inferences from bTBI studies that are essential in the protective headgear design context. abstract_id: PUBMED:12809567 The protective effect of M40401, a superoxide dismutase mimetic, on post-ischemic brain damage in Mongolian gerbils. Background: Overproduction of free radical species has been shown to occur in brain tissues after ischemia-reperfusion injury. However, most free radical scavengers known to antagonize oxidative damage (e.g. superoxide dismutase, catalase) are unable to protect against ischemia-reperfusion brain injury when given in vivo, an effect mainly due to their difficulty in gaining access to brain tissues. Here we studied the effect of a low molecular weight superoxide dismutase mimetic (M40401) on brain damage subsequent to ischemia-reperfusion injury in Mongolian gerbils. Results: In animals undergoing ischemia-reperfusion injury, neuropathological and ultrastructural changes were monitored for 1-7 days either in the presence or in the absence of M40401 after bilateral common carotid artery occlusion (BCCO). Administration of M40401 (1-40 mg/kg, given i.p. 1 h after BCCO) protected against post-ischemic, ultrastructural and neuropathological changes occurring within the hippocampal CA1 area. The protective effect of M40401 was associated with a significant reduction of the levels of malondialdehyde (MDA; a marker of lipid peroxidation) in ischemic brain tissues after ischemia-reperfusion. Conclusion: Taken together, these results demonstrate that M40401 provides protective effects when given early after the induction of ischemia-reperfusion of brain tissues and suggest the possible use of such compounds in the treatment of neurological dysfunction subsequent to cerebral flow disturbances.
abstract_id: PUBMED:716743 Effect of protective helmets and head guards on localization of injuries to the skull and brain in cranio-cerebral injuries. The results of post-mortem examination of 140 cadavers of persons who had died of craniocerebral injuries are appraised. It was established that in injury inflicted through a protective head-piece, the proportion of damage to the bones of the base of the skull, basally located structures of the large hemispheres and stem of the brain in the total number of injuries increases. The study is supplemented with mathematical calculations which explain the dependence observed. abstract_id: PUBMED:25241777 Protective effect of polydatin on learning and memory impairments in neonatal rats with hypoxic-ischemic brain injury by up-regulating brain-derived neurotrophic factor. Polydatin is a key component of Polygonum cuspidatum, a herb with medical and nutritional value. The present study investigated the protective effect of polydatin against learning and memory impairment in neonatal rats with hypoxic-ischemic brain injury (HIBI). The unilateral common carotid artery ligation method was used to generate neonatal HIBI rats. Y-maze testing revealed that rats with HIBI exhibited memory impairment, while rats with HIBI treated with polydatin displayed enhanced long-term learning and memory. Of note, polydatin was found to upregulate the expression of hippocampal brain-derived neurotrophic factor (BDNF) in rats with HIBI. BDNF has a role in protecting against HIBI-induced brain tissue injury and alleviating memory impairment. These findings showed that polydatin had a protective effect against learning and memory impairment in neonatal rats with HIBI and that the protective effect may be mediated through the upregulation of BDNF. abstract_id: PUBMED:26974030 The protective effect of a helmet in three bicycle accidents - A finite element study. There is some controversy regarding the effectiveness of helmets in preventing head injuries among cyclists. Epidemiological, experimental and computer simulation studies have suggested that helmets do indeed have a protective effect, whereas other studies based on epidemiological data have argued that there is no evidence that the helmet protects the brain. The objective of this study was to evaluate the protective effect of a helmet in single bicycle accident reconstructions using detailed finite element simulations. Strain in the brain tissue, which is associated with brain injuries, was reduced by up to 43% for the accident cases studied when a helmet was included. This resulted in a reduction of the risk of concussion of up to 54%. The stress to the skull bone went from fracture level of 80 MPa down to 13-16 MPa when a helmet was included, and the skull fracture risk was reduced by up to 98% based on linear acceleration. Even with a 10% increased riding velocity for the helmeted impacts, to take into account possible increased risk taking, the risk of concussion was still reduced by up to 46% when compared with the unhelmeted impacts with original velocity. The results of this study show that the brain injury risk and risk of skull fracture could have been reduced in these three cases if a helmet had been worn. abstract_id: PUBMED:24721405 Protective effect of meloxicam against acute radiation-induced brain injury in rats. Objective: To observe the protective effect of meloxicam against acute radiation-induced brain injury in rats.
Methods: Fifty-four SD rats were randomly divided into a blank control group, a radiation group (20 Gy) and a therapy group (20 Gy radiation followed by 10 mg/kg meloxicam treatment). The whole brain of SD rats in the radiation and therapy groups was vertically irradiated by a 6 MeV electron beam at a dose of 20 Gy. One, 3 and 7 days after irradiation, the morphological changes of hippocampal neurons were observed using HE staining, and the expressions of cyclooxygenase-2 (COX-2) mRNA and protein were detected by RT-PCR and immunohistochemistry, respectively. Results: Compared with the blank control group, the radiation group showed neuron swelling, vascular endothelial cell edema and space enlargement around the capillaries. Both neuron swelling and vascular endothelial cell injury in the therapy group were milder than those in the radiation group. Compared with the blank control group, the levels of COX-2 mRNA and protein in the radiation and therapy groups increased markedly one day after irradiation (P<0.05), and compared with the radiation group, the levels decreased markedly in the therapy group (P<0.05); 3 and 7 days after irradiation, the levels of COX-2 mRNA and protein among the 3 groups showed no statistically significant differences (P>0.05). Conclusion: The early use of meloxicam can reduce the brain injury induced by radiation. Its protective effect may be related to the relief of vascular endothelial cell injury and the decreased expression of COX-2. abstract_id: PUBMED:31972233 Is bilingualism protective for adults with aphasia? The bilingual advantage proposes that bilingual individuals have enhanced cognitive control compared to their monolingual counterparts. Bilingualism has also been shown to contribute to cognitive reserve by offsetting the behavioral presentation of brain injury or neural degeneration. However, this effect has not been closely examined in individuals with post-stroke or post-TBI aphasia. Because bilingualism has been suggested as a factor of cognitive reserve, it may provide protective mechanisms for adults with aphasia. In the current study, evidence for the bilingual advantage was examined in 13 Spanish-English bilingual healthy adults (BHA) compared to 13 English monolingual healthy adults (MHA). Additionally, evidence for cognitive reserve as defined by a bilingual advantage was examined in 18 Spanish-English bilingual adults with aphasia (BAA) compared to 18 English monolingual adults with aphasia (MAA) who were otherwise matched on their age, education, language impairment, and non-verbal executive functions. All participants completed a non-linguistic cognitive control task that included congruent and incongruent conditions. Results indicated no bilingual cognitive control advantage on reaction times in healthy adult groups; however, BAA were faster than MAA, suggesting that bilingualism may contribute to cognitive reserve in adults with aphasia. Thus, manipulating multiple languages throughout the lifetime may be protective after an acquired brain injury. abstract_id: PUBMED:30966831 The brain protective effect of dexmedetomidine during surgery for paediatric patients with congenital heart disease. Objective: To study the brain protective effect of dexmedetomidine (DEX) during surgery in paediatric patients with congenital heart disease (CHD).
Methods: This randomized single-blind controlled study enrolled paediatric patients aged 0-3 years with CHD who underwent surgery and randomized them into two groups: one group received DEX and the control group received 0.9% NaCl during anaesthesia. Demographic data, heart rate (HR), mean arterial pressure (MAP) and central venous pressure (CVP) were recorded. Levels of neuron specific enolase (NES) and S-100β protein were determined using enzyme-linked immunosorbent assays. Results: The study enrolled 80 paediatric patients with CHD. Compared with the control group, HR, MAP and CVP were significantly lower in the DEX group at all time-points except for T0. At all time-points except for T0, the levels of jugular venous oxygen saturation in the DEX group were significantly higher compared with the control group. At all time-points except for T0, the levels of arterial venous difference and cerebral extraction of oxygen were significantly lower in the DEX group compared with the control group. Levels of NES and S-100β protein in the DEX group were significantly lower compared with the control group at all time-points except for T0. Conclusion: DEX treatment during surgery for CHD improved oxygen metabolism in brain tissues and reduced the levels of NES and S-100β protein. abstract_id: PUBMED:22544830 Attenuation of blast pressure behind ballistic protective vests. Background: Clinical studies increasingly report brain injury and not pulmonary injury following blast exposures, despite the increased frequency of exposure to explosive devices. The goal of this study was to determine the effect of personal body armour use on the potential for primary blast injury and to determine the risk of brain and pulmonary injury following a blast and its impact on the clinical care of patients with a history of blast exposure. Methods: A shock tube was used to generate blast overpressures on soft ballistic protective vests (NIJ Level-2) and hard protective vests (NIJ Level-4) while overpressure was recorded behind the vest. Results: Both types of vest were found to significantly decrease pulmonary injury risk following a blast for a wide range of conditions. At the highest tested blast overpressure, the soft vest decreased the behind-armour overpressure by a factor of 14.2, and the hard vest decreased behind-armour overpressure by a factor of 56.8. Addition of body armour increased the 50th percentile pulmonary death tolerance of both vests to higher levels than the 50th percentile for brain injury. Conclusions: These results suggest that ballistic protective body armour vests, especially hard body armour plates, provide substantial chest protection in primary blasts and explain the increased frequency of head injuries, without the presence of pulmonary injuries, in protected subjects reporting a history of blast exposure. These results suggest increased clinical suspicion for mild to severe brain injury is warranted in persons wearing body armour exposed to a blast with or without pulmonary injury. abstract_id: PUBMED:25066402 Protective effect of topiramate on hypoxic-ischemic brain injury in neonatal rat. Objective: To explore the protective effect of topiramate (TPM) on hypoxic-ischemic brain injury. Methods: A total of 360 neonatal rats were selected and then randomly divided into a sham operation group, an ischemia and hypoxia group, a conventional treatment group and a degradation therapy group (n=90).
After surgical treatment, the sham and ischemia and hypoxia groups were treated with normal saline; the conventional treatment group received TPM solution 100 mg/kg, 2 times/d; the degradation therapy group received TPM solution 150 mg/kg, 2 times/d, with the dose reduced by 50 mg/kg after every 3 days of treatment, down to a minimum of 50 mg/kg. The four groups received continuous treatment for 10 d. After treatment for 1 d, 4 d, 7 d and 10 d, the cerebral edema, neuron-specific enolase (NSE) and γ-aminobutyric acid (GABA) levels and cognitive abilities of the four groups were observed. Results: After 1 d and 4 d of treatment, the brain water content and NSE levels in the ischemia and hypoxia group, the conventional treatment group and the degradation therapy group were significantly higher than those in the sham group (P<0.05), while the brain water content and NSE levels of the conventional treatment group and the degradation therapy group were significantly lower than those in the ischemia and hypoxia group (P<0.05). GABA levels and learning ability of the ischemia and hypoxia group, the conventional treatment group and the degradation therapy group were significantly lower than in the sham group (P<0.05), and the GABA levels and learning ability of the conventional treatment group and the degradation therapy group were significantly higher than in the ischemia and hypoxia group (P<0.05). After 7 d and 10 d of treatment, the brain water content and NSE levels in the sham operation group, the conventional treatment group and the degradation therapy group were significantly lower than in the ischemia and hypoxia group (P<0.05), while the GABA levels and learning ability of these three groups were significantly higher than those in the ischemia and hypoxia group (P<0.05); the GABA levels in the conventional treatment group were significantly higher than in the degradation therapy group (P<0.05). After 10 d of treatment, the GABA levels of the conventional treatment group were significantly higher than in the sham group, and the learning ability of the degradation therapy group and the sham operation group was significantly higher than that of the conventional treatment group (P<0.05). Conclusions: The correct amount of short-term TPM has a protective effect on hypoxic-ischemic brain injury, but long-term or excessive use may cause new damage to the brain and reduce cognitive ability. Answer: The provided abstracts do not directly address the question of whether alcohol has a protective effect in traumatic brain injury (TBI).
The abstracts focus on various protective measures and treatments for brain injuries, including the use of protective headgear in blast-induced traumatic brain injury (bTBI) (PUBMED:33954580), the administration of a superoxide dismutase mimetic (M40401) in ischemia-reperfusion brain injury (PUBMED:12809567), the localization of injuries when protective helmets are worn (PUBMED:716743), the use of polydatin in hypoxic‑ischemic brain injury (HIBI) (PUBMED:25241777), the effectiveness of helmets in reducing brain injury risk in bicycle accidents (PUBMED:26974030), the protective effect of meloxicam against acute radiation-induced brain injury (PUBMED:24721405), the potential cognitive reserve benefits of bilingualism in adults with aphasia (PUBMED:31972233), the brain protective effect of dexmedetomidine during surgery for pediatric patients with congenital heart disease (PUBMED:30966831), the attenuation of blast pressure behind ballistic protective vests (PUBMED:22544830), and the protective effect of topiramate on hypoxic-ischemic brain injury (PUBMED:25066402). None of these abstracts mention the impact of alcohol on TBI outcomes. Therefore, based on the provided abstracts, it is not possible to determine whether alcohol has a protective effect in the context of traumatic brain injury. Additional research and literature would need to be consulted to answer this question.
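One quantitative detail from the vest study (PUBMED:22544830) lends itself to a quick worked example: at the highest tested overpressure, the soft vest reduced behind-armour overpressure by a factor of 14.2 and the hard vest by 56.8. The sketch below simply exercises that division; the incident pressure value is hypothetical, since the abstract does not report absolute pressures:

    SOFT_VEST_FACTOR = 14.2  # NIJ Level-2 attenuation factor, from the abstract
    HARD_VEST_FACTOR = 56.8  # NIJ Level-4 attenuation factor, from the abstract

    def behind_armour_overpressure(incident_kpa, attenuation_factor):
        # Behind-armour pressure is the incident blast overpressure
        # divided by the measured attenuation factor.
        return incident_kpa / attenuation_factor

    incident = 500.0  # kPa, hypothetical incident overpressure
    for name, factor in (("soft vest", SOFT_VEST_FACTOR), ("hard vest", HARD_VEST_FACTOR)):
        print(name, round(behind_armour_overpressure(incident, factor), 1), "kPa")
    # -> soft vest 35.2 kPa, hard vest 8.8 kPa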
Instruction: Do general practitioners want guidelines? Abstracts: abstract_id: PUBMED:23368720 Barriers to the implementation of preconception care guidelines as perceived by general practitioners: a qualitative study. Background: Despite strong evidence of the benefits of preconception interventions for improving pregnancy outcomes, the delivery and uptake of preconception care and periconceptional folate supplementation remain low. General practitioners play a central role in the delivery of preconception care. Understanding general practitioners' perceptions of the barriers and enablers to implementing preconception care allows for more appropriate targeting of quality improvement interventions. Consequently, the aim of this study was to examine the barriers and enablers to the delivery and uptake of preconception care guidelines from general practitioners' perspective using theoretical domains related to behaviour change. Methods: We conducted a qualitative study using focus groups consisting of 22 general practitioners who were recruited from three regional general practice support organisations. Questions were based on the theoretical domain framework, which describes 12 domains related to behaviour change. General practitioners' responses were classified into predefined themes using a deductive process of thematic analysis. Results: Beliefs about capabilities, motivations and goals, environmental context and resources, and memory, attention and decision making were the key domains identified in the barrier analysis. Some of the perceived barriers identified by general practitioners were time constraints, the lack of women presenting at the preconception stage, the numerous competing preventive priorities within the general practice setting, issues relating to the cost of and access to preconception care, and the lack of resources for assisting in the delivery of preconception care guidelines. Perceived enablers identified by general practitioners included the availability of preconception care checklists and patient brochures, handouts, and waiting room posters outlining the benefits and availability of preconception care consultations. Conclusions: Our study has identified some of the barriers and enablers to the delivery and uptake of preconception care guidelines, as perceived by general practitioners. Relating these barriers to a theoretical domain framework provides a clearer understanding of some of the psychological aspects that are involved in the behaviour of general practitioners towards the delivery and uptake of preconception care. Further research prioritising these barriers and the theoretical domains to which they relate to is necessary before a methodologically rigorous intervention can be designed, implemented, and evaluated. abstract_id: PUBMED:27903037 Resuscitation update for general practitioners. Background: The latest changes to resuscitation guidelines in Australia were released in 2016. Few of the changes will have an impact on general practitioners (GPs) but there are some additional issues that they, as health professionals and leaders in the community, should be informed about. Objective: The objective of this article is to provide an update for GPs on the current resuscitation guidelines. Discussion: This article describes the latest changes in resuscitation recommendations in the fields of first aid, basic life support, advanced life support and paediatric resuscitation, with an emphasis on issues of particular relevance to GPs. 
abstract_id: PUBMED:21536601 General practitioners and clinical practice guidelines: a reexamination. General practitioners' (GPs') use of clinical practice guidelines (CPGs) may be influenced by various contextual and attitudinal factors. This study examines general attitudes toward CPGs to establish profiles according to these attitudes and to determine if these profiles are associated with awareness and with use of CPGs in daily practice. The authors conducted a cross-sectional telephone survey of 1,759 French GPs and measured (a) their general attitudes toward CPGs and (b) their awareness and use in daily practice of CPGs for six specific health problems. A bivariate probit model was used with sample selection to analyze the links between GPs' general attitudes and CPG awareness/use. The authors found three GP profiles according to their opinions toward CPGs and a positive association between these profiles and CPG awareness but not use. It is important to build awareness of CPGs before GPs develop negative attitudes toward them. abstract_id: PUBMED:24160566 General practitioners' knowledge of whiplash guidelines improved with online education. Objective: The primary objective of this study was to evaluate the effect of an online education program used to implement the Australian (New South Wales) whiplash guidelines with general practitioners (GPs). The secondary aim was to identify factors associated with learning. Methods: An online educational and evaluation activity was developed to reflect the key messages for GPs from the Australian whiplash guidelines. The educational activity was hosted on the Royal Australian College of General Practitioners' website (www.gplearning.com.au) for a period of 3 years. Participants were recruited through advertisement and media releases. Participants completed a baseline evaluation of their knowledge, participated in the interactive educational activity and completed a post-knowledge questionnaire. The primary outcome was change in professional knowledge; predictors of learning were computed using linear regression. Results: Two hundred and fifteen GPs participated. Knowledge significantly improved between baseline and post-knowledge questionnaire scores (P < 0.00001). A total of 57.2% of participants improved their knowledge by more than 20%, indicating a large effect. Low baseline knowledge predicted learning, accounting for 71% of the variance. Conclusions: Online education of GPs significantly improved their knowledge in relation to guidelines for whiplash. Those with low baseline knowledge improved their knowledge the most, suggesting that implementation strategies should be targeted at this group. abstract_id: PUBMED:11320762 General practitioners and clinical guidelines. Objective: To assess the attitudes of general practitioners in Harare, Zimbabwe, towards the use of clinical practice guidelines (CPGs). Design: Cross-sectional survey. Setting: General practitioners in private practice within the urban Harare (Zimbabwe) environs. Subjects: Two hundred and thirty-two general practitioners in Harare, Zimbabwe. Main Outcome Measures: The response to a questionnaire eliciting attitudes to CPGs. Results: Questionnaires were sent to 232 general practitioners. Of these, 137 (59.1%) returned a completed questionnaire.
Among the respondents, 95.6% felt that general practitioners should be involved in the development of guidelines, 72.6% had read at least one guideline, 65.9% were prepared to use guidelines in their practice, 61.6% thought that guidelines would improve their treatment ability, and 59.7% thought that guidelines would improve their knowledge of disease. 76.5% felt that the government should not legislate, 66.2% felt that guidelines reduce practitioners' flexibility and 57.9% felt that guidelines would not improve their diagnostic ability. Conclusion: The respondents were, in general, favourably disposed towards CPGs. Most had already read some guidelines, and about two thirds were prepared to use them. Almost all respondents felt that general practitioners should be involved in the development of guidelines for use in general practice. These general practitioners felt that guidelines were more likely to help them treat patients than to make a diagnosis. Despite these favourable attitudes, many practitioners felt that guidelines would limit their personal flexibility in caring for patients. Organisations developing or implementing CPGs in general practice should address these concerns. abstract_id: PUBMED:25357143 Impact of new guidelines and educational program on awareness of medical fitness to drive among general practitioners in Ireland. Objective: To investigate changes in attitudes, resources, and practices of general practitioners (GPs) toward evaluating medical fitness to drive (MFTD) following the publication of national guidelines and an extensive educational programme in traffic medicine. Method: Postal questionnaire survey to GPs (n = 1,000) in November 2013. Results: The final response rate was 46%. GPs are confident (57%) or very confident (14%) in assessing MFTD. There is a high awareness of the new Irish guidelines, with 86% of GPs using them for assistance in assessing MFTD. GPs are divided as to whether GPs (49%) or practitioners specially trained to assess MFTD (44%) should be primarily responsible for assessing MFTD. GPs expressed interest in traffic medicine educational programs, most notably a resource pack for continuing medical education (CME) small group learning (87%), MFTD software (71%), and an online moodle (68%). Many (68%) remain concerned about their liability in regard to MFTD assessments. Conclusion: Irish GPs are confident in assessing MFTD and show a high level of awareness of the new guidelines. There is a clear interest among GPs in further educational supports and training in traffic medicine, particularly MFTD assessments. abstract_id: PUBMED:34101082 What is the significance of guidelines in the primary care setting?: Results of an exploratory online survey of general practitioners in Germany. Medical guidelines aim to ensure that care processes take place in an evidence-based and structured manner. They are especially relevant in outpatient primary care due to the wide range of symptoms and clinical pictures. In German-speaking countries, there is a lack of current findings documenting general practitioners' opinions and experiences regarding guidelines, their expectations and their views on what improvements could be made to increase the use of this type of evidence-based instrument in the primary care setting. Between April and August 2020, a total of 3098 general practitioners were surveyed in the states of Baden-Württemberg, Hesse and Rhineland-Palatinate via an online questionnaire.
Alongside the descriptive evaluation, t-testing was used to determine significant differences between two independent sampling groups. A factor analysis was also used to cluster the expectations of those surveyed regarding the fulfilment of requirements relating to guidelines. A total of 52% of those surveyed have a positive view of guidelines. Overall, guidelines are associated with an increased evidence-based approach (69%), standardisation of diagnosis and treatment (62%) and a reduction in overprovision or underprovision of care (57%). In all, 62% of the physicians who implemented guidelines observed positive effects on the quality of care provided, and 67% reported that the implementation of guidelines improved the quality of their diagnostic or therapeutic skills. However, implementation is often seen as being complicated (43%) and restricting the physician's ability to act independently (63%). Survey participants suggested that guidelines could be optimised by giving greater consideration to nondrug alternatives (46%), focusing on issues related to quality of life (42%) and offering a comparative assessment of various treatment options (39%). In order to further promote the attractiveness of guidelines for primary care the design of guidelines should be oriented more towards their application; they should be well-presented to make them easier to implement. The scope of action available to the physician should be stressed. The guidelines should provide recommendations on opportunities for the delegation of tasks within practice teams. abstract_id: PUBMED:28207044 Review of guidance on recurrence risk management for general practitioners in breast cancer, colorectal cancer and melanoma guidelines. Background: General practitioners (GPs) will face cancer recurrences more frequently due to the rising number of cancer survivors and greater involvement of GPs in the follow-up care. Currently, GPs are uncertain about managing recurrence risks and may need more guidance. Objective: To explore what guidance is available for GPs on managing recurrence risks for breast cancer, colorectal cancer and melanoma, and to examine whether recurrence risk management differs between these tumour types. Methods: Breast cancer, colorectal cancer and melanoma clinical practice guidelines were identified via searches on the internet and in the literature, and experts were approached to identify guidelines. Guidance on recurrence risk management that was (potentially) relevant for GPs was extracted and summarized into topics. Results: We included 24 breast cancer, 21 colorectal cancer and 15 melanoma guidelines. Identified topics on recurrence risk management were rather similar among the three tumour types. The main issue in the guidelines was recurrence detection through consecutive diagnostic testing. Guidelines agree on both routine and nonroutine tests, but recommended frequencies for follow-up are inconsistent, except for mammography screening for breast cancer. Only six guidelines provided targeted guidance for GPs. Conclusion: This inventory shows that recurrence risk management has overlapping areas between tumour types, making it more feasible for GPs to provide this care. However, little of the guidance on recurrence risk management is specific to GPs. Recommendations on time intervals of consecutive diagnostic tests are inconsistent, making it difficult for GPs to manage recurrence risks and illustrating the need for more guidance targeted for GPs.
abstract_id: PUBMED:28735500 Adherence to COPD management guidelines in general practice? A review of the literature. Background: Chronic obstructive pulmonary disease (COPD) is a progressive illness that is mostly managed in the general practice setting. The Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines are the international gold standard, and it is important to understand how these are being applied in general practice. Aims: This review aimed to assess the current level of adherence to international best practice guidelines among general practitioners in relation to COPD. Methods: PubMed and EMBASE searches (from 2012 to 2016) were performed and used the search terms guidelines, COPD, general practitioners, and primary care. Papers were excluded if they were not primary sources, were published before 2012, or did not pertain to a general practice setting. Results: After applying set inclusion and exclusion criteria, 11 studies were retrieved. These papers were grouped under three categories: diagnosis, pharmacological, and non-pharmacological management, based on the GOLD guidelines. Conclusions: Current studies show significant variability in adherence to the GOLD guidelines. Barriers identified include lack of clarity and lack of familiarity with the guidelines and their recommendations. If general practice is expected to manage COPD and other chronic diseases, health service investment is needed to provide appropriate focused guidelines, to support their dissemination, and to provide the resources to implement them in practice. abstract_id: PUBMED:26893922 The approach of general practitioners in Lagos to the detection and evaluation of hypertension. Introduction: Hypertension (HTN) control is a major public health and clinical challenge. A number of guidelines exist globally to assist in tackling this challenge. The aim of this study was to determine conformity of the HTN detection and evaluation practices of a sample of Lagos-based general practitioners (GPs) to international guidelines. Methods: Self-administered structured questionnaires were used to collect data from a cohort of GPs attending continuing medical education programs in Lagos. Results: Out of the 460 GPs that were approached, 435 agreed to participate in the study, with questionnaires from 403 GPs analyzed. The average age and number of years post-registration of the GPs were 40.0 ± 11.3 years and 14.3 ± 11.1 years, respectively. Two thirds (n = 269) were in private practice. Their daily average total and HTN patients' loads were 17.4 ± 14.3 and 4.4 ± 3.5, respectively. Awareness of HTN guidelines was 46.7% (n = 188), while 18.1% (n = 73) were able to name one or more HTN guidelines. The approaches of these GPs to the detection and evaluation of HTN and their relationships to the GPs' experience were heterogeneous. Discussion: The approach of the GPs to the detection and evaluation of HTN, though heterogeneous, is unsatisfactory and may partly contribute to poor HTN control in Nigeria. Strengthening the capacity of GPs in this regard through continuous medical education may greatly improve HTN control. Answer: Yes, general practitioners (GPs) generally appear to want guidelines. The literature suggests that GPs recognize the value of clinical practice guidelines (CPGs) and are favorably disposed towards them.
For instance, in a study conducted in Harare, Zimbabwe, a majority of GPs felt that they should be involved in the development of guidelines, and about two-thirds were prepared to use them in their practice. They believed that guidelines could improve their treatment ability and knowledge of disease (PUBMED:11320762). Similarly, a survey of GPs in Germany found that 52% have a positive view of guidelines, associating them with increased evidence-based practice, standardization of diagnosis and treatment, and a reduction in overprovision or underprovision of care. The majority of those who implemented guidelines observed positive effects on the quality of care provided (PUBMED:34101082). However, GPs also express concerns about guidelines, such as the potential to restrict their personal flexibility in patient care and the complexity of implementation. Some GPs feel that guidelines could be optimized by giving greater consideration to non-drug alternatives and focusing on quality of life issues (PUBMED:34101082). Additionally, while there is a high level of awareness of certain guidelines, such as those for medical fitness to drive in Ireland, GPs remain concerned about their liability in regard to assessments (PUBMED:25357143). In summary, while GPs do want guidelines and see their benefits, they also desire that guidelines be practical, flexible, and not overly restrictive of their clinical judgment. They also express a need for involvement in the development of guidelines and for educational support to better implement them in practice (PUBMED:11320762; PUBMED:34101082; PUBMED:25357143).
Instruction: The tale of two serologic tests to screen for syphilis--treponemal and nontreponemal: does the order matter? Abstracts: abstract_id: PUBMED:21183862 The tale of two serologic tests to screen for syphilis--treponemal and nontreponemal: does the order matter? Background: Standard syphilis screening involves an initial screening with a nontreponemal test and confirmation of positives with a treponemal test. However, some laboratories have reversed the order. There is no detailed quantitative and qualitative evaluation for the order of testing. In this study, we analyzed the health and economic outcomes of the order of testing for the 2 serologic tests used in syphilis screening under pure screening settings. Methods: We used a cohort decision analysis to examine the health and economic outcomes of the screening algorithms for low and high prevalence settings. The 2-step algorithms were nontreponemal followed by treponemal (Nontrep-First) and treponemal followed by nontreponemal (Trep-First). We included the 1-step algorithms (treponemal only [Trep-Only] and an on-site nontreponemal only [Nontrep-Only]) for comparison. We estimated overtreatment rates and the number of confirmatory tests required for each algorithm. Results: For a cohort of 10,000 individuals, our results indicated that the overtreatment rates were substantially higher (more than 3 times) for the 1-step algorithms, although they treated a higher number of cases (over 15%). The 2-step algorithms detected and treated the same number of individuals. Among the 2-step algorithms, the Nontrep-First was more cost-effective in the low prevalence setting ($1400 vs. $1500 per adverse outcome prevented) and more cost-saving ($102,000 vs. $84,000) in the high prevalence setting. Conclusions: The difference in cost was largely due to the substantially higher number of confirmatory tests required for the Trep-First algorithm, although the number of cases detected and treated was the same. abstract_id: PUBMED:4012175 Prospects for improved laboratory diagnoses of treponemal infections and species differentiation. The serologic diagnosis of treponemal infections has depended in the past on a variety of tests in which specificity was defined on an epidemiologic rather than on an immunologic basis. The lipoidal antigen tests possess no immunologic specificity. Tests based on whole treponemal antigens, although they do have some immunologic specificity, react with antibodies other than those generated in the course of syphilis and yaws infections. Recent developments in biotechnology now permit the identification of immunologically specific antigens in Treponema pallidum, and cloning of appropriate genetic information in Escherichia coli has led to the production of pure specific reagents. These developments will finally place the serologic diagnosis of treponemal infections on a sound immunologic basis. abstract_id: PUBMED:13523399 Treponemal serologic tests; experiences of the Bacteriology Laboratory, California State Department of Public Health. In a study of the relationship of clinical impression regarding syphilis and age, sex and pregnancy status to treponemal serologic test reactivity, it was noted that in diagnostic "problem cases" the standard lipid serologic test titers did not differentiate between syphilitic and biologic false positive reactors.
Preliminary data indicated that heroin addiction may be a source of biologic false reactions and that pregnant women with standard serologic test reactivity have a lower treponemal reactivity rate than other women with lipid serologic reactivity. abstract_id: PUBMED:5327851 Fluorescent treponemal antibody tests. A summary and comparison. A comparison of current serologic tests for syphilis shows that treponemal tests are preferable to reagin tests in detecting specific antibodies, but that reagin tests are best for determining the response to treatment. The newly developed FTA-absorption technique is suggested as a reliable, inexpensive test for treponemal antibodies. abstract_id: PUBMED:7652994 Treponemal infection among children in Ramotswa, Botswana. A serological study. In Botswana in southern Africa, an area with a high prevalence of syphilis, non-venereal treponematoses used to be prevalent. In the present study sera from 136 children (0-18 years) were analysed to evaluate whether infection with non-venereal treponematoses during childhood could explain the high prevalence of treponemal seropositivity found in adults. In the age group 0-14 years, seropositivity was demonstrated in one (1%) of 87 children, compared to 10 (20%) of 49 children in the age group 15-18 years, a statistically significantly higher prevalence. All cases of seropositivity were due to active infection. The local laboratory in Botswana failed to diagnose four (36%) of the 11 cases of active syphilis when the VDRL-test alone was used. We conclude that no serological indication of non-venereal treponematoses was found in the examined children, and that syphilis was the cause of the high prevalence of treponemal infection among the sexually active adults in Botswana. It is recommended that both the VDRL-test and the TPHA-test are used in screening for syphilis in Botswana. Sexually transmitted disease campaigns directed at the youth in Botswana should be given high priority. abstract_id: PUBMED:9858352 Treponemal specific tests for the serodiagnosis of syphilis. Syphilis and HIV Study Group. Objectives: To determine the rate of concordance of the Microhemagglutination Assay for Antibodies to T. pallidum (MHA-TP) and the Fluorescent Treponemal Antibody-Absorption test (FTA-ABS) prior to therapy in patients with early stage syphilis and to assess the incidence of and associated risk factors for seroreversion of these treponemal specific tests during the first year after therapy for early syphilis. Design: Multicenter, prospective, cohort treatment study of patients with early syphilis. Methods: Five hundred twenty-five patients were enrolled in a study to evaluate the response of early syphilis to either benzathine penicillin 2.4 million units intramuscularly once or this therapy plus amoxicillin 2 g and probenecid 500 mg orally both three times daily for 10 days. Serologic and clinical follow-up was conducted at intervals over 1 year. MHA-TP and FTA-ABS tests were performed on serologic specimens from each patient visit. Results: Enrollment specimens showed 5% discordant MHA-TP and FTA-ABS results with 85% of these demonstrating a nonreactive MHA-TP. This occurred most commonly in primary syphilis. In patients who had a 1-year serologic follow-up with FTA-ABS or MHA-TP, seroreversion occurred in 9% and 5% of cases, respectively. No association between HIV-seropositivity and TST seroreversion was demonstrated. Conclusion: The MHA-TP may be less sensitive than the FTA-ABS for identifying patients with primary syphilis.
Treponemal specific tests may become nonreactive during the first year after therapy for early syphilis. abstract_id: PUBMED:20687840 Novel Treponema pallidum serologic tests: a paradigm shift in syphilis screening for the 21st century. The mainstay of diagnosis for Treponema pallidum infections is based on nontreponemal and treponemal serologic tests. Many new diagnostic methods for syphilis have been developed, using specific treponemal antigens and novel formats, including rapid point-of-care tests, enzyme immunoassays, and chemiluminescence assays. Although most of these newer tests are not yet cleared for use in the United States by the Food and Drug Administration, their performance and ease of automation have promoted their application for syphilis screening. Both sensitive and specific, new screening tests detect antitreponemal IgM and IgG antibodies by use of wild-type or recombinant T. pallidum antigens. However, these tests cannot distinguish between recent and remote or treated versus untreated infections. In addition, the screening tests require confirmation with nontreponemal tests. This use of treponemal tests for screening and nontreponemal serologic tests as confirmatory tests is a reversal of long-held practice. Clinicians need to understand the science behind these tests to use them properly in syphilis management. abstract_id: PUBMED:7048873 Interpreting serologic tests for syphilis. Since syphilis is most often diagnosed by serologic studies, the correct interpretation of these tests is critical. Serologic tests are classified as nontreponemal or treponemal, according to the antigen employed. Flocculation tests are generally used for routine screening because of their simplicity. The treponemal procedures are more specific, but false-positive results may still occur. Results vary with the procedure, stage of disease and treatment. Significant difficulties remain with the serologic diagnosis of neurosyphilis and congenital syphilis. abstract_id: PUBMED:36817920 Performance of the nontreponemal tests and treponemal tests on cerebrospinal fluid for the diagnosis of neurosyphilis: A meta-analysis. Background: Nontreponemal and treponemal tests for analyzing cerebrospinal fluid to confirm the existence of neurosyphilis have been widely used, so we aim to evaluate and compare their performance on the cerebrospinal fluid in the diagnosis of neurosyphilis. Methods: We conducted a systematic literature search on five databases and utilized a bivariate random-effects model to perform the quantitative synthesis. Results: Nontreponemal tests demonstrated a pooled sensitivity of 0.77 (95% CI: 0.68-0.83), a pooled specificity of 0.99 (95% CI: 0.97-1.00), and a summary AUC of 0.97 (95% CI: 0.95-0.98). The pooled sensitivity, pooled specificity, and summary AUC of treponemal tests were 0.95 (95% CI: 0.90-0.98), 0.85 (95% CI: 0.67-0.94), and 0.97 (95% CI: 0.95-0.98), respectively. The pooled specificity of all nontreponemal tests varied minimally (ranging from 0.97 to 0.99), with TRUST (0.83) having a higher pooled sensitivity than VDRL (0.77) and RPR (0.73). Among all treponemal tests, EIA has outstanding diagnostic performance with a pooled sensitivity of 0.99 and a pooled specificity of 0.98. Conclusion: Nontreponemal tests exhibited a higher pooled specificity, and treponemal tests exhibited a higher pooled sensitivity in diagnosing neurosyphilis on cerebrospinal fluid. TRUST may be a satisfactory substitute for VDRL.
EIA is a prospective diagnostic tool that deserves further study in the future. Our study may be useful to clinical laboratories in selecting appropriate serological tests on the cerebrospinal fluid for the diagnosis of neurosyphilis. abstract_id: PUBMED:22421696 The performance of cerebrospinal fluid treponemal-specific antibody tests in neurosyphilis: a systematic review. Background: No single laboratory test is both sensitive and specific to diagnose neurosyphilis. Several major clinical guidelines suggest that negative cerebrospinal fluid (CSF) treponemal-specific antibody tests rule out the diagnosis of neurosyphilis. Our aim was to systematically review the literature and describe the performance of treponemal-specific CSF antibody tests when diagnosing neurosyphilis. Methods: Two reviewers independently assessed studies published in electronic databases, trial registries, and bibliographies for content and quality. Entry criteria included the assessment of treponemal-specific CSF tests currently used in clinical practice, and the use of standard criteria for both diagnosis and exclusion of neurosyphilis. The primary outcomes were sensitivity, specificity, and negative predictive values of treponemal-specific CSF antibody tests. Results: Of 141 unique citations, 18 studies were included in the systematic review. Due to significant heterogeneity among studies, we were unable to generate pooled summary statistics. Seven different treponemal-specific tests were assessed. Of those, 13 studies evaluated the CSF FTA-ABS (fluorescent treponemal antibody-absorbed) and 9 evaluated the CSF fluorescent treponemal antibody. The performance estimates of these tests were highly variable and depended on the choice of negative and positive controls. No single test had perfect sensitivity, thus the negative predictive value was dependent on the specificity of the test and the prevalence (i.e., pretest probability) of neurosyphilis: the higher the prevalence, the lower the negative predictive value. Few studies included HIV-infected persons. Conclusions: A negative CSF-treponemal-specific antibody test may not exclude the diagnosis of neurosyphilis when the clinical suspicion for neurosyphilis is high. Answer: The order of serologic tests for syphilis screening, whether to use treponemal or nontreponemal tests first, does indeed matter in terms of health and economic outcomes. A study analyzed the outcomes of different screening algorithms for syphilis in both low and high prevalence settings. The two-step algorithms compared were nontreponemal followed by treponemal (Nontrep-First) and treponemal followed by nontreponemal (Trep-First). The study found that the Nontrep-First algorithm was more cost-effective in a low prevalence setting and more cost-saving in a high prevalence setting. This was largely due to the higher number of confirmatory tests required for the Trep-First algorithm, even though the number of cases detected and treated was the same for both algorithms (PUBMED:21183862). The importance of the order of testing is further supported by the fact that treponemal tests are more specific and can detect specific antibodies, while nontreponemal tests are better for determining the response to treatment. The fluorescent treponemal antibody-absorption technique is suggested as a reliable test for treponemal antibodies (PUBMED:5327851).
However, the use of treponemal tests for screening and nontreponemal tests as confirmatory is a reversal of long-held practice, and clinicians need to understand the science behind these tests to use them properly in syphilis management (PUBMED:20687840). Moreover, the performance of treponemal-specific tests in neurosyphilis diagnosis has been evaluated, with a systematic review indicating that no single test is both sensitive and specific for diagnosing neurosyphilis. The negative predictive value of a treponemal-specific CSF antibody test is dependent on the specificity of the test and the prevalence of neurosyphilis (PUBMED:22421696). This highlights the complexity of syphilis diagnosis and the need for careful consideration of test selection and order.
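To make the cost mechanics behind PUBMED:21183862 concrete, the short Python sketch below works through a hypothetical screening cohort. Every number in it (cohort size, prevalence, test sensitivities and specificities, and the share of treponemal reactivity left over from past treated infection) is an illustrative assumption rather than a figure from the study; the sketch only shows why a treponemal-first strategy tends to generate more confirmatory tests when treponemal tests remain reactive for life.

# Hypothetical two-step syphilis screening comparison (all parameters assumed).
cohort = 10_000
prevalence = 0.01        # assumed rate of active, untreated syphilis
past_treated = 0.02      # assumed share with lifelong treponemal reactivity

nontrep_sens, nontrep_spec = 0.85, 0.98   # assumed nontreponemal performance
trep_sens, trep_spec = 0.95, 0.99         # assumed treponemal performance

active = cohort * prevalence
not_active = cohort - active

# Nontrep-First: confirmatory (treponemal) tests go to nontreponemal positives.
confirm_nontrep_first = active * nontrep_sens + not_active * (1 - nontrep_spec)

# Trep-First: the treponemal screen also flags remotely treated infections,
# all of which then need a confirmatory nontreponemal test.
confirm_trep_first = (active * trep_sens
                      + not_active * past_treated
                      + not_active * (1 - trep_spec))

print(f"Confirmatory tests, Nontrep-First: {confirm_nontrep_first:.0f}")
print(f"Confirmatory tests, Trep-First:    {confirm_trep_first:.0f}")

Under these assumed inputs the treponemal-first screen sends roughly 40% more specimens on to confirmation (about 392 versus 283 per 10,000 screened), which mirrors the cost driver the decision analysis identifies without reproducing its actual parameters.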
Instruction: Are amniotic fluid C-reactive protein and glucose levels, and white blood cell counts at the time of genetic amniocentesis related with preterm delivery? Abstracts: abstract_id: PUBMED:16318616 Are amniotic fluid C-reactive protein and glucose levels, and white blood cell counts at the time of genetic amniocentesis related with preterm delivery? Objective: To compare women with spontaneous preterm delivery before 37 weeks and women who delivered at term with respect to amniotic fluid C-reactive protein (CRP), glucose levels, and white blood cell counts at the time of genetic amniocentesis. Study Design: The study was conducted on 216 pregnant women who underwent genetic amniocentesis between the 15th and 18th weeks of gestation at Baskent University Obstetrics and Gynecology Department. All patients were followed until delivery for the occurrence of pregnancy complications. Indications for amniocentesis included abnormal triple test results showing increased risk for Down's syndrome, advanced maternal age and sonographic findings indicative for chromosomal abnormalities. The samples were carried immediately to the laboratory for cytogenetic and biochemical examination. Women with spontaneous preterm delivery before 37 weeks (n = 20) and those who delivered at term (n = 196) were compared with respect to some maternal and infant characteristics, amniotic fluid C-reactive protein, glucose levels, and amniotic fluid white blood cell counts. Results: During the study period 244 patients underwent amniocentesis. A chromosomal abnormality was present in 11 patients. 1 patient had a spontaneous pregnancy loss within 3 weeks after the procedure and 16 patients were delivered for fetal or maternal indications (preeclampsia, fetal growth restriction, placenta previa). The remaining 216 women were included in the study and investigated for the risk of preterm delivery. The prevalence of spontaneous preterm delivery before 37 weeks was 9.3% (20/216). There were no significant differences between the preterm delivery and the term delivery groups with respect to C-reactive protein levels and white blood cell counts. Mean amniotic glucose levels were significantly lower in the preterm delivery group (P<0.05). Amniotic fluid glucose levels of ≤46 mg/dL had a sensitivity of 100% and NPV of 100%. Conclusion: Amniotic fluid glucose levels at the time of genetic amniocentesis are lower in women with spontaneous preterm delivery before 37 weeks compared to those who delivered at term. Amniotic fluid glucose levels of ≤46 mg/dL at the time of genetic amniocentesis may be more sensitive, cheaper and have higher negative predictive value than C-reactive protein levels and white blood cell counts for the prediction of patients in spontaneous preterm labor. The greatest benefit of amniotic fluid glucose testing might be when the physician judges the patient to be at low risk for preterm delivery. abstract_id: PUBMED:16390808 C-reactive protein concentration in vaginal fluid as a marker for intra-amniotic inflammation/infection in preterm premature rupture of membranes. Objective: The purpose of this study was to determine whether C-reactive protein (CRP) concentrations in vaginal fluid can identify patients with intra-amniotic inflammation/infection (IAI) and predict adverse outcome in preterm premature rupture of membranes (PROM).
Methods: The study population consisted of 121 singleton pregnant women with preterm PROM (≤36 weeks of gestation) who had an amniocentesis and vaginal fluid collection. A Dacron polyester-tipped applicator was soaked with vaginal fluid for 10 seconds and diluted with 1 mL buffer solution. Amniotic fluid was cultured for aerobic and anaerobic bacteria, as well as mycoplasmas. Vaginal fluid CRP and amniotic fluid matrix metalloproteinase-8 (MMP-8) were determined by specific immunoassays. IAI was defined as an amniotic fluid MMP-8 concentration >23 ng/mL and/or a positive amniotic fluid culture. Nonparametric tests and survival techniques were used for statistical analysis. Results: Patients with IAI had a significantly higher median vaginal fluid CRP concentration than those without IAI (median (range), 7.8 (0.1-1310.1) ng/mL vs. 1.0 (0.1-319.4) ng/mL, p < 0.005). The median amniotic fluid white blood cell (WBC) count was significantly higher in patients with a vaginal fluid CRP concentration of >10 ng/mL than in those with a lower concentration (median (range), 82.5 (0-8640) cells/mm3 vs. 2 (0 to >1000) cells/mm3, p < 0.001). Patients with vaginal fluid CRP concentration of >10 ng/mL had a significantly shorter sampling-to-delivery interval and higher rates of preterm delivery within five days, funisitis, and histologic chorioamnionitis than did those with a vaginal fluid CRP concentration below this cut-off. A vaginal fluid CRP cut-off of 10 ng/mL had a specificity of 89% and a sensitivity of 45% in the identification of IAI. Conclusion: An elevated CRP concentration in vaginal fluid collected by polyester-tipped applicator is a risk factor for intra-amniotic inflammation/infection and impending preterm delivery in preterm PROM. abstract_id: PUBMED:9704784 Is amniotic fluid analysis the key to preterm labor? A model using interleukin-6 for predicting rapid delivery. Objective: Our purpose was to create a model for predicting amnionitis and rapid delivery in preterm labor patients by use of amniotic fluid interleukin-6 and clinical parameters. Study Design: Amniotic fluid was cultured and analyzed, and a clinical score (incorporating gestational age, amniotic fluid Gram stain, glucose, leukocyte esterase, and maternal serum C-reactive protein) was determined in 111 patients diagnosed with preterm labor. Statistical analysis involved t tests, chi-square, logarithmic regression, and multivariate regression analysis (P ≤ .05). Results: The incidence of positive amniotic fluid cultures was 8.7% (9 of 103 patients). Patients with positive cultures of the amniotic fluid had a shorter delivery interval (4.8 ± 7.5 vs 28.9 ± 25.4 days, P < .001). Patients with elevated amniotic fluid interleukin-6 (≥7586 pg/ml) were more likely to have a positive amniotic fluid culture (relative risk = 8.8, 95% confidence interval = 1.6 to 47.4, P < .001) and to be delivered within 2 days (relative risk = 16.8, 95% confidence interval = 4.5 to 62.7, P < .001). Stepwise multivariate regression analysis yielded a model using interleukin-6, cervical dilatation, and gestational age (r² = 0.63, P < .001) with a specificity of 100% for predicting delivery within 2 days of amniocentesis. Conclusions: A mathematical model using maternal amniotic fluid interleukin-6 seems to be a useful clinical tool for quantifying the interval to preterm birth for patients in preterm labor.
abstract_id: PUBMED:9120746 Correlation between cytokine levels of amniotic fluid and histological chorioamnionitis in preterm delivery. The aim of this study was to investigate the correlation between the cytokine levels in the amniotic fluid (AF) and the histological stage of chorioamnionitis (CAM) in premature labor. AF of 6 cases (7 samples of AF were obtained as one was a twin pregnancy) in whom CAM was diagnosed histologically, and 12 cases without CAM were included in this study. Amniotic fluid was obtained within 24 hours prior to delivery. Cytokine levels (IL-2, -4, -6, TNF-alpha, IFN-gamma) in AF were measured by an ELISA method. Levels of IL-2 and -6 in the CAM-positive group (mean ± SE, 52.9 ± 83.9 pg/ml and 20,537.9 ± 8853.7 pg/ml, respectively) were higher than those in the CAM-negative group (i.e., undetectable and 65.6 ± 27.5, respectively) with a statistical significance of p < 0.05 and p < 0.001, respectively. There was a positive linear relationship between IL-6 levels of AF and the placental histological inflammatory stages of Blanc in the CAM-positive group. From these results it would appear that the IL-6 level in AF is the most sensitive test in the detection of extraamniotic infection or intraamniotic infection in preterm labor with intact membranes and also indicates the severity of infection. abstract_id: PUBMED:29709964 Bacterial-Culture-Negative Subclinical Intra-Amniotic Infection Can Be Detected by Bacterial 16S Ribosomal-DNA-Amplifying Polymerase Chain Reaction. Comprehensive analysis of bacterial DNA has enhanced our understanding of the maternal microbiome and its disturbances in preterm birth although clinical utility of these techniques remains to be determined. We tested whether a broad-range polymerase chain reaction (PCR) technique is useful for detection of culture-negative intra-amniotic infection (IAI). Participants were pregnant women who underwent amniocentesis for the management of preterm birth with or without premature rupture of membranes. Bacterial 16S ribosomal DNA in the amniotic fluid was detected by PCR using primers for a sequence shared by Ureaplasma, Mycoplasma, and other bacteria. Sixty-four women were enrolled, 9 of whom were culture-positive. Of the 55 culture-negative women, 13 were PCR-positive and exhibited significantly higher interleukin 6 and 8 levels and lower glucose levels in the amniotic fluid than the remaining 42 women did, who were PCR- and culture-negative. C-reactive protein concentrations were elevated in cord and neonatal blood in the culture-negative, PCR-positive subgroup, whereas maternal C-reactive protein concentrations, white blood cell counts, and body temperatures were alike. The placental inflammation score (Blanc stage ≥2) was significantly higher in the PCR-positive than in the PCR-negative subgroup. This PCR-based method could be useful for identifying bacterial-culture-negative subclinical IAI and could help with predicting the severity of IAI. abstract_id: PUBMED:25762201 Non-invasive prediction of intra-amniotic infection and/or inflammation in patients with cervical insufficiency or an asymptomatic short cervix (≤15 mm). Purpose: To identify non-invasive parameters to predict intra-amniotic infection and/or inflammation (IAI) in patients with cervical insufficiency or an asymptomatic short cervix (≤15 mm). Methods: This retrospective cohort study included 72 asymptomatic women with cervical insufficiency (n = 54) or an asymptomatic short cervix (n = 18) at 17-28 weeks.
Maternal blood was collected for the determination of the C-reactive protein (CRP) level and white blood cell (WBC) count, and sonography was performed to measure the cervical length shortly after amniocentesis. Amniotic fluid (AF) was cultured and interleukin-6 (IL-6) level and WBC count were determined. Results: The prevalence of intra-amniotic inflammation and a positive AF culture was 22.2% (16/72) and 8.3% (6/72), respectively. The best cut-off value for AF IL-6 in predicting the presence of intra-amniotic infection was ≥7.6 ng/mL and was used to diagnose the presence of intra-amniotic inflammation. Women with intra-amniotic inflammation, regardless of culture results, were at increased risk for preterm delivery and adverse outcomes compared to women without intra-amniotic inflammation. In multivariable regression, CRP was the only non-invasive variable statistically significantly associated with IAI. Moreover, the area under the curves for the CRP and AF WBC were not significantly different. Conclusions: In women with cervical insufficiency or a short cervix, the risk for IAI can be predicted fairly and non-invasively by measurements of serum CRP. Overall, this non-invasive parameter appears to have similar accuracy to the AF WBC counts for predicting IAI. abstract_id: PUBMED:28167848 Amniotic Fluid Infection in Preterm Pregnancies with Intact Membranes. Introduction. Intra-amniotic infection (IAI) is a major cause of preterm labor and adverse neonatal outcome. We evaluated amniotic fluid (AF) proteolytic cascade-forming biomarkers in relation to microbial invasion of the amniotic cavity (MIAC) and IAI in preterm pregnancies with intact membranes. Material and Methods. Amniocentesis was performed in 73 women with singleton pregnancies; 27 with suspected IAI; and 46 controls. AF biomarkers were divided into three cascades: Cascade 1: matrix metalloproteinase-8 (MMP-8), MMP-9, myeloperoxidase (MPO), and interleukin-6; Cascade 2: neutrophil elastase (HNE), elafin, and MMP-9; Cascade 3: MMP-2, tissue inhibitor of matrix metalloproteinases-1 (TIMP-1), MMP-8/TIMP-1 molar ratio, and C-reactive protein (CRP). MMP-8 was measured by an immunoenzymometric assay and the others were measured by ELISA. Standard biochemical methods, molecular microbiology, and culture techniques were used. Results. MMP-8, MMP-9, MPO, elafin, and TIMP-1 concentrations were higher in IAI suspected cases compared to controls and also in IAI suspected cases with MIAC compared to those without MIAC when adjusted by gestational age at amniocentesis. All biomarkers except elafin and MMP-2 had a sensitivity of 100% with thresholds based on ROC curves. Odds ratios of biomarkers for MIAC were 1.2-38 and 95% confidence intervals 1.0-353.6. Conclusions. Neutrophil based AF biomarkers were associated with IAI and MIAC. abstract_id: PUBMED:7612097 Markers of infection and their relationship to preterm delivery. In this study we evaluated different markers of infection and their relationship to preterm delivery. Forty-four consecutive women with singleton pregnancies in uncomplicated preterm labor were investigated. C-reactive protein (CRP) in peripheral maternal blood, amniotic fluid cytokines, amniotic fluid leukocyte count, and amniotic fluid culture were performed in all patients. Thirty-six patients responded to standard tocolytic therapy and delivered after 34 weeks' gestation. In eight patients treatment failed and they delivered before 34 weeks' gestation.
Two of these eight patients had a positive amniotic fluid culture for Ureaplasma urealyticum. The positive culture was accompanied by an elevated neutrophil count in the amniotic fluid. Elevated amniotic fluid levels of tumor necrosis factor (TNF) (more than 23 pg/mL), interleukin-6 (IL-6) (more than 2292 pg/mL) and interleukin-8 (more than 164 pg/mL) correlated with early preterm delivery. CRP levels in serum had a low sensitivity (38%) but a high specificity (94%) in predicting preterm delivery. This study indicates that preterm labor can be initiated by infection. Markers of infection obtained by amniocentesis have a better sensitivity and positive predictive value than noninvasive markers. Elevated IL-6 (more than 2292 pg/mL) seems to be the best predictor for preterm delivery, with a sensitivity of 75% and a specificity of 97%. abstract_id: PUBMED:22085152 Prediction of imminent preterm delivery in women with preterm premature rupture of membranes. Aims: To develop a model based on non-invasive clinical parameters to predict the probability of imminent preterm delivery (delivery within 48 h) in women with preterm premature rupture of membranes (PPROM), and to determine if additional invasive test results improve the prediction of imminent delivery based on the non-invasive model. Methods: Transvaginal ultrasonographic assessment of cervical length was performed and maternal serum C-reactive protein (CRP) and white blood cell (WBC) count were determined immediately after amniocentesis in 102 consecutive women with PPROM at 23-33+6 weeks. Amniotic fluid (AF) obtained by amniocentesis was cultured and interleukin-6 (IL-6) levels and WBC counts were determined. Results: Serum CRP, cervical length, and gestational age were chosen for the non-invasive model (model 1), which has an area under the curve (AUC) of 0.804. When adding AF IL-6 as an invasive marker to the non-invasive model, serum CRP was excluded from the final model (model 2) as not significant, whereas AF IL-6, cervical length, and gestational age remained in model 2. No significant difference in AUC was found between models 1 and 2. Conclusions: The non-invasive model based on cervical length, gestational age, and serum CRP is highly predictive of imminent delivery in women with PPROM. However, invasive test results did not add predictive information to the non-invasive model in this setting. abstract_id: PUBMED:17624934 Validity of amniotic fluid index in preterm rupture of membranes. Background: Preterm premature rupture of membranes (pPROM) complicates up to one-third of preterm deliveries. We studied the Amniotic Fluid Index (AFI) in order to ascertain its validity as a predictive variable of maternal-fetal outcome in pregnancies complicated by pPROM. Study Design: One hundred and fourteen pregnant women with gestational age between 24 and 34 weeks' gestation at the time of pPROM were included. Patients were categorized into two groups on the basis of AFI value (AFI <5 cm, n=63, or AFI ≥5 cm, n=51) performed at the time of admission. Results: AFI numeric values were significantly related to the following maternal-neonatal variables: high maternal body temperature (P≤0.001), high maternal white blood cell (WBC) count (P≤0.001) and to Apgar score >7 at 5 min (P≤0.001). No other significant correlation between the AFI score and other variables (i.e., maternal heart rate, neonatal WBC count or neonatal C-reactive protein [CRP]) was found.
Latency in days from pPROM was significantly lower in the group with AFI <5 cm (P<0.05). Interestingly, AFI <5 cm was present in 66% of pregnancies complicated by chorioamnionitis (8/12), and in 70% of neonates affected by RDS at birth (19/27). Conclusion: An AFI score <5 cm at admission may be a useful prognostic variable in the management of third trimester pregnancies affected by pPROM. Answer: The study conducted on 216 pregnant women who underwent genetic amniocentesis between the 15th and 18th weeks of gestation found that there were no significant differences between the preterm delivery and the term delivery groups with respect to C-reactive protein levels and white blood cell counts. However, mean amniotic glucose levels were significantly lower in the preterm delivery group (P<0.05). Amniotic fluid glucose levels of ≤ 46 mg/dL had a sensitivity of 100% and NPV of 100% for predicting spontaneous preterm labor, suggesting that amniotic fluid glucose levels at the time of genetic amniocentesis may be more sensitive and have a higher negative predictive value than C-reactive protein levels and white blood cell counts for the prediction of patients in spontaneous preterm labor (PUBMED:16318616).
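The 100% sensitivity and 100% NPV quoted in the answer above follow mechanically from a 2x2 table in which no preterm case falls above the glucose cut-off. The minimal Python sketch below illustrates this; the split of the 196 term deliveries into flagged and unflagged groups is an assumption for illustration, not the study's published table.

# Illustrative 2x2 table for the <=46 mg/dL amniotic fluid glucose cut-off.
TP = 20    # preterm deliveries at or below the cut-off (all 20 cases captured)
FN = 0     # preterm deliveries missed by the cut-off
FP = 60    # term deliveries flagged by the cut-off (assumed split)
TN = 136   # term deliveries above the cut-off (assumed split)

sensitivity = TP / (TP + FN)   # 1.0: no preterm case is missed
npv = TN / (TN + FN)           # 1.0: a negative result rules out preterm birth

print(f"Sensitivity: {sensitivity:.0%}, NPV: {npv:.0%}")

Because FN = 0, both quantities equal 100% regardless of how the 196 term deliveries split between FP and TN, which is why the cut-off reads best as a rule-out test: its strength lies in reassuring patients judged to be at low risk rather than in confirming who will deliver preterm.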
Instruction: Do Arterial Hemodynamic Parameters Predict Cognitive Decline Over a Period of 2 Years in Individuals Older Than 80 Years Living in Nursing Homes? Abstracts: abstract_id: PUBMED:25783622 Do Arterial Hemodynamic Parameters Predict Cognitive Decline Over a Period of 2 Years in Individuals Older Than 80 Years Living in Nursing Homes? The PARTAGE Study. Objectives: Several studies have highlighted a link between vascular alterations and cognitive decline. The PARTAGE study showed that arterial stiffness as evaluated by carotid-femoral pulse wave velocity (cfPWV) was associated with a more pronounced cognitive decline over a 1-year period in very old frail institutionalized individuals. The aim of the present analysis was to assess the role of hemodynamic parameters, such as blood pressure (BP), heart rate (HR), cfPWV, and central/peripheral pulse pressure amplification (PPA), in cognitive decline over 2 years in very old frail individuals. Methods: A total of 682 individuals from the PARTAGE study cohort, aged older than 80 years (mean age at inclusion: 87.5 ± 5.0 years) and living in French and Italian nursing homes, were analyzed. Mini-Mental State Examination (MMSE) score was assessed at baseline (BL) and at the end of the first and second year of follow-up (2y-FU). Those with a decrease in MMSE of 3 or more points between BL and 2y-FU were considered as "decliners." The cfPWV and PPA at baseline were assessed with an arterial tonometer. Results: After adjustment for baseline MMSE, HR, body mass index, age, education level, and activities of daily living (ADLs), cfPWV was higher and PPA lower in "decliners" compared with "nondecliners," whereas BP did not differ between the 2 groups. Logistic multivariate analysis also revealed that high cfPWV, low PPA, high HR, and low ADLs were all determinants of MMSE decline. Conclusion: This 2-year longitudinal study in very old institutionalized individuals shows that arterial stiffness and high HR enabled us to identify subjects at higher risk of cognitive decline, whereas BP alone did not appear to have a significant predictive value. These findings highlight the contribution of vascular determinants in cognitive decline even in this very old population. abstract_id: PUBMED:34643343 Exploring the use of music as an intervention for older people living in nursing homes. Background: Enjoying cultural events such as musical performances is a human right as well as contributing to quality of life. However, older people who live in nursing homes are often excluded from such events. Music interventions for older people with cognitive decline have been shown to have a positive effect on their mood and behaviour, particularly in terms of anxiety, agitation and irritability. Aim: To investigate the effect of musical interventions in nursing homes from the perspective of older people, their relatives and caregivers. Method: Musical performances were held at 11 nursing homes in Sweden. These performances were followed by semi-structured interviews that aimed to capture the experiences of the older people, their relatives and caregivers. The interviews were analysed by qualitative thematic analysis. Findings: Four relational themes were generated from the analysis: music enhances well-being for the body and soul, music evokes emotions and a 'spark of life', music adds a 'silver lining' to everyday life, and music inspires a journey of the imagination through time and space.
Conclusion: The music concerts had a positive effect on older people, their relatives and caregivers. Providing cultural encounters in nursing homes is an important caring intervention. abstract_id: PUBMED:21450208 Pulse wave velocity is associated with 1-year cognitive decline in the elderly older than 80 years: the PARTAGE study. Objectives: Studies have shown the importance of vascular risk factors in the pathogenesis and evolution of cognitive disorders and dementia especially among the very elderly. The aim of the present longitudinal 1-year cohort analysis was to evaluate the influence of arterial stiffness on cognitive decline in institutionalized subjects older than 80 years. Design: Longitudinal study. Setting: Nursing homes in France and Italy. Participants: A total of 873 subjects (79% women), aged 87 ± 5 years were included in this longitudinal analysis from the PARTAGE cohort. Measurements: All completed the Mini-Mental Status Examination (MMSE) on the 2 visits over 1 year and underwent a measurement of carotid-femoral pulse wave velocity (PWV), an indicator of aortic stiffness. Clinical and 3-day self-measurements of blood pressure (BP) and activities of daily living (ADL) were evaluated at the baseline visit. Results: According to PWV tertiles and after adjustment for baseline MMSE, mean BP (MBP), age, education level, and ADL, Δ MMSE was -1.42 ± 3.60 in the first tertile, -1.78 ± 4.08 in the second tertile, and -2.20 ± 3.98 in the third tertile (P < .03). Similar analyses with self-measured MBP failed to show any association between BP and MMSE decline. Conclusion: This 1-year longitudinal study in institutionalized patients older than 80 years shows that the higher the aortic stiffness, the more pronounced the decline in cognitive function. These results point out the interest of measuring PWV, a simple noninvasive and validated method for arterial stiffness assessment, to detect high-risk patients for cognitive decline. abstract_id: PUBMED:37081387 Effect of self-determination theory-based integrated creative art (SDTICA) program on older adults with mild cognitive impairment in nursing homes: Study protocol for a cluster randomised controlled trial. Background: The cognitive benefits of early non-pharmacological approaches have been demonstrated in older adults with mild cognitive impairment (MCI). However, older adults living in nursing homes have more severe cognitive impairment problems and lower initiative and compliance to participate in complex interventions. Hence, it is important to investigate more attractive and sustainable methods to prevent or delay cognitive decline. The present study adopts the self-determination theory (SDT) as a theoretical framework to innovatively develop an integrated art-based intervention for older adults with MCI in nursing homes in China and aims to evaluate its effects on cognitive function, mental health, and other health-related outcomes. Methods: The study is a nursing home-based, cluster randomised controlled trial (RCT) that targets older adults (aged ≥ 60 years) with MCI in Fuzhou City, China. All nursing homes in the area covered by Fuzhou City are invited to participate. Eligible nursing homes are randomised to one of two groups: intervention group (receive a 14-week, 27-session intervention) and waitlist control group (receive the usual care).
The SDT-based integrated creative art (SDTICA) program adopts the SDT as a theoretical framework to innovatively develop an integrated art-based intervention for older adults with MCI in nursing homes. The primary (global cognitive function and psychological indicator) and secondary (daily activity function, social function, and specific domains of cognitive function) outcomes will be measured at baseline, after the intervention, and during follow-up. Discussion: This study aims to evaluate the effects of the SDTICA program on neuropsychological outcomes in older adults with MCI and provide scientific evidence for art-based non-pharmacologic interventions in nursing homes, which may reduce dementia risk in older adults with MCI. Trial Registration: The trial was prospectively registered at the Chinese Clinical Trials Registry with the registration number ChiCTR2200061681 on 30 June 2022. abstract_id: PUBMED:15852074 Nursing needs among recipients of community health care. Background: We aimed at investigating whether disabled old people can get sufficient care in residential facilities for the elderly. Materials And Methods: All residents in our community's care facilities for the elderly in 2001 were registered. Those living in their own homes with a substantial need for care were also registered. Burden of care was assessed by six items measuring activities of daily life and two items measuring cognitive decline. Results: 309 persons were registered; mean age was 84. Those living in residential care facilities staffed 24 hours a day represented the highest average burden of care, though many elderly living in their own homes also need a great deal of care. In our community, the number of nursing home beds set aside for short-term stays has decreased from 24 to 11 over an eight-year period. The burden of care has increased since 1992 in nursing homes as well as in other residential care units. Interpretation: Group-dwelling units staffed around the clock can be a good alternative to nursing homes for many demented patients. While a high number of such units have been built, the local authorities have found it increasingly difficult to provide a sufficient number of nursing home beds for short-term stays. Most changes observed can be related to the growing number of inhabitants above 80 years of age. abstract_id: PUBMED:34465282 Association of Body Composition with Functional Capacity and Cognitive Function in Older Adults Living in Nursing Homes. Background: Older adults living in nursing homes have an increased risk of adverse outcomes. However, the role of body composition in vital health and quality of life parameters such as functional capacity and cognitive function is less studied in this group of older adults compared to community-dwelling counterparts. Objective: The aim of the present study was to examine the association of body composition with functional capacity and cognitive function in nursing home residents. Methods: Fifty-three older adults (82.8 ± 7.3 years) were enrolled in this study and they underwent body composition evaluation, functional capacity and cognitive function measurements. Results: The results showed a high prevalence of obesity accompanied by functional capacity limitations and cognitive impairment in older adults living in nursing homes.
Partial correlations, controlling for age, showed that body fat percentage was positively correlated with sit-to-stand-5 (r = 0.310, p = 0.025) and timed-up-and-go (r = 0.331, p = 0.017), and negatively correlated with handgrip strength test results (r = -0.431, p < 0.001), whereas greater lean body mass was associated with better sit-to-stand-5 (r = -0.410, p = 0.003), handgrip strength (r = 0.624, p < 0.001) and cognitive function performance (r = 0.302, p = 0.037). Conclusions: These important associations reinforce the need to develop effective healthy lifestyle interventions targeting both lean mass and body fat to combat functional and cognitive decline in nursing home residents. abstract_id: PUBMED:36906613 Global prevalence of mild cognitive impairment among older adults living in nursing homes: a meta-analysis and systematic review of epidemiological surveys. Mild cognitive impairment (MCI) is the early stage of cognitive impairment between the expected cognitive decline of normal aging and the more serious decline of dementia. This meta-analysis and systematic review explored the pooled global prevalence of MCI among older adults living in nursing homes and its relevant factors. The review protocol was registered in INPLASY (INPLASY202250098). PubMed, Web of Science, Embase, PsycINFO, and CINAHL databases were systematically searched from their respective inception dates to 8 January 2022. The inclusion criteria were based on the PICOS acronym, as follows: Participants (P): Older adults living in nursing homes; Intervention (I): not applicable; Comparison (C): not applicable; Outcome (O): prevalence of MCI, or data from which the prevalence of MCI could be generated according to study-defined criteria; Study design (S): cohort studies (only baseline data were extracted) and cross-sectional studies with accessible data published in a peer-reviewed journal. Studies involving mixed resources, reviews, systematic reviews, meta-analyses, case studies, and commentaries were excluded. Data analyses were performed using Stata Version 15.0. A random-effects model was used to synthesize the overall prevalence of MCI. An 8-item instrument for epidemiological studies was used to assess the quality of included studies. A total of 53 articles were included involving 376,039 participants with a mean age ranging from 64.42 to 86.90 years from 17 countries. The pooled prevalence of MCI in older adults in nursing homes was 21.2% (95% CI: 18.7-23.6%). Subgroup and meta-regression analyses revealed that the screening tools used were significantly associated with MCI prevalence. Studies using the Montreal Cognitive Assessment (49.8%) had a higher prevalence of MCI than those using other instruments. No significant publication bias was found. Several limitations warrant attention in this study; for example, significant heterogeneity between studies remained and some factors associated with the prevalence of MCI were not examined due to insufficient data. Adequate screening measures and allocation of resources are needed to address the high global prevalence of MCI among older adults living in nursing homes. abstract_id: PUBMED:38185903 Associations Between Physical Fitness, Cognitive Function, and Depression in Nursing Homes Residents Between 60-100 Years of Age in South-Western Poland. BACKGROUND Healthy aging depends on physical fitness, cognitive function, and emotional well-being. Reduced physical activity in the elderly impacts daily activities, increasing morbidity risk.
Cognitive decline affects learning, attention, and independence. Depression, prevalent among the elderly, correlates with loneliness and affects overall health. Physical fitness positively influences cognitive health and mood. This study examines these associations in Polish nursing home residents. MATERIAL AND METHODS We assessed 93 people aged 60-100 years living in nursing homes. The Short Physical Performance Battery (SPPB) test was used to assess physical fitness. The Abbreviated Mental Test Score (AMTS) was used to assess cognitive functions. The Geriatric Depression Scale (GDS) was used to assess depression. RESULTS In the SPPB test, the mean score was 4.85 points, indicating moderate limitations. On the AMTS, 55% of subjects had cognitive impairment. On the GDS scale, 44% of respondents had depressive symptoms. Seniors without mood disorders were characterized by faster gait compared to those with suspected depressive disorders (P=0.036). Men performed significantly better in the whole SPPB test (P=0.024) and in the standing up from a chair and gait speed tests (P=0.046, P<0.001) compared to women. We found a negative correlation between the AMTS test scores and the SPPB gait test scores and age (P<0.05) and a positive correlation between the SPPB gait test scores and the GDS scores (P<0.05). CONCLUSIONS Older nursing home residents in better emotional and cognitive state tended to have faster gait. Men tended to have a higher level of physical fitness compared to women. Older age was associated with worse cognitive state of the examined seniors. abstract_id: PUBMED:38185372 Associated factors to the cognitive function among Indonesian older adults living in nursing homes. Objective: Many older adults in Indonesia decide to live in nursing homes. Living in a nursing home has been associated with the incidence of cognitive decline in older adults that leads to decreasing ability to perform daily activities. This study aimed to determine the association between demographic and clinical characteristics with cognitive functions in older adults living in nursing homes in Indonesia. Methods: This study used a cross-sectional design and involved 60 older adults in a nursing home. Cognitive function was evaluated using the Montreal Cognitive Assessment (MoCA) instrument. Demographic and clinical characteristics such as age, education level, length of stay in the nursing home, as well as serum levels of brain-derived neurotrophic factor (BDNF) and dopamine were studied. Spearman-Rank test was used for data analysis. Results: Cognitive function of attention had a positive correlation with age (r=0.314, P=.015), length of stay in the nursing home (r=0.268, P=.038), and negative correlation with dopamine serum levels (r=-0.425, P=.001). The cognitive function of naming has a positive correlation with age (r=0.263, P=.042). Conclusions: Age, length of stay, and dopamine levels were associated with cognitive function in older adults living in nursing homes. Older adults should be assessed in terms of factors associated with cognitive function to inform cognitive improvement programs in nursing homes. abstract_id: PUBMED:24826428 Comparison of the nutritional status of residents in shared-housing arrangements and nursing homes: a secondary data analysis. Malnutrition and weight loss are special challenges in the care of older people, particularly those with dementia. In Germany, shared-housing arrangements (SHA) for older care-dependent people have evolved in recent years.
SHA are an alternative to traditional nursing homes. Despite the increase in the number of SHA, it remains unclear whether this setting is especially beneficial in terms of (mal-)nutrition. Therefore, the nutritional status of older people with and without dementia living in SHA and traditional nursing homes was compared. From 2010 to 2011, data were collected in various SHA in Berlin and in a nursing home in Schkeuditz/Saxony using standardized face-to-face interviews with nurses. In addition to socio-demographic data, the nutritional status (using the Mini Nutritional Assessment, MNA) and the cognitive capacities (using the Mini Mental State Examination, MMSE) were examined. In the secondary data analysis, data from 129 residents (60 residents from 29 SHA and 69 from a nursing home) were included. The residents of both settings were mostly female (76.7%), on average 80.4 years old, and with a moderate to severe cognitive decline (MMSE: 13.2). The average MNA score of residents from both settings was 19.7, indicating a risk for malnutrition. Residents of SHA had a significantly higher (and therefore better) MNA score (21.2) on average than residents in the nursing home (18.3; t-test, p < 0.001). The present study shows that residents in SHA have a better nutritional status than residents in the nursing home, even when taking into account differences concerning age, gender, care dependency, a medical diagnosis of dementia, and the MMSE. Further studies should evaluate the concept of food intake in both settings and evaluate differences. Answer: Yes, arterial hemodynamic parameters do predict cognitive decline over a period of 2 years in individuals older than 80 years living in nursing homes. The PARTAGE study found that arterial stiffness, as evaluated by carotid-femoral pulse wave velocity (cfPWV), was associated with a more pronounced cognitive decline over a 2-year period in very old frail institutionalized individuals. After adjusting for various factors such as baseline Mini-Mental State Examination (MMSE) scores, heart rate (HR), body mass index, age, education level, and activities of daily living (ADLs), cfPWV was higher and peripheral pulse pressure amplification (PPA) lower in "decliners" compared with "nondecliners." Logistic multivariate analysis also revealed that high cfPWV, low PPA, high HR, and low ADLs were all determinants of MMSE decline. Blood pressure (BP) alone did not appear to have a significant predictive value for cognitive decline in this population (PUBMED:25783622).
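Editorial note: the partial correlations reported in the first abstract above (body composition versus physical and cognitive performance, controlling for age) follow the standard first-order formula r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2)). A minimal sketch of that computation is given below; the data and variable names are synthetic illustrations, not the study's dataset.

    import numpy as np

    def partial_corr(x, y, z):
        # First-order partial correlation r_xy.z: the correlation between
        # x and y after removing the linear effect of the covariate z.
        r_xy = np.corrcoef(x, y)[0, 1]
        r_xz = np.corrcoef(x, z)[0, 1]
        r_yz = np.corrcoef(y, z)[0, 1]
        return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

    # Synthetic example: body fat %, timed-up-and-go time, and age (assumed data).
    rng = np.random.default_rng(0)
    age = rng.normal(80, 7, 100)
    body_fat = 0.3 * age + rng.normal(0, 5, 100)
    tug_time = 0.2 * age + 0.1 * body_fat + rng.normal(0, 2, 100)
    print(round(partial_corr(body_fat, tug_time, age), 3))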
Instruction: Impact of administration angle on the cost of artificial tear solutions: does bottle positioning minimize wastage? Abstracts: abstract_id: PUBMED:17444808 Impact of administration angle on the cost of artificial tear solutions: does bottle positioning minimize wastage? Purpose: The aim of this study was to describe the cost considerations of multidose artificial tear products according to drop volume and the number of usable drops per bottle, based on a 45- versus 90-degree administration angle. Methods: Densitometric assessment of the drop volume of five multidose artificial tear products of a 15-mL labeled bottle size in conjunction with predictive cost analysis. Results: The correlation between drop volume and density was not significant (Spearman correlation, P = 0.4500; alpha < 0.05). Overall, drop size ranged from 65.9 microL to a nadir of 30.8 microL, with a statistically significant difference (Student t test, P < 0.05) between 45- and 90-degree volumes for all but one product. Cost analysis demonstrated savings of up to $1.93 per bottle when drops were administered at a 45- rather than a 90-degree bottle angle. Conclusions: Products that provide a higher number of drops per dollar of product offer economic advantages that may not be otherwise discernible by the examination of the product retail price alone. Furthermore, it is shown that altering the angle of administration may, in general, result in significant economic ramifications in the use of multidose artificial tear products longitudinally. abstract_id: PUBMED:27893296 Evaluation of Visual Field Test Parameters after Artificial Tear Administration in Patients with Glaucoma and Dry Eye. Purpose: To examine the effect of a single dose of artificial tear administration on automated visual field (VF) testing in patients with glaucoma and dry eye syndrome. Material And Methods: A total of 35 patients with primary open-angle glaucoma experienced in VF testing with symptoms of dry eye were enrolled in this study. At the first visit, standard VF testing was performed. At the second and third visits, with an interval of one week, while the left eyes served as controls, one drop of artificial tear was administered to each patient's right eye, and then VF testing was performed again. The reliability parameters, VF indices, number of depressed points at probability levels of pattern deviation plots, and test times were compared between visits. Results: No significant difference was observed in any VF testing parameters of control eyes (P > 0.05). In artificial tear-administered eyes, significant improvement was observed in test duration, mean deviation, and the number of depressed points at probability levels (P < 0.5%, P < 1%, P < 2%) of pattern deviation plots (P < 0.05). The post-hoc test revealed that artificial tear administration elicited an improvement in test duration, mean deviation, and the number of depressed points at probability levels (P < 0.5%, P < 1%, P < 2%) of pattern deviation plots from the first visit to the second and third visits (P < 0.01, for all comparisons). The intraclass correlation coefficient for the three VF test indices was found to be between 0.735 and 0.85 (P < 0.001, for all). Discussion: A single dose of artificial tear administration immediately before VF testing seems to improve test results and decrease test time. abstract_id: PUBMED:28663611 Financial Implications of Intravenous Anesthetic Drug Wastage in Operation Room.
Background And Objectives: Anesthetic drug and material wastage is common in operation rooms (ORs). In this era of escalating health-care expenditure, cost reduction strategies are highly relevant. The aim of this study was to assess the amount of daily intravenous anesthetic drug wastage from major ORs and to estimate its financial burden. Preventive measures to minimize drug wastage were also sought. Methods: This was a prospective study conducted at the major ORs of a tertiary care hospital after obtaining Institutional Research Committee approval. The total amount of all drugs wasted at the end of a surgical day from each major OR was audited for five nonconsecutive weeks. Wasted drug included drug left unutilized in syringes and in opened vials/ampoules. The total cost of the wasted drugs and the average daily loss were estimated. Results: The drugs wasted in large quantities included propofol, thiopentone sodium, vecuronium, mephentermine, lignocaine, midazolam, atropine, succinylcholine, and atracurium, in that order. The total cost of the wasted drugs during the study period was Rs. 59,631.49, and the average daily loss was Rs. 1987.67. The average daily cost of wasted drug was highest for vecuronium (Rs. 699.93), followed by propofol (Rs. 662.26). Interpretation And Conclusions: The financial implications of anesthetic drug wastage can be significant. Propofol and vecuronium contributed the most to the financial burden. Suggested preventive measures to minimize the wastage include education of staff and residents about the cost of drugs, emphasizing the judicious use of costly drugs. abstract_id: PUBMED:28640362 The impact of cancer drug wastage on economic evaluations. Background: The objective of this study was to determine the impact of modeling cancer drug wastage in economic evaluations, because wastage can result from single-dose vials on account of body surface area- or weight-based dosing. Methods: Intravenous chemotherapy drugs were identified from the pan-Canadian Oncology Drug Review (pCODR) program as of January 2015. Economic evaluations performed by drug manufacturers and pCODR were reviewed. Cost-effectiveness analyses and budget impact analyses were conducted for no-wastage and maximum-wastage scenarios (ie, the entire unused portion of the vial was discarded at each infusion). Sensitivity analyses were performed for a range of body surface areas and weights. Results: Twelve drugs used for 17 indications were analyzed. Wastage was reported (ie, assumptions were explicit) in 71% of the models and was incorporated into 53% by manufacturers; this resulted in a mean incremental cost-effectiveness ratio increase of 6.1% (range, 1.3%-14.6%). pCODR reported and incorporated wastage for 59% of the models, and this resulted in a mean incremental cost-effectiveness ratio increase of 15.0% (range, 2.6%-48.2%). In the maximum-wastage scenario, there was a mean increase in the incremental cost-effectiveness ratio of 24.0% (range, 0.0%-97.2%), a mean increase in the 3-year total incremental budget costs of 26.0% (range, 0.0%-83.1%), and an increase in the 3-year total incremental drug budget cost of approximately CAD $102 million nationally. Changing the mean body surface area or body weight caused 45% of the drugs to have a change in the vial size and/or quantity, and this resulted in increased drug costs. Conclusions: Cancer drug wastage can increase drug costs but is not uniformly modeled in economic evaluations.
abstract_id: PUBMED:22345947 Anesthetic drug wastage in the operation room: A cause for concern. Context: The cost of anesthetic technique has three main components, i.e., disposable supplies, equipment, and anesthetic drugs. Drug budgets are an easily identifiable area for short-term savings. Aim: To assess and estimate the amount of anesthetic drug wastage in the general surgical operation room. Also, to analyze the financial implications to the hospital due to drug wastage and suggest appropriate steps to prevent or minimize this wastage. Settings And Design: A prospective observational study conducted in the general surgical operation room of a tertiary care hospital. Materials And Methods: Drug wastage was considered as the amount of drug left unutilized in the syringes/vials after completion of a case and any ampoule or vial broken while loading. An estimation of the cost of wasted drug was made. Results: Maximal wastage was associated with adrenaline and lignocaine (100% and 93.63%, respectively). The drugs which accounted for maximum wastage due to not being used after loading into a syringe were adrenaline (95.24%), succinylcholine (92.63%), lignocaine (92.51%), mephentermine (83.80%), and atropine (81.82%). The cost of wasted drugs for the study duration was 46.57% (Rs. 16,044.01) of the total cost of drugs issued/loaded (Rs. 34,449.44). Of this, the cost of wastage of propofol was the highest, at 56.27% (Rs. 9028.16) of the total wastage cost, followed by rocuronium 17.80% (Rs. 2856), vecuronium 5.23% (Rs. 840), and neostigmine 4.12% (Rs. 661.50). Conclusions: Drug wastage and the ensuing financial loss can be significant during the anesthetic management of surgical cases. Propofol, rocuronium, vecuronium, and neostigmine are the drugs which contribute maximally to the total wastage cost. Judicious use of these and other drugs and appropriate prudent measures as suggested can effectively decrease this cost. abstract_id: PUBMED:16440009 The effect of artificial tear administration on visual field testing in patients with glaucoma and dry eye. Aim: To examine the effects of artificial tear administration on perimetry of primary open-angle glaucoma patients with dry eye. Methods: A total of 40 patients with primary open-angle glaucoma experienced in automated perimetry with symptoms of dry eye were enrolled in this study. After their pretest visit, they were instructed to use artificial tears four times a day in both eyes for 1 week. After 1 week, patients had visual field testing. Test taking time, reliability parameters (false-positive and false-negative errors), visual field indices, and the number of depressed points at different probability levels (P<5%, P<2%, P<1%, P<0.5%) in both total and pattern deviation plots were compared using a paired t-test. Results: We found significant improvement in reliability parameters (false-positive errors from 2.4±2.1 to 2.1±1.9, P=0.02; and false-negative errors from 7.3±6.4 to 4.8±3.6, P=0.01) and visual field indices (MD improved from 5.97±5.61 to 4.57±4.53, P=0.001; PSD from 4.67±2.95 to 4.13±2.77, P=0.04; and SF decreased from 2.24±1.23 to 1.83±0.77, P=0.04) in the second testing after artificial tear administration. Test time significantly increased from 11.66±2.55 min to 14.26±1.36 min, P=0.001.
The number of depressed points at probability levels P<1% (P=0.03) and P<0.5% (P=0.04) in the total deviation plot and P<2% (P=0.02) and P<0.5% (P=0.009) in the pattern deviation plot decreased significantly. Conclusion: Artificial tear administration in glaucomatous patients with dry eye seems to significantly improve reliability parameters and visual field indices. abstract_id: PUBMED:32110544 The status of drug wastage in the pediatric emergency department of a tertiary hospital. Background: The aim of this study was to evaluate surplus drugs left over from medications used via the intravenous and intramuscular routes in a pediatric emergency unit of a tertiary hospital in Turkey and to determine the financial burden imposed by drug wastage. Materials And Methods: The study was planned prospectively on patients presenting to the pediatric emergency department of a tertiary university hospital between January 1 and April 30, 2017, on weekdays and between 08:00 and 16:00, for any reason, and receiving intravenous and/or intramuscular drug administration resulting in drug wastage after treatment. Results: The number of patients enrolled in the clinical trial was 1620 (35.9%). Twenty-one different medications were administered via the intravenous or intramuscular (IM) routes during the study. The proportion of total medication wastage at the end of the trial was estimated to be 0.425. The drug with the highest proportion of mean wastage to drug form was paracetamol (1000 mg vial) at 0.79. The total cost of the drugs used for the patients in the study was US$580.98, and the overall burden of drug wastage was US$288.09. The three medications involving the highest wastage costs were methylprednisolone, ondansetron, and dexamethasone. The total wastage cost/total drug cost ratio was 0.495. Conclusion: If commercial drugs with intravenous and IM formulations are used by the pediatric age group, then dosage formulations appropriate for pediatric age group use also need to be produced. The development by manufacturers of ampoules and similar products suitable for multiple use will also reduce drug wastage. Reducing levels of drug wastage will inevitably reduce drug expenditure. abstract_id: PUBMED:37287243 Quantifying chemotherapy wastage in an ambulatory cancer centre in Singapore. Introduction: To ensure the efficient use of chemotherapy drugs, chemotherapy wastage is an area that can be investigated. This study aims to quantify current parenteral chemotherapy wastage and estimate parenteral chemotherapy wastage when dose banding is executed, using a chemotherapy wastage calculator in an ambulatory cancer centre. The study also examines the variables that significantly predict the total cost of chemotherapy wastage, investigates the reasons for wastage, and explores opportunities to reduce wastage. Methods: Data were collected from the pharmacy in National Cancer Centre Singapore over 9 months retrospectively. Chemotherapy wastage is the sum of wastage in the preparation phase and potential wastage in the administration phase. The calculator was created using Microsoft Excel and generated chemotherapy wastage in terms of cost and amount (mg) and analysed the reasons for potential wastage. Results: The calculator reported a total of 2.22 million mg of chemotherapy wastage generated over 9 months, amounting to $2.05 million (Singapore Dollars, SGD).
Regression analysis found that the cost of the drug was the only independent variable that significantly predicted the total cost of chemotherapy wastage (P = 0.004). The study also identified low blood count (625 [29.06%]) as the top reason for potential wastage and no-show ($128,715.94 [15.97%]) as the reason that incurred the highest cost of potential wastage. Conclusion: The pharmacy generated a considerable amount of chemotherapy wastage over the 9-month period. Interventions in both the preparation and administration phases are required to reduce chemotherapy wastage. The use of the chemotherapy wastage calculator in pharmacy operations could guide efforts to reduce chemotherapy wastage. abstract_id: PUBMED:28605255 Financial Impact of Cancer Drug Wastage and Potential Cost Savings From Mitigation Strategies. Purpose: Cancer drug wastage occurs when a parenteral drug within a fixed vial is not administered fully to a patient. This study investigated the extent of drug wastage, the financial impact on the hospital budget, and the cost savings associated with current mitigation strategies. Methods: We conducted a cross-sectional study in three University of Toronto-affiliated hospitals of various sizes. We recorded the actual amount of drug wasted over a 2-week period while using current mitigation strategies. Single-dose vial cancer drugs with the highest wastage potentials were identified (14 drugs). To calculate the hypothetical drug wastage with no mitigation strategies, we determined how many vials of drugs would be needed to fill a single prescription. Results: The total drug costs over the 2 weeks ranged from $50,257 to $716,983 in the three institutions. With existing mitigation strategies, the actual drug wastage over the 2 weeks ranged from $928 to $5,472, which was approximately 1% to 2% of the total drug costs. In the hypothetical model with no mitigation strategies implemented, the projected drug cost wastage would have been $11,232 to $149,131, which accounted for 16% to 18% of the total drug costs. As a result, the potential annual savings while using current mitigation strategies range from 15% to 17%. Conclusion: The financial impact of drug wastage is substantial. Mitigation strategies lead to substantial cost savings, with the opportunity to reinvest those savings. More research is needed to determine the appropriate methods to minimize risk to patients while using the cost-saving mitigation strategies. abstract_id: PUBMED:38143662 Quantifying Drug Wastage and Economic Loss of Chemotherapy Drugs at an Adult Oncology Care of a Tertiary Care Public Hospital in India. Background and objective New drugs have revolutionized cancer care, but their high cost requires cost-effectiveness studies. However, these studies only consider optimal use, neglecting real-world wastage. We aim to assess chemotherapy drug wastage and financial loss in our adult oncology care. Methods A total of 100 adult patients attending daycare oncology were prospectively evaluated. The total dose of parenteral anticancer drug, the amount administered, and the amount of drug wasted were recorded for each patient. The economic loss estimation was done considering the unit cost for the drug. Results Our study evaluated 157 parenteral drug administrations of 10 different anticancer drugs in 100 enrolled patients. The most common diagnosis was breast cancer (39/100; 39%), and the most commonly prescribed drugs were paclitaxel (36/157; 23%) and cyclophosphamide (21/157; 13%).
However, the wastage percentage varied from 6% to 35.06%, and the overall wastage was estimated at 16,298 mg (20.06%) of the total drug procured. Notably, the highest proportion of drug wastage was observed for carboplatin (2,525/7,200 mg; 35.06%), whereas oxaliplatin, gemcitabine, 5-FU, and cisplatin wastage were each more than 20% of the ordered drug. The total cost of the chemotherapy drugs procured was 7,26,005 INR (8,738.78 USD), and drug wastage amounted to 17.14% of the total drug cost, resulting in an economic loss of 1,24,485 INR (1,498.40 USD). Gemcitabine (542.86 USD), oxaliplatin (452.66 USD), and paclitaxel (286.15 USD) were responsible for the maximum cost of wastage. Conclusion Drug wastage and financial loss are significant for carboplatin, oxaliplatin, and gemcitabine, with small proportions of paclitaxel also contributing to economic loss. Possible solutions include planning pharmacy inventory for multiple vial sizes and drug-wise batching strategies to facilitate vial sharing. However, these approaches may present challenges. The pharmaceutical industry can consider initiatives such as providing varying packaging sizes to minimize drug wastage. Answer: The study described in abstract PUBMED:17444808 investigated the impact of the administration angle on the cost of artificial tear solutions, specifically examining whether bottle positioning can minimize wastage. The study conducted a densitometric assessment of the drop volume of five multidose artificial tear products of a 15-mL labeled bottle size and performed a predictive cost analysis. The results showed that there was no significant correlation between drop volume and density (Spearman correlation, P = 0.4500). However, there was a statistically significant difference in drop size when comparing administration at a 45-degree angle versus a 90-degree angle for all but one product (Student t test, P < 0.05). The cost analysis demonstrated that administering drops at a 45-degree angle, rather than a 90-degree angle, could result in savings of up to $1.93 per bottle. The study concluded that products offering a higher number of drops per dollar provide economic advantages that may not be discernible by examining the retail price alone. Furthermore, altering the angle of administration may have significant economic implications for the longitudinal use of multidose artificial tear products.
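Editorial note: the cost reasoning in the answer above is simple arithmetic: a 15-mL bottle holds 15,000 µL, so the number of usable drops is 15,000 divided by the drop volume, and the cost per drop is the bottle price divided by that count. The sketch below uses the drop-volume extremes quoted in the abstract (65.9 µL and 30.8 µL) and a hypothetical bottle price; mapping the extremes to the two administration angles is an illustrative assumption, not the study's per-angle means.

    BOTTLE_UL = 15.0 * 1000      # 15-mL labeled bottle size, in microliters
    PRICE_PER_BOTTLE = 10.00     # hypothetical retail price in USD (assumption)

    def drops_and_cost(drop_volume_ul):
        # Usable drops per bottle and cost per drop for a given drop volume.
        drops = BOTTLE_UL / drop_volume_ul
        return drops, PRICE_PER_BOTTLE / drops

    for label, vol in [("larger drops (90-degree, assumed)", 65.9),
                       ("smaller drops (45-degree, assumed)", 30.8)]:
        drops, cost = drops_and_cost(vol)
        print(f"{label}: {drops:.0f} drops/bottle, ${cost:.4f}/drop")

With these assumed numbers, the smaller drop roughly doubles the drops per bottle and halves the cost per drop, which is the mechanism behind the per-bottle savings reported in the study.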
Instruction: Do high fasting glucose levels suggest nocturnal hypoglycaemia? Abstracts: abstract_id: PUBMED:23672623 Do high fasting glucose levels suggest nocturnal hypoglycaemia? The Somogyi effect-more fiction than fact? Aims: The Somogyi effect postulates that nocturnal hypoglycaemia causes fasting hyperglycaemia attributable to counter-regulatory hormone release. Although most published evidence has failed to support this hypothesis, this concept remains firmly embedded in clinical practice and often prevents patients and professionals from optimizing overnight insulin. Previous observational data found lower fasting glucose was associated with nocturnal hypoglycaemia, but did not assess the probability of infrequent individual episodes of rebound hyperglycaemia. We analysed continuous glucose monitoring data to explore its prevalence. Methods: We analysed data from 89 patients with Type 1 diabetes who participated in the UK Hypoglycaemia study. We compared fasting capillary glucose following nights with and without nocturnal hypoglycaemia (sensor glucose < 3.5 mmol/l). Results: Fasting capillary blood glucose was lower after nights with hypoglycaemia than without [5.5 (3.0) vs. 14.5 (4.5) mmol/l, P < 0.0001], and was lower on nights with more severe nocturnal hypoglycaemia [5.5 (3.0) vs. 8.2 (2.3) mmol/l; P = 0.018 on nights with nadir sensor glucose of < 2.2 mmol/l vs. 3.5 mmol/l]. There were only two instances of fasting capillary blood glucose > 10 mmol/l after nocturnal hypoglycaemia, both after likely treatment of the episode. When fasting capillary blood glucose was < 5 mmol/l, there was evidence of nocturnal hypoglycaemia on 94% of nights. Conclusion: Our data indicate that, in clinical practice, the Somogyi effect is rare. Fasting capillary blood glucose ≤ 5 mmol/l appears an important indicator of preceding silent nocturnal hypoglycaemia. abstract_id: PUBMED:26625003 Can Fasting Glucose Levels or Post-Breakfast Glucose Fluctuations Predict the Occurrence of Nocturnal Asymptomatic Hypoglycemia in Type 1 Diabetic Patients Receiving Basal-Bolus Insulin Therapy with Long-Acting Insulin? Objective: To investigate whether the occurrence of nocturnal asymptomatic hypoglycemia may be predicted based on fasting glucose levels and post-breakfast glucose fluctuations. Patients And Methods: The study subjects comprised type 1 diabetic patients who underwent CGM assessments and received basal-bolus insulin therapy with long-acting insulin. The subjects were evaluated for I) fasting glucose levels and II) the range of post-breakfast glucose elevation (from fasting glucose levels to postprandial 1- and 2-hour glucose levels). The patients were divided into those with asymptomatic hypoglycemia during nighttime and those without for comparison. Optimal cut-off values were also determined for relevant parameters that could predict nighttime hypoglycemia by using ROC analysis. Results: 64 patients (mean HbA1c 8.7 ± 1.8%) were available for analysis. Nocturnal asymptomatic hypoglycemia occurred in 23 patients (35.9%). Fasting glucose levels (I) were significantly lower in those with hypoglycemia than those without (118 ± 35 mg/dL vs. 179 ± 65 mg/dL; P < 0.001). The range of post-breakfast glucose elevation (II) was significantly greater in those with hypoglycemia than in those without (postprandial 1-h, P = 0.003; postprandial 2-h, P = 0.005).
The cut-off values determined for relevant factors were as follows: (I) fasting glucose level < 135 mg/dL (sensitivity 0.73/specificity 0.83/AUC 0.79, P < 0.001); and (II) 1-h postprandial elevation > 54 mg/dL (0.65/0.61/0.71, P = 0.006), 2-h postprandial elevation > 78 mg/dL (0.65/0.73/0.71, P = 0.005). Conclusions: Nocturnal asymptomatic hypoglycemia was associated with increases in post-breakfast glucose levels in type 1 diabetes. Study findings also suggest that fasting glucose levels and the range of post-breakfast glucose elevation could help predict the occurrence of nocturnal asymptomatic hypoglycemia. abstract_id: PUBMED:23876123 Fasting glucose level is associated with nocturnal hypoglycemia in elderly male patients with type 2 diabetes. Background: Nocturnal hypoglycemia is a common and serious problem among patients with type 2 diabetes (T2DM), especially in the elderly. This study investigated whether fasting glucose was an indicator of nocturnal hypoglycemia in elderly male patients with T2DM. Methods: A total of 291 elderly male type 2 diabetic patients who received continuous glucose monitoring (CGM) between January 2007 and January 2011 were enrolled in the study. The association of fasting glucose with nocturnal hypoglycemia based on CGM data was analyzed and compared with that of bedtime glucose. Results: Based on CGM data, patients with nocturnal hypoglycemia had significantly lower fasting glucose (5.88 ± 1.29 versus 6.92 ± 1.32 mmol/L) and bedtime glucose (7.33 ± 1.70 versus 8.01 ± 1.95 mmol/L) than patients without nocturnal hypoglycemia (both p < 0.01). Compared with the highest quartile, the lowest quartile of fasting glucose had a significantly increased risk of nocturnal hypoglycemia after the multiple adjustments (p for trend < 0.001). However, this association did not appear for bedtime glucose. When prediction of nocturnal hypoglycemia by either fasting glucose or bedtime glucose was assessed using the area under the receiver operating characteristic (ROC) curve, fasting glucose, but not bedtime glucose, was an indicator of nocturnal hypoglycemia, with an area under the ROC curve (AUC) of 0.714 (95% CI: 0.653-0.774, p < 0.001). On the ROC curve, the Youden index was maximal when fasting glucose was 6.1 mmol/L. Conclusions: Fasting glucose may be a convenient and clinically useful indicator of nocturnal hypoglycemia in elderly male patients with T2DM. Risk of nocturnal hypoglycemia significantly increased when fasting glucose was less than 6.1 mmol/L. abstract_id: PUBMED:28683068 Prediction of nocturnal hypoglycemia unawareness by fasting glucose levels or post-breakfast glucose fluctuations in patients with type 1 diabetes receiving insulin degludec: A pilot study. Objective: To evaluate whether nocturnal asymptomatic hypoglycemia (NAH) can be predicted by fasting glucose levels or post-breakfast glucose fluctuations in patients with type 1 diabetes (T1D) receiving insulin degludec. Methods: Patients with T1D receiving insulin degludec underwent at-home CGM assessments. Indices for glycemic variability before and after breakfast included fasting glucose levels and the range of post-breakfast glucose elevation. For comparison, the patients were classified into those with NAH and those without. The optimal cut-off values for the relevant parameters were determined to predict NAH using ROC analysis. Results: The study included a total of 31 patients (mean HbA1c values, 7.8 ± 0.7%), and 16 patients (52%) had NAH.
Those with NAH had significantly lower fasting glucose levels than did those without (82 ± 48 mg/dL vs. 144 ± 69 mg/dL; P = 0.009). The change from pre- to post-breakfast glucose levels was significantly greater among those with NAH (postprandial 1-h, P = 0.028; postprandial 2-h, P = 0.028). The cut-off values for prediction of NAH were as follows: fasting glucose level <84 mg/dL (sensitivity 0.80/specificity 0.75/AUC 0.80; P = 0.004), 1-h postprandial elevation >69 mg/dL (0.75/0.67/0.73; P = 0.033), and 2-h postprandial elevation >99 mg/dL (0.69/0.67/0.71; P = 0.044). Conclusions: The results suggest that a fasting glucose level of < 84 mg/dL had approximately 80% probability of predicting the occurrence of NAH in T1D receiving insulin degludec. It was also shown that the occurrence of hypoglycemia led to greater post-breakfast glucose fluctuations and steeper post-breakfast glucose gradients. abstract_id: PUBMED:32124268 Value of Capillary Glucose Profiles in Assessing Risk of Nocturnal Hypoglycemia in Type 1 Diabetes Based on Continuous Glucose Monitoring. Introduction: This study aimed to evaluate the occurrence of nocturnal hypoglycemia in type 1 diabetes (T1D) based on continuous glucose monitoring (CGM), and to explore the value of capillary glucose profiles in assessing the risk of nocturnal hypoglycemia. The study also intended to develop a predictive model to identify people with high risk of nocturnal hypoglycemia. Methods: A total of 169 participants with T1D received 3 days of blinded CGM; meanwhile, their self-monitoring blood glucose (SMBG) profiles were recorded. Logistic regression analyses were used to evaluate contributory factors of nocturnal hypoglycemia. Potential indicators were estimated using area under receiver operator curve (AUC) analyses. Results: During the retrospective CGM period, 95 (56.2%) participants with T1D reported 238 events of hypoglycemia, and 69 (29.0%) of these episodes occurred during the nighttime. Increased risk of nocturnal hypoglycemia correlated with lower HbA1c, glycated albumin, and mean blood glucose (OR = 0.790, 0.940, 0.651, respectively; P < 0.05) and higher standard deviation, mean amplitude of glycemic excursions, and low blood glucose index (OR = 1.463, 1.168, 4.035, respectively; P < 0.05) after adjustment for age and duration. Of the daily SMBG profiles, fasting blood glucose (OR = 0.643, P = 0.001) and blood glucose at bedtime (OR = 0.851, P = 0.037) were associated with the occurrence of nocturnal hypoglycemia. The BGn model, which was derived from the variation of capillary glucose, could discriminate individuals with increased risk of nocturnal hypoglycemia (AUC = 0.774). Conclusions: Nocturnal hypoglycemia constitutes nearly one-third of hypoglycemic events in people with T1D. Strict glycemic control and large glucose fluctuations are potential contributory factors. Daily SMBG profiles and the BGn model could help assess the risk of nocturnal hypoglycemia in T1D, which may support further development of preventive strategies. abstract_id: PUBMED:8891454 Nocturnal blood glucose profiles in patients with type 1 diabetes mellitus on multiple (≥ 4) daily insulin injection regimens. The aim of the study was to examine nocturnal blood glucose profiles in Type 1 diabetic patients on multiple (≥ 4) daily insulin injections. Nocturnal blood glucose profiles were evaluated in 31 patients by collecting blood samples half-hourly from 23.00 till 07.30 h, while they were asleep.
Nocturnal episodes of hypoglycaemia (blood glucose < 3.0 mmol l⁻¹) occurred in 29% of these nights; 67% of episodes were asymptomatic. In the early night (23.00-01.00 h), five episodes occurred with a median duration of 1 h. In the early morning (04.00-07.30 h), seven episodes occurred with a median duration of 3 h. No hypoglycaemia was noted from 01.00 to 04.00 h. Bedtime glucose levels appeared to predict 'early night' hypoglycaemia but not 'early morning' hypoglycaemia. Fasting glucose levels < 5.5 mmol l⁻¹ were indicative of preceding 'early morning' hypoglycaemia. There was a large intra-individual variation in nocturnal blood glucose profiles. It is concluded that daily monitoring of bedtime and fasting blood glucose levels may be both more reliable and convenient for the prevention of nocturnal hypoglycaemia than periodic testing of blood glucose at 03.00 h as is often advised. Setting a target of > 5.5 mmol l⁻¹ for fasting blood glucose may decrease the frequency of nocturnal hypoglycaemia. abstract_id: PUBMED:3317053 Failure of nocturnal hypoglycemia to cause fasting hyperglycemia in patients with insulin-dependent diabetes mellitus. To test the hypothesis that nocturnal hypoglycemia causes fasting hyperglycemia (the Somogyi phenomenon) in patients with insulin-dependent diabetes mellitus, we studied 10 patients, who were on their usual therapeutic regimens, from 10 p.m. through 8 a.m. on three nights. On the first night, only a control procedure was performed (blood sampling only); on the second night, hypoglycemia was prevented (by intravenous glucose infusion, if necessary, to keep plasma glucose levels above 100 mg per deciliter [5.6 mmol per liter]); and on the third night, hypoglycemia was induced (by stepped intravenous insulin infusions between midnight and 4 a.m. to keep plasma glucose levels below 50 mg per deciliter [2.8 mmol per liter]). After nocturnal hypoglycemia was induced (36 ± 2 mg per deciliter [2.0 ± 0.1 mmol per liter] [mean ± SE] from 2 to 4:30 a.m.), 8 a.m. plasma glucose concentrations (113 ± 18 mg per deciliter [6.3 ± 1.0 mmol per liter]) were not higher than values obtained after hypoglycemia was prevented (182 ± 14 mg per deciliter [10.1 ± 0.8 mmol per liter]) or those obtained after blood sampling only (149 ± 20 mg per deciliter [8.3 ± 1.1 mmol per liter]). Indeed, regression analysis of data obtained on the control night indicated that the 8 a.m. plasma glucose concentration was directly related to the nocturnal glucose nadir (r = 0.761, P = 0.011). None of the patients was awakened by hypoglycemia. Scores for symptoms of hypoglycemia, which were determined at 8 a.m., did not differ significantly among the three studies. We conclude that asymptomatic nocturnal hypoglycemia does not appear to cause clinically important fasting hyperglycemia in patients with insulin-dependent diabetes mellitus on their usual therapeutic regimens. abstract_id: PUBMED:29299329 Impact of Ramadan fasting on glucose levels in women with gestational diabetes mellitus treated with diet alone or diet plus metformin: a continuous glucose monitoring study. Objective: Women with gestational diabetes mellitus (GDM) are categorized as at high risk for adverse events during Ramadan fasting. However, this is largely based on clinical opinion. In this study, we shed some light on what happens to glucose levels during Ramadan fasting. Methods: This is a prospective observational study.
A total of 32 patients with GDM were recruited: 10 patients treated with diet only (group 1), to observe their glucose levels before fasting, and 22 patients who insisted on fasting during the month of Ramadan, of whom 13 were treated with diet only (group 2) and nine with diet plus metformin 500 mg twice daily (group 3), to evaluate their glucose levels during fasting. Interstitial glucose was monitored in all by using the iPro2 Professional continuous glucose monitoring (CGM) system. Results: Mean glucose level was 116±21 mg/dL (6.16±1.16 mmol/L), 106±9 mg/dL (5.88±0.49 mmol/L) and 99±7 mg/dL (5.49±0.34 mmol/L) in groups 1, 2 and 3, respectively. Patients in group 1 had the lowest rate of hypoglycemia (50%), followed by patients in group 2 (60%), whereas patients in group 3 had the highest rate of hypoglycemia (78%). Conclusions: CGM data indicate that Ramadan fasting in women with GDM treated with diet alone or with diet plus metformin was associated with lower mean glucose levels and higher rates of hypoglycemia when compared with non-fasting glucose levels. Women with GDM should be advised against fasting during Ramadan until further data are available. abstract_id: PUBMED:19747748 Fasting plasma glucose variability as a marker of nocturnal hypoglycemia in diabetes: evidence from the PREDICTIVE study. The relationship between fasting glucose (FG) variability and nocturnal hypoglycemia was assessed using longitudinal data from PREDICTIVE, the large-scale observational study of insulin detemir. An HbA1c-corrected correlation was found between these endpoints, suggesting FG variability can serve as a useful marker for this risk in clinical practice. abstract_id: PUBMED:31782609 Levels of fasting plasma glucose in non-hospitalized older people with high hemoglobin A1c levels. Fasting plasma glucose levels in participants with diabetes in each age group and among those nested within glycated hemoglobin groups. Answer: High fasting glucose levels do not necessarily suggest nocturnal hypoglycemia. In fact, the evidence from the provided abstracts indicates that lower fasting glucose levels are more commonly associated with nocturnal hypoglycemia. For instance, one study found that fasting capillary blood glucose was lower after nights with hypoglycemia than without, and when fasting capillary blood glucose was < 5 mmol/l, there was evidence of nocturnal hypoglycemia on 94% of nights, suggesting that the Somogyi effect (rebound hyperglycemia following hypoglycemia) is rare (PUBMED:23672623). Another study showed that fasting glucose levels were significantly lower in patients with nocturnal asymptomatic hypoglycemia than those without, and the occurrence of nocturnal asymptomatic hypoglycemia was associated with increases in post-breakfast glucose levels in type 1 diabetes (PUBMED:26625003). Similarly, a study in elderly male patients with type 2 diabetes found that those with nocturnal hypoglycemia had significantly lower fasting glucose levels compared to those without nocturnal hypoglycemia (PUBMED:23876123). A pilot study also suggested that a fasting glucose level of < 84 mg/dL had approximately 80% probability of predicting the occurrence of nocturnal asymptomatic hypoglycemia in patients with type 1 diabetes receiving insulin degludec (PUBMED:28683068). Furthermore, a study on type 1 diabetes patients indicated that fasting blood glucose and blood glucose at bedtime were associated with the occurrence of nocturnal hypoglycemia (PUBMED:32124268).
Another study concluded that daily monitoring of bedtime and fasting blood glucose levels may be more reliable for the prevention of nocturnal hypoglycemia than periodic testing of blood glucose at 03.00 h (PUBMED:8891454). Overall, the evidence suggests that lower, rather than higher, fasting glucose levels may be indicative of nocturnal hypoglycemia, and the concept of the Somogyi effect as a cause for fasting hyperglycemia following nocturnal hypoglycemia is not strongly supported by the data (PUBMED:3317053).
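Editorial note: several of the cut-offs cited in this answer (for example, the 6.1 mmol/L fasting-glucose threshold from PUBMED:23876123) come from maximizing Youden's index, J = sensitivity + specificity - 1, along the ROC curve. A self-contained sketch of that procedure follows; the glucose values are simulated for illustration and are not the study data.

    import numpy as np

    def best_cutoff(values, has_event):
        # Scan candidate cut-offs (event predicted when value < cutoff) and
        # return the one maximizing Youden's J = sensitivity + specificity - 1.
        best_j, best_c = -1.0, None
        for c in np.unique(values):
            pred = values < c
            tp = np.sum(pred & has_event)
            fn = np.sum(~pred & has_event)
            tn = np.sum(~pred & ~has_event)
            fp = np.sum(pred & ~has_event)
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            j = sens + spec - 1.0
            if j > best_j:
                best_j, best_c = j, c
        return best_c, best_j

    # Simulated fasting glucose (mmol/L): lower values in the hypoglycemia group.
    rng = np.random.default_rng(1)
    glucose = np.concatenate([rng.normal(5.9, 1.3, 80), rng.normal(6.9, 1.3, 211)])
    event = np.concatenate([np.ones(80, bool), np.zeros(211, bool)])
    print(best_cutoff(glucose, event))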
Instruction: Increase in national intravenous thrombolysis rates for ischaemic stroke between 2005 and 2012: is bigger better? Abstracts: abstract_id: PUBMED:27103535 Increase in national intravenous thrombolysis rates for ischaemic stroke between 2005 and 2012: is bigger better? Background: Intravenous thrombolytic therapy after ischaemic stroke significantly reduces mortality and morbidity. Actual thrombolysis rates are disappointingly low in many western countries. It has been suggested that higher patient volume is related to shorter door-to-needle time (DNT) and increased thrombolysis rates. We address a twofold research question: a) What are the trends in national thrombolysis rates and door-to-needle times in the Netherlands between 2005 and 2012? and b) Is there a relationship between stroke patient volume per hospital, thrombolysis rates and DNT? Methods: We used data from the Stroke Knowledge Network Netherlands dataset. Information on volume, intravenous thrombolysis rates, and admission characteristics per hospital was acquired through yearly surveys, in up to 65 hospitals between January 2005 and December 2012. We used linear regression to determine a possible relationship between hospital stroke admission volume, hospital thrombolysis rates and mean hospital DNT, adjusted for patient characteristics. Results: Information on 121,887 stroke admissions was available, ranging from 7,393 admissions in 2005 to 24,067 admissions in 2012. The mean national thrombolysis rate increased from 6.4% in 2005 to 14.6% in 2012. Patient characteristics (mean age, gender, type of stroke) remained stable. Mean DNT decreased from 72.7 min in 2005 to 41.4 min in 2012. Volume of stroke admissions was not an independent predictor of mean thrombolysis rate or mean DNT. Conclusion: Intravenous thrombolysis rates in the Netherlands more than doubled between 2005 and 2012, in parallel with a large decline in mean DNT. We found no convincing evidence for a relationship between stroke patient volume per hospital and thrombolysis rate or DNT. abstract_id: PUBMED:31964668 Optimising acute stroke care organisation: a simulation study to assess the potential to increase intravenous thrombolysis rates and patient gains. Objectives: To assess potential increases in intravenous thrombolysis (IVT) rates given particular interventions in the stroke care pathway. Design: Simulation modelling was used to compare the performance of the current pathway, best practices based on literature review and an optimised model. Setting: Four hospitals located in the North of the Netherlands, as part of a centralised organisational model. Participants: Ischaemic stroke patients prospectively ascertained from February to August 2010. Intervention: The interventions investigated included efforts aimed at patient response and mode of referral, prehospital triage and intrahospital delays. Primary And Secondary Outcome Measures: The primary outcome measure was thrombolysis utilisation. Secondary measures were onset-treatment time (OTT) and the proportion of patients with excellent functional outcome (modified Rankin scale (mRS) 0-1) at 90 days. Results: Of 280 patients with ischaemic stroke, 125 (44.6%) arrived at the hospital within 4.5 hours, and 61 (21.8%) received IVT.
The largest improvements in IVT treatment rates, OTT and the proportion of patients with mRS scores of 0-1 can be expected when patient response is limited to 15 min (IVT rate +5.8%; OTT -6 min; excellent mRS scores +0.2%), door-to-needle time to 20 min (IVT rate +4.8%; OTT -28 min; excellent mRS scores +3.2%) and 911 calls are increased to 60% (IVT rate +2.9%; OTT -2 min; excellent mRS scores +0.2%). The combined implementation of all potential best practices could increase IVT rates by 19.7% and reduce OTT by 56 min. Conclusions: Improving IVT rates to well above 30% appears possible if all known best practices are implemented. abstract_id: PUBMED:21997715 Use of telemedicine and other strategies to increase the number of patients that may be treated with intravenous thrombolysis. Stroke is the fourth leading killer in the United States and a leading cause of adult long-term disability. The American Heart Association estimates that only 3% to 5% of patients with acute ischemic stroke are treated with intravenous thrombolysis. A way to improve the rates of treatment with thrombolysis in patients with acute ischemic stroke is the creation of telemedicine stroke networks. Data from many studies support the safety of expanding intravenous tissue plasminogen activator use with the help of telemedicine. In this article we discuss the current evidence for the use of telemedicine within stroke systems of care, the importance of coordinating care within the transferring facilities in the telestroke networks, telestroke economics and applicability, and how to potentially use the telestroke systems to increase recruitment of patients into acute stroke thrombolysis trials. abstract_id: PUBMED:26853139 Analyses of the Turkish National Intravenous Thrombolysis Registry. Background: The relatively late approval of use of recombinant tissue plasminogen activator (rt-PA) for acute ischemic stroke in Turkey has resulted in obvious underuse of this treatment. Here we present the analyses of the nationwide registry, which was created to prompt wider use of intravenous thrombolysis, as well as to monitor safe implementation of the treatment in our country. Methods: Patients were registered prospectively in our database between 2006 and 2013. Admission and 24-hour National Institutes of Health Stroke Scale and 3-month modified Rankin Scale scores were recorded. A "high-volume center" was defined as a center treating 10 or more patients with rt-PA per year. Results: A total of 1133 patients were enrolled into the registry by 38 centers in 18 cities. A nearly 4-fold increase in the study population and in the number of participating centers was observed over the 6 years of the study. The mean baseline NIHSS score was 14.5 ± 5.7, and the prevalence of symptomatic hemorrhage was 4.9%. Mortality at 3 months decreased from 22% to 11% in the 6 years of enrollment, and 65% of cases were functionally independent. Age older than 70 years, an NIHSS score higher than 14 upon hospital admission, and intracranial hemorrhage were independently associated with mortality, and being treated in a high-volume center was related to good outcome. Conclusions: We observed a decreasing trend in mortality and an acceptable prevalence of symptomatic hemorrhage over 6 years with continuous addition of new centers to the registry. The first results of this prospective study are encouraging and will stimulate our efforts at increasing the use of intravenous thrombolysis in Turkey.
abstract_id: PUBMED:26835227 Comparisons of outcomes in stroke subtypes after intravenous thrombolysis. The purpose of this study was to analyze the outcomes and complications among stroke subtypes after intravenous thrombolysis. A total of 471 patients with acute ischemic stroke after intravenous thrombolysis from January 2007 to April 2014 were enrolled and classified according to the Trial of Org 10172 in Acute Stroke Treatment. A multivariate logistic regression model was used to evaluate the outcomes and complications among stroke subtypes after adjusting for baseline variables. Of the 471 patients, 117 (25.1%) had large-artery atherosclerosis (LAA), 148 (31.8%) had cardioembolism (CE), 82 (17.6%) had small vessel disease (SVD), 119 (25.5%) had undetermined etiology, and 5 (1.1%) had other determined etiology. The patients with SVD had the mildest initial stroke severity and the highest ratio of good and favorable outcomes, whereas those with CE had a higher rate of symptomatic intracranial hemorrhage (sICH) than those with SVD. After adjusting for confounding factors, the ratio of favorable outcome in the patients with SVD stroke was higher than in those with LAA. SVD was associated with a significantly lower rate of any hemorrhage compared to other stroke subtypes, whereas there were no differences in sICH or mortality between stroke subtypes. A lower initial National Institutes of Health Stroke Scale score was associated with good and favorable outcomes, and lower rates of sICH and mortality. The patients with SVD after intravenous thrombolysis had better outcomes and a lower rate of hemorrhage even after adjusting for confounding factors. Stroke severity was an independent factor associated with better functional outcomes, sICH and mortality. abstract_id: PUBMED:30464112 Low Cholesterol Levels Increase Symptomatic Intracranial Hemorrhage Rates After Intravenous Thrombolysis: A Multicenter Cohort Validation Study. Aim: Although a lower level of non-high-density lipoprotein cholesterol (HDL-C) was reported to be inversely associated with spontaneous intracranial hemorrhage (ICH), not enough evidence has verified whether lipid profiles modify hemorrhagic transformation and functional outcomes in patients with acute ischemic stroke treated with thrombolysis. Methods: This multicenter cohort study included 2373 patients with acute ischemic stroke treated with intravenous thrombolysis between December 2004 and December 2016. Of these, 1845 patients were categorized into either the hyperlipidemia or non-hyperlipidemia group. Symptomatic ICH (SICH) rates within 24-36 h of thrombolytic onset and functional outcomes at 30 and 90 days were longitudinally surveyed. Models of predicting hemorrhagic transformation were used to validate our findings. Results: For the 1845 enrolled patients, SICH rates were reduced ≥2-fold for the hyperlipidemia group by the NINDS (adjusted RR: 0.488 [0.281-0.846], p=0.0106), the ECASS II (adjusted RR: 0.318 [0.130-0.776], p=0.0119), and SITS-MOST standards (adjusted RR: 0.214 [0.048-0.957], p=0.0437). The favorable functional rates between the two groups were not significantly different. Lower levels of LDL-C showed a robust association with SICH. With a cut-off LDL-C value of <130 mg/dL, new models are more robust and significant in predicting hemorrhagic transformation within 24-36 h.
Conclusions: This study supports a strong association between reduced LDL-C and increased SICH, but not between LDL-C and functional outcomes, in patients with acute ischemic stroke treated with intravenous thrombolysis. An LDL-C level of <130 mg/dL is proposed as a candidate marker for predicting SICH within 24-36 h. abstract_id: PUBMED:34515074 Contemporary Trends in the Treatment of Mild Ischemic Stroke with Intravenous Thrombolysis: Paul Coverdell National Acute Stroke Program. Background: Presentation with mild symptoms is a common reason for intravenous thrombolysis (IVT) nonuse among acute ischemic stroke (AIS) patients. We examined the impact of IVT on the outcomes of mild AIS over time. Methods: Using the Paul Coverdell National Stroke Program data, we examined trends in IVT utilization from 2010 to 2019 among AIS patients presenting with National Institutes of Health Stroke Scale (NIHSS) scores ≤5. Outcomes adjudicated included rates of discharge to home and ability to ambulate independently at discharge. We used generalized estimating equation models to examine the effect of IVT on outcomes of AIS patients presenting with mild symptoms and calculated adjusted odds ratios (AOR) with 95% confidence intervals (CI). Results: During the study period, 346,762 patients presented with mild AIS symptoms. Approximately 6.2% were treated with IVT. IVT utilization trends increased from 3.7% in 2010 to 7.7% in 2019 (p < 0.001). Patients treated with IVT had higher median NIHSS scores upon presentation (IVT 3 [2, 4] vs. no IVT 2 [0, 3]). Rates of discharge to home (AOR 2.06, 95% CI: 1.99-2.13) and ability to ambulate at time of discharge (AOR 1.82, 95% CI: 1.76-1.89) were higher among those treated with IVT. Conclusion: There was an increasing trend in IVT utilization among AIS patients presenting with mild symptoms. Utilization of IVT increased the odds of being discharged to home and the ability to ambulate independently at discharge in patients with mild stroke.
The estimated mean annual number of intravenous thrombolysis treatments was 142.0 per million inhabitants (95% CI 107.4-176.7) and 72.7 per 1000 annual incident strokes (95% CI 54.2-91.2); highest country rates were 412.2 and 205.5. Endovascular treatment was provided in 40/44 countries. The estimated mean annual number of endovascular treatments was 37.1 per million inhabitants (95% CI 26.7-47.5) and 19.3 per 1000 annual incident strokes (95% CI 13.5-25.1); highest country rates were 111.5 and 55.9. Overall, 7.3% of incident ischaemic stroke patients received intravenous thrombolysis (95% CI 5.4-9.1) and 1.9% received endovascular treatment (95% CI 1.3-2.5); highest country rates were 20.6% and 5.6%. Conclusion: We observed major inequalities in acute stroke treatment between and within 44 European countries. Our data will assist decision makers implementing tailored stroke care programmes for reducing stroke-related morbidity and mortality in Europe. abstract_id: PUBMED:28752508 Safety and Effectiveness of Intravenous Thrombolysis for Acute Ischemic Stroke Outside the Coverage of National Health Insurance in Taiwan. Purpose: Only a small percentage of ischemic stroke patients were treated with intravenous thrombolysis in Taiwan, partly because of the narrow reimbursement criteria of the National Health Insurance (NHI). We aimed to assess the safety and effectiveness of intravenous thrombolysis not covered by the NHI. Methods: This is a retrospective analysis of register data from four hospitals. All patients who received intravenous tissue plasminogen activator and fulfilled the American Heart Association/American Stroke Association (AHA/ASA) thrombolysis guidelines between January 2007 and June 2012 were divided into two groups: those in accordance (reimbursement group) and those not in accordance (non-reimbursement group) with the NHI reimbursement criteria. The primary outcome was symptomatic intracerebral hemorrhage (SICH). Secondary outcomes were dramatic improvement in the National Institutes of Health Stroke Scale (NIHSS) score at discharge, good functional outcome (modified Rankin Scale ≤2) at discharge, and all-cause in-hospital mortality. Results: Of 569 guideline-eligible patients, 177 (31%) were treated without reimbursement. The reasons for exclusion from reimbursement included age >80 (n=42), baseline NIHSS less than 6 (n=29), baseline NIHSS >25 (n=15), thrombolysis beyond 3 hours (n=49), prior stroke with diabetes (n=28), use of oral anticoagulant (n=2), and more than one contraindication (n=12). Overall, we observed no differences between the reimbursement and non-reimbursement groups in the rate of SICH (7% versus 6%), dramatic improvement (36% versus 36%), good functional outcome (39% versus 37%), and in-hospital mortality (8% versus 6%). Conclusion: In stroke patients treated with intravenous thrombolysis according to the AHA/ASA guidelines, the outcomes were comparable between the reimbursement and non-reimbursement groups.
Aims: We investigated recent trends in the utilization and outcomes of administration of intravenous recombinant tissue plasminogen activator in the United States using the National Inpatient Sample between 2001 and 2008. Methods: We identified patients with a primary diagnosis of acute ischemic stroke who underwent treatment with intravenous recombinant tissue plasminogen activator and studied utilization rates and clinical outcomes: discharge to long-term facility (morbidity), in-hospital death (mortality), and intracranial hemorrhage. Information on demographics, hospital characteristics, and comorbidities was collected. A multivariate logistic regression analysis was performed to determine independent predictors of morbidity, mortality, and intracranial hemorrhage. Results: Intravenous recombinant tissue plasminogen activator utilization increased from 1·3% in 2001 to 3·5% in 2008. On multivariate analysis, variables associated with increased morbidity after intravenous recombinant tissue plasminogen activator administration included advanced age (P < 0·001), female gender (P < 0·001), and comorbidities of atrial fibrillation (P < 0·001) and hypertension (P < 0·001). Increased mortality was associated with increased age (P < 0·001) and comorbidities of atrial fibrillation, congestive heart failure, coronary artery disease, and diabetes (P < 0·001 for all comorbidities). Conclusions: Intravenous recombinant tissue plasminogen activator utilization rates increased between 2001 and 2008. Advanced age and atrial fibrillation were significantly associated with increased morbidity and mortality among patients treated with intravenous recombinant tissue plasminogen activator. Answer: The study conducted in the Netherlands found that national intravenous thrombolysis rates for ischemic stroke more than doubled from 6.4% in 2005 to 14.6% in 2012. This increase was accompanied by a significant decline in mean door-to-needle time (DNT), from 72.7 minutes in 2005 to 41.4 minutes in 2012. However, the volume of stroke admissions was not an independent predictor for mean thrombolysis rate nor for mean DNT, suggesting that an increase in patient volume did not necessarily lead to better thrombolysis rates or faster DNT (PUBMED:27103535). In contrast, a simulation study assessing potential increases in intravenous thrombolysis (IVT) rates given particular interventions in the stroke care pathway suggested that improving IVT rates to well above 30% is possible if all known best practices are implemented. This study indicated that specific interventions, such as reducing patient response time, shortening door-to-needle time, and increasing 911 calls, could lead to significant improvements in IVT treatment rates and patient outcomes (PUBMED:31964668). Therefore, while the Dutch study did not find a direct correlation between the volume of stroke admissions and thrombolysis rates or DNT, the simulation study implies that targeted interventions can lead to higher IVT rates and better outcomes, suggesting that "bigger" in terms of higher patient volumes is not necessarily "better" unless accompanied by optimized stroke care processes.
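A note for readers reconciling the denominators used above: treatment rates are reported per million inhabitants, per 1000 annual incident strokes, and as a percentage of incident strokes, and these are simple ratios of the same counts (per 1000 incident strokes divided by 10 gives the percentage treated, e.g., 72.7 per 1000 ≈ 7.3%). A minimal Python sketch, with input numbers that are assumptions chosen only to be consistent with the reported European means, not data from any single country:

    # Illustrative conversion between the rate denominators used in the survey.
    # All inputs below are hypothetical, picked to roughly match the reported means.
    population = 10_000_000      # inhabitants of a hypothetical country
    incident_strokes = 19_500    # annual incident ischaemic strokes (assumed)
    ivt_count = 1_420            # annual IVT treatments (assumed)

    per_million = ivt_count / population * 1_000_000
    per_1000_strokes = ivt_count / incident_strokes * 1_000
    percent_treated = ivt_count / incident_strokes * 100

    print(f"IVT per million inhabitants: {per_million:.1f}")        # 142.0
    print(f"IVT per 1000 incident strokes: {per_1000_strokes:.1f}") # ~72.8
    print(f"% of incident strokes treated: {percent_treated:.1f}%") # ~7.3%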
Instruction: Does the presence of psychosocial "yellow flags" alter patient-provider communication for work-related, acute low back pain? Abstracts: abstract_id: PUBMED:19687758 Does the presence of psychosocial "yellow flags" alter patient-provider communication for work-related, acute low back pain? Objective: To determine whether patterns of patient-provider communication might vary depending on psychosocial risk factors for back disability. Methods: Working adults (N = 97; 64% men; median age = 38 years) with work-related low back pain completed a risk factor questionnaire and then agreed to have provider visits audiotaped. Verbal exchanges were divided into utterances and coded for content, then compared among low-, medium-, and high-risk patients. Results: Among high-risk patients only, providers asked more biomedical questions, patients provided more biomedical information, and providers used more language to engage patients and facilitate communication. There were no group differences in psychosocial exchanges. Conclusions: Clinicians may recognize the need for more detailed assessment of patients with multiple psychosocial factors, but increases in communication are focused on medical explanations and therapeutic regimen, not on lifestyle and psychosocial factors. abstract_id: PUBMED:34033963 Lack of Consensus Across Clinical Guidelines Regarding the Role of Psychosocial Factors Within Low Back Pain Care: A Systematic Review. It is widely accepted that psychosocial prognostic factors should be addressed by clinicians in their assessment and management of patients suffering from low back pain (LBP). On the other hand, an overview of how these factors are addressed in clinical LBP guidelines is missing. Therefore, our objective was to summarize and compare recommendations regarding the assessment and management of psychosocial prognostic factors for LBP chronicity, as reported in clinical LBP guidelines. We performed a systematic search of clinical LBP guidelines (PROSPERO registration number 154730). This search consisted of a combination of previously published systematic review articles and a new systematic search in medical or guideline-related databases. From the included guidelines, we extracted recommendations regarding the assessment and management of LBP which addressed psychosocial prognostic factors (ie, psychological factors ["yellow flags"], perceptions about the relationship between work and health ["blue flags"], system or contextual obstacles ["black flags"], and psychiatric symptoms ["orange flags"]). In addition, we evaluated the level or quality of evidence of these recommendations. In total, we included 15 guidelines. Psychosocial prognostic factors were addressed in 13 of 15 guidelines regarding their assessment and in 14 of 15 guidelines regarding their management. Recommendations addressing psychosocial factors almost exclusively concerned "yellow" or "black flags," and varied widely across guidelines. The supporting evidence was generally of very low quality. We conclude that in general, clinical LBP guidelines do not provide clinicians with clear instructions about how to incorporate psychosocial factors in LBP care and should be optimized in this respect. More specifically, clinical guidelines vary widely in whether and how they address psychosocial factors, and recommendations regarding these factors generally require better evidence support.
This emphasizes a need for a stronger evidence base underlying the role of psychosocial risk factors within LBP care, and a need for uniformity in methodology and terminology across guidelines. PERSPECTIVE: This systematic review summarized how clinical guidelines on low back pain (LBP) addressed the identification and management of psychosocial factors. This review revealed considerable variation across guidelines in whether and how psychosocial factors were addressed. Moreover, recommendations generally lacked detail and were based on low-quality evidence. abstract_id: PUBMED:17515940 Exploring general practitioner identification and management of psychosocial Yellow Flags in acute low back pain. Aim: Over the past decade, psychosocial issues have been increasingly identified as risk factors that are associated with the development of chronicity and disability. These psychosocial risk factors are known as Yellow Flags. In New Zealand, in 1997, the Accident Compensation Corporation (ACC) published the Acute Low Back Pain Guide and the Guide to Assessing Psychosocial Yellow Flags in Acute Low Back Pain. The aim of this qualitative study is to understand the experiences of general practitioners (GPs) in the identification and management of psychosocial Yellow Flags in patients with acute low back pain. Method: A qualitative research approach was used. GPs were purposively selected and semi-structured interviews were undertaken. Results: The doctor-patient relationship formed the key element for the GPs in approaching any psychosocial factors that were identified. The management of psychosocial factors depended on an individual GP's worldview and orientation to the biopsychosocial model of pain. Time-management problems were multifactorial. Funding, lack of appropriate training, and the GPs' perception of ACC's rehabilitation model all formed components of the meanings that the GPs constructed from their experiences. Conclusion: GPs did not use the Guide to Assessing Psychosocial Yellow Flags in Acute Low Back Pain or the screening questionnaire to identify psychosocial risk factors in their patients with low back pain. Investment of resources in GPs is needed to empower them to be effective gatekeepers guarding against chronicity. This demonstrates a need to alter the current ACC Guideline dissemination and implementation. abstract_id: PUBMED:35655691 Provider-patient communication: an illustrative case report of how provider language can influence patient prognosis. Patient-provider communication can lead to unhelpful ideas and beliefs about a patient's condition, negatively impacting their clinical outcome. A 34-year-old male Veteran presented for an evaluation of high-impact chronic low back pain. Previous interactions with various healthcare providers had resulted in the Veteran viewing his condition as ominous and in need of intervention; however, clinical findings did not support these beliefs. Our Veteran underwent six visits in the chiropractic clinic with treatment consisting of pain education, utilization of cognitive behavioral principles, active home care exercises and spinal manipulation, resulting in improvements in functional and objective outcome measures. This case report highlights the impact of misalignment between an early-contact healthcare provider and a patient's misunderstanding of their condition on long-term outcomes.
It serves as an example of how physicians' use of pathoanatomic explanations to describe a patient's chronic low back pain can alter the patient's beliefs about their condition. abstract_id: PUBMED:29686480 Physical therapy clinical specialization and management of red and yellow flags in patients with low back pain in the United States. Objectives: Physical therapists (PTs) may practice in direct access or act as primary care practitioners, which necessitates screening and management of patients for red, orange and yellow flags. The objective of the project was to assess American PTs' ability to manage red, orange and yellow flags in patients with low back pain (LBP), and to compare this ability among PTs with different qualifications. Methods: The project was an electronic cross-sectional survey. The investigators contacted 2,861 PTs. Participants made clinical decisions for three vignettes: LBP with a red flag for ectopic pregnancy, with an orange flag for depression and with a yellow flag for fear avoidance behaviour (FAB). The investigators used logistic regression to compare management of warning flags among PTs with distinct qualifications: orthopaedic clinical specialists (PTOs), fellows of the AAOMPT (PTFs), PTOs and PTFs (PTFOs), and PTs without clinical specialization (PTMSs). Results: A total of 410 PTs completed all sections of the survey (142 PTOs, 110 PTFOs, 74 PTFs and 84 PTMSs). Two hundred and seventeen PTs (53%) managed the patient with LBP and symptoms of ectopic pregnancy correctly, 115 PTs (28.5%) managed the patient with LBP and symptoms of depression correctly, and 177 (43.2%) managed the patient with LBP and FAB correctly. Discussion: In general, PTs with specialization performed significantly better than PTMSs in all three clinical vignettes. PTs' ability to manage patients with warning flags was relatively low. Based on our results, further education on patients with LBP and warning flags is needed. The survey had the potential for non-response and self-selection bias. Level Of Evidence: 3b. abstract_id: PUBMED:21197284 Yellow flag scores in a compensable New Zealand cohort suffering acute low back pain. Background: Despite its high prevalence, most acute low back pain (ALBP) is nonspecific and self-limiting, with no definable pathology. Recurrence is prevalent, as is resultant chronicity. Psychosocial factors (yellow flags comprising depression and anxiety, negative pain beliefs, job dissatisfaction) are associated with the development of chronic LBP. Methods: A national insurer (Accident Compensation Corporation, New Zealand [NZ]), in conjunction with a NZ primary health organization, piloted a strategy for more effective management of patients with ALBP, by following the NZ ALBP Guideline. The guidelines recommend the use of a psychosocial screening instrument (Yellow Flags Screening Instrument, a derivative of the Örebro Musculoskeletal Pain Questionnaire). This instrument was recommended for administration on the second visit to a general medical practitioner (GP). This paper tests whether published cut-points of yellow flag scores for predicting LBP claim length and costs were valid in this cohort. Results: Data were available for 902 claimants appropriately enrolled into the pilot. 25% of claimants consulted the GP once only, and thus were not requested to provide a yellow flag score. Yellow flag scores were provided by 48% of claimants who used two or more GP services. Approximately 60% of LBP presentations resolved within five GP visits.
Yellow flag scores were significantly and positively associated with treatment costs and service use, although the association was nonlinear. Claimants with moderate yellow flag scores were as likely to incur lengthy claims as claimants with at-risk scores. Discussion: Capturing data on psychosocial factors for compensable patients with ALBP has merit in predicting lengthy claims. The validity of the published yellow flag cut-points requires further testing. abstract_id: PUBMED:21185221 Rethinking yellow flags. The use of Yellow Flags has become widespread in clinical practice, as a means to identify clients with low back pain who might not respond favourably to physical treatments. However, using questionnaires to identify psychosocial risk factors that can result in ongoing pain and suffering is not a straightforward matter, and if used without due thought could result in an impoverished service for the client. This discussion article aims to raise awareness of the issues that emerge when relying on Yellow Flags, including the practicalities of using forced-choice questionnaires to identify complex interactions between a client's social environment and their psychological state. Yellow Flags are based on a biopsychosocial model of health, yet this paper argues that the use of Yellow Flags, in practice, belongs within a reductionist paradigm. By calling attention to the issues raised, we envisage a better utilization of the biopsychosocial model, whereby taking account of a client's unique experience and meaning of pain will enable the individual to be managed with a more genuine and insightful understanding than seemingly occurs at the present time. abstract_id: PUBMED:32703922 Ability of Spine Specialists to Identify Psychosocial Risk Factors as Obstacles to Recovery in Patients with Low Back Pain-Related Disorders. Study Design: Prospective study. Purpose: Yellow flags are psychosocial factors associated with a greater likelihood of progression to persistent pain and disability. These are referred to as obstacles to recovery. Despite their recognized importance, it is unknown how effective clinicians are in detecting them. The primary objective of this study was thus to determine the effectiveness of spine specialist clinicians in detecting the presence of yellow flags in patients presenting to an orthopedic outpatient clinic with low back-related disorders. Overview Of Literature: Psychosocial factors have been previously studied as important predictors of prognosis in patients with low back pain. However, the ability of spinal specialists to identify them remains unknown. Methods: A prospective, single-center, consecutive cohort study was conducted over a period of 30 months. All new patients with low back-related disorders, regardless of pathology, completed a Yellow Flag Questionnaire that was adapted from the psychosocial flags framework. Clinicians assessing these patients completed a standardized form to determine which and how many yellow flags they had identified during the consultation. Results: A total of 130 patients were included in the analysis, and the patients reported an average of 5 flags (range, 0-9). Fear of movement or injury was the most frequently reported yellow flag, reported by 87.7% (n=114) of patients. Clinician sensitivity in detecting yellow flags was poor, correctly identifying only 2 flags, on average, of the 5 reported by patients, with an overall sensitivity of only 39%.
Conclusions: The ability of spine specialists to identify yellow flags is poor and can be improved by asking patients to complete a simple screening questionnaire. abstract_id: PUBMED:16217244 Perceptions of provider communication and patient satisfaction for treatment of acute low back pain. Objective: We sought to assess the relationship between perceptions of provider communication and treatment satisfaction for acute, work-related low-back pain (LBP). Methods: In a prospective cohort study, 544 working adults (67% men) with acute LBP provided 1- and 3-month assessments of pain, function, and work status. Results: In a multiple regression analysis, positive provider communication (took problem seriously, explained condition clearly, tried to understand my job, advised to prevent re-injury) explained more variation in patient satisfaction at 1 month than was explained by clinical improvements in pain and function. At 3 months, clinical improvement variables surpassed provider communication as predictors of patient satisfaction. Conclusions: Patients with work-related LBP place a high value on provider counseling and education, especially during the acute stage (<1 month) of treatment. abstract_id: PUBMED:29959102 Sensitivity and specificity of patient-entered red flags for lower back pain. Background Context: Red flags are questions typically ascertained by providers to screen for serious underlying spinal pathologies. The utility of patient-reported red flags in guiding clinical decision-making for spine care, however, has not been studied. Purpose: The aim of this study was to quantify the sensitivity and specificity of patient-reported red flags in predicting the presence of serious spinal pathologies. Study Design: This was a retrospective nested case-control study. Patient Sample: This study consisted of 120 patients with International Classification of Diseases, Ninth Revision, Clinical Modification codes for spinal pathologies and 380 randomly selected patients from a population of 4,313 patients seen at a large tertiary care spine clinic between October 9, 2013 and June 30, 2014. Outcome Measures: The presence of patient-reported red flags and red flags obtained from medical records was verified by chart review. The spinal pathology (ie, malignancy, fractures, infections, or cauda equina syndrome) was noted for each patient. Methods: The sensitivity and specificity of patient-reported red flags for detecting serious spinal pathologies were calculated from data obtained from the 500 patients. Youden's J was used to rank performance. Agreement between patient-reported red flags and those obtained from medical record review was assessed via Cohen's kappa statistic. Results: "History of cancer" was the best-performing patient-reported red flag to identify malignancy (sensitivity=0.75 [95% confidence interval, CI 0.53-0.90], specificity=0.79 [95% CI 0.75-0.82]). The best-performing patient-reported red flag for fractures was the presence of at least one of the following: "Osteoporosis," "Steroid use," and "Trauma" (sensitivity=0.59 [95% CI 0.44-0.72], specificity=0.65 [95% CI 0.60-0.69]). The prevalence of infection and cauda equina diagnoses was insufficient to gauge sensitivity and specificity. Red flags from medical records had better performance than patient-reported red flags. There was poor agreement between patient red flags and those obtained from medical record review.
Conclusions: Patient-reported red flags had low sensitivity and specificity for identification of serious pathologies. They should not be used in isolation to make treatment decisions, although they may be useful to prompt further probing to determine if additional investigation is warranted. Answer: Yes, the presence of psychosocial "yellow flags" does alter patient-provider communication for work-related, acute low back pain. A study found that among high-risk patients, providers asked more biomedical questions, patients provided more biomedical information, and providers used more language to engage patients and facilitate communication. However, there were no group differences in psychosocial exchanges, indicating that while clinicians may recognize the need for more detailed assessment of patients with multiple psychosocial factors, increases in communication are focused on medical explanations and therapeutic regimen, not on lifestyle and psychosocial factors (PUBMED:19687758). This finding suggests that while healthcare providers may adjust their communication strategies when they perceive a patient to be at high risk of back disability due to psychosocial factors, they may not necessarily focus on the psychosocial aspects in their discussions. This could be a missed opportunity to address important psychosocial risk factors that are associated with the development of chronicity and disability in patients with low back pain.
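Several abstracts in this record (e.g., PUBMED:29959102, PUBMED:32703922) report screening performance as sensitivity, specificity, and Youden's J. For readers unfamiliar with how such figures are derived, here is a minimal Python sketch computing them from a 2x2 confusion table; the counts are hypothetical, chosen only to roughly reproduce the "history of cancer" red-flag figures (sensitivity 0.75, specificity 0.79), and are not data from the study:

    # Screening-test metrics from a 2x2 table. All counts are hypothetical.
    tp, fn = 18, 6       # disease present: flag reported / flag not reported
    fp, tn = 100, 376    # disease absent:  flag reported / flag not reported

    sensitivity = tp / (tp + fn)                 # 18/24 = 0.75
    specificity = tn / (tn + fp)                 # 376/476 ~ 0.79
    youden_j = sensitivity + specificity - 1     # the statistic used to rank red flags

    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, J={youden_j:.2f}")

Youden's J rewards flags that raise sensitivity without sacrificing specificity, which is why it is a common single-number ranking for screening items.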
Instruction: Positron emission tomography/computed tomography exam request form under review. Is it effective? Abstracts: abstract_id: PUBMED:22083005 Positron emission tomography computed tomography in oncology. The role of positron emission tomography computed tomography in oncological imaging has rapidly evolved. It has proven itself to be cost-effective and alters patient management in a significant proportion of cases. This article discusses its current and future applications. abstract_id: PUBMED:18795494 Pediatric positron emission tomography-computed tomography protocol considerations. Pediatric body oncology positron emission tomography-computed tomography studies require special considerations for optimal diagnostic performance while limiting radiation exposure to young patients. Differences from routine adult procedures include the patient preparation phase, radiopharmaceutical dose, computed tomography acquisition parameters, and approach to computed tomography contrast materials and imaging sequence. Attention to these differences defines the best practice for positron emission tomography-computed tomography examinations of children with cancer, contributing to optimal care of these patients. abstract_id: PUBMED:21507695 Value of positron emission tomography and computer tomography (PET/CT) for urologic malignancies Positron emission tomography is a functional imaging technique that allows the detection of the regional metabolic rate, and is often coupled with other morphological imaging techniques such as computed tomography. The rationale for its use is based on the clearly demonstrated fact that functional changes in tumor processes happen before morphological changes. Its introduction into clinical practice added a new dimension to conventional imaging techniques. This review presents the current and proposed indications for the use of positron emission tomography/computed tomography for prostate, bladder and testes, and the potential role of this exam in radiotherapy planning. abstract_id: PUBMED:23094230 18F-2-Deoxy-2-Fluoro-D-Glucose Positron Emission Tomography: Computed Tomography for Preoperative Staging in Gastric Cancer Patients. Purpose: The use of 18F-2-deoxy-2-fluoro-D-glucose positron emission tomography-computed tomography as a routine preoperative modality is increasing for gastric cancer despite controversy about its usefulness in preoperative staging. In this study we aimed to determine the usefulness of preoperative positron emission tomography-computed tomography scans for staging of gastric cancer. Materials And Methods: We retrospectively analyzed 396 patients' positron emission tomography-computed tomography scans acquired for preoperative staging from January to December 2009. Results: The sensitivity of positron emission tomography-computed tomography for detecting early gastric cancer was 20.7% and it was 74.2% for advanced gastric cancer. The size of the primary tumor was correlated with sensitivity, and there was a positive correlation between T stage and sensitivity. For regional lymph node metastasis, the sensitivity and specificity of the positron emission tomography-computed tomography were 30.7% and 94.7%, respectively. There was no correlation between T stage and maximum standardized uptake value or between tumor markers and maximum standardized uptake value. Fluorodeoxyglucose uptake was detected by positron emission tomography-computed tomography in 24 lesions other than the primary tumors.
Among them, nine cases were found to be malignant, including double primary cancers and metastatic cancers. Only two cases were detected purely by positron emission tomography-computed tomography. Conclusions: Positron emission tomography-computed tomography could be useful in detecting metastasis or another primary cancer for preoperative staging in gastric cancer patients, but not for T or N staging. More prospective studies are needed to determine whether positron emission tomography-computed tomography scans should be considered a routine preoperative imaging modality. abstract_id: PUBMED:8782973 Positron emission tomography and single photon emission computed tomography. Neuroimaging techniques have had a dramatic impact on the evaluation and treatment of patients with epilepsy. In order to take full advantage of their potential, it is important to place them in clinical and electrophysiological context and to understand their technical limitations. Positron emission tomography with 18F-2-deoxyglucose and single photon emission computed tomography can provide valuable data for presurgical localization of epileptogenic zones. However, interictal cerebral blood flow studies using either positron emission tomography or single photon emission computed tomography are unreliable. Positron emission tomography cerebral blood flow activation studies, on the other hand, are becoming very useful for presurgical cognitive mapping and may be able to replace the intracarotid amytal test for language and memory lateralization. There are a number of receptor ligands available for both positron emission tomography and single photon emission computed tomography studies, including benzodiazepine, opiate, and cholinergic tracers. Increased mu opiate, decreased benzodiazepine, and increased monoamine oxidase B receptor binding have been reported. abstract_id: PUBMED:31293315 Pheochromocytoma: Positive on 131I-MIBG Single-Photon Emission Computed Tomography-Computed Tomography and Negative on 68Ga DOTANOC Positron Emission Tomography-Computed Tomography. Pheochromocytomas are tumors arising from sympathetic lineage-derived cells in the adrenal medulla, and 68Ga DOTANOC positron emission tomography-computed tomography (PET-CT) has been found to be superior to 131I MIBG single-photon emission computed tomography-computed tomography (SPECT-CT) for initial localization/diagnosis of the adrenal lesion. We discuss the 68Ga DOTANOC PET-CT and 131I MIBG SPECT-CT findings of a 24-year-old male who presented with clinical and biochemical findings suspicious of pheochromocytoma. abstract_id: PUBMED:25621021 Positron emission tomography/computed tomography for bone tumors (Review). The aim of the present study was to investigate positron emission tomography (PET)/computed tomography (CT) and its applications for the diagnosis and treatment of bone tumors. The advantages and disadvantages of PET/CT were also evaluated and compared with other imaging methods, and the prospects of PET/CT were discussed. The PubMed, Medline, Elsevier, Wanfang and China International Knowledge Infrastructure databases were searched for studies published between 1995 and 2013, using the terms 'PET/CT', 'positron emission tomography', 'bone tumor', 'osteosarcoma', 'giant cell bone tumor' and 'Ewing sarcoma'. All the relevant information was extracted and analyzed. A total of 73 studies were selected for the final analysis.
The extracted information indicated that at present, PET/CT is the imaging method that exhibits the highest sensitivity, specificity and accuracy. Although difficulties and problems remain to be solved, PET/CT is a promising non-invasive method for the diagnostic evaluation of, and clinical guidance for, bone tumors. abstract_id: PUBMED:29142348 18F-Fluorodeoxyglucose-Positron Emission Tomography/Computed Tomography in Tuberculosis: Spectrum of Manifestations. The objective of this article is to provide an illustrative tutorial highlighting the utility of 18F-fluorodeoxyglucose-positron emission tomography/computed tomography (18F-FDG-PET/CT) imaging to detect the spectrum of manifestations in patients with tuberculosis (TB). FDG-PET/CT is a powerful tool for early diagnosis, measuring the extent of disease (staging), and consequently for evaluation of response to therapy in patients with TB. abstract_id: PUBMED:33062275 Modern radiopharmaceuticals for lung cancer imaging with positron emission tomography/computed tomography scan: A systematic review. Introduction: In this study, we evaluated the use and the contribution of radiopharmaceuticals to the field of lung neoplasm imaging using positron emission tomography/computed tomography. Methods: We conducted a review of the current literature in PubMed/MEDLINE up to February 2020. The search language was English. Results: The most widely used radiopharmaceuticals are the following. Experimental/pre-clinical approaches: (18)F-Misonidazole (18F-MISO, under clinical development), (18)F-Fluoro-Methyl-Tyrosine (18F-FMT; L-[3-(18)F]-FAMT), (18)F-Fluorothymidine (18F-FLT), (18)F-Fluoro-Azomycin-Arabinoside (18F-FAZA), (68)Ga-Neomannosylated-Human-Serum-Albumin (68Ga-MSA), (68)Ga-Tetraazacyclododecane (68Ga-DOTA, as a theranostic agent), (11)C-Methionine (11C-MET), 18F-FPDOPA, αvβ3 integrin ligands (68Ga-RGD2, 64Cu-DOTA-RGD, 18F-Alfatide), folate radiotracers, and immuno-positron emission tomography radiopharmaceutical agents. Clinically approved procedures/radiopharmaceutical agents: (18)F-Fluoro-Deoxy-Glucose (18F-FDG), (18)F-sodium fluoride (18F-NaF) (bone metastases), and (68)Ga-Tetraazacyclododecane (68Ga-DOTA). The quantitative determination of, and the change in, radiopharmaceutical uptake parameters such as standardized uptake value, metabolic tumor volume, total lesion glycolysis, FAZA tumor-to-muscle ratio, standardized uptake value tumor-to-liver ratio, standardized uptake value tumor-to-spleen ratio, standardized uptake value maximum ratio, and the degree of hypoxia have prognostic and predictive (concerning the therapeutic outcome) value. They have been associated with the assessment of overall survival and disease-free survival. With the positron emission tomography/computed tomography radiopharmaceuticals, the sensitivity and the specificity of the method have increased. Conclusion: In terms of lung cancer, positron emission tomography/computed tomography may have clinical application and utility (a) in personalizing treatment, (b) as a biomarker for the estimation of overall survival and disease-free survival, and (c) in enabling a cost-effective patient approach, because it reveals foci of disease that are not found with other imaging methods. abstract_id: PUBMED:25860265 Positron emission tomography/computed tomography for lung cancer staging Background: PET/CT (Positron Emission Tomography/Computed Tomography) is widely used in nodal and metastatic staging of lung cancer patients.
Aim: To analyze PET/CT detection of metastatic disease in patients with lung cancer. Material And Methods: We retrospectively reviewed 18F-fluorodeoxyglucose PET/CT scans performed between December 2008 and December 2013. We selected 143 patients aged 30 to 92 years (63% males) with confirmed lung cancer referred for staging, with no previous treatment. We reviewed whole body PET/CT and brain magnetic resonance images. SUVmax (Standardized Uptake Value maximum) of the primary pulmonary lesion, hilar/mediastinal nodes, and distant metastases were calculated. Results: Histological types encountered were adenocarcinoma in 55%, squamous-cell in 15%, small-cell in 8%, large-cell in 6% and adeno-squamous in 2%. In 22 cases (15%) histology was not available. Nodal involvement was observed in 60% of patients (44% hilar and 48% mediastinal). Skip metastases (mediastinal involvement without hilum involvement) were encountered in 17% of cases, and were significantly more common among high-uptake lung tumors (p < 0.01). The best SUVmax cut-off for node involvement was 4.4 for the hilum and 4.0 for the mediastinum (sensitivity: 86.4%, specificity: 99.8%). Sixty-six patients (46.2%) showed distant metastases on PET/CT. The most common metastases were osseous in 22%, adrenal in 16%, hepatic in 14%, pulmonary in 14% and cerebral in 12%. PET/CT detected a second unexpected synchronous cancer in eight patients (6%). Conclusions: PET/CT is accurate for nodal staging using an uptake index such as SUVmax. Distant metastases are common, especially in bone, adrenal glands and liver. Answer: Positron emission tomography-computed tomography (PET/CT) has become a significant tool in oncological imaging and is considered effective in various aspects of cancer management. It has been shown to be cost-effective and can alter patient management in a significant proportion of cases (PUBMED:22083005). PET/CT is particularly useful in the staging of cancers, such as lung cancer, where it can accurately detect nodal and metastatic disease, and even identify unexpected synchronous cancers (PUBMED:25860265). It has also been found to be useful in preoperative staging for gastric cancer patients, although its sensitivity varies with the stage of the cancer and it may not be as effective for T or N staging (PUBMED:23094230). In pediatric oncology, PET/CT requires special considerations to optimize diagnostic performance while limiting radiation exposure to young patients (PUBMED:18795494). For urologic malignancies, PET/CT is valuable due to its ability to detect functional changes in tumors before morphological changes become apparent (PUBMED:21507695). In the context of bone tumors, PET/CT is considered to have the highest sensitivity, specificity, and accuracy among imaging methods (PUBMED:25621021). PET/CT is also used beyond oncology, for instance, in the evaluation and treatment of patients with epilepsy, where it can provide valuable data for presurgical localization of epileptogenic zones and cognitive mapping (PUBMED:8782973). Additionally, it has utility in the early diagnosis and evaluation of response to therapy in patients with tuberculosis (PUBMED:29142348). In summary, the PET/CT exam is effective and has a broad range of applications in oncology and other medical fields, contributing to the diagnosis, staging, and treatment planning of various diseases. However, its effectiveness can vary depending on the type of cancer, the stage of the disease, and the specific clinical scenario.
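A note on the "best SUVmax cut-off" reported in PUBMED:25860265: such cut-points are typically chosen from a receiver operating characteristic (ROC) curve, for example by maximizing Youden's J across candidate thresholds. The following is a minimal Python sketch of that generic procedure on synthetic SUVmax values; the data are invented for illustration, scikit-learn is assumed available, and this is not the study's actual analysis code:

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    # Synthetic SUVmax values: node-positive cases tend to show higher uptake.
    suv_negative = rng.normal(2.5, 1.0, 200).clip(min=0)  # pathologically negative nodes
    suv_positive = rng.normal(7.0, 2.5, 80).clip(min=0)   # pathologically positive nodes

    y_true = np.r_[np.zeros(200), np.ones(80)]
    y_score = np.r_[suv_negative, suv_positive]

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr                 # Youden's J at each candidate threshold
    best = int(np.argmax(j))
    print(f"best SUVmax cut-off ~ {thresholds[best]:.1f} "
          f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")

The reported pair (sensitivity 86.4%, specificity 99.8%) corresponds to one such operating point on the study's own ROC curve.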
Instruction: Do Changes in Perioperative and Postoperative Treatment Protocol Influence the Frequency of Pulmonary Complications? Abstracts: abstract_id: PUBMED:27220851 Do Changes in Perioperative and Postoperative Treatment Protocol Influence the Frequency of Pulmonary Complications? A Retrospective Analysis of Four Different Bariatric Groups. The current understanding of prophylaxis of pulmonary complications in bariatric surgery is weak. Purpose: The aim of this study was to observe how changes in perioperative and postoperative treatments affect the incidence of pulmonary complications in bariatric patients. Materials: This is a retrospective clinical study of 400 consecutive bariatric patients. The patients, who underwent either a sleeve gastrectomy or a Roux-en-Y gastric bypass, were divided consecutively into four subgroups with different approaches to perioperative treatment. Methods: The first group (patients 0-100) was recovered in the intensive care unit with minimal mobilization (ICU). They had a urinary catheter and a drain. The second group (patients 101-200) was similar to the first group, but the patients used a continuous positive airway pressure (CPAP) device intermittently (ICU-CPAP). The third group (patients 201-300) was recovered on a normal ward without a urinary catheter or a drain and used a CPAP device (ward-slow). The fourth group (patients 301-400) walked to the operating theater and was mobilized in the recovery room during the first 2 h after the operation (ward-fast). CPAP was also used. Primary endpoints were pulmonary complications, pneumonia, and infection, non-ultra descriptus (NUD). Results: The number of pulmonary complications among the groups was significantly different. A long operation time increased the risk for infection (p < 0.001; 95% CI 2.02-6.59%). Conclusions: Operation time increases the risk for pulmonary complications. Changes in perioperative care toward the ERAS protocol may have a positive effect on the number of pulmonary complications. abstract_id: PUBMED:1983491 Perioperative respiratory therapy and postoperative pain therapy Patients with preexisting bronchopulmonary diseases, and those undergoing upper abdominal operations or thoracotomies, are especially susceptible to postoperative pulmonary complications. All patients at risk should learn the prophylactic respiratory maneuvers preoperatively. Perioperative use of incentive spirometers, breathing exercises or IPPB seems to reduce the incidence of postoperative pulmonary complications. Opioids are usually used for postoperative pain management, but unfortunately they are given mainly as i.m. injections, although i.v. administration would be far better. If given in an equipotent dose, nearly every opioid provides sufficient postoperative analgesia. Wide interindividual variation in the needed dose requires that opioids be titrated intravenously. abstract_id: PUBMED:24835109 The effect of formalizing enhanced recovery after esophagectomy with a protocol. Enhanced recovery after surgery (ERAS) pathways aim to accelerate functional return and discharge from hospital. They have proven effective in many forms of surgery, most notably colorectal. However, experience in esophagectomy has been limited. A recent study reported significant reductions in pulmonary complications, mortality, and length of stay following the introduction of an ERAS protocol alone, without the introduction of any clinical changes.
We instituted a similar change 16 months ago, introducing a protocol to provide a formal framework for our existing postoperative care. This retrospective analysis compared outcomes following esophagectomy for the 16 months before and 20 months after this change. Data were collected from prospectively maintained secure web-based multidisciplinary databases. Complication severity was classified using the Clavien-Dindo scale. Operative mortality was defined as death within 30 days of surgery, or at any point during the same hospital admission. Lower respiratory tract infection was defined as clinical evidence of infection, with or without radiological signs. Respiratory complications included lower respiratory tract infection, pleural effusion (irrespective of drainage), pulmonary collapse, and pneumothorax. Statistical analysis was performed using SPSS v21. One hundred thirty-two patients underwent esophagectomy (55 protocol group; 77 before). All were performed open. There were no differences between the two groups in terms of age, gender, operation, use of neoadjuvant therapy, cell type, stage, tumor site, or American Society of Anesthesiologists grade. Median length of stay was 14.0 days (protocol) compared with 12.0 before (interquartile range 9-19 and 9.5-15.5, respectively; P = 0.073, Mann-Whitney U-test). Readmission within 30 days of discharge occurred in five (9.26%) and six (8.19%) patients, respectively (P = 1.000, Fisher's exact test). There were four in-hospital deaths (3.03%): one (1.82%) and three (3.90%), respectively (P = 0.641). There were no differences in the severity of complications (P = non-significant; Pearson's chi-squared). There were no differences in the type of complications occurring in either group. The protocol was completed successfully by 26 (47.3%). No baseline factors were predictive of this. In contrast to previous studies, we did not demonstrate any improvement in outcome by formalizing our existing pathway using a written protocol. Consequently, improvements in short-term outcome from esophagectomy within ERAS would seem to be primarily due to improvements in components of perioperative care. We would therefore recommend that centers introducing new (or reviewing existing) ERAS pathways for esophagectomy focus on optimizing clinical aspects of such standardized pathways. abstract_id: PUBMED:30219924 Intensive perioperative rehabilitation improves surgical outcomes after pancreaticoduodenectomy. Purpose: Although the mortality rate for pancreaticoduodenectomy (PD) has decreased to around 2.8-5% in high-volume centers, postoperative complications still occur in 30-50% of cases. Preoperative exercise, called "prehabilitation," has recently been reported to reduce the frequency of complications after surgery. This study aims to evaluate the impact of intensive perioperative rehabilitation on surgical outcomes for patients undergoing PD. Methods: Between 2003 and 2014, 576 consecutive patients underwent PD at Wakayama Medical University Hospital. Of these, 331 patients received perioperative rehabilitation combined with prehabilitation and postoperative rehabilitation between 2009 and 2014. Previously, 245 patients underwent PD without perioperative rehabilitation between 2003 and 2008. We compared surgical outcomes between the patients undergoing PD with and without perioperative rehabilitation to evaluate the efficacy of our rehabilitation program.
Results: The frequency of pulmonary complications was significantly lower in patients undergoing PD with perioperative rehabilitation than in those without (0.9% vs. 4.3%, P = 0.011). There were no significant differences in other complication or mortality rates. Length of hospital stay was also shorter in patients receiving perioperative rehabilitation than in those not receiving it (16 vs 24 days, P < 0.001). Conclusions: Intensive perioperative rehabilitation might reduce postoperative pulmonary complications and shorten postoperative hospital stay after PD. Therefore, we suggest that perioperative rehabilitation should be included as part of enhanced recovery after surgery for patients undergoing PD, although further large-scale studies are necessary to confirm our results. abstract_id: PUBMED:19144532 Short-term perioperative treatment with ambroxol reduces pulmonary complications and hospital costs after pulmonary lobectomy: a randomized trial. Objective: To assess in a randomized clinical trial the influence of perioperative short-term ambroxol administration on postoperative complications, hospital stay and costs after pulmonary lobectomy for lung cancer. Methods: One hundred and forty consecutive patients undergoing lobectomy for lung cancer (April 2006-November 2007) were randomized into two groups. Group A (70 patients): ambroxol was administered by intravenous infusion in the context of the usual therapy on the day of operation and on the first 3 postoperative days (1000 mg/day). Group B (70 patients): fluid therapy only, without ambroxol. Groups were compared in terms of occurrence of postoperative complications, length of stay and costs. Results: There were no dropouts from either group and no complications related to treatment. The two groups were well matched for perioperative and operative variables. Compared to group B, group A (ambroxol) had reduced rates of postoperative pulmonary complications (4 vs 13; 6% vs 19%; p=0.02) and unplanned ICU admission/readmission (1 vs 6; 1.4% vs 8.6%; p=0.1). Moreover, the postoperative stay and costs were reduced by 2.5 days (5.6 vs 8.1, p=0.02) and 2765 Euro (2499 Euro vs 5264 Euro, p=0.04), respectively. Conclusions: Short-term perioperative treatment with ambroxol improved early outcome after lobectomy and may be used to implement fast-tracking policies and cut postoperative costs. Nevertheless, other independent trials are needed to verify the effect of this treatment in different settings. abstract_id: PUBMED:38056860 ESSENSE Concept and Perioperative Management in Esophageal Cancer Treatment The actual perioperative management based on the philosophy of ESsential Strategy for Early Normalization after Surgery with patient's Excellent satisfaction (ESSENSE) in radical thoracic esophageal cancer surgery is described. ESSENSE, which is proposed by the Japanese Society of Surgical Metabolism and Nutrition to promote postoperative recovery, consists of four principles: reduction of invasive reactions, early independence of physical activity, early independence of nutrition intake, and perioperative anxiety reduction and motivation for recovery. Here, we describe the actual operation based on the ESSENSE philosophy in radical thoracic esophageal cancer surgery, which is classified as a highly invasive esophageal cancer operation. We have been performing perioperative management using the above protocol since April 2012. The outcomes of 334 patients up to April 2020 are described.
Preoperative chemotherapy was administered in 74% of patients, 70% underwent thoracoscopic surgery, 50% had Clavien-Dindo grade II or higher postoperative complications, and 14% had postoperative pneumonia. The mean postoperative bed rest was 1.6 days. This contributed to a shorter hospital stay and fewer pulmonary complications compared with previous management. The four principles of ESSENSE are useful for early recovery programs in Japan. ESSENSE should be implemented from this perspective according to the disease, medical facility, community, and family situation. abstract_id: PUBMED:19550338 The influence of perioperative oxygen concentration on postoperative lung function in moderately obese adults. Background And Objective: Obesity aggravates the negative effects of general anaesthesia and surgery on the respiratory system, resulting in decreased functional residual capacity and expiratory reserve volume, and increased atelectasis and ventilation/perfusion (Va/Q) mismatch. High inspired oxygen concentrations also promote atelectasis. This study compares the effects of perioperative low and high inspired oxygen concentrations on postoperative lung function and pulse oximetry values in moderately obese patients (BMI 25-35). Methods: We prospectively studied 142 overweight patients, BMI 25-35, undergoing minor peripheral surgery; they were randomly allocated to receive either low or high inspired oxygen concentrations during general anaesthesia. Premedication, general anaesthesia and respiratory patterns were standardized. Arterial oxygen saturation (pulse oximetry) was measured on air breathing. Inspiratory and expiratory lung functions were measured preoperatively (baseline) and at 10 min, 0.5, 2 and 24 h after extubation with the patient supine, in a 30 degrees head-up position. The two groups were compared using repeated-measures analysis of variance and t-tests. Results: The low inspired oxygen group had significantly better arterial saturation during the first 24 h (P < 0.01). Mid-expiratory flow 25 values indicating small airway collapse were significantly better in the low-oxygen group at all measurements (P < 0.05). Conclusion: We conclude that postoperative lung function and arterial saturation are better preserved by a low-oxygen strategy, although it is not clear whether this has clinical relevance for the prevention of postoperative pulmonary complications. abstract_id: PUBMED:27272667 Influence of Liver Disease on Perioperative Outcome After Bariatric Surgery in a Northern German Cohort. Objectives: The aim of this study was to assess the prevalence of non-alcoholic fatty liver disease (NAFLD) in morbidly obese patients and evaluate its influence on perioperative complications. Background: Patients undergoing bariatric surgery have a high incidence of non-alcoholic steatohepatitis (NASH). Emerging data indicate that liver disease has a significant effect on perioperative complications. However, the influence of NAFLD/NASH on perioperative outcome in bariatric patients is still controversial. Methods: We identified a total of 302 patients who underwent concomitant liver biopsy during either laparoscopic Roux-en-Y gastric bypass or sleeve gastrectomy. Liver biopsy was performed in cases of abnormal liver appearance at the time of bariatric surgery. Histological results were compared to perioperative complication rates. Results: NAFLD was common in our patient cohort. Abnormal findings in liver histology were found in 82.3% of our patients.
Liver cirrhosis was newly diagnosed in 12 patients (4%). There were no complications due to liver biopsy. The mortality rate was 0.3%, leakage rate was 1%, and postoperative bleeding occurred in 3.3%. Pulmonary complications were observed in 1.7% and cardiovascular complications in 1.3%. One patient developed portal vein thrombosis and one patient acute pancreatitis; both were treated conservatively. No patient had postoperative liver failure. We found no association between histological findings and perioperative outcomes. Conclusions: The prevalence of NAFLD among morbidly obese surgical patients was high, although this condition was not associated with increased risk for postoperative complications. Because of unexpected findings in intraoperative liver biopsies, the routine indication of liver biopsies in patients at high risk for liver disease should be discussed. abstract_id: PUBMED:38374814 Perioperative Risk Factors for Postoperative Pulmonary Complications After Minimally Invasive Esophagectomy. Background: Postoperative pulmonary complications (PPCs) are the most prevalent complication after esophagectomy and are associated with a worse prognosis. This study aimed to investigate the perioperative risk factors for PPCs after minimally invasive esophagectomy (MIE). Methods: Seven hundred and sixty-seven consecutive patients who underwent McKeown MIE via thoracoscopy and laparoscopy were retrospectively studied. Patient characteristics, perioperative data, and postoperative complications were analyzed. Results: The incidence of PPCs after MIE was 25.2% (193/767). Univariate analysis identified age (odds ratio [OR] 1.022, P = 0.044), male sex (OR 2.955, P < 0.001), pulmonary comorbidities (OR 1.746, P = 0.032), chronic obstructive pulmonary disease (COPD) (OR 2.821, P = 0.003), former smoking status (OR 1.880, P = 0.001), postoperative albumin concentration (OR 0.941, P = 0.007), postoperative creatinine concentration (OR 1.011, P = 0.019), and perioperative transfusion (OR 2.250, P = 0.001) as risk factors for PPCs. In multivariate analysis, the independent risk factors for PPCs were male sex (OR 3.135, P < 0.001), body mass index (BMI) (OR 1.088, P = 0.002), COPD (OR 2.480, P = 0.012), neoadjuvant chemoradiotherapy (OR 2.057, P = 0.035), postoperative albumin concentration (OR 0.929, P = 0.002), and perioperative transfusion (OR 1.939, P = 0.013). The area under the receiver operating characteristic curve for the predictive model generated by multivariate logistic regression analysis was 0.671 (95% confidence interval 0.628-0.713). Conclusions: Male sex, BMI, COPD, neoadjuvant chemoradiotherapy, postoperative albumin concentration, and perioperative transfusion were independent predictors of PPCs after MIE. abstract_id: PUBMED:26381728 Updates in perioperative medicine. Background: As our surgical population becomes older and more medically complex, knowledge of the most recent perioperative literature is necessary to optimize perioperative care. We aim to summarize and critique literature published over the past year with the highest potential to impact the clinical practice of perioperative medicine. Methods: We reviewed articles published between January 2014 and April 2015, identified via MEDLINE search. The final 10 articles selected were determined by consensus among all authors, with criteria for inclusion including scientific rigor and relevance to perioperative medicine practice.
Results: Key findings include the following: long-term β-blockade should be continued prior to surgery; routine screening with postoperative troponin is not recommended; initiation/continuation of aspirin or clonidine in the perioperative period is not beneficial and may increase adverse outcomes; preoperative diagnosis and treatment of obstructive sleep apnea may reduce the risk of postoperative cardiovascular complications; new pulmonary risk indices are available that accurately estimate postoperative pulmonary complications; postoperative atrial fibrillation is associated with increased long-term stroke risk; risk scores such as the CHADS2 (Congestive heart failure, Hypertension, Age ≥75 years, Diabetes Mellitus, previous stroke or transient ischemic attack) are superior to the Revised Cardiac Risk Index in predicting adverse postoperative outcomes for patients with nonvalvular atrial fibrillation; and bridging anticoagulation carries a much higher risk of bleeding than not bridging. Conclusions: The body of literature reviewed provides important information for clinicians caring for surgical patients across multiple fronts, including preoperative risk assessment, medication management, and postoperative medical care. Answer: Yes, changes in perioperative and postoperative treatment protocols can influence the frequency of pulmonary complications. A retrospective analysis of 400 consecutive bariatric patients showed that changes in perioperative care toward the Enhanced Recovery After Surgery (ERAS) protocol may have a positive effect on the number of pulmonary complications (PUBMED:27220851). Similarly, the use of perioperative respiratory therapy, such as incentive spirometers, breathing exercises, or intermittent positive-pressure breathing (IPPB), seems to reduce the incidence of postoperative pulmonary complications (PUBMED:1983491). The introduction of an ERAS protocol in esophagectomy did not demonstrate an improvement in outcome by formalizing existing pathways using a written protocol, suggesting that improvements in short-term outcome from esophagectomy within ERAS are primarily due to improvements in components of perioperative care (PUBMED:24835109). Intensive perioperative rehabilitation, including prehabilitation and postoperative rehabilitation, was shown to reduce postoperative pulmonary complications and shorten postoperative hospital stay after pancreaticoduodenectomy (PUBMED:30219924). Short-term perioperative treatment with ambroxol was found to reduce postoperative pulmonary complications and hospital costs after pulmonary lobectomy (PUBMED:19144532). The ESSENSE concept, which includes principles such as reduction of invasive reactions and early independence of physical activity, contributed to fewer pulmonary complications and a shorter hospital stay in patients undergoing radical thoracic esophageal cancer surgery (PUBMED:38056860). A study comparing the effects of perioperative inspired low-oxygen and high-oxygen concentrations on postoperative lung function in moderately obese patients found that a low-oxygen strategy better preserved postoperative lung function and arterial saturation (PUBMED:19550338). However, the prevalence of non-alcoholic fatty liver disease (NAFLD) among morbidly obese surgical patients was not associated with an increased risk for postoperative complications (PUBMED:27272667).
Finally, a study on minimally invasive esophagectomy identified male sex, body mass index, chronic obstructive pulmonary disease, neoadjuvant chemoradiotherapy, postoperative albumin concentration, and perioperative transfusion as independent predictors of postoperative pulmonary complications (PUBMED:38374814). Overall, these findings suggest that perioperative and postoperative treatment protocols can significantly impact the frequency of pulmonary complications.
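For readers connecting the statistics in PUBMED:38374814: the odds ratios come from exponentiated logistic-regression coefficients, and the 0.671 figure is the area under the model's ROC curve. The following is a minimal, self-contained Python sketch of that generic workflow on synthetic data; the predictors, effect sizes, and counts are all assumptions for illustration, scikit-learn is assumed available, and this is not the study's code:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 767  # cohort size borrowed from the abstract; everything else is invented
    # Synthetic predictors loosely mirroring three of the study's variables.
    male = rng.integers(0, 2, n)
    bmi = rng.normal(23, 3, n)
    copd = rng.integers(0, 2, n)

    # Synthetic outcome generated with assumed effect sizes.
    logit = -3.0 + 1.1 * male + 0.08 * bmi + 0.9 * copd
    ppc = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([male, bmi, copd])
    model = LogisticRegression().fit(X, ppc)

    odds_ratios = np.exp(model.coef_[0])  # OR > 1 means higher odds of a PPC
    auc = roc_auc_score(ppc, model.predict_proba(X)[:, 1])
    print("ORs (male, BMI, COPD):", odds_ratios.round(2), "AUC:", round(auc, 3))

An AUC of about 0.67, as in the study, indicates only modest discrimination: the model ranks a random PPC case above a random non-case roughly two times out of three.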
Instruction: Long-haul air travel before major surgery: a prescription for thromboembolism? Abstracts: abstract_id: PUBMED:15945525 Long-haul air travel before major surgery: a prescription for thromboembolism? Objective: To investigate the incidence of postoperative venous thromboembolism (VTE) in patients who had flown long distances before major surgery. Patients And Methods: Using the Mayo Clinic computerized patient database, we identified patients who had flown more than 5000 km before major surgery (travelers) and had experienced an episode of clinically significant VTE within 28 days after surgery. Individual medical records were reviewed for the diagnosis of VTE, pertinent risk factors, and outcome. We compared the incidence of VTE in travelers to the incidence of VTE in patients from North America (nontravelers) undergoing similar surgical procedures. Results: Eleven patients met our criteria for long-haul air travel and clinically significant VTE within 28 days after surgery. Compared with nontravelers undergoing similar surgical procedures, long-haul travelers had a higher incidence of VTE (4.9% vs 0.15%; P < .001). Compared with nontravelers who developed VTE, travelers were younger (P = .006), developed VTE earlier in the postoperative course (P = .01), had higher American Society of Anesthesiologists physical status classification (P = .02), and had a higher prevalence of smoking (P = .007). Of the 11 travelers with VTE, 10 were of Middle Eastern origin. Conclusion: Prolonged air travel before major surgery significantly increases the risk of perioperative VTE. Such patients should receive more intensive VTE prophylactic measures during the flight and throughout the perioperative period. abstract_id: PUBMED:11777295 Air travel and thrombosis. It is now generally accepted that there is a link between long haul air travel and venous thromboembolism. A similar risk is also recognised with other modes of transport including long coach and car journeys, and the term 'traveller's thrombosis' is to be encouraged instead of 'economy class syndrome'. More research is required to quantify the precise risk, but the risk does appear to be small and largely confined to those with recognised risk factors, which include previous episode of thrombosis, hormonal therapy, recent surgery, malignancy and pregnancy. In addition, haematological abnormalities can predispose to thrombosis. Such thrombophilic disorders include the factor V Leiden mutation and deficiencies of natural anticoagulants such as antithrombin, protein C and protein S. General measures which can be taken to reduce the risk include leg exercises while seated. In addition, there is evidence to support the use of elasticated stockings, but evidence relating to the use of aspirin is less convincing. abstract_id: PUBMED:15706480 Travel, venous thromboembolism, and thrombophilia. Current evidence indicates that prolonged air travel predisposes to venous thrombosis and pulmonary embolism. An effect is seen once travel duration exceeds 6 to 9 hours and becomes obvious in long-haul passengers traveling for 12 or more hours. A recent records linkage study found that the increase in thrombosis rate among arriving passengers peaked during the first week and was no longer apparent after 2 weeks.
Medium- to long-distance travelers have a 2- to 4-fold increase in relative thrombosis risk compared with nontravelers, but the averaged absolute risk is small (approximately one symptomatic event per 2 million arrivals, with a case-fatality rate of approximately 2%) and there is no evidence that thrombosis is more likely in economy class than in business- or first-class passengers. It remains uncertain whether and to what extent thrombosis risk is increased by short-distance air travel or prolonged travel by motorcar, train, or other means. Most travelers who develop venous thrombosis or pulmonary embolism also have one or more other predisposing risk factors that may include older age, obesity, recent injury or surgery, previous thrombosis, venous insufficiency, malignancy, hormonal therapies, or pregnancy. Limited (though theoretically plausible) evidence suggests that factor V Leiden and the prothrombin gene mutation predispose to thrombosis in otherwise healthy travelers. Given that very many passengers with such predispositions do not develop thrombosis, and a lack of prospective studies to link predisposition with disease, it is not now possible to allocate absolute thrombosis risk among intending passengers or to estimate benefit-to-risk ratios or benefit-to-cost ratios for prophylaxis. Randomized comparisons using ultrasound imaging indicate a measurable incidence of subclinical leg vein thrombosis after prolonged air travel, which appears to increase with travel duration and is reduced by graded pressure elastic support stockings. Whether this surrogate outcome measure translates into clinical benefit remains unknown, but support stockings are likely to be more effective and have fewer adverse effects than the use of aspirin. abstract_id: PUBMED:11778354 Thromboembolism in travelers The association between long-haul travel and the risk of venous thromboembolism has long been suspected. Series of air-travel-related thrombosis have mostly been reported in the literature. Risk factors can be classified as: 1. travel-related factors (coach position, immobilization, prolonged air travel, narrow seat and room, diuretic effect of alcohol, insufficient fluid intake, dehydration, direct pressure on leg veins, infrequent deep inspiration). 2. airplane-related risk factors (low humidity, relative hypoxia, stress). 3. patient-related factors (hereditary and acquired thrombophilia, previous deep venous thrombosis, age over 40, recent surgery or trauma, gravidity, puerperium, oestrogen-containing pills, varicosity, chronic heart disease, obesity, fever, diarrhoea, vomiting, smoking). No patient-related factors were found in some cases. To reduce the hazards, air travellers are rightly concerned to know the level of risk, and the airlines should be responsible for providing this information. People should discuss with their physician what prophylactic measures should be taken, such as compression stockings or low molecular weight heparin. Not only flight passengers but also car, bus and train travellers are at risk of developing venous thromboembolism. Long-haul travel alone is a separate risk factor for venous thromboembolism. abstract_id: PUBMED:29150108 Economy class syndrome: what is it and who are the individuals at risk? The term 'economy class syndrome' refers to the occurrence of thrombotic events during long-haul flights that mainly occur in passengers in the economy class of the aircraft.
This syndrome results from several factors related to the aircraft cabin (immobilization, hypobaric hypoxia and low humidity) and the passenger (body mass index, thrombophilia, oral contraceptives or hormone replacement therapy, cancer), acting together to predispose to excessive blood coagulation, which can result in venous thromboembolism. Several risk factors, both genetic and acquired, are associated with venous thromboembolism. The most important genetic risk factors are natural anticoagulant deficiencies (antithrombin, protein C and protein S), factor V Leiden, prothrombin and fibrinogen gene mutations, and non-O blood group. Acquired risk factors include age, pregnancy, surgery, obesity, cancer, hormonal contraceptives and hormone replacement therapy, antiphospholipid syndrome, infections, immobilization and smoking. People who have these risk factors are predisposed to hypercoagulability and are more susceptible to venous thromboembolism during air travel. For these individuals, a suitable outfit for the trip, frequent walks, calf muscle exercises, elastic compression stockings and hydration are important preventive measures. Hence, it is essential to raise awareness of economy class syndrome in an attempt to encourage Brazilian health and transport authorities to adopt measures, in partnership with the pharmaceutical industry, to prevent venous thromboembolism. abstract_id: PUBMED:15456346 Airline chair-rest deconditioning: induction of immobilisation thromboemboli? Air passenger miles will likely double by the year 2020. The altered and restrictive environment in an airliner cabin can influence haematological homeostasis in passengers and crew. Flight-related deep venous thromboemboli (DVT) have been associated with at least 577 deaths on 42 of 120 airlines from 1977 to 1984 (25 deaths/million departures), whereas many such cases go unreported. However, there are four major factors that could influence formation of possible flight-induced DVT: sleeping accommodations (via sitting immobilisation); travellers' medical history (via tissue injury); cabin environmental factors (via lower partial pressure of oxygen and lower relative humidity); and the more encompassing chair-rest deconditioning (C-RD) syndrome. There is ample evidence that recent injury and surgery (especially in deconditioned hospitalised patients) facilitate thrombophlebitis and formation of DVT that may be exacerbated by the immobilisation of prolonged air travel. In the healthy flying population, immobilisation factors associated with prolonged (> 5 hours) C-RD such as total body dehydration, hypovolaemia and increased blood viscosity, and reduced venous blood flow (pooling) in the legs may facilitate formation of DVT. However, data from at least four case-controlled epidemiological studies did not confirm a direct causative relationship between air travel and DVT, but factors such as a history of vascular thromboemboli, venous insufficiency, chronic heart failure, obesity, immobile standing position, more than three pregnancies, infectious disease, long-distance travel, muscular trauma and violent physical effort were significantly more frequent in DVT patients than in controls. Thus, there is no clear, direct evidence yet that prolonged sitting in airliner seats, or prolonged experimental chair-rest or bed-rest deconditioning treatments, causes DVT in healthy people.
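The figures quoted above reward a quick back-of-envelope check: a 2- to 4-fold relative risk (PUBMED:15706480) is compatible with a very small absolute risk. The Python sketch below simply recombines the numbers quoted in the abstracts; it is illustrative arithmetic, not an analysis from any of the cited studies.

```python
# Back-of-envelope arithmetic using only figures quoted in PUBMED:15706480:
# ~1 symptomatic VTE per 2 million arrivals among long-haul passengers,
# ~2% case fatality, and a 2- to 4-fold relative risk vs nontravelers.
traveler_risk = 1 / 2_000_000   # symptomatic VTE per arriving passenger
case_fatality = 0.02            # ~2% of symptomatic events are fatal

deaths_per_million = traveler_risk * case_fatality * 1_000_000
print(f"~{deaths_per_million:.2f} VTE deaths per million arrivals")

# The 2- to 4-fold relative risk implies a nontraveler baseline of
# roughly one quarter to one half of the traveler risk.
for rr in (2, 4):
    baseline = traveler_risk / rr
    print(f"RR={rr}: implied baseline ~1 event per {1 / baseline:,.0f} nontravelers")
```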
A 24-year-old male presented to us with sudden-onset chest pain and dyspnea of one hour's duration. There was no history of calf pain, trauma, surgery, prolonged immobilization, long-haul air travel, bleeding diathesis or any other co-morbidity. The patient denied any addiction history. The heart rate was 114 beats/min, and blood pressure was 106/90 mmHg. Electrocardiogram showed tachycardia with an S1Q3T3 pattern. The left arterio-venous Doppler study was suggestive of a thrombus in the popliteal vein and at the sapheno-popliteal junction. The CT pulmonary angiogram was suggestive of a massive pulmonary thromboembolism. The patient was thrombolysed with intravenous alteplase immediately and was put on oral rivaroxaban for maintenance. He was later discharged once stable. Unprovoked venous thromboembolism (VTE) is very rare and has the potential to lead to pulmonary embolism, which could be disastrous, especially in young adults. We present such a case where unprovoked VTE was diagnosed and treated. This case suggests that high clinical suspicion is the key to the diagnosis of acute pulmonary embolism, especially in the absence of history suggestive of deep vein thrombosis. abstract_id: PUBMED:19465817 Three cases of pulmonary thromboembolism and extensive prayer (invocation) activity as a new possible risk factor. Pulmonary thromboembolism (PTE) is caused when thrombi are detached from the deep veins of the lower leg. In the field of forensic medicine, it is a well-known cause of sudden death. It has been reported that risk factors for PTE include surgery, trauma, extensive bed rest, and malignant neoplasm, among others; in addition, long-haul air travel is associated with a slightly increased risk for PTE, though such cases are rare. Recently, PTE has been reported in association with different conditions, such as e-thrombosis, seated immobility thromboembolism, driving for long periods, and after traveling. The authors performed autopsies on 3 patients who died suddenly after 3 to 4 days of prayer in a prayer center or hermitage. It was confirmed that all deaths were caused by thrombi that had developed in the deep veins, obstructing the pulmonary artery. It was concluded that during repeated praying activities over an extensive time period, the kneeling position might have caused PTE. It is also possible that dehydration due to fasting may affect the formation of thrombi. According to the literature, PTE cases developing in association with prayer activity and position have not been reported to date, and so PTE caused by prayer activity is thought to be a new type of PTE developing in association with a certain lifestyle. Therefore, people should be advised that a position involving a long period of immobilization, including long periods of prayer, could raise the risk of PTE. In addition, social policies to prevent the development of this kind of PTE are needed.
In some cases, the DVT becomes detached from the vein and is transported to the right-hand side of the heart, and from there to the pulmonary arteries, giving rise to a pulmonary embolism (PE). Certain factors predispose patients toward the development of venous thromboembolism (VTE), including surgery, trauma, hospitalization, immobilization, cancer, long-haul travel, increased age, obesity, major medical illness and previous VTE; in addition, there may also be a genetic component to VTE. VTE is responsible for a substantial number of deaths per annum in Europe. Anticoagulants are the mainstay of both VTE treatment and VTE prevention, and many professional organizations have published guidelines on the appropriate use of anticoagulant therapies for VTE. Treatment of VTE aims to prevent morbidity and mortality associated with the disease, and any long-term complications such as VTE recurrence or post-thrombotic syndrome. Generally, guidelines recommend the use of low molecular weight heparins (LMWH), unfractionated heparin (UFH) or fondaparinux for the pharmacological prevention and treatment of VTE, with the duration of therapy varying according to the baseline characteristics and risk profile of the individual. Despite evidence showing that the use of anticoagulation prevents VTE, the availability of several convenient, effective anticoagulant therapies and the existence of clear guideline recommendations, thromboprophylaxis is underused, particularly in patients not undergoing surgery. Greater adherence to guideline-recommended therapies, such as LMWH, which can be administered on an outpatient basis, should reduce the mortality associated with this preventable disease. abstract_id: PUBMED:19135280 Risk of thromboembolism varies, depending on category of immobility in outpatients. Study Objective: Immobility predisposes to venous thromboembolism, but this risk may vary, depending on the underlying cause of immobility. Methods: This was a prospective, longitudinal outcome study of self-presenting emergency department (ED) patients who were from 12 hospitals and had suspected venous thromboembolism. Using explicit written criteria, clinicians recorded clinical features of each patient in the ED by using a Web-based data form. The form required one of 6 types of immobility: no immobility, general or whole-body immobility greater than 48 hours, limb (orthopedic) immobility, travel greater than 8 hours causing immobility within the previous 7 days, neurologic paralysis, or other immobility not listed above. Patients were followed for 45 days for outcome of venous thromboembolism, which required positive imaging results and clinical plan to treat. Odds ratios (ORs) were derived from logistic regression including 12 covariates. Results: From 7,940 patients enrolled, 545 of 7,940 (6.9%) were diagnosed with venous thromboembolism (354 pulmonary embolism, 72 deep venous thrombosis, 119 pulmonary embolism and deep venous thrombosis). Risk of venous thromboembolism varied, depending on immobility type: limb (OR=2.24; 95% confidence interval [CI] 1.40 to 3.60), general (OR=1.76; 95% CI 1.26 to 2.44), other (OR=1.97; 95% CI 1.25 to 3.09), neurologic (OR=2.23; 95% CI 1.01 to 4.92), and travel (OR=1.19; 95% CI 0.85 to 1.67). 
Other significant risk factors from multivariate analysis included age greater than 50 years (OR=1.5; 95% CI 1.25 to 1.82), unilateral leg swelling (OR=2.68; 95% CI 2.13 to 3.37), previous venous thromboembolism (OR=2.99; 95% CI 2.41 to 3.71), active malignancy (OR=2.23; 95% CI 1.69 to 2.95), and recent surgery (OR=2.12; 95% CI 1.61 to 2.81). Conclusion: In a large cohort of symptomatic ED patients, risk of venous thromboembolism was substantially increased by the presence of limb, whole-body, or neurologic immobility but not by travel greater than 8 hours. These data show the importance of clarifying the cause of immobility in risk assessment of venous thromboembolism. Answer: Long-haul air travel before major surgery may significantly increase the risk of perioperative venous thromboembolism (VTE). A study using the Mayo Clinic's patient database found that patients who had flown more than 5000 km before major surgery had a higher incidence of clinically significant VTE within 28 days after surgery than non-travelers undergoing similar surgical procedures (4.9% vs 0.15%; P < .001). These travelers were also younger, developed VTE earlier in the postoperative course, had a higher American Society of Anesthesiologists physical status classification, and had a higher prevalence of smoking. Notably, 10 out of the 11 travelers with VTE were of Middle Eastern origin (PUBMED:15945525). The link between prolonged travel and thrombosis, known as 'traveller's thrombosis' or 'economy class syndrome', is generally accepted and is not confined to air travel but extends to other modes of transport such as long coach and car journeys. The risk appears to be small and largely confined to those with recognized risk factors, which include previous episodes of thrombosis, hormonal therapy, recent surgery, malignancy, and pregnancy. Haematological abnormalities such as the factor V Leiden mutation and deficiencies of natural anticoagulants can also predispose to thrombosis. Preventive measures include leg exercises while seated, the use of elasticated stockings, and possibly low molecular weight heparin for those at higher risk (PUBMED:11777295, PUBMED:15706480, PUBMED:11778354, PUBMED:29150108). In summary, prolonged air travel before major surgery is a significant risk factor for perioperative VTE, and patients undertaking such travel should receive more intensive VTE prophylactic measures during the flight and throughout the perioperative period (PUBMED:15945525).
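The odds ratios with 95% confidence intervals quoted in PUBMED:19135280 come from multivariate logistic regression. The following minimal Python sketch shows the unadjusted building block behind such figures — an odds ratio with a Wald confidence interval from a 2x2 table. The counts are hypothetical, and the published estimates were additionally adjusted for 12 covariates.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a 95% Wald CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: VTE among patients with vs without limb immobility
or_, lower, upper = odds_ratio_ci(a=40, b=460, c=505, d=6935)
print(f"OR={or_:.2f}; 95% CI {lower:.2f} to {upper:.2f}")
```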
Instruction: Is early mortality related to timing of surgery after fracture femur in the elderly? Abstracts: abstract_id: PUBMED:26136789 Preliminary results of an early vs delayed timing of surgery in the management of proximal femur fragility fractures. Introduction: The appropriate surgical timing for the treatment of proximal femur fractures is still debated. Advantages of delayed surgery may include stabilization of systemic diseases and a decreased risk of perioperative mortality and morbidity. Early surgery may allow early mobilization, a reduced risk of disability, shorter hospital stays, and earlier return home. However, the effects on mortality are still discussed. Purpose: The purpose of this study is to assess the influence of surgical timing on clinical outcomes, complications, and mortality in a preliminary experience of early vs delayed management of these fractures. Methods: A series of 176 patients was retrospectively evaluated. 132 patients were followed up for one year after surgery. The evaluation was performed by assessment of comorbidities, preoperative wait for surgery, type of fracture and procedures, hospital stay, and functional outcomes: 33 patients were operated on early and 99 with delayed surgery. Results: The mean mortality rate was 18.2% in the early timing group (6/33 patients) and 23.2% in the delayed timing group (23/99 patients): no significant difference was recorded in the preliminary analysis. Postoperative complications were recorded in 28 patients (21.2%): 4 patients operated on within 48 hours (12.1%) and 24 after 48 hours (24.2%), with no substantial differences. The postoperative hospital stay showed no correlation with the timing of surgery, and no effect was found on functional recovery or postoperative disability. Conclusions: No significant differences were found in the evaluated parameters between the two groups in the present preliminary study. A correlation between male sex and mortality, and between male sex and postoperative complications, was found. A larger study population is needed to clarify any true differences, given that recent studies suggest early treatment is the better strategy for ensuring the best recovery and the lowest rates of mortality and complications. abstract_id: PUBMED:33150119 Mortality Following Distal Femur Fractures Versus Proximal Femur Fractures in Elderly Population: The Impact of Best Practice Tariff. Background and objectives The mortality after hip (proximal femur) fractures in elderly patients has steadily declined in the last decade in the United Kingdom as a result of the implementation of multiple protocols focusing on prompt multidisciplinary pre- and post-operative optimization and reduced time to surgery. The pinnacle of these protocols is the development of the best practice tariff as an incentive program for hospitals that meet criteria set by the National Health Service (NHS) England in managing these injuries. At the time of writing this paper, there was no parallel program for the management of fractures involving the distal femur in the elderly. The aim of this study is to evaluate the outcomes of distal femur fractures in elderly patients against proximal femur fractures with regard to post-injury mortality, the prevalence of surgical treatment and the delay until surgery.
Methods A retrospective study of all patients above the age of 60 admitted to Queens Hospital Burton between 2010 and 2014 with fractures involving the distal end of the femur. Patient data were assessed for demographic criteria, co-morbidities as per the Charlson Comorbidity Index, type of management, time elapsed before surgery, and 30-day, six-month and one-year mortality. Results were compared to an age-matched control group of patients with proximal femur fractures randomly selected during the same time window. Results The main demographic criteria such as age, gender, and Charlson Comorbidity Index were similar in both groups. There were more patients treated non-operatively in the distal femur group than in the proximal femur group (15% vs 4%). Time to surgery was statistically significantly longer in the distal femur group than in the proximal femur group (49.130 hours vs 34.075 hours, P = 0.041). The mortality in the distal femur group was higher at all time points (9.68% at 30 days, 20.32% at six months and 34.41% at one year) when compared to that in the proximal femur group (6.99% at 30 days, 14.52% at six months, 21.51% at one year). Conclusion The distal femoral fractures showed higher mortality at 30 days, six months and one year compared to the proximal femur group. This could be partly influenced by the implementation of the best practice tariff in the proximal femur fracture group, reflected in a shorter time to surgery, a pre- and post-operative multidisciplinary approach and more frequent operative management. abstract_id: PUBMED:31317068 In-Hospital Mortality following Proximal Femur Fractures in Elderly Population. Context In India, the crude incidence of hip fracture above the age of 50 years was 129 per 100,000. Aims The aim of this study is to analyze in-hospital mortality following proximal femur fractures in the elderly Indian population. Methods and Material The study was done in Sri Ramachandra Medical Center, Chennai, India. Patients' records were retrospectively evaluated for a period of 3 years from January 1, 2015 to January 1, 2018. The inclusion criteria were male and female patients aged more than 65 years admitted with the diagnosis of neck of femur, intertrochanteric or subtrochanteric fractures. The exclusion criteria were patients having any associated fracture, a previous hip fracture history, or diagnosed primary or secondary malignancies. To evaluate any surgical delay, two groups were formed. After eliminating cases based on the exclusion criteria, we had 270 patients for evaluation. Statistical Analysis Used The collected data were analyzed with IBM SPSS Statistics software, version 23.0. To describe the data, frequency and percentage analyses were used for categorical variables, and the mean and standard deviation (SD) were used for continuous variables. To find significant differences between bivariate samples, Student's t-test and analysis of variance (ANOVA) were used. A p-value of 0.05 was considered the significance level. Results There were a total of 24 deaths, 15 males and 9 females. Fourteen of the in-hospital deaths occurred in patients who underwent replacement surgery for proximal femur fractures. Sixteen of the patients who died in hospital had a low Parker mobility score. Twenty of the deaths occurred in patients whose surgery was delayed more than 48 hours.
Conclusions In-hospital mortality in elderly patients with proximal femur fracture increases significantly if the patient has a low preoperative mobility status, if surgery is delayed more than 48 hours, or if the patient undergoes replacement surgery. abstract_id: PUBMED:33992422 Mortality after distal femur fractures in the elderly. Introduction: The frequency of distal femur fractures in the elderly is rapidly increasing. A study of these fractures was conducted in our center in order to evaluate the comorbidities and the mortality associated with this entity. Material And Methods: All low-energy distal femur fractures in patients over 65 years old at a tertiary center between January 2010 and December 2016 were included. Baseline characteristics, the type of fracture, comorbidities, and functional status before admission were collected. The relationship of each of these variables to final functional class, immediate and late complications, and mortality during follow-up was analysed. Fifty-nine patients were included, with a median age of 85.3 years (IQR 78.6-91.6). Fifty-one patients were women. In 10 patients, the fractures were atraumatic (postural change, mainly in non-walking patients), and 54 of the cases were treated surgically (6 with retrograde intramedullary nailing and 48 with a lateral locking plate). The median time to surgery was 4.5 days (IQR 2-6) and 14 patients were operated on within 48 hours. The median follow-up was 26.3 months. Results: Fourteen patients died during the first year of follow-up. Factors independently associated with death during the first year after the fracture were conservative treatment and the inability to ambulate before the episode. The absence of certain comorbidities, such as chronic heart disease and cancer, and an age under 80 years were protective factors. Conclusion: Low-energy distal femur fractures comprise a severe injury in the elderly and are associated with high mortality. Surgical treatment showed better outcomes in terms of survival, with no significant differences depending on the type of fracture, the type of implant or the median time to surgery. abstract_id: PUBMED:32862297 No rest for elderly femur fracture patients: early surgery and early ambulation decrease mortality. Background: The literature has shown a significant correlation between early treatment and mortality in femur fractures, but the influence of time to ambulation on mortality has not been studied. The purpose of the present study is to evaluate whether time to ambulation is correlated with femur fracture mortality independently of time to surgery. Patients And Methods: All patients older than 65 years admitted to a level I trauma center with a proximal femoral fracture during a 1-year period were included. The following data were collected: age, gender, date and time of admission to the emergency department, height, weight, body mass index, type and side of fracture, ASA score, date and time of surgery, surgical time, time to ambulation, length of hospitalization, death during hospitalization, and mortality at 6 and 12 months. Results: The study sample comprised 516 patients. The mean age was 83.6 years; ASA score was 3-5 in 53% of patients; 42.7% presented with a medial fracture; mean time between admission and surgery was 48.4 h; 22.7% of patients were not able to walk during the first 10 days after fracture; mean duration of hospitalization was 13 days; and mortality was 17% at 6 months and 25% at 1 year.
Early surgery and walking ability at 10 days after trauma were independently and significantly associated with mortality at 6 months (p = 0.014 and 0.002, respectively) and at 1 year (p = 0.027 and 0.009, respectively). Conclusions: Early surgery for femur fracture has become a priority in health systems, but early postoperative physiotherapy also plays a major role in the prevention of mortality: independently of surgical timing, patients who did not walk again within 10 days of surgery showed mortality rates higher than those of patients who did. Level Of Evidence: IV. abstract_id: PUBMED:16598329 Is early mortality related to timing of surgery after fracture femur in the elderly? Objective: The purpose of this study is to review the outcome of fracture femur in elderly patients (> 65 years), and to identify the cause or causes of mortality. Methods: Between January 1996 and December 2002, 115 patients over 65 years were admitted and operated on at King Fahd University Hospital, Al-Khobar. Fifty-six of these patients had femoral fractures. Demographic data collected included age, gender, site of fracture, co-morbidities, delay in surgery, duration of surgery, implant used and American Society of Anesthesiologists (ASA) scoring. A minimum follow-up of 12 months was considered important for inclusion in the study. Patients who remained alive were assessed for their functional independence. Results: The data of 48 patients were gathered for analysis. There were 31 males and 17 females with a mean age of 76.5 years (age range 65-101 years). The mean follow-up was 32.8 months (range 12-84 months; SD 17.81). There were 32 fractures of the trochanteric area. The average delay in surgery was 112 hours (24-280 hours). At the end of 24 months, 13 patients (27%) had died and 28 (80% of survivors) were functionally independent, similar to their pre-injury status. There was a statistically significant association between ASA score and mortality (p < 0.005). However, mortality was significantly higher in patients who underwent surgery under general anesthesia (p < 0.05). Conclusion: Our data indicate that the mortality in the elderly is not related to the delay in surgery. The significant factors in early demise of patients were a high ASA score and the type of anesthesia used during surgery. abstract_id: PUBMED:28755108 Correlation between pre-injury mobility and ASA score with the mortality following femoral neck fracture in elderly. Poor pre-injury mobility and a high American Society of Anesthesiologists (ASA) grade are thought to be associated with poor survival following surgical treatment of femoral neck fracture in the elderly. Hence there are concerns among orthopaedic surgeons about surgical treatment in this group of patients. In this retrospective study, the pre-injury mobility and ASA scores of 401 patients with fractured neck of femur treated by surgery were assessed in relation to mortality within the first 30 days of injury. Following surgery, a temporary deterioration in ASA grading and mobility was noticed. Patients who required intensive medical care following surgery had a higher mortality rate. The mortality was 15% among patients with ASA III and 40% among patients with ASA IV. 14% of the 65 immobile patients and 18% of those mobile with a Zimmer frame died after surgery for femoral neck fracture. 6.1% of ASA I scorers died compared with 40% of ASA IV scorers; this difference was statistically significant (χ²=13.883, df=1, P < 0.001).
A significant number of patients with ASA IV (60%) and immobile patients (88%) survived following surgery for femoral neck fracture. Poor pre-injury mobility and a high ASA score are associated with higher early mortality following surgery for femoral neck fracture; however, this should not preclude surgery for patients with a poor pre-injury ASA grade and mobility who sustain a femoral neck fracture, as a significant number of our patients survived. abstract_id: PUBMED:38116023 Distal Femur Replacement: An Option for Osteoporotic Fractures in the Elderly. Background A distal femur fracture (DFF) around the native or prosthetic knee is commonly seen in the osteoporotic elderly population. Surgical management is required to restore function. Fracture fixation requires a period of restricted weight-bearing; however, distal femoral replacement (DFR) allows immediate weight-bearing and quicker recovery. Methods All patients who underwent distal femur replacement from 2020 to 2023 at our hospital were retrospectively reviewed. Data related to patient demographics, medical comorbidities, preinjury mobility status, perioperative management and length of stay were collected. Results Eleven patients with 13 distal femoral replacements were included. There were 10 periprosthetic and 3 native fractures around the distal femur. Two patients had bilateral periprosthetic fractures. The median age was 84 years (range 62-95), and all patients were female. Eight patients were living in their homes while three were care home residents. The median duration of surgery was 120 min. The mean blood loss was 350 ml. Patients were mobilised out of bed at a median of three days and were able to walk 2 meters with a frame at a mean of 10 days (range 3-15), except for two patients whose mobility was limited to the chair. The mean length of hospital stay was 32 days (range 8-54). All patients were discharged back to their original destination except for one who was shifted to a care home instead of her own home. Conclusion In our opinion, distal femur replacement provided a more favourable outcome with respect to pain management, early rehabilitation with full weight-bearing immediately following the surgery, and fewer complications. Furthermore, in our hands, the surgical time was short with limited blood loss. abstract_id: PUBMED:33936949 Mortality profile after 2 years of hip fractures in elderly patients treated with early surgery. Background: In the geriatric age group, hip fractures are becoming a major public health hazard. Due to this high occurrence, there is a need to develop standardized, effective, and multidisciplinary management for treatment. These elderly patients have excess mortality that can extend beyond the time of recovery. Early surgery after hip fractures has led to a notable reduction in mortality rates. Still, it remains considerably higher than for other fractures. Methods: 266 patients aged > 65 years who were operated on within 72 hours for hip fractures in a tertiary-level health care centre were included. They were evaluated with X-rays, and the grade of Singh's index was noted. Mortality rates and associated factors such as age, sex and co-morbidities (using the Charlson Comorbidity Index, CCI) were evaluated after 2-year follow-up. Results: The overall 2-year mortality reported in our study population was 11.2%. This was broadly lower than in most other studies. It was 6.3% in females as compared to 18.1% in males.
While it was only 6% in patients aged 65-74 years, it was 25% in patients who were 85 years and above. 76.6% of the patients had a Singh's index of grade 3 or lower, indicating osteoporosis. Patients with a low Charlson score showed only 4.2% mortality, while those with a high Charlson score showed 25.5% mortality. Conclusion: Mortality among elderly patients remains considerable even after early surgery for osteoporotic hip fractures. The factors for improvement in long-term survival post-hip fracture may include changing treatment patterns, increasing life expectancy and early surgery. Increase in age, female sex, and high CCI scores were major risk factors for mortality after hip fractures in a 2-year follow-up period. abstract_id: PUBMED:31435104 Does Early Surgical Fixation of Proximal Femoral Fractures in Elderly Patients Affect Mortality Rates? Objective To analyze whether a reduction in the waiting time for orthopedic surgery in elderly patients with fracture of the proximal end of the femur leads to a reduction in in-hospital mortality, 6-month mortality, and duration of hospitalization. Methods This was a retrospective cohort study including 81 patients with intertrochanteric and subtrochanteric fractures who underwent surgical repair between 2015 and 2016 in a referral hospital, before and after a protocol for early surgical fixation (< 48 hours) was implemented. Results The mean length of hospital stay decreased from 17 to 11 days. Regarding the in-hospital mortality rate, five patients died before, and another five died after, the protocol was implemented. The 6-month postoperative mortality decreased from 26.7% to 19.4%. These results were not statistically significant. Conclusion The present study demonstrated that there is a tendency toward a reduced length of hospital stay and 6-month mortality when surgery for the treatment of proximal femoral fractures in the elderly is performed within 48 hours of hospitalization. Answer: The relationship between early mortality and the timing of surgery after a femur fracture in the elderly is a subject of ongoing research and debate. Several studies have investigated this issue with varying results. One study found no significant differences in mortality rates or postoperative complications between patients who underwent early surgery (within 48 hours) and those who had delayed surgery for proximal femur fractures (PUBMED:26136789). However, the authors noted that recent studies suggest that early treatment may be the better strategy for ensuring the best recovery and lower rates of mortality and complications, indicating that further research with a larger study population is needed. Another study comparing mortality following distal femur fractures versus proximal femur fractures in the elderly population found that distal femur fractures had higher mortality rates at 30 days, six months, and one year compared to proximal femur fractures. The authors suggested that the implementation of best practice tariffs, which include prompt multidisciplinary pre- and post-operative optimization and reducing time to surgery, may have contributed to the lower mortality rates in the proximal femur fracture group (PUBMED:33150119). A study conducted in India reported that in-hospital mortality for elderly patients with proximal femur fractures increased significantly if surgery was delayed more than 48 hours, among other factors (PUBMED:31317068).
Similarly, another study emphasized that early surgery and walking ability at 10 days after trauma were independently and significantly associated with lower mortality at 6 months and 1 year (PUBMED:32862297). Conversely, a study from King Fahd University Hospital indicated that mortality in the elderly was not related to the delay in surgery, with significant factors for early demise being high ASA score and the type of anesthesia used during surgery (PUBMED:16598329). In summary, while some studies suggest that early surgery may be associated with lower mortality rates in elderly patients with femur fractures (PUBMED:32862297; PUBMED:31317068), other research indicates that the timing of surgery may not be the sole determinant of early mortality, with factors such as ASA score, type of anesthesia, and pre-injury mobility also playing significant roles (PUBMED:16598329; PUBMED:28755108). Therefore, the relationship between early mortality and the timing of surgery after a femur fracture in the elderly appears to be complex and may be influenced by multiple factors.
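As a worked check on the first study cited in the answer (PUBMED:26136789), the mortality comparison of 6/33 deaths with early surgery vs 23/99 with delayed surgery can be re-computed from a simple two-by-two table. The Python sketch below uses Fisher's exact test purely for illustration; the abstract does not state which test the original authors used.

```python
from scipy.stats import fisher_exact

# Deaths vs survivors, early (6/33) vs delayed (23/99) surgery, PUBMED:26136789
table = [[6, 33 - 6],
         [23, 99 - 23]]
odds_ratio, p_value = fisher_exact(table)
print(f"mortality 18.2% vs 23.2%: OR={odds_ratio:.2f}, p={p_value:.2f}")
# The large p-value is consistent with the abstract's report of
# no significant difference in this preliminary sample.
```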
Instruction: Primary care physician compensation method in medical groups: does it influence the use and cost of health services for enrollees in managed care organizations? Abstracts: abstract_id: PUBMED:9516000 Primary care physician compensation method in medical groups: does it influence the use and cost of health services for enrollees in managed care organizations? Context: Growth of at-risk managed care contracts between health plans and medical groups has been well documented, but less is known about the nature of financial incentives within those medical groups or their effects on health care utilization. Objective: To test whether utilization and cost of health services per enrollee were influenced independently by the compensation method of the enrollee's primary care physician. Design: Survey of medical groups contracting with selected managed care health plans, linked to 1994 plan enrollment and utilization data for adult enrollees. Setting: Medical groups, major managed care health plans, and their patients/enrollees in the state of Washington. Study Participants: Sixty medical groups in Washington, 865 primary care physicians (internal medicine, pediatrics, family practice, or general practice) from those groups and affiliated with 1 or more of 4 managed care health plans, and 200 931 adult plan enrollees. Intervention: The effect of method of primary care physician's compensation on the utilization and cost of health services was analyzed by weighted least squares and random effects regression. Main Outcome Measures: Total visits, hospital days, and per member per year estimated costs. Results: Compensation method was not significantly (P > .30) related to utilization and cost in any multivariate analyses. Patient age (P < .001), female gender (P < .001), and plan benefit level (P < .001) were significantly positively related to visits, hospital days, and per member per year costs. The primary care physician's age was significantly negatively related (P < .001) to all 3 dependent measures. Conclusions: Compensation method was not significantly related to use and cost of health services per person. Enrollee, physician, and health plan benefit factors were the prime determinants of utilization and cost of health services. abstract_id: PUBMED:12406808 Provision of sexual health services to adolescent enrollees in Medicaid managed care. Objectives: This Seattle project measured sexual health services provided to 1112 Medicaid managed care enrollees aged 14 to 18 years. Methods: Three health maintenance organizations (HMOs) that provide Medicaid services for a capitated rate agreed to participate. These included a non-profit staff-model HMO, a for-profit independent practice association (IPA), and a non-profit alliance of community clinics. Analyses used health maintenance organizations' administrative data, chart reviews, and Medicaid encounter data. Results: Health maintenance organizations provided primary care to 54% and well care to 20% of Medicaid enrollees. Girls were more likely than boys to have their sexual history taken or to be given condom counseling. Only 27% of sexually active girls were tested for chlamydia, with significantly lower rates of testing among those who spoke English as a second language. The nonprofit staff-model plan outperformed the for-profit independent practice association on most measures. Conclusions: Substantial room for improvement exists in sexual health services delivery to adolescent Medicaid managed care enrollees.
abstract_id: PUBMED:10645076 Use of preventive services by managed care enrollees: an updated perspective. We examined whether enrollees in managed care plans received more preventive services than enrollees in non-managed care plans did, by conducting an updated literature synthesis of studies published between 1990 and 1998. We found that 37 percent of comparisons indicated that managed care enrollees were significantly more likely to obtain preventive services; 3 percent indicated that they were significantly less likely to do so; and 60 percent found no difference. Enrollees in group/staff-model health maintenance organizations (HMOs) were more likely to receive preventive services, but there was little evidence, outside of Medicaid managed care, that managed care plans are worse at providing preventive services. However, most of the evidence is equivocal: Provision of preventive services was neither better nor worse in managed versus non-managed care plans. Because of the blurred distinctions among types of health plans, more research is needed to identify which plan characteristics are most likely to encourage appropriate utilization. abstract_id: PUBMED:10178492 Effect of compensation method on the behavior of primary care physicians in managed care organizations: evidence from interviews with physicians and medical leaders in Washington State. The perceived relationship between primary care physician compensation and utilization of medical services in medical groups affiliated with one or more among six managed care organizations in the state of Washington was examined. Representatives from 67 medical group practices completed a survey designed to determine the organizational arrangements and norms that influence primary care practice and to provide information on how groups translate the payments they receive from health plans into individual physician compensation. Semistructured interviews with 72 individual key informants from 31 of the 67 groups were conducted to ascertain how compensation method affects physician practice. A team of raters read the transcripts and identified key themes that emerged from the interviews. The themes generated from the key informant interviews fell into three broad categories. The first was self-selection and satisfaction. Compensation method was a key factor for physicians in deciding where to practice. Physicians' satisfaction with compensation method was high in part because they chose compensation methods that fit with their practice styles and lifestyles. Second, compensation drives production. Physician production, particularly the number of patients seen, was believed to be strongly influenced by compensation method, whereas utilization of ancillary services, patient outcomes, and satisfaction were seen as much less likely to be influenced. The third theme involved future changes in compensation methods. Medical leaders, administrators, and primary care physicians in several groups indicated that they expected changes in the current compensation methods in the near future in the direction of incentive-based methods. The responses revealed in interviews with physicians and administrative leaders underscored the critical role compensation arrangements play in driving physician satisfaction and behavior. abstract_id: PUBMED:9711452 Clinic provision of contraceptive services to managed care enrollees.
Context: Since the initiation of managed health care, little information has been available on whether family planning agencies are seeking ways to serve (and obtain reimbursement for serving) the growing number of clients who are managed care enrollees. Methods: A 1995 mail survey sought information from a nationally representative sample of publicly funded family planning agencies about the agencies' involvement with managed health care plans and related clinic services, policies and practices. Completed surveys were received from 603 agencies, for an overall response rate of 68%. Results: One-half of all publicly funded family planning agencies had served known enrollees of managed care plans. One-quarter (24%) had served managed care enrollees under contract, while others sought out-of-plan reimbursement for services provided to enrollees (13%) or used other sources to cover the cost of these services (12%). Family planning clinics administered by hospitals and community health centers were more likely than other types of clinics to have contracts to provide full primary-care services to managed care enrollees, whereas Planned Parenthood affiliates were more likely to have contracts that covered the provision of contraceptive care only. Clinics administered by health departments rarely had secured managed care contracts (10%), and only 36% reported even serving managed care enrollees. Conclusions: The challenges presented by managed care, and agencies' responses to these challenges, vary according to the type of organization providing contraceptive care. Family planning agencies need to seek relationships with managed care organizations based on those services that their clinics can best supply. abstract_id: PUBMED:8726975 Managed care fundamentals: implications for health care organizations and health care professionals. Managed care is changing our health care delivery system as radically as the computer chip has changed telecommunications. Health care professionals and organizations that do not understand managed care's implications will not be prepared for the future. For example, one implication of managed care is payment capitation, which is the transfer of financial risk from the insurer to the provider. As a result, health care providers, including occupational therapy professionals, need to be better managers of scarce resources by recognizing the cost implications among various alternative procedures while still delivering quality care. Under managed care with capitation, occupational therapists will need to learn to provide services within the parameters of a fixed budget, requiring reengineering of the therapies and processes of care and a considerable reduction in the procedures and modalities for any given treatment or therapy. As a result, patients will be required to do more for themselves, and occupational therapists will have to become better patient educators and motivators. Additionally, managed care will require changes in professional curricula, emphasis through continuing education, and assimilation of better cost information to practitioners to facilitate decision making.
Other implications of managed care beyond payment capitation include assigning each enrollee a gatekeeper who is responsible for limiting access to costly specialty services; practicing utilization review to audit usage patterns and provide constructive recommendations to reduce costs and improve service quality; and forming networks and associations among medical providers to develop economies of scale and provide an integrated continuum of health care services to enrollees. abstract_id: PUBMED:12891474 Opportunities and risks of managed care Aim: The present paper aims to analyse and discuss managed care with its opportunities and risks. This analysis should be a further basis for a reasonable discussion concerning the implementation of managed care elements in the German health care system. Method: On the basis of an updated literature search, the relevant international experiences with managed care in several health care systems--especially in the United States and Switzerland--are analysed and described. The most relevant opportunities and risks of managed care are deduced from this analysis. Results: The most important opportunities of managed care are the stabilisation of health care costs, an improvement of health care processes and quality, and a stronger consideration of preventive measures. The possibility of choosing between several health care models and more favourable health insurance premiums are opportunities for the patients. Relevant risks of managed care include the potential withholding of medical care, which necessitates comprehensive quality assurance, and negative influences on physicians' autonomy. Furthermore, managed care may have negative effects on the relationship between patients and physicians or between general practitioners and medical specialists. Conclusions: Managed care has proven advantages with respect to cost stabilisation and quality improvement compared with traditional health care systems. If the risks and known problems of managed care are recognised and avoided, the available opportunities could be an important option for reforming the German health care system with respect to costs and quality. abstract_id: PUBMED:10783188 The cost of access to mental health services in managed care. Managed care has controlled the cost of specialty mental health services, but its impact on access to care is not well described. In a retrospective design, the study used empirical data to demonstrate a direct relationship between managed care plans' claims costs per member per month and the proportion of plan members who use specialty mental health services annually. Each increment of $1 per member per month in spending on claims was associated with a 0.9 percent increase in the proportion of enrollees receiving specialty mental health treatment. These data raise concerns that plans with low per-member per-month costs may unduly restrict access to specialty treatment. abstract_id: PUBMED:10387267 Trends in managed care organizations: implications for the physician executive. The rapid change in the managed health care industry is placing substantial demands on the managerial and leadership skills of physician executives. These changes are forcing a reevaluation of the fundamental principles of managed care organizations, specifically in terms of patient satisfaction, cost containment, and quality health care. Additionally, the physician executive will be confronted with substantial issues concerning future staffing needs.
This article assesses the health care industry's environment to suggest where managed care is going and how physician executives should position themselves to optimize their standing in the marketplace. abstract_id: PUBMED:12117130 Transmission of financial incentives to physicians by intermediary organizations in California. Many U.S. physicians participate in provider-sponsored organizations that act as their intermediaries in contracting with managed care plans, particularly where capitation contracts are used. Examining a survey of 153 intermediary entities in California, we trace the cascade of financial incentives from health plans through physician organizations to primary care physicians. Although the physician organizations received the vast majority (84 percent) of their revenues through capitation contracts, most of the financial risk related to utilization and costs was retained at the group level. Capitation of primary care physicians was common in independent practice associations (IPAs), but payments typically were restricted to primary care services. Thirteen percent of medical groups and 19 percent of IPAs provided bonuses or withholds based on utilization or cost performance, which averaged 10 percent of base compensation. Answer: According to the study conducted on the influence of primary care physician compensation method on the use and cost of health services for enrollees in managed care organizations, the compensation method was not significantly related to utilization and cost in any multivariate analyses (PUBMED:9516000). The study, which linked a survey of medical groups contracting with selected managed care health plans to 1994 plan enrollment and utilization data for adult enrollees, found that patient age, female gender, and plan benefit level were significantly positively related to visits, hospital days, and per member per year costs. In contrast, the primary care physician's age was significantly negatively related to all three dependent measures. The conclusion drawn from this study was that enrollee, physician, and health plan benefit factors were the prime determinants of utilization and cost of health services, rather than the method of compensation for primary care physicians.
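The key analysis in PUBMED:9516000 regressed per-enrollee utilization on compensation method using weighted least squares and random-effects regression. The Python sketch below is a hedged illustration of the random-effects part of such an analysis, with a random intercept for medical group; all variable names and simulated data are hypothetical stand-ins, not the study's actual data.

```python
# Illustrative random-effects (mixed) model in the spirit of PUBMED:9516000:
# per-enrollee visits regressed on the primary care physician's compensation
# method, with a random intercept per medical group to absorb within-group
# correlation. Data and variable names are simulated assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "visits": rng.poisson(4, n).astype(float),   # total visits per member-year
    "capitated": rng.integers(0, 2, n),          # 1 = capitation, 0 = salary/fee-for-service
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
    "benefit_level": rng.integers(1, 4, n),
    "group_id": rng.integers(0, 60, n),          # 60 medical groups, as in the study
})

model = smf.mixedlm("visits ~ capitated + age + female + benefit_level",
                    data=df, groups=df["group_id"]).fit()
print(model.summary())
```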
Instruction: Image guided radiofrequency thermo-ablation therapy of chondroblastomas: should it replace surgery? Abstracts: abstract_id: PUBMED:24477425 Image guided radiofrequency thermo-ablation therapy of chondroblastomas: should it replace surgery? Objective: To assess the safety and effectiveness of image-guided radiofrequency ablation (RF ablation) in the treatment of chondroblastomas as an alternative to surgery. Materials And Methods: Twelve patients with histologically proven chondroblastoma were seen at our institution from 2003 to date. We reviewed the indications, recurrences and complications in patients who underwent RF ablation. Results: Twelve patients were diagnosed with chondroblastoma. Of these, 8 patients (6 male, 2 female, mean age 17 years) with chondroblastoma (mean size 2.7 cm) underwent RF ablation. Multitine expandable electrodes were used in all patients. The number of probe positions needed varied from 1 to 4, and lesions were ablated at 90 °C for 5 min at each probe position. The tumours were successfully treated and all patients became asymptomatic. There were no recurrences. There were 2 patients with knee complications: 1 with a minor asymptomatic infraction of the subchondral bone and a second with osteonecrosis/chondrolysis. Conclusion: Radiofrequency ablation appears to be a safe and effective alternative to surgical treatment, with a low risk of recurrence and complications, for most chondroblastomas. RF ablation is probably superior to surgery when chondroblastomas are small (less than 2.5 cm) with an intact bony margin with subchondral bone and in areas of difficult surgical access. abstract_id: PUBMED:21767778 Percutaneous ablation of benign bone tumors. Percutaneous image-guided ablation has become a standard of practice and one of the primary modalities for treatment of benign bone tumors. Ablation is most commonly used to treat osteoid osteomas but may also be used in the treatment of chondroblastomas, osteoblastomas, and giant cell tumors. Percutaneous image-guided ablation of benign bone tumors carries a high success rate (>90% in case series) and results in decreased morbidity, mortality, and expense compared with traditional surgical methods. The ablation technique most often applied to benign bone lesions is radiofrequency ablation. Because the ablation technique has been extensively applied to osteoid osteomas and because of the uncommon nature of other benign bone tumors, we will primarily focus this discussion on the percutaneous ablation of osteoid osteomas. abstract_id: PUBMED:16267666 Radiofrequency ablation of chondroblastoma using a multi-tined expandable electrode system: initial results. The standard treatment for chondroblastoma is surgery, which can be difficult and disabling due to its apo- or epiphyseal location. Radiofrequency (RF) ablation potentially offers a minimally invasive alternative. The often large size of chondroblastomas can make treatment with plain electrode systems difficult or impossible. This article describes the preliminary experience of RF treatment of chondroblastomas with a multi-tined expandable RF electrode system. Four cases of CT-guided RF treatment are described. The tumour was successfully treated in all cases. In two cases, complications occurred: infraction of a subarticular chondroblastoma in one case, and cartilage and bone damage in the unaffected compartment of a knee joint in the other.
Radiofrequency treatment near a joint surface threatens the integrity of cartilage and therefore long-term joint function. In weight-bearing areas, the lack of bone replacement in successfully treated lesions contributes to the risk of mechanical failure. Multi-tined expandable electrode systems allow the treatment of large chondroblastomas. In weight-bearing joints and lesions near to the articular cartilage, there is a risk of cartilage damage and mechanical weakening of the bone. In lesions without these caveats, RF ablation appears promising. The potential risks and benefits need to be evaluated for each case individually. abstract_id: PUBMED:19304917 Chondroblastoma: radiofrequency ablation--alternative to surgical resection in selected cases. Purpose: To demonstrate that radiofrequency (RF) ablation can be used safely and effectively to treat selected cases of chondroblastoma. Materials And Methods: Approval was obtained from institutional review boards, and research was in compliance with HIPAA protocol. The need to obtain informed consent was waived for retrospective review of patient records. The records of patients with biopsy-proved chondroblastoma who were treated with RF ablation at two academic centers from July 1995 to July 2007 were reviewed. RF ablation was performed with a single-tip electrode by using computed tomography for guidance. Lesion characteristics were determined from imaging studies obtained at the time of the procedure. Symptoms were assessed before and 1 day after the procedure. Longer-term follow-up was obtained from medical records. Results: Thirteen male and four female patients were treated (mean age, 17.3 years). The lesions were located in the proximal humerus (n = 7), proximal tibia (n = 4), proximal femur (n = 3), and distal femur (n = 3). The mean volume of the lesions was 2.46 mL. All patients reported relief of symptoms on postprocedure day 1. Three patients were lost to follow-up. Of the 14 patients for whom longer-term (mean, 41.3 months; range, 4-134 months) follow-up was available, 12 had complete relief of symptoms with no need for medications and full return to all activities. The patient who had the largest lesion of the study required surgical intervention because of collapse of the articular surface in the treatment area. Residual viable tumor was found at surgery. Another patient experienced mechanical problems that were thought to be unrelated to the RF ablation and was rendered pain-free after subsequent surgical treatment. Conclusion: Percutaneous RF ablation is an alternative to surgery for treatment of selected chondroblastomas. Larger lesions beneath weight-bearing surfaces should be approached with caution due to an increased risk of articular collapse and recurrence. abstract_id: PUBMED:25432292 Radiofrequency ablation of chondroblastoma: long-term clinical and imaging outcomes. Objectives: To investigate the long-term clinical and imaging outcomes of patients with chondroblastoma treated by radiofrequency ablation (RFA). Methods: Retrospective analysis of 25 consecutive patients treated with RFA from September 2006 to December 2013. Patients were reviewed within one month of the procedure, then every 3-6 months, and yearly for up to three years. Serial magnetic resonance imaging (MRI) was performed at follow-up to monitor recovery. Functional outcome was assessed using the Musculoskeletal Tumour Society Score (MSTS). Results: Pre-procedure MRI confirmed osteolytic lesions (size range 1.0-3.3 cm; mean 2.0 cm).
Patients reported continued symptomatic improvement at four-month review. Serial MRI confirmed progressive resolution of inflammation with fatty consolidation of the cavity. 88% of patients became asymptomatic during the follow-up period. Three patients' (12%) symptoms returned at 16, 22 and 24 months, respectively, after RFA. MRI and biopsy confirmed recurrence in these patients. Functional assessment using the MSTS score gave an average score of 97.5%. Mean follow-up for the study group was 49 months. Conclusion: RFA is an effective alternative to surgery in the management of chondroblastoma. We recommend a multi-disciplinary approach, and RFA should be considered as a first-line treatment. Long-term follow-up is required for timely detection of recurrences. Key Points: • RFA is a safe and effective technique in the treatment of chondroblastoma. • Positive outcomes in 88% of patients at a mean follow-up period of 49 months. • Local recurrences occurred in 12% of cases. • Long-term follow-up is required for timely detection of recurrences. • RFA should be considered as a first-line treatment for chondroblastoma. abstract_id: PUBMED:19468917 Treatment of bone tumours by radiofrequency thermal ablation. Radiofrequency thermal ablation (RFTA) is considered the treatment of choice for osteoid osteomas, in which it has long been safely used. Other benign conditions (chondroblastoma, osteoblastoma, giant cell tumour, etc.) can also be treated by this technique, which is less invasive than traditional surgical procedures. RFTA is also an option for the palliation of localized, painful osteolytic metastatic and myeloma lesions. The reduction in pain improves the quality of life of patients with cancer, who often have multiple morbidities and a limited life expectancy. In some cases, these patients are treated with RFTA because conventional therapies (surgery, radiotherapy, chemotherapy, etc.) have been exhausted. In other cases, it is combined with conventional therapies or other percutaneous treatments, e.g., cementoplasty, offering faster pain relief and bone strengthening. A multidisciplinary approach to the management of these patients is recommended to select the optimal treatment, including orthopaedic surgeons, neurosurgeons, medical and radiation oncologists and interventional radiologists. abstract_id: PUBMED:36320002 Conservative surgery with microwave ablation for recurrent bone tumor in the extremities: a single-center study. Background: Surgical treatment for recurrent bone tumors in the extremities still presents a challenge. This study was designed to evaluate the clinical value of microwave ablation in the treatment of recurrent bone tumors. Methods: We present 15 patients who underwent microwave ablation for recurrent bone tumors during the last 7 years. The following parameters were analyzed for outcome evaluation: general condition, surgical complications, local disease control, overall survival, and functional score measured using the Musculoskeletal Tumor Society (MSTS) 93 scoring system. Results: Percutaneous microwave ablation in one patient with osteoid osteoma and another with bone metastasis resulted in postoperative pain relief. Thirteen patients received intraoperative microwave ablation before curettage or resection, including those with giant cell tumors of bone (6), chondroblastoma (2), osteosarcoma (2), undifferentiated sarcoma (1), and bone metastases (2). All patients achieved reasonable local tumor control over the mean follow-up of 29.9 months.
The functional score was 24.1 for the 15 patients 6 months after the operation. Four patients had tumor metastasis and died, whereas 3 patients with tumors survived, and the remaining 8 patients survived without disease. Conclusions: Microwave ablation represents an optional method for local control in treating recurrent bone tumors in the extremities. abstract_id: PUBMED:36451942 Radio Frequency Ablation for the Treatment of Appendicular Skeleton Chondroblastoma: Is It an Excellent Alternative? Systematic Review and Meta-Analysis. Radio frequency ablation (RFA) is a minimally invasive technique that has become recognized in clinical practice for treating chondroblastoma, although curettage with bone graft is the standard treatment. Chondroblastoma is a locally aggressive cartilaginous bone tumor, representing nearly 5% of benign bone tumors. Chondroblastoma shows a preference toward the epiphysis or apophysis of long bones, but it has also been reported in vertebrae and flat bones. The management of chondroblastoma can be challenging due to the risk of injuring the epiphyseal plate or a difficult location. The aim of this study was to determine if RFA is a suitable alternative to curettage with bone graft for the treatment of chondroblastoma. Moreover, we evaluate RFA's effectiveness in terms of symptom relief, define the proper size of lesion to be treated with RFA, and discuss the complications after the procedure, including the recurrence rate. Furthermore, we review the best imaging method to evaluate the therapeutic response to RFA and for the early detection of residual disease after ablation. A comprehensive PubMed and Google Scholar search followed the Preferred Reporting Items for Systematic Review and Meta-Analysis 2020 checklist guidelines. Ninety-seven patients were identified after reviewing the available full texts of nine articles. The results of the current review provide further evidence to support the use of RFA as an alternative option to surgery. abstract_id: PUBMED:24654528 Diagnosis and treatment of patellar chondroblastoma Chondroblastoma, which rarely occurs in the patella, is an uncommon benign bone tumor. Compared with giant cell tumor, the morbidity of chondroblastoma is lower. Its clinical manifestations are varied and its imaging findings are complex; therefore, understanding of this kind of tumor may be limited, even among orthopedists. The differences between patellar chondroblastoma and other tumors on X-ray, CT and MRI have been a focus of attention in recent years. Sometimes patellar chondroblastoma coexists with aneurysmal bone cyst, which makes it a challenge to obtain an accurate pathological and radiological diagnosis. For treatment, curettage and bone grafting is one of the most popular methods, but whether to perform a biopsy before surgery remains controversial. New techniques such as radiofrequency ablation still have uncertain prospects for treatment. abstract_id: PUBMED:27966029 Chondroblastoma of the thoracic spine: a rare location. Case report with radiologic-pathologic correlation. Chondroblastoma is a rare benign cartilage neoplasm that arises from the appendicular skeleton in the vast majority of the cases (80%). Chondroblastoma of the spine is an even more rare condition (30 cases reported), and vertebral chondroblastomas, unlike chondroblastomas of the extremities, present with the appearance of an aggressive tumor on CT and MR imaging and occur at least a decade later.
Even though vertebral chondroblastomas are very uncommon tumors, they should nonetheless be included in the differential diagnosis when encountered with an aggressive vertebral mass, and a histological confirmation should be performed. We present a case of chondroblastoma of the thoracic spine of a 27-year-old female for which detailed radiologic-pathologic correlation was obtained. Answer: The abstracts provided offer insights into the use of image-guided radiofrequency ablation (RFA) therapy for the treatment of chondroblastomas and its comparison with traditional surgical methods. According to the abstract from PUBMED:24477425, RFA appears to be a safe and effective alternative to surgery for most chondroblastomas, with a low risk of recurrence and complications. The study suggests that RFA may be superior to surgery for small chondroblastomas (less than 2.5 cm) with an intact bony margin and in areas of difficult surgical access. Similarly, the abstract from PUBMED:21767778 indicates that percutaneous image-guided ablation, most commonly RFA, is a standard practice for benign bone tumors, including chondroblastomas, with a high success rate and reduced morbidity, mortality, and expense compared to traditional surgical methods. However, the abstract from PUBMED:16267666 highlights potential risks associated with RFA, particularly near joint surfaces and in weight-bearing areas, where there is a risk of cartilage damage and mechanical weakening of the bone. This suggests that RFA may not be suitable for all cases of chondroblastoma. The abstract from PUBMED:19304917 also supports the use of RFA as an alternative to surgery for selected cases of chondroblastoma but cautions that larger lesions beneath weight-bearing surfaces should be approached with caution due to an increased risk of articular collapse and recurrence. Further support for RFA comes from PUBMED:25432292, which reports that RFA is an effective alternative to surgery in managing chondroblastoma, with positive outcomes in 88% of patients at a mean follow-up period of 49 months. However, local recurrences occurred in 12% of cases, indicating the need for long-term follow-up. The abstract from PUBMED:19468917 discusses RFA as a treatment option for various benign bone conditions, including chondroblastoma, and as a palliative option for painful osteolytic metastatic and myeloma lesions. PUBMED:36320002 presents microwave ablation, another form of thermal ablation, as an optional method for local control in treating recurrent bone tumors, including chondroblastoma.
Instruction: Local recurrence after initial multidisciplinary management of soft tissue sarcoma: is there a way out? Abstracts: abstract_id: PUBMED:27909132 Myxofibrosarcoma of the extremity and trunk: a multidisciplinary approach leads to good rates of local control. Aims: Myxofibrosarcomas (MFSs) are malignant soft-tissue sarcomas characteristically presenting as painless slowly growing masses in the extremities. Locally infiltrative growth means that the risk of local recurrence is high. We reviewed our experience to make recommendations about resection strategies and the role of the multidisciplinary team in the management of these tumours. Patients And Methods: Patients with a primary or recurrent MFS who were treated surgically in our unit between 1997 and 2012 were included in the study. Clinical records and imaging were reviewed. A total of 50 patients with a median age of 68.4 years (interquartile range 61.6 to 81.8) were included. There were 35 men; 49 underwent surgery in our unit. Results: The lower limb was the most common site (32/50, 64%). The mean size of the tumours was 8.95 cm (1.5 to 27.0); 26 (52%) were French Fédération Nationale des Centres de Lutte Contre le Cancer grade III. A total of 21 (43%) had positive margins after the initial excision; 11 underwent further excision. Histology showed microscopic spread of up to 29 mm beyond macroscopic tumour. Local recurrence occurred in seven patients (14%) at a mean of 21 months (3 to 33) and 15 (30%) developed metastases at a mean of 17 months (3 to 30) post-operatively. Conclusion: High rates of positive margins and the need for further excision make this tumour particularly suited to management by multidisciplinary surgical teams. Microscopic tumour can be present up to 29 mm from the macroscopic tumour in fascially-based tumours. Cite this article: Bone Joint J 2016;98-B:1682-8. abstract_id: PUBMED:37575880 Multidisciplinary management of recurrent synovial sarcoma of the chest wall. Background: Synovial sarcoma (SS) is part of soft tissue sarcomas (STS). An incidence of between 5% and 10% is estimated. The origin is mesenchymal, mainly affecting the extremities; it is even rarer at the chest wall and vertebral body, representing around 1% of cases. Histologically, it consists of 3 variants: monophasic, biphasic, and poorly differentiated. Surgical resection is a priority when it comes to multidisciplinary management. The prognosis of patients with SS over the years has improved markedly. Purpose: To understand and evaluate the multidisciplinary management of SS, considering that SS has a low prevalence and high malignancy. Study Design: We present the case of a 31-year-old male with a history of monophasic synovial sarcoma diagnosed in 2019, for which he underwent surgery. The patient returned after two years without symptoms, and following a control MRI we observed a local recurrence of SS. Methods: The literature was reviewed with a focus on the best clinical and surgical strategy for recurrence of SS. Results: The patient recovered well with return to his normal daily activities. The review of the literature shows us the importance of multidisciplinary management for the optimal clinical and surgical approach to SS recurrence. Conclusions: SS represents a unique variant of STS, with malignant and metastatic potential. Being a rare pathology, adequate multidisciplinary management is essential when providing optimal care for the patient. abstract_id: PUBMED:37526250 Local recurrence management of extremity soft tissue sarcoma.
Patients diagnosed with soft tissue sarcoma (STS) present a number of challenges for physicians, due to the vast array of subtypes and aggressive tumor biology. There is currently no agreed-upon management strategy for these tumors, which has led to the ongoing debate surrounding how frequently surveillance scans should be performed following surgery. However, advances in multidisciplinary care have improved patient outcomes over recent years. The early detection of local recurrence reflects a more aggressive tumor, even in association with the same histopathologic entity. Treating the local recurrence of extremity STS is a difficult clinical challenge. The goal should be to salvage limbs when possible, with treatments such as resection and irradiation, although amputation may be necessary in some cases. Regional therapies such as high-intensity, low-dose or interleukin-1 receptor antagonist treatment are appealing options for either definitive or adjuvant therapy, depending on the location of the disease's recurrence. The higher survival rate following late recurrence may be explained by variations in tumor biology. Since long-term survival is, in fact, inferior in patients with high-grade STS, this necessitates the implementation of an active surveillance approach. abstract_id: PUBMED:20700676 Local recurrence after initial multidisciplinary management of soft tissue sarcoma: is there a way out? Background: Multimodality treatment of primary soft tissue sarcoma by expert teams reportedly affords a low incidence of local recurrence. Despite advances, treatment of local recurrence remains difficult and is not standardized. Questions/purposes: We (1) determined the incidence of local recurrence from soft tissue sarcoma; (2) compared characteristics of the recurrent tumors with those of the primary ones; (3) evaluated local recurrences, metastases and death according to treatments; and (4) explored the relationship between the diagnosis of local recurrence and the occurrence of metastases. Methods: From our prospective database, we identified 618 soft tissue sarcomas. Thirty-seven of the 618 patients (6%) had local recurrence. Leiomyosarcoma was the most frequent diagnosis (eight of 37). The mean delay from original surgery was 22 months (range, 2-75 months). Mean size was 4.8 cm (range, 0.4-28.0 cm). Median followup after local recurrence was 16 months (range, 0-98 months). Results: Recurrent tumors had a tendency toward becoming deeper seated and higher graded. Nineteen of the 37 patients with recurrence underwent limb salvage (nine free flaps) and six had an amputation. Twenty-two (59%) had metastases, including 10 occurring after the local recurrence event at an average delay of 21 months (range, 1-34 months). Six patients developed additional local recurrences, with no apparent difference in risk between amputation (two of six) and limb salvage (four of 19). Conclusions: Patients with a local recurrence of a soft tissue sarcoma have a poor prognosis. Limb salvage and additional radiotherapy remain possible but with substantial complications. Amputation did not prevent additional local recurrence or death. abstract_id: PUBMED:32770258 Multidisciplinary surgical treatment approach for dermatofibrosarcoma protuberans: an update. Dermatofibrosarcoma protuberans (DFSP) is a cutaneous sarcoma that has remained a challenge for oncologic and reconstructive surgeons due to a high rate of local recurrence. 
The objective of this study is to investigate the oncologic and reconstructive benefits of employing a multidisciplinary two-step approach to the treatment of DFSP. A retrospective review was conducted using a prospectively collected database of all patients who underwent resection and reconstruction of large DFSPs by a multidisciplinary team, including a Mohs micrographic surgeon, surgical oncologist, dermatopathologist, and plastic and reconstructive surgeon, at one academic institution from 1998 to 2018. Each patient underwent Mohs micrographic surgery for peripheral margin clearance (Step 1) followed by wide local excision (WLE) of the deep margin by surgical oncology and immediate reconstruction by plastic surgery (Step 2). 57 patients met inclusion criteria. Average defect size after WLE (Step 2): 87.3 cm(2) (range 8.5-1073.5 cm(2)). Mean follow-up time was 37 months (range 0-138 months). There were no cases of recurrence. A two-step multidisciplinary surgical treatment approach for DFSP minimizes risk of recurrence, decreases patient discomfort, and allows immediate reconstruction after deep margin clearance. abstract_id: PUBMED:29785452 Local recurrence of soft-tissue sarcoma: issues in imaging surveillance strategy. Soft-tissue sarcomas pose diagnostic and therapeutic challenges to physicians, owing to the large number of subtypes, aggressive tumor biology, lack of consensus on management, and controversy surrounding interval and duration of surveillance scans. Advances in multidisciplinary management have improved the care of sarcoma patients, but controversy remains regarding strategies for surveillance following definitive local control. This review provides an updated, comprehensive overview of the current understanding of the risk of local recurrence of soft-tissue sarcoma, by examining the literature based on features such as histological type and grade, tumor size, and resection margin status, with the aim of helping clinicians, surgeons, and radiologists to develop a tailored approach to local imaging surveillance. abstract_id: PUBMED:8501911 Soft tissue sarcoma: the enigma of local recurrence. Local recurrence following the treatment of soft tissue sarcoma has long been recognized as a grave prognostic sign. Nevertheless, many investigators have recently suggested that local recurrence following limited surgery ("local persistence") may be a manifestation of a tumor's size and metastatic potential and not a cause of tumor cell dissemination. The author reviewed the experience of several investigators with local persistence. This event was not found to be a threat to survival. The author offers an explanation for this unexpected finding. Soft tissue tumors vary widely in their metastatic potential, and patients also may vary widely in their ability to resist the distant implantation of circulating tumor cells. Patients with a low level of host resistance may be more susceptible to both distant metastases and local persistence, and vice versa. Weaker patients succumb to their initial tumor. Patients who survive the circulating tumor cells from their primary tumor may be immunologically prepared to survive the local persistence of a similar volume of tumor without developing distant disease. abstract_id: PUBMED:36772961 Oncologic outcomes in myxofibrosarcomas: the role of a multidisciplinary approach and surgical resection margins.
Background: Myxofibrosarcomas (MFS) are malignant soft tissue sarcomas with an infiltrative growth pattern and propensity for local recurrence (LR). We aimed to assess our management of MFS and make recommendations about the role of a multidisciplinary team approach and margin widths. Methods: Fifty-seven patients were identified with MFS treated at a single sarcoma centre between 1998 and 2020. Patients were stratified based on whether they presented for a planned resection (59.6%) or after an unplanned resection (40.4%) performed at a non-specialized facility. All patients underwent radiotherapy before definitive surgery. Results: 73.7% underwent a combined onco-plastic approach. The 5-year LRFS rate was 78.2% (84.4% planned versus 70.1% unplanned, P = 0.194), and oncological outcomes were comparable between the planned and unplanned groups for 5-year metastasis-free survival (74.5% versus 86.1%, P = 0.257), disease-free survival (70.1% versus 72.4%, P = 0.677), and overall survival (64.5% versus 75.9%, P = 0.950). Margin width ≥ 2 cm was obtained in 84.2% of cases and improved local control (HR = 0.22; 95% CI 0.06-0.81; P = 0.023), metastasis (HR = 0.24; 95% CI 0.07-0.80; P = 0.019) and mortality rates (HR = 0.23; 95% CI 0.09-0.61; P = 0.003) compared to <2 cm. Margin width > 3 cm did not further affect oncological outcomes. Conclusion: Our study shows that a multidisciplinary team approach allows the achievement of a low local recurrence rate and good oncological outcomes in myxofibrosarcomas, regardless of presentation status. We recommend a minimum margin width of 2 cm. abstract_id: PUBMED:37642010 Optimal timing of re-excision in synovial sarcoma patients: Immediate intervention versus waiting for local recurrence. Background: To investigate the difference in efficacy of re-excision in synovial sarcoma patients with and without residual tumor following unplanned excision, and to compare the prognostic outcomes of immediate re-excision versus waiting for local recurrence. Method: This study included synovial sarcoma patients who underwent re-excision at our center between 2009 and 2019, categorized into groups based on unplanned excision and local recurrence. Analyzed endpoints included overall survival (OS), local recurrence-free survival (LRFS), and distant relapse-free survival (DRFS). Prognostic factors associated with these three different survival outcomes were analyzed through the use of Kaplan-Meier curves and Cox regression approaches. Results: In total, this study incorporated 109 synovial sarcoma patients, including 32 (29.4%) with no residual tumor tissue identified after re-excision, 31 (28.4%) with residual tumor tissue after re-excision, and 46 (42.2%) with local recurrence after initial excision. Patients were assessed over a median 52-month follow-up period. The respective 5-year OS, 5-year LRFS, and 5-year DRFS rates were 82.4%, 76.7%, and 74.2% for the nonresidual group, 80.6%, 80.4%, and 77.3% for the residual tumor tissue group, and 63.5%, 50.7%, and 46.3% for the local recurrence group. There was no significant difference in OS between the nonresidual group and residual group patients after re-excision (p = 0.471). Concurrent or sequential treatment with chemotherapy and radiotherapy significantly reduced the risk of metastasis and mortality when compared with noncombined chemoradiotherapy, and was more effective in the local recurrence group (p < 0.05).
Conclusion: Prompt and adequate re-excision is crucial for patients with synovial sarcoma who undergo initial inadequate tumor excision, and their prognosis is significantly better compared with patients who delay re-excision until local recurrence. abstract_id: PUBMED:9053489 Association of local recurrence with subsequent survival in extremity soft tissue sarcoma. Purpose: The aim of this study was to analyze local recurrence in a large cohort of prospectively followed patients with primary extremity soft tissue sarcoma. In particular, we analyzed the correlation of local recurrence with subsequent metastasis and disease-specific survival. Patients And Methods: Patients who underwent treatment for primary extremity soft tissue sarcoma from July 1982 through July 1995 at Memorial Sloan-Kettering Cancer Center were the subject of this study. Local recurrence, distant metastasis, and disease-specific survival were used as end points of the study. The influence of local recurrence on subsequent distant metastasis and disease-specific survival was examined using the Cox proportional hazards model. Results: We treated 911 patients, of whom 297 (33%) developed recurrent disease. Local recurrence occurred in 116 patients (13%), metastasis in 167 (18%), and synchronous local recurrence and metastasis in 13 (2%). Of 116 patients who developed local recurrence, 38 subsequently developed metastasis and 34 died of disease. Metastasis after local recurrence was predicted in patients with initial high-grade (P = .005; risk = 3.5) or deep (P = .02; risk = 2.9) tumors. Tumor mortality after local recurrence was predicted in patients with initial high-grade (P = .007; risk = 3.7) or large (> 5 cm; P = .01; risk = 3.2) primary tumors. Discussion: These findings suggest that there is a strong association of local recurrence with the development of subsequent metastasis and tumor mortality, and that local recurrence is a poor prognostic factor. It would seem prudent to consider patients who develop local recurrence and have high-grade tumors as being at high risk for systemic disease and therefore eligible for investigational adjuvant systemic therapy. Answer: Local recurrence of soft tissue sarcoma (STS) after initial multidisciplinary management remains a significant challenge, with a poor prognosis for patients who experience it. Despite advances in treatment, the management of local recurrence is not standardized and can be difficult (PUBMED:20700676). The incidence of local recurrence is reported to be around 6%, with recurrent tumors tending to become deeper seated and higher graded. Treatment options for local recurrence include limb salvage with or without radiotherapy and amputation, although neither approach guarantees prevention of further local recurrence or death (PUBMED:20700676). A multidisciplinary approach is crucial in managing STS, including myxofibrosarcomas (MFS) and synovial sarcomas (SS), to achieve good local control rates and improve patient outcomes (PUBMED:27909132; PUBMED:37575880). For MFS, achieving wide surgical margins is particularly important due to the infiltrative growth pattern of the tumor, with microscopic tumor spread potentially extending up to 29 mm beyond the macroscopic tumor (PUBMED:27909132). A multidisciplinary team approach has been shown to allow the achievement of low local recurrence rates and good oncological outcomes, with a recommended minimum margin width of 2 cm (PUBMED:36772961).
In the case of SS, immediate re-excision after inadequate initial tumor excision is crucial, as the prognosis is significantly better compared to patients who delay re-excision until local recurrence (PUBMED:37642010). Additionally, the use of combined chemotherapy and radiotherapy can significantly reduce the risk of metastasis and mortality, especially in the local recurrence group (PUBMED:37642010). For dermatofibrosarcoma protuberans (DFSP), a multidisciplinary two-step approach involving Mohs micrographic surgery followed by wide local excision and immediate reconstruction has been shown to minimize the risk of recurrence (PUBMED:32770258). Overall, while local recurrence of STS after initial treatment is a serious concern, a multidisciplinary approach that includes careful surveillance, wide surgical margins, and combined treatments can provide a way forward for managing these challenging cases (PUBMED:20700676; PUBMED:27909132; PUBMED:36772961; PUBMED:32770258; PUBMED:37642010).
Instruction: Are the "life-support" courses updated? Abstracts: abstract_id: PUBMED:16609101 Are the "life-support" courses updated? An evaluation of their literature base. Background: As a condition of employment, many physicians practicing in the United States are mandated to remain current in their certification of some or all of the life-support courses. These courses reputedly set the standard of care by establishing nationally recognized paradigms of resuscitation. These courses' textbooks are revised and re-released at regular intervals. Objectives: To determine whether the source data for these texts are vigorously updated with each revision and whether there is an obvious literature-based impetus to release a new version. Methods: A comparison was made of the years of the references contained within the three most recent textbook editions of the advanced cardiac life support (ACLS), advanced trauma life support, basic life support, and pediatric advanced life support courses. The years of the references were tallied for each text, and these tallies were compared both within and between the courses. Data were divided into three groups: group 1, references published before the previous versions' release; group 2, references published after the previous versions' release; group 3, references dated within three years of the texts' release. Results: There appears to be a large amount of overlap of group 1 data throughout most of the course texts; the number of references in groups 2 and 3 varies greatly between and within these courses. Conclusions: With one exception (ACLS in 2003), the life-support courses appear to be based on similar reference sets for the last 7 to 11 years. There may be a reason other than the availability of a critical mass of new information that prompts the release of a new edition of these life-support courses. abstract_id: PUBMED:32351644 Availability of basic life support courses for the general populations in India, Nigeria and the United Kingdom: An internet-based analysis. Background: The number of lay people willing to attempt cardiopulmonary resuscitation (CPR) in real life is increased by effective education in basic life support (BLS). However, little is known about access of the general public to BLS training across the globe. This study aimed to investigate the availability and key features of BLS courses offered to lay people in India, Nigeria and the United Kingdom (UK). Methods: A Google search was done in December 2018, using English keywords relevant for community resuscitation training. Ongoing courses addressing BLS and suitable for any adult layperson were included in the analysis. On-site training courses were limited to those provided within the country's territory. Results: A total of 53, 29 and 208 eligible courses were found for India, Nigeria and the UK, respectively. In the UK, the number of courses per 10 million population (31.5) is 79 and 21 times higher than that in India (0.4) and Nigeria (1.5), respectively. Course geography is limited to 28% of states and one union territory in India, and to 30% of states and the Federal Capital Territory in Nigeria. In the UK, the training is offered in all constituent countries, with the highest prevalence in England. Courses are predominantly classroom-based and highly variable in duration, group size and instructors' qualifications. For India and Nigeria, the mean cost of participation exceeds the monthly minimum wage.
Conclusion: In contrast to the UK, the availability and accessibility of BLS courses are critically limited in India and Nigeria, necessitating immediate interventions to optimize community CPR training and improve bystander CPR rates. abstract_id: PUBMED:8953961 Life support courses: are they effective? Study Objective: To determine the effectiveness of life support courses for health care providers on the basis of one of three outcomes: (1) patient mortality and morbidity, (2) retention of knowledge or skills, and (3) change in practice behavior. Methods: English-language articles from 1975 to 1992 were identified through MEDLINE and ERIC searches, bibliographies of articles, and current abstracts. Studies were considered relevant if they included a study population of life support providers, an intervention of any of the identified life support courses, and assessment of at least one of the three listed outcomes. Relevant studies were selected and validity scores were assigned to them by agreement of two independent reviewers, using a structured form to assess validity. Data on setting, methods, participants, intervention, and outcomes were then abstracted and verified. Results: Seventeen of 67 identified studies pertaining to life support courses met the inclusion criteria. (1) All three mortality and morbidity studies indicated a positive impact, with an overall odds ratio of .28 (95% confidence interval [CI], .22 to .37). (2) No net increase in scores was found in 5 of 8 studies of retention of knowledge and in 8 of 9 studies of skills retention. Two of three studies reporting refresher activities yielded positive effects on knowledge retention. Outcomes were not significantly different between groups taught with modular or didactic techniques. (3) Studies assessing behavioral outcome were methodologically weak. Conclusion: Among providers, retention of knowledge and skills acquired by participation in support courses is poor. However, refresher activities increase knowledge retention. Modular courses are as good as lectures for learning course material. There is evidence that use of the Advanced Trauma Life Support course has decreased mortality and morbidity. Further studies of patient outcome and provider behaviors are warranted. abstract_id: PUBMED:35204949 Evaluation of Pediatric Immediate Life Support Courses by the Students. A retrospective analysis was performed of 1637 questionnaires among students of immediate pediatric life support (IPLS) courses. All theory and practice classes and organization and methods received an average score higher than 8.5, except for the schedule and time devoted to developing contents. All parameters evaluating instructors' skills received a score higher than 9. Participants requested more time to practice and for course adaptation to their specific professional needs. IPLS courses are highly valued by students. The duration of IPLS practice sessions should be increased and the course should be adapted to the specific professional needs of participants. abstract_id: PUBMED:9819533 Advanced trauma life support versus Combat Trauma Life Support courses: a comparison of cognitive knowledge decline. This prospective study was conducted to compare cognitive knowledge decline among graduates of the Advanced Trauma Life Support (ATLS) and Combat Trauma Life Support (CTLS) courses in Israel.
The investigation was based on multiple-choice questions that tested 211 ATLS and CTLS course graduates, and was performed 3 to 66 months after completion of the courses. These results were then compared with the examination outcomes immediately after the course. A statistical model based on survival analysis was used to evaluate the decline pattern and extent and to compare the two courses. No significant difference was found in the rate of decline in knowledge gained from the two courses after a given period. Priority for refresher courses should be set regardless of the type of course previously attended by physicians. abstract_id: PUBMED:35592876 Blended learning for accredited life support courses - A systematic review. Aim: To evaluate the effectiveness, in terms of educational and resource outcomes, of blended compared to non-blended learning approaches for participants undertaking accredited life support courses. Methods: This review was conducted in adherence with PRISMA standards. We searched EMBASE.com (including all journals listed in Medline), CINAHL and Cochrane from 1 January 2000 to 6 August 2021. Randomised and non-randomised studies were eligible for inclusion. Study screening, data extraction, risk of bias assessment (using RoB2 and ROBINS-I tools), and certainty of evidence evaluation (using GRADE) were all independently performed in duplicate. The systematic review was registered with PROSPERO (CRD42022274392). Results: From 2,420 studies, we included data from 23 studies covering fourteen basic life support (BLS) studies with 2,745 participants, eight advanced cardiac life support (ALS) studies with 33,579 participants, and one Advanced Trauma Life Support (ATLS) study with 92 participants. Blended learning is at least as effective as non-blended learning for participant satisfaction, knowledge, skills, and attitudes. There is potential for cost reduction and eventual net profit in using blended learning despite high set-up costs. The certainty of evidence was very low due to a high risk of bias and inconsistency. Heterogeneity across studies precluded any meta-analysis. Conclusion: Blended learning is at least as effective as non-blended learning for accredited BLS, ALS, and ATLS courses. Blended learning is associated with significant long-term cost savings and thus provides a more efficient method of teaching. Further research is needed to investigate specific delivery methods and the effect of blended learning on other accredited life support courses. abstract_id: PUBMED:33318904 Advanced life support courses in Africa: Certification, availability and perceptions. Background: Advanced life support (ALS) short training courses are in demand across Africa, though overwhelmingly designed and priced for non-African contexts. The continental expansion of emergency care is driving wider penetration of these courses, but their relevance and accessibility are not known. We surveyed clinicians within emergency settings to describe ALS courses' prevalence and perceived value in Africa. Methods: We conducted a cross-sectional quantitative analysis of 235 clinicians' responses to the African Federation for Emergency Medicine's online needs assessment for an open-access ALS course in Africa. Participants responded to multiple-choice and open answer questions assessing demographics, ALS course certification and availability, perceptions of ALS courses, and barriers and facilitators to undertaking such courses. Results: 235 clinicians working in 23 African nations responded.
Most clinicians reported ALS course completion within the past three years (73%) and in-country access to ALS courses (76%). Most believed the content adequately met their region's needs (60%). Price and course availability were the most common barriers to taking an ALS course. The most common courses were cardiac and paediatric-focused, and the most common reasons to take a course included general career development, personal interest, and departmental requirements. Conclusion: One-quarter of emergency care clinicians lack access to ALS courses in twenty-three African nations. Most clinicians believe that ALS courses have value in their clinical settings and meet the needs of their region. Our findings illustrate the need for an affordable, widely available ALS course tailored to lower-resource African settings that could reach rural and peri-urban clinicians. abstract_id: PUBMED:37189881 The Influence of Participation in Pregnancy Courses and Breastfeeding Support Groups on Attitudes and Knowledge of Health Professionals about Breastfeeding. Numerous factors affect the behavior, attitudes, and knowledge of health professionals about breastfeeding. The aim of this paper is to determine the impact of participation in pregnancy courses and breastfeeding support groups on the attitudes and knowledge of health professionals about breastfeeding. The study compares two groups of health professionals according to the results they achieved on a validated questionnaire of behavior, attitudes, and knowledge about breastfeeding. The authors did not make personal contact with the respondents, as the questionnaires were filled out online. The two groups of respondents differed according to the frequency of participation in pregnancy courses, that is, groups for breastfeeding support. The results are presented tabularly and graphically (frequencies and percentages), while differences in the results between the infrequent and regular participants are shown with the Mann-Whitney U test (asymmetric distribution). Better results on the questionnaire were achieved by those who regularly attended breastfeeding support groups (Mdn = 149, IQR = 11) in comparison to infrequent visitors (Mdn = 137, IQR = 23). The same is found for regular visitors of pregnancy courses (Mdn = 149, IQR = 15.75) in comparison to infrequent visitors (Mdn = 137, IQR = 23). The differences are statistically significant (p < 0.00). Partial correlation confirms a more significant influence of breastfeeding support groups (<0.00) than pregnancy courses (p = 0.34). Working in breastfeeding support groups had a statistically significant positive effect on the attitudes and knowledge of health professionals about breastfeeding. The topic of breastfeeding should be given more space and importance during pregnancy courses as well. Personal experience working in breastfeeding support groups and pregnancy courses should be incorporated into the training of medical students. abstract_id: PUBMED:36396010 Specific theoretical and practical education on mechanical chest compression during advanced life support training courses - Results from a local experience. Specific training modules focusing on mechanical chest compression and device use might be considered in a structured manner during standard advanced life support (ALS) courses.
The aim of this study was to evaluate the impact of a brief, specific 15-minute training session on the use of a mechanical CPR device during Advanced Cardiac Life Support courses, in terms of correct device use and attendees' satisfaction. abstract_id: PUBMED:20414632 Six years of Advanced Trauma Life Support (ATLS) in Germany: the 100th provider course in Hamburg. With over 1 million certified physicians in more than 50 countries worldwide, the Advanced Trauma Life Support (ATLS) concept is one of the most successful international education programs. The concept is simple and priority-orientated (ABCDE scheme), and assesses the situation of the trauma patient on the basis of vital signs in order to treat life-threatening injuries immediately. With over 100 ATLS provider courses and 10 instructor courses accomplished in less than 6 years, no other country in the world has successfully established this concept in such a short time as Germany. Meanwhile, nearly 1,600 colleagues have been trained and certified. Evaluation of the first 100 ATLS courses in Germany supports this concept. The total evaluation of all courses is 1.36 (1.06-1.8, n=100). The individual parts of the course were marked as follows: presentations 1.6 (1.0-2.81, n=100), practical skills stations 1.46 (1.0-2.4, n=100) and surgical skills stations 1.38 (1.0-2.38, n=100). In 2009 a total of 47 ATLS courses were held, a number that will clearly increase in 2010. Other ATLS formats, such as ATCN (Advanced Trauma Care for Nurses) and refresher courses, are planned for the beginning of 2010. Answer: Based on the provided abstracts, it appears that "life support" courses undergo periodic updates, but the extent and nature of these updates may vary. According to the evaluation of the literature base for these courses, there seems to be a significant amount of overlap in the references used across different editions of the textbooks for advanced cardiac life support (ACLS), advanced trauma life support, basic life support, and pediatric advanced life support courses. With the exception of ACLS in 2003, the courses appear to be based on similar reference sets for the last 7 to 11 years, suggesting that updates may not always be driven by the availability of new information (PUBMED:16609101). Furthermore, studies have evaluated the effectiveness of life support courses, indicating that while there is evidence of a positive impact on patient mortality and morbidity, retention of knowledge and skills among providers is poor without refresher activities. Modular courses are found to be as effective as lectures for learning material (PUBMED:8953961). In terms of course delivery, blended learning approaches for accredited life support courses have been found to be at least as effective as non-blended learning, with potential for cost reduction and long-term savings (PUBMED:35592876). Specific training modules, such as those focusing on mechanical chest compression during advanced life support training, have also been evaluated for their impact on correct use and attendee satisfaction (PUBMED:36396010). Overall, while "life support" courses are updated and revised, the frequency and significance of these updates may not be consistent across all courses or driven solely by new scientific evidence.
The effectiveness of these courses and the retention of knowledge by healthcare providers remain areas of ongoing study and improvement.
Instruction: Does asthma control correlate with quality of life related to upper and lower airways? Abstracts: abstract_id: PUBMED:19243359 Does asthma control correlate with quality of life related to upper and lower airways? A real life study. Background: The goal of asthma therapy is to achieve an optimal level of disease control, but the relationship between asthma control, the impact of comorbid rhinitis and health related quality of life (HRQoL) in real life remains unexplored. Objective: The aims of this real life study were to evaluate asthma control, the impact of asthma (with and without rhinitis) on HRQoL, the relationship between asthma control and HRQoL, and the role of rhinitis on asthma control and HRQoL. Methods: 122 asthma patients completed the Asthma Control Test, Rhinitis Symptoms score (T5SS) and RHINASTHMA. Results: Asthma control was unsatisfactory (44.27% of patients uncontrolled), as was HRQoL. Controlled patients showed significantly lower scores in all the RHINASTHMA domains compared to uncontrolled patients. Irrespective of their level of control, patients with rhinitis symptoms showed worse HRQoL in Upper Airways (UA) (P < 0.0001), Lower Airways (LA) (P < 0.001), and Global Summary (GS) (P < 0.0001). In patients with symptomatic rhinitis, RHINASTHMA scores were lower in controlled asthma patients (UA P = 0.002; LA P < 0.0001; RAI P < 0.01; GS P < 0.0001). Asthma control was associated with a lower T5SS score (P = 0.034). Conclusion: Asthma control in real life is unsatisfactory. Rhinitis and asthma influence each other in terms of control and HRQoL. The control of rhinitis in asthma patients can lead to an optimization of HRQoL related to the upper airways, while this phenomenon is not so evident in asthma. These results suggest strengthening the ARIA recommendation that asthma patients must be evaluated for rhinitis and vice versa. abstract_id: PUBMED:35270608 Health-Related Quality of Life (HRQoL) of Residents with Persistent Lower Respiratory Symptoms or Asthma Following a Sulphur Stockpile Fire Incident. Background: This study evaluated health-related quality of life (HRQoL) in residents with persistent lower respiratory symptoms (PLRS) or asthma six years after exposure to sulphur dioxide vapours emanating from an ignited sulphur stockpile. Methods: A cross-sectional study was carried out, using interview data collected at three time points (prior to, one- and six-years post incident), medical history, respiratory symptoms and HRQoL using the Medical Outcomes Study Form 36 (SF-36). Results: A total of 246 records, 74 with and 172 without PLRS or asthma, were analysed. The mean age was 42 (SD:12) years in the symptomatic group and 41 (SD:13) years in the asymptomatic group. Mean SF-36 scores were significantly lower for the symptomatic group in the Physical Functioning (24 vs. 39), Role-Physical (33 vs. 48) and General Health (GH) domains (24 vs. 37). Symptomatic residents experienced a significant decline in their Role-Physical (OR = 1.97; CI 1.09, 3.55) and GH (OR = 3.50; CI 1.39, 8.79) at year 6 compared to asymptomatic participants. Residents with co-morbid reactive upper airways dysfunction syndrome demonstrated stronger associations for GH (OR = 7.04; CI 1.61, 30.7) at year 1 and at year 6 (OR = 8.58; CI 1.10, 65.02). Conclusions: This study highlights the long-term adverse impact on HRQoL among residents with PLRS or asthma following a sulphur stockpile fire disaster.
abstract_id: PUBMED:29094017 Frequency and effect of type 1 hypersensitivity in patients from India with chronic obstructive pulmonary disease and associated upper airways symptoms. Background: Chronic obstructive pulmonary disease (COPD) is now recognized as a systemic disorder with many comorbidities. Atopy in patients with COPD and upper airways symptoms has not been characterized. Objective: We investigated the occurrence and impact of aeroallergen sensitisation in patients with COPD and upper airways symptoms. Methods: All 41 subjects with COPD, diagnosed as per Global Initiative for Chronic Obstructive Lung Disease guidelines, underwent spirometry with reversibility, computed tomography of the paranasal sinuses (CT-PNS) and skin prick test (SPT) against common aeroallergens, and responded to St. George's Respiratory Questionnaire (SGRQ) and Sino Nasal Outcome Test - 22 (SNOT-22) questionnaires. Upper airways symptoms were assessed as per the Allergic Rhinitis and its Impact on Asthma guidelines. Results: As documented earlier, 27 of the 41 patients (65.9%) with COPD had upper airways symptoms. Of these 27 patients, 11 had SPT positivity against at least one aeroallergen (group 1). One patient had monosensitisation to pollens of the grass Imperata, while polysensitisation was seen in 10/11 patients, commonly to weeds, trees, and insects. Fungal sensitisation to Aspergillus fumigatus was seen in 3 of 11 patients (27.2%). In group 1, all 11 patients (100%) had radiological sinusitis as compared to 8 of 16 (50%) in group 2. The mean CT-PNS scores were significantly higher in group 1 as compared to group 2. Similarly, the SNOT-22 scores were significantly higher in group 1 as compared to group 2. However, there was no difference in SGRQ scores between the 2 groups. In group 1, there was a significant correlation between CT-PNS and SNOT-22 scores. Conclusion: Patients with COPD, associated upper airways symptoms and a positive SPT had a significantly higher frequency of radiological sinusitis on CT-PNS. They even had worse quality of life as compared to those with a negative SPT. The study suggested that atopic patients with COPD and upper airways involvement were more symptomatic. It is therefore possible that upper airways symptoms, if left untreated, would result in less than desirable control of the disease. abstract_id: PUBMED:24073408 Is health-related quality of life associated with upper and lower airway inflammation in asthmatics? Background: Allergic diseases impair health-related quality of life (HR-QoL). However, the relationship between airway inflammation and HR-QoL in patients with asthma and rhinitis has not been fully investigated. We explored whether the inflammation of upper and lower airways is associated with HR-QoL. Methods: Twenty-two mild allergic asthmatics with concomitant rhinitis (10 males, 38 ± 17 years) were recruited. The Rhinasthma was used to identify HR-QoL, and the Asthma Control Test (ACT) was used to assess asthma control. Subjects underwent lung function and exhaled nitric oxide (eNO) tests, collection of exhaled breath condensate (EBC), and nasal wash. Results: The Rhinasthma Global Summary score (GS) was 25 ± 11. No relationships were found between GS and markers of nasal allergic inflammation (% eosinophils: r = 0.34, P = 0.24; ECP: r = 0.06, P = 0.87) or bronchial inflammation (pH of the EBC: r = 0.12, P = 0.44; bronchial NO: r = 0.27, P = 0.22; alveolar NO: r = 0.38, P = 0.10). The mean ACT score was 18.
When subjects were divided into controlled (ACT ≥ 20) and uncontrolled (ACT < 20), the alveolar NO significantly correlated with GS in uncontrolled asthmatics (r = 0.60, P = 0.04). Conclusions: Upper and lower airways inflammation appears unrelated to HR-QoL associated with respiratory symptoms. These preliminary findings suggest that, in uncontrolled asthma, peripheral airway inflammation could be responsible for impaired HR-QoL. abstract_id: PUBMED:33492196 Manifesto on united airways diseases (UAD): an Interasma (global asthma association - GAA) document. Objective: The large amount of evidence and the renewed interest in upper and lower airways involvement in infectious and inflammatory diseases has led Interasma (Global Asthma Association) to take a position on United Airways Diseases (UAD). Methods: Starting from an extensive literature review, Interasma executive committee discussed and approved this Manifesto developed by Interasma scientific network (INES) members. Results: The manifesto describes the evidence gathered to date and defines, states, advocates, and proposes issues on UAD (rhinitis, rhinosinusitis and nasal polyposis), and concomitant/comorbid lower airways disorders (asthma, chronic obstructive pulmonary disease, bronchiectasis, cystic fibrosis, obstructive sleep apnoea) with the aim of challenging assumptions, fostering commitment, and bringing about change. UAD refers to clinical pictures characterized by the coexistence of upper and lower airways involvement, driven by a common pathophysiological mechanism, leading to a greater burden on patient's health status and requiring an integrated diagnostic and therapeutic plan. The high prevalence of UAD must be taken into account. Upper and lower airways diseases influence disease control and patient's quality of life. Conclusions: Patients with UAD need to have a timely and adequate diagnosis, treatment, and, when recommended, referral for management in a specialized center. Diagnostic testing including skin prick or serum specific IgE, lung function, fractional exhaled nitric oxide (FeNO), polysomnography, allergen-specific immunotherapies, biological therapies and home based continuous positive airway pressure (CPAP) whenever these are recommended, should be part of the management plan for UAD. Education of medical students, physicians, health professionals, patients and caregivers on the UAD is needed. abstract_id: PUBMED:36906884 Work-related asthma consequences on socioeconomic, asthma control, quality of life, and psychological status compared with non-work-related asthma: A cross-sectional study in an upper-middle-income country. Background: Work-related asthma (WRA) is the most prevalent occupational respiratory disease, and it has negative effects on socioeconomic standing, asthma control, quality of life, and mental health status. Most of the studies on WRA consequences are from high-income countries; there is a lack of information on these effects in Latin America and in middle-income countries. Methods: This study compared socioeconomic, asthma control, quality of life, and psychological outcomes among individuals diagnosed with WRA and non-work-related asthma (NWRA) in a middle-income country.
Patients with asthma, related and not related to work, were interviewed using a structured questionnaire to assess their occupational history and socioeconomic conditions, and with questionnaires to assess asthma control (Asthma Control Test and Asthma Control Questionnaire-6), quality of life (Juniper's Asthma Quality of Life Questionnaire), and presence of anxiety and depression symptoms (Hospital Anxiety and Depression Scale). Each patient's medical record was reviewed for exams and use of medication, and comparisons were made between individuals with WRA and NWRA. Results: The study included 132 patients with WRA and 130 with NWRA. Individuals with WRA had worse socioeconomic outcomes, worse asthma control, more quality-of-life impairment, and a higher prevalence of anxiety and depression than individuals with NWRA. Among individuals with WRA, those who had been removed from occupational exposure had a worse socioeconomic impact. Conclusions: Consequences on socioeconomic, asthma control, quality of life, and psychological status are worse for WRA individuals when compared with NWRA. abstract_id: PUBMED:37915724 The effect of biologics in lung function and quality of life of patients with united airways disease: A systematic review. Background: Increasing evidence supports the united airway disease concept for the management of upper and lower respiratory tract diseases, particularly in patients with asthma and chronic rhinosinusitis with nasal polyps (CRSwNP). However, evidence for a combined approach in asthma and CRSwNP is scarce. Objective: In this systematic review, we focused on the role of biologics in the lung function and quality of life in patients with severe asthma and CRSwNP. Methods: We conducted a systematic search of 3 electronic databases using 2 search strategies to identify studies published from January 2010 to March 2022. Quality assessment was performed with the Critical Appraisal Skills Programme. Results: Of 1030 studies identified, 48 original studies reporting data of benralizumab (12), dupilumab (14), mepolizumab (10), omalizumab (13), and reslizumab (2) were analyzed. Primary diagnosis was mostly asthma or CRSwNP, with only 15 studies, mainly observational, performed in populations with united airway disease. In total, 18 studies reported data on quality of life (mostly 22-item Sino-Nasal Outcome Test score), 8 on lung function (mostly FEV1), and 22 on both outcomes. Significant FEV1 and 22-item Sino-Nasal Outcome Test score improvements were consistently observed after 24-week treatment, and thereafter, mostly in real-world studies that included variable proportions of patients with asthma/CRSwNP. Conclusions: The use of biologics in patients with severe asthma and CRSwNP was overall associated with significant improvements in lung function and quality of life. However, we observed a high heterogeneity of populations and outcome measurements across studies. Notwithstanding the need of larger studies, our results reinforce the joint management of asthma and CRSwNP as united airway disease in clinical practice. abstract_id: PUBMED:31283523 Main Contributory Factors on Asthma Control and Health-Related Quality of Life in Elderly Asthmatics. Objective: To assess the main factors involved in asthma control and health-related quality of life in elderly asthmatic patients. 
Methods: We performed a retrospective case-control study nested in a historical cohort that compared patients who had partly controlled or uncontrolled asthma (Asthma Control Test [ACT] score ≤19) (cases) with patients who had well-controlled asthma (ACT ≥20) (controls). Clinical data were collected from medical records. Outcomes included ACT score and health-related quality of life (Asthma-Specific Quality of Life Questionnaire [AQLQ]). Pulmonary function was determined by spirometry. Results: We evaluated 209 asthma patients (151 women) aged ≥65 years. Mean age was 73.55 years. Most patients had persistent moderate (47.60%) or severe (47.12%) asthma. A total ACT score ≤19 was obtained in 64 (30.62%) patients. Lack of adherence to treatment and presence of severe exacerbations were risk factors for partly controlled/uncontrolled asthma (OR, 8.33 and 5.29, respectively). In addition, for each additional unit score in the AQLQ, the risk of poor control increased by 1.51. The factors influencing the AQLQ score were asthma control (ACT) and presence of comorbidities such as depression, gastroesophageal reflux disease, and osteoporosis. Conclusions: Despite receiving antiasthma therapy, almost one-third of elderly patients had uncontrolled asthma, possibly as a result of poor adherence, exacerbations, and reduced health-related quality of life. Nonrespiratory comorbid conditions in older patients do not seem to be associated with worse control of asthma symptoms, although their effect on health-related quality of life could indirectly affect asthma control. abstract_id: PUBMED:29031962 Asthma control and quality of life Introduction: The assessment of asthma control is based on objective measures: clinical, pharmacological, and spirometric. However, a subjective component may also be necessary for assessing asthma control. Objectives: To study the feasibility and clinical value of the assessment of the quality of life of patients with asthma by the SF-36 (Medical Outcomes Study Short Form) and the possible existence of a correlation between controlled asthma and a better quality of life. Patients And Methods: A prospective study that included 167 patients with asthma in a stable condition. Control of asthma and SF-36 were established three months after the inclusion of patients. Results: The SF-36 was lower in the uncontrolled group in all areas of the physical component and the difference was significant in the "limitation related to physical activity" and "perceived health". In the mental component, the score was lower in "mental health" and the "limitation due to mental state" in the group with uncontrolled asthma and the difference was significant only in the limitation due to mental state (P=0.043). Conclusion: The quality of life of asthmatic patients is correlated to the control of this disease. abstract_id: PUBMED:20384618 The impact of concomitant rhinitis on asthma-related quality of life and asthma control. Background: Characterizing the interactions between the upper and lower airways is important for the management of asthma. This study aimed at assessing the specific impact of concomitant rhinitis on asthma-related quality of life (QOL) and asthma control. Methods: A cross-sectional, observational survey was conducted among 1173 patients with asthma (aged 12-45) recruited by general practitioners and chest physicians. AR was defined by self-reported rhinitis symptoms and previously documented sensitization to inhalant allergens.
The primary outcomes were (1) asthma control assessed by the Asthma Control Questionnaire (ACQ) and (2) asthma-specific QOL evaluated through the Mini Asthma Quality of Life Questionnaire (mAQLQ). Results: AR was present in 73.9% of the population with asthma and nonallergic rhinitis (NAR) in 13.6%. AR and NAR were associated with an increased risk of uncontrolled asthma (i.e. ACQ score > 1.5) with adjusted odds ratios (OR) of 2.00 (95% confidence interval [CI]: 1.35-2.97) and 1.77 (95%CI: 1.09-2.89), respectively. Multivariate linear regression analysis showed that AR and NAR had a modest, although significant, negative impact on the global mAQLQ score (beta coefficient: -0.293, standard error [SE]: 0.063 and beta coefficient: -0.221, SE: 0.080, P < 0.001, respectively), even after adjustment for the level of asthma control and demographic characteristics. Conclusion: This survey provides direct evidence that AR and NAR are associated with an incremental adverse impact on the disease-specific QOL of patients with asthma and the level of asthma control. Further investigations are required to determine whether appropriate treatment of rhinitis would efficiently reduce asthma morbidity. Answer: Yes, asthma control does correlate with quality of life related to upper and lower airways. Studies have shown that patients with controlled asthma have significantly better health-related quality of life (HRQoL) scores compared to those with uncontrolled asthma (PUBMED:19243359). Moreover, the presence of comorbid rhinitis can further impact HRQoL, with patients exhibiting rhinitis symptoms showing worse HRQoL in both upper and lower airways (PUBMED:19243359). Additionally, work-related asthma has been associated with worse socioeconomic outcomes, worse asthma control, more quality-of-life impairment, and a higher prevalence of anxiety and depression compared to non-work-related asthma (PUBMED:36906884). Furthermore, the presence of concomitant rhinitis has been shown to have a negative impact on asthma-related quality of life and asthma control, with both allergic rhinitis (AR) and nonallergic rhinitis (NAR) associated with an increased risk of uncontrolled asthma and a negative impact on disease-specific quality of life (PUBMED:20384618). The use of biologics in patients with severe asthma and chronic rhinosinusitis with nasal polyps (CRSwNP) has also been associated with significant improvements in lung function and quality of life, reinforcing the joint management of asthma and CRSwNP as united airway disease in clinical practice (PUBMED:37915724). In elderly asthmatics, factors such as lack of adherence to treatment, presence of severe exacerbations, and reduced health-related quality of life were identified as risk factors for partly controlled or uncontrolled asthma (PUBMED:31283523). Additionally, the quality of life of asthmatic patients has been correlated with the control of the disease, with uncontrolled asthma associated with lower scores in various components of the SF-36 quality of life assessment (PUBMED:29031962). Overall, these findings suggest that achieving and maintaining good asthma control is crucial for improving the quality of life of patients with asthma and that the management of comorbid conditions such as rhinitis is an important aspect of asthma care.
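A note on the control cut-offs these abstracts rely on: ACT totals of 20 or more are treated as controlled asthma (PUBMED:24073408, PUBMED:31283523) and ACQ scores above 1.5 as uncontrolled (PUBMED:20384618). A minimal Python sketch of those thresholds follows; the function names are ours, and the score ranges (ACT 5-25, ACQ 0-6) are the standard ranges of the instruments rather than values restated in the abstracts:

    def act_controlled(act_total: int) -> bool:
        # ACT is five items scored 1-5, so totals run 5-25; >= 20 is "controlled".
        if not 5 <= act_total <= 25:
            raise ValueError("ACT total must lie in 5-25")
        return act_total >= 20

    def acq_uncontrolled(acq_mean: float) -> bool:
        # ACQ items are scored 0-6; a mean score > 1.5 is "uncontrolled".
        if not 0 <= acq_mean <= 6:
            raise ValueError("ACQ mean must lie in 0-6")
        return acq_mean > 1.5

    print(act_controlled(21), acq_uncontrolled(1.8))  # True True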
Instruction: Rapid improvement in diabetes after gastric bypass surgery: is it the diet or surgery? Abstracts: abstract_id: PUBMED:30982211 Clinical Predictors of Rapid Gastric Emptying in Patients Presenting with Dyspeptic Symptoms. Background: Rapid gastric emptying (RGE) is defined as less than 30% retention at 1 h of solid meal ingestion. It is unclear whether RGE represents a separate clinical entity or part of the functional dyspepsia spectrum. Aims: To determine clinical predictors of RGE in patients presenting with dyspeptic symptoms. Methods: Retrospective study of patients who underwent solid Gastric Emptying Scintigraphy to evaluate dyspeptic symptoms from January 2011 to September 2012. Patients with delayed gastric emptying (> 10% gastric retention at 4 h) or prior gastric surgery were excluded. Patients with RGE were compared to those with normal gastric emptying (NGE) in a patient ratio of 1:3. Demographic data, symptoms, comorbidities, surgeries, endoscopy findings, medications, HbA1c, and TSH were analyzed. Univariate and multivariate logistic regression analyses were performed. Results: A total of 808 patients were included, 202 patients with RGE and 606 patients with NGE. Mean gastric retention at 1 h was 18% [12.0, 24.0] and 65% [52.0, 76.0], respectively. Patients with RGE were more likely to present with nausea/vomiting (OR 2.4, p < 0.001), weight loss (OR 1.7, p = 0.008), and autonomic symptoms (OR 2.8, p = 0.022). Identified clinical predictors of RGE were older age (OR 1.08 [1.01, 1.1], p = 0.018), male gender (OR 2.0 [1.4, 2.9], p < 0.001), higher BMI (OR 1.03 [1.00, 1.05], p = 0.018), diabetes (OR 1.8 [1.2, 2.7], p = 0.05), and fundoplication (OR 4.3 [2.4, 7.7], p ≤ 0.001). Conclusion: RGE represents a distinct population among patients presenting with dyspepsia in whom fundoplication, diabetes, and male gender were the strongest clinical predictors. RGE was significantly associated with nausea/vomiting, weight loss, and autonomic symptoms. abstract_id: PUBMED:23530013 Rapid improvement in diabetes after gastric bypass surgery: is it the diet or surgery? Objective: Improvements in diabetes after Roux-en-Y gastric bypass (RYGB) often occur days after surgery. Surgically induced hormonal changes and the restrictive postoperative diet are proposed mechanisms. We evaluated the contribution of caloric restriction versus surgically induced changes to glucose homeostasis in the immediate postoperative period. Research Design And Methods: Patients with type 2 diabetes planning to undergo RYGB participated in a prospective two-period study (each period involved a 10-day inpatient stay, and periods were separated by a minimum of 6 weeks of wash-out) in which patients served as their own controls. The presurgery period consisted of diet alone. The postsurgery period was matched in all aspects (daily matched diet) and included RYGB surgery. Glucose measurements were performed every 4 h throughout the study. A mixed-meal challenge test was performed before and after each period. Results: Ten patients completed the study and had the following characteristics: age, 53.2 years (95% CI, 48.0-58.4); BMI, 51.2 kg/m(2) (46.1-56.4); diabetes duration, 7.4 years (4.8-10.0); and HbA1c, 8.52% (7.08-9.96). Patients lost 7.3 kg (8.1-6.5) during the presurgery period versus 4.0 kg (6.2-1.7) during the postsurgery period (P = 0.01 between periods). Daily glycemia in the presurgery period was significantly lower (1,293.58 mg/dL · day [1,096.83-1,490.33] vs.
1,478.80 mg/dL · day [1,277.47-1,680.13]) compared with the postsurgery period (P = 0.02 between periods). The improvements in the fasting and maximum poststimulation glucose and 6-h glucose area under the curve (primary outcome) were similar during both periods. Conclusions: Glucose homeostasis improved in response to a reduced caloric diet, with a greater effect observed in the absence of surgery as compared with after RYGB. These findings suggest that reduced calorie ingestion can explain the marked improvement in diabetes control observed after RYGB. abstract_id: PUBMED:31983031 Rapid changes in neuroendocrine regulation may contribute to reversal of type 2 diabetes after gastric bypass surgery. Objective: To explore the role of hormones and the autonomic nervous system in the rapid remission of diabetes after Roux-en-Y Gastric Bypass (RYGB). Research Design And Methods: Nineteen obese patients with type 2 diabetes, 7 M/12 F, were randomized (2:1) to RYGB or standard-of-care medical treatment (control). At baseline and 4 and 24 weeks post surgery, fasting blood sampling, OGTT, intravenous arginine challenge, and heart-rate variability (HRV) assessments were performed. Results: At both 4 and 24 weeks post-RYGB the following effects were found: arginine-stimulated insulin secretion was reduced. The GLP-1, GIP, and glucagon rise during OGTT was enhanced. IGF-1 and GH levels increased. In addition, total HRV and spectral components PLF (power of low frequency) and PHF (power of high frequency) increased. At 4 weeks, morning cortisol was lower than baseline and 24 weeks. At 24 weeks, NEFA levels during OGTT and the PLF/PHF ratio decreased. None of these changes were seen in the control group. Conclusions: There were rapid changes within 4 weeks after RYGB: signs of enhanced parasympathetic nerve activity, reduced morning cortisol, and enhanced incretin and glucagon responses to glucose. The findings suggest that neurohormonal mechanisms can contribute to the rapid improvement of insulin resistance and glycemia following RYGB in type 2 diabetes. abstract_id: PUBMED:29805089 Improvement in insulin resistance after gastric bypass surgery is correlated with a decline in plasma 2-hydroxybutyric acid. Background: Gastric bypass surgery for weight reduction often corrects dysglycemia in diabetic patients, but a full understanding of the underlying biochemical pathways continues to be investigated. Objectives: To explore the effects of weight loss by surgical and dietary interventions on plasma metabolites using both targeted and discovery-oriented metabolomics platforms. Setting: An academic medical center in the United States. Methods: Improvement in homeostatic model assessment for insulin resistance (HOMA-IR), as an index of insulin resistance, was compared at 6 months in 11 patients that underwent Roux-en-Y gastric bypass against 11 patients that were matched for weight loss in the Weight Loss Maintenance (WLM) program. Metabolites in plasma were evaluated by nontargeted gas chromatography/mass spectrometry for the potential detection of >1100 biochemical markers. Results: Among multiple metabolites detected, 2-hydroxybutyric acid (2-HBA) declined most significantly after 6 months in comparing patients that underwent Roux-en-Y gastric bypass with those in WLM (P < .001), corresponding with declines in HOMA-IR (P = .025). Baseline levels of 2-HBA for all patients were correlated with preintervention levels of HOMA-IR (R2 = .565, P < .001).
Moreover, the changes in 2-HBA after 6 months were correlated with changes in HOMA-IR (R2 = .399, P = .0016). Conclusions: Correlation between insulin resistance and 2-HBA suggests the utility of the latter as an excellent biomarker for tracking glycemic improvement, and offers further insight into the pathways that control diabetes. This is the first report of a decline in 2-HBA in response to bariatric surgery. abstract_id: PUBMED:33025250 Both gastric electrical stimulation and pyloric surgery offer long-term symptom improvement in patients with gastroparesis. Background: Gastroparesis (GP) is hallmarked by nausea, vomiting, and early satiety. While dietary and medical therapy are the mainstay of treatment, surgery has been used to palliate symptoms. Two established first-line surgical options are gastric electrostimulation (GES) and pyloric procedures (PP) including pyloroplasty or pyloromyotomy. We sought to compare these modalities' improvement in Gastroparesis cardinal symptom index (GCSI) subscores and potential predictors of therapy failure. Methods: All patients undergoing surgery at a single institution were prospectively identified and separated by surgery: GES, PP, or combined GESPP. GCSI was collected preoperatively, at 6 weeks and 1 year. Postoperative GCSI score over 2.5 or receipt of another gastroparesis operation were considered treatment failures. Groups were compared using Pearson's chi-squared and Kruskal-Wallis one-way ANOVA. Results: Eighty-two patients were included: 18 GES, 51 PP, and 13 GESPP. Mean age was 44, BMI was 26.7, and 80% were female. Preoperative GCSI was 3.7. The PP group was older with more postsurgical gastroparesis. More patients with diabetes underwent GESPP. Preoperative symptom scores and gastric emptying were similar among all groups. All surgical therapies resulted in a significantly improved GCSI and nausea/vomiting subscore at 6 weeks and 1 year. Bloating improved initially, but relapsed in the GES and GESPP group. Satiety improved initially, but relapsed in the PP group. Fifty-nine (72%) had surgical success. Ten underwent additional surgery (7 crossed into the GESPP group, 3 underwent gastric resection). Treatment failures had higher preoperative GCSI, bloating, and satiety scores. Treatment failures and successes had similar preoperative gastric emptying. Conclusions: Both gastric electrical stimulation and pyloric surgery are successful gastroparesis treatments, with durable improvement in nausea and vomiting. Choice of operation should be guided by patient characteristics and discussion of surgical risks and benefits. Combination GESPP does not appear to confer an advantage over GES or PP alone. abstract_id: PUBMED:29532631 Roux-en-Y gastric bypass compared with equivalent diet restriction: Mechanistic insights into diabetes remission. Aims: To investigate the physiological mechanisms leading to rapid improvement in diabetes after Roux-en-Y gastric bypass (RYGB) and specifically the contribution of the concurrent peri-operative dietary restrictions, which may also alter glucose metabolism. Materials And Methods: In order to assess the differential contributions of diet and surgery to the mechanisms leading to the rapid improvement in diabetes after RYGB we enrolled 10 patients with type 2 diabetes scheduled to undergo RYGB. 
All patients underwent a 10-day inpatient supervised dietary intervention equivalent to the peri-operative diet (diet-only period), followed by, after a re-equilibration (washout) period, an identical period of pair-matched diet in conjunction with RYGB (diet and RYGB period). We conducted extensive metabolic assessments during a 6-hour mixed-meal challenge test, with stable isotope glucose tracer infusion performed before and after each intervention. Results: Similar improvements in glucose levels, β-cell function, insulin sensitivity and post-meal hepatic insulin resistance were observed with both interventions. Both interventions led to significant reductions in fasting and postprandial acyl ghrelin. The diet-only intervention induced greater improvements in basal hepatic glucose output and post-meal gastric inhibitory polypeptide (GIP) secretion. The diet and RYGB intervention induced significantly greater increases in post-meal glucagon-like peptide-1 (GLP-1), peptide YY (PYY) and glucagon levels. Conclusions: Strict peri-operative dietary restriction is a main contributor to the rapid improvement in glucose metabolism after RYGB. The RYGB-induced changes in the incretin hormones GLP-1 and PYY probably play a major role in long-term compliance with such major dietary restrictions through central and peripheral mechanisms. abstract_id: PUBMED:17368291 Predictors of early quality-of-life improvement after laparoscopic gastric bypass surgery. Background: Quality of life is getting more attention in the medical literature. Treatment outcomes are now gauged by their effect on quality of life (QOL), along with their direct effect on diseases they are targeting. Similarly, in obesity, consensus has been reached on the importance of QOL as an independent outcome measure for obesity surgery along with weight loss and comorbidity. Therefore, the aim of this study was to assess the impact of patient demographics and comorbidities on short-term QOL improvement after laparoscopic gastric bypass (LGB) surgery. Methods: The change in QOL after LGB was assessed in 171 patients (147 women, 24 men; mean age, 43.1 y) using the Short-Form-36 (SF-36) questionnaire. Multivariate logistic regression analysis was used to identify patients' demographics and comorbidities predictive of major QOL improvement. Results: Body mass index decreased significantly at 3 months (48.5 ± 5.8 to 38.4 ± 5.4 kg/m2; P < .001) with excess weight loss of 37.4% ± 9.2%. The SF-36 follow-up evaluation showed significant improvement (44.2 ± 15.7 to 78.6 ± 15.5; P < .001). A significant inverse correlation was found between QOL (before and after bypass) and the number of comorbidities (r = .29, P = .001; R = .22, P = .005, respectively), but the magnitude of QOL change did not correlate with the number of comorbidities (P = .5). When the entire cohort of patients was dichotomized according to their magnitude of change in SF-36 scores, the univariate analysis showed that the group of patients with no improvement or minor improvement in their SF-36 was characterized by a higher percentage of male sex and a lower prevalence of diabetes. These 2 preoperative factors remained statistically significant in the multivariate analysis. Preoperative diagnosis of type 2 diabetes increased the likelihood of major improvement in QOL after LGB by 6.2 times, whereas being a woman increased this likelihood by 16.1 times.
Conclusions: Significant weight loss was achieved as early as 3 months after LGB, causing substantial improvement in QOL in more than 95% of patients. Women with type 2 diabetes have the highest odds of achieving a major QOL improvement after LGB and therefore they should represent the ideal target population for surgical weight loss programs. abstract_id: PUBMED:35123904 Gastric motility disorders and their endoscopic and surgical treatments other than bariatric surgery. Gastroparesis is the most common gastric motility disorder. The cardinal symptoms are nausea, vomiting, gastric fullness, early satiety, or bloating, associated with slow gastric emptying in the absence of mechanical obstruction. Delayed gastric emptying is demonstrated by gastric emptying scintigraphy or by a breath test. Gastroparesis can be idiopathic, post-operative, secondary to diabetes, iatrogenic, or post-infectious. Therapeutic care must be multidisciplinary including nutritional, medical, endoscopic and surgical modes. The complications of delayed gastric emptying must be sought and addressed, particularly malnutrition, in order to identify and correct vitamin deficiencies and fluid and electrolyte disturbances. An etiology should be identified and treated whenever possible. Symptoms can be improved by dietary regimens and pharmaceutical treatments, including prokinetics. If these are not effective, specialized endoscopic approaches such as endoscopic or surgical pyloromyotomy aim at relaxing the pyloric sphincter, while the implantation of an electrical stimulator of gastric muscle should be discussed in specialized centers. abstract_id: PUBMED:31598899 Perioperative Outcomes of Roux-en-Y Gastric Bypass and Sleeve Gastrectomy in Patients with Diabetes Mellitus: an Analysis of the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) Database. Background: The safety and efficacy of laparoscopic sleeve gastrectomy (LSG) and laparoscopic Roux-en-Y gastric bypass (LRYGB) to treat obesity and associated comorbidities, including diabetes mellitus, is well established. As diabetes may add risk to the perioperative period, we sought to characterize perioperative outcomes of these surgical procedures in diabetic patients. Methods: Using the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) database, we identified patients who underwent LSG and LRYGB between 2015 and 2017, grouping by non-diabetics (NDM), non-insulin-dependent diabetics (NIDDM), and insulin-dependent diabetics (IDDM). Primary outcomes included serious adverse events, 30-day readmission, 30-day reoperation, and 30-day mortality. Univariate and multivariable analyses were used to evaluate the outcome in each diabetic cohort. Results: Multivariable analysis of patients who underwent LSG (with NDM patients as reference) showed higher 30-day mortality (NIDDM AOR = 1.52, p = 0.043; IDDM AOR = 1.91, p = 0.007) and risk of serious adverse events (NIDDM AOR = 1.15, p < 0.001; IDDM AOR = 1.58, p < 0.001) in the diabetic versus NDM groups. Multivariable analysis of patients who underwent LRYGB (with NDM patients as reference) showed higher risk of serious adverse events (NIDDM AOR = 1.09, p = 0.014; IDDM AOR = 1.43, p < 0.001) in the diabetic versus NDM groups. Conclusions: Diabetics who underwent LSG and LRYGB had higher rates of several perioperative complications compared with non-diabetics.
IDDM had a stronger association with several perioperative complications compared with NIDDM. This increase in morbidity and mortality is modest and should be weighed against the real benefits of bariatric surgery in patients with obesity and diabetes mellitus. abstract_id: PUBMED:18996755 Percentage of excess BMI lost correlates better with improvement of metabolic syndrome after Roux-en-Y gastric bypass in morbidly obese subjects: anthropometric indexes and gastric bypass. Background: The metabolic syndrome (MS) is a complex disorder characterized by a number of cardiovascular risk factors usually associated with central fat deposition and insulin resistance. Many different medical treatments are available for MS, including bariatric surgery, which improves all risk factors. The present study aimed to evaluate, at the Clinic of Gastroenterology and Obesity Surgery (Brazil), the accuracy of different anthropometric indexes and their correlation with improvement of the MS factors in the postoperative (6-month) period. Methods: This was a retrospective study of 140 patients who had undergone gastric bypass. Most of the patients were women (79.3%). The mean body mass index was 44.17 kg/m(2). We evaluated the weight of the subjects, the presence of diabetes mellitus and hypertension as co-morbidities, and the biochemical parameters. The anthropometric indexes tested included the percentage of excess weight loss, amount lost in kilograms, difference in the body mass index, percentage of initial weight lost, percentage of excess body mass index lost (%EBL), and percentage of initial fat mass lost. Results: The %EBL had a Spearman's correlation coefficient of 0.55 (P < .0001) for the difference between the MS factors before and after Roux-en-Y gastric bypass. The receiver operating characteristic curve for the %EBL resulted in an area under the curve of 0.846 (P = 0.0001) and a sensitivity and specificity of 100% and 61.29%, respectively, for a cutoff value of 64.55%. Conclusion: The %EBL was the best method to report weight loss and the improvement in MS in morbidly obese subjects after Roux-en-Y gastric bypass. Answer: The rapid improvement in diabetes after gastric bypass surgery has been a subject of investigation to understand whether it is primarily due to the surgery itself or the caloric restriction associated with the postoperative diet. Research has shown that glucose homeostasis improves in response to a reduced caloric diet, with a greater effect observed in the absence of surgery as compared with after Roux-en-Y gastric bypass (RYGB) (PUBMED:23530013). This suggests that reduced calorie ingestion can explain the marked improvement in diabetes control observed after RYGB. However, other studies have indicated that surgically induced changes, such as hormonal alterations, may also play a significant role in the rapid remission of diabetes after RYGB. For instance, rapid changes in neuroendocrine regulation, including enhanced parasympathetic nerve activity, reduced morning cortisol, and enhanced incretin and glucagon responses to glucose, have been observed within weeks after RYGB, suggesting that neurohormonal mechanisms contribute to the rapid improvement of insulin resistance and glycemia following the surgery (PUBMED:31983031).
Moreover, a study comparing the physiological mechanisms leading to rapid improvement in diabetes after RYGB with equivalent peri-operative dietary restrictions found that strict dietary restriction is a main contributor to the rapid improvement in glucose metabolism after RYGB. However, the RYGB-induced changes in incretin hormones such as glucagon-like peptide-1 (GLP-1) and peptide YY (PYY) likely play a major role in long-term compliance with dietary restrictions through central and peripheral mechanisms (PUBMED:29532631). In summary, while caloric restriction plays a significant role in the rapid improvement of diabetes after gastric bypass surgery, surgically induced hormonal and neuroendocrine changes also contribute to the remission of diabetes, suggesting that both diet and surgery are important factors in the observed improvements in diabetes control post-surgery.
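Two indices recur in this record without their formulas: HOMA-IR (PUBMED:29805089) and the percentage of excess BMI lost, %EBL (PUBMED:18996755). A minimal Python sketch of the conventional definitions, offered as an assumption since the abstracts do not spell them out (excess BMI is customarily counted above 25 kg/m(2)):

    def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
        # HOMA-IR = fasting glucose [mg/dL] x fasting insulin [uU/mL] / 405.
        return glucose_mg_dl * insulin_uU_ml / 405.0

    def percent_excess_bmi_lost(bmi_initial: float, bmi_current: float) -> float:
        # %EBL = 100 x (initial BMI - current BMI) / (initial BMI - 25).
        return 100.0 * (bmi_initial - bmi_current) / (bmi_initial - 25.0)

    # Illustrative values: a patient dropping from BMI 44.2 to 34.2 has lost
    # ~52% of excess BMI, below the 64.55% cut-off that PUBMED:18996755 links
    # to improvement of the metabolic syndrome.
    print(round(homa_ir(126.0, 15.0), 2), round(percent_excess_bmi_lost(44.2, 34.2), 1))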
Instruction: Is HE4 a useful endometrioma marker? Abstracts: abstract_id: PUBMED:25118488 Is HE4 a useful endometrioma marker? Purpose Of Investigation: By comparing the trends of the most commonly used tumor markers (cancer antigen 125: CA 125; human epididymal secretory protein E4: HE4) before and after laparoscopic surgery, the present study aimed to assess the usefulness of HE4 in the diagnosis of benign ovarian cysts and endometrioma. Materials And Methods: Thirty-eight patients were enrolled in this prospective study: 25 women underwent unilateral endometriotic ovarian cyst excision, 13 underwent benign ovarian cyst incision, and 26 were healthy controls. CA 125 and HE4 serum levels were estimated before surgery (in the early proliferative phase of the cycle) and one month after surgery. Results: A statistically significant decrease of CA 125 serum level was found after an endometrioma surgical excision but no decreases in HE4 serum level. Conclusion: In patients with endometrioma, no alteration was found in HE4 serum levels before and after surgery, while CA125 serum levels decreased after surgery. HE4 may better distinguish a malignant cyst from a benign one, but it is not useful in the diagnosis of low risk endometrioma. abstract_id: PUBMED:27899969 Diagnostic usefulness of the Risk of Ovarian Malignancy Algorithm using the electrochemiluminescence immunoassay for HE4 and the chemiluminescence microparticle immunoassay for CA125. The present study aimed to investigate the usefulness of the Risk of Ovarian Malignancy Algorithm (ROMA) in the preoperative stratification of patients with ovarian tumors using a novel combination of laboratory tests. The study group (n=619) consisted of 354 premenopausal and 265 postmenopausal patients. The levels of carbohydrate antigen 125 (CA125) and human epididymis protein 4 (HE4) were determined, and ROMA calculations were performed for each pre- and postmenopausal patient. HE4 levels were determined using an electrochemiluminescence immunoassay, while CA125 levels were determined by a chemiluminescence microparticle immunoassay. A contingency table was applied to calculate the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). Receiver operating characteristic curves were also constructed, and areas under the curves (AUCs) were compared between the marker determinations and ROMA algorithms. In terms of distinguishing between ovarian cancer and benign disease, the sensitivity of ROMA was 88.3%, specificity was 88.2%, PPV was 75.3% and NPV was 94.9% among all patients. The respective parameters were 71.1, 90.1, 48.2 and 91.1% in premenopausal patients and 93.6, 82.9, 86.6 and 91.6% in postmenopausal patients. The AUC value for the ROMA algorithm was 0.926 for the ovarian cancer vs. benign groups in all patients, 0.813 in premenopausal patients and 0.939 in postmenopausal patients. The respective AUC values were 0.911, 0.879 and 0.934 for CA125; and 0.879, 0.783 and 0.889 for HE4. In this combination, the ROMA algorithm is characterized by an extremely high sensitivity for the prediction of ovarian cancer in women with pelvic masses, and may constitute a precise tool with which to support the qualification of patients for appropriate surgical procedures. The ROMA may be useful in diagnosing ovarian endometrial changes in young patients. abstract_id: PUBMED:34590878 HE4 might be a more useful tumor biomarker to detect malignancy in patients with ovarian endometrioma when malignancy is suspected.
Objective: To analyze the utility of carbohydrate antigen (CA)125 and human epididymis protein 4 (HE4) to detect malignancy in women with ovarian endometriosis, when ovarian cancer is suspected and ultrasonography results are inconclusive. Methods: Women who underwent surgery between 2015 and 2019 for ovarian endometriosis or for adnexal masses, with a final diagnosis of ovarian carcinoma (clear cell and endometrioid) were included in this retrospective study. The women were divided into three groups: ovarian endometriosis (OE), ovarian carcinoma without endometriosis (OC), and ovarian carcinoma with endometriosis (OC + E). Adnexal masses were assessed preoperatively by transvaginal ultrasonography according to the International Ovarian Tumor Analysis (IOTA) simple rules, and CA125 and HE4 blood levels were obtained. Results: Of 208 women, 45 had malignancy, 16 in the OC + E group and 29 in the OC group. According to transvaginal ultrasonography, 13 were classified as undetermined risk of malignancy: OC group: 3, OE group: 3, and OC + E group: 7. When we compared the tumor biomarkers, significant differences in HE4 but not in CA125 levels were found between the groups. Conclusions: When ovarian malignancy is suspected in patients with ovarian endometriosis, HE4 is a more useful tumor biomarker to diagnose OC when ultrasonography results are inconclusive. abstract_id: PUBMED:30009497 Diagnostic performance of CA 125, HE4, and risk of Ovarian Malignancy Algorithm for ovarian cancer. Objective: We evaluated the diagnostic performance of CA 125, HE4, and ROMA for ovarian cancer in Koreans and set optimal cutoffs. Method: Serum levels of HE4 and CA 125 and the ROMA score were determined in 762 patients with benign gynecological disease and 70 with ovarian cancer. Receiver operating characteristic curves were constructed to calculate the areas under the curve (AUC). CA 125, HE4, and ROMA exhibiting maximum Youden index were determined, respectively, as the optimal cutoffs, and sensitivity and specificity were evaluated by applying those cutoffs. Results: In benign diseases, CA 125 significantly increased in patients with uterine myoma, adenomyosis, endometrial pathology, or endometriosis, but HE4 only increased in patients with adenomyosis. For the diagnosis of ovarian cancer, the combination of CA 125, HE4, and age showed the highest AUC value of 0.892 in the premenopausal group, and ROMA demonstrated the best diagnostic performance, with an AUC of 0.935 in postmenopausal patients. When the optimal cutoff values for CA 125 and HE4 were applied, the sensitivities of CA 125, HE4, and ROMA in premenopausal women were all the same at 0.714, while the specificities were 0.841, 0.974, and 0.972, respectively. In the postmenopausal group, the sensitivities of these markers were 0.857, 0.804, and 0.929, and the specificities were 0.836, 0.887, and 0.800, respectively. Conclusion: Although all markers demonstrated good diagnostic performance, they varied depending on the pathologic types of benign diseases and ovarian cancer. For accurate diagnosis of ovarian cancer, CA 125, HE4, and ROMA should be used complementarily. abstract_id: PUBMED:21340004 HE4 in the differential diagnosis of a pelvic mass: a case report. Neoplasms of the ovary present an increasing challenge to the physician. Neoplastic ovarian cysts can resemble endometriomas in ultrasound imaging and need to be carefully considered in the differential diagnosis. 
We report the case of a woman with a strong family history of hereditary breast and ovarian cancer, who presented with a pelvic mass. The young girl refused oncogenetic counseling and genetic testing, even though she had a 50% a priori probability of being a BRCA1 mutation carrier. Pelvic magnetic resonance imaging (MRI) and a comparative analysis of the serum concentration of HE-4 and CA125 biomarkers provided accuracy and sensitivity in the diagnosis of a benign ovarian pathology. Based on this experience, we propose that a screening program based on HE4 and CA125 assays and MRI in high risk patients with mutations in the BRCA1 and BRCA2 genes may be considered a useful pre-operative tool for the differential diagnosis of pelvic masses. abstract_id: PUBMED:36708088 Clinical characteristics and serum CA19-9 combined with HE4 are valuable in diagnosing endometriosis-associated ovarian cancer. Objective: Endometriosis-associated ovarian cancer (EAOC) is difficult to diagnose because of its low incidence, uncertain risk factors, and the absence of effective markers. This study aimed to investigate the clinical characteristics of EAOC and identify useful serological markers. Methods: We retrospectively studied the clinical characteristics of patients with EAOC and ovarian endometriosis, obtained between January 1, 2011 and October 31, 2021. Univariate and multivariate logistic regression analyses were used to explore the relationship between clinical characteristics and EAOC. Receiver operating characteristic curves were applied to assess the diagnostic value of serological markers in EAOC. Results: In total, the clinical characteristics of 220 patients were obtained; 44 with EAOC and 176 with ovarian endometriosis. EAOC patients were older (46.20 vs. 36.26 years, P < 0.001) and had larger tumors (9.10 vs. 6.73 cm, P = 0.003) together with higher CA19-9 (21.44 vs. 4.72 U/mL, P < 0.001) and HE4 levels (62.35 vs. 44.19 pmol/L, P < 0.001) when compared with ovarian endometriosis patients. Multivariate analysis showed that HE4 greater than 59.7 pmol/L, CA19-9 greater than 8.5 U/mL, age 42 years or older, and tumor length 9.2 cm or longer were independent risk factors for EAOC. Significantly, CA19-9 combined with HE4 had high sensitivity (72.73%) and specificity (78.41%) in diagnosing EAOC. Conclusion: Age over 42 years, large ovarian tumor, serum CA19-9 and HE4 are valuable in the diagnosis of EAOC. abstract_id: PUBMED:30123870 Adnexal mass with extremely high levels of CA-125 and CA19-9 but normal Human Epididymis Protein 4 (HE4) and Risk of Ovarian Malignancy Algorithm (ROMA): Endometriosis or ovarian malignancy? A case report. Background: It has been shown that Carbohydrate antigen (CA) 125 and CA 19-9 tumor markers are useful for diagnosis and follow up of ovarian carcinoma. Case: In this case, we report high levels of CA-125 and CA 19-9 with a large intact right ovarian endometrioma and extensive involvement of the omentum. Conclusion: Human Epididymis protein (HE4) and the Risk of ovarian malignancy algorithm (ROMA) can be useful in differentiating between malignancies and benign pathologies with good sensitivity and specificity. abstract_id: PUBMED:25920309 Evaluation of applicability of HE4 and ROMA in the preoperative diagnosis of adnexal masses Objective: The aim of the study was to evaluate the effectiveness of HE4 alone and in combination with CA 125 (ROMA) in selecting patients at high risk of adnexal malignancy.
Material And Methods: Serum CA 125 and HE4 levels were determined and the ROMA value was calculated in 259 women qualified for surgery due to adnexal mass. The results were compared with histopathological findings. Results: Sensitivity and specificity in preoperative diagnosis of primary ovarian cancer were 93.2% and 71.5% for CA 125 and 95.4% and 81.3% for HE4, respectively. The ROMA algorithm achieved sensitivity of 95.4% and specificity of 79.8%. All methods reached sensitivity of 100% at specificity of 65.6% for CA125, 93.4% for HE4 and 82.0% for ROMA in premenopausal women, whereas in postmenopausal women sensitivity and specificity achieved levels of 92.1% and 81.7% for CA 125, 94.7% and 60.6% for HE4 and 94.7% and 76.1% for ROMA, respectively. Serum levels of both CA 125 and HE4 were significantly higher in women with primary ovarian cancer as compared to benign disease. Concentrations of CA 125 in patients with endometriosis were significantly elevated as compared to women with other benign tumors. No such relationship was observed for HE4 levels. Conclusions: CA 125, HE4 and ROMA are useful in preoperative diagnosis of ovarian malignancy. HE4 improves the diagnostic accuracy in cases of endometriosis, verifying false positive results of CA 125. abstract_id: PUBMED:30917847 Biomarkers and algorithms for diagnosis of ovarian cancer: CA125, HE4, RMI and ROMA, a review. Ovarian cancer is the 5th leading cause of death for women with cancer worldwide. In more than 70% of cases, it is only diagnosed at an advanced stage. Our study aims to give an update on the biological markers for diagnosing ovarian cancer, specifically HE4, CA 125, RMI and ROMA algorithms. Serum CA125 assay has low sensitivity in the early stages and can be increased in certain conditions such as menstruation or endometriosis. HE4 is overexpressed in ovarian tumors. Its specificity is 94% and its level is not affected by endometriosis cysts. The combined measures of CA125 and HE4 have proved to be highly efficient with an area under the curve (AUC) of up to 0.96. Furthermore, this combined measure of CA125 can correct the variations in HE4 which are due to smoking or contraception combining estrogen plus progestin. While the specificity of RMI sometimes reaches 92%, the rather low AUC of 0.86 does not make it the best diagnostic tool. The specificity of ROMA is lower than HE4 (84% compared to 94%). To date, the most efficient biological diagnostic tool to diagnose ovarian cancer is the combination of CA125 and HE4. abstract_id: PUBMED:22426406 Serum HE4 as a useful biomarker in discriminating ovarian cancer from benign pelvic disease. Objective: To evaluate the role of the novel tumor marker human epididymal secretory protein E4 (HE4) in discriminating ovarian cancer from benign pelvic disease in patients with a pelvic mass. Methods: Serum samples from 131 patients with epithelial ovarian cancer (EOC) and 126 patients with various benign pelvic diseases were collected preoperatively and tested for cancer antigen (CA)125 and HE4 levels. Receiver operator characteristic curves were constructed, and the area under the curve (AUC) was compared between the markers. Results: The median CA125 and HE4 levels were significantly higher in the patients with EOC than in those with benign disease (P < 0.001). Using benign controls as the comparison group for all cases, the AUC for combined HE4 and CA125 (0.955) was significantly higher than that for HE4 (0.941) or CA125 alone (0.924; P < 0.05).
A comparison of premenopausal benign controls to EOC cases showed that the AUC for combined HE4 and CA125 (0.97) was significantly higher than that for CA125 (0.93; P < 0.004). The AUC for HE4 was significantly higher compared to that of CA125 in discriminating EOC from ovarian endometriosis (0.969 vs 0.904; P = 0.014) and pelvic inflammatory disease (0.909 vs 0.819; P = 0.034). Conclusion: Serum HE4 testing is a more powerful tool than CA125 assay to discriminate EOC from ovarian endometriosis and pelvic inflammatory disease. For patients with a pelvic mass, especially premenopausal patients, the serum concentration of HE4 adds valuable information to CA125 in identifying patients with EOC from those with benign pelvic disease. Answer: Based on the provided abstracts, HE4 does not appear to be a useful marker for diagnosing endometrioma specifically. One study found that there was no statistically significant decrease in HE4 serum levels after surgical excision of an endometrioma, suggesting that HE4 may not be useful in the diagnosis of low-risk endometrioma (PUBMED:25118488). However, HE4 might be more useful in detecting malignancy in patients with ovarian endometrioma when malignancy is suspected, especially when ultrasonography results are inconclusive (PUBMED:34590878). Additionally, HE4 has been shown to improve diagnostic accuracy in cases of endometriosis by verifying false-positive results of CA125 (PUBMED:25920309). The Risk of Ovarian Malignancy Algorithm (ROMA), which includes HE4, has been found to be a precise tool for predicting ovarian cancer in women with pelvic masses and may be useful in diagnosing ovarian endometrial changes in young patients (PUBMED:27899969). Furthermore, HE4 has been reported to be a more powerful tool than CA125 in discriminating epithelial ovarian cancer (EOC) from ovarian endometriosis and pelvic inflammatory disease, particularly in premenopausal patients (PUBMED:22426406). In summary, while HE4 may not be a reliable marker for endometrioma alone, it can be valuable in the context of differentiating benign from malignant ovarian masses, particularly when used in combination with other markers and algorithms such as CA125 and ROMA.
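Two calculations sit behind these abstracts and are worth making explicit in a short Python sketch. The contingency-table metrics follow directly from PUBMED:27899969; the ROMA coefficients are the ones published by Moore et al. (2009) and are an external assumption here, since none of the abstracts restates them. The counts in the usage example are illustrative values chosen to roughly reproduce the overall figures quoted above (88.3/88.2/75.3/94.9%):

    from math import exp, log

    def roma_percent(he4_pmol_l: float, ca125_u_ml: float, premenopausal: bool) -> float:
        # Predictive index (PI), then ROMA = 100 * exp(PI) / (1 + exp(PI)).
        if premenopausal:
            pi = -12.0 + 2.38 * log(he4_pmol_l) + 0.0626 * log(ca125_u_ml)
        else:
            pi = -8.09 + 1.04 * log(he4_pmol_l) + 0.732 * log(ca125_u_ml)
        return 100.0 * exp(pi) / (1.0 + exp(pi))

    def contingency_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        # Sensitivity, specificity, PPV and NPV from a 2x2 contingency table.
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    print(round(roma_percent(70.0, 35.0, premenopausal=False), 1))
    print(contingency_metrics(tp=83, fp=27, fn=11, tn=198))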
Instruction: Do tibiofemoral contact point and posterior condylar offset influence outcome and range of motion in a mobile-bearing total knee arthroplasty? Abstracts: abstract_id: PUBMED:29980426 Effects of posterior condylar offset and posterior tibial slope on mobile-bearing total knee arthroplasty using computational simulation. Background: Postoperative changes of the femoral posterior condylar offset (PCO) and posterior tibial slope (PTS) affect the biomechanics of the knee joint after fixed-bearing total knee arthroplasty (TKA). However, the biomechanics of mobile-bearing TKA are not well known. Therefore, the aim of this study was to investigate whether alterations to the PCO and PTS affect the biomechanics for mobile-bearing TKA. Methods: We used a computational model for a knee joint that was validated using in vivo experiment data to evaluate the effects of the PCO and PTS on the tibiofemoral (TF) joint kinematics, patellofemoral (PF) contact stress, collateral ligament force and quadriceps force, for mobile-bearing TKA. The computational model was developed using ±1-, ±2- and ±3-mm PCO models in the posterior direction and -3°, 0°, +3°, and +6° PTS models based on each of the PCO models. Results: The maximum PF contact stress, collateral ligament force and quadriceps force decreased as the PTS increased. In addition, the maximum PF contact stress and quadriceps force decreased, and the collateral ligament force increased as PCO translated in the posterior direction. This trend is consistent with that observed in any PCO and PTS. Conclusions: Our findings show the various effects of postoperative alterations in the PCO and PTS on the biomechanical results of mobile-bearing TKA. Based on the computational simulation, we suggest that orthopaedic surgeons intraoperatively conserve the patient's own anatomical PCO and PTS in mobile-bearing TKA. abstract_id: PUBMED:23677140 Do tibiofemoral contact point and posterior condylar offset influence outcome and range of motion in a mobile-bearing total knee arthroplasty? Purpose: The posterior condylar offset (PCO) and the tibiofemoral contact point (CP) have been reported as important factors that can influence range of motion and clinical outcome after total knee arthroplasty. A mobile-bearing knee implant with an anterior posterior gliding insert would in theory be more sensitive to changes in PCO and CP. For this reason, we analysed the PCO and CP and the relation with outcome and range of motion in 132 patients from a prospectively documented cohort in this type of implant. Methods: The prosthesis used was a posterior cruciate retaining AP gliding mobile-bearing total knee replacement (SAL II Sulzer Medica, Switzerland). In 132 knees, the pre- and postoperative PCO and postoperative CP were evaluated. Measurements were made on X-rays of the knee taken in approximately 90° of flexion and with less than 3-mm rotation of the femur condyles. The outcome parameters, range of motion (ROM) and the Knee Society Score (KSS), for each knee were determined preoperatively and at 5-year follow-up. Results: The mean KSS improved from 91 to 161 at 5-year follow-up (p < 0.001) and the mean ROM from 102 to 108 (p < 0.05). The mean PCO difference (postoperative PCO minus preoperative PCO) was -0.05 mm (SD 2.15). The CP was on average 53.9% (SD 5.5%). ROM was different between the 3 PCO groups (p = 0.05): patients with 3 or more mm decrease in PCO had the best postoperative ROM (p = 0.047).
There was no statistical difference in postoperative ROM between patients with a stable PCO and those with an increased PCO. There was no correlation between the difference in PCO and the difference in ROM; Pearson's r = -0.056. There was no difference in postoperative ROM or postoperative total KSS between CP <60% and CP >60%: p = 0.22, p = 0.99, for ROM and KSS, respectively. Scatter plots showed uniform clouds of values: increase or decrease in PCO and CP had no significant influence on ROM or KSS. Conclusion: The hypotheses that a stable PCO and a more natural CP increase postoperative ROM and improve clinical outcome could not be confirmed. On the contrary, a decreased PCO seemed to improve knee flexion. Furthermore, a relationship between PCO and CP could not be found. Level Of Evidence: Prospective cohort study, Level II. abstract_id: PUBMED:36536107 The role of posterior condylar offset ratio on clinical and functional outcome of posterior stabilized total knee arthroplasty: a retrospective cohort study. Background: Postoperative Range of Motion (ROM) is an important measurement of the success of a Total Knee Arthroplasty (TKA). Much enthusiasm has been recently directed toward the posterior femoral condylar offset (PFCO), with some authors reporting increasing postoperative knee flexion when increasing PFCO. The aim of this study is to retrospectively determine the effect of the PFCO on the clinical and functional outcome of a cohort of patients who underwent a Posterior Stabilized (PS) TKA. Methods: Clinical and radiological data of all patients who underwent TKA with PS implant for primary osteoarthritis were retrospectively reviewed. Knee Society Score (KSS), knee ROM, PFCO ratio (PFCOR), and tibial slope (TS) were measured pre- and postoperatively. Results: One hundred and twenty-one patients (141 knees) met the inclusion criteria. The mean knee flexion increased from 98 ± 20.2° (range 30-130) to 123 ± 12.1° (range 70-140) and the mean KSS increased from 74.0 ± 3.3 (range 27-130) to 203.9 ± 8.1 (range 26-249). Postoperative PFCOR and TS were 0.492 ± 0.005 (range 0.40-0.57) and 2.36 ± 0.56° (range -10.9 to 12.15°), respectively. Neither maximal flexion angle nor KSS showed a significant correlation with postoperative PFCOR (Pearson's r = -0.057, p = 0.5 for flexion angle and Pearson's r = -0.073, p = 0.5 for KSS) or with postoperative TS (Pearson's r = 0.042, p = 0.62 for flexion angle and Pearson's r = 0.002, p = 0.98 for KSS). Conclusion: Posterior femoral condylar offset remains an important parameter and, especially when using anterior femoral referencing TKA, care must be taken to prevent excessive resection of the posterior femoral condyles. abstract_id: PUBMED:27108259 Full-thickness cartilage-based posterior femoral condylar offset. Influence on knee flexion after posterior-stabilized total knee arthroplasty. Background: The association between posterior condylar offset (PCO) and maximal knee flexion remains controversial. The measured PCO in the previous studies is usually determined on the plain radiographs without taking into account the cartilage thickness. Hypothesis: Full-thickness cartilage-based PCO is a valid criterion to compare preoperative and postoperative offset, and has a significant influence on postoperative knee flexion after total knee arthroplasty. Materials And Methods: Ninety-five patients (107 knees) who underwent posterior-stabilized total knee arthroplasty were enrolled in a prospective study.
Intra-operative measurement of cartilage thickness of the posterior femoral condyle was documented in all patients. Preoperative and postoperative radiographic PCO were each measured on the true lateral view of the knee. True PCO was adjusted by adding the cartilage thickness of the posterior femoral condyle. The relationship between the change in knee flexion and the PCO difference was assessed using Pearson correlation analysis. Results: The postoperative radiographic PCO difference (2.01 ± 2.05 mm) was significantly greater than the true PCO difference (0.09 ± 2.12 mm). The mean postoperative change in maximum knee flexion angle was 5.4° ± 9.9°. No significant correlation was found between the PCO difference and the change in knee flexion, regardless of radiographic (P=0.232) or cartilage-based measurements (P=0.693). Conclusions: Full-thickness cartilage-based PCO is an optimal criterion to estimate the change in PCO before and after total knee arthroplasty. However, in this study, neither cartilage-based nor radiographic PCO appeared to have a significant influence on postoperative knee flexion after posterior-stabilized total knee arthroplasty. Level Of Evidence: Level 4 Cohort Study. abstract_id: PUBMED:30741664 Biomechanical analysis of a changed posterior condylar offset under deep knee bend loading in cruciate-retaining total knee arthroplasty. Background: The conservation of the joint anatomy is an important factor in total knee arthroplasty (TKA). The restoration of the femoral posterior condylar offset (PCO) is well known to influence the clinical outcome after TKA. Objective: The purpose of this study was to determine the mechanism of PCO in finite element models with conservation of subject anatomy and different PCO alterations of ±1, ±2, and ±3 mm in the posterior direction using posterior cruciate ligament-retaining TKA. Methods: Using a computational simulation, we investigated the influence of the changes in PCO on the contact stress in the polyethylene (PE) insert and patellar button, on the forces on the collateral and posterior cruciate ligaments, and on the quadriceps muscle and patellar tendon forces. The computational simulation loading condition was deep knee bend. Results: The contact stresses on the PE insert increased, whereas those on the patellar button decreased, as the posterior condylar offset translated in the posterior direction. The forces exerted on the posterior cruciate ligament and collateral ligaments increased as PCO translated in the posterior direction. The translation of PCO in the anterior direction, at an equivalent flexion angle, required a greater quadriceps muscle force. Conclusions: Translations of the PCO in the posterior and anterior directions resulted in negative effects on the PE insert and ligaments, and on the quadriceps muscle force, respectively. Our findings suggest that orthopaedic surgeons should be careful with regard to the intraoperative conservation of PCO, because an excessive change in PCO may lead to quadriceps weakness and an increase in posterior cruciate ligament tension. abstract_id: PUBMED:29330345 A computational simulation study to determine the biomechanical influence of posterior condylar offset and tibial slope in cruciate retaining total knee arthroplasty. Objectives: Posterior condylar offset (PCO) and posterior tibial slope (PTS) are critical factors in total knee arthroplasty (TKA). A computational simulation was performed to evaluate the biomechanical effect of PCO and PTS on cruciate retaining TKA.
Methods: We generated a subject-specific computational model followed by the development of ± 1 mm, ± 2 mm and ± 3 mm PCO models in the posterior direction, and -3°, 0°, 3° and 6° PTS models with each of the PCO models. Using a validated finite element (FE) model, we investigated the influence of the changes in PCO and PTS on the contact stress in the patellar button and the forces on the posterior cruciate ligament (PCL), patellar tendon and quadriceps muscles under deep knee-bend loading conditions. Results: Contact stress on the patellar button increased and decreased as PCO translated in the anterior and posterior directions, respectively. In addition, contact stress on the patellar button decreased as PTS increased. These trends were consistent in the FE models with altered PCO. Higher quadriceps muscle and patellar tendon forces were required as PCO translated in the anterior direction at an equivalent flexion angle. However, as PTS increased, quadriceps muscle and patellar tendon forces were reduced in each PCO condition. The forces exerted on the PCL increased as PCO translated in the posterior direction and decreased as PTS increased. Conclusion: The change in PCO alternately provided positive and negative biomechanical effects, but it led to a reduction in the negative biomechanical effect as PTS increased. Cite this article: K-T. Kang, Y-G. Koh, J. Son, O-R. Kwon, J-S. Lee, S. K. Kwon. A computational simulation study to determine the biomechanical influence of posterior condylar offset and tibial slope in cruciate retaining total knee arthroplasty. Bone Joint Res 2018;7:69-78. DOI: 10.1302/2046-3758.71.BJR-2017-0143.R1. abstract_id: PUBMED:35861866 No difference between mobile and fixed bearing in primary total knee arthroplasty: a meta-analysis. Purpose: Both mobile (MB) and fixed (FB) bearing implants are routinely used for total knee arthroplasty (TKA). This meta-analysis compared MB versus FB for TKA in terms of implant positioning, joint function, patient reported outcome measures (PROMs), and complications. It was hypothesised that MB performs better than FB implants in primary TKA. Methods: This meta-analysis was conducted according to the 2020 PRISMA statement. In February 2022, the following databases were accessed: Pubmed, Web of Science, Google Scholar, Embase. All the randomized clinical trials (RCTs) comparing mobile versus fixed bearing for primary TKA were considered. Results: Data from 74 RCTs (11,116 procedures) were retrieved. The mean follow-up was 58.8 (7.5 to 315.6) months. The MB group demonstrated greater range of motion (ROM) (P = 0.02), Knee Society Score (KSS) (P < 0.0001), and rate of deep infections (P = 0.02). No difference was found in implant positioning: tibial slope, delta angle, alpha femoral component angle, gamma femoral component angle, beta tibial component angle, tibiofemoral alignment angle, posterior condylar offset, radiolucent lines. No difference was found in duration of the surgical procedure. No difference was found in the following PROMs: Oxford Knee Score (OKS), Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), visual analogue scale (VAS), function and pain subscales of the KSS score. No difference was found in the rate of anterior knee pain, revision, aseptic loosening, fractures, and deep vein thrombosis. Conclusion: There is no evidence to support that MB implants promote greater outcomes compared to FB implants in primary TKA. Level Of Evidence: Level I.
abstract_id: PUBMED:34717014 Reliability of the posterior condylar offset. The posterior condylar offset (PCO) has been proposed as a determinant of postoperative range of motion after total knee arthroplasty, although there is no consensus. This study aimed to demonstrate the error introduced by forcing the femoral rotation to overlap both condyles for the "true" lateral X-ray projection for the PCO measurement. We hypothesize that the angular discrepancy between the posterior femoral cortical reference plane and the posterior condylar axis plane due to rotation invalidates the acquisition of reliable measurements on X-rays. We measured the PCO in 50 "true" lateral X-rays and compared it with the medial and lateral condyle PCOs assessed on a computed tomography-scan-based three-dimensional (3D) model of each knee. PCO based on the 3D imaging differed significantly between the medial (25.8 ± 3.67 mm) and lateral (16.59 ± 2.92 mm) condyle. Three-dimensional PCO values differed significantly from those determined in the radiographic studies. Also, the mean values of the medial and lateral condyle PCO measurements differed significantly (p < 0.001) from all PCO measurements on radiographs. We identified a difference between the posterior cortical plane and the posterior condylar axis projections on the axial plane, with a mean value of 11.23° ± 3.64°. Our data show that an interplane discrepancy angle between the posterior femoral diaphyseal cortical plane and the posterior condylar axis plane (due to the femur's necessary rotation to overlap both condyles) may invalidate the 2D X-ray PCO assessment as a reliable measurement. abstract_id: PUBMED:36809510 Balancing the flexion gap first in total knee arthroplasty leads to better preservation of posterior condylar offset resulting in better knee flexion. Purpose: The purpose of this study is to determine whether the flexion first balancing technique, developed in an attempt to solve the dissatisfaction due to instability in total knee arthroplasties, leads to better restoration of joint line height and medial posterior condylar offset. This might result in better knee flexion, compared to the classic extension first gap balancing technique. The secondary objective is to show non-inferiority of the flexion first balancing technique in terms of clinical outcomes as measured by Patient Reported Outcome Measurements. Methods: A cohort of 40 patients (46 knee replacements) operated on using the flexion first balancing technique was retrospectively analyzed and compared with a cohort of 51 patients (52 knee replacements) operated on using the classic gap balancing technique. Radiographic analysis of the coronal alignment, joint line height and posterior condylar offset was performed. Clinical and functional outcome data were analyzed pre- and postoperatively and compared between both groups. The two sample t test, Mann-Whitney U test, Chi-square test and a linear mixed model were used for performing statistical analyses, after normality analyses were executed. Results: Radiologic evaluation showed a decrease in posterior condylar offset using the classic gap balancing technique (p = 0.040) versus no change using the flexion first balancing technique (p = n.s.). No statistically significant differences were noted for joint line height and coronal alignment.
Using the flexion first balancing technique leads to a greater postoperative range of motion with deeper flexion (p = 0.002) and a better Knee injury and Osteoarthritis Outcome Score (KOOS) (p = 0.025). Conclusion: The Flexion First Balancing technique is a valid and safe technique for TKA, resulting in better preservation of PCO with consequently greater postoperative flexion and better KOOS scores. Level Of Evidence: III. abstract_id: PUBMED:38304244 Does Selective Posterior Tibial Slope Technique in Cruciate-Retaining Total Knee Arthroplasty Result in the Elimination of Posterior Cruciate Ligament Management? In cruciate-retaining total knee arthroplasty (CR-TKA), intraoperative posterior cruciate ligament (PCL) management is necessary because retention of optimum PCL tension with high reproducibility is difficult. If PCL management is not performed appropriately, problems such as postoperative pain, poor range of motion, and a feeling of instability may occur. The posterior tibial slope (PTS) has a major influence on the tension of the PCL in CR-TKA. Changes in femoral posterior condylar offset also influence PCL tension in CR-TKA. We designed a surgical procedure in which the PTS is adjusted in association with the posterior condylar offset during surgery. The postoperative clinical results of the 159 primary total knee arthroplasties performed with this procedure were favorable. In addition, none of the knees required management of the PCL. In our procedure, PCL management, which is the main problem in CR-TKA, is not necessary, and this may be the main advantage of the new procedure. Answer: The influence of tibiofemoral contact point (CP) and posterior condylar offset (PCO) on the outcome and range of motion (ROM) in mobile-bearing total knee arthroplasty (TKA) has been investigated in several studies, with varying results. One study found that changes in PCO and CP did not significantly influence ROM or clinical outcomes as measured by the Knee Society Score (KSS). Specifically, patients with a decrease in PCO of 3 mm or more had the best postoperative ROM, but there was no statistical difference in ROM between patients with a stable PCO and those with an increased PCO. Additionally, there was no correlation between the difference in PCO and the difference in ROM, and no difference in postoperative ROM or KSS between different CP positions (PUBMED:23677140). Another study using computational simulation suggested that alterations in PCO and posterior tibial slope (PTS) affect various biomechanical aspects of the knee joint in mobile-bearing TKA. The study found that as PTS increased, the maximum patellofemoral (PF) contact stress, collateral ligament force, and quadriceps force decreased. Moreover, as PCO translated in the posterior direction, the maximum PF contact stress and quadriceps force decreased, while the collateral ligament force increased. The study concluded that orthopaedic surgeons should aim to conserve the patient's own anatomical PCO and PTS during surgery (PUBMED:29980426).
Overall, while some studies suggest that PCO and CP may have an impact on the biomechanics of the knee joint and potentially on the clinical outcomes of mobile-bearing TKA, the evidence is not entirely consistent. Some research indicates that maintaining the patient's anatomical PCO and PTS could be beneficial, while other studies find no significant correlation between these parameters and postoperative outcomes. Therefore, the influence of tibiofemoral contact point and posterior condylar offset on outcome and range of motion in mobile-bearing total knee arthroplasty remains a topic with mixed findings.
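Several of the studies above test for an association between the change in PCO and the change in ROM with Pearson correlation (e.g., "Pearson's R = -0.056" in PUBMED:23677140). As an editorial aside, a minimal sketch of that analysis in Python, using scipy and purely hypothetical paired measurements (the variable names and values below are illustrative, not data from the cited studies):

```python
from scipy import stats

# Hypothetical per-knee measurements (illustrative only):
# change in posterior condylar offset (mm) and change in ROM (degrees)
delta_pco = [-3.1, -1.2, 0.0, 0.4, 1.5, 2.2, -0.8, 3.0]
delta_rom = [12.0, 6.5, 4.0, 5.0, 3.5, 2.0, 7.0, 1.0]

# Pearson's r and two-sided p-value, the statistic reported in the abstracts
r, p = stats.pearsonr(delta_pco, delta_rom)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```

An r near zero with a large p-value, as reported in PUBMED:23677140 and PUBMED:27108259, is what "no correlation between the difference in PCO and the difference in ROM" corresponds to numerically.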
Instruction: Are third-trimester adipokines associated with higher metabolic risk among women with gestational diabetes? Abstracts: abstract_id: PUBMED:25890778 Are third-trimester adipokines associated with higher metabolic risk among women with gestational diabetes? Aim: This study aimed to determine whether third-trimester adipokines during gestational diabetes (GDM) are associated with higher metabolic risk. Methods: A total of 221 women with GDM (according to IADPSG criteria) were enrolled between 2011/11 and 2013/6 into a prospective observational study (IMAGE), and categorized as having elevated fasting blood glucose (FBG) or impaired fasting glucose (IFG, n = 36) if levels were ≥ 92 mg/dL during a 75-g oral glucose tolerance test (OGTT), impaired glucose tolerance (IGT, n = 116) if FBG was < 92 mg/dL but with elevated 1-h or 2-h OGTT values, or impaired fasting and stimulated blood glucose (IFSG, n = 69) if both FBG was ≥ 92 mg/dL and 1-h or 2-h OGTT values were elevated. Results: Pre-gestational body mass index (BMI) was higher in women with IFG or IFSG compared with IGT (P < 0.001), as were leptin levels in women with IFG vs IGT [34.7 (10.5-119.7) vs 26.6 (3.56-79.4) ng/L; P = 0.008]. HOMA2-IR scores were higher in women with IFG or IFSG vs IGT (1.87 ± 1.2 or 1.72 ± 0.9 vs 1.18 ± 0.8, respectively; P < 0.001). Also, those with IFSG vs those with IGT had significantly lower HOMA2-B scores (111.4 ± 41.3 vs 127.1 ± 61.6, respectively; P < 0.05) and adiponectin levels [5.00 (1.11-11.3) vs 6.19 (2.11-17.7) μg/mL; P < 0.001], and higher levels of IL-6 [1.14 (0.33-20.0) vs 0.90 (0.31-19.0); P = 0.012] and TNF-α [0.99 (0.50-10.5) vs 0.84 (0.45-11.5) pg/mL; P = 0.003]. After adjusting for age, parity, and pre-gestational and gestational BMI, the difference in adiponectin levels remained significant. Conclusion: Diagnosing GDM by IADPSG criteria results in a wide range of heterogeneity. Our study has indicated that adipokine levels in addition to FBG may help to select women at high metabolic risk for appropriate monitoring and post-delivery interventions (ClinicalTrials.gov number NCP02133729). abstract_id: PUBMED:25749468 Adipokine levels during the first or early second trimester of pregnancy and subsequent risk of gestational diabetes mellitus: A systematic review. Objective: We aimed to systematically review the available literature linking adipokines to gestational diabetes mellitus (GDM) for a comprehensive understanding of the roles of adipokines in the development of GDM. Methods: We searched PubMed/MEDLINE and EMBASE databases for published studies on adipokines and GDM through October 21, 2014. We included articles if they had a prospective study design (i.e., blood samples for adipokine measurement were collected before GDM diagnosis). Random-effects models were used to pool the weighted mean differences comparing levels of adipokines between GDM cases and non-GDM controls. Results: Of 1523 potentially relevant articles, we included 25 prospective studies relating adipokines to incident GDM. Our meta-analysis of nine prospective studies on adiponectin and eight prospective studies on leptin indicated that adiponectin levels in the first or early second trimester of pregnancy were 2.25 μg/ml lower (95% CI: 1.75-2.75), whereas leptin levels were 7.25 ng/ml higher (95% CI 3.27-11.22), among women who later developed GDM than among women who did not.
Prospective data were sparse and findings were inconsistent for visfatin, retinol binding protein (RBP-4), resistin, tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), and vaspin. We did not identify prospective studies for several novel adipokines, including chemerin, apelin, omentin, or adipocyte fatty acid-binding protein. Moreover, no published prospective studies with longitudinal assessment of adipokines and incident GDM were identified. Conclusion: Adiponectin levels in the first or second trimester of pregnancy are lower among pregnant women who later develop GDM than non-GDM women, whereas leptin levels are higher. Well-designed prospective studies with longitudinal assessment of adipokines during pregnancy are needed to understand the trajectories and dynamic associations of adipokines with GDM risk. abstract_id: PUBMED:34836226 Longitudinal Association of Maternal Pre-Pregnancy BMI and Third-Trimester Glycemia with Early Life Growth of Offspring: A Prospective Study among GDM-Negative Pregnant Women. Intrauterine modifiable maternal metabolic factors are essential to the early growth of offspring. The study sought to evaluate the associations of pre-pregnancy BMI and third-trimester fasting plasma glucose (FPG) with offspring growth outcomes within 24 months among GDM-negative pregnant women. Four hundred eighty-three mother-offspring dyads were included from the Shanghai Maternal-Child Pairs Cohort. The pregnant women were categorized into four mutually exclusive groups according to pre-pregnancy BMI as normal or overweight/obesity and third-trimester FPG as controlled or not controlled. Offspring growth in early life was indicated by the BAZ (BMI Z-score), catch-up growth, and overweight/obesity. Among those with controlled third-trimester FPG, pre-pregnancy overweight/obesity significantly increased offspring birth weight, BAZ, and risks of overweight/obesity (RR 1.83, 95% CI 1.23 to 2.73) within 24 months. Those who had uncontrolled third-trimester FPG had a 47% reduced risk of offspring overweight/obesity within 24 months. The combination of pre-pregnancy overweight/obesity and maternal uncontrolled third-trimester FPG increased the risk of offspring catch-up growth within 24 months 5.24-fold (p < 0.05). Maternal pre-pregnancy overweight/obesity and uncontrolled third-trimester glycemia among GDM-negative women both have adverse effects on offspring growth within 24 months. With the combination of increasing pre-pregnancy BMI and maternal third-trimester FPG, the possibility of offspring catch-up growth increases. abstract_id: PUBMED:34563584 Increased risk for microvascular complications among women with gestational diabetes in the third trimester. Aims: The risk of microvascular disease has been thought to commence with the onset of overt diabetes. Women with gestational diabetes have only had a short-term exposure to frank hyperglycemia, but, due to underlying β-cell dysfunction, they may also have had long-term exposure to mild degrees of hyperglycemia. The aim of the study was to determine whether women with gestational diabetes are at increased risk for microalbuminuria and retinopathy compared to women with normal glucose tolerance in pregnancy. Methods: We recruited women aged ≥ 25 years with singleton pregnancies at 32 to 40 weeks' gestational age, with and without gestational diabetes. Women with hypertension, preeclampsia, or pre-gestational diabetes were excluded. Results: Of 372 women included in the study, 195 had gestational diabetes.
The prevalence of microalbuminuria was 15% among those with gestational diabetes versus 6% in those with normal glucose tolerance (adjusted odds ratio 2.4, 95% confidence interval 1.1 to 5.2, p = 0.006). Diastolic blood pressure and HbA1c were associated with microalbuminuria. The prevalence of retinopathy did not differ between groups (10% versus 11%). Conclusions: Women with gestational diabetes have an increased risk of microalbuminuria in the third trimester, despite having been exposed to only a brief period of overt hyperglycemia. abstract_id: PUBMED:20556334 Gestational diabetes, comparison of women diagnosed in second and third trimester of pregnancy with non GDM women: Analysis of a cohort study. Unlabelled: Pregnant women are normally screened for gestational diabetes (GDM) at week 24 of pregnancy. However, some women develop the disease later in their pregnancies. No study has analyzed women developing GDM later in pregnancy. Objective: To analyze data from a cohort study and compare women diagnosed with GDM in the second and third trimester of pregnancy with women without GDM. Results: GDM women diagnosed during their first two trimesters of pregnancy were older (p = 0.0008) and had a higher body mass index (BMI) (p = 0.0007) than non GDM women. However, the only risk factor in women diagnosed in their third trimester of pregnancy was having first degree relatives with type 2 DM, and this was independent of age and BMI (OR of 2.7, 95% CI 1.2 - 6.0). Conclusions: Women who develop GDM in their second trimester of pregnancy have known risk factors for diabetes mellitus such as age and higher BMI; however, the only recognised risk factor distinguishing non GDM women from women developing GDM late in pregnancy is family history of type 2 DM. Two populations of GDM may exist, and future studies should focus on analysing short and long term complications in these women to support the need to diagnose and treat them all. abstract_id: PUBMED:33939909 Third trimester HbA1c and the association with large-for-gestational-age neonates in women with gestational diabetes. Objective: To evaluate the association between HbA1c levels measured in the third trimester and the risk for large for gestational age (LGA) in neonates of mothers affected by gestational diabetes mellitus (GDM). Secondarily, we aimed to identify an ideal cut-off for increased risk of LGA amongst pregnant women with GDM. Methods: Observational retrospective review of singleton pregnant women with GDM evaluated in a diabetes and pregnancy clinic of a tertiary and academic hospital. From January/2011 to December/2017, 1,085 pregnant women underwent evaluation due to GDM, of which 665 had an HbA1c test in the third trimester. A logistic regression model was performed to evaluate predictors of LGA. A receiver-operating-characteristic (ROC) curve was used to evaluate the predictive ability of third trimester HbA1c for LGA identification. Results: A total of 1,085 singleton pregnant women were evaluated during the study period, with a mean age of 32.9 ± 5.3 years. In the multivariate analysis, OGTT at 0 minutes (OR: 1.040; CI 95% 1.006-1.076, p = 0.022) and third trimester HbA1c (OR: 4.680; CI 95% 1.210-18.107, p = 0.025) were associated with LGA newborns. Using a ROC curve to evaluate the predictive ability of third trimester HbA1c for LGA identification, the optimal HbA1c cut-off point was 5.4%, where the sensitivity was 77.4% and the specificity was 71.7% (AUC 0.782; p < 0.001).
Conclusion: Few studies in the Mediterranean population have evaluated the role of HbA1c in predicting neonatal complications in women with GDM. A third trimester HbA1c > 5.4% was found to have good sensitivity and specificity for identifying the risk of LGA. abstract_id: PUBMED:30600493 Gestational diabetes mellitus and quality of life during the third trimester of pregnancy. Purpose: The primary aim of this study was to investigate the effect of gestational diabetes mellitus (GDM) on the quality of life (QoL) of pregnant women during the third trimester of pregnancy. The secondary aim was to compare the QoL of pregnant women with GDM according to their therapeutic approach. This is the first study of this kind conducted in Greece. Methods: A case-control study was conducted with 62 pregnant women (31 with GDM and 31 with uncomplicated pregnancy) during the third trimester of pregnancy. QoL and Health Related QoL were studied with the use of three questionnaires (EQ-5D-5L, WHOQOL-BREF and ADDQoL). Results: A decrease in QoL was found in pregnant women with GDM compared with pregnant women with uncomplicated pregnancy (p < 0.05) regarding both social life and health scales. On the contrary, there was no difference in the QoL between pregnant women with GDM who followed different treatment approaches (diet or insulin). Conclusions: The diagnosis of GDM is associated with a reduction in the QoL of pregnant women during the third trimester of pregnancy, while the type of treatment does not seem to further affect it. More studies should be conducted so that the modifiers of this association can be clarified. abstract_id: PUBMED:28488900 Isolated polyhydramnios in the third trimester: is a gestational diabetes evaluation of value? We evaluated the implications of testing for gestational diabetes mellitus (GDM) in pregnancies complicated by third trimester isolated polyhydramnios with a previous negative diabetes screening test. In this retrospective cohort study of 104 pregnant women with polyhydramnios between 2005 and 2013, all had normal first trimester fasting glucose and a normal glucose challenge test (GCT < 140 mg/dL). Late onset GDM was diagnosed in five women (4.8%) with isolated polyhydramnios; one abnormal value in the oral glucose tolerance test (OGTT) was identified in four additional women (3.8%). No significant differences were found in risk factors for GDM, mean second trimester GCT (117.5 vs. 107.2 mg/dL, p = 0.38) or fasting glucose values (82 vs. 86 mg/dL, p = 0.29) between women in the polyhydramnios group with and without a late GDM diagnosis. Moreover, no significant difference was found in relation to the mode of delivery or birth weight between the studied groups (3437 ± 611 vs. 3331 ± 515 g, p = 0.63). Diagnosis of third trimester polyhydramnios was not associated with increased risk for GDM or neonatal complications. abstract_id: PUBMED:32285718 Incidence of large for gestational age and predictive values of third-trimester ultrasound among pregnant women with false-positive glucose challenge test. This cohort study aimed to determine the association between false-positive 50-g GCT and incidence of LGA and to evaluate the predictive roles of third-trimester ultrasonographic examination. A total of 200 women with false-positive 50-g GCT and 188 women without GDM risks were enrolled. Third-trimester ultrasonographic examinations were offered. Rates of LGA during the third trimester and at birth were compared between groups.
Factors associated with LGA and diagnostic properties of third-trimester ultrasonography were evaluated. Incidence of LGA by third-trimester ultrasound and at birth was significantly higher in women with false-positive GCT (19.0% vs. 10.6%, p = .03 and 22% vs. 13.8%; p = .04). Factors associated with LGA included multiparity (adjusted OR 2.32, p = .01), excessive weight gain (adjusted OR 2.57, p = .01) and LGA by ultrasound (adjusted OR 9.79, p < .001). Third-trimester ultrasonography had 47.1% sensitivity, 92.1% specificity and LR+ and LR- of 5.96 and 0.57 in identifying LGA infants. Impact statement: What is already known on this subject? Women with abnormal GCT but normal OGTT (false positive GCT) might have some degree of glucose intolerance, so that GDM-related outcomes could develop, including LGA, macrosomia, shoulder dystocia, and caesarean delivery. Roles of ultrasonography in the prediction of LGA and macrosomia have been reported with mixed results. What do the results of this study add? The results showed that the incidence of LGA, by third-trimester ultrasound and at birth, was significantly increased in women with false-positive GCT. Multiparity, excessive weight gain and LGA by third-trimester ultrasound significantly increased the risk of LGA. Third-trimester ultrasonography had 47.1% sensitivity, 92.1% specificity and LR+ and LR- of 5.96 and 0.57 in identifying LGA infants. What are the implications of these findings for clinical practice and/or further research? More intensive behavioural and dietary interventions, together with weight gain control and monitoring, may be needed in women with false-positive GCT to minimise the risk of LGA. Third trimester ultrasonographic examination might be helpful to detect and predict LGA at birth and should be included in routine clinical practice. Further studies that are more widely generalisable are needed to elucidate the relationship between false-positive GCT and adverse pregnancy outcomes and to investigate the benefits of ultrasonographic examination in the prediction of LGA and macrosomia. abstract_id: PUBMED:24784714 First-trimester maternal serum C-reactive protein as a predictor of third-trimester impaired glucose tolerance. Objective: We evaluated whether first-trimester high-sensitivity C-reactive protein (hsCRP), a suggested marker of pregnancy-associated hyperglycemia, predicts third-trimester impaired glucose tolerance (IGT) in a secondary analysis of a prospective cohort of nondiabetic singletons enrolled at <26 weeks gestation. Study Design: We measured the association between hsCRP collected at <14 weeks among women classified as IGT (gestational diabetes screening results, 135 to <200 mg/dL) and those among normoglycemic women. Multivariable modeling estimated the association between log hsCRP and IGT, adjusted for maternal body mass index (BMI). Results: Among 300 women, 13% (39 of 300) had IGT. The hsCRP was positively associated with glucose (P = .005). Compared with normoglycemic women, women with IGT had higher log hsCRP (0.87 ± 0.66 vs 0.67 ± 0.60, P = .04), but the association was not significant in adjusted models (adjusted odds ratio 1.20, 95% confidence interval 0.65-2.21). The hsCRP did not predict third-trimester IGT in this analysis when BMI is considered. Conclusion: Early identification of women at risk of IGT remains a priority, but the contribution of maternal BMI appears greater than that of hsCRP.
Answer: Yes, third-trimester adipokines are associated with higher metabolic risk among women with gestational diabetes (GDM). A study found that pre-gestational body mass index (BMI) was higher in women with elevated fasting blood glucose (FBG) or impaired fasting glucose (IFG) compared with impaired glucose tolerance (IGT). Leptin levels were also higher in women with IFG versus IGT, and HOMA2-IR scores, which are indicative of insulin resistance, were higher in women with IFG or impaired fasting and stimulated blood glucose (IFSG) versus IGT. Additionally, women with IFSG had significantly lower HOMA2-B scores, which reflect beta-cell function, and adiponectin levels, and higher levels of the inflammatory markers IL-6 and TNF-α. After adjusting for age, parity, and pre-gestational and gestational BMI, the difference in adiponectin levels remained significant, suggesting that adipokine levels in addition to FBG may help to identify women at high metabolic risk for appropriate monitoring and post-delivery interventions (PUBMED:25890778). Moreover, a systematic review indicated that adiponectin levels in the first or early second trimester of pregnancy were lower, whereas leptin levels were higher, among women who later developed GDM than those who did not (PUBMED:25749468). This supports the notion that adipokine levels are associated with the development of GDM and could potentially be used as early indicators of metabolic risk.
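The IMAGE study cited above stratifies women with GDM by fasting and post-load OGTT values. As an editorial aside, a minimal sketch of that grouping logic in Python: the 92 mg/dL fasting cutoff is stated in the abstract, while the 1-h and 2-h cutoffs (180 and 153 mg/dL) are assumed here from the usual IADPSG convention rather than taken from the source:

```python
def classify_gdm_subgroup(fbg: float, g1h: float, g2h: float) -> str:
    """Group a 75-g OGTT result as IFG, IGT, or IFSG (values in mg/dL).

    The 92 mg/dL fasting threshold comes from the abstract; the
    1-h/2-h thresholds are assumed IADPSG-style values, not stated there.
    """
    fasting_high = fbg >= 92
    stimulated_high = g1h >= 180 or g2h >= 153
    if fasting_high and stimulated_high:
        return "IFSG"  # impaired fasting and stimulated blood glucose
    if fasting_high:
        return "IFG"   # elevated fasting glucose only
    if stimulated_high:
        return "IGT"   # impaired glucose tolerance only
    return "normal"

print(classify_gdm_subgroup(95, 170, 140))  # IFG
print(classify_gdm_subgroup(88, 185, 150))  # IGT
print(classify_gdm_subgroup(96, 190, 160))  # IFSG
```

In the study's data, it was the IFG and IFSG groups defined this way that carried the higher BMI, higher HOMA2-IR, and more adverse adipokine profiles.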
Instruction: Can sinogram-affirmed iterative reconstruction improve the detection of small hypervascular liver nodules with dual-energy CT? Abstracts: abstract_id: PUBMED:24834888 Can sinogram-affirmed iterative reconstruction improve the detection of small hypervascular liver nodules with dual-energy CT? Objective: To optimize a dual-energy computed tomographic protocol with sinogram-affirmed iterative reconstruction algorithms for improving small-nodule detection. Methods: The raw data of a dual-energy computed tomographic arterial acquisition of a cirrhotic patient were reconstructed with a standard filtered back projection (B20f) kernel and 3 iterative (I26, I30, I31) kernels with different strengths (S3-S5). The 80-kilovolt (peak) (kVp) and the linear blended (DE_0.5) images (80-140 kVp) were analyzed. For each series, 8 subcentimeter low-contrast lesions were simulated within the liver. Four radiologists performed a detectability test and rated the image quality (5-point scales) in all images. Results: The sensitivity increased from 31% (B20f) to 87.5% with sinogram-affirmed iterative reconstruction S5 kernels, without a difference between the 80-kVp and DE_0.5 series (W test, P = 0.062). The highest image quality rating was 3.8 (B20 DE_0.5), without difference from DE_0.5 I30-S5 and I26-S3. Conclusions: Iterative reconstructions increase the sensitivity for detecting abdominal lesions, even in the 80-kVp series. The kernel I30-S5 was considered the best. abstract_id: PUBMED:25998980 A case of multiple hypervascular hyperplastic liver nodules in a patient with no history of alcohol abuse or chronic liver diseases. Up-to-date imaging modalities such as three-dimensional dynamic contrast-enhanced CT (3D CT) and MRI may contribute to the detection of hypervascular nodules in the liver. Nevertheless, distinguishing a malignancy such as hepatocellular carcinoma from benign hypervascular hyperplastic nodules (HHN) based on the radiological findings is sometimes difficult. Multiple incidental liver masses were detected via abdominal ultrasonography (US) in a 65-year-old male patient. He had no history of alcohol intake and no remarkable past medical history or relevant family history, and his physical examination results and laboratory findings were normal. 3D CT and MRI showed numerous enhanced nodules with hypervascularity during the arterial phase. After US-guided liver biopsy, the pathological diagnosis was HHN. To date, several cases of HHN have been reported in patients with chronic alcoholic liver disease or cirrhosis. Herein, we report on a case of HHN in a patient with no history of alcoholic liver disease or cirrhosis. abstract_id: PUBMED:15547193 CT of benign hypervascular liver nodules in autoimmune hepatitis. Objective: The purpose of this report is to describe the frequency and histopathologic basis of benign hypervascular liver nodules seen on CT in patients with autoimmune hepatitis. Conclusion: Benign hypervascular liver nodules may be seen on CT in patients with cirrhosis due to autoimmune hepatitis and may represent large regenerative nodules. This phenomenon is important to recognize because of the potential for confusion with hepatocellular carcinoma. abstract_id: PUBMED:33194859 Masquerading Hypervascular Exophytic Liver Nodule. Patients with liver cirrhosis are at increased risk of developing hepatocellular carcinoma (HCC) and are placed on routine surveillance for HCC.
Diagnosis algorithms are in place to guide clinicians in the evaluation of liver lesions detected during surveillance. Radiological assessments are critical, with diagnostic criteria based on identification of typical hallmarks of HCCs on multiphasic computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging (MRI). We report a patient with a hypervascular exophytic lesion indeterminate for HCC on CT imaging. While the detection of an exophytic arterially-enhancing lesion in an at-risk patient on CT imaging may prompt clinicians to treat the lesion as HCC without further evaluation, the patient underwent contrast-enhanced MRI, with the lesion being eventually diagnosed as an exophytic haemangioma. Thus, no further action was necessary and the patient was continued on routine HCC surveillance. Learning Points: Radiological surveillance for hepatocellular carcinoma (HCC) is routine in patients at risk of HCC. Diagnosis algorithms that are in place for indeterminate lesions detected during HCC surveillance should be adhered to in order to achieve an accurate diagnosis. Sequential imaging with contrast-enhanced (gadoxetate) MRI should be used to obviate the need for an invasive biopsy when an exophytic lesion indeterminate for HCC is identified during CT imaging in a patient with liver cirrhosis, especially when a hepatic haemangioma remains a differential diagnosis. abstract_id: PUBMED:22212941 Ultrasonography, computed tomography and magnetic resonance imaging of hepatocellular carcinoma: toward improved treatment decisions. Detection, characterization, staging, and treatment monitoring are the major roles of imaging diagnosis in liver cancers. Contrast-enhanced ultrasonography (CEUS) using microbubble contrast agents has expanded the role of US in the detection and diagnosis of liver nodules in patients at high risk of hepatocellular carcinoma (HCC). CEUS provides an accurate differentiation between benign and malignant liver nodules, which is critical for adequate management of HCC, and is also useful for guidance of percutaneous local therapy of HCC and postprocedure monitoring of the therapeutic response. The technology of multidetector-row computed tomography (MDCT) has increased the spatial and temporal resolutions of computed tomography (CT). It has made possible a more precise evaluation of the hemodynamics of liver tumors, and the diagnostic accuracy of dynamic MDCT has improved. Perfusion CT can measure tissue perfusion parameters quantitatively and can assess segmental hepatic function. Dynamic MDCT with high spatial and temporal resolution enables us to reconstruct 3- and 4-dimensional imaging, which is very useful for pretreatment evaluation. Dual-energy CT makes possible the differentiation of materials and tissues in images obtained based on the differences in iodine and water densities. Monochromatic images, which can be reconstructed from dual-energy CT data, provide some improvement in contrast and show a higher contrast-to-noise ratio for hypervascular HCCs. Dynamic magnetic resonance imaging with a fast imaging sequence of 3-dimensional Fourier transformation T1-weighted gradient echo and a nonspecific contrast medium can show high detection sensitivity for hypervascular HCC. However, the hepatic tissue-specific contrast medium, gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid, has become an essential contrast medium for liver imaging because of its higher diagnostic ability. It may replace CT during hepatic arteriography and during arterioportography.
abstract_id: PUBMED:17882031 Imaging of benign hypervascular hepatocellular nodules in alcoholic liver cirrhosis: differentiation from hypervascular hepatocellular carcinoma. Objectives: To retrospectively describe imaging analyses of benign hypervascular hyperplastic liver nodules (HHN) that resulted from alcoholic liver cirrhosis and to examine the possibility of imaging differentiation between these nodules and hypervascular hepatocellular carcinoma (HCC). Methods: Ten histopathologically confirmed HHNs arising in alcoholic liver cirrhosis and 9 HCCs were examined. Magnetic resonance imaging (MRI) (10 HHN and 9 HCC), superparamagnetic iron oxide-enhanced T2-weighted MRI (6 HHN and 4 HCC), and dual-phase computed tomography hepatic arteriography (5 HHN and 6 HCC) were performed, respectively. Results: On T1-weighted magnetic resonance images, 7 HHNs showed hyperintensity and 3 showed iso- to hypointensity, while all HCCs showed hypointensity compared with the surrounding liver. On T2-weighted magnetic resonance images, 2 HHNs showed hyperintensity and 8 showed iso- to hypointensity. In contrast, 1 HCC showed hypointensity and 8 showed hyperintensity. On superparamagnetic iron oxide-enhanced T2 MRI, all HHNs showed iso- to hypointensity, and all HCCs showed hyperintensity. All HHNs and HCCs subjected to dual-phase computed tomography hepatic arteriography showed enhancement on early-phase images and corona-like enhancement on late-phase images. Conclusions: Imaging findings of highly well-differentiated HCCs possibly overlap with HHN. Thus, for a correct diagnosis of HHN, we should first suspect HHN based on clinical and MRI findings, and then perform core needle biopsy to verify the radiological diagnosis. abstract_id: PUBMED:15318108 Hypervascular liver nodules in heavy drinkers of alcohol. Background: Three cases of hypervascular nodules in the liver, without hepatitis B or C virus infection and with a history of alcohol abuse (120 ml/day for 15 to 30 years), are presented. Results: Ultrasound examination revealed hypoechoic nodules in segment 6 (2 cm in diameter, case 1), in the right and left lobes (1-2 cm multiple type, case 2), and in segment 4 (4 cm, case 3). Hepatic angiography and computed tomography during arteriography revealed hypervascular nodules in the three cases. First, hepatocellular carcinoma, focal nodular hyperplasia, hemangioma, hemangioendothelioma, inflammatory pseudotumor, and pseudolymphoma were diagnostically differentiated. Histologically, there was no evidence of hepatocellular carcinoma or of any of the pathologies considered in the differential diagnosis by imaging studies. In case 1, the lesion was composed of an irregular, thin, trabecular-patterned hepatic acinus with slighter hypercellularity than in the nonnodular area. In cases 2 and 3, the lesions were composed mainly of fibrosis without hyperplasia, showing stellate scar-like fibrosis septa dividing the nodule. Marked pericellular fibrosis, neutrophilic infiltration, and Mallory bodies in the cytoplasm were also observed. In cases 1 and 2, small unpaired arteries explaining the hypervascularity of the nodules were observed. Conclusion: These hypervascular nodules were classified as regenerative, not neoplastic, nodules according to the classification of the International Working Party. abstract_id: PUBMED:24159569 Recent Advances in CT and MR Imaging for Evaluation of Hepatocellular Carcinoma. Hepatocellular carcinoma (HCC) is one of the most common malignancies worldwide.
Accurate diagnosis and assessment of disease extent are crucial for proper management of patients with HCC. Imaging plays a crucial role in early detection, accurate staging, and the planning of management strategies. A variety of imaging modalities are currently used in evaluating patients with suspected HCC; these include ultrasound, computed tomography (CT), magnetic resonance imaging (MRI), nuclear medicine, and angiography. Among these modalities, dynamic MRI and CT are regarded as the best imaging techniques available for the noninvasive diagnosis of HCC. Recent improvements in CT and MRI technology have made noninvasive and reliable diagnostic assessment of hepatocellular nodules possible in the cirrhotic liver, and biopsy is frequently not required prior to treatment. Until now, the major challenge for radiologists in imaging cirrhosis has been the characterization of cirrhotic nodules smaller than 2 cm in diameter. Further technological advancement will undoubtedly have a major impact on liver tumor imaging. The increased speed of data acquisition in CT and MRI has allowed improvements in both spatial and temporal resolution, which have made possible a more precise evaluation of the hemodynamics of liver nodules. Furthermore, the development of new, tissue-specific contrast agents such as gadoxetic acid has improved HCC detection on MRI. In this review, we discuss the role of CT and MRI in the diagnosis and staging of HCC, recent technological advances, and the strengths and limitations of these imaging modalities. abstract_id: PUBMED:15854001 Multiple hypervascular liver nodules in a heavy drinker of alcohol. A case of hypervascular nodules in the liver, without hepatitis B or C virus infection, in a 38-year-old woman with a history of alcohol abuse is presented. An ultrasound disclosed 1-2-cm hypoechoic tumors in the right and left lobes. Magnetic resonance imaging showed high-intensity tumors on both the T1-weighted and T2-weighted sequences. Incremental dynamic computed tomography and hepatic angiography revealed hypervascular tumors. Ultrasound-guided needle biopsy revealed no evidence of hepatocellular carcinoma, metastatic liver cancer, hemangioendothelioma, inflammatory pseudotumors or pseudolymphoma, but demonstrated stellate-scar fibrosis septa, which contained small unpaired arteries without hyperplasia dividing the nodule. Moreover, marked pericellular fibrosis, neutrophilic infiltration and Mallory bodies were observed in the cytoplasm. There was no evidence of bile duct proliferation. From these findings, the diagnosis of alcohol-induced fibrosis, distinctly different from focal nodular hyperplasia, was tenable. Further studies may provide insights into the pathogenesis of nodule formation and hypervascularity in heavy drinkers of alcohol. abstract_id: PUBMED:33193879 Clinical and contrast-enhanced image features in the prediction model for the detection of small hepatocellular carcinomas. Purpose: To identify novel radiological features and clinical characteristics to improve diagnostic criteria for early detection of small hepatocellular carcinoma (HCC). Patients and Methods: We retrospectively recruited asymptomatic patients with no history of HCC but a high risk of HCC in whom a new, solitary, well-defined, solid nodule between 10 and 20 mm was detected through a screening ultrasound.
We retrospectively collected all clinical data, and patients were examined using dynamic contrast-enhanced computed tomography or magnetic resonance imaging; subsequently, fine-needle biopsy was performed. A multivariate analysis of the predictors of small HCCs was performed by fitting a multiple logistic regression model with the stepwise variable selection method. Results: In total, 392 and 347 patients with a small liver nodule received a final pathologic confirmation of HCC and non-HCC, respectively. The estimated odds ratios and 95% confidence intervals of tumor size > 12.45 mm, age > 56.61 years, liver cirrhosis, hepatitis C virus (HCV) carrier status, ln alpha-fetoprotein (AFP) > 1.954, arterial phase enhancement, and portal or venous phase washout appearance without arterial phase enhancement were 2.0735 (1.4746-2.9155), 1.8878 (1.2949-2.7521), 1.6927 (1.1294-2.5369), 1.6186 (1.0347-2.5321), 2.0297 (1.3342-3.0876), 3.7451 (2.3845-5.8821), and 2.0327 (1.3500-3.0608), respectively. The area under the receiver operating characteristic curves for the diagnosis of small HCCs was 0.79 for arterial phase enhancement and 0.75 for portal or venous phase washout appearance without arterial phase enhancement. Conclusion: Clinical and contrast-enhanced image features are valuable in the prediction model for the detection and early diagnosis of small HCCs in patients with a high risk of HCC. In addition to negative portal or venous washout and negative arterial enhancement in images, age > 56.61 years, tumor size > 12.45 mm, HCV carrier status, and ln(AFP) > 1.954 are useful indicators for the early detection of small HCCs. Answer: Yes, sinogram-affirmed iterative reconstruction can improve the detection of small hypervascular liver nodules with dual-energy CT. A study optimized a dual-energy computed tomographic protocol with sinogram-affirmed iterative reconstruction algorithms and found that the sensitivity for detecting small hypervascular liver nodules increased from 31% with standard filtered back projection to 87.5% with the strongest sinogram-affirmed iterative reconstruction S5 kernels. This improvement was observed without a significant difference between the 80-kVp and DE_0.5 series (PUBMED:24834888).
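The phantom study answered above evaluates both an 80-kVp series and a linearly blended DE_0.5 series from the dual-energy acquisition. As an editorial aside, a minimal sketch of such a linear blend in Python with NumPy, assuming DE_0.5 simply means an equal 0.5 weighting of the low- and high-kVp images (the weighting is implied by the series name, and the array values below are illustrative):

```python
import numpy as np

def blend_dual_energy(img_80kvp: np.ndarray, img_140kvp: np.ndarray,
                      weight: float = 0.5) -> np.ndarray:
    """Linearly blend low- and high-kVp CT images.

    weight is the fraction contributed by the 80-kVp series;
    DE_0.5 corresponds to weight = 0.5.
    """
    return weight * img_80kvp + (1.0 - weight) * img_140kvp

# Illustrative 2x2 "images" in Hounsfield units
low_kvp = np.array([[60.0, 55.0], [120.0, 40.0]])
high_kvp = np.array([[40.0, 45.0], [80.0, 38.0]])
print(blend_dual_energy(low_kvp, high_kvp))
```

Raising the 80-kVp weight accentuates iodine contrast in hypervascular lesions but also amplifies noise; notably, the study found no sensitivity difference between the two series once iterative reconstruction was applied.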
Instruction: Does psychomotor agitation in major depressive episodes indicate bipolarity? Abstracts: abstract_id: PUBMED:30697051 Association between anxious distress in a major depressive episode and bipolarity. Purpose: Mixed features in a major depressive episode (MDE) predict bipolar disorder (BD). The mixed features specifier included in the Diagnostic and Statistical Manual of Mental Disorders Fifth Edition (DSM-5) could be restrictive because it excludes the symptoms common to both mania/hypomania and depression, including psychomotor agitation. On the other hand, an anxious distress (ANXD) specifier has also been introduced in the DSM-5, and psychomotor agitation has been defined as an indicator of ANXD severity. In this study, we retrospectively investigated the association between the presence of ANXD in an MDE and bipolarity. Patients And Methods: The subjects were patients admitted with an MDE to the Department of Psychiatry at Tokyo Women's Medical University Hospital from December 2014 to March 2016. Eligible patients were older than 20 years of age and met the DSM-5 criteria for major depressive disorder or BD. All data were extracted from medical records. The subjects were grouped according to whether they did or did not have ANXD. The demographics and clinical features of these groups were compared. Severity of illness was evaluated according to the Hamilton Rating Scale for Depression (HRSD) score on admission. Results: ANXD was present in 31 and absent in 33 of 64 patients with an MDE. The HRSD score was significantly higher in the group with ANXD than in the group without ANXD (P=0.0041). Mixed features (P=0.0050) and suicide attempts (P=0.0206) were significantly more common in the group with ANXD than in the group without ANXD. Conclusion: We found that the presence of ANXD in an MDE was associated with greater severity and more mixed features and suicide attempts. It is important to evaluate a patient with an MDE for ANXD so that a diagnosis of mixed depression is not missed. More studies in larger samples are needed to investigate further the association between ANXD in an MDE and bipolarity. abstract_id: PUBMED:18806921 Does psychomotor agitation in major depressive episodes indicate bipolarity? Evidence from the Zurich Study. Background: Kraepelin's partial interpretation of agitated depression as a mixed state of "manic-depressive insanity" (including the current concept of bipolar disorder) has recently been the focus of much research. This paper tested whether, how, and to what extent the psychomotor symptoms of agitation and retardation in depression are related to bipolarity and anxiety. Method: The prospective Zurich Study assessed psychiatric and somatic syndromes in a community sample of young adults (N = 591; aged 20 at first interview) by six interviews over 20 years (1979-1999). Psychomotor symptoms of agitation and retardation were assessed by professional interviewers from age 22 to 40 (five interviews) on the basis of the observed and reported behaviour within the interview section on depression. Psychiatric diagnoses were strictly operationalised and, in the case of bipolar-II disorder, were broader than proposed by DSM-IV-TR and ICD-10. As indicators of bipolarity, the association with bipolar disorder, a family history of mania/hypomania/cyclothymia, together with hypomanic and cyclothymic temperament as assessed by the General Behavior Inventory (GBI), and mood lability (an element of cyclothymic temperament) were used.
Results: Agitated and retarded depressive states were equally associated with the indicators of bipolarity and with anxiety. Longitudinally, agitation and retardation were significantly associated with each other (OR = 1.8, 95% CI = 1.0-3.2), and this combined group of major depressives showed stronger associations with bipolarity, with both hypomanic/cyclothymic and depressive temperamental traits, and with anxiety. Among agitated, non-retarded depressives, unipolar mood disorder was even twice as common as bipolar mood disorder. Conclusion: Combined agitated and retarded major depressive states are more often bipolar than unipolar, but, in general, agitated depression (with or without retardation) is not more frequently bipolar than retarded depression (with or without agitation), and pure agitated depression is even much less frequently bipolar than unipolar. The findings do not support the hypothesis that agitated depressive syndromes are mixed states. Limitations: The results are limited to a population up to the age of 40; bipolar-I disorders could not be analysed (small N). abstract_id: PUBMED:28691250 Obesity in patients with major depression is related to bipolarity and mixed features: evidence from the BRIDGE-II-Mix study. Objectives: The Bipolar Disorders: Improving Diagnosis, Guidance and Education (BRIDGE)-II-Mix study aimed to estimate the frequency of mixed states in patients with a major depressive episode (MDE) according to different definitions. The present post-hoc analysis evaluated the association between obesity and the presence of mixed features and bipolarity. Methods: A total of 2811 MDE subjects were enrolled in a multicenter cross-sectional study. In 2744 patients, the body mass index (BMI) was evaluated. Psychiatric symptoms and sociodemographic and clinical variables were collected, comparing the characteristics of MDE patients with (MDE-OB) and without (MDE-NOB) obesity. Results: Obesity (BMI ≥ 30) was registered in 493 patients (18%). In the MDE-OB group, 90 patients (20%) fulfilled the DSM-IV-TR criteria for bipolar disorder (BD), 225 patients (50%) fulfilled the bipolarity specifier criteria, 59 patients (13%) fulfilled DSM-5 criteria for MDEs with mixed features, and 226 patients (50%) fulfilled Research-Based Diagnostic Criteria for an MDE. Older age, history of (hypo)manic switches during antidepressant treatment, the occurrence of three or more MDEs, atypical depressive features, antipsychotic treatment, female gender, depressive mixed state according to DSM-5 criteria, comorbid eating disorders, and anxiety disorders were significantly associated with the MDE-OB group. Among (hypo)manic symptoms during the current MDE, psychomotor agitation, distractibility, increased energy, and risky behaviors were the variables most frequently associated with the MDE-OB group. Conclusions: In our sample, the presence of obesity in patients with an MDE seemed to be associated with higher rates of bipolar spectrum disorders. These findings suggest that obesity in patients with an MDE could be considered as a possible marker of bipolarity. abstract_id: PUBMED:37489522 Anxious distress in people with major depressive episodes: a cross-sectional analysis of clinical correlates. Objective: Most people with major depressive episodes meet the criteria for the anxious distress (AD) specifier, defined by DSM-5 as the presence of symptoms such as feelings of tension, restlessness, difficulty concentrating, and fear that something awful may happen.
This cross-sectional study was aimed at identifying clinical correlates of AD in people with unipolar or bipolar depression. Methods: Inpatients with a current major depressive episode were included. Data on socio-demographic and clinical variables were collected. The SCID-5 was used to diagnose depressive episodes and relevant specifiers. The Montgomery-Åsberg Depression Rating Scale (MADRS) and Young Mania Rating Scale (YMRS) were used to assess the severity of depressive and manic (mixed) symptoms, respectively. Multiple logistic regression analyses were carried out to identify clinical correlates of AD. Results: We included 206 people (mean age: 48.4 ± 18.6 yrs.; males: 38.8%) admitted for a major depressive episode (155 with major depressive disorder and 51 with bipolar disorder). Around two-thirds of the sample (N = 137; 66.5%) had AD. Multiple logistic regression models showed that AD was associated with mixed features, higher YMRS scores, psychotic features, and a diagnosis of major depressive disorder (p < 0.05). Conclusion: Despite some limitations, including the cross-sectional design and the inpatient setting, our study shows that AD is likely to be associated with mixed and psychotic features, as well as with unipolar depression. The identification of these clinical domains may help clinicians to better contextualize AD in the context of major depressive episodes. abstract_id: PUBMED:25248024 Psychomotor agitation in major depressive disorder is a predictive factor of mood-switching. Background: The relationship between psychomotor agitation in unipolar depression and mood-switching from depression to manic, hypomanic and mixed states has been controversial. We investigated the future risk of initial mood-switching as a function of psychomotor agitation in unipolar depression. Methods: We identified 189 participants diagnosed with major depressive disorder (MDD). We divided all patients with MDD into two categories: (1) agitated patients (n=74) and (2) non-agitated patients (n=115). These groups were prospectively followed and compared by time to mood-switching. Kaplan-Meier survival curves, the log-rank test for trend for survivor functions, and Cox proportional hazard ratio estimates for a multivariate model were used to examine the risk of mood-switching by psychomotor agitation. Results: During follow-up, mood-switching occurred in 20.3% of the agitated patients and 7.0% of the non-agitated patients. In the Kaplan-Meier survival estimates of time to mood-switching for agitated versus non-agitated patients, the cumulative probability of developing mood-switching was higher for agitated patients than for non-agitated patients (log-rank test: χ² = 7.148, df = 1, p = 0.008). Survival analysis was also performed using Cox proportional hazards regression within a multivariate model. Agitation remained significantly associated with the incidence of mood-switching (HR = 2.98, 95% CI: 1.18-7.51). Limitations: We did not make a clear distinction between antidepressant-induced mood-switching and spontaneous switching. Conclusions: The main finding demonstrated that MDD patients with agitation were nearly three times as likely to experience mood-switching, suggesting that psychomotor agitation in MDD may be an indicator of bipolarity. abstract_id: PUBMED:16318825 The relationship of major depressive disorder to bipolar disorder: continuous or discontinuous?
Recent studies have questioned current diagnostic systems that split mood disorders into the independent categories of bipolar disorders and depressive disorders. The current classification of mood disorders runs against Kraepelin's unitary view of manic-depressive insanity (illness). The main findings of recent studies supporting a continuity between bipolar disorders (mainly bipolar II disorder) and major depressive disorder are presented. The features supporting a continuity between bipolar II disorder and major depressive disorder currently are 1) depressive mixed states (mixed depression) and dysphoric (mixed) hypomania (opposite polarity symptoms in the same episode do not support a splitting of mood disorders); 2) family history (major depressive disorder is the most common mood disorder in relatives of bipolar probands); 3) lack of points of rarity between the depressive syndromes of bipolar II disorder and major depressive disorder; 4) major depressive disorder with bipolar features such as depressive mixed states, young onset age, atypical features, bipolar family history, irritability, racing thoughts, and psychomotor agitation; 5) a high proportion of major depressive disorders shifting to bipolar disorders during long-term follow-up; 6) a high proportion of major depressive disorders with history of manic and hypomanic symptoms; 7) factors of hypomania present in major depressive disorder episodes; 8) recurrent course of major depressive disorder; and 9) depressive symptoms much more common than manic and hypomanic symptoms in the course of bipolar disorders. abstract_id: PUBMED:31400256 The role of different patterns of psychomotor symptoms in major depressive episode: Pooled analysis of the BRIDGE and BRIDGE-II-MIX cohorts. Background: Psychomotor agitation (PA) or retardation (PR) during major depressive episodes (MDEs) has been associated with depression severity in terms of treatment-resistance and course of illness. Objectives: We investigated the possible association of psychomotor symptoms (PMSs) during an MDE with clinical features belonging to the bipolar spectrum. Methods: The initial sample of 7689 MDE patients was divided into three subgroups based on the presence of PR, PA and non-psychomotor symptoms (NPS). Univariate comparisons and multivariate logistic regression models were performed between subgroups. Results: A total of 3720 patients presented PR (48%), 1971 showed PA (26%) and 1998 had NPS (26%). In the PR and PA subgroups, the clinical characteristics related to bipolarity, along with the diagnosis of bipolar disorder (BD), were significantly more frequent than in the NPS subgroup. When comparing PA and PR patients, the former presented higher rates of bipolar spectrum features, such as family history of BD (OR = 1.39, CI = 1.20-1.61), manic/hypomanic switches with antidepressants (OR = 1.28, CI = 1.11-1.48), early onset of first MDE (OR = 1.40, CI = 1.26-1.57), atypical (OR = 1.23, CI = 1.07-1.42) and psychotic features (OR = 2.08, CI = 1.78-2.44), treatment with mood-stabilizers (OR = 1.39, CI = 1.24-1.55), as well as a BD diagnosis according to both the DSM-IV criteria and the bipolar specifier criteria. When the logistic regression model was performed, the clinical features that significantly differentiated PA from PR were early onset of first MDE, atypical and psychotic features, treatment with mood-stabilizers and a BD diagnosis according to the bipolar specifier criteria.
Conclusions: Psychomotor symptoms could be considered markers of bipolarity, illness severity, and treatment complexity, particularly if PA is present. abstract_id: PUBMED:27218816 Insomnia brings soldiers into mental health treatment, predicts treatment engagement, and outperforms other suicide-related symptoms as a predictor of major depressive episodes. Given the high rates of suicide among military personnel and the need to characterize suicide risk factors associated with mental health service use, this study aimed to identify suicide-relevant factors that predict: (1) treatment engagement and treatment adherence, and (2) suicide attempts, suicidal ideation, and major depressive episodes in a military sample. Army recruiters (N = 2596) completed a battery of self-report measures upon study enrollment. Eighteen months later, information regarding suicide attempts, suicidal ideation, major depressive episodes, and mental health visits was obtained from participants' military medical records. Suicide attempts and suicidal ideation were very rare in this sample; negative binomial regression analyses with robust estimation were used to assess correlates and predictors of mental health treatment visits and major depressive episodes. More severe insomnia and agitation were significantly associated with mental health visits at baseline and over the 18-month study period. In contrast, suicide-specific hopelessness was significantly associated with fewer mental health visits. Insomnia severity was the only significant predictor of major depressive episodes. Findings suggest that assessment of sleep problems might be useful in identifying at-risk military service members who may engage in mental health treatment. Additional research is warranted to examine the predictive validity of these suicide-related symptom measures in a more representative, higher suicide risk military sample. abstract_id: PUBMED:21035193 Associations between subtypes of major depressive episodes and substance use disorders. The goal of this study was to examine whether certain subtypes of major depressive episodes (MDEs), defined by their particular constellations of symptoms, were more strongly associated with substance use disorders (SUDs), compared to other subtypes of MDEs. Participants were adults in the National Comorbidity Survey-Replication sample who met DSM criteria for at least one lifetime MDE (n=1829). Diagnostic assessments were conducted using structured interviews. The following MDE subtypes were examined: atypical, psychomotor agitation, psychomotor retardation, melancholic, and suicidal. The results indicated that: (1) suicidal MDEs were associated with increased risk for all SUDs; (2) melancholic MDEs were associated with increased risk for alcohol use disorders; and (3) psychomotor agitation was associated with increased risk for alcohol dependence. These associations did not differ significantly by gender. Adjusting for age, the severity of the MDE, the age of onset of the first MDE, and psychiatric comorbidity did not substantially change the results. Supplemental analyses examining only diagnoses that occurred in the year prior to the assessment demonstrated a similar pattern (with MDEs characterized by psychomotor agitation being associated with drug use disorders as well).
Exploratory order of onset analyses indicated that participants with lifetime MDEs and SUDs tended to report an MDE onset prior to the SUD onset, and those who experienced a suicidal MDE at some time in their lives were particularly likely to have had their first MDE prior to developing an SUD. Therefore, risk for lifetime SUDs differs according to the particular set of symptoms experienced during MDEs. abstract_id: PUBMED:37660092 Role of trazodone in treatment of major depressive disorder: an update. Major depressive disorder (MDD) is the most common mood disorder and a leading cause of disability worldwide. Trazodone, a triazolopyridine serotonin receptor antagonist and reuptake inhibitor (SARI) antidepressant approved for MDD in adults, has established efficacy that is comparable to other available antidepressants, and is effective for a range of depression symptoms, including insomnia, which is one of the most common and bothersome symptoms of depression. Also, trazodone's pharmacodynamic properties allow it to avoid the side effects of insomnia, anxiety and sexual dysfunction often associated with selective serotonin reuptake inhibitor antidepressants. In this narrative review, we have summarized recent clinical trials and real-world data on trazodone, including the recently introduced once-daily formulation, which has single-dose pharmacokinetic properties that maintain effective blood trazodone levels for 24 h, while avoiding concentration peaks associated with side effects. This, combined with a low incidence of weight gain and sexual dysfunction, may improve adherence to treatment. The most common adverse effects of trazodone are somnolence, headache, dizziness and xerostomia. It has minimal anticholinergic activity but may be associated infrequently with orthostatic hypotension (especially in patients with cardiovascular disease or older adults), QT interval prolongation, cardiac arrhythmias, and rare episodes of priapism. The low liability for activating side effects, the efficacy on symptoms such as insomnia and psychomotor agitation and the rapid onset of action make it useful for many depressed patients, both in monotherapy at nominal dosages of 150-300 mg/day, and in combination with other antidepressants at lower dosages. Answer: The relationship between psychomotor agitation in major depressive episodes (MDEs) and bipolarity has been the subject of research, with varying conclusions. One study found that psychomotor agitation in MDEs is associated with a higher risk of mood-switching to manic, hypomanic, or mixed states, suggesting that it may be an indicator of bipolarity (PUBMED:25248024). Another study reported that both agitated and retarded depressive states were equally associated with indicators of bipolarity and anxiety, and that combined agitated and retarded major depressive states are more often bipolar than unipolar (PUBMED:18806921).
Additionally, obesity in patients with an MDE, which was associated with psychomotor agitation among other symptoms, seemed to be linked with higher rates of bipolar spectrum disorders, implying that obesity in MDE patients could be a marker of bipolarity (PUBMED:28691250). Another study pooled data from the BRIDGE and BRIDGE-II-MIX cohorts and found that psychomotor symptoms during an MDE, particularly psychomotor agitation, were associated with clinical features related to bipolarity and were more frequent in patients with a diagnosis of bipolar disorder (PUBMED:31400256). In summary, while some studies suggest that psychomotor agitation in MDEs may indicate bipolarity or be associated with bipolar spectrum features, the evidence is not conclusive, and other factors must also be considered. It is important to evaluate patients with MDEs comprehensively to determine the presence of bipolarity, considering psychomotor agitation as one of several potential indicators (PUBMED:18806921; PUBMED:30697051; PUBMED:28691250; PUBMED:25248024; PUBMED:31400256).
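Several of the abstracts above summarize group contrasts as odds ratios or hazard ratios with 95% confidence intervals (e.g., OR = 1.8, 95% CI = 1.0-3.2). As a minimal sketch of where such figures come from, the Python snippet below computes an odds ratio and its Wald-type confidence interval from a 2x2 table; the counts are invented for illustration and are not taken from any of the cited studies.

import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR); the CI is symmetric on the log scale
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: agitation present/absent vs. bipolar/unipolar diagnosis
or_, (lo, hi) = odds_ratio_with_ci(a=40, b=60, c=25, d=75)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # OR = 2.00, 95% CI = 1.09-3.66

A Cox-model hazard ratio, as reported in PUBMED:25248024, is estimated analogously on the log scale, which is why its confidence interval is also asymmetric around the point estimate.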
Instruction: Chromaticity of daylight: is the spectral composition of daylight an aetiological element in winter depression? Abstracts: abstract_id: PUBMED:15253481 Chromaticity of daylight: is the spectral composition of daylight an aetiological element in winter depression? Objectives: Surveys on winter depression in Iceland indicate a significantly lower prevalence rate of winter SAD than expected according to Iceland's latitude. Research into daylight availability in Iceland failed to reveal factors contributing to higher average daylight availability than predicted by latitude. In view of the well-known healing effects of bright light treatment, we propose that properties of daylight other than daylight availability may ease the symptoms of winter depression. Method: We analysed the spectral composition of daylight in Iceland as expressed by its chromaticity and assessed its seasonal and diurnal variations. The colorimetric properties of daylight during the year 1998 are dealt with in detail. Perception of daylight is modelled, applying the chromaticity model of MacLeod and Boynton along with environmental data on spectral irradiance recorded on location at 64°8.8' N and 21°55.8' W in Reykjavik, Iceland, and recently published data on cone fundamentals by Stockman and Sharpe. Results: The main finding is that blue hue dominates the colour of the sky, with high correlated colour temperature, without significant seasonal variations. Diurnal variations are, however, observed. Furthermore, significant deviation from 'standard' sky is detected. Conclusions: It is not known whether the observed chromaticity of daylight is a significant factor in explaining the unexpectedly low prevalence rate of seasonal affective disorder in Iceland. abstract_id: PUBMED:15526930 Daylight availability: a poor predictor of depression in Iceland. Objectives: To test the hypothesis that the unexpectedly low prevalence of winter depression in Iceland is explained by Icelanders enjoying more daylight during the winter months than allocated to them by latitude. Methods: A conventional photometer was applied to measure illuminance on a horizontal surface at 64°8.8' N and 21°55.8' W every minute throughout the year. The illuminance thus measured was compared with computed illuminance, based on theoretical upper bounds. Results: Daylight availability proved to be, on average, 60% of the theoretical upper bounds derived using clear sky conditions. Snow cover did not, on average, cause a significant increase in daylight availability. Great variability was observed in illuminance from day to day, as well as within days. Conclusions: Average daylight availability does not explain the lower than expected prevalence of winter depression in Iceland. The great variability in illuminance might, however, affect the expression of winter depression, as could daylight quality and genetic factors. abstract_id: PUBMED:32163389 Effect of the C1473G Polymorphic Variant of the Tryptophan Hydroxylase 2 Gene and Photoperiod Length on the Dopamine System of the Mouse Brain. A decrease in light in autumn and winter causes depression-like seasonal affective disorder (SAD) in sensitive patients, in which the serotonin (5-HT) and dopamine (DA) brain mediator systems are involved.
We studied the interaction of the 5-HT and DA brain systems in an experimental SAD model in sexually mature male mice of the congenic B6-1473C and B6-1473G lines with high and low activity of tryptophan hydroxylase 2, a key enzyme of 5-HT synthesis in the brain. Mice of each line (divided into two groups of eight individuals) were kept for 30 days in standard (14 h light/10 h dark) and short (4 h light/20 h dark) daylight. The presence of the C1473G variant in the tryptophan hydroxylase 2 gene did not affect the expression of key genes of the DA system: Drd1, Drd2, Slc6a3, Th, and Comt, which encode the D1 and D2 receptors, dopamine transporter, tyrosine hydroxylase, and catechol-O-methyltransferase, respectively. A decrease in the level of DA in the midbrain, as well as of its metabolite 3,4-dihydroxyphenylacetic acid (DOPAC) in the striatum, was detected in B6-1473G mice. Keeping mice in short daylight did not affect expression of the Drd1 gene in any brain structure, nor the expression of the Slc6a3 and Th genes in the midbrain. Drd2 expression increased in the midbrain and decreased in the hippocampus, where Comt expression increased. An increase in DA level in the midbrain and DOPAC in the striatum was detected in mice kept in short daylight. This indicates the involvement of the brain's DA system in the reaction to a decrease in daylight duration. No statistically significant effect of the interaction between the presence of the C1473G variant and daylight length on indicators of the activity of the DA system was detected. No reasons were found to assert that this polymorphism determines the observed reaction of the brain DA system when animals are kept under short-daylight conditions. abstract_id: PUBMED:33471850 Are consumer confidence and asset value expectations positively associated with length of daylight?: An exploration of psychological mediators between length of daylight and seasonal asset price transitions. Many economists claim that asset price transitions, particularly stock price transitions, have a seasonal cycle affected by length of daylight. Although they claim that seasonal affective disorder (SAD) is a mediator between the length of daylight and asset price transitions, recent studies in psychology have been inconclusive about the existence of SAD, and some economics studies disagree regarding the involvement of SAD in seasonal stock price transitions. The purpose of the present study is to examine if there is any psychological mediator linking length of daylight and seasonal asset price transitions as an alternative or supplement to SAD. As a possible mediator, we examined Japan's consumer confidence index (CCI) and asset value expectations (AVE), which indicate people's optimism about the future economy and are generated from a monthly household survey by the Japanese government. We analyzed individual longitudinal data from this survey between 2004 and 2018 and estimated four fixed-effects regression models to control for time-invariant unobserved heterogeneity across individual households. The results revealed that (i) there was a seasonal cycle of CCI and AVE; the trough occurred in December and the peak in early summer; (ii) the length of daylight time was positively associated with CCI and AVE; and (iii) the higher the latitude, the larger the seasonal cycle of CCI and AVE became. These findings suggest that the length of daylight may affect asset price transitions through the cycle of optimism/pessimism about the future economy exemplified by the CCI and AVE.
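The fixed-effects design described in the preceding abstract controls for time-invariant differences between households by using only within-household variation. The Python sketch below is a minimal illustration with synthetic data (household effects, daylight hours, and the true coefficient are all invented) of the within-transformation such models rely on.

import numpy as np

rng = np.random.default_rng(0)
n_households, n_months = 200, 60

# Synthetic panel: each household has a fixed (time-invariant) optimism level,
# and confidence responds to daylight hours with slope 0.5 (the "true" effect).
household_effect = rng.normal(0, 2, size=(n_households, 1))
daylight_hours = rng.uniform(9, 15, size=(n_households, n_months))
confidence = household_effect + 0.5 * daylight_hours + rng.normal(0, 1, size=(n_households, n_months))

# Within-transformation: demeaning each household's series removes the fixed
# effect; the slope is then estimated by pooled OLS on the demeaned data.
x = daylight_hours - daylight_hours.mean(axis=1, keepdims=True)
y = confidence - confidence.mean(axis=1, keepdims=True)
beta = (x * y).sum() / (x * x).sum()
print(f"Estimated daylight effect: {beta:.3f}")  # close to the true 0.5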
abstract_id: PUBMED:9572101 Self-reported mental distress under the shifting daylight in the high north. Background: The validity of the concept of seasonal affective disorder and the causal link to lack of daylight in winter is controversial. There is a need for investigations in large samples of the general population at different latitudes and within general research contexts to avoid selective response bias and sensitization of the population. Methods: During a study of health effects of the air pollution from Russia in a small community at 70° north, a self-administered questionnaire was filled in by 3736 inhabitants, 60.8% of the total population between 18 and 69 years. Three questions concerned depression, sleeping problems and other problems related to the two contrasting seasons with regard to daylight. Results: Twenty-seven per cent reported having some kind of problem in the dark period. Most frequently reported were sleeping problems during winter, in 19.9% of women and 11.2% of men. Self-reported depression in winter was found in 11.1% of women and 4.8% of men. Sleeping problems increased with age, while depression was most often reported by middle-aged people. The only other reported problem in winter was fatigue. The adjusted relative risk (RR) for winter depression in women compared to men was 2.5 (95% confidence interval: 1.9-3.2). Very few had problems in summer. Conclusions: In the high north, one-third of the women and one-fifth of the men experience problems with sleep, mood or energy related to season. The prevalence of self-reported depression was surprisingly low in winter considering the lack of daylight. abstract_id: PUBMED:38460431 Bright daylight produces negative effects on affective and cognitive outcomes in nocturnal rats. The daily light/dark cycle affects animals' learning, memory, and cognition. Exposure to insufficient daylight illumination negatively impacts emotion and cognition, leading to seasonal affective disorder characterized by depression, anxiety, low motivation, and cognitive impairment in diurnal animals. However, how this affects memory, learning, and cognition in nocturnal rodents is largely unknown. Here, we studied the effect of daytime light illuminance on memory, learning, cognition, and mRNA expression levels in the hippocampus, thalamus, and cortex, the higher-order learning centers. Two experiments were performed. In experiment one, rats were exposed to 12 L:12D (12 h light and 12 h dark) with a 10, 100, or 1000 lx daytime light illuminance. After 30 days, various behavioral tests (novel object recognition test, hole board test, elevated plus maze test, radial arm maze, and passive avoidance test) were performed. In experiment 2, rats were raised from birth either under constant bright light (250 lx; LL) or a daily light-dark cycle (12 L:12D). After four months, behavioral tests (novel object recognition test, hole board test, elevated plus maze test, radial arm maze, passive avoidance test, Morris water maze, and Y-maze tests) were performed. At the end of the experiments, rats were sampled, and mRNA expression of Brain-Derived Neurotrophic Factor (Bdnf), Tyrosine kinase (Trk), microRNA132 (miR132), Neurogranin (Ng), Growth Associated Protein 43 (Gap-43), cAMP Response Element-Binding Protein (Crebp), Glycogen synthase kinase-3β (Gsk3β), and Tumour necrosis factor-α (Tnf-α) was measured in the hippocampus, cortex, and thalamus of individual rats.
Our results show that exposure to bright daylight (100 and 1000 lx; experiment 1) or constant light (experiment 2) compromises memory, learning, and cognition. Suppressed expression levels of these mRNAs were also observed in the hypothalamus, cortex, and thalamus. These results suggest that light affects different groups of animals differently. abstract_id: PUBMED:18631427 We are in the dark here: induction of depression- and anxiety-like behaviours in the diurnal fat sand rat by short daylight or melatonin injections. Circadian rhythms are considered an important factor in the aetiology, expression and treatment of major affective disorders, including seasonal affective disorder (SAD). However, data on the effects of daylight length manipulation or melatonin administration are complex. It has been suggested that since diurnal and nocturnal mammals differ significantly in their physiological and behavioural responses to daylight, diurnal rodents offer a preferable model of disorders related to circadian rhythms in the diurnal human. We previously found that diurnal fat sand rats maintained under short daylight (SD) show depression-like behaviour in the forced swim test (FST). The present study was designed to test additional behaviours related to affective disorders and study the involvement of melatonin in these behaviours. Sand rats were divided into short-daylight (SD, 5 h light:19 h dark) and long-daylight (LD, 12 h light:12 h dark) groups, and received 100 μg melatonin or vehicle administration for 3 wk (5 h and 8.5 h after light onset in the LD room). Animals were then tested for reward-seeking behaviour (saccharin consumption), anxiety (elevated plus-maze), aggression (resident-intruder test), and depression-like behaviour (FST). SD or melatonin administration resulted in a depressed/anxious-like behavioural phenotype including reduced reward seeking, increased anxiety, decreased aggression and decreased activity in the FST, supporting the notion that in a diurnal animal, reduced light results in a variety of behavioural changes that may model depression and anxiety; and that melatonin may be a significant factor in these changes. We suggest that the sand rat may offer an excellent model species to explore the interactions between daylight, affective behaviour and the related underlying mechanisms. abstract_id: PUBMED:19428655 Effects of bright light treatment on depression- and anxiety-like behaviors of diurnal rodents maintained on a short daylight schedule. A possible relationship between circadian rhythms and affective disorders has been strongly implicated, but understanding of the biological basis of such a relationship demands the utilization of appropriate animal models. Most research is performed with nocturnal rodents, while some of the effects of daylight cycles or melatonin levels in nocturnal animals may differ greatly from effects in diurnal species (including humans). Recent studies suggested the diurnal fat Sand rat as an appropriate model animal to study the involvement of circadian mechanisms in mood and anxiety disorders, especially seasonal affective disorder (SAD). These studies demonstrated that Sand rats chronically exposed to short daylight (SD), or to a melatonin regimen mimicking short daylight, show anxiety- and depression-like behaviors. These findings established face and construct validity for the model. The present study evaluated predictive validity by testing the effects of bright light treatment in Sand rats exposed to chronic SD.
Sand rats maintained on SD for 3 weeks were treated with 1 h daily 3000 lx light for 3 weeks, 1 h after "lights on" (during the light phase of the light/dark cycle), and their behavior tested in the sweet solution preference test (SSP), elevated plus-maze (EPM) and forced swim test (FST) and compared with control animals without treatment. Results indicate that bright light treatment reduced anxiety-like behavior in the EPM and depression-like behavior in the FST but not in the SSP. It is suggested that the results support the possibility that the diurnal Sand rat might be a preferred model animal for the study of SAD. abstract_id: PUBMED:31031606 Low Daytime Light Intensity Disrupts Male Copulatory Behavior, and Upregulates Medial Preoptic Area Steroid Hormone and Dopamine Receptor Expression, in a Diurnal Rodent Model of Seasonal Affective Disorder. Seasonal affective disorder (SAD) involves a number of psychological and behavioral impairments that emerge during the low daytime light intensity associated with winter, but which remit during the high daytime light intensity associated with summer. One symptom frequently reported by SAD patients is reduced sexual interest and activity, but the endocrine and neural bases of this particular impairment during low daylight intensity are unknown. Using a diurnal laboratory rodent, the Nile grass rat (Arvicanthis niloticus), we determined how chronic housing under a 12:12 h day/night cycle involving dim low-intensity daylight (50 lux) or bright high-intensity daylight (1,000 lux) affects males' copulatory behavior, reproductive organ weight, and circulating testosterone. We also examined the expression of mRNAs for the aromatase enzyme, estrogen receptor 1 (ESR1), and androgen receptor (AR) in the medial preoptic area (mPOA; brain site involved in the sensory and hormonal control of copulation), and mRNAs for the dopamine (DA) D1 and D2 receptors in both the mPOA and nucleus accumbens (NAC; brain site involved in stimulus salience and motivation to respond to reward). Compared to male grass rats housed in high-intensity daylight, males in low-intensity daylight displayed fewer mounts and intromissions when interacting with females, but the groups did not differ in their testes or seminal vesicle weights, or in their circulating levels of testosterone. Males in low-intensity daylight unexpectedly had higher ESR1, AR and D1 receptor mRNA in the mPOA, but did not differ from high-intensity daylight males in D1 or D2 mRNA expression in the NAC. Reminiscent of humans with SAD, dim winter-like daylight intensity impairs aspects of sexual behavior in a male diurnal rodent. This effect is not due to reduced circulating testosterone and is associated with upregulation of mPOA steroid and DA receptors that may help maintain some sexual motivation and behavior under winter-like lighting conditions. abstract_id: PUBMED:1685840 Seborrheic dermatitis and daylight. Patients with mood depression have been found to have a higher prevalence of seborrheic dermatitis (SD), possibly related to their tendency to live indoors. The prevalence of outpatients with SD has now been found to be directly related to the number of gloomy days in the area. Since UV light might not be the only reason for the well-known improvement in SD in summer, an explanation possibly related to melatonin is envisaged.
Answer: The spectral composition of daylight, particularly its chromaticity, has been proposed as a potential factor in the etiology of winter depression, also known as seasonal affective disorder (SAD). Research conducted in Iceland, where the prevalence of winter SAD is lower than expected based on latitude, has focused on the properties of daylight other than its availability. The study found that the blue hue dominates the color of the sky in Iceland, with a high correlated color temperature and without significant seasonal variations, although diurnal variations were observed. The study also noted a significant deviation from 'standard' sky conditions (PUBMED:15253481). However, it is not conclusively known whether the observed chromaticity of daylight is a significant factor in explaining the low prevalence rate of SAD in Iceland. Other studies have looked at factors such as daylight availability and found that it does not explain the lower than expected prevalence of winter depression in Iceland. The variability in illuminance might affect the expression of winter depression, as could daylight quality and genetic factors (PUBMED:15526930). Additionally, research on the effects of daylight on the dopamine system of the mouse brain indicates that the brain's dopamine system is involved in the reaction to a decrease in daylight duration, which is relevant to the understanding of SAD (PUBMED:32163389). In summary, while there is some evidence to suggest that the spectral composition of daylight, or its chromaticity, may have an impact on winter depression, the exact relationship remains unclear, and further research is needed to establish chromaticity as an etiological element in winter depression. Factors such as daylight variability, quality, and genetic predispositions may also play significant roles in the expression of SAD.
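The chromaticity study above (PUBMED:15253481) characterizes Icelandic daylight by its correlated colour temperature (CCT). As a rough illustration of how a CCT figure can be obtained from measured chromaticity, the sketch below applies McCamy's widely used polynomial approximation to CIE 1931 (x, y) coordinates; the sample coordinates are hypothetical, not values taken from the study.

def cct_mccamy(x, y):
    """Approximate correlated colour temperature (K) from CIE 1931 (x, y)
    using McCamy's (1992) cubic formula; valid roughly for 2000-12500 K."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Hypothetical bluish-sky chromaticity: yields a high CCT, consistent with the
# blue-dominated skies reported in the abstract.
print(f"{cct_mccamy(0.28, 0.29):.0f} K")  # about 9859 K

Note that the study itself works in the MacLeod-Boynton chromaticity space derived from cone fundamentals rather than CIE (x, y); the formula above is shown only because it is a standard, compact route from chromaticity to a CCT value.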
Instruction: Routine surveillance cystoscopy for patients with augmentation and substitution cystoplasty for benign urological conditions: is it necessary? Abstracts: abstract_id: PUBMED:19239457 Routine surveillance cystoscopy for patients with augmentation and substitution cystoplasty for benign urological conditions: is it necessary? Objective: To evaluate screening cystoscopy as the long-term follow-up in patients with an enterocystoplasty for ≥10 years. Patients And Methods: We performed a prospective analysis of 92 consecutive patients who attended our endoscopy suite for regular check cystoscopy as per standard follow-up. This is performed for all patients with cystoplasty performed at our institute after 10 years. The data were recorded on patient demographics, original diagnosis and type of cystoplasty. In all, 53 of these patients consented to undergo bladder biopsies at the same time. Results: The median (range) follow-up was 15 (10-33) years. No cancer was identified with either surveillance cystoscopy or on routine biopsies. Chronic inflammation was identified in 25 biopsies (27%). Villous atrophy was present in 12 (55%) ileal patch and three (12.5%) colonic patch biopsies. During this study, the first and only case of malignancy in a cystoplasty at our institution was diagnosed in a symptomatic patient. She had intermittent haematuria and recurrent urinary tract infections (UTIs). She previously had a normal surveillance cystoscopy. Conclusions: We feel that it is not necessary to perform yearly check cystoscopies in patients with augmented bladders at least in the first 15 years, as cancer has not yet been detected with surveillance cystoscopy in this patient group. However, if the patient develops haematuria or other worrisome symptoms including suprapubic pain and recurrent unexplained UTIs, a full evaluation, including cystoscopy and computerized tomography, should be undertaken. abstract_id: PUBMED:35770106 Renal transplantation in patients with an augmentation cystoplasty. Background: The effects of renal transplantation in patients with augmentation cystoplasty are still controversial. We retrospectively analyzed nine patients who underwent renal transplantation after augmentation cystoplasty. Methods: A total of nine patients who underwent augmentation cystoplasty prior to renal transplantation between January 1990 and May 2020 were reviewed. Basic information on augmentation cystoplasty, transplant procedures, and long-term outcomes of renal transplantation was analyzed. Results: The bowel segments utilized for augmentation cystoplasty were the stomach in two patients (one patient needed revision using the ileum), the ileum in four patients, the ileocolic pouch in one patient, the sigmoid in one patient, and the ureter in one patient. All the cystoplasties were performed prior to renal transplantation. The mean follow-up period after transplantation was 161 months (range, 2-341 months). Two patients had an episode of acute rejection each; however, their graft functions were well-maintained. Five patients had recurrent urinary tract infections, and three of these patients progressed to allograft failure. One patient died from bladder cancer with a functioning graft. Five of nine patients showed well-maintained graft function. Conclusions: Renal transplantation after bladder augmentation surgery is a major operation requiring a high level of surgical skill.
Based on our long-term experiences, we recommend diligent postoperative monitoring for urinary tract infections, optimal catheter use, and use of appropriate antibiotic prophylaxis to avoid severe complications. abstract_id: PUBMED:28522925 Urinary tract stone development in patients with myelodysplasia subjected to augmentation cystoplasty. Patients with myelodysplasia who have undergone augmentation cystoplasty are at risk for urinary tract stones. We sought to determine the incidence and risk factors for stone development in this population. The charts of 40 patients with myelodysplasia who have undergone augmentation cystoplasty were reviewed. None had a prior history of urinary tract stones. All patients were seen on an annual basis with plain abdominal imaging, renal ultrasonography, and laboratory testing. Statistical analysis included a multivariable bootstrap resampling method and Student's t-test. Fifteen (37.5%) patients developed stones, 14 with bladder stones and 1 with a solitary renal stone, at a mean of 26.9 months after augmentation. Five (33.3%) developed recurrent bladder stones. The patient with a renal stone never developed a bladder stone. The mean follow-up for the stone formers was 117.2 months and for non-stone formers was 89.9 months. The stone incidence per year was 6.8%. Risk factors included a decline in serum chloride after augmentation (P = .02), female sex, younger age at time of augmentation, longer time period since augmentation, and bowel continence. A significant proportion of patients with myelodysplasia subjected to augmentation cystoplasty develop urinary tract stones, predominantly in the bladder. Dehydration may play a role in development of lower urinary tract stones as the decline in serum chloride suggests contraction alkalosis, which could lead to constipation and improved bowel continence. Therefore, improved hydration should be a goal in this cohort. abstract_id: PUBMED:25738119 Massive vesical calculi formation as a complication of augmentation cystoplasty. Introduction: Here we report an unusual case of massive stone formation in an augmented urinary bladder. Case Presentation: A 25-year-old man presented with recurrent urinary tract infection ten years after augmentation cystoplasty performed for a complex pelvic fracture urethral distraction defect. Evaluation by ultrasonography, X-ray, and computed tomography of the abdomen showed a large burden of stones in the urinary bladder. The patient underwent an open cystolithotomy and forty stones weighing about 1400 g were removed. It was one of the largest stone burdens following augmentation cystoplasty reported to date. Discussion: Even though stone formation is a common complication after augmentation cystoplasty, it can be prevented by regular bladder washes and good follow-up. abstract_id: PUBMED:21855939 Screening for malignancy after augmentation cystoplasty in children with spina bifida: a decision analysis. Purpose: Augmentation cystoplasty is the mainstay of surgical treatment for medically refractory neurogenic bladder in patients with spina bifida. Concerns regarding an increased risk of malignancy have prompted many centers to consider routine postoperative screening. We examine the potential cost-effectiveness of such screening. Materials And Methods: A Markov model was used to compare 2 screening strategies among patients with spina bifida after cystoplasty, namely annual screening cystoscopy and cytology and usual care.
Model parameters were informed via a systematic review of post-augmentation malignancy and cost estimates from published reports or government sources. Results: In a hypothetical cohort, the individual increase in life expectancy for the entire cohort was 2.3 months with an average lifetime cost of $55,200 per capita, for an incremental cost-effectiveness ratio of $273,718 per life-year gained. One-way and two-way sensitivity analyses suggest the screening strategy could be cost effective if the annual rate of cancer development were more than 0.26% (12.8% lifetime risk) or there were a greater than 50% increase in screening effectiveness and cancer risk after augmentation. After adjusting for multiple levels of uncertainty, the screening strategy had only an 11% chance of being cost effective at a $100,000 per life-year threshold or a less than 3% chance of being cost effective at $100,000 per quality adjusted life-year. Conclusions: Annual screening for malignancy among patients with spina bifida with cystoplasty using cystoscopy and cytology is unlikely to be cost effective at commonly accepted willingness to pay thresholds. This conclusion is sensitive to a higher than expected risk of malignancy and to highly optimistic estimates of screening effectiveness. abstract_id: PUBMED:25867054 Risk of malignancy after augmentation cystoplasty: A systematic review. Objectives: To systematically evaluate the evidence regarding the risk of malignancy after augmentation cystoplasty. Method: A systematic review search was performed through Medline and Embase databases using the following key words: "cancer," "malignant neoplasm," "cystoplasty," and "bladder augmentation" until November 2014. An article was considered relevant to this review if it focused on malignant tumors occurring after augmentation cystoplasty performed for benign bladder disease. Results: After screening, 57 articles were included in the synthesis. The level of evidence was usually poor and results should be interpreted with caution. The probability of developing a malignant tumor during follow-up after augmentation cystoplasty ranged from 0% to 5.5%, and estimated incidence ranged from 0 to 272.3 per 100,000 patients/year. Adenocarcinoma was the commonest histological type (51.6%). Malignant lesions predominantly occurred at the entero-urinary anastomosis (50%). The mean latency period was 19 years and most malignant lesions were diagnosed more than 10 years after surgery (90%). Long-term surveillance by cystoscopy is still controversial because of its lack of efficiency. Non-invasive techniques have been proposed and need further evaluation. Tumors were often diagnosed at an advanced stage within surveillance protocols, because of urinary tract related symptoms (64.1%). The carcinogenesis pathway is still not clearly understood but several factors are involved. Conclusion: Augmentation cystoplasty is associated with a risk of malignancy. Studies regarding carcinogenesis and surveillance strategies should be considered to develop a more efficient follow-up protocol and allow early diagnosis. abstract_id: PUBMED:33844426 Complication profile of augmentation cystoplasty in contemporary paediatric urology: a 20-year review. Background: The aim of this study was to describe the complication profile of augmentation cystoplasty in contemporary paediatric urology as well as its effect on bladder metrics.
Methods: Consecutive operative cases were retrospectively reviewed at a single institution over 20 years (1999-2019). Short- and long-term outcomes and complications following augmentation cystoplasty were defined. Results: Of the 71 operative cases, the most common underlying diagnoses were neurogenic bladder (34%), exstrophy-epispadias complex (30%) and posterior urethral valves (23%). The most common tissue types utilized were ileal (58%) and ureteric (30%). Peri-operative urine leak affected nine (13%) children but reservoir perforations were less common (4%). Mean end-of-study detrusor pressure improved significantly following bladder augmentation (38 to 17 cmH2O, P < 0.001). Bladder capacity improved significantly (67% to 89%, P = 0.041). The median follow-up was 4.5 years (interquartile range: 1.9-10 years). Bladder urolithiasis affected 13 (18%) patients, and symptomatic urinary tract infections 36 (51%) patients. Formation of a continent catheterisable channel contributed a number of complications relating predominantly to stenosis (50%). Repeat augmentation cystoplasty was necessary in three (4%) cases. Conclusion: Augmentation cystoplasty is a surgical intervention that improves bladder metrics. Given the potential complications, careful patient selection and appropriate pre-operative counselling are essential. Furthermore, pro-active post-operative management and transitional care are vital in the surgical care of children following augmentation cystoplasty. abstract_id: PUBMED:24678231 Delivery after augmentation cystoplasty: Implications and precautions. A young female with a history of genitourinary tuberculosis and a solitary functioning kidney became pregnant 1 year after augmentation cystoplasty (AC) with ureteric reimplantation. Throughout pregnancy she had two episodes of febrile urinary tract infection. Her renal function remained normal. She was planned for cesarean section due to obstetric indications. Despite altered pelvic anatomy, we successfully performed the lower segment cesarean section. We reviewed the literature regarding pregnancy in patients with AC to determine what the treating urologist and gynecologist should know about these rare cases. Various complications which should be anticipated and measures to prevent them are also discussed. abstract_id: PUBMED:32951908 Invasive poorly differentiated adenocarcinoma of the bladder following augmentation cystoplasty: a multi-institutional clinicopathological study. Augmentation cystoplasty is a surgical procedure used in the management of patients with neurogenic bladder. This procedure involves anastomosis of the bladder with gastrointestinal grafts, including portions of ileum, colon, or stomach. A rare but important complication of augmentation cystoplasty is the development of malignancy. The majority of malignancies arising in this setting have been described in case reports. A search for cases of non-urothelial carcinoma following augmentation cystoplasty was conducted through the urological pathology files of four major academic institutions. Ten cases were identified, including six cystoprostatectomy/cystectomy, two partial cystectomy, and two transurethral resection of bladder tumour specimens. The mean patient age at diagnosis was 47 years (range 27-87 years). The male:female ratio was 4:6. The tumours tended to present at an advanced stage; four cystoprostatectomy/cystectomy cases were categorised as pT3a, one was categorised as pT3b, and one was categorised as pT4a.
Lymph node metastases were present in all cases that had lymph node excision (range 1-16 positive nodes per case). The majority of cases (90%) were predominantly characterised by a poorly differentiated adenocarcinoma with signet ring cell features. Other morphological features included mucinous features (30%), plasmacytoid features (20%), enteric/villous architecture (10%), and large cell undifferentiated morphology (10%). This is the largest study to date on the clinicopathological features of invasive non-urothelial carcinoma of the bladder following augmentation cystoplasty. The tumours are typically poorly differentiated adenocarcinomas with diffuse signet ring cell features; they are aggressive and present at high stage. Further molecular characterisation may provide additional insights into the pathogenesis of this entity. abstract_id: PUBMED:26955550 Adenoma-carcinoma Sequence in the Bladder After Augmentation Cystoplasty. We present a case of a 64-year-old woman showing multistep progression from adenoma to adenocarcinoma in the bladder 46 years after augmentation ileocystoplasty. She underwent augmentation ileocystoplasty for tuberculous contracted bladder at 18 years of age. After 44 years, tubulovillous adenomas were found and resected at the ileovesical anastomosis site. After 2 more years, bladder tumors recurred and revealed adenocarcinomas. Finally, radical cystectomy was required because of frequent recurrence and tumor extensiveness. To our knowledge, this is the first case demonstrating adenoma-carcinoma sequence histopathologically in the bladder after augmentation cystoplasty, indicating multistep carcinogenesis similar to intestinal carcinogenesis. Answer: Routine surveillance cystoscopy for patients with augmentation and substitution cystoplasty for benign urological conditions is a topic of debate. A prospective analysis of 92 consecutive patients who had undergone cystoplasty and were followed up for a median of 15 years found that no cancer was identified with either surveillance cystoscopy or on routine biopsies. Chronic inflammation was identified in some biopsies, and the only case of malignancy diagnosed at the institution was in a symptomatic patient, suggesting that yearly check cystoscopies may not be necessary in the first 15 years post-surgery. The study concluded that if a patient develops worrisome symptoms such as hematuria, suprapubic pain, or recurrent unexplained UTIs, a full evaluation including cystoscopy and computerized tomography should be undertaken (PUBMED:19239457).
The decision to implement routine surveillance should be individualized based on patient risk factors, symptoms, and the latency period for potential malignancy development.
Instruction: Is the nocturnal fall in blood pressure reduced in essential hypertensive patients with metabolic syndrome? Abstracts: abstract_id: PUBMED:15581337 Is the nocturnal fall in blood pressure reduced in essential hypertensive patients with metabolic syndrome? Objective: The aim of this study was to examine whether an impaired reduction in nocturnal blood pressure (BP), defined on the basis of two periods of ambulatory BP monitoring (ABPM), is present in hypertensive patients with metabolic syndrome, as defined by the NCEP criteria. Methods: 460 grade 1 and 2 untreated essential hypertensives (mean age 45.9 +/- 11.9 years) referred for the first time to our outpatient hospital clinic underwent the following procedures: 1) medical history and physical examination; 2) repeated clinic BP measurements; 3) routine examinations; 4) ABPM over two 24-hour periods within 4 weeks. Metabolic syndrome was defined as at least three of the following alterations: increased waist circumference, increased triglycerides, decreased HDL-cholesterol, increased BP, or high fasting glucose. Nocturnal dipping was defined as a night-time reduction in average SBP and DBP &gt;10% compared to average daytime values. Results: The 135 patients with metabolic syndrome (group I) were similar for age, gender and known duration of hypertension to the 325 patients without it (group II). There were no significant differences between the two groups in average 48-hour, daytime, night-time SBP/DBP values and the percentage nocturnal SBP and DBP decrease (-17.7 / -15.7 vs. -18.4 / -16.2, p = ns). A reproducible nocturnal dipping (decrease in BP &gt;10% from mean daytime in both ABPM periods) and non-dipping profile (decrease in BP &lt; or =10% in both ABPM periods) was found in 74 (54.8%) and 29 (21.4%) in group I and in 169 (52.1%) and 73 (22.4%) in group II, respectively (p = ns); 32 patients (23.7%) in group I and 83 patients (25.5%) in group II had a variable dipping profile (p = ns). Conclusions: This study shows that no significant difference exists in nocturnal BP patterns, assessed by two ABPMs, in untreated essential hypertensive patients with metabolic syndrome compared to those without it. abstract_id: PUBMED:38091343 Metabolic Syndrome and its Correlates among Hypertensive Patients in Abuja, North Central Nigeria. Background: Metabolic syndrome is a constellation of abnormalities which includes central obesity, dyslipidaemia, elevated blood pressure and hyperglycemia. Hypertension, (which is a very common component of metabolic syndrome), and diabetes mellitus, are independently associated. Also, studies examining metabolic syndrome inAbuja, a city with affluence-driven lifestyle, are not available. This study aimed to investigate the prevalence of metabolic syndrome among hypertensive patients in Abuja, Nigeria, as well as to examine the associations between metabolic syndrome and certain factors in that cohort of hypertensive patients. Methods: This was a retrospective study that used data from hypertensive patients who attended clinic over a period of five years. Eight hundred and fifty-eight, (858-combined), case files of pre-treated, (previously known hypertensive patients) and newly diagnosed hypertensive participants were used for the study. The student t-tests were used to compare continuous variables, while Chi-square (χ2) tests were used for relationship between qualitative variables. 
The likelihood ratio test was employed to further confirm the statistical significance of certain independent variables relating to metabolic syndrome. A P-value of < 0.05 was considered statistically significant. Results: The mean ages were 48.70 ± 12.18, 49.19 ± 11.06 and 48.2 ± 13.3 years for the combined, pre-treated and newly diagnosed groups, respectively. The pre-treated group consists of previously known hypertensive patients, while the new group consists of newly diagnosed, treatment-naïve hypertensive patients. The prevalence of metabolic syndrome in this study was 45.5% in the combined group, 47.23% in the pre-treated group and 37.3% in the newly diagnosed group. The commonest component of metabolic syndrome was reduced high-density lipoprotein cholesterol (HDL-C). Conclusion: Metabolic syndrome is prevalent among hypertensive patients in Abuja, Nigeria. Some correlates of metabolic syndrome include elevated BMI, truncal obesity, elevated total cholesterol, and the use of thiazide diuretics and beta blockers as antihypertensives. abstract_id: PUBMED:18034997 Microalbuminuria in Thai essential hypertensive patients. Essential hypertensive patients (176 males and 329 females), aged 58.0 ± 11.2 years, were enrolled in a cross-sectional study conducted from February to March 2006 to investigate the prevalence and risk factors for microalbuminuria in hypertensive patients attending the Outpatient Department of Siriraj Hospital, Bangkok, Thailand. Macroalbuminuria was detected in 11 (2.2%) patients and microalbuminuria in 94 (18.6%) patients. Only male age ≥45 years or female age ≥55 years correlated significantly with a high occurrence of microalbuminuria, while calcium channel blocker and statin users were protected against microalbuminuria. The presence of microalbuminuria was not associated with age ≥60 years, male gender, current/previous smokers, hypertension duration ≥10 years, lack of blood pressure normalization, metabolic syndrome, use of angiotensin-converting enzyme inhibitors or angiotensin receptor blockers, and multi-drug use. Risk factor recognition for microalbuminuria will enable physicians to identify cases that should be screened for microalbuminuria. abstract_id: PUBMED:32368135 Isolated Nocturnal Hypertension: What Do We Know and What Can We Do? Nocturnal hypertension has been recognized as a significant risk factor for cardio- and cerebrovascular diseases. Blood pressure (BP) monitoring significantly increased our awareness of nocturnal hypertension and studies revealed its influence on target organ damage. Nocturnal hypertension is associated with nonphysiological 24-h BP patterns, which involve an inadequate drop, or even an increase, in nighttime BP compared with daytime BP (nondipping and reverse dipping). Nevertheless, investigations showed that nocturnal hypertension was a predictor of adverse outcome independently of circadian BP pattern. There are still many uncertainties regarding diagnosis, mechanisms and treatment of nocturnal hypertension. There is a small difference between American and European guidelines in cutoff values defining nocturnal hypertension. Pathophysiology is also not clear because many conditions such as diabetes, metabolic syndrome, obesity, sleep apnea syndrome, and renal diseases are related to nocturnal hypertension and nonphysiological circadian BP pattern, but mechanisms of nocturnal hypertension still remain speculative.
The therapeutic approach is another important issue, and chronotherapy has provided the best results so far. Some studies have shown that certain groups of antihypertensive medications are more effective in regulating nocturnal BP, but it seems that the timing of drug administration has a crucial role in the reduction of nighttime BP and conversion of circadian patterns from nonphysiologic to physiologic. Follow-up studies are necessary to define the clinical benefits of nocturnal BP reduction and of restoring unfavorable 24-h BP variations to a physiological variant. abstract_id: PUBMED:15924807 A clinical intervention study among 463 essential hypertensive patients with metabolic syndrome. Objective: To study the role of baseline risk factors in predicting the onset of diabetes among essential hypertensive patients with metabolic syndrome (MS) and to evaluate an ideal therapeutic regime that could reduce the risk factors and risk of onset of diabetes. Methods: A randomized parallel clinical trial in essential hypertensive patients of grade 1 or 2 was conducted. MS was defined as the presence of two of the three components: (1) increased waist circumference and/or BMI; (2) increased triglycerides (TG) and/or decreased high-density lipoprotein cholesterol; (3) impaired glucose tolerance (IGT). The three intervention therapy groups were: indapamide + fosinopril (I + F, n = 151); atenolol + nitrendipine (A + N, n = 160); atenolol + nitrendipine + metformin (A + N + M, n = 152). Each case was followed up monthly, and the dosage of medication was adjusted according to the patient's BP level. Plasma glucose during fasting and two hours after a 75 g oral glucose load was also measured every six months. New onset of diabetes was diagnosed according to established criteria. OGTT, insulin release test, lipid analysis, body weight and waist circumference were measured again at the last follow-up. Results: (1) The lowering of BP was similar among the three groups (P > 0.05). 23 new onsets of diabetes occurred: 10 in group I + F, 8 in group A + N and 5 in group A + N + M (P > 0.05); (2) Proportions of patients' risk factors decreased significantly in group A + N or A + N + M, e.g. the proportions of high TG in each group were reduced by 14.7% and 9.3%, respectively (P < 0.05), central fat distribution by 16.7% and 15.9%, respectively (P < 0.05), and IGT by 6.6% and 29.6%, respectively (P < 0.05). However, no changes were found in group I + F; (3) After 1 year and 5 months' follow-up, the proportions of the main risk factors (high TG, central fat distribution and IGT) in the two groups were 91%, 96% and 83% versus 90%, 88% and 47%, respectively. The difference in IGT was significant between the two groups (P < 0.01), and the proportions having all three risk factors were 70% and 31% in the two groups (P < 0.01); (4) The I + F group was better than the A + N group in reducing TG and central fat distribution, and the A + N + M group improved in all risk factors. Conclusions: IGT alone or combined with increased TG plus abdominal obesity are the most important risk factors in predicting a new onset of diabetes among essential hypertensive patients with MS. Metformin in combination with atenolol plus nitrendipine can significantly prevent the onset of diabetes as well as improve patients' metabolic abnormalities. abstract_id: PUBMED:28734791 Association between nocturnal blood pressure variation and wake-up ischemic stroke.
Ischemic stroke during nocturnal sleep, known as wake-up stroke (WUS), has been reported to have more severe symptoms and worse outcomes than non-WUS. However, studies on risk factors for WUS are scarce, and the association between nocturnal blood pressure (BP) and WUS is unclear. In this study, we used ambulatory blood pressure monitoring (ABPM) to examine the association between WUS and variation in nocturnal BP. A total of 369 patients with ischemic stroke within one week were consecutively enrolled. ABPM was applied 1-2 weeks after the ictus because of possible reactive increments of BP; antihypertensive medications were delayed until ABPM. Patients were classified into two groups: WUS and non-WUS. Clinical characteristics, including ABPM parameters, were compared. Sixty-seven (18%) patients had WUS. In univariate analysis, patients with WUS had more severe stroke symptoms than patients with non-WUS. There were no differences in clinical characteristics. In addition, ABPM parameters, including nocturnal BP dipping and morning BP surge, were not associated with the occurrence of WUS. Patients with WUS had more severe stroke symptoms and worse outcomes than those with non-WUS. Variation in nocturnal BP may not be associated with the occurrence of WUS. abstract_id: PUBMED:24786779 Effect of antihypertensive treatments on insulin signalling in lympho-monocytes of essential hypertensive patients: a pilot study. It was previously demonstrated that metabolic syndrome in humans is associated with an impairment of insulin signalling in circulating mononuclear cells. At least in animal models of hypertension, angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARB) may correct alterations of insulin signalling in the skeletal muscle. In the first study, we investigated the effects of a 3-month treatment with an ARB with additional PPARγ agonist activity, telmisartan, or with a dihydropyridine calcium channel blocker, nifedipine, on insulin signalling in patients with mild-to-moderate essential hypertension. Insulin signalling was evaluated in mononuclear cells isolated through Ficoll-Paque density gradient centrifugation, with protein analysis by Western blot. An increased expression of mTOR and of phosphorylated (active) mTOR (p-mTOR) was observed in patients treated with telmisartan, but not in those treated with nifedipine, while both treatments increased the cellular expression of glucose transporter type 4 (GLUT-4). We also investigated the effects of antihypertensive treatment with two drug combinations on insulin signalling and oxidative stress. Twenty essential hypertensive patients were included in the study and treated for 4 weeks with lercanidipine. Then they were treated for 6 months with lercanidipine + enalapril or lercanidipine + hydrochlorothiazide. An increased expression of the insulin receptor and GLUT-4 and an increased activation of p70S6K1 were observed during treatment with lercanidipine + enalapril but not with lercanidipine + hydrochlorothiazide. In conclusion, telmisartan and nifedipine are both effective in improving insulin signalling in human hypertension; however, telmisartan seems to have broader effects. The combination treatment lercanidipine + enalapril seems to be more effective than lercanidipine + hydrochlorothiazide in activating insulin signalling in human lympho-monocytes. abstract_id: PUBMED:29973135 Genome-wide association study of nocturnal blood pressure dipping in hypertensive patients.
Background: Reduced nocturnal fall (non-dipping) of blood pressure (BP) is a predictor of cardiovascular target organ damage. No genome-wide association studies (GWAS) on BP dipping have been previously reported. Methods: To study genetic variation affecting BP dipping, we conducted a GWAS in the Genetics of Drug Responsiveness in Essential Hypertension (GENRES) cohort (n = 204) using the mean night-to-day BP ratio from up to four ambulatory BP recordings conducted on placebo. Associations with P < 1 × 10(-5) were further tested in two independent cohorts: Haemodynamics in Primary and Secondary Hypertension (DYNAMIC) (n = 183) and Dietary, Lifestyle and Genetic determinants of Obesity and Metabolic Syndrome (DILGOM) (n = 180). We also tested the genome-wide significant single nucleotide polymorphism (SNP) for association with left ventricular hypertrophy in GENRES. Results: In the GENRES GWAS, rs4905794 near BCL11B achieved genome-wide significance (β = -4.8%, P = 9.6 × 10(-9) for the systolic and β = -4.3%, P = 2.2 × 10(-6) for the diastolic night-to-day BP ratio). Seven additional SNPs in five loci had P values < 1 × 10(-5). The association of rs4905794 did not significantly replicate, even though in DYNAMIC the effect was in the same direction (β = -0.8%, P = 0.4 for the systolic and β = -1.6%, P = 0.13 for the diastolic night-to-day BP ratio). In GENRES, the associations remained significant even during administration of four different antihypertensive drugs. In a separate analysis in GENRES, rs4905794 was associated with echocardiographic left ventricular mass (β = -7.6 g/m(2), P = 0.02). Conclusions: rs4905794 near BCL11B showed evidence for association with nocturnal BP dipping. It was also associated with left ventricular mass in GENRES. Combined with earlier data, our results provide support for the idea that BCL11B could play a role in cardiovascular pathophysiology. abstract_id: PUBMED:38066794 Features of Allostatic Load in Patients with Essential Hypertension without Metabolic Syndrome Depending on the Nature of Nighttime Decreases in Blood Pressure. Changes in the activity of the renin-angiotensin-aldosterone system are responsible for a stable shift in the regulation of the cardiovascular system in essential hypertension (EH). They can be characterized as hemodynamic allostasis. The purpose of our study was to determine the role of hemodynamic parameters in allostatic load in patients with EH without metabolic syndrome. Twenty-four-hour ambulatory blood pressure monitoring was performed, followed by linear and non-linear rhythm analysis. Based on the daily index, patients with EH were divided into two groups: group 1-patients with no significant nighttime decrease in blood pressure (BP); group 2-patients who had a nocturnal decrease in BP. The control group included healthy persons aged 25 to 69 years. A linear analysis was used to determine the mean values of systolic and diastolic BP, heart rate (HR), time load of BP, circadian index, and structural point of BP. Non-linear analysis was applied to determine the mesor, amplitude, range of oscillations and % rhythm of BP and HR. The allostatic load index (ALI) was also calculated on the basis of the corresponding biomarkers. It was found that ALI was significantly higher in groups 1 and 2 in comparison with the control group. The hemodynamic mechanisms of this increase were different.
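Several of the abstracts in this record rest on the same two ABPM-derived quantities: the night-to-day BP ratio used as the GWAS phenotype in PUBMED:29973135, and the dipper/non-dipper grouping (a nocturnal fall of more than 10% from mean daytime BP, the convention cited in the answer below). A minimal illustrative sketch of how these are computed follows; it is not code from the cited studies, and the function names and example readings are invented for the illustration.

    def night_to_day_ratio(day_bp, night_bp):
        # Mean nocturnal BP divided by mean daytime BP (the GWAS phenotype).
        return (sum(night_bp) / len(night_bp)) / (sum(day_bp) / len(day_bp))

    def dipping_status(day_bp, night_bp, cutoff=0.10):
        # Dipper if the nocturnal fall exceeds 10% of mean daytime BP.
        nocturnal_fall = 1.0 - night_to_day_ratio(day_bp, night_bp)
        return "dipper" if nocturnal_fall > cutoff else "non-dipper"

    # Example: mean daytime SBP 140 mmHg vs. mean nighttime SBP 133 mmHg
    # gives a night-to-day ratio of 0.95, i.e. a 5% fall: a non-dipper.
    print(dipping_status([138, 140, 142], [132, 133, 134]))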
abstract_id: PUBMED:20046414 Increased inflammation, reduced plasma phospholipid eicosapentaenoic acid and reduced antioxidant potential of treated hypertensive patients with metabolic syndrome. Purpose: In the present study, we tested whether the presence of metabolic syndrome (MetS) would worsen the features of inflammation, plasma omega 3 fatty acid levels and antioxidant potential in treated hypertensive patients. Materials And Methods: Two groups were classified by the components of MetS: a reference group of treated hypertensive subjects (hypertension (HTN) group, n = 39) and a group with more than two additional MetS components (HTN with MetS group, n = 40). We further compared the parameters between the HTN group and the HTN with MetS group. Results: The results showed that age (p < 0.001) and body mass index (BMI) (p < 0.001) were significantly different between the HTN group and the HTN with MetS group. Age- and BMI-adjusted total radical trapping antioxidant potential (TRAP) (p < 0.01) was significantly lower, whereas age- and BMI-adjusted CD (p < 0.05) and interleukin (IL) 6 (p < 0.05) were significantly higher, in the HTN with MetS group than in the HTN group. Moreover, the HTN with MetS group had significantly lower levels of age- and BMI-adjusted plasma phospholipid eicosapentaenoic acid (EPA) than the HTN group (p < 0.05). On the other hand, the levels of age- and BMI-adjusted intercellular adhesion molecule-1 (ICAM-1), adiponectin and high molecular weight (HMW)-adiponectin were not significantly different between the groups. Conclusion: In conclusion, our results showed an increased inflammatory marker, reduced antioxidant potential and reduced EPA levels in treated hypertensive patients in the presence of MetS, suggesting the importance of therapeutic lifestyle changes to modify the features of MetS. Answer: The nocturnal fall in blood pressure is not significantly reduced in essential hypertensive patients with metabolic syndrome compared to those without it. A study involving 460 grade 1 and 2 untreated essential hypertensives showed no significant differences in average 48-hour, daytime, and night-time systolic blood pressure (SBP)/diastolic blood pressure (DBP) values, or in the percentage nocturnal SBP and DBP decrease, between patients with metabolic syndrome and those without it. The reproducibility of the nocturnal dipping profile (decrease in BP >10% from mean daytime in both ambulatory blood pressure monitoring (ABPM) periods) and the non-dipping profile (decrease in BP ≤10% in both ABPM periods) was also similar between the two groups (PUBMED:15581337).
Instruction: Do respiratory cycle-related EEG changes or arousals from sleep predict neurobehavioral deficits and response to adenotonsillectomy in children? Abstracts: abstract_id: PUBMED:25126038 Do respiratory cycle-related EEG changes or arousals from sleep predict neurobehavioral deficits and response to adenotonsillectomy in children? Study Objectives: Pediatric obstructive sleep apnea (OSA) is associated with hyperactive behavior, cognitive deficits, psychiatric morbidity, and sleepiness, but objective polysomnographic measures of OSA presence or severity among children scheduled for adenotonsillectomy have not explained why. To assess whether sleep fragmentation might explain neurobehavioral outcomes, we prospectively assessed the predictive value of standard arousals and also respiratory cycle-related EEG changes (RCREC), thought to reflect inspiratory microarousals. Methods: Washtenaw County Adenotonsillectomy Cohort II participants included children (ages 3-12 years) scheduled for adenotonsillectomy, for any clinical indication. At enrollment and again 7.2 ± 0.9 (SD) months later, children had polysomnography, a multiple sleep latency test, parent-completed behavioral rating scales, cognitive testing, and psychiatric evaluation. The RCREC were computed as previously described for delta, theta, alpha, sigma, and beta EEG frequency bands. Results: Participants included 133 children, 109 with OSA (apnea-hypopnea index [AHI] ≥ 1.5, mean 8.3 ± 10.6) and 24 without OSA (AHI 0.9 ± 0.3). At baseline, the arousal index and RCREC showed no consistent, significant associations with neurobehavioral morbidities, among all subjects or the 109 with OSA. At follow-up, the arousal index, RCREC, and neurobehavioral measures all tended to improve, but neither baseline measure of sleep fragmentation effectively predicted outcomes (all p > 0.05, with only scattered exceptions, among all subjects or those with OSA). Conclusion: Sleep fragmentation, as reflected by standard arousals or by RCREC, appears unlikely to explain neurobehavioral morbidity among children who undergo adenotonsillectomy. Clinical Trial Registration: ClinicalTrials.gov, ID: NCT00233194. abstract_id: PUBMED:25083016 Respiratory cycle-related electroencephalographic changes during sleep in healthy children and in children with sleep disordered breathing. Study Objective: To investigate respiratory cycle-related electroencephalographic changes (RCREC) in healthy children and in children with sleep disordered breathing (SDB) during scored event-free (SEF) breathing periods of sleep. Design: Interventional case-control repeated measurements design. Setting: Paediatric sleep laboratory in a hospital setting. Participants: Forty children with SDB and 40 healthy, age- and sex-matched children. Interventions: Adenotonsillectomy in children with SDB and no intervention in controls. Measurements And Results: Overnight polysomnography; electroencephalography (EEG) power variations within SEF respiratory cycles in the overall and frequency band-specific EEG within stage 2 nonrapid eye movement (NREM) sleep, slow wave sleep (SWS), and rapid eye movement (REM) sleep. Within both groups there was a decrease in EEG power during inspiration compared to expiration across all sleep stages. Compared to controls, RCREC in children with SDB in the overall EEG were significantly higher during REM, and frequency band-specific RCRECs were higher in the theta band of stage 2 and REM sleep, alpha band of SWS and REM sleep, and sigma band of REM sleep.
This between-group difference was not significant post-adenotonsillectomy. Conclusion: The presence of nonrandom respiratory cycle-related electroencephalographic changes (RCREC) in both healthy children and in children with sleep disordered breathing (SDB) during NREM and REM sleep has been demonstrated. The RCREC values were higher in children with SDB, predominantly in REM sleep, and this difference was reduced after adenotonsillectomy. abstract_id: PUBMED:22294810 Respiratory cycle-related EEG changes: response to CPAP. Study Objectives: Respiratory cycle-related EEG changes (RCREC) quantify statistically significant synchrony between respiratory cycles and EEG spectral power, vary to some extent with work of breathing, and may help to predict sleepiness in patients with obstructive sleep apnea. This study was designed to assess the acute response of RCREC to relief of upper airway obstruction by positive airway pressure (PAP). Design: Comparison of RCREC between baseline diagnostic polysomnograms and PAP titration studies. Setting: Accredited academic sleep disorders center. Patients: Fifty adults referred for suspected sleep disordered breathing. Interventions: For each recording, the RCREC in specific physiologic EEG frequency ranges were computed as previously described for the last 3 h of sleep not occupied by apneic events. Results: The sample included 27 women; mean age was 47 ± 11 (SD) years; and median respiratory disturbance index at baseline was 24 (inter-quartile range 15-43). Decrements in RCREC, from baseline to PAP titration, reached 43%, 24%, 14%, 22%, and 31% for delta (P = 0.0004), theta (P = 0.01), alpha (P = 0.10), sigma (P = 0.08), and beta (P = 0.01) EEG frequency ranges, respectively. Within each specific sleep stage, these reductions from baseline to PAP studies in synchrony between EEG power and respiratory cycles still reached significance (P < 0.05) for one or more EEG frequency ranges and for all frequency ranges during REM sleep. Conclusions: RCREC tends to diminish acutely with alleviation of upper airway obstruction by PAP. These data in combination with previous observations support the hypothesis that RCREC reflect numerous, subtle, brief, but consequential inspiratory microarousals. abstract_id: PUBMED:34973525 Can pediatric sleep questions be incorporated into a risk model to predict respiratory complications following adenotonsillectomy? Background: Adenotonsillectomy, one of the most frequent surgical procedures in children, is usually performed for sleep-disordered breathing, a disease spectrum from primary snoring to obstructive sleep apnea. Children undergoing an adenotonsillectomy may be at risk for perioperative respiratory complications, necessitating intervention or escalation of care. However, there is no effective preoperative screening or risk-stratification model for perioperative respiratory complications that incorporates not only clinical history and physical examination but also sleep question responses for children as there is for adults.
Objectives: The aim of this prospective observational study was to develop a risk-stratification model for perioperative respiratory complications in children undergoing an adenotonsillectomy incorporating not only clinical history and physical examination but also sleep question responses. Methods: A 25-question sleep questionnaire was prospectively administered preoperatively to 1895 children undergoing an adenotonsillectomy from November 2015 to December 2017. The primary outcome measure was overall perioperative respiratory complications, collected prospectively and defined as having at least one major or minor complication intraoperatively or postoperatively. Results: The incidence of overall perioperative respiratory complications was 20.4%. Preoperative factors associated with perioperative respiratory complications in the multiple regression model were age, race, preoperative tonsil size, the presence of a syndrome, and the presence of a pulmonary disease. None of the sleep questionnaire responses remained in the multivariable analysis. The area under the ROC curve for the risk stratification model incorporating sleep question responses was only 0.6114 (95% CI: 0.60, 0.67). Conclusion: Preoperative sleep question responses may be unable to predict overall perioperative respiratory complications in children undergoing an adenotonsillectomy. A robust risk stratification model incorporating sleep question responses with clinical history and physical examination was unable to discriminate or predict perioperative respiratory complications in our population undergoing an adenotonsillectomy. abstract_id: PUBMED:25571368 Symbolic dynamics of respiratory cycle related sleep EEG in children with sleep disordered breathing. Childhood sleep disordered breathing (SDB) is characterized by an increased work of breathing, restless night sleep and excessive daytime sleepiness and has been associated with cognitive impairment, behavioral disturbances and early cardiovascular changes. Compared to normal controls, children with SDB have elevated arousal thresholds, and their sleep EEG may elicit cortical activation associated with arousals that is often too subtle to be visually scored. The aim of this study was to assess EEG complexity throughout the respiratory cycle based on symbolic dynamics in children with SDB (n=40) and matched healthy controls. EEG amplitude values were symbolized based on the quartiles of their distribution, and words of length 3 were formed and classed into 4 types based on their patterns. Children with SDB showed less complex EEG dynamics in non-REM sleep that were unrelated to the respiratory phase. In REM sleep normal children showed a respiratory phase-related reduction in EEG variability during the expiratory phase compared to inspiration, which was not apparent in children with SDB. In conclusion, respiratory cycle related EEG dynamics are altered in children with SDB during REM sleep and indicate changes in cortical activity. abstract_id: PUBMED:28833232 Persistent respiratory effort after adenotonsillectomy in children with sleep-disordered breathing. Objectives: Adenotonsillectomy (AT) markedly improves but does not necessarily normalize polysomnographic findings in children with adenotonsillar hypertrophy and related sleep-disordered breathing (SDB). Adenotonsillectomy efficacy should be evaluated by follow-up polysomnography (PSG), but this method may underestimate persistent respiratory effort (RE).
Mandibular movement (MMas) monitoring is an innovative measurement that readily identifies RE during upper airway obstruction. We hypothesized that MMas indices would decrease in parallel with PSG indices and that children with persistent RE could be identified more reliably with MMas. Methods: Twenty-five children (3-12 years of age) with SDB were enrolled in this individual prospective-cohort study. Polysomnography was supplemented with a midsagittal movement magnetic sensor that measured MMas during each respiratory cycle before and > 3 months after AT. Results: Adenotonsillectomy significantly improved PSG indices, except for RE-related arousals (RERA). Mandibular movement index changes after AT were significantly correlated with corresponding decreases in sleep apnea-hypopnea index (AHI) and O2 desaturation index (ODI) (Spearman's rho = 0.978 and 0.922, respectively), whereas changes in MMas duration were significantly associated with both RERA duration (rho = 0.475, P = 0.017) and index (rho = 0.564, P = 0.003). Conditional multivariate analysis showed that both AHI and RERA significantly contributed to the variance of MMas index after AT (P = 0.0003 and 0.0005, respectively), whereas MMas duration was consistently related to the duration of RERA regardless of AT. Conclusion: Adenotonsillectomy significantly reduced AHI. However, persistent RERA were apparent in a significant proportion of children, and this was reflected by the remaining abnormal MMas pattern. Follow-up of children after AT can be recommended and readily achieved by monitoring MMas to identify persistent RE. Level of Evidence: 4. abstract_id: PUBMED:25218486 Periodic leg movements during sleep in children scheduled for adenotonsillectomy: frequency, persistence, and impact. Objective: The aim of this study was to assess the frequency and potential clinical impact of periodic leg movements during sleep (PLMS), with or without arousals, as recorded incidentally from children before and after adenotonsillectomy (AT). Methods: Children scheduled for AT for any clinical indications who participated in the Washtenaw County Adenotonsillectomy Cohort II were studied at enrollment and again 6 months thereafter. Assessments included laboratory-based polysomnography, a Multiple Sleep Latency Test (MSLT), parent-completed behavioral rating scales, neuropsychological testing, and psychiatric evaluation. Results: Participants included 144 children (81 boys) aged 3-12 years. Children generally showed mild to moderate obstructive sleep apnea (median respiratory disturbance index 4.5 (Q1 = 2.0, Q3 = 9.5)) at baseline, and 15 subjects (10%) had at least five periodic leg movements per hour of sleep (PLMI ≥ 5). After surgery, 21 (15%) of n = 137 subjects who had follow-up studies showed PLMI ≥ 5 (p = 0.0067). Improvements were noted after surgery in the respiratory disturbance index; insomnia symptoms; sleepiness symptoms; mean sleep latencies; hyperactive behavior; memory, learning, attention, and executive functioning on NEPSY assessments; and frequency of attention-deficit/hyperactivity disorder (DSM-IV criteria). However, PLMI ≥ 5 failed to show associations with worse morbidity in these domains at baseline or follow-up. New appearance of PLMI ≥ 5 after surgery failed to predict worsening of these morbidities (all p > 0.05), with only one exception (NEPSY) where the magnitude of association was nonetheless negligible.
Similar findings emerged for periodic leg movements with arousals (PLMAI ≥ 1). Conclusion: PLMS, with and without arousals, become more common after AT in children. However, results in this setting did not suggest substantial clinical impact. abstract_id: PUBMED:12970022 Sleep characteristics following adenotonsillectomy in children with obstructive sleep apnea syndrome. Objective: To compare the effect of adenotonsillectomy on rapid eye movement (REM)- and non-REM-related respiratory and sleep architecture characteristics in children with obstructive sleep apnea syndrome (OSAS). Study Design: This prospective study evaluated 36 children (median age, 6.9 years; range, 1.8 to 12.6 years) with OSAS using polysomnography before and a few months after adenotonsillectomy. Primary outcomes included the numbers of obstructive apneas, hypopneas, and arousals per hour of sleep. Results: At 4.6 months (range, 1 to 16 months) after adenotonsillectomy, there was a significant improvement in all respiratory parameters. The median respiratory disturbance index (RDI) decreased from 4.1/h (range, 0 to 85/h) to 0.9/h (range, 0 to 13/h) after adenotonsillectomy (p < 0.0001). The median non-REM RDI decreased from 3.0/h (range, 0 to 89/h) to 0.4/h (range, 0 to 13/h) [p < 0.001] as compared with REM RDI, which decreased from 7.8/h (range, 0 to 69/h) to 2.3/h (range, 0 to 54/h) after adenotonsillectomy (p < 0.01). The median arousal index decreased following adenotonsillectomy from 17.5/h (range, 7 to 57/h) to 14.0/h (range, 6 to 47/h) [p < 0.03]. Conclusions: Adenotonsillectomy resulted in a greater improvement in non-REM RDI as compared with REM-RDI, and a decrease in the number of arousals. abstract_id: PUBMED:20620104 Approaches to the assessment of arousals and sleep disturbance in children. Childhood arousals, awakenings, and sleep disturbances during the night are common problems for both patients and their families. Additionally, inadequate sleep may contribute to daytime sleepiness, behavioral problems, and other important consequences of pediatric sleep disorders. Arousals, awakenings, and sleep disturbances can be quantified by routine polysomnography, and arousal scoring is generally performed as part of the standard polysomnogram. Here, we review current approaches to quantification of arousals and sleep disturbances and examine outcomes that have been associated with these measures. Initial data suggest that computer-assisted identification of non-visible arousals, cyclic alternating patterns, or respiratory cycle-related EEG changes may complement what can be accomplished by human scorers. Focus on contiguous bouts of sleep or specific sleep stages may prove similarly useful. Incorporation of autonomic arousal measures-such as heart rate variability, pulse transit time, or peripheral arterial tone-into standard reports may additionally capture subtle sleep fragmentation. abstract_id: PUBMED:24552365 Sleep respiratory disturbances and arousals at moderate altitude have overlapping electroencephalogram spectral signatures. An ascent to altitude has been shown to result in more central apneas and a shift towards lighter sleep in healthy individuals. This study employs spectral analysis to investigate the impact of respiratory disturbances (central/obstructive apnea and hypopnea or periodic breathing) at moderate altitude on the sleep electroencephalogram (EEG) and to compare EEG changes resulting from respiratory disturbances and arousals.
Data were collected from 51 healthy male subjects who spent 1 night at moderate altitude (2590 m). Power density spectra of Stage 2 sleep were calculated in a subset (20) of these participants with sufficient artefact-free data for (a) epochs with respiratory events without an accompanying arousal, (b) epochs containing an arousal and (c) epochs of undisturbed Stage 2 sleep containing neither arousal nor respiratory events. Both arousals and respiratory disturbances resulted in reduced power in the delta, theta and spindle frequency ranges and increased beta power compared to undisturbed sleep. The similarity of the EEG changes resulting from altitude-induced respiratory disturbances and arousals indicates that central apneas are associated with micro-arousals, not apparent by visual inspection of the EEG. Our findings may have implications for sleep in patients and mountain tourists with central apneas and suggest that respiratory disturbances not accompanied by an arousal may, nonetheless, impact sleep quality and impair recuperative processes associated with sleep more than previously believed. Answer: Respiratory cycle-related EEG changes (RCREC) and arousals from sleep have been studied to determine if they predict neurobehavioral deficits and the response to adenotonsillectomy in children. However, the evidence suggests that these measures may not be reliable predictors in this context. A study involving children scheduled for adenotonsillectomy found that neither the arousal index nor RCREC at baseline was consistently associated with neurobehavioral morbidities, nor did they effectively predict outcomes after the surgery. The study concluded that sleep fragmentation, as reflected by standard arousals or RCREC, appears unlikely to explain neurobehavioral morbidity among children undergoing adenotonsillectomy (PUBMED:25126038). Another study investigated RCREC in healthy children and those with sleep-disordered breathing (SDB) during event-free breathing periods of sleep. It found that RCREC values were higher in children with SDB, particularly during REM sleep, and that this difference was reduced after adenotonsillectomy. However, the study did not directly link RCREC to neurobehavioral outcomes (PUBMED:25083016). Research on the response of RCREC to CPAP treatment in adults with obstructive sleep apnea showed that RCREC tends to diminish with the alleviation of upper airway obstruction. While this study suggests that RCREC may reflect inspiratory microarousals, it does not address the prediction of neurobehavioral deficits or the response to adenotonsillectomy in children (PUBMED:22294810). A study aimed at developing a risk-stratification model for perioperative respiratory complications in children undergoing adenotonsillectomy found that preoperative sleep question responses, which could include information about RCREC or arousals, were unable to predict overall perioperative respiratory complications. This suggests that these sleep characteristics may not be useful for predicting surgical outcomes (PUBMED:34973525). In summary, while RCREC and arousals from sleep are associated with sleep disordered breathing and can be affected by treatments like adenotonsillectomy, current evidence does not support their use as predictors of neurobehavioral deficits or surgical outcomes in children undergoing adenotonsillectomy (PUBMED:25126038, PUBMED:25083016, PUBMED:34973525).
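As a concrete illustration of the symbolic-dynamics method summarized in PUBMED:25571368 above (EEG amplitudes symbolized by the quartiles of their distribution, then grouped into words of length 3), here is a minimal sketch. It is not the studies' code: the abstract does not spell out its four word types, so the grouping below (constant, two-symbol, monotonic, peak-or-trough) is only one plausible scheme, and the random signal is a stand-in for a real EEG epoch.

    import numpy as np

    def symbolize(eeg, word_len=3):
        # Map each sample to a symbol 0-3 according to the quartiles of its distribution.
        q1, q2, q3 = np.percentile(eeg, [25, 50, 75])
        symbols = np.digitize(eeg, [q1, q2, q3])
        # Form overlapping words of length 3.
        return [tuple(symbols[i:i + word_len]) for i in range(len(symbols) - word_len + 1)]

    def word_type(word):
        # Hypothetical 4-way classification of word patterns (see caveat above).
        distinct = len(set(word))
        if distinct == 1:
            return "constant"        # e.g. (2, 2, 2)
        if distinct == 2:
            return "two-symbol"      # e.g. (1, 1, 3)
        a, b, c = word
        if a < b < c or a > b > c:
            return "monotonic"       # steadily rising or falling
        return "peak-or-trough"      # e.g. (0, 3, 1)

    eeg = np.random.randn(1000)      # stand-in for one EEG epoch
    counts = {}
    for w in symbolize(eeg):
        t = word_type(w)
        counts[t] = counts.get(t, 0) + 1
    print(counts)                    # the distribution of word types indexes EEG complexity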
Instruction: Are 25 SNPs from the CARDIoGRAM study associated with ischaemic stroke? Abstracts: abstract_id: PUBMED:23631657 Are 25 SNPs from the CARDIoGRAM study associated with ischaemic stroke? Background And Purpose: The Coronary Artery Disease Genome-Wide Replication and Meta-Analysis Study (CARDIoGRAM) reported 25 single-nucleotide polymorphisms (SNPs) on 15 chromosomes to be associated with coronary artery disease (CAD) risk. Because common vascular risk factors are shared between CAD and ischaemic stroke (IS), these SNPs may also be related to IS overall or one or more of its pathogenetic subtypes. Methods: We performed a candidate gene study comprising 3986 patients with IS and 2459 control subjects. The 25 CAD-associated SNPs reported by CARDIoGRAM were examined by allelic association analysis including logistic regression. Weighted and unweighted genetic risk scores (GRSs) were also compiled and likewise analysed against IS. We furthermore considered the IS main subtypes large-vessel disease (LVD), small-vessel disease and cardioembolic stroke [according to Trial of Org 10172 in Acute Stroke Treatment (TOAST)] separately. Results: SNP rs4977574 on chromosome 9p21.3 was associated with overall IS [odds ratio (OR) = 1.12; 95% confidence interval (CI): 1.04-1.20; P = 0.002] as well as LVD (OR = 1.36; 95% CI: 1.13-1.64; P = 0.001). No other SNP was significantly associated with IS or any of its main subtypes. Analogously, the GRSs did not show any noticeable effect. Conclusions: Besides the previously reported association with SNPs on chromosome 9p21, this study did not detect any significant association between IS and CAD-susceptible genetic variants. Also, GRSs compiled from these variants did not predict IS or any pathogenetic IS subtype, despite a total sample size of 6445 participants. abstract_id: PUBMED:31970634 Two Novel SNPs in the PLCL2 Gene Associated with Large Artery Atherosclerotic Stroke Identified by Fine-Mapping. A genome-wide association study (GWAS) reported that the single nucleotide polymorphism (SNP) rs4618210 in the PLCL2 gene is related to myocardial infarction (MI) in the Japanese population, but no study has examined the correlation of PLCL2 with ischemic stroke (IS). The present study was designed to investigate whether the genetic variation in PLCL2 is associated with large artery atherosclerotic (LAA) stroke in a Han Chinese population. Tagging SNPs (tSNPs) of the PLCL2 gene were determined by a fine-mapping strategy and were genotyped by improved multiplex ligation detection reaction (iMLDR) technology in 669 LAA stroke patients and 668 healthy controls. A logistic regression model was used to analyze the associations between genetic variation at PLCL2 and the risk of LAA stroke. Two SNPs were significantly associated with the risk of LAA stroke after adjusting for potential confounders: for rs4685423, the AA genotype and CA genotype decreased the risk of LAA stroke compared with the CC genotype (multivariate-adjusted, P = 0.001); for rs4618210, the AA genotype and GA genotype decreased the risk of LAA stroke compared with the GG genotype (multivariate-adjusted, P = 0.007). In addition, haplotype analysis indicated that compared with haplotype TTT, haplotype TAT decreased the risk of LAA stroke in block 2 (adjusted OR, 0.706; 95% CI, 0.550-0.907; P = 0.006). The analysis of SNP-SNP interactions showed that rs4685423 was the most influential contributor to LAA stroke risk. 
SNPs rs4685423 and rs4618210 in the PLCL2 gene may be related to the risk of LAA stroke in Han Chinese. abstract_id: PUBMED:28421636 Leveraging cell type specific regulatory regions to detect SNPs associated with tissue factor pathway inhibitor plasma levels. Tissue factor pathway inhibitor (TFPI) regulates the formation of intravascular blood clots, which manifest clinically as ischemic heart disease, ischemic stroke, and venous thromboembolism (VTE). TFPI plasma levels are heritable, but the genetics underlying TFPI plasma level variability are poorly understood. Herein we report the first genome-wide association scan (GWAS) of TFPI plasma levels, conducted in 251 individuals from five extended French-Canadian families ascertained on VTE. To improve discovery, we also applied a hypothesis-driven (HD) GWAS approach that prioritized single nucleotide polymorphisms (SNPs) in (1) hemostasis pathway genes, and (2) vascular endothelial cell (EC) regulatory regions, which are among the highest expressers of TFPI. Our GWAS identified 131 SNPs with suggestive evidence of association (P-value < 5 × 10(-8)), but no SNPs reached the genome-wide threshold for statistical significance. Hemostasis pathway genes were not enriched for TFPI plasma level-associated SNPs (global hypothesis test P-value = 0.147), but EC regulatory regions contained more TFPI plasma level-associated SNPs than expected by chance (global hypothesis test P-value = 0.046). We therefore stratified our genome-wide SNPs, prioritizing those in EC regulatory regions via stratified false discovery rate (sFDR) control, and reranked the SNPs by q-value. The minimum q-value was 0.27, and the top-ranked SNPs did not show association evidence in the MARTHA replication sample of 1,033 unrelated VTE cases. Although this study did not result in new loci for TFPI, our work lays out a strategy to utilize epigenomic data in prioritization schemes for future GWAS studies. abstract_id: PUBMED:29348973 Associations between four types of single-nucleotide polymorphisms in PLA2G7 gene and clinical atherosclerosis: a meta-analysis. Background: Previous studies have reported that some single nucleotide polymorphisms (SNPs) in the PLA2G7 gene, which encodes Lp-PLA2, yield an antiatherogenic effect, but other studies suggested otherwise. Thus, a comprehensive study to explore the effect of SNPs in the PLA2G7 gene (V279F, A379V, R92H, I198T) on clinical atherosclerosis is needed. Methods: We searched eligible studies from PubMed, EBSCO, ProQuest, Science Direct, Springer, and Cochrane databases for case-control studies to assess the association between four types of SNPs in the PLA2G7 gene and the risk of clinical atherosclerosis (CVD = cardiovascular disease, CAD = coronary artery disease, PAD = peripheral artery disease, ischemic stroke). All studies were assessed under Hardy-Weinberg equilibrium using an additive model. This meta-analysis was performed with RevMan 5.3 to provide pooled estimates of odds ratios (ORs) with 95% confidence intervals (95% CIs). Results: Fourteen clinical studies met our inclusion criteria. These included 12,432 patients with clinical atherosclerosis and 10,171 controls. We found that the ORs of two variant SNPs (V279F, R92H) were associated with clinical atherosclerosis {V279F, OR = 0.88 (95% CI, 0.81-0.95); p = 0.0007, I(2) = 40%}, {R92H, OR = 1.29 (95% CI, 1.09-1.53); p = 0.003, I(2) = 73%}.
Meanwhile, there were no significant associations for the other two, A379V {OR = 1.08 (95% CI, 0.93-1.26); p = 0.31, I(2) = 78%} and I198T {OR = 1.12 (95% CI = 0.79-1.59); p = 0.53, I(2) = 81%}. Conclusions: These results suggested that the V279F polymorphism in the PLA2G7 gene has a protective effect against clinical atherosclerosis, whereas the R92H polymorphism might contribute to an increased risk of clinical atherosclerosis. abstract_id: PUBMED:29627009 The combined effects of cardiovascular disease related SNPs on ischemic stroke. Purpose: Previous studies have revealed multiple common variants associated with known risk factors for cardiovascular disease (CVD). Ischemic stroke (IS) and CVD share several risk factors, each with substantial heritability. We aimed to generate a multi-locus genetic risk score (GRS) for IS based on CVD-related SNPs to evaluate their combined effects on IS. Methods: A total of 851 patients and 977 controls were selected from Beijing, Tianjin, Shandong, Shanxi, Shaanxi and Heilongjiang communities. The candidate genes were genotyped by PCR-hybridization. Information about demographic factors, history of disease (such as hypertension), and lifestyle was obtained using structured questionnaires. A GRS model weighted by the absolute value of the regression coefficient β was established to comprehensively assess the association between candidate SNPs and IS. The area under the receiver operating characteristic curve (AUC) was used to evaluate the value of the GRS for predicting IS. Results: The GRS of cases was 2.87 ± 0.28, which was significantly higher than the controls' GRS (2.78 ± 0.30) (P < 0.000). With the increase of the GRS, the risk of IS became higher (Ptrend < 0.000). Subjects in the top quartile of the GRS had about a 1.9-fold increased risk of IS compared with subjects in the lowest quartile (adjusted OR = 1.880, 95% CI = 1.442-2.452, P < 0.000). The AUC was 0.580 (P < 0.000). Conclusion: The 13 CVD-related SNPs had combined effects on IS. The GRS of cases was significantly higher than the controls' GRS. As the GRS increased, the risk of IS increased. The GRS model has some value for the prediction of IS. abstract_id: PUBMED:32307645 CYP2B6 Polymorphisms Are Associated with Ischemic Stroke Risk in a Chinese Han Population. Genetic factors have been demonstrated to play an important role in the pathology of ischemic stroke (IS). This study was conducted to explore the association between CYP2B6 polymorphisms and IS risk in a Chinese Han population. Four single-nucleotide polymorphisms (SNPs) in CYP2B6 from 477 cases and 495 controls were genotyped using the Agena MassARRAY. The odds ratio (OR) and 95% confidence interval (CI) were calculated under genetic models and haplotype analysis to assess the association between SNPs and IS risk. We found that rs2099361 was associated with an increased IS risk (CC vs. AA: overall: OR = 1.85, 95% CI: 1.16-2.93, P = 0.010; age ≤ 60: OR = 1.94, 95% CI: 1.02-3.70, P = 0.045; male: OR = 2.17, 95% CI: 1.22-3.86, P = 0.009). The GT genotype of rs4803420 was associated with a reduced IS risk (OR = 0.74, 95% CI: 0.57-0.98, P = 0.036); the GG genotype was associated with an increased IS risk in women (OR = 2.31, 95% CI: 1.00-5.31, P = 0.049). The rs1038376 polymorphism was associated with reduced IS risk for age ≤ 60 years (AT vs. TT: OR = 0.63, 95% CI: 0.40-0.99, P = 0.046). Interestingly, there were significant differences in some clinical indicator levels between the case and control groups and across SNP genotypes.
Our results indicated that CYP2B6 polymorphisms (rs2099361, rs4803420, and rs1038376) were associated with the risk of IS. Further studies are still needed to validate our findings with larger sample sizes. abstract_id: PUBMED:37240062 SERPINE1 mRNA Binding Protein 1 Is Associated with Ischemic Stroke Risk: A Comprehensive Molecular-Genetic and Bioinformatics Analysis of SERBP1 SNPs. The SERBP1 gene is a well-known regulator of SERPINE1 mRNA stability and progesterone signaling. However, the chaperone-like properties of SERBP1 have recently been discovered. The present pilot study investigated whether SERBP1 SNPs are associated with the risk and clinical manifestations of ischemic stroke (IS). DNA samples from 2060 unrelated Russian subjects (869 IS patients and 1191 healthy controls) were genotyped for five common SERBP1 SNPs (rs4655707, rs1058074, rs12561767, rs12566098, and rs6702742) using probe-based PCR. The association of SNP rs12566098 with an increased risk of IS (risk allele C; p = 0.001) was observed regardless of gender or physical activity level and was modified by smoking, fruit and vegetable intake, and body mass index. SNP rs1058074 (risk allele C) was associated with an increased risk of IS exclusively in women (p = 0.02), non-smokers (p = 0.003), patients with low physical activity (p = 0.04), patients with low fruit and vegetable consumption (p = 0.04), and patients with BMI ≥ 25 (p = 0.007). SNPs rs1058074 (p = 0.04), rs12561767 (p = 0.01), rs12566098 (p = 0.02), rs6702742 (p = 0.036), and rs4655707 (p = 0.04) were associated with shortening of activated partial thromboplastin time. Thus, SERBP1 SNPs represent novel genetic markers of IS. Further studies are required to confirm the relationship between SERBP1 polymorphism and IS risk. abstract_id: PUBMED:34321906 C5 Variant rs10985126 is Associated with Mortality in Patients with Symptomatic Coronary Artery Disease. Background: Complement component 5a (C5a) is a highly potent anaphylatoxin with a variety of pro-inflammatory effects. C5a contributes to the progression of atherosclerosis, and inhibition of its receptor (C5aR) might offer a therapeutic strategy in this regard. Single nucleotide polymorphisms (SNPs) of the C5 gene may modify protein expression levels and therefore the function of C5a and C5aR. This study aimed to examine associations between clinically relevant C5a SNPs and the prognosis of patients with symptomatic coronary artery disease (CAD). Furthermore, we sought to investigate the influence of C5 SNPs on C5aR platelet surface expression and circulating C5a levels. Methods: C5 variants (rs25681, rs17611, rs17216529, rs12237774, rs41258306, and rs10985126) were analyzed in a consecutive cohort of 833 patients suffering from symptomatic coronary artery disease (CAD). Circulating C5a levels were determined in 116 patients, whereas C5aR platelet surface expression was measured in 473 CAD patients. Endpoints included all-cause mortality, myocardial infarction (MI), and ischemic stroke (IS). Homozygous carriers (HC) of the minor allele (rs10985126) showed significantly higher all-cause mortality than major allele carriers. While we could not find significant associations between rs10985126 allele frequency and C5aR platelet surface expression, significantly elevated levels of circulating C5a were found in HC of the minor allele of the respective genotype. rs17216529 allele frequency correlated with the composite combined endpoint and bleeding events.
However, since the number of HC of the minor allele of this genotype was low, we cannot draw a robust conclusion about the observed associations. Conclusion: In this study, we provide evidence for the prognostic relevance of rs10985126 in CAD patients. C5 rs10985126 may serve as a prognostic biomarker for risk stratification in high-risk CAD patients and consequently promote tailored therapies. abstract_id: PUBMED:37986083 Association of MMP3, MMP14, and MMP25 gene polymorphisms with cerebral stroke risk: a case-control study. Background: Cerebral stroke (CS) is the leading cause of death in China and a complex disease caused by both modifiable risk factors and genetic factors. This study intended to investigate the association of MMP3, MMP14, and MMP25 single nucleotide polymorphisms (SNPs) with CS risk in a Chinese Han population. Methods: A total of 1,348 Han Chinese were recruited in this case-control study. Four candidate loci, including rs520540 A/G and rs679620 T/C of MMP3, rs2236302 G/C of MMP14, and rs10431961 T/C of MMP25, were successfully screened. The correlation between the four SNPs and CS risk was assessed by logistic regression analysis. The results were analyzed with the false-positive report probability (FPRP) method to assess whether significant findings were due to chance. Interactions among the four SNPs in relation to CS risk were assessed by multifactor dimensionality reduction (MDR). Results: The rs520540 A/G and rs679620 C/T SNPs in MMP3 were associated with the risk of CS in the allele, codominant, dominant and log-additive models. Ischemic stroke risk was significantly lower in carriers of the rs520540-A allele and the rs679620-T allele than in those with the G/G or C/C genotypes. However, the rs520540-A and rs679620-T alleles were associated with a higher risk of hemorrhagic stroke. Stratified analysis showed that these two SNPs were associated with reduced risk of CS in participants aged < 55 years and in non-smoking and non-drinking participants, and the rs679620 SNP was also associated with reduced CS risk in male participants. The levels of uric acid, high-density lipoprotein cholesterol, and eosinophils were different among patients with different genotypes of rs520540 and rs679620. No statistically significant association was found between MMP14 rs2236302 G/C or MMP25 rs10431961 T/C and CS, even after stratification by stroke subtype, age, gender, and smoking and drinking status, in all the genetic models. Conclusion: MMP3 rs520540 A/G and rs679620 C/T polymorphisms were associated with CS risk in the Chinese Han population, which provides useful information for the prevention and diagnosis of CS. abstract_id: PUBMED:26924317 Fine-Mapping of ABO Gene Identifies Two Novel SNPs Associated with Large Artery Atherosclerotic Stroke in a Chinese Han Population. Recently, the single nucleotide polymorphism (SNP) rs505922 in the ABO gene was related to large artery atherosclerotic (LAA) stroke in Caucasian populations by a genome-wide association study (GWAS). This study aimed to determine whether ABO gene polymorphisms are associated with LAA stroke in a Chinese Han population. A case-control study was designed, and 644 patients with LAA stroke and 642 healthy controls were enrolled. Ten tagging SNPs (tSNPs) located in the ABO gene were genotyped. Associations between genotypes and LAA stroke were analyzed with a logistic regression model after adjustment for potential confounders.
Although rs505922 was not associated with LAA stroke (TT genotype, adjusted OR = 1.32; 95% CI, 0.94 to 1.87), two novel SNPs, rs8176668 (AT genotype, adjusted OR = 0.71; 95% CI, 0.55 to 0.92) and rs2073824 (AA genotype, adjusted OR = 0.72; 95% CI, 0.57 to 0.92), were associated with LAA stroke. Haplotype analysis indicated that haplotype TC (adjusted OR = 0.72; 95% CI, 0.54 to 0.95; P = 0.018) in block 1 and haplotype ACA in block 2 (OR = 0.73; 95% CI, 0.58 to 0.91; P = 0.005) were associated with LAA stroke. Multifactor dimensionality reduction (MDR) analysis in the single-locus model indicated that rs2073824 was the most important contributor to predicting the risk of LAA stroke. No significant SNP-SNP interactions among the tested SNPs were detected. The results indicated that genetic variants in the ABO gene may influence the risk of LAA stroke in the Chinese Han population. Answer: The study reported in abstract PUBMED:23631657 investigated whether 25 single-nucleotide polymorphisms (SNPs) associated with coronary artery disease (CAD) risk from the CARDIoGRAM study were also related to ischaemic stroke (IS). The study included 3986 patients with IS and 2459 control subjects. The results showed that only one SNP, rs4977574 on chromosome 9p21.3, was associated with overall IS and large-vessel disease (LVD), a subtype of IS. No other SNP from the CARDIoGRAM study was significantly associated with IS or any of its main subtypes, and genetic risk scores (GRSs) compiled from these variants did not predict IS or any pathogenetic IS subtype. In conclusion, besides the association with SNP rs4977574 on chromosome 9p21, the study did not detect any significant association between IS and the 25 CAD-susceptible genetic variants from the CARDIoGRAM study. Therefore, the majority of the 25 SNPs from the CARDIoGRAM study are not associated with ischaemic stroke, based on the findings of this study.
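To make the genetic-risk-score construction in PUBMED:29627009 above concrete (a multi-locus GRS weighted by the absolute value of each SNP's logistic-regression coefficient, evaluated by AUC), here is a minimal illustrative sketch. It assumes scikit-learn is available, and the genotypes, coefficients and case labels are randomly generated stand-ins, not data from the study.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def weighted_grs(genotypes, betas):
        # genotypes: (n_subjects, n_snps) risk-allele counts (0/1/2);
        # betas: per-SNP logistic regression coefficients.
        # The score for subject i is sum_j |beta_j| * g_ij.
        return genotypes @ np.abs(betas)

    rng = np.random.default_rng(0)
    n_subjects, n_snps = 200, 13                 # 13 SNPs, as in the abstract
    genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))
    betas = rng.normal(0.1, 0.2, size=n_snps)    # hypothetical per-SNP coefficients
    case = rng.integers(0, 2, size=n_subjects)   # hypothetical case/control labels

    grs = weighted_grs(genotypes, betas)
    print("AUC:", roc_auc_score(case, grs))      # discriminative value of the GRS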
Instruction: Does Health Literacy Level Influence the Effectiveness of a Kiosk-Based Intervention Delivered in the Pediatric Emergency Department? Abstracts: abstract_id: PUBMED:26333526 Does Health Literacy Level Influence the Effectiveness of a Kiosk-Based Intervention Delivered in the Pediatric Emergency Department? Objective: This study assesses parents' literacy skills and evaluates how literacy levels influenced the effectiveness of a health communication intervention designed to improve safety knowledge in low-income, urban families. Methods: A total of n = 450 parents of children aged 4 to 66 months completed the Rapid Estimate of Adult Literacy in Medicine (REALM) and participated in a randomized trial of an injury prevention intervention delivered via computer kiosk in a pediatric emergency department. A safety knowledge test was administered by telephone 2 to 4 weeks later. Results: More than one-third of parents were assessed by the REALM to have marginal (30%) or inadequate (8%) reading levels; the remaining 62% of parents had adequate reading levels. REALM scores were independently associated with knowledge gains for poison storage and smoke alarms. Conclusions: Participants' reading level had an independent and significant effect on safety knowledge outcomes. Literacy level should be considered in all patient education efforts. abstract_id: PUBMED:36604284 Using Comic-Based Concussion Discharge Instructions to Address Caregiver Health Literacy in the Emergency Department. Introduction: This study compared the effectiveness of comic-based with text-based concussion discharge instructions on improving caregiver knowledge. This study also examined the role of social determinants of health on comprehension of the instructions. Methods: This was an observational study of the caregivers of pediatric concussion patients. Caregivers' health literacy and related demographic and socioeconomic factors were obtained. After the patients' evaluation in the emergency department, caregivers were given printed comic-based concussion discharge instructions. Caregivers were contacted 3 days later and tested on overall knowledge of the discharge instructions' content. These survey results were compared with historical controls who received text-based instructions. Results: A total of 120 participants were recruited, and 86 participants completed follow-up procedures. When comparing caregivers' recall with comic-based vs traditional text-based instructions, caregivers with comic-based content were more likely to accurately recall overall discharge instructions (77.5% vs 44%, P < .001), particularly physical rest and activity restrictions (86.5% vs 63%, P < .001). Caregivers were also less likely to misidentify a red flag symptom (7.5% vs 19%, P < .04). Comic-based instructions did not increase recall of cognitive rest instructions or postconcussive symptoms. When examining demographic factors, caregivers who could not recall 3 postconcussive symptoms were more likely to be Hispanic or Black, less likely to be college educated, and more likely to have low health literacy. Discussion: Novel methods should be explored to adequately prepare caregivers for continuing postconcussive care at home. Discharge instructions must be tailored to address caregivers' baseline health literacy and how caregivers digest and retain information. abstract_id: PUBMED:36684956 Development and implementation of a community health literacy hub, 'Health Kiosk'-A grassroots innovation.
Being health literate is important for obtaining sufficient health information, navigating the health system, accessing appropriate care and being able to self-manage health. As such, it is a key determinant of health. There is a need for innovative measures to improve health literacy among people living in socioeconomically vulnerable circumstances. Literature shows that this innovation needs to: have "low-threshold access" to health resources in a community-based, outreaching way; be adapted to the needs of the target group; provide reliable and understandable health information adapted to the target population; and support people in developing confidence to act on that knowledge. In response to this need, this article describes-guided by the principles underpinning the Integrated Community Care (ICC) framework-the development and implementation process of a grassroots innovation, namely "Health Kiosk", in a socioeconomically vulnerable area in the northern part of a Belgian city. To be able to focus on the core activity of the Health Kiosk-i.e., stimulating healthy living and health literacy-community building and considering the spatial environment of the neighborhood formed a fundamental basis. Several core ingredients of the Health Kiosk are important to stimulate health literacy among socioeconomically vulnerable groups, namely: (1) working in a community-based, outreaching way; (2) providing accessible health information and support to act on that knowledge; and (3) working in a flexible and independent way to adapt to local needs. As such, the Health Kiosk forms a community health literacy hub with low-threshold access for people living in socioeconomically vulnerable circumstances. abstract_id: PUBMED:30489492 Health Equity Demands Health Literacy: Ethics in the Pediatric Emergency Department. The ability of the patient or the parent, in pediatrics, to read, understand, and act upon health information is termed health literacy. Health literacy has been shown to be of primary importance when determining a patient's ability to achieve optimal health. As physicians, we often fail to recognize the enormous obstacles facing our patients. In the pediatric emergency department (PED), communication is complicated. Physicians must be able to effectively relay information to the patient's caregiver while still not forgetting to provide developmentally appropriate instructions to the child. Individuals who do not have a good understanding of what is needed to properly care for themselves or their children are at a disadvantage, and it is therefore the responsibility of the pediatric provider to do all they can to identify gaps in health literacy. As providers, we need to continually question whether we have properly conveyed the information to our patients. Teaching that results in good understanding is the ultimate goal when treating and releasing our patients in the pediatric emergency department. Matching the method of delivery of information and education to the family's health literacy will help the care team deliver effective information so that it is applied at home, hopefully preventing a rapid revisit. abstract_id: PUBMED:38019720 Health Care Provider Bias in Estimating the Health Literacy of Caregivers in a Pediatric Emergency Department. Background: Health literacy is a growing concern because of its effects on communication and health outcomes. One aspect of this communication is the ability of the health care provider to estimate the health literacy of a patient or their caregiver.
The objectives of this study are to quantify misestimation of caregiver health literacy by providers and identify potential descriptive or demographic factors that might be related to those misestimations. Methods: Providers were asked to assess descriptive factors and to estimate the health literacy of caregivers in a pediatric emergency department. Then, the health literacy of the caregiver was tested using the Short Assessment of Health Literacy, and cross-tabulated with provider estimates. Results: Providers correctly estimated the health literacy of the caregivers 60% of the time, and misestimates were more often underestimates (27.7%) than overestimates (12.3%). Providers overestimated the health literacy of 24.1% of fathers and only 9.8% of mothers (P = 0.012). They correctly estimated the health literacy of 63.9% of English-speaking caregivers compared with 30.6% of Spanish-speaking caregivers, and underestimated the health literacy of 50% of Spanish-speaking caregivers and 24.8% of English-speaking caregivers (P < 0.001). Providers correctly estimated the health literacy of 34.4% of racially and ethnically diverse caregivers compared with 71.5% of White/non-Hispanic caregivers. They underestimated the health literacy of 52.1% of these racially and ethnically diverse caregivers and 16.8% of White/non-Hispanic caregivers (P < 0.001). Conclusions: Providers often overestimate and underestimate the health literacy of parents in the pediatric emergency department. Misestimates are related to race, caregiver role, and language spoken by the caregiver. When providers misestimate health literacy, they may use words or phrases that are above or below the health literacy level of the caregiver. These results suggest a need for further health literacy research and interventions in provider education and clinical practice. abstract_id: PUBMED:19564810 Impact of a health literacy intervention on pediatric emergency department use. Objective: The aim of this study was to measure the impact of a simple parent health literacy intervention on emergency department and primary care clinic usage patterns. Methods: Study participants consisted of parents who brought their children to the Harbor-UCLA Medical Center pediatric emergency department for nonurgent complaints. Study participants filled out questionnaires regarding their management of children's mild health complaints and where respondents first seek help when their children become sick. After completing the questionnaires, participants were educated about how to use the health aid book What to Do When Your Child Gets Sick and were provided a free copy. After 6 months, telephone follow-up interviews were conducted to assess whether the health literacy intervention had influenced the participants' management of their children's mild health complaints and their health care resource usage patterns. Results: One hundred thirteen parents were enrolled in the preintervention phase, and 61 were successfully interviewed at 6 months by telephone. Before-and-after comparisons demonstrated a 13% reduction in the percentage of respondents who stated they would go to the emergency department first if their child became sick. In addition, 30% fewer respondents reported actual visits to the emergency department in the previous 6 months. Regarding specific low-acuity scenarios, significantly fewer participants would take their child to the emergency department for a low-grade fever with a temperature of 99.5 degrees F and for vomiting for 1 day.
There was no significant change in the proportion of parents who would take their child to the emergency department for earache or cough. Conclusions: Health literacy interventions may reduce nonurgent emergency department visits and help mitigate emergency department overcrowding and the rising costs of health care. abstract_id: PUBMED:26477440 Novel emergency department registration kiosk for HIV screening is cost-effective. High operating costs challenge sustainability of successful US emergency department (ED) HIV screening programs. Free-standing registration kiosks could potentially reduce the marginal costs of ED HIV screening. We investigated the incremental cost-effectiveness ratio (CER) per new HIV diagnosis for a kiosk-based approach for offering screening at ED registration versus a testing staff-based approach to offer testing at the bedside. A rapid oral-fluid HIV screening program, instituted in a US ED since 2005, had a rate of new HIV diagnosis of 0.16% in 2012. A two-phase quasi-experimental design, including a testing staff-based approach to offer testing at the bedside (Phase I, August and September 2011) and a kiosk-based approach to offer testing at ED registration (Phase II, December 2011 and January 2012), was performed. CER per new HIV diagnosis was defined as total cost of the screening program divided by number of newly diagnosed cases. Costs included screening program personnel (study coordinator, testing staff, and kiosk helpers), diagnostic assays (rapid and confirmatory tests), and kiosks (2 kiosks, software, and IT consulting fees). Sensitivity analyses were performed. Data from our dedicated testing staff (DTS) program (Phase I) resulted in an estimated 5434 patients tested in one year and 9 newly diagnosed HIV-infected patients (95% CI: 3, 18). Data from the kiosk program (Phase II) resulted in a projected 4571 ED patients tested in one year and 21 newly diagnosed HIV-infected patients (95% CI: 4, 70). The overall cost was $201,433 for the DTS program, versus $292,008 for the kiosk program. Incremental CER per new HIV diagnosis for the kiosk-based approach was $7523 (range: $1780-90,025 by sensitivity analysis). Our pilot data demonstrated that the use of kiosks for HIV screening was potentially more cost-effective than a testing staff-based bedside approach. abstract_id: PUBMED:31402511 The influence of health literacy on emergency department utilization and hospitalizations in adolescents with sickle cell disease. Objective: Healthcare spending in the US is $3.2 trillion. $1.1 trillion is attributed to hospital care, including emergency department (ED) visits and hospitalizations. There is a relationship between ED utilization, hospitalizations, and health literacy in the general population. Health literacy may play a role in frequent ED visits and hospitalizations in patients with sickle cell disease (SCD). The purpose of this paper is to describe the relationship among health literacy levels, annual hospital encounters, annual clinic visits, annual ED visits, and annual hospitalizations in 134 Black, non-Hispanic adolescents aged 10-19 years with SCD. Design: This is a cross-sectional, descriptive correlational study evaluating facilitators and barriers to health literacy and clinical outcomes in adolescents with SCD. Sample: Data were collected from 134 Black, non-Hispanic adolescents with SCD at a large, tertiary care center in Texas. Measurements: The Newest Vital Sign and REALM-Teen health literacy instruments were used to evaluate health literacy.
Results: In contrast to previous studies evaluating the influence of health literacy on ED visits and hospitalizations in the general population, there were no significant relationships within this sample. Conclusions: This study gives insight into future research to evaluate other potential influences on ED utilization and hospitalizations in pediatric patients with SCD. abstract_id: PUBMED:31761525 Implementation of a health-literate patient decision aid for chest pain in the emergency department. Objective: The aim of this study was to investigate the implementation of a new health-literacy-tested patient decision aid for chest pain in Emergency Department (ED) patients. Outcomes included disposition, knowledge, decisional conflict and satisfaction prior to discharge. Patient health literacy was explored as a factor that may explain disparities in sub-group analysis of all outcomes. Methods: A health-literacy adapted tool was deployed using a pre/post intervention design. Patients enrolled during the intervention period were given the adapted chest pain decision aid that was used in conversation with their emergency medicine physician to decide on their course of action prior to being discharged. Results: A total of 169 participants were surveyed and used in the final analysis. Patients in the usual care group were 2.6 times more likely to be admitted for chest pain than patients in the intervention group. Knowledge scores were higher in the intervention group, while no significant differences were observed in decisional conflict and patient satisfaction, or by patient health literacy level. Conclusion And Practice Implications: Using the adapted chest pain decision tool in emergency medicine may improve knowledge and reduce admissions, while addressing known barriers to understanding related to patient health literacy. abstract_id: PUBMED:23680294 The relationship between parent health literacy and pediatric emergency department utilization: a systematic review. Background: Low health literacy in parents can potentially impact understanding of a child's diagnosis and treatment course. No reviews have addressed parent health literacy in the emergency department (ED), the relationship between parent health literacy and child ED utilization, or the impact of low literacy interventions on child ED utilization. Objective: To systematically evaluate the peer-reviewed literature pertaining to parental health literacy and ED utilization. The following key questions were addressed: question (Q) 1) What is the prevalence of low health literacy, as estimated by validated health literacy measures, of parents in the ED? Q2) Is parent low health literacy related to ED use for children? Q3) Do low literacy interventions targeting parents likely to have low health literacy affect ED use for children? Data Sources: The authors reviewed 483 unduplicated titles and abstracts published between 1980 and May 2012 using PubMed and CINAHL, with 117 retained for full review and 17 included in the final analytic review. Study Eligibility Criteria, Participants, And Interventions: All included articles had a valid measure of parent health literacy and a Q1) descriptive measurement of the population, Q2) ED utilization, or Q3) utilized a low literacy educational intervention. Study Appraisal And Synthesis Methods: One author extracted data, which was verified by a second author. Studies were rated for quality by both authors.
Results: Q1) A median of 30% (interquartile range 22-36%) of parents in the ED possesses low health literacy. Q2) Studies investigating the relationship between health literacy and ED yielded mixed results. Q3) Seven of 8 low literacy interventions were associated with a reduction in ED use. Random effects pooled odds ratios from 6 studies showed intervention effectiveness (odds ratio 0.35; 95% CI 0.15-0.81). Limitations: No intervention studies measured health literacy, limiting the ability to determine whether the low literacy intervention targeted health literacy. Conclusions And Implications Of Key Findings: Roughly 1 in 3 parents of children presenting to the ED have low health literacy. Importantly, interventions targeting parents likely to have low health literacy have an impact in reducing ED utilization. Answer: Yes, health literacy level does influence the effectiveness of a kiosk-based intervention delivered in the pediatric emergency department. A study assessing parents' literacy skills and evaluating how literacy levels influenced the effectiveness of a health communication intervention designed to improve safety knowledge in low-income, urban families found that participants' reading level had an independent and significant effect on safety knowledge outcomes. More than one-third of parents were assessed to have marginal or inadequate reading levels, and these levels were independently associated with knowledge gains for poison storage and smoke alarms. The study concluded that literacy level should be considered in all patient education efforts (PUBMED:26333526). Additionally, another study that compared the effectiveness of comic-based with text-based concussion discharge instructions on improving caregiver knowledge also examined the role of social determinants of health on comprehension instructions. It found that caregivers with comic-based content were more likely to accurately recall overall discharge instructions, particularly physical rest and activity restrictions, and were less likely to misidentify a red flag symptom. The study highlighted that novel methods should be explored to adequately prepare caregivers for continuing postconcussive care at home and that discharge instructions must be tailored to address caregivers' baseline health literacy (PUBMED:36604284). Furthermore, the development and implementation of a community health literacy hub, 'Health Kiosk', aimed to improve health literacy among people living in socioeconomically vulnerable circumstances, also supports the notion that literacy level is a key determinant of health and that providing accessible health information and support to act on that knowledge is crucial (PUBMED:36684956). In summary, health literacy level is a significant factor in the effectiveness of kiosk-based interventions in the pediatric emergency department, and educational efforts should be tailored to accommodate varying literacy levels to ensure the best outcomes for patients and their families.
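The pooled odds ratio quoted above (0.35; 95% CI 0.15-0.81) comes from pooling study-level effects on the log-odds scale. As a rough, illustrative sketch of that arithmetic only: the Python below pools made-up per-study odds ratios with a fixed-effect inverse-variance model (the review itself used random effects); every number in it is hypothetical, not taken from the six studies.

import math

# Hypothetical per-study (OR, CI lower, CI upper) tuples; illustrative only.
studies = [
    (0.25, 0.10, 0.62),
    (0.40, 0.18, 0.89),
    (0.55, 0.30, 1.01),
    (0.30, 0.12, 0.75),
    (0.45, 0.20, 1.00),
    (0.35, 0.15, 0.82),
]

def pooled_or(studies):
    # Fixed-effect inverse-variance pooling on the log-OR scale.
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width
        w = 1.0 / se ** 2
        num += w * math.log(or_)
        den += w
    log_or, log_se = num / den, math.sqrt(1.0 / den)
    return (math.exp(log_or),
            math.exp(log_or - 1.96 * log_se),
            math.exp(log_or + 1.96 * log_se))

or_hat, lo, hi = pooled_or(studies)
print(f"pooled OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")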
Instruction: Do unfavourable working conditions explain mental health inequalities between ethnic groups? Abstracts: abstract_id: PUBMED:26289668 Do unfavourable working conditions explain mental health inequalities between ethnic groups? Cross-sectional data of the HELIUS study. Background: Ethnic inequalities in mental health have been found in many high-income countries. The purpose of this study is to test whether mental health inequalities between ethnic groups are mediated by exposure to unfavourable working conditions. Methods: Workers (n = 6278) were selected from baseline data of the multi-ethnic HELIUS study. Measures included two indices of unfavourable working conditions (lack of recovery opportunities, and perceived work stress), and two mental health outcomes (generic mental health: MCS-12 and depressive symptoms: PHQ-9). Mediation of the relationships between ethnicity and mental health by unfavourable working conditions was tested using the bias-corrected bootstrap confidence intervals technique. Linear models were fitted with and without the mediators included, adjusted for gender and age. Attenuation was calculated as the change in B between the models with and without mediators. Results: The sample comprised Dutch (1355), African Surinamese (1290), South-Asian Surinamese (1121), Turkish (1090), Ghanaian (729), and Moroccan (693) workers. After controlling for age and gender, all ethnic minorities had a higher risk of mental health problems as compared to the Dutch host population, with the exception of Ghanaians in the case of depressive symptoms, and African Surinamese workers with regard to both outcomes. The Turkish group stands out with the lowest mental health on both mental health indices, followed by Moroccan and South-Asian Surinamese workers. A lack of recovery opportunities mediated the relationship between ethnic group and a higher risk of mental health problems. Perceived work stress did not contribute to the explanation of ethnic inequalities. Conclusions: The higher risk of mental health problems in ethnic minority groups can be partly accounted for by a lack of recovery opportunities at work, but not by perceived work stress. This may imply that workplace prevention targeting recovery opportunities has the potential to reduce ethnic inequalities, but ethnic-specific experiences at the workplace need to be further explored. abstract_id: PUBMED:37814035 Health inequalities among young workers: the mediating role of working conditions and company characteristics. Objective: Few studies have investigated health inequalities among young workers. The objectives of this study are to assess the extent of health inequalities in a sample of job starters and to explore the contribution of job demands and organisational factors. Methods: We analyze data from the BIBB/BAuA Youth Employment Survey 2012. The cross-sectional survey includes a representative sample of 3214 German employees, apprentices, and trainees aged 15-24 years. Individuals were grouped by their years of schooling into low (< 12 years) and high levels of education (≥ 12 years). Regression analysis estimated the link between education and four health outcomes: self-rated health, number of health events, musculoskeletal symptoms, and mental health problems over the last 12 months.
Counterfactual mediation analysis tested for indirect effects of education via working conditions (i.e., physical and psychosocial job demands) and company characteristics (i.e., company size, health prevention measures, financial situation, downsizing). All analyses were adjusted for age, sex, nationality, region, working hours, job tenure, employment relationship, and economic sector. Results: Highly educated workers reported better self-rated health (b = 0.24, 95% CI 0.18-0.31) and lower numbers of health events (Rate Ratio (RR) = 0.74, 95% CI 0.67-0.82), musculoskeletal symptoms (RR = 0.73, 95% CI 0.66-0.80) and mental health problems (RR = 0.84, 95% CI 0.76-0.93). Total job demands explained between 21.6% and 87.2% of the educational differences (depending on health outcome). Unfavourable company characteristics were associated with worse health, but showed no or only small mediation effects. Conclusions: Health inequalities are already present early in the working career due to socio-economically stratified working hazards. To enhance prevention measures that aim at reducing inequalities in workplace health, we propose shifting attention towards earlier stages of life. abstract_id: PUBMED:25911619 Psychosocial work exposures among European employees: explanations for occupational inequalities in mental health. Background: Social inequalities in mental health have been demonstrated but understanding the mechanisms remains unclear. This study aims at exploring the role of psychosocial work factors in explaining occupational inequalities in mental health among European employees. Methods: The study sample covered 33,443 employees coming from the European Working Conditions Survey 2010. Mental health was measured by the WHO-5 well-being index and socioeconomic position by occupation. Twenty-five psychosocial work factors were constructed including job demands, job influence and development, role stressors, social support, quality of leadership, discrimination, violence at work, working hours, job promotion, job insecurity and work-life imbalance. Multilevel linear regressions and bootstrap analyses were performed. Results: Occupational differences were observed for poor mental health and almost all psychosocial work factors. Factors related to job demands, influence and development at work, social relationships and leadership, working hours and other factors contributed to explain the occupational inequalities in mental health. In particular, factors related to influence and development contributed substantially. Among men, workplace violence factors were found to contribute little whereas among women these factors did not play a role. Conclusions: Future prevention interventions should have a broad and comprehensive focus in order to reduce social inequalities in mental health. abstract_id: PUBMED:36674290 Working Conditions and Mental Health in a Brazilian University. The highest prevalence of mental illnesses and mental suffering in contemporary society has raised awareness of these conditions and their connection to work. In Brazil, university servants (professors and technical-administrative staff) are a focused occupational group. We developed this research with the objective of exploring the relationship between the perception of working conditions and the mental health of these servants. Structured questionnaires were applied to 285 servants, 33.5% being professors and 66.5% technical-administrative staff.
Regarding working conditions, the questionnaires included items that measured 15 primary factors and questions about their contracts and legal conditions. To evaluate mental health, the participants answered a questionnaire about common psychic symptoms, negative and positive affects, self-esteem, and family-work conflict. We composed groups of participants according to their mental health indicator scores (cluster analysis), and then compared the mean scores in working conditions for the groups. We found that the mean scores of 13 of the 15 working condition factors were significantly different between the mental health groups. Our results showed the importance of improving working conditions in universities to prevent mental illnesses. Understanding the content of each working condition factor can help define priorities among different aspects of working conditions. abstract_id: PUBMED:35049474 Prevalence of common mental disorders and treatment receipt for people from ethnic minority backgrounds in England: repeated cross-sectional surveys of the general population in 2007 and 2014. Background: Concerns persist that some ethnic minority groups experience longstanding mental health inequalities in England. It is unclear if these have changed over time. Aims: To assess the prevalence of common mental disorders (CMDs) and treatment receipt by ethnicity, and changes over time, using data from the nationally representative probability sample in the Adult Psychiatric Morbidity Surveys. Method: We used survey data from 2007 (n = 7187) and 2014 (n = 7413). A Clinical Interview Schedule-Revised score of ≥12 indicated presence of a CMD. Treatment receipt included current antidepressant use; any counselling or therapy; seeing a general practitioner about mental health; or seeing a community psychiatrist, psychologist or psychiatric nurse, in the past 12 months. Multivariable logistic regression assessed CMD prevalence and treatment receipt by ethnicity. Results: CMD prevalence was highest in the Black group; ethnic variation was explained by demographic and socioeconomic factors. After adjustment for these factors and CMDs, odds ratios for treatment receipt were lower for the Asian (0.62, 95% CI 0.39-1.00) and White Other (0.58, 95% CI 0.38-0.87) groups in 2014, compared with the White British group; for the Black group, this inequality appeared to be widening over time (2007 treatment receipt odds ratio 0.68, 95% CI 0.38-1.23; 2014 treatment receipt odds ratio 0.23, 95% CI 0.13-0.40; survey year interaction P < 0.0001). Conclusions: Treatment receipt was lower for all ethnic minority groups compared with the White British group, and lowest among Black people, for whom inequalities appear to be widening over time. Addressing socioeconomic inequality could reduce ethnic inequalities in mental health problems, but this does not explain pronounced treatment inequalities. abstract_id: PUBMED:15498404 Inequalities in mental health in the working population. Objectives: To analyze inequalities in mental health in the working population by gender and professional qualifications and to identify psychosocial risk factors and employment conditions related to the mental health of this population. Methods: We performed a cross-sectional study using data from the Barcelona Health Survey 2000. The working population aged 16-64 years (2322 men and 1836 women) was included. Mental health was measured with the General Health Questionnaire (GHQ-12).
Adjusted odds ratios (aOR) and their 95% confidence intervals (CI) were calculated by means of multivariate logistic regression models separated by job qualifications and gender. Results: The prevalence of poor mental health ranged from 8% among men working in non-manual occupations to 19% in women working in manual jobs. Women were more likely to report poor mental health status than men, although sex differences were greater among manual workers (aOR = 2.26; 95% CI, 1.68-3.05 for women compared to men in the same group). Differences according to qualifications were found among women only (aOR = 1.58 [95% CI, 1.22-2.05] for women working in manual jobs compared to those working in non-manual jobs), while no differences were found among men according to qualifications. Psychosocial risk factors were associated with mental health: demand was associated in all groups, autonomy only in non-manual occupations, and social support only in the most highly qualified working women. Employment conditions such as working a split shift (working day with a long lunch break) or having a temporary contract were associated with mental health in manual occupations only. Conclusions: Mental health among the working population is related to professional qualifications and gender. Women are at greater risk than men, especially those working in manual occupations. Psychosocial occupational factors are related to mental health status, showing different patterns depending on gender and professional qualifications. abstract_id: PUBMED:34609255 Working conditions and mental health functioning among young public sector employees. Background: The associations between adverse working conditions and mental disorders are well established. However, associations between adverse working conditions and poor mental health functioning are a less explored area. This study examines these associations among younger public sector employees of the City of Helsinki, Finland. Methods: We use data from the Young Helsinki Health Study with a representative sample of the employees of the City of Helsinki, aged 19-39 years (n = 4217). Mental health functioning was measured with the mental composite summary of the Short Form 36. Working conditions included factors related to both the psychosocial (job control and job demands) and the physical work environment (physical workload). To examine the associations, we used logistic regression models with adjustments for socio-demographics, other working conditions and health-related covariates. Results: After adjustment for sociodemographic characteristics, poor health, health behaviours and other occupational exposures, high job demands (OR=1.69; 95% CI=1.45-1.97) and low job control (OR=1.65; 95% CI=1.40-1.94) were associated with poor mental health functioning. High physical workload was not associated with the outcome (OR=0.87; 95% CI=0.72-1.05) after the adjustments. Conclusions: Adverse psychosocial working conditions were associated with mental health functioning, whereas physical working conditions were not. As impaired functioning is likely to cause health-related lost productivity and can lead to work disability, further research and interventions with a balanced approach focusing on both psychosocial working conditions and mental health functioning are recommended. abstract_id: PUBMED:26094941 Risk and resilience: health inequalities, working conditions and sickness benefit arrangements: an analysis of the 2010 European Working Conditions survey.
In this article we ask whether the level of sickness benefit provision protects the health of employees, particularly those who are most exposed to hazardous working conditions or who have little education. The study uses the European Working Conditions Survey, which includes information on 20,626 individuals from 28 countries. Health was measured by self-reported mental wellbeing and self-rated general health. Country-level sickness benefit provision was constructed using spending data from Eurostat. Group-specific associations were fitted using cross-level interaction terms between sickness benefit provision and physical and psychosocial working conditions, respectively, as well as between sickness benefit provision and little education. The mental wellbeing of employees exposed to psychosocial job strain and physical hazards, or who had little education, was better in countries that offer more generous sickness benefit. These results were found in both men and women and were robust to the inclusion of GDP and country fixed effects. In the analyses of self-reported general health, few group-specific associations were found. This article concludes that generous sickness benefit provision may strengthen employees' resilience against mental health risks at work and risks associated with little education. Consequently, in countries with a generous provision of sickness benefit, social inequalities in mental health are smaller. abstract_id: PUBMED:27197816 Contribution of working conditions to occupational inequalities in depressive symptoms: results from the national French SUMER survey. Objectives: Social inequalities in mental health have been observed, but explanations are still lacking. The objectives were to evaluate the contribution of a large set of psychosocial work factors and other occupational exposures to social inequalities in mental health in a national representative sample of employees. Methods: The sample from the cross-sectional national French survey SUMER 2010 included 46,962 employees: 26,883 men and 20,079 women. Anxiety and depression symptoms were measured using the Hospital Anxiety and Depression scale. Occupation was used as a marker of social position. Psychosocial work factors included various variables related to the classical job strain model, psychological demands, decision latitude, social support, and other understudied variables related to reward, job insecurity, job promotion, esteem, working time/hours, and workplace violence. Other occupational exposures of chemical, biological, physical, and biomechanical nature were also studied. Weighted age-adjusted linear regression analyses were performed. Results: Occupational gradients were found in the exposure to most psychosocial work factors and other occupational exposures. Occupational inequalities were observed for depressive symptoms, but not for anxiety symptoms. The factors related to decision latitude (and its sub-dimensions, skill discretion, and decision authority), social support, and reward (and its sub-dimensions, job promotion, job insecurity, and esteem) contributed to explain occupational inequalities in depressive symptoms. Decision latitude played a major role in the explanation. Workplace violence variables contributed among men only. Other exposures of physical and biomechanical nature also displayed significant contributions. Conclusions: Comprehensive prevention policies at the workplace may help to reduce social inequalities in mental health in the working population.
abstract_id: PUBMED:34530844 Employment and working conditions of nurses: where and how health inequalities have increased during the COVID-19 pandemic? Background: Nurses and midwives play a critical role in the provision of care and the optimization of health services resources worldwide, which is particularly relevant during the current COVID-19 pandemic. However, they can only provide quality services if their work environment provides adequate conditions to support them. Today the employment and working conditions of many nurses worldwide are precarious, and the current pandemic has given more visibility to nurses' vulnerability to health-damaging factors globally. This desk review explores how employment relations and employment and working conditions may be negatively affecting the health of nurses in countries such as Brazil, Croatia, India, Ireland, Italy, México, Nepal, Spain, and the United Kingdom. Main Body: Nurses' health is influenced by the broader social, economic, and political system and the redistribution of power relations that creates new policies regarding the labour market and the welfare state. The vulnerability faced by nurses is heightened by gender inequalities, in addition to social class, ethnicity/race (and caste), age and migrant status, which are inequality axes that explain why nursing workers, and often their families, are exposed to multiple risks and/or poorer health. Before the COVID-19 pandemic, the informalization of nurses' employment and working conditions was unfair and harmed their health. During the COVID-19 pandemic, there is evidence that the employment and working conditions of nurses are associated with poor physical and mental health. Conclusion: The protection of nurses' health is paramount. International and national enforceable standards are needed, along with economic and health policies designed to substantially improve employment and working conditions for nurses and work-life balance. More knowledge is needed to understand the pathways and mechanisms of how precariousness might affect nurses' health and monitor the progress towards nurses' health equity. Answer: Unfavourable working conditions have been identified as a contributing factor to mental health inequalities between ethnic groups. The HELIUS study found that ethnic minorities had a higher risk of mental health problems compared to the Dutch host population, and this was partly accounted for by a lack of recovery opportunities at work. However, perceived work stress did not significantly contribute to explaining these ethnic inequalities (PUBMED:26289668). Similarly, the BIBB/BAuA Youth Employment Survey 2012 indicated that health inequalities among young workers were present early in their working careers due to socio-economically stratified working hazards. Total job demands explained a significant portion of the educational differences in health outcomes, suggesting that working conditions play a mediating role in health inequalities (PUBMED:37814035). The European Working Conditions Survey 2010 also showed occupational differences in mental health, with psychosocial work factors such as job demands, influence and development at work, social relationships, and leadership contributing to explaining the occupational inequalities in mental health (PUBMED:25911619).
In Brazil, a study on university servants highlighted the importance of improving working conditions to prevent mental illnesses, as the mean scores of 13 out of 15 working condition factors were significantly different between the mental health groups (PUBMED:36674290). The Adult Psychiatric Morbidity Surveys in England revealed that treatment receipt for common mental disorders (CMDs) was lower for all ethnic minority groups compared to the White British group, with the Black group experiencing widening inequalities over time. While addressing socioeconomic inequality could reduce ethnic inequalities in mental health problems, this does not fully explain the pronounced treatment inequalities (PUBMED:35049474). In conclusion, unfavourable working conditions do play a role in explaining mental health inequalities between ethnic groups, as evidenced by the mediation of factors such as lack of recovery opportunities, job demands, and psychosocial work factors. However, these conditions are not the sole contributors, and other factors such as socioeconomic status, access to treatment, and broader social determinants also influence these disparities.
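Several of the abstracts above (e.g., PUBMED:26289668 and PUBMED:37814035) test whether working conditions mediate the link between group membership and mental health using bootstrap confidence intervals for the indirect effect. The sketch below shows the general shape of such a test on synthetic data; the variable names and effect sizes are invented, and it uses a plain percentile bootstrap rather than the bias-corrected variant reported in the HELIUS paper.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: x = exposure (e.g., group membership),
# m = mediator (e.g., lack of recovery opportunities), y = symptom score.
n = 1000
x = rng.integers(0, 2, n).astype(float)
m = 0.5 * x + rng.normal(0, 1, n)
y = 0.4 * m + 0.2 * x + rng.normal(0, 1, n)

def indirect_effect(x, m, y):
    # a-path: regress m on x; b-path: regress y on x and m; indirect = a * b.
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

A mediation claim like the one in the HELIUS abstract corresponds to this interval excluding zero for one mediator (recovery opportunities) but not the other (perceived work stress).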
Instruction: Do probiotics improve eradication response to Helicobacter pylori on standard triple or sequential therapy? Abstracts: abstract_id: PUBMED:27994474 Probiotics improve the efficacy of standard triple therapy in the eradication of Helicobacter pylori: a meta-analysis. Introduction: Helicobacter pylori colonization is present in half of the world's population and can lead to numerous gastrointestinal diseases if left untreated, including peptic ulcer disease and gastric cancer. Although concurrent triple therapy remains the recommended treatment regimen for H. pylori eradication, its success rate and efficacy have been declining. Recent studies have shown that the addition of probiotics can significantly increase eradication rates by up to 50%. This meta-analysis examines the impact of probiotic supplementation on the efficacy of standard triple therapy in eradicating H. pylori. Methods: A comprehensive literature search was conducted using PubMed, Cochrane Central Registry of Controlled Trials, and Google Scholar (time of inception to 2016) to identify all published randomized control trials (RCTs) assessing the use of probiotics in addition to triple therapy for the treatment of H. pylori. Searches were conducted using the keywords "probiotics", "triple therapy", and "Helicobacter pylori". RCTs comparing the use of probiotics and standard triple therapy with standard triple therapy alone for any duration in patients of any age diagnosed with H. pylori infection were included. H. pylori eradication rates (detected using urea breath test or stool antigen) were analyzed as-per-protocol (APP) and intention-to-treat (ITT). Results: A total of 30 RCTs involving 4,302 patients APP and 4,515 patients ITT were analyzed. The addition of probiotics significantly increased eradication rates by 12.2% (relative risk [RR] = 1.122; 95% CI, 1.091-1.153; P < 0.001) APP and 14.1% (RR = 1.141; 95% CI, 1.106-1.175; P < 0.001) ITT. Probiotics were beneficial among children and adults, as well as Asians and non-Asians. No significant difference was observed in efficacy between the various types of probiotics. The risk of diarrhea, nausea, vomiting, and epigastric pain was also reduced. Conclusion: The addition of probiotics is associated with improved H. pylori eradication rates in both children and adults, as well as Asians and non-Asians. Lactobacillus, Bifidobacterium, Saccharomyces, and mixtures of probiotics appear beneficial in H. pylori eradication. Furthermore, the reduction in antibiotic-associated side effects such as nausea, vomiting, diarrhea, and epigastric pain improves medication tolerance and patient compliance. Given the consequences associated with chronic H. pylori infection, the addition of probiotics to the concurrent triple therapy regimen should be considered in all patients with H. pylori infection. However, further studies are required to identify the optimal probiotic species and dose. abstract_id: PUBMED:23680708 Do probiotics improve eradication response to Helicobacter pylori on standard triple or sequential therapy? Background: The standard triple therapy for the eradication of Helicobacter pylori consists of a combination of a proton pump inhibitor at a standard dose together with two antibiotics (amoxicillin 1000 mg plus either clarithromycin 500 mg or metronidazole 400 mg) all given twice daily for a period of 7-14 days. Recent reports have shown a dramatic decline in the rate of H.
pylori eradication utilizing standard triple therapy from 95% down to 70-80%. Aims: Our study was designed to evaluate the effect of adding a probiotic as an adjuvant to common regimens used for H. pylori eradication. Materials And Methods: An open label randomized observational clinical study was designed to test three different regimens of H. pylori eradication treatment: standard triple therapy with a concomitant probiotic added at the same time (n = 100), starting the probiotic for 2 weeks before initiating standard triple therapy along with the probiotic (n = 95), and a third regimen consisting of the probiotic given concomitantly with sequential treatment (n = 76). The three arms were compared to a control group of patients treated with the traditional standard triple therapy (n = 106). Results: The eradication rate for the traditional standard therapy was 68.9%, and adding the probiotic "Bifidus infantis" to triple therapy led to a successful eradication rate of 83% (P < 0.001). Pre-treatment with 2 weeks of B. infantis before adding it to standard triple therapy increased the success rate of eradication to 90.5%. Similar improvement in eradication rate was noted when B. infantis was added as an adjuvant to the sequential therapy leading to an eradication rate of 90.8%. Conclusion: Adding B. infantis as an adjuvant to several therapeutic regimens commonly used for the eradication of H. pylori infection significantly improves the cure rates. abstract_id: PUBMED:35706754 Randomized Clinical Trial on the Efficacy of Triple Therapy Versus Sequential Therapy in Helicobacter pylori Eradication. Introduction: Helicobacter pylori (H. pylori) colonization is prevalent all over the world, and it is associated with low socioeconomic status, poor hygiene, and overcrowding. Its eradication is important since it is an etiologic agent for gastritis, peptic ulcer, gastric carcinoma, and mucosa-associated lymphoid tissue lymphoma. Different regimens are available for the eradication of H. pylori and include triple therapy and sequential therapy. Our study aims to compare the efficacy of triple therapy versus sequential therapy in the eradication of H. pylori. Material And Methods: This randomized clinical trial was conducted at the Pakistan Institute of Medical Sciences Hospital, Islamabad, from September 2016 to September 2017 after the approval of the institutional review board. A total of 160 patients were enrolled and equally divided into two groups, group A and group B. A twice-daily dose of amoxicillin 1,000 mg, rabeprazole 20 mg, and clarithromycin 500 mg was given to group A for 10 days, while group B was initially given rabeprazole 20 mg and amoxicillin 1,000 mg two times daily for the first five days (i.e., induction phase), followed by triple therapy that included rabeprazole 20 mg, clarithromycin 500 mg, and metronidazole/tinidazole 500 mg twice daily for the next five days. A negative stool antigen test performed four weeks after the completion of therapy was considered an effective eradication. A proforma was used to collect data that included age, gender, city or province of residence, family income, group (group A or group B), and eradication efficacy. Analysis of the data was performed using the Statistical Package for the Social Sciences version 17 (SPSS Inc., Chicago, USA). Results: A total of 160 patients were included, with mean age and standard deviation of 40.02±24.4 years. The male/female ratio was 1.8:1. Successful eradication of H.
pylori achieved in group A was 67.5% (N=54) in comparison to group B, which was 95% (N=76) (p=0.001). Conclusion: Sequential therapy was superior to triple therapy in H. pylori eradication. abstract_id: PUBMED:26131283 Probiotics improve efficacy and tolerability of triple therapy to eradicate Helicobacter pylori: a meta-analysis of randomized controlled trials. Objective: Gastric colonization by Helicobacter pylori is linked to a host of diseases, but eradication rates have declined in recent years. Some experimental studies suggest that probiotics may inhibit growth of H. pylori. This investigation was conducted to assess the impact of probiotics on both efficacy and tolerability of triple therapy to eradicate H. pylori. Methods: PubMed, Web of Science, and the Cochrane Collaboration were searched for relevant articles published through August 31, 2014. All analytics relied on commercially available software (Stata 11). Results: Twenty-three studies (N = 3900) qualified for meta-analysis. Pooled H. pylori eradication rates for triple therapy used alone and with added probiotics were 1464/2026 (72.26%; 95% CI, 67.66%-74.13%) and 1513/1874 (80.74%; 95% CI, 74.68%-82.76%), respectively (odds ratio [OR] = 0.58; 95% CI, 0.50-0.68). Loss of appetite was similar in both groups (OR = 0.94; 95% CI, 0.61-1.45), but most adverse events (nausea, diarrhea, epigastric pain, vomiting, taste distortion, and skin rash) were mitigated through addition of probiotics. Publication bias was not evident, as indicated by Begg's and Egger's tests. Conclusions: Probiotics may improve the efficacy of triple therapy in eradicating gastric H. pylori and alleviate most treatment-related adverse events. abstract_id: PUBMED:27665525 Standard triple therapy versus sequential therapy for eradication of Helicobacter pylori in treatment naïve and retreat patients. Background And Study Aims: Untreated Helicobacter pylori infection causes increased risk of gastric cancer, GI morbidity and mortality. Standard treatment for eradication of Helicobacter pylori infection is the triple therapy, which consists of a proton pump inhibitor together with two antibiotics (amoxicillin 1000 mg with clarithromycin 500 mg or metronidazole 400 mg) given twice daily for 7-14 days. Recent evidence revealed that cure rates of Helicobacter pylori infection with triple therapy had fallen below satisfactory targets. Sequential therapy, consisting of a twice daily dose of a PPI for ten days with amoxicillin given at 1000 mg twice daily in the first 5 days, followed by clarithromycin 500 mg and metronidazole 400 mg given twice daily in the subsequent 5 days, was recommended to improve eradication rates. We performed a randomised open label study to compare the efficacy of sequential against triple therapy in Helicobacter pylori naive and retreat patients. Patients And Methods: In a randomised open label observational study 485 patients fulfilling inclusion and exclusion criteria were randomly assigned to be treated with triple therapy (n=231) or sequential therapy (n=254). Eradication of Helicobacter pylori was documented with the 14C urea breath test (UBT) performed 6 weeks after the treatment. Results: The intention-to-treat eradication rate was better in the sequential therapy group (84.6%) than in the triple therapy group (68%) (p < 0.001). Eradication rates were significantly higher for treatment-naive than retreat patients in the triple therapy group (70.5% and 58.3%, respectively, p < 0.01).
A trend toward a better response was observed in the eradication rate for treatment-naive (88.55%) versus retreat (74.6%) patients in the sequential therapy group, but it was not statistically significant (p=0.76). Compliance was similar in the two groups; however, side effects were fewer and the clinical response was better in the sequential therapy group. abstract_id: PUBMED:25590026 A comparison between standard triple therapy and sequential therapy on eradication of Helicobacter pylori in uremic patients: A randomized clinical trial. Background: The prevalence of peptic ulcer disease in hemodialysis dependent patients is higher than the general population. These patients are also more prone to upper gastrointestinal bleeding. The aim of this study was to compare the effects of a standard triple therapy with a sequential therapy on Helicobacter pylori eradication in azotemic and hemodialysis patients. Materials And Methods: Forty-nine hemodialysis and azotemic patients, naïve to H. pylori treatment, were randomized into two groups to receive either standard triple therapy (pantoprazole 40 mg, amoxicillin 500 mg and clarithromycin 250 mg twice a day for 14 days) or a sequential therapy (pantoprazole 40 mg for 10 days, amoxicillin 500 mg twice a day for the first 5 days and clarithromycin 250 mg + tinidazole 500 mg twice a day just during the second 5 days). H. pylori eradication was evaluated by fecal H. pylori antigen assessment 8 weeks after the treatment. Results: Of 49 patients, 45 patients (21 in triple therapy group and 24 in the sequential group) completed the study. Based on intention to treat analysis, H. pylori eradication rates were 66.7% (95% confidence interval [CI]: 47.8-85.5%) in standard triple therapy group and 84% (95% CI: 69.6-98.3%) in sequential therapy group (P = 0.34). Per-protocol (PP) eradication rates were 76.2% (95% CI: 54.6-89.3%) and 87.5% (95% CI: 68.8-95.5%), respectively (P = 0.32). Conclusion: According to Maastricht III consensus report, the results of our study showed that sequential therapy might be a better choice compared with the standard triple therapy in azotemic and hemodialysis patients in Iran. We propose to assess the effects of shorter-duration sequential therapy (less than 10 days) for H. pylori eradication. abstract_id: PUBMED:23180952 Adjuvant probiotics improve the eradication effect of triple therapy for Helicobacter pylori infection. Aim: To investigate whether the addition of probiotics can improve the eradication effect of triple therapy for Helicobacter pylori (H. pylori) infection. Methods: This open randomized trial recruited 234 H. pylori positive gastritis patients from seven local centers. The patients were randomized to one-week standard triple therapy (omeprazole 20 mg bid, clarithromycin 500 mg bid, and amoxicillin 1000 mg bid; OCA group, n = 79); two weeks of pre-treatment with probiotics, containing 3 × 10^7 Lactobacillus acidophilus per day, prior to one week of triple therapy (POCA group, n = 78); or one week of triple therapy followed by two weeks of the same probiotics (OCAP group, n = 77). Successful eradication was defined as a negative C13 or C14 urease breath test four weeks after triple therapy. Patients were asked to report associated symptoms at baseline and during follow-up, and side effects related to therapy were recorded. Data were analyzed by both intention-to-treat (ITT) and per-protocol (PP) methods. Results: PP analysis involved 228 patients, 78 in the OCA, 76 in the POCA and 74 in the OCAP group.
Successful eradication was observed in 171 patients; by PP analysis, the eradication rates were significantly higher (P = 0.007 each) in the POCA (62/76; 81.6%, 95% CI 72.8%-90.4%) and OCAP (61/74; 82.4%, 95% CI 73.6%-91.2%) groups than in the OCA group (48/78; 61.5%, 95% CI 50.6%-72.4%). ITT analysis also showed that eradication rates were significantly higher in the POCA (62/78; 79.5%, 95% CI 70.4%-88.6%) and OCAP (61/77; 79.2%, 95% CI 70%-88.4%) groups than in the OCA group (48/79; 60.8%, 95% CI 49.9%-71.7%) (P = 0.014 and P = 0.015). The symptom-relieving rates in the POCA, OCAP and OCA groups were 85.5%, 89.2% and 87.2%, respectively. Only one of the 228 patients experienced an adverse reaction. Conclusion: Administration of probiotics before or after standard triple therapy may improve H. pylori eradication rates. abstract_id: PUBMED:37195552 Comparison of the efficacies of triple, quadruple and sequential antibiotic therapy in eradicating Helicobacter pylori infection: A randomized controlled trial. Background And Aim: There is regional variation in the eradication rates of Helicobacter pylori (H. pylori) regimens depending on the local antibiotic resistance patterns. The aim of this study was to compare the efficacies of triple, quadruple and sequential antibiotic therapy in eradicating H. pylori infection. Methods: A total of 296 H. pylori-positive patients were randomized to receive one of the three regimens (triple, quadruple or sequential antibiotic therapy) and eradication rate was assessed by H. pylori stool antigen test. Results: The eradication rates of standard triple therapy, sequential therapy and quadruple therapy were 93%, 92.9% and 96.4%, respectively (p = 0.57). Conclusion: Fourteen days of standard triple therapy, 14 days of bismuth-based quadruple therapy and 10 days of sequential therapy are equally efficacious in eradicating H. pylori and all regimens have optimum H. pylori eradication rates. Trial Registration: ClinicalTrials.gov Identifier: CTRI/2020/04/024929. abstract_id: PUBMED:25232429 Sequential therapy versus standard triple therapy in Helicobacter pylori eradication in a high clarithromycin resistance setting. The sequential treatment scheme has been developed to overcome the resistance problem in H. pylori eradication and favorable results have been obtained. This study compared the results of standard triple therapy with a sequential schema consisting of pantoprazole, amoxicillin, clarithromycin, and metronidazole in a high anti-microbial resistance setting. This retrospective study included subjects that underwent standard or sequential eradication treatment after a diagnosis of biopsy-documented H. pylori infection. Patients either received pantoprazole 40 mg bid, amoxicillin 1000 mg bid and clarithromycin 500 mg bid (PAC) for 10 days, or pantoprazole 40 mg bid and amoxicillin 1000 mg bid (PA) for the first 5 days of the treatment period and were then given pantoprazole 40 mg bid, clarithromycin 500 mg bid, and metronidazole 500 mg bid (PCM) in the remaining 5 days. Eradication was tested using the urea breath test. The two treatment groups did not differ with regard to H. pylori eradication rate for both the ITT population (63.9% versus 71.4% for standard and sequential therapy respectively, P = 0.278) and the per protocol population (65.9% versus 74.1% for standard and sequential therapy respectively, P = 0.248).
Although a sequential treatment appears to represent a plausible alternative, our findings suggest that alternative schedules may be required in certain populations to achieve higher success rates. abstract_id: PUBMED:28109257 Triple therapy versus sequential therapy for the first-line Helicobacter pylori eradication. Background: The eradication rate of Helicobacter pylori (H. pylori) with triple therapy, which was considered the standard first-line treatment, has decreased to 70-85%. The aim of this study is to compare 7-day triple therapy versus 10-day sequential therapy as the first-line treatment. Methods: Data of 1240 H. pylori positive patients treated with triple therapy or sequential therapy from January 2013 to December 2015 were analyzed retrospectively. The patients who had undergone previous H. pylori eradication therapy or gastric surgery were excluded. Results: There were 872 (74.3%) patients in the triple therapy group, and 302 (25.7%) patients in the sequential therapy group. There was no significant difference between the two groups regarding age, residence, comorbidities or drug compliance, but several differences were noted in endoscopic characteristics and indication for the treatment. The eradication rate of H. pylori by intention to treat analysis was 64.3% in the triple therapy group, and 81.9% in the sequential therapy group (P = 0.001). In per protocol analysis, the H. pylori eradication rate in the triple therapy and sequential therapy group was 81.9 and 90.3%, respectively (P = 0.002). There was no significant difference in overall adverse events between the two groups (P = 0.706). For the rescue therapy, bismuth-containing quadruple therapy showed comparable treatment efficacy after sequential therapy, as it did following triple therapy. Conclusions: The eradication rate of triple therapy was below the recommended threshold. Sequential therapy could be an effective and tolerable candidate for the first-line H. pylori eradication therapy. Answer: Yes, probiotics improve the eradication response to Helicobacter pylori when used with standard triple or sequential therapy. Several studies have demonstrated that the addition of probiotics to these treatment regimens significantly increases the eradication rates of H. pylori. A meta-analysis showed that probiotics increased eradication rates by 12.2% for as-per-protocol (APP) and 14.1% for intention-to-treat (ITT) analyses when added to standard triple therapy (PUBMED:27994474). Another study found that adding the probiotic Bifidus infantis to triple therapy led to an eradication rate of 83%, and pre-treatment with B. infantis before adding it to standard triple therapy increased the success rate to 90.5%. Similar improvements were noted when B. infantis was added as an adjuvant to sequential therapy, leading to an eradication rate of 90.8% (PUBMED:23680708). Furthermore, another meta-analysis confirmed that probiotics improve the efficacy of triple therapy in eradicating gastric H. pylori and alleviate most treatment-related adverse events (PUBMED:26131283). Additionally, a randomized trial found that administration of probiotics before or after standard triple therapy may improve H. pylori eradication rates (PUBMED:23180952). These findings suggest that probiotics are a beneficial adjunct to both standard triple therapy and sequential therapy for H. pylori eradication.
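The trials above report eradication as intention-to-treat (everyone randomized) and per-protocol (completers only) rates, sometimes with a relative risk. Using the ITT counts quoted in the PUBMED:23180952 abstract, the sketch below recomputes the group rates and a relative risk with a Wald-type confidence interval; the helper function is an illustrative addition, not taken from any of the papers.

import math

# ITT counts (eradicated, randomized) from the PUBMED:23180952 abstract.
itt = {"OCA": (48, 79), "POCA": (62, 78), "OCAP": (61, 77)}

def relative_risk(treat, control):
    (a, n1), (c, n2) = treat, control
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of log RR
    return rr, rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)

for name, (k, n) in itt.items():
    print(f"{name}: ITT eradication rate = {k / n:.1%}")

rr, lo, hi = relative_risk(itt["POCA"], itt["OCA"])
print(f"POCA vs OCA: RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")

The per-protocol rates in the same abstract are computed identically but with the completer denominators (76, 74, 78), which is why PP rates run slightly higher than ITT rates.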
Instruction: Is chromosome analysis mandatory in the initial investigation of normovulatory women seeking infertility treatment? Abstracts: abstract_id: PUBMED:15958396 Is chromosome analysis mandatory in the initial investigation of normovulatory women seeking infertility treatment? Background: There is no agreement about the frequency of chromosomal abnormalities (CAs) in the female partner of an infertile couple and therefore there is no evidence base for determining whether karyotype analysis is mandatory before the initiation of infertility treatment. The aim of this prospective study was to estimate the prevalence of karyotype abnormalities in normovulatory women attending an infertility clinic and compare it to that known to be present in the newborn female population. Methods: Cytogenetic testing was performed in 1206 women with normal ovulatory cycles seeking infertility treatment. At least 15 GTG-banded metaphases were analysed in each case. In the case of a structural abnormality, fluorescent in situ hybridization (FISH) analysis and high resolution banding (HRB) were performed on a new blood sample to elucidate the aberration. When mosaicism was suspected, the number of analysed metaphases was increased to a total of 115 and an additional analysis of 200 metaphases was done on a second blood sample. Results: A chromosomal abnormality was demonstrated in 0.58% (95% CI: 0.28-1.19) of cases, which did not differ significantly from that reported in female newborns (0.79%; 95% CI: 0.68-0.94). Balanced reciprocal translocation was observed in 0.4% of patients (n = 5), paracentric inversion of chromosome X in 0.08% (n = 1) and gonosomal mosaicism in 0.08% (n = 1). However, chromosomal aberrations were less common among females with primary infertility compared to those with secondary infertility (0.25% versus 1.25%, P = 0.04). Conclusions: The present study suggests that routine cytogenetic analysis cannot be advocated in normovulatory infertile women. Nevertheless, the relatively higher frequency of abnormal karyotypes in women with secondary infertility indicates that this subgroup of patients might benefit from a routine karyotype analysis. abstract_id: PUBMED:35937441 Psychobiological, clinical, and sociocultural factors that influence Black women seeking treatment for infertility: a mixed-methods study. Objective: To provide a comprehensive and multidimensional description and conceptualization of the experiences of Black women seeking treatment for infertility. Design: Convergent parallel mixed-methods study combining retrospective chart review data and semistructured interview data. Setting: Private infertility clinic. Patients: African American/Black women between 18 and 44 years of age who presented for an initial infertility evaluation with a male partner between January 2015 and September 2019 at an infertility clinic in the metropolitan Washington D.C. area. Interventions: None. Main Outcomes: Treatment seeking. Measures: Psychobiological, clinical, and sociocultural factors. Results: Along with the psychobiological, clinical, and sociocultural domains, we understood that Black women who sought treatment for infertility were older and overweight, had complex gynecological diagnoses, and experienced infertility for long periods of time. The delay in seeking treatment was possibly because of a low perceived risk of infertility, poor understanding of treatment options, inadequate referral patterns of primary care providers, and limited social support.
Further, Black women experienced delays in seeking treatment because they attempted lifestyle-based self-interventions before considering medical interventions. Facilitators to care included psychological distress, complex gynecological medical history, and finding culturally competent providers. Conclusions: The study findings show that Black women in the United States are vulnerable to disparities in healthcare delivery, especially within reproductive endocrinology. Our findings highlight areas where Black women are experiencing missed opportunities for teaching, early identification, and early referrals for infertility-related concerns. Future studies should seek to reduce barriers to infertility treatment at the clinical and policy levels. abstract_id: PUBMED:27141468 Socio-Demographic Correlates of Women's Infertility and Treatment Seeking Behavior in India. Background: Infertility is an emergent issue in India. Until recently, very few studies have examined the patterns and consequences of infertility in India. Family planning programs in India have also focused exclusively on the patterns and determinants of overfertility rather than infertility. Furthermore, there is a lack of information about treatment seeking behavior of infertile couples. Therefore, this paper aimed to examine the extent of infertility and treatment seeking behavior among infertile women in India. An attempt was also made to evaluate the effects of socio-demographic factors on treatment seeking behavior. Methods: The study used the data from the District Level Household and Facility Survey carried out in India during 2007-08. Several statistical techniques such as chi-square test, proportional hazard model and binary logistic regression model were used for the analysis. Results: Approximately 8% of currently married women suffered from infertility in India and most of them had secondary infertility (5.8%). Within India, women's infertility rate was the highest in West Bengal (13.9 percent) and the lowest in Meghalaya (2.5 percent). About 80% of infertile women sought treatment but a substantial proportion (33%) received non-allopathic and traditional treatment due to the expense of modern treatment and lack of awareness. Conclusion: In the context of policy response, it can be said that there is a need to improve the existing services and quality of care for infertile women. Treatment for infertility should be integrated into the larger reproductive health packages. abstract_id: PUBMED:24799871 The emotional-psychological consequences of infertility among infertile women seeking treatment: Results of a qualitative study. Background: Infertility is a major life event that brings about social and psychological problems. The type and rate of these problems differ with the socio-cultural context of different geographical areas and the sex of the person. Objective: The aim of this qualitative study was to explain the psychological consequences of infertility in Iranian infertile women seeking treatment. Materials And Methods: This qualitative study was done using qualitative content analysis on 25 women affected by primary and secondary infertility with no surviving children in 2012. They were purposefully selected with maximum sample variation from a large Fertility Health Research Center in Tehran, Iran. Data were collected using 32 semi-structured interviews and analyzed by the conventional content analysis method. Results: The findings of this study include four main themes: 1.
Cognitive reactions of infertility (mental engagement; psychological turmoil). 2. Cognitive reactions to the therapy process (psychological turmoil; being difficult to control in some situations; reduced self-esteem; feelings of failure). 3. Emotional-affective reactions of infertility (fear, anxiety and worry; loneliness and guilt; grief and depression; regret). 4. Emotional-affective reactions to the therapy process (fear, anxiety and worry; fatigue and helplessness; grief and depression; hopelessness). Conclusion: This study revealed that Iranian infertile women seeking treatment face several psychological-emotional problems with devastating effects on the mental health and well-being of the infertile individuals and couples, while infertility is often treated as a biomedical issue in the Iranian context, with less attention to the mental-emotional, social and cultural aspects. This article was extracted from the Ph.D. thesis of Seyede Batool Hasanpoor-Azghady. abstract_id: PUBMED:26584236 An internet forum analysis of stigma power perceptions among women seeking fertility treatment in the United States. Infertility is a condition that affects nearly 30 percent of women aged 25-44 in the United States. Though past research has addressed the stigmatization of infertility, few have done so in the context of stigma management between fertile and infertile women. In order to assess evidence of felt and enacted stigma, we employed a thematic content analysis of felt and enacted stigma in an online infertility forum, Fertile Thoughts, to analyze 432 initial threads by women in various stages of the treatment-seeking process. We showed that infertile women are frequently stigmatized for their infertility or childlessness and coped through a variety of mechanisms including backstage joshing and social withdrawal. We also found that infertile women appeared to challenge and stigmatize pregnant women for perceived immoral behaviors or lower social status. We argue that while the effects of stigma power are frequently perceived and felt in relationships between infertile women and their fertile peers, the direction of the enacted stigma is related to social standing and feelings of fairness and reinforces perceived expressions of deserved motherhood in the United States. abstract_id: PUBMED:36303623 An exploration of treatment seeking behavior of women experienced infertility and need for services in rural India. Background: To make informed decisions on fertility treatment, couples need to understand the treatment options available to them. A wide range of treatment options is available from the traditional and biomedical service providers in India. There is a dearth of research to find out factors that influence the treatment-seeking behavior of couples, particularly in rural areas. Objectives: The study aimed to document the treatment-seeking behavior of women for their infertility problems. Further, the research focused on the socio-economic determinants affecting allopathic treatment-seeking of women and the services needed for couples experiencing infertility in rural India. Methods: The study is cross-sectional. Primary data were collected from two districts with a high prevalence of infertility. Complete mapping and listing were carried out to identify the eligible respondents. A total of 159 ever-married women (20-49 years) out of 172 identified women were interviewed. Bivariate and multivariate analyses were performed. Results: Among 159 interviewed women, only three did not seek any kind of treatment.
Of the 156 women, 63, 65, and 28 women (mutually exclusive) received first, second and third-order treatment, respectively. The number of women decreased in the succeeding phases of infertility. Women aged above 35 years were significantly less likely (OR = 0.310, p < 0.05) than women aged below 30 years to receive allopathic treatment. The use of allopathic treatment was significantly three times higher among women who were educated (OR = 3.712, p < 0.01) and two times higher among those who were exposed (OR = 2.217, p < 0.5) to media. Further, among those who had felt the treatment was necessary, about 30, 44, 10, and 19% mentioned that due to unaffordability, inaccessibility, or inconveniences they could not access allopathic treatment. Conclusions: Timely diagnosis and appropriate treatment play an important role in infertility management. Women who are more educated and are exposed to media tend to consult allopathic treatment. Similarly, time and money spent on care vary significantly and independently by type of treatment and socioeconomic factors. There is a need for mandatory insurance coverage for infertility treatment enacted by the state government. In addition to the public services, the private sector and the traditional healers are both important alternative sources of first help. abstract_id: PUBMED:32112639 Psychiatric Disorders in Women Seeking Fertility Treatments: A Clinical Investigation in India. Fertility treatments began in several countries, including India, in the 1970s. Despite various advancements in intrauterine insemination (IUI) and in vitro fertilization (IVF), empirical investigations on the psychological endurance and emotional tolerance of Indian women to such treatments are rather scarce. Thus, the aim of this study is to estimate the prevalence of psychiatric disorders in Indian women seeking fertility treatments. It is a cross-sectional study with three hundred women participants undergoing various treatments at the Manipal Assisted Reproductive Centre, Kasturba Medical College, Karnataka, India. Psychiatric disorders were assessed in women using the "ICD-10 Classification of Mental and Behavioural Disorders", followed by descriptive data analysis. The results show that 78% of women have psychological issues and 45% of them have a diagnosable psychiatric condition. Adjustment Disorders, Anxiety Disorders and Mixed Anxiety and Depression Disorder are established as the top three categories of diagnoses. The findings of this study suggest that women have a high emotional stake in infertility treatments. The data highlight the need for modification of the existing treatment protocol (in Indian clinics) in ways that ensure the emotional wellbeing of patients. abstract_id: PUBMED:34674826 Fertility health information seeking among sexual minority women. Objective: To qualitatively explore and describe fertility information-seeking experiences of sexual minority women (SMW) couples using assisted reproduction. Design: Qualitative thematic analysis of 30 semistructured, in-depth individual and dyadic interviews with SMW couples. Setting: Video conferencing. Patient(s): Twenty self-identified lesbian, bisexual, and queer women comprising 10 same-sex cisfemale couples (10 gestational and 10 nongestational partners) using assisted reproduction technology in the United States. Intervention(s): Not applicable.
Main Outcome Measure(s): We describe how SMW came to learn about ways to achieve pregnancy through information seeking, acquisition, appraisal, and use. Result(s): Analysis revealed three primary themes. First, uncertainty and information scarcity: SMW have basic knowledge about how to conceive, but uncertainty persists due to information scarcity regarding how same-sex couples navigate assisted reproduction. Second, women attempt to collect fragmented information from disparate sources. The participants discussed a mixture of formal and informal, online, textual (books), and in-person seeking, finding, and synthesizing information that ranged from reliable to unreliable and from accurate to inaccurate. Finally, persistent heteronormative communication focused on the needs and conditions of male-female couples who experienced subfertility or infertility, rather than barriers related to social constraints and the absence of gametes that SMW sought to overcome. Conclusion(s): These findings support and extend existing evidence that has focused primarily on online fertility information seeking. Our findings suggest that shifts in fundamental assumptions about who seeks assisted reproductive support and why, together with improvements in fertility-related health communication, may result in more inclusive care for this population. abstract_id: PUBMED:35197683 Anti-Mullerian Hormone Levels in Indian Women Seeking Infertility Treatment: Are Indian Women Facing Early Ovarian Senescence? Background: Antimullerian hormone (AMH) is a key marker of ovarian reserve and predictor of response to fertility treatment. Aim: To understand the prevalence of low ovarian reserve in Indian women seeking infertility treatment, compare their AMH with age-matched fertile Indian controls and understand ethnic differences with Caucasian women. Setting And Design: Retrospective observational study done as a collaboration between our in vitro fertilization centre and a laboratory with Pan-India presence. Materials And Methods: Women aged 20-44 years were selected as Group A (seeking infertility treatment; n = 54,473), Group B (conceived naturally in the past; n = 283) and Group C (data of Caucasian women; n = 718). Serum AMH levels were measured and descriptive analysis was done. Statistical Analysis: Descriptive statistics and Chi-square test. Results: In Group A, 28.7%, 48.7% and 70.6% of women aged <30 years, 30-34 years and 35-39 years had serum AMH levels ≤2 ng/mL, and the proportions were higher than in Group B. The rate at which median AMH decreased was 1.1-2 times faster in Group B as compared to Group C. The decrease in median AMH across age groups in Group A was similar to Group B. Conclusions: Indian women in their late twenties and early thirties visiting fertility centers showed a worrisome trend of low AMH. Our study can be used as a reference for those women considering postponing pregnancy. It may be time to look at intangible cultural factors linked to social habits, ethnicity, diet, genetic predispositions, and environmental factors like endocrine disrupting chemicals contributing to premature ovarian senescence. abstract_id: PUBMED:24639762 A retrospective chromosome studies among Iranian infertile women: Report of 21 years. Background: Infertility is an important health problem, affecting about 15% of couples. The important role of genetic factors in the pathogenesis of infertility is now increasingly recognized.
The value of karyotyping women in the routine work-up of couples referred for sterility has long been recommended. Objective: The aim of this study was to define the frequency of all chromosomal aberrations among women who were referred to our department for infertility during the 21-year period. Materials And Methods: In this 21-year retrospective study, for the first time, we investigated 896 women who were referred to our department for infertility from 1986 to 2006. For chromosome analysis, heparinized peripheral blood samples were cultured, harvested and banded according to standard methods. Results: Out of 896 patients, 710 patients (79.24%) had a normal karyotype, and 186 patients (20.76%) showed an abnormal karyotype. Among the abnormal ones, 48 patients (25.81%) showed Turner's syndrome (45,X), and 45 patients (24.19%) had sex reversal with a 46,XY karyotype. The remaining 93 patients (50%) revealed a wide range of chromosome abnormalities. Conclusion: Our results emphasized the importance of standard cytogenetic methods in assessing the genetic characteristics of infertile females, which allow the detection of a variety of somatic chromosome abnormalities, because some of these may interfere with the success of reproduction. Answer: Chromosome analysis is not considered mandatory in the initial investigation of normovulatory women seeking infertility treatment. A study conducted on 1206 women with normal ovulatory cycles seeking infertility treatment found that the prevalence of chromosomal abnormalities (CAs) was 0.58%, which did not significantly differ from that reported in female newborns (0.79%). The study concluded that routine cytogenetic analysis cannot be advocated in normovulatory infertile women. However, it did note that women with secondary infertility had a relatively higher frequency of abnormal karyotypes, suggesting that this subgroup might benefit from routine karyotype analysis (PUBMED:15958396). In addition, other factors, including psychobiological, clinical, and sociocultural elements, can influence infertility and treatment-seeking behavior. For example, studies have shown that Black women seeking infertility treatment may face barriers such as a low perceived risk of infertility, poor understanding of treatment options, and limited social support, which can delay treatment seeking (PUBMED:35937441). Additionally, socio-demographic factors such as education level, media exposure, and socioeconomic status can affect treatment-seeking behavior and the type of treatment sought by women in different regions, such as in India (PUBMED:27141468, PUBMED:36303623). Moreover, the emotional and psychological consequences of infertility are significant and can impact women's well-being, further complicating the treatment-seeking process (PUBMED:24799871). Stigma and perceptions of stigma power also play a role in how women navigate infertility treatment, as seen in online forums where women discuss their experiences and coping mechanisms (PUBMED:26584236). In summary, while chromosome analysis can be a valuable tool in certain cases, it is not a mandatory initial step for all normovulatory women seeking infertility treatment. Other factors, including psychological, social, and demographic considerations, are also important in understanding and addressing infertility (PUBMED:15958396).
Instruction: Is addition of sodium fluoride to cyclical etidronate beneficial in the treatment of corticosteroid induced osteoporosis? Abstracts: abstract_id: PUBMED:9227164 Is addition of sodium fluoride to cyclical etidronate beneficial in the treatment of corticosteroid induced osteoporosis? Objective: To investigate whether administration of sodium fluoride (NaF) in addition to cyclical etidronate has a positive effect on bone mineral density (BMD) in patients with established osteoporosis during continued treatment with corticosteroids. Patients And Methods: 47 patients who were receiving treatment with corticosteroids were included in a two year randomised, double blind, placebo controlled trial. Established osteoporosis was defined as a history of a peripheral fracture or a vertebral deformity, or both, on a radiograph. All patients were treated with cyclical etidronate, calcium, and either NaF (25 mg twice daily) or placebo. Vitamin D was supplemented in the case of a low serum 25 (OH) vitamin D concentration. BMD of the lumbar spine and hips was measured at baseline and at 6, 12, 18, and 24 months. Results: After two years of treatment, the BMD of the lumbar spine in the etidronate/NaF group had increased by +9.3% (95% confidence intervals (CI): +2.3% to +16.2%, p < 0.01), while the BMD in the etidronate/placebo group was unchanged: +0.3% (95% CI: -2.2% to +2.8%). The difference in the change in BMD between groups was +8.9% (95% CI: +1.9% to +16.0%, p < 0.01). For the hips, no significant changes in BMD were observed in the etidronate/NaF group after two years: -2.5% (95% CI: -6.8% to +1.8%); in the etidronate/placebo group BMD had significantly decreased: -4.0% (95% CI: -6.6% to -1.4%; p < 0.01). The difference between the groups was not significant: +1.5% (95% CI: -3.4% to +6.4%). No significant differences in the number of vertebral deformities and peripheral fractures were observed between the two groups. Conclusion: The effect of combination treatment with NaF and etidronate on the BMD of the lumbar spine in corticosteroid treated patients with established osteoporosis is superior to that of etidronate alone. abstract_id: PUBMED:11036840 A pooled data analysis on the use of intermittent cyclical etidronate therapy for the prevention and treatment of corticosteroid induced bone loss. Objective: To conduct a pooled data analysis in a group of patients defined by sex, menopausal status, and underlying disease in order to examine the effect of intermittent cyclical etidronate in the prevention and treatment of corticosteroid induced osteoporosis. Methods: We selected 5 randomized, placebo controlled studies that examined the efficacy of intermittent cyclical etidronate therapy in which the raw data were available for analysis. Three were prevention studies and 2 treatment studies. The primary outcome was the difference between treatment groups in the percentage change from baseline in lumbar spine bone density. Secondary outcomes included the difference between treatment groups in the percentage change from baseline in femoral neck and trochanter bone density, and vertebral fracture rates. Results: Results are separately pooled for the prevention and treatment studies. The prevention studies had significant mean differences (95% CI) between groups in mean percentage change from baseline in lumbar spine, femoral neck, and trochanter bone density of 3.7 (2.6 to 4.7), 1.7 (0.4 to 2.9), and 2.8% (1.3 to 4.2) after one year of treatment, in favor of the etidronate group.
The treatment studies displayed a mean difference between groups in mean percentage change from baseline in lumbar spine bone density of 4.8 (2.7 to 6.9) and 5.4% (2.5 to 8.4) after one and two years of therapy. In the prevention studies, a reduced fracture incidence was observed in the etidronate group compared with the placebo group (relative risk 0.50; CI 0.21 to 1.19). Conclusion: Etidronate therapy was effective in preventing bone loss in the prevention studies and in preventing or slightly increasing bone mass in the treatment studies. A fracture benefit was observed in postmenopausal women treated with etidronate in the prevention studies. abstract_id: PUBMED:14719212 Effect of intermittent cyclical etidronate therapy on corticosteroid induced osteoporosis in Japanese patients with connective tissue disease: 3 year followup. Objective: A 3 year prospective randomized study was conducted to clarify the efficacy of intermittent cyclical etidronate therapy on corticosteroid induced osteoporosis. Methods: A group of 102 Japanese patients were enrolled, each taking > 7.5 mg of prednisolone daily for at least 90 days. Patients were randomly divided into 2 treatment groups: Group E (etidronate) took 200 mg etidronate disodium per day for 2 weeks with 3.0 g calcium lactate and 0.75 µg alphacalcidol daily; Group C (control) took 3.0 g calcium lactate and 0.75 µg alphacalcidol daily. Outcome measurements included changes from baseline in bone mineral density (BMD) of the lumbar spine and the rate of new vertebral fractures at 48 and 144 weeks. Results: The mean (+/- SD) lumbar spine BMD increased 3.7 +/- 5.6% (p < 0.01) and 1.5 +/- 4.1% (NS) from baseline at 48 weeks and 4.8 +/- 6.9% (p < 0.005) and 0.4 +/- 5.0% (NS) from baseline at 144 weeks in Group E and Group C, respectively. The improvement of BMD in Group E was significantly greater than in Group C at 144 weeks (p < 0.01). Among the 3 subgroups (men, premenopausal women, and postmenopausal women), the postmenopausal women showed the greatest improvement. Mean percentage change in this subgroup was 10.1 +/- 8.0% and 1.35 +/- 6.4% in Group E and Group C, respectively. We noted that 2 patients in Group C had new vertebral fractures, whereas no fractures were observed in Group E. Conclusion: These results indicate that intermittent cyclical etidronate therapy is effective for the prevention and treatment of corticosteroid induced osteoporosis in patients with connective tissue diseases. abstract_id: PUBMED:12355861 The clinical benefits to bone mineral density were shown by cyclical oral etidronate administration in steroid induced osteoporosis. Purpose: To compare the bone-mass effects of intermittent cyclic etidronate administration in various rheumatic disease patients with corticosteroid-induced osteoporosis. Patients And Methods: We evaluated bone mineral density (BMD) of the lumbar spine in 34 female patients (mean age: 46.4 +/- 13.7 years; range 17-71) treated with long-term corticosteroid therapy (> 6 months). Eighteen patients cyclically received etidronate orally (400 mg or 200 mg etidronate daily for 2 weeks, followed by 10-12-week drug-free periods). Twelve of these 18 patients received 400 mg (group A) and another 6 patients were treated with 200 mg/day (group B). Sixteen patients who did not receive etidronate were analysed as a control group. Results: Cyclical etidronate therapy showed a significant increase in BMD.
The BMD of the lumbar spine increased from 0.760 +/- 0.10 g/cm² to 0.783 +/- 0.11 g/cm² (% change from baseline 2.91 +/- 2.56%/year) in group A patients after 12 months. Reduced BMD (% change from baseline 1.55 +/- 2.48%) was observed in 16 control group patients (P < 0.0012). The BMD in group A was significantly higher than in group B or the control group after the etidronate treatment. In 7 patients of group A, BMD increased significantly at 6 months, but no further significant increase was seen at 12 months compared with the 6-month value. On the other hand, BMD tended to increase for up to 2 years of intermittent cyclic etidronate treatment in 8 cases of group A. There were no adverse effects or abnormal laboratory data related to the administration of etidronate. Although 2 cases in group A showed findings of compression fracture before the study, no new compression fractures appeared in any group during the study. Conclusion: It was shown that cyclical etidronate therapy is effective for steroid induced osteoporosis. abstract_id: PUBMED:7733124 Cyclical etidronate plus ergocalciferol prevents glucocorticoid-induced bone loss in postmenopausal women. Objective: To assess the benefit of cyclical etidronate plus ergocalciferol for the prevention of glucocorticoid-induced bone loss in a 2-year, prospective, open study based in an osteoporosis clinic. Patients And Methods: Group 1 consisted of 15 postmenopausal women (mean age 62.6 +/- 3.3 years) who commenced glucocorticoid therapy and were treated with cyclical etidronate (400 mg/d for the first month; thereafter, 400 mg/d for 2 weeks of every 3-month period), elemental calcium (1 g/d), and ergocalciferol (0.5 mg/wk). Group 2 consisted of 11 postmenopausal women (mean age 60.2 +/- 4.7 years) with glucocorticoid-induced osteoporosis, who were attending the clinic at the same time and were treated with calcium supplements only (1 g/d). Measurements: Lumbar spine and femoral neck bone mineral densities (BMD) were measured at baseline and after 12 and 24 months of glucocorticoid therapy using a dual energy x-ray absorptiometer. Results: The two groups did not differ with respect to age, years since the menopause, mean daily glucocorticoid dose, and baseline BMD values. During the first year of therapy, mean lumbar spine BMD increased from an initial value of 0.88 g/cm² to 0.94 g/cm², an increase of 7% per year (95% confidence interval [CI] 3.7% to 10.2%; P < 0.001 compared with controls). Significant increases in BMD of 2.5% per year were also observed in the femoral neck (95% CI -1% to 6%; P < 0.01 compared with controls). After the second year of cyclical etidronate therapy, femoral neck BMD continued to increase (P < 0.05 compared with value at 12 months), while lumbar spine BMD remained stable. Conclusion: Chronic glucocorticoid therapy may result in bone loss at most skeletal sites. Therapy with cyclical etidronate plus ergocalciferol not only prevented glucocorticoid-induced bone loss, but even increased lumbar spine and femoral neck BMD in postmenopausal women commencing glucocorticoid therapy. abstract_id: PUBMED:10745305 Etidronate therapy in the treatment and prevention of osteoporosis. Etidronate disodium is an oral bisphosphonate compound known to reduce bone resorption through the inhibition of osteoclastic activity. This article is a review of its efficacy and safety in the treatment and prevention of postmenopausal and corticosteroid-induced osteoporosis.
In general, studies of cyclical etidronate therapy (400 mg daily for 2 wk every 3 mo) have found a significant improvement in bone density. These studies have not been powered to study fracture incidence, but a reduced fracture rate has been found in some of the studies reviewed. Studies examining cyclical etidronate in the prevention of osteoporosis indicate it prevents early menopausal bone loss and is free of significant side effects. In both prevention of corticosteroid-induced osteoporosis and treatment of patients who have been on long-term corticosteroid therapy, cyclical etidronate appears to increase bone density and prevent further loss of bone. In summary, a review of available literature pertaining to the use of etidronate in prevention and treatment of primary and secondary osteoporosis has been presented. This review suggests etidronate, used as a cyclical therapy, is a safe and effective therapy. The weight of evidence suggests it is capable of reducing fracture risk in patients with osteoporosis. Increases in bone density at the spine and hip are not as pronounced as with some other bisphosphonates, particularly alendronate, but no direct clinical comparison trials of significant size or duration have been undertaken. abstract_id: PUBMED:18050368 Longterm effect of intermittent cyclical etidronate therapy on corticosteroid-induced osteoporosis in Japanese patients with connective tissue disease: 7-year followup. Objective: To determine the efficacy and safety of intermittent cyclical etidronate therapy of up to 7 years for corticosteroid-induced osteoporosis. Methods: One hundred two Japanese patients who originally participated in a 3-year prospective randomized study were enrolled into an open-label followup study. All patients had received > 7.5 mg of prednisolone daily for at least 90 days before entry into the original study and were randomly assigned to 2 treatment arms: E, those receiving etidronate disodium (200 mg per day) for 2 weeks together with 3.0 g of calcium lactate and 0.75 µg of alphacalcidol daily; and C, controls receiving only the latter. Endpoints included changes from baseline in bone mineral density (BMD) of the lumbar spine and the rate of new vertebral fractures. Results: The mean (+/- SD) lumbar spine BMD had increased by 5.9% +/- 8.8% (p = 0.00007) and 2.2% +/- 5.8% (p = 0.013) from baseline after 7 years in groups E and C, respectively. This improvement in BMD in group E was significantly better than in group C (p = 0.02). The frequency of new vertebral fractures was lower in group E, resulting in reduction of the risk of such new fractures by 67% at year 7 (odds ratio 3.000; 95% confidence interval, 0.604-14.90; p = 0.18). There were no severe adverse events in group E during our study. Conclusion: Our results indicate that longterm (up to 7 years) intermittent cyclical etidronate therapy is safe and effective for prevention and treatment of corticosteroid-induced osteoporosis in patients with connective tissue diseases. abstract_id: PUBMED:12673885 Calcium, vitamin D and etidronate for the prevention and treatment of corticosteroid-induced osteoporosis in patients with rheumatic diseases. Introduction: Long-term glucocorticoid therapy, a major risk factor for the development of osteoporosis, is often necessary in chronically ill patients. At present there are no generally accepted guidelines for the prevention or treatment of steroid-induced osteoporosis.
Methods: In an open prospective study we investigated 99 patients with chronic rheumatic diseases receiving ≥ 5 mg/day of prednisolone or the equivalent for at least one year. The objective was to identify osteoporosis risk factors in addition to glucocorticoid therapy and to evaluate the efficacy of prevention with calcium/vitamin D (group 1: patients with osteopenia) and treatment with cyclical etidronate (group 2: patients with osteoporosis). Biochemical markers of bone turnover, clinical parameters and bone mineral density (BMD) were measured. Results: Increasing age and postmenopausal status were associated with more advanced manifestations of steroid-induced osteoporosis (p < 0.05). One year after the start of therapy, parameters of bone metabolism increased significantly in group 1, while BMD did not change. In group 2, lumbar spine BMD increased significantly (p < 0.05) whereas femoral neck BMD and bone metabolism parameters remained constant. The intensity of back pain decreased in both groups (p < 0.05). There were fewer new fractures in group 2 than in group 1. Conclusion: Treatment with etidronate is effective in patients with glucocorticoid-induced osteoporosis. abstract_id: PUBMED:7653482 Cyclical etidronate reverses bone loss of the spine and proximal femur in patients with established corticosteroid-induced osteoporosis. Purpose: To compare the bone-mass effects of calcium supplementation and intermittent cyclic etidronate in patients with established corticosteroid-induced osteoporosis. Patients And Methods: Eighteen male and 21 female patients who had established corticosteroid-induced osteoporosis and were receiving chronic prednisone therapy (≥ 10 mg/d) were enrolled in a prospective 12-month, open-label study. In addition to continuing prednisone therapy, patients received continuous calcium supplementation 500 mg/d (n = 20) or four cycles of intermittent cyclic etidronate therapy consisting of etidronate 400 mg/d for 14 days followed by calcium 500 mg/d for 76 days (n = 19). Bone mineral density (BMD) of the spine (L1 through L4) and proximal femur (total hip, femoral neck, trochanter, Ward's triangle) was measured by dual-energy x-ray absorptiometry at baseline, 6 months, and 12 months by staff blinded to the treatment. Serum calcium, phosphorus, and alkaline phosphatase were also measured at these times. Results: Treatment with intermittent cyclic etidronate for 12 months resulted in significant increases of 5.7% and 6.8% in BMD of the spine and proximal femur (total hip), respectively (P < 0.02 versus baseline; P < 0.001 versus calcium group). Calcium supplementation alone did not prevent significant losses of 3.4% and 4.1% in BMD at the respective sites (P < 0.02 versus baseline). At the end of the study Z scores reflected significant increases in BMD of the spine and proximal femur (all regions) in the etidronate group (P < 0.01), and significant decreases at the spine, proximal femur, and trochanter in the calcium group (P < 0.01). After 12 months, the difference between the groups was 9.1% (P < 0.01; 95% CI 6.3% to 11.9%) at the spine and 10.9% (P < 0.01; 95% CI 7.8% to 14.1%) at the proximal femur (total hip). Seventeen (89%) of the etidronate-treated patients had increases in BMD of both skeletal sites, whereas only 2 (10%) and 3 (15%) of the calcium-treated patients had positive changes in BMD of the spine and proximal femur (total hip), respectively (P < 0.01).
Serum calcium, phosphorus, and alkaline phosphatase levels did not change significantly during the study in either treatment group. Both treatment regimens were well tolerated, with no interactions between prednisone therapy and the study medications. Analyses of response by subgroups (female/male, pulmonary/nonpulmonary indication for prednisone) showed no significant attribute-dependent changes during the 12-month study. At baseline, women had significantly lower BMD of the spine and proximal femur (total hip) (P < 0.01), and patients with pulmonary disease had a significantly longer duration of prednisone therapy and a higher cumulative prednisone dose (P < 0.03). Conclusions: Intermittent cyclic etidronate reversed the progressive loss of bone mineral density of the spine and proximal femur in female and male patients with established osteoporosis secondary to chronic corticosteroid (prednisone) therapy for pulmonary and nonpulmonary diseases. Calcium supplementation alone did not prevent or attenuate corticosteroid-induced losses. abstract_id: PUBMED:10405943 36 month intermittent cyclical etidronate treatment in patients with established corticosteroid induced osteoporosis. Objective: To determine the longterm safety and efficacy of etidronate therapy in patients in whom corticosteroid induced bone loss has already occurred. Methods: We performed a 36 month observational cohort study in which all data were obtained from the Canadian Database of Osteoporosis and Osteopenia (CANDOO) patients. The etidronate group consisted of 24 patients who received 400 mg of etidronate disodium for 14 days, followed by 76 days of calcium carbonate (500 mg of elemental calcium), repeated every 3 mo; the control group included 37 patients who received calcium carbonate 500 to 1000 mg daily. Outcome measurements included changes within groups from baseline and differences between groups in the bone mineral density (BMD) of the lumbar spine, femoral neck, and trochanter at 12, 24, and 36 months. The incidence of vertebral fractures was also determined. Results: Etidronate therapy resulted in a meaningful percentage increase from baseline in lumbar spine BMD, primarily during the first 24 months of treatment, and this increase was sustained for the remainder of the 36 month study period (5.2%; p = 0.016). Analysis of covariance revealed a significant percentage difference (SD) between groups in lumbar spine BMD at 12 [5.5 (13.5) percent; p = 0.003] and 24 months [6.0 (17.4) percent; p = 0.011] in favor of the etidronate group. After 3 years of therapy, one patient (4%) experienced one vertebral fracture in the etidronate group, whereas 3 patients (8%) experienced 5 vertebral fractures in the control group. Conclusion: Etidronate treatment administered for 36 months reversed lumbar spine bone loss, and appeared to be safe in patients with established corticosteroid induced osteoporosis. Answer: The addition of sodium fluoride (NaF) to cyclical etidronate has been investigated for its potential benefits in the treatment of corticosteroid-induced osteoporosis. According to a two-year randomized, double-blind, placebo-controlled trial, the combination treatment with NaF and etidronate resulted in a significant increase in bone mineral density (BMD) of the lumbar spine in corticosteroid-treated patients with established osteoporosis, compared to etidronate alone.
Specifically, the BMD of the lumbar spine in the etidronate/NaF group increased by +9.3%, while the BMD in the etidronate/placebo group was essentially unchanged (+0.3%). However, for the hips, no significant changes in BMD were observed in the etidronate/NaF group after two years, and no significant differences in the number of vertebral deformities and peripheral fractures were observed between the two groups (PUBMED:9227164). In summary, the addition of sodium fluoride to cyclical etidronate appears to be beneficial for increasing BMD of the lumbar spine in patients with corticosteroid-induced osteoporosis, although it does not seem to significantly affect hip BMD or reduce the number of fractures.
Instruction: Do psychological factors predict symptom severity in patients with subjective food hypersensitivity? Abstracts: abstract_id: PUBMED:20433401 Do psychological factors predict symptom severity in patients with subjective food hypersensitivity? Objective: We examined whether psychological factors such as general and gastrointestinal symptom-specific anxiety and depression could predict symptom severity in patients with unexplained, self-reported (subjective) food hypersensitivity. For this purpose, we translated and validated the Visceral Sensitivity Index (VSI). Material And Methods: Seventy consecutive patients completed questionnaires for the Hospital Anxiety and Depression Scale, VSI, Irritable Bowel Syndrome Symptom Questionnaire, and Subjective Health Complaints Inventory. The relationship between scores on psychological factors and scores on somatic symptoms was studied by multiple regression analyses. Results: Most patients reported non-gastrointestinal symptoms in addition to their irritable bowel syndrome complaints, but general and symptom-specific anxiety and depression could not explain a significant amount of the variance in somatic complaints. Gastrointestinal symptom-specific anxiety was a significant predictor of gastrointestinal complaints (p = 0.02), and age was the sole significant predictor of non-gastrointestinal complaints (p = 0.01). Approximately 90% of the total variance in symptom severity remained unexplained by the psychological factors. The Norwegian version of the VSI had satisfactory validity (Cronbach alpha = 0.93). Symptom-specific and general anxiety were significantly correlated (r = 0.48, p ≤ 0.0001). Conclusions: Psychological factors were not major predictors of symptom severity in patients with subjective food hypersensitivity. The Norwegian version of VSI had satisfactory validity. abstract_id: PUBMED:30153887 Factors that predict disease severity in atopic dermatitis: The role of serum basal tryptase. Background: Increased numbers of mast cells that contain tryptase are found in lesional atopic dermatitis (AD) skin. The association of serum basal tryptase (sBT) with anaphylactic reactions and mast cell diseases has recently been shown in children with venom and food allergy. Objective: We aimed to identify the risk factors that predict the severity of AD and the association of sBT levels with AD and disease severity. Method: AD diagnosis was made according to Hanifin and Rajka criteria. Disease severity was scored by the objective scoring atopic dermatitis (SCORAD) index. The sBT levels were measured. Skin-prick testing, total immunoglobulin E, eosinophil percentages and counts, and a questionnaire concerning the history of atopic diseases and the risk factors of AD were applied. Results: The children, ages 0.5 to 3.0 years, with AD (n = 85) were analyzed in two groups according to the presence (AD+/atopy+ [n = 55]) or absence (AD+/atopy- [n = 30]) of skin-prick test positivity. The comparisons were made with an age- and sex-matched control group (n = 82). The median (interquartile range) sBT in the AD+/atopy+, AD+/atopy-, and control groups were 5.01 ng/mL (2.75-6.79 ng/mL), 3.02 ng/mL (1.67-4.44 ng/mL), and 2.63 ng/mL (1.31-4.49 ng/mL), respectively (p = 0.003). The median (interquartile range) sBT levels were higher in patients with moderate-severe objective SCORAD index scores compared with those with mild disease (3.85 ng/mL [2.04-5.91 ng/mL] versus 2.80 ng/mL [1.83-3.48 ng/mL]; p = 0.038).
Multivariate logistic regression analysis showed that an sBT level of ≥3.9 ng/mL (odds ratio 8.77 [95% confidence interval, 1.87-41.18]; p = 0.006) was independently associated with an increased risk of moderate-severe AD (objective SCORAD index). Conclusion: To our knowledge, this was the first study that indicated that sBT levels may be important in the AD disease process and associated with disease severity and atopy. abstract_id: PUBMED:21189836 Duodenal administered seal oil for patients with subjective food hypersensitivity: an explorative open pilot study. Short-term duodenal administration of n-3 polyunsaturated fatty acid (PUFA)-rich seal oil may improve gastrointestinal complaints in patients with subjective food hypersensitivity, as well as joint pain in patients with inflammatory bowel disease (IBD). The aim of the present explorative pilot study was to investigate whether 10-day open treatment with seal oil, 10 mL self-administered via a nasoduodenal tube 3 times daily, could also benefit nongastrointestinal complaints and quality of life (QoL) in patients with subjective food hypersensitivity. Twenty-six patients with subjective food hypersensitivity, of whom 25 had irritable bowel syndrome (IBS), were included in the present study. Before and after treatment and 1 month posttreatment, patients filled in the Ulcer Esophagitis Subjective Symptoms Scale (UESS) and the Gastrointestinal Symptom Rating Scale (GSRS) for gastrointestinal symptoms and the subjective health complaints (SHC) inventory for nongastrointestinal symptoms, in addition to the short form of the Nepean dyspepsia index (SF-NDI) for evaluation of QoL. Compared with baseline, gastrointestinal, as well as nongastrointestinal, complaints and QoL improved significantly, both at the end of treatment and 1 month posttreatment. The consistent improvements following seal oil administration warrant further placebo-controlled trials for confirmation of effect. abstract_id: PUBMED:34764038 Exacerbating factors and disease burden in patients with atopic dermatitis. The number of patients with atopic dermatitis is on the rise worldwide, and Japan is no exception. According to recent estimates of the percentage of patients with atopic dermatitis in Japan by age, the majority of patients are between 20 and 44 years old. Because the peak age of onset of atopic dermatitis is during infancy, many patients may experience prolonged symptoms from infancy to adulthood. A prolonged clinical course also increases the burden of atopic dermatitis on affected patients. Decreased productivity due to work disruptions, reduced daily activity, higher direct medical costs, fatigue, and daytime sleepiness due to sleep disturbances are typical burdens on patients with atopic dermatitis. In order to reduce these burdens, it is necessary to shorten its clinical course and achieve long-term control without relying on medications, possibly by using avoidance of or coping measures for aggravating factors. Typical aggravating factors of atopic dermatitis include irritant dermatitis, food allergy in children, sweating, and psychological stress in adults. Food allergy places a heavy burden on the quality of life of affected patients and their families. The effectiveness of educational interventions for sweating and psychological stress is unclear. We must also evaluate the economic burden on patients and the cost-effectiveness of interventions addressing these aggravating factors.
abstract_id: PUBMED:19961557 Job stress and coping strategies in patients with subjective food hypersensitivity. Psychological distress may be causally related to multiple, unexplained somatic symptoms. We have investigated job stress, coping strategies and subjective health complaints in patients with subjective food hypersensitivity. Sixty-four patients were compared with 65 controls. All participants filled in questionnaires focusing on job stress, job demands and control, work environment, coping strategies and subjective health complaints. Compared with controls, patients scored significantly lower on job stress and job demands, and significantly higher on authority over job decisions. Coping strategies and satisfaction with work environment did not differ significantly between the two groups, but the patients reported significantly more subjective health complaints than the controls. Scores on job stress and job demands were generally low in patients with subjective food hypersensitivity. It is unlikely, therefore, that the patients' high scores on subjective health complaints are causally related to the work situation. abstract_id: PUBMED:15285275 Symptom relief and adherence in the rotary diversified diet, a treatment for environmental illness. Context: The rotary diversified diet, which involves food elimination and rotation of remaining allowed foods, is commonly used in the management of environmental illness. No studies have considered patient adherence while evaluating the effectiveness of the diet in controlling symptoms. Objective: The study examined the severity of patients' perceived symptoms and dietary adherence during treatment with a rotary diversified diet. Design: A prospective and exploratory study using purposive sampling and the following data collection methods: personal interviews, symptom severity questionnaires, and food records to assess dietary adherence. Setting: Private clinic of a Toronto, Ontario physician specializing in environmental medicine. Patients Or Other Participants: Twenty-five female residents of Toronto, Ontario (aged 25-67 years) diagnosed with environmental illness. Intervention: Patients were treated with a rotary diversified diet for 16 weeks. Main Outcome Measures: Symptom severity and dietary adherence were assessed after 4, 10, and 16 weeks of treatment. Adherence was assessed by comparing food records to the diet prescription. Results: At 16 weeks, patients reported a 50% decline in symptom severity for 5 of the 6 symptom categories assessed and for all categories combined. Those with closer elimination and rotation adherence reported a greater decline in gastrointestinal symptoms at 4 and 10 weeks of treatment, respectively. Improvement in total symptom severity was associated with closer rotation adherence at 10 weeks. Patients experienced difficulties in adhering to the diet. Conclusions: Results suggest that the diet, if followed, is beneficial, especially in improving gastrointestinal symptoms. Further evaluation of its effectiveness is limited by its complexity and the nature of environmental illness. Because the diet is difficult to follow over time, patients require extensive nutritional counseling and support. abstract_id: PUBMED:10427511 Psychological status and motivation for psychosocial intervention in patients with allergic disorders. The aim of this study was to compare the psychological stress of patients with different forms of immediate type hypersensitivity and urticaria.
Moreover, the patients' motivation for different forms of psychological treatment was assessed and an indication for psychosocial support was defined. 228 consecutive inpatients with insect venom allergies (ins), food intolerance (food), drug hypersensitivities (dru) and urticaria (urt) were evaluated by validated questionnaires regarding psychological strain and motivation for psychosocial treatments. Patients with food intolerance and urticaria showed significantly elevated psychological stress and higher motivation for psychosocial support as compared to those with insect venom allergies and drug intolerance. Patient education was the favourite technique for the patients (food 78%, urt 57%, dru 24%, ins 17%), followed by relaxation treatment. The most important predictors for the motivation were the wish for self-responsibility, a feeling of helplessness and social limitations. If strong indication criteria are applied, psychosocial support is indicated in only small subgroups of each patient group. In spite of that, the management of allergic disease should consider the potential need for psychosocial support. abstract_id: PUBMED:35616889 Organ-specific symptom patterns during oral food challenge in children with peanut and tree nut allergy. Background: Peanut and tree nut allergies are common in childhood and often severe in nature. The clinical picture shows a wide variety of symptoms. Objective: To analyze the distribution of clinical symptoms and severity during oral food challenges (OFC) in children. Methods: Analysis of 1,013 prospectively recorded, positive OFCs with peanut (n = 607), hazelnut (n = 266), walnut (n = 97), and cashew (n = 43). Symptoms were categorized as immediate-type skin, gastrointestinal, upper and lower respiratory, cardiovascular symptoms, and eczema exacerbation. Symptom severity and treatment were recorded. Results: Skin symptoms presented in 78%, followed by gastrointestinal (47%), upper (42%), and lower respiratory symptoms (32%). Cardiovascular symptoms presented in 6%. In three-quarters of the reactions, more than one organ was involved. Importantly, severe reactions occurred at every dose level. Peanut- and cashew-allergic patients had a higher relative risk of gastrointestinal symptoms compared with hazelnut- and walnut-allergic patients. Patients without vomiting had a 1.7 times higher risk of developing immediate-type skin and/or lower respiratory symptoms. Three-quarters of the patients had ever had eczema, but worsening presented in only 10.5% of the OFCs. In patients with multiple food allergies, organs involved, eliciting dose and severity differed between allergens. Conclusion: Although comparisons between allergen groups with different clinical history, severity, comorbidities and laboratory data are difficult and might contain bias, our data confirm the high allergenic potential of peanut and tree nuts. The rare occurrence of eczema worsening emphasizes that avoidance diets of peanuts and tree nuts to cure eczema seem to be unnecessary and may hamper tolerance maintenance. abstract_id: PUBMED:27082554 Cardiovascular Risk Factors in Parents of Food-Allergic Children. Previous studies suggest that chronic stress may induce immune system malfunction and a broad range of adverse health outcomes; however, the underlying pathways for this relationship are unclear.
Our study aimed to elucidate this question by examining the relationship between parental cardiovascular risk factors including systolic blood pressure (SBP), diastolic blood pressure (DBP), body mass index (BMI), and waist-to-hip ratio (WHR) and maternal psychological stress score (MPSS) relative to the severity of the child's food allergy (FA) and number of affected children. SBP, DBP, BMI, and WHR were measured and calculated at the time of recruitment by trained nurses. MPSS was obtained based on self-report questionnaires covering lifestyle adjustments, perceived chronic stress, and quality of life. General linear models examined whether caregiver chronic stress was associated with FA. For mothers with children under age 5 years, SBP, DBP and number of affected children had strong and graded relationships with severity of the child's FA. MPSS was also significantly and positively associated with child FA severity (P < 0.001). However, no relationships were found between FA severity and BMI or WHR for either parent. This was also the case for paternal SBP, DBP, and number of affected children of any age. There is a strong and graded link between cardiovascular risk and perceived stress in mothers of food-allergic children under age 5. Findings may have important implications for family-centered care of FA, may generalize to caregivers of children with chronic conditions, and extend the literature on allostatic load. abstract_id: PUBMED:30115513 Survey on changes in subjective symptoms, onset/trigger factors, allergic diseases, and chemical exposures in the past decade of Japanese patients with multiple chemical sensitivity. Background: Recently, with rapid changes in the Japanese lifestyle, the clinical condition of patients with multiple chemical sensitivity (MCS) may also have undergone change. Thus, we conducted a new survey for subjective symptoms, ongoing chemical exposures, the prevalence of allergic diseases, and presumed onset/trigger factors in patients with MCS and compared results with those of an old survey from ten years ago. Methods: The new survey was conducted from 2012 to 2015 and the old survey was independently conducted from 1999 to 2003, meaning it was not a follow-up study. Patients were initially diagnosed by physicians at five medical institutions with MCS specialty outpatient services, with 111 and 103 patients participating in the new and old surveys, respectively. The controls were a general population living in Japan, with 1313 and 2382 participants in the new and old surveys, respectively. Subjective symptoms and ongoing chemical exposure were evaluated using a quick environmental exposure sensitivity inventory. Additionally, from clinical findings recorded by an attending physician, the prevalence of allergic diseases and presumed onset/trigger factors were evaluated. Differences between new and old surveys were analyzed using logistic regression analyses and significance tests. Results: Compared with ten years ago: (1) Regarding factors affecting patients with ongoing chemical exposures, the proportion of patients affected decreased significantly for two items only (insecticides and second-hand smoke). The proportion of controls showing ongoing exposure to 8 out of 10 items changed significantly. (2) In patients, scores for chemical intolerances, other intolerances, and life impacts increased significantly.
(3) In terms of the prevalence of allergic diseases among patients with MCS, bronchial asthma (adjusted odds ratio [AOR]: 5.19), atopic dermatitis (AOR: 3.77), allergic rhinitis (AOR: 5.34), and food allergies (AOR: 2.63) increased significantly, while hay fever (AOR: 0.38) and drug allergies (AOR: 0.40) decreased significantly. (4) Construction and renovation, the presumed predominant onset/trigger factor for MCS 10 years ago, decreased from 68.9% to 35.1%; in contrast, electromagnetic fields (0.0%-26.1%), perfume (0.0%-20.7%), and medical treatment (1.9%-7.2%) increased significantly, confirming the diversification of onset/trigger factors. Conclusion: Compared to ten years ago, patients with MCS showed a confirmed increase in avoidance behavior toward chemical substance exposures that were presumed to be aggravating factors for symptoms. It has been suggested that the ongoing chemical exposure of the general population in Japan has largely changed. In addition, for patients with MCS, chemical intolerances and life impacts have become severe, the prevalence of the main allergic diseases has increased, and onset/trigger factors have become diversified. Answer: No, psychological factors do not appear to be major predictors of symptom severity in patients with subjective food hypersensitivity. A study examining the relationship of psychological factors such as general and gastrointestinal symptom-specific anxiety and depression with symptom severity in these patients found that these factors could not explain a significant amount of the variance in somatic complaints. Gastrointestinal symptom-specific anxiety was a significant predictor of gastrointestinal complaints, and age was the sole significant predictor of non-gastrointestinal complaints. However, approximately 90% of the total variance in symptom severity remained unexplained by the psychological factors (PUBMED:20433401).
Instruction: Does laterally rotated flap design influence the short-term periodontal status of second molars and postoperative discomfort after partially impacted third molar surgery? Abstracts: abstract_id: PUBMED:25872465 Does laterally rotated flap design influence the short-term periodontal status of second molars and postoperative discomfort after partially impacted third molar surgery? Purpose: To assess the influence of the surgical removal of partially impacted third molars (3Ms) and compare the effects of a 3-cornered laterally rotated flap (LRF) with primary closure (flap 1) and an envelope flap with secondary closure (flap 2) on the short-term periodontal status of the adjacent second molars (2Ms). We also assessed the postoperative complications after removal of the partially impacted 3M. Materials And Methods: A split-mouth, randomized clinical study was designed. The study sample included patients with bilateral partially impacted 3Ms. The primary predictor variable was the type of flap design (flaps 1 and 2). The primary outcome variable was periodontal status (gingival recession [GR], probing depth [PD], plaque index [PI], and gingival index) of the 2Ms measured preoperatively and 90 days postoperatively. The secondary outcome variables were postoperative complications, including pain, facial swelling, alveolitis, and local wound infection. The other variables included gender, position of the 3Ms, and surgical difficulty. We performed descriptive, comparative, correlation, and multivariate analyses. Results: The sample included 28 patients aged 18 to 28 years. The GR, PD, and PI values with the flap 2 design were greater than those with the flap 1 design (P < .05). Facial swelling with the flap 1 design was significantly greater than with the flap 2 design on the second postoperative day (P < .05). The pain levels with the flap 1 design were significantly greater than those with the flap 2 design on the first and second postoperative days (P < .05). According to the multivariate regression analyses, flap design was closely related to the periodontal status of the 2Ms and postoperative discomfort. Conclusion: The results of the present clinical study have shown that the flap design in partially impacted 3M surgery considerably influences the early periodontal health of the 2Ms and postoperative discomfort. However, although the 3-cornered LRF design might cause more pain and swelling, it could be the method of choice for partially impacted 3M surgery because of the early periodontal healing. abstract_id: PUBMED:33790564 Effects of Impacted Lower Third Molar Extraction on Periodontal Tissue of the Adjacent Second Molar. The extraction of impacted lower third molars (ILTM) is one of the most common procedures in oral-maxillofacial surgery. Being adjacent to lower second molars, most impacted lower third molars often lead to distal periodontal defects of adjacent second molars. Several symptoms may occur after extraction, such as periodontal pocket formation, loss of attachment, alveolar bone loss and even loosening of the second molar, resulting in its extraction.
At present, several studies have suggested that dentists can reduce the risk of periodontal defects of the second molar after ILTM extraction through preoperative evaluation, reasonable selection of flap design, extraction instruments and suture type, and necessary postoperative interventions. This review summarizes the research progress on the influencing factors and intervention methods for distal periodontal defects of the adjacent second molar after extraction of impacted mandibular third molars, as well as some limitations of the current evidence, with the aim of opening up future directions for studying the effects of ILTM extraction on the periodontal tissue of the adjacent second molar. abstract_id: PUBMED:12029279 Influence of flap design on periodontal healing of second molars after extraction of impacted mandibular third molars. Objective: The aim of this study was to compare the influence of two mucoperiosteal flaps on periodontal healing of adjacent second molars after extraction of impacted mandibular third molars. Study Design: An envelope incision with a releasing incision anterior to the second molar (3-cornered flap) was used on one side and a Szmyd flap on the other side in 14 patients with bilateral impaction of mandibular third molars. The periodontal health of the second molars was evaluated before surgery and at 3 and 6 months postoperatively. A Williams periodontal probe was used to measure the pocket depth, clinical attachment level, and bone level of the buccal and mesial surfaces of the second molars. The postoperative measurements were analyzed by using analysis of covariance, with the covariables being the preoperative measurements and the variation factors being the type of flap used, the surface measured, and the time since the procedure. Results: No statistically significant differences were found in comparing measurements of probing depth, clinical attachment level, or bone level for the 2 types of flap used or the 2 surfaces measured. However, there was a statistically significant increase in all 3 measurements from the 3-month to the 6-month postoperative time. Conclusion: Independent of the design of the mucoperiosteal flap used in extracting an impacted mandibular third molar, the periodontal condition of the adjacent second molar worsened from 3 to 6 months, although it remained within normal values. abstract_id: PUBMED:28932049 Evaluation of two flap designs on the mandibular second molar after third molar extractions. Background: The extraction of third molars is associated with some clinical outcomes and periodontal problems. It is imperative to note that the type of incision used in the surgery for the removal of the impacted third molar is critical. The design of the flap influences the healing of the surgically created defect and damage to the distal periodontal area of the adjacent second molar. However, to date, there have been conflicting reports on the influence of different flap designs used for the surgical removal of impacted third molars. Aim: The present study aimed to comparatively evaluate the clinical outcomes and periodontal status of the adjacent second molar when two different flap designs, namely the envelope and the modified triangular flap designs, were used. Materials And Methods: Sixty female patients with bilateral impacted third molars completed the study, with an envelope flap on one side and a modified triangular flap design on the other side of the mandible for third molar removal.
Clinical parameters including pain, dehiscence and swelling were assessed postoperatively, and periodontal probing depth (PPD) on the distal aspect of the adjacent second molar was assessed both pre- and postoperatively. Results: Pain was assessed on days 1, 3, and 8 using a visual analog scale. The subjective perception of swelling was evaluated on days 3, 7, and 15 postoperatively in a similar manner. The periodontal parameters were evaluated both preoperatively and 3 months postoperatively, with cautious exploration using a University of North Carolina (UNC)-15 periodontal probe. Statistically significant differences between the two flap groups were noted for swelling and PPD using the Chi-square test (P < 0.05). Conclusion: The study revealed that the modified triangular flap was associated with smaller postoperative PPDs and less dehiscence. The envelope flap was better when swelling was analyzed. The pain scores, though slightly higher for the modified triangular flap group, did not differ significantly. abstract_id: PUBMED:21519580 Complications in surgical removal of impacted mandibular third molars in relation to flap design: clinical and statistical evaluations. Objective: The extraction of an impacted mandibular third molar may result in periodontal complications on the distal surface of the adjacent second molar. The purpose of this study was to compare the influence of three full-thickness flaps on the periodontal healing of the adjacent second molar after extraction of impacted mandibular third molars. Method And Materials: Forty-five volunteers with bilateral impaction of the mandibular third molars were selected. Each patient was randomly assigned to one of three groups: group A (envelope flap modified by Thibauld and Parant), group B (Laskin triangular flap), and group C (envelope flap modified by Laskin). The periodontal health of the second molars was evaluated at 3, 6, 12, and 24 months after surgery via clinical measurements. Results: After 21 days, there was no correlation between postoperative complications (such as edema and alveolitis) and flap design. However, there was a statistically significant reduction of pocket probing depth (PPD) and increase of clinical attachment level (CAL) in group B compared to the other groups (P < .05) 24 months after surgery. Conclusion: The effect of the type of flap used for mandibular third molar surgery on the periodontal status of the second molars, as well as the factors that influence this outcome, remains uncertain. Regardless of the flap design, the periodontal conditions of the adjacent second molar deteriorated after 12 and 24 months. The decision to use a certain type of flap should be based on the surgeon's preference. abstract_id: PUBMED:37234672 Effects of modified triangular flap for third molar extraction on distal periodontal health of second molar: A randomized controlled study. Objective: The aim of this study was to assess the effect of flap design for impacted mandibular third molar extraction on the distal periodontal tissue of the neighboring second molars clinically, immunologically, and microbiologically. Study Design: This randomized controlled study comprised 100 patients who were allocated randomly to receive either a triangular flap or a modified triangular flap.
The distal periodontal pocket depth, plaque index, bleeding on probing, the presence of Actinobacillus actinomycetemcomitans, Porphyromonas gingivalis and Prevotella intermedia, and the levels of interleukin-1β, interleukin-8 and matrix metalloproteinase-8 of adjacent second molars were measured at baseline, and 1, 4 and 8 weeks after surgery. Results: After 1 and 4 weeks, distal periodontal conditions of adjacent second molars deteriorated, along with an increase in subgingival microbiota and inflammatory factors in both groups; these changes were significantly greater in the triangular flap group than in the modified triangular flap group (p < 0.05). Prevotella intermedia, interleukin-1β and probing depth were positively correlated in both groups. After 8 weeks, these measures returned to the preoperative level. Conclusions: In this study, both flap designs for impacted mandibular third molar extractions were associated with worse clinical periodontal indices, increased inflammatory biomarkers of gingival crevicular fluid, and more subgingival pathogenic microbiota within 4 weeks. However, compared with the triangular flap, the modified triangular flap was better for the distal periodontal health of adjacent second molars, which provides some direction for clinical treatment. abstract_id: PUBMED:32308296 Periodontal Status of the Adjacent Second Molar after Impacted Mandibular Third Molar Surgical Extraction. Objective: The purpose of this study was to evaluate the change in periodontal status of the second molar adjacent to an impacted mandibular third molar after surgical extraction, and its association with the third molar's presurgical condition, including position, eruption level, and local complications. Materials And Methods: The study was based on a 6-month follow-up of 38 patients (19 males and 19 females; mean age 21.89 ± 2.74 years) recruited consecutively after surgical extraction of an impacted lower third molar. The third molar's presurgical position, eruption level, and local complications were examined. Periodontal status, including Plaque Index (PI), Gingival Index (GI), and gingival bleeding on probing (BOP), of the teeth in the adjacent sextant was clinically evaluated. The pocket depth (PD) and the distance between the epithelial attachment and the adjacent second molar's occlusal surface (EA-OS) were clinically measured, and the distance between the alveolar bone crest and cementoenamel junction (AC-CEJ) of the adjacent second molar was evaluated on periapical films. All measures were recorded at the time of surgery and 1, 3, and 6 months after surgery. Results: The values of PI, GI, BOP, PD, and EA-OS were significantly reduced after 1, 3, and 6 months compared to baseline data. The AC-CEJ was decreased after 1 month but significantly increased after 3 and 6 months. Presurgical local complications of the impacted third molar were mostly significantly associated with the periodontal status of the adjacent sextant. Conclusion: There was a significant improvement of periodontal conditions of the second molar and adjacent sextant after impacted third molar surgery. abstract_id: PUBMED:23019499 Comparison of the influence of two flap designs on periodontal healing after surgical extraction of impacted third molars. Background And Aims: An impacted lower third molar is found in 90% of the general population. Impacted lower third molar surgery may result in periodontal complications on the distal surface of the adjacent second molar.
The aim of this study was to evaluate the effect of flap design on the periodontal status of the second molar after lower third molar surgery. Materials And Methods: Twenty patients, with an age range of 18-26 years, participated in the present study. The inclusion criteria consisted of the presence of bilateral symmetrical impacted third molars on panoramic radiographs. The subjects were randomly divided into two groups. The impactions on the left and right sides were operated by Szmyd and triangular flaps, respectively. Postoperative management and medications were similar for both groups. The subjects were evaluated at two-week, one-month, and six-month postoperative intervals by a surgeon who was blind to the results. Data was analyzed by t-test using SPSS 11 software. Results: There were no significant differences in clinical attachment loss, pocket depth, bone level, plaque index, and free gingival margin between the two flaps (p > 0.05). Conclusion: The results of the present study did not show any differences in pocket depth, clinical attachment level, bone level and FGM (free gingival margin) between the two flap designs under study. abstract_id: PUBMED:36553934 Effects of Flap Design on the Periodontal Health of Second Lower Molars after Impacted Third Molar Extraction. The purpose of this study was to compare the envelope flap and triangular flap for impacted lower third molar (M3) extraction and their effects on the periodontal health of adjacent second molars (M2). A population of 60 patients undergoing M3 extraction with the envelope flap (Group A) or triangular flap (Group B) was analyzed, comparing probing pocket depth (PPD), clinical attachment level (CAL), and gingival recession (REC) recorded at six sites (disto-lingual, mid-lingual, mesio-lingual, disto-vestibular, mid-vestibular, and mesio-vestibular) before (T0) and 6 months after extraction (T1). There was a statistically significant mean difference in PPD and CAL at two sites, disto-vestibular (dv) and disto-lingual (dl), between values recorded before and 6 months after surgery for either Group A or Group B. Furthermore, for the same periodontal records, at 6 months after surgery, a statistically significant difference was recorded between younger and older patients, implying that the healing process was more beneficial for younger patients. No significant differences were found between the two groups (A and B) in PPDdl, PPDdv, CALdl, and CALdv, confirming that the mucoperiosteal flap design does not influence the periodontal healing process of second molars. abstract_id: PUBMED:30894952 Incidence of root canal treatment of second molars following adjacent impacted third molar extraction. Background/purpose: The aim of this study was to evaluate the incidence of requirement for root canal treatment of adjacent second molars following the surgical extraction of an impacted third molar. Materials And Methods: The dental records of 6323 consecutive patients who had impacted third molars removed surgically were evaluated and the incidence of postoperative root canal treatment requirement of adjacent second molars was determined. Patients who required root canal treatment of neighboring second molars were accepted as the study group, while the remaining patients were accepted as a control group. Sex, age at the time of the operation, localization of third molar, the depth of impaction, angulation of the tooth, and the professional experience of the surgeon performing the operation were evaluated from patient records.
Results: The incidence of requirement of root canal treatment for second molars following a neighboring impacted third molar extraction was 0.17% (11/6323), and it invariably occurred in the mandible. The mean age of the study group was found to be significantly higher than that of the control group (31 years vs. 23 years). The years of professional experience of the surgeons were significantly lower in the study group than in the control group. Conclusion: Although the incidence is minimal, iatrogenic subluxation injuries occurring during the surgical removal of impacted third molars can lead to pulpal complications and a requirement for root canal treatment of adjacent second molars. Answer: Yes, the laterally rotated flap design does influence the short-term periodontal status of second molars and postoperative discomfort after partially impacted third molar surgery. A study comparing a 3-cornered laterally rotated flap (LRF) with primary closure (flap 1) and an envelope flap with secondary closure (flap 2) found that the gingival recession (GR), probing depth (PD), and plaque index (PI) values were greater with the flap 2 design than with the flap 1 design. Additionally, facial swelling with the flap 1 design was significantly greater than with the flap 2 design on the second postoperative day, and the pain levels with the flap 1 design were significantly greater than those with the flap 2 design on the first and second postoperative days. The study concluded that flap design in partially impacted third molar (3M) surgery considerably influences the early periodontal health of the second molars (2Ms) and postoperative discomfort (PUBMED:25872465). Other studies have also indicated that flap design can affect periodontal healing and postoperative outcomes. For instance, the modified triangular flap was found to result in smaller postoperative probing pocket depths (PPDs) and less dehiscence compared to the envelope flap, which was better when swelling was analyzed (PUBMED:28932049). Additionally, a study comparing the envelope flap and triangular flap found no significant differences in clinical attachment loss, pocket depth, bone level, plaque index, and free gingival margin between the two flaps (PUBMED:23019499). However, another study showed that a modified triangular flap was better for the distal periodontal health of adjacent second molars compared to a triangular flap (PUBMED:37234672). Overall, these findings suggest that the choice of flap design in the surgical removal of partially impacted third molars can have an impact on the periodontal status of adjacent second molars and the level of postoperative discomfort experienced by patients.
Instruction: Size does matter: can we reduce the radiotherapy field size for selected cases of anal canal cancer undergoing chemoradiation? Abstracts: abstract_id: PUBMED:19282157 Size does matter: can we reduce the radiotherapy field size for selected cases of anal canal cancer undergoing chemoradiation? Aims: Chemoradiation is the standard of care for the treatment of anal canal cancer, with surgery reserved for salvage. For tumours with uninvolved inguinal nodes, it is standard to irradiate the inguinal nodes prophylactically, resulting in large field sizes, which contribute to acute and late toxicity. The aim of this single-centre retrospective study was to determine if, in selected cases, prophylactic inguinal nodal irradiation could be avoided. Materials And Methods: Between August 1998 and August 2004, 30 patients with biopsy-proven squamous cell anal canal cancer were treated with chemoradiation using one phase of treatment throughout. A three-field beam arrangement was used without attempting to treat the draining inguinal lymph nodes prophylactically. The radiotherapy dose prescribed was 50 Gy in 25 daily fractions over 5 weeks. Concomitant chemotherapy was delivered with the radiation using mitomycin-C 7-12 mg/m(2) on day 1 and protracted venous infusional 5-fluorouracil 200 mg/m(2)/day throughout radiotherapy. Results: All patients had clinically and radiologically uninvolved inguinal and pelvic nodes, and all had primary lesions that were T3 or less. The median age at diagnosis was 65 years (range 41-84). The median follow-up was 41 months (range 24-113). The mean posterior field size was 14 x 15 cm and the mean lateral field size was 12 x 15 cm. All patients achieved a complete response. Ninety-four per cent of patients (28/30) were alive and disease free. The two patients who died did so of unrelated causes and were disease free at death. Four patients relapsed and all were salvaged with surgery: two for local disease requiring abdominoperineal resection, one with an inguinal nodal relapse requiring inguinofemoral block dissection, and one for metastatic disease to the liver who underwent liver resection. Conclusions: This single-centre retrospective study supports treating selected cases of anal canal cancer with smaller-than-standard radiation fields, avoiding prophylactic inguinal nodal irradiation. Hopefully this will translate into reduced acute and late toxicity. We suggest that future studies consider whether omission of prophylactic inguinal nodal irradiation for early-stage tumours should be explored. abstract_id: PUBMED:27523411 Radiotherapy for anal canal cancers. Indications, doses and techniques of conformal radiotherapy for anal canal cancers are presented. The recommendations for delineation of the target volumes and organs at risk are detailed. abstract_id: PUBMED:25702647 The anal canal as a risk organ in cervical cancer patients with hemorrhoids undergoing whole pelvic radiotherapy. Aims And Background: Tolerance of the anal canal tends to be ignored in patients with cervical cancer undergoing whole pelvic radiotherapy. However, patients with hemorrhoids may be troubled even by a low radiation dose. We analyzed the dose-volume statistics of the anal canal in patients undergoing whole pelvic radiotherapy. Methods: The records of 31 patients with cervical cancer who received definitive or postoperative radiotherapy at one institution were reviewed.
Acute anal symptoms, such as anal pain and bleeding, were evaluated from the start of radiotherapy to 1 month after radiotherapy completion. Various clinical and dosimetric factors were analyzed to characterize relations with acute anal complications. Results: The anal verge was located an average of 1.2 cm (range -0.6 to 3.9) below the lower border of the ischial tuberosity and an average of 2.7 cm (range -0.6 to 5.7) behind the sacral promontory level. The presence of hemorrhoids before radiotherapy was found to be significantly associated with acute radiation-induced anal symptoms (p = 0.001), and the mean induction dose for anal symptoms was 36.9 Gy. No patient without hemorrhoids developed an anal symptom during radiotherapy. Dosimetric analyses of V30 and V40 showed marginal correlations with anal symptoms (p = 0.07). Conclusions: The present study suggests a relation between acute anal symptoms following radiotherapy and acute hemorrhoid aggravation. Furthermore, the location of the anal verge was found to be variable, and consequently doses administered to the anal canal also varied substantially. Our results call for careful radiation treatment planning for whole pelvic radiotherapy and for proper clinical management of patients with hemorrhoids during radiotherapy. abstract_id: PUBMED:9480525 Cancer of the anal canal. Anal carcinoma is a rare malignant tumor, occurring in only 0.02% of all malignant neoplasms. In Mexico, the incidence is 1.5%, and only 0.18% of cases belong to the anal canal. In recent years, an increased incidence of this tumor has been reported, attributed to its association with the human papillomavirus in HIV-positive patients. The most common histological forms are the epidermoid and the cloacogenic carcinomas. The most relevant prognostic factors are the size of the tumor and the presence of lymph node metastasis. Surgery has been the traditional form of treatment, but the combined use of chemotherapy and radiotherapy seems to have the best results, and surgery is reserved for local recurrences or palliation. A review of our experience at the National Institute of Cancer in Mexico City with the management of this tumor was performed. Thirty-four patients with the diagnosis of carcinoma of the anal canal were included, none of whom had received previous treatment or had a diagnosis of AIDS. Patients were divided into four groups according to the form of treatment (surgery, radiation, and chemoradiation either with 5FU-MMC or 5FU and CDDP). The group that received chemotherapy with 5FU and CDDP combined with radiotherapy had the best results in terms of clinical response, survival and toxicity. The size of the tumor and the presence of lymph node metastasis are the prognostic factors that influence survival: tumors smaller than 5 cm without lymph node metastasis have the best prognosis (p: 0.01 and p: 0.00004). Epidermoid carcinoma has a better prognosis than cloacogenic carcinoma (p: 0.07). abstract_id: PUBMED:30983261 Anal canal carcinoma. Anal carcinomas are rare, but their incidence has increased in recent years. They are induced by the human papillomavirus (mostly genotype 16). The prevalence is high among HIV-infected men who have sex with men (MSM), and primary prevention by vaccination against HPV is a source of hope in this population. Screening is based on the detection and treatment of precancerous lesions, called anal intra-epithelial neoplasia, which can be of low grade or high grade.
Screening concerns specific categories of HIV-infected patients: MSM and those with a history of condyloma or of precancerous/cancerous lesions of the cervix. Treatment, based on a combination of simultaneous chemotherapy and radiation therapy, allows a complete response rate of 80%. In case of persistent or recurrent tumor, abdominoperineal resection remains the treatment of choice. Advanced disease can benefit from highly effective chemotherapy combinations or, in the future, from combinations of chemotherapy and immunotherapy. abstract_id: PUBMED:34955416 Radiotherapy of anal canal cancer. We present the updated recommendations of the French society for radiation oncology on external radiotherapy and brachytherapy of anal canal carcinoma. The following guidelines are presented: indications, treatment procedure, dose and dose-constraint objectives, immediate postoperative management, post-treatment evaluation, and long-term follow-up. abstract_id: PUBMED:26337477 Recommendations for the management of cancers of the anal canal. Anal canal carcinomas remain rare, but their management has improved recently. PET-CT is now used as standard at initial diagnosis and after relapse. The introduction of intensity-modulated irradiation techniques makes it possible to better conform dose to the pelviperineal and inguinal volumes, improving the homogeneity of the irradiation while sparing some pelvic structures, thus reducing acute and late effects. Nevertheless, the conversion from 3D to intensity-modulated radiotherapy needs a specific and careful approach, mainly for the management of the perineal region, where relapses and complications occur. Finally, new chemotherapy combinations are being studied for metastatic disease.
The levator ani muscle was always included in the planned target volumes and received the full dose in both groups. Conclusions: Modern VMAT-IMRT can significantly reduce the dose to the anal sphincters in rectal cancer patients of the middle third treated with conventional chemoradiation therapy. abstract_id: PUBMED:30397394 So Now My Patient Has Squamous Cell Cancer: Diagnosis, Staging, and Treatment of Squamous Cell Carcinoma of the Anal Canal and Anal Margin. Squamous cell carcinomas of the anal canal and the anal margin are rare malignancies that are increasing in incidence. Patients with these tumors often experience delayed treatment due to delay in diagnosis or misdiagnosis of the condition. Distinguishing between anal canal and anal margin tumors has implications for staging and treatment. Chemoradiation therapy is the mainstay of treatment for anal canal squamous cell carcinoma, with abdominoperineal resection reserved for salvage treatment in cases of persistent or recurrent disease. Early anal margin squamous cell carcinoma can be treated with wide local excision, but more advanced tumors require a combination of chemoradiation therapy and surgical excision. abstract_id: PUBMED:22942796 Epidermoid cancer of the anal canal. Anal canal cancer is a rare cancer with an incidence that continues to rise. This has been in large part due to the increased prevalence of immunosuppressive conditions such as organ transplantation and human immunodeficiency virus infection, along with transmission of the human papillomavirus. Identification of high-risk groups and close monitoring of these groups can help to detect earlier stages of cancer. Chemoradiation therapy remains the mainstay of treatment, with excellent outcomes. Surgery for anal canal carcinoma remains a salvage technique for failed chemoradiation or recurrent disease. Answer: The question of whether the radiotherapy field size can be reduced for selected cases of anal canal cancer undergoing chemoradiation is addressed in a single-centre retrospective study (PUBMED:19282157). The study aimed to determine if prophylactic inguinal nodal irradiation could be avoided in cases where the inguinal nodes were uninvolved. The study included 30 patients with squamous cell anal canal cancer who were treated with chemoradiation using a three-field beam arrangement that did not include prophylactic treatment of the draining inguinal lymph nodes. The prescribed radiotherapy dose was 50 Gy in 25 daily fractions over 5 weeks, with concomitant chemotherapy using mitomycin-C and protracted venous infusional 5-fluorouracil. The results showed that all patients achieved a complete response, and 94% were alive and disease-free at the median follow-up of 41 months. Four patients relapsed but were salvaged with surgery. The study concluded that treatment with smaller-than-standard radiation fields, avoiding prophylactic inguinal nodal irradiation, is supported for selected cases of anal canal cancer. This approach could potentially reduce acute and late toxicity, and the authors suggest that future studies should explore the omission of prophylactic inguinal nodal irradiation for early-stage tumors (PUBMED:19282157). Other literature on the topic of anal canal cancer treatment supports the use of chemoradiation as the standard of care, with surgery reserved for salvage (PUBMED:9480525, PUBMED:30983261, PUBMED:22942796).
Intensity-modulated irradiation techniques have been introduced to better conform to pelviperineal and inguinal volumes, improving dose homogeneity and reducing acute and late effects (PUBMED:26337477). Additionally, the use of IMRT has been shown to significantly reduce the dose to the anal sphincters in rectal cancer patients, suggesting potential benefits in sparing organs at risk (PUBMED:30509284). In conclusion, the evidence suggests that for selected cases of anal canal cancer with uninvolved inguinal nodes, it may be possible to reduce the radiotherapy field size without compromising treatment efficacy, potentially reducing toxicity. However, careful consideration and further studies are warranted to confirm these findings and establish guidelines for such an approach.
Instruction: Sexual history taking: a dying skill? Abstracts: abstract_id: PUBMED:24824776 Sexual history taking: a dying skill? Background: Many adolescents are having sex, and adolescents with life-limiting illnesses are no exception. It is therefore important for health care professionals to take a sexual history and provide advice about sexually transmitted diseases, unintended pregnancies, and ways of reducing high-risk sexual behaviors. Consultations should provide a forum for discussion and education. A literature review revealed no previous studies on this topic. Objective: Our aim was to review medical consultations between adolescents with life-limiting illnesses and pediatricians to establish whether sex was discussed. Methods: The clinical medical notes of 25 adolescents aged 12 to 18 years, under the care of a community team specializing in patients with nonmalignant life-limiting conditions at a District General Hospital in the United Kingdom (UK), were selected at random. Researchers retrospectively reviewed handwritten notes and typed letters in the medical records with a view to establishing whether a sexual history was taken on any occasion. Results: None of the health care professionals took a sexual history from any of the adolescents on any occasion despite multiple clinic attendances. Conclusion: Sexual health is described by the World Health Organization as a basic human right. Clinicians may struggle to accept that adolescents with life-limiting illnesses may want to talk about sex, and this study has highlighted it as a topic that is generally ignored. Health professionals should include sexual health in routine palliative assessments. Adolescents with life-limiting illnesses should not be denied the right to holistic health care. abstract_id: PUBMED:16315683 How to take a sexual history. Under the National Strategy for Sexual Health and HIV, most patients seeking or requiring routine sexual health care are now offered the option of being treated by the primary health care team, rather than a specialised genito-urinary medicine clinic. Taking a sexual history and making a risk assessment is a key skill for making a diagnosis and care plan. This article offers a structured approach to this task, particularly for nurses, midwives and other community health professionals. It also describes the often sensitive core questions that the professional may need to ask in order to obtain an effective sexual history and determine the risks for a particular patient. abstract_id: PUBMED:35640287 Qualitative Assessment of Bias and Comfort in Inclusive Sexual History Taking Skills of Physician Assistant Students. Purpose: Sexual history taking is an integral skill for clinicians, as sexual health is a component of a complete medical evaluation. Medical curricula lack effective sexual history instruction, creating gaps in clinicians' confidence and proficiency. On average, curricula include only 5 hours of sexual and gender minority (SGM) content over a 4-year span. This study investigated how students perceive their comfort level and biases during a simulated sexual history taking encounter. Methods: Data were derived from student reflection assignments following simulated sexual history interviews. Researchers analyzed and coded the data. Themes were labeled and paired with corresponding quotes from the data.
Results: Comfort and bias were predetermined main themes, each with eight subcategories that emerged, including embarrassment, insight, lack of exposure, comfort/discomfort with sexual subject matter, and preparedness. Students' personal perceptions of comfort and biases represented a broad spectrum within the overarching concepts. Conclusions: Trainee insight can guide educational and instructional modifications on proficient, inclusive sexual history taking. Exercises with sexual history interviews inclusive of SGM populations are essential tools to build student comfort with sexual content topics and diminish the potential for invasive biases to undermine the integrity of sexual history taking. Future research is necessary, including implementation of pre- and post-surveys to gauge the efficacy of instruction. abstract_id: PUBMED:37035026 Sexual function history taking in medicine. Sexual history taking is important for the proper diagnosis and treatment of sexual dysfunction. It is often neglected in a clinical setting, and it is also underreported by patients due to stigma and hesitation. Here we describe how a sexual function history should be taken when evaluating any sexual dysfunction. abstract_id: PUBMED:38331478 Taking a Sexual History: Best Practices. Recognizing the holistic definitions of sexual health, health-care providers must approach sexual health history taking with sensitivity, inclusivity, and a trauma-informed perspective. Many versions of what a sexual history should look like exist, but certain principles are commonly found. Education of health-care providers on sexual history taking can involve reviewing the components of the sexual history but should also include the importance of using nonstigmatizing language, having a patient-centered approach, and practicing trauma-informed and culturally sensitive care. abstract_id: PUBMED:18082995 Patients' perspectives on sexual history taking in Korea. Objective: This study was conducted to assess patients' beliefs and attitudes towards physicians taking their sexual history during routine medical visits in Korea, where Confucianism is the core societal value. Methods: A survey questionnaire was administered to determine the patients' perspectives on sexual history taking, their actual experience of being asked about sexual issues by physicians, their belief in the importance of sexual history taking, their attitudes and cooperativeness towards each component of sexual history, and the effect of the physicians' age and gender on their comfort level during the interview.
The authors evaluated human sexuality training programs at two California medical schools. In one program, students had no experience taking a sexual history. In the other, students were randomly assigned either to conduct or to observe a brief sexual history interview with a community volunteer. The students who conducted an interview showed more significant improvements in knowledge of human sexuality, perceived appropriateness of taking a sexual history and perceived personal skill in taking a sexual history than did the students who neither observed nor took a sexual history and also developed more critical views of practicing physicians' skills in taking such histories. The students who observed an interview improved more in knowledge and perceived personal skill than did the students who had no interview experience. abstract_id: PUBMED:10597764 Lesbians' sexual history with men: implications for taking a sexual history. Background: Health care providers may not solicit a comprehensive sexual history from lesbian patients because of provider assumptions that lesbians have not been sexually active with men. We performed this study to assess whether women who identify themselves as lesbians have a history of sexual activities with men that have implications for receipt of preventive health screening. Objective: To convey the importance for health care providers to know their patients' sexual history when making appropriate recommendations for preventive health care. Methods: A survey was printed in a national news magazine aimed at homosexual men, lesbians, and bisexual men and women. The sample included 6935 self-identified lesbians from all 50 US states. The outcomes we measured were respondents' number of lifetime male sexual partners and partners during the past year, their lifetime history of specific sexual activities (e.g., vaginal intercourse, anal intercourse), their lifetime condom use, and their lifetime history of sexually transmitted diseases. Results: Of respondents, 77.3% had 1 or more lifetime male sexual partners, 70.5% had a lifetime history of vaginal intercourse, 17.2% had a lifetime history of anal intercourse, and 17.2% had a lifetime history of a sexually transmitted disease. Exactly 5.7% reported having had a male sexual partner during the past year. Conclusion: These findings reinforce the need for providers to know their patients' sexual history regardless of their reported sexual orientation, especially with regard to recommendations for Papanicolaou smears and screening for sexually transmitted diseases. abstract_id: PUBMED:32006207 Women's Cortisol Stress Responsivity, Sexual Arousability, and Sexual History. Life history theory and the adaptive calibration model state that characteristics of one's early environment influence individual differences in both neuroendocrine reactivity to stress and sexual risk-taking behavior. However, few studies have directly examined the relationship between neuroendocrine reactivity to stress and risky sexual behavior. This study used multilevel modeling to test whether cortisol reactivity and recovery in response to laboratory stress were associated with women's history of sexual behavior and their sexual arousability in response to laboratory sexual stimuli. Participants were 65 women (35% heterosexual, 44% bisexual, and 21% lesbian) who completed two laboratory sessions, two weeks apart. 
Women's self-reported sexual arousability to sexual stimuli interacted with their sexual abuse history to predict their trajectories of cortisol stress reactivity and recovery. Cortisol reactivity and recovery were not associated with women's sexual risk taking, such as the age of sexual debut, sociosexuality, or lifetime number of sexual partners. abstract_id: PUBMED:16422856 A history of sexual medicine in the United kingdom. The history of sexual medicine in the United Kingdom since the 19th century is reviewed, with particular reference to masturbation, homosexuality, contraception, and in the past four decades, the treatment of sexual dysfunction. The medical profession's tendency to deal with sexual issues according to the sociopolitical and moral issues of the time is emphasized, and whereas "sex negativism" has prevailed within the medical profession for most of this historical period, there has been a succession of individuals within the profession who have presented a more positive approach to defining and promoting sexual health. Four tracks within sexual medicine over the past 30 years are described: the psychoanalytic approach of the Institute of Psychosexual Medicine, modern "sex therapy," psychophysiological sex research, and the involvement of andrology in the assessment and treatment of erectile dysfunction. The impact of Viagra is seen as the most recent chapter in this history. Answer: The abstracts provided suggest that sexual history taking is not a dying skill, but rather an area of clinical practice that requires improvement and greater emphasis within healthcare education and patient care. The study from the UK highlighted that none of the healthcare professionals took a sexual history from adolescents with life-limiting illnesses, despite multiple clinic attendances, indicating a significant gap in practice (PUBMED:24824776). This suggests that while the skill may not be dying, it is underutilized or avoided in certain patient populations. Another abstract emphasizes the importance of taking a sexual history as part of routine sexual health care and offers a structured approach to this task, particularly for nurses, midwives, and other community health professionals (PUBMED:16315683). This indicates that the skill is recognized and there are efforts to improve its application in primary care settings. A qualitative assessment of physician assistant students' comfort and biases in taking sexual histories, especially inclusive of sexual and gender minority populations, suggests that educational interventions are necessary to build comfort and proficiency in this area (PUBMED:35640287). This implies that while the skill is not dying, there is a need for better training and awareness. The importance of sexual history taking for the diagnosis and treatment of sexual dysfunction is also emphasized, although it is often neglected in clinical settings (PUBMED:37035026). This further supports the notion that the skill is not dying but requires more attention and normalization within medical practice. Best practices for taking a sexual history are discussed, highlighting the need for sensitivity, inclusivity, and a trauma-informed perspective (PUBMED:38331478). This suggests that while the skill is present, its quality and execution can be improved. In summary, the abstracts indicate that sexual history taking is a critical skill that is not dying but is underutilized, often neglected, and in need of enhancement through better education and practice. 
Health professionals are encouraged to include sexual health in routine assessments and to approach the subject with the necessary sensitivity and inclusivity to provide holistic care to all patients.
Instruction: Are you SURE? Abstracts: abstract_id: PUBMED:32810992 Is Sure Start an Effective Preventive Intervention? Background: Sure Start was established with the aim of eliminating child poverty and social exclusion. Method: The findings from the reports of the National Evaluation of Sure Start Team, published in November 2005, are reviewed and critiqued. Results: The family and child functioning after 3 years of Sure Start, as compared with Sure Start-to-be areas, showed very few significant differences, with some indication of adverse effects in the most disadvantaged families. Conclusions: These findings are discussed in relation to their service, research and policy implications, with the conclusion that the research evaluation was well conducted, but the findings are inconclusive. There are lessons on how to improve Sure Start and what should have been done differently. abstract_id: PUBMED:28360436 Conditional Sure Independence Screening. Independence screening is powerful for variable selection when the number of variables is massive. Commonly used independence screening methods are based on marginal correlations or their variants. When some prior knowledge on a certain important set of variables is available, a natural assessment of the relative importance of the other predictors is their conditional contributions to the response given the known set of variables. This results in conditional sure independence screening (CSIS). CSIS produces a rich family of alternative screening methods by different choices of the conditioning set and can help reduce the number of false positive and false negative selections when covariates are highly correlated. This paper proposes and studies CSIS in generalized linear models. We give conditions under which sure screening is possible and derive an upper bound on the number of selected variables. We also spell out the situation under which CSIS yields model selection consistency and the properties of CSIS when a data-driven conditioning set is used. Moreover, we provide two data-driven methods to select the thresholding parameter of conditional screening. The utility of the procedure is illustrated by simulation studies and analysis of two real datasets. abstract_id: PUBMED:31692981 A Generic Sure Independence Screening Procedure. Extracting important features from ultra-high dimensional data is one of the primary tasks in statistical learning, information theory, precision medicine and biological discovery. Many of the sure independence screening methods developed to meet these needs are suitable for special models under some assumptions. With the availability of more data types and possible models, a model-free generic screening procedure with fewer and less restrictive assumptions is desirable. In this paper, we propose a generic nonparametric sure independence screening procedure, called BCor-SIS, on the basis of a recently developed universal dependence measure: Ball correlation. We show that the proposed procedure has strong screening consistency even when the dimensionality is of exponential order in the sample size, without imposing sub-exponential moment assumptions on the data. We investigate the flexibility of this procedure by considering three commonly encountered challenging settings in biological discovery or precision medicine: iterative BCor-SIS, interaction pursuit, and survival outcomes. We use simulation studies and real data analyses to illustrate the versatility and practicability of our BCor-SIS method.
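To make the screening idea concrete, the following is a minimal sketch in Python with NumPy of marginal sure independence screening together with a conditional variant in the spirit of CSIS. It is an illustration under simplifying assumptions (absolute Pearson correlation as the marginal utility, least-squares residualization on the conditioning set), not the exact procedure of any abstract above; the function names sis and csis and the retained-set size d are hypothetical choices.

import numpy as np

def sis(X, y, d):
    # Marginal screening: rank the p predictors by absolute sample
    # correlation with the response and keep the d top-ranked columns
    # (the SIS literature often takes d on the order of n / log n).
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / np.sqrt((Xc**2).sum(axis=0) * (yc**2).sum())
    return np.argsort(-np.abs(corr))[:d]

def csis(X, y, cond_idx, d):
    # Conditional screening: project the known important variables out of
    # the response and the remaining predictors, then screen the residuals,
    # i.e., rank predictors by their conditional contribution.
    C = np.column_stack([np.ones(len(y)), X[:, cond_idx]])
    y_res = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]
    X_res = X - C @ np.linalg.lstsq(C, X, rcond=None)[0]
    candidates = np.setdiff1d(np.arange(X.shape[1]), cond_idx)
    return candidates[sis(X_res[:, candidates], y_res, d)]

# Toy check: 2 active predictors among 5,000, with n = 200 observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5000))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(200)
print(sis(X, y, d=10))                 # columns 0 and 3 should rank near the top
print(csis(X, y, cond_idx=[0], d=10))  # screening given that column 0 is known

In either form, screening is only the first stage; the surviving columns would then be fed to a lower-dimensional fitting or selection step, which is the workflow these screening papers assume.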
abstract_id: PUBMED:25301976 SURE Estimates for a Heteroscedastic Hierarchical Model. Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein's unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. abstract_id: PUBMED:33945126 How risk-prone are people when facing a sure loss? Negative interest rates as a convenient conceptual framework. People occasionally face sure loss prospects. Do they seek risk in search of better outcomes or contend with the sure loss and focus on what is left to be saved? We addressed this question in three experiments akin to a negative interest rate framework. Specifically, we asked participants to allocate money (Experiments 1 and 2) or choose (Experiment 3) between two options: (i) a loss option where, for sure, they would end up with less, or (ii) a mixed gamble with a positive expected outcome, but also the possibility of an even larger loss. Risk aversion (i.e., choosing the sure loss) ranged from 80% to 36% across the three experiments, dependent on varied sizes of sure losses or expected outcomes. However, overall, the majority (> 50%) of allocations and choices were for the sure loss. Our findings indicate a tolerance for sure losses at the expense of mixed gambles yielding much better expected outcomes. We discuss the implications of this sure-loss tolerance for psychological research, its implications in terms of (cumulative) prospect theory, and what the results mean for the implementation of negative interest rates. abstract_id: PUBMED:28127109 Ultrahigh-Dimensional Multiclass Linear Discriminant Analysis by Pairwise Sure Independence Screening. This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis under ultrahigh dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor.
The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real life example on handwritten Chinese character recognition. abstract_id: PUBMED:35104204 Asexual or not sure: Findings from a probability sample of undergraduate students. Objective: The present study aims to: 1) examine demographic correlates of LGB, asexual, or not-sure participants; 2) describe the prevalence of diverse sexual behaviors; 3) assess the prevalence of event-level sexual behaviors; and 4) examine predictors of sexual pleasure. Participants: 761 non-heterosexual undergraduates at a large, public U.S. university. Methods: Randomly sampled undergraduate students completed a confidential, cross-sectional online survey. Results: Of 761 non-heterosexual respondents, 567 identified as LGB, 47 as asexual, and 147 as not sure. Asexual students and those not sure were less likely to report having engaged in solo and partnered sexual activities, and reported sexual activities as less pleasurable at their most recent sexual event, compared with LGB students. This difference (relative to LGB) became nonsignificant when accounting for reported sexual activities. Conclusions: Our findings inform how college students define and experience their sexual identities and assist college health professionals in training on sexuality and prevention of risk factors. abstract_id: PUBMED:31731879 SURE Test Accuracy for Decisional Conflict Screening among Parents Making Decisions for Their Child. Background. We aimed to validate the SURE test for use with parents in primary care. Methods. A secondary analysis of cluster randomized trial data was used to compare the SURE test (index, higher score = less conflict) to the Decisional Conflict Scale (DCS; reference, higher score = greater conflict). Our a priori hypothesis was that the scales would correlate negatively. We evaluated the association between scores and estimated the proportion of variance in the DCS explained by the SURE test. Then, we dichotomized each measure using established cutoffs to calculate diagnostic accuracy and internal consistency with confidence intervals adjusted for clustering. We evaluated the presence of effect modification by sex, followed by sex-specific calculation of validation statistics. Results. In total, 185 of 201 parents completed a DCS and SURE test. Total DCS (mean = 4.2/100, SD = 14.3) and SURE test (median 4/4; interquartile range, 4-4) scores were significantly correlated (ρ = -0.36, P < 0.0001). The SURE test explained 34% of the DCS score variance. Internal consistency (Kuder-Richardson 20) was 0.38 (P < 0.0001). SURE test sensitivity and specificity for identifying decisional conflict were 32% (95% confidence interval [CI], 20%-44%) and 96% (95% CI, 93%-100%), respectively. The SURE test's positive likelihood ratio was 8.4 (95% CI, 0.1-17) and its negative likelihood ratio was 0.7 (95% CI, 0.53-0.87). There were no significant differences between females and males in DCS (P = 0.5) or SURE test (P = 0.97) total scores; however, correlations between test total scores (-0.37 for females vs. -0.21 for males; P = 0.001 for the interaction) and sensitivity and specificity were higher for females than males. Conclusions.
The SURE test demonstrated acceptable psychometric properties for screening decisional conflict among parents making a health decision about their child in primary care. However, clinicians cannot be confident that a negative SURE test rules out the presence of decisional conflict. abstract_id: PUBMED:30363778 The almost sure local central limit theorem for products of partial sums under negative association. Let $\{X_n, n \ge 1\}$ be a strictly stationary negatively associated sequence of positive random variables with $EX_1 = \mu > 0$ and $\mathrm{Var}(X_1) = \sigma^2 < \infty$. Denote $S_n = \sum_{i=1}^{n} X_i$ and $p_k = P\big(a_k \le \big(\prod_{j=1}^{k} S_j/(k!\mu^k)\big)^{1/(\gamma\sigma_1\sqrt{k})} < b_k\big)$, where $\gamma = \sigma/\mu$ is the coefficient of variation. Under some suitable conditions, we derive the almost sure local central limit theorem $\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k\,p_k}\, I\big\{a_k \le \big(\prod_{j=1}^{k} S_j/(k!\mu^k)\big)^{1/(\gamma\sigma_1\sqrt{k})} < b_k\big\} = 1$ a.s., where $\sigma_1^2 = 1 + \frac{1}{\sigma^2}\sum_{j=2}^{\infty} \mathrm{Cov}(X_1, X_j) > 0$. abstract_id: PUBMED:23776141 Validation of SURE, a four-item clinical checklist for detecting decisional conflict in patients. Background: We sought to determine the psychometric properties of SURE, a 4-item checklist designed to screen for clinically significant decisional conflict in clinical practice. Methods: This study was a secondary analysis of a clustered randomized trial assessing the effect of DECISION+2, a 2-hour online tutorial followed by a 2-hour interactive workshop on shared decision making, on decisions to use antibiotics for acute respiratory infections. Patients completed SURE and also the Decisional Conflict Scale (DCS), as the gold standard, after consultation. We evaluated internal consistency of SURE using the Kuder-Richardson 20 coefficient (KR-20). We compared DCS and SURE scores using the Spearman correlation coefficient. We assessed sensitivity and specificity of SURE scores (cut-off score ≤3 out of 4) by identifying patients with and without clinically significant decisional conflict (DCS score >37.5 on a scale of 0-100). Results: Of the 712 patients recruited during the trial, 654 completed both tools. SURE scores showed adequate internal consistency (KR-20 coefficient of 0.7). There was a significant correlation between DCS and SURE scores (Spearman's ρ = -0.45, P < 0.0001). The prevalence of clinically significant decisional conflict as estimated by the DCS was 5.2% (95% CI 3.7-7.3). Sensitivity and specificity of SURE ≤3 were 94.1% (95% CI 78.9-99.0) and 89.8% (95% CI 87.1-92.0), respectively. Conclusions: SURE shows adequate psychometric properties in a primary care population with a low prevalence of clinically significant decisional conflict. SURE has the potential to be a useful screening tool for practitioners, responding to the growing need for detecting clinically significant decisional conflict in patients. Answer: Based on the provided abstracts, the term "SURE" appears in various contexts, each with a different meaning and application. In the context of statistical methods and data analysis, "SURE" refers to Stein's Unbiased Risk Estimate, a technique used for shrinkage estimation in hierarchical models (PUBMED:25301976), and the related notion of "sure screening" appears in variable selection and feature screening for high-dimensional data (PUBMED:28360436, PUBMED:31692981, PUBMED:28127109). The word "sure" also occurs in the probabilistic sense of almost sure convergence (PUBMED:30363778), in the name of the Sure Start early-intervention programme (PUBMED:32810992), and in the behavioral-economics sense of a sure loss (PUBMED:33945126).
For instance, "SURE" is mentioned in relation to Sure Independence Screening (SIS) methods, which are used for selecting important variables in large datasets, such as in the context of generalized linear models (PUBMED:28360436), multiclass linear discriminant analysis (PUBMED:28127109), and other statistical learning applications (PUBMED:31692981). In the context of health and psychology, "SURE" is used as an acronym for a four-item clinical checklist designed to detect decisional conflict in patients (PUBMED:23776141, PUBMED:31731879). The SURE test is used to screen for decisional conflict among patients making health decisions, with the aim of identifying those who may be experiencing significant uncertainty or difficulty in making a choice. The SURE test has been validated and shown to have adequate psychometric properties for this purpose (PUBMED:23776141). In summary, "SURE" can refer to a statistical estimation technique or a clinical screening tool, depending on the context. The abstracts do not provide a singular answer to the question "Are you SURE?" as the term "SURE" is not used in a uniform way across the different abstracts.
Instruction: Certification and specialization: do they matter in the outcome of acute myocardial infarction? Abstracts: abstract_id: PUBMED:31834024 Impact of disease-specific care certification on clinical outcome and healthcare performance of myocardial infarction in Taiwan. Background: The relationship between certification for specific disease care and clinical outcome is not well known. Previous studies regarding the effect of certification for acute stroke centers were limited by their cross-sectional design. This study aimed to investigate the effect of disease-specific care (DSC) certification on healthcare performance and clinical outcome of acute myocardial infarction (AMI). Methods: This retrospective, longitudinal, controlled study was performed by analyzing the nationwide Taiwan Clinical Performance Indicators dataset from 2011 to 2018. Hospitals undergoing DSC certification for coronary care and reporting AMI indicators 1 year before, during, and 1 year after certification were included in group C, whereas hospitals not seeking DSC certification but reporting AMI indicators during the same period were included in group U. The primary endpoint was in-hospital mortality of AMI. Results: In total, 20 hospitals (9 in group C and 11 in group U) and up to 16,173 AMI cases were included for analysis. In-hospital mortality was similar between both groups at baseline. However, the in-hospital mortality was significantly improved during and after certification periods in comparison with that at baseline in group C (6.8% vs 8.4%, p = 0.04; 6.7% vs 8.4%, p = 0.02), whereas there was no significant change in group U, resulting in a statistically significant difference between both groups during and after certification periods (odds ratio = 0.74 [95% CI = 0.60-0.91] and 0.78 [95% CI = 0.64-0.96]). Compared with group U, the improvement in healthcare performance indicators, such as door-to-electrocardiography time &lt;10 minutes, blood testing for low-density lipoprotein cholesterol level, prescribing a beta-blocker or a P2Y12 receptor inhibitor during hospitalization, prescribing a statin on discharge, and consultation for cardiac rehabilitation, was significant in group C. Conclusion: The current study demonstrated the beneficial effect of DSC certification on clinical outcome of AMI, probably mediated through quality improvement during the healthcare process. abstract_id: PUBMED:26384518 Association of Physician Certification in Interventional Cardiology With In-Hospital Outcomes of Percutaneous Coronary Intervention. Background: The value of American Board of Internal Medicine certification has been questioned. We evaluated the association of interventional cardiology certification with in-hospital outcomes of patients undergoing percutaneous coronary intervention (PCI) in 2010. Methods And Results: We identified physicians who performed ≥10 PCIs in 2010 in the CathPCI Registry and determined interventional cardiology (ICARD) certification status using American Board of Internal Medicine data. We compared in-hospital outcomes of patients treated by certified and noncertified physicians using hierarchical multivariable models adjusted for differences in patient characteristics and PCI volume. Primary end points were all-cause in-hospital mortality and bleeding complications. Secondary end points included emergency coronary artery bypass grafting, vascular complications, and a composite of any adverse outcome.
With 510,708 PCI procedures performed by 5,175 physicians, case mix and unadjusted outcomes were similar among certified and noncertified physicians. The adjusted risks of in-hospital mortality (odds ratio, 1.10; 95% confidence interval, 1.02-1.19) and emergency coronary artery bypass grafting (odds ratio, 1.32; 95% confidence interval, 1.12-1.56) were higher in the non-ICARD-certified group, but the risks of bleeding and vascular complications and the composite end point were not statistically significantly different between groups. Conclusions: We did not observe a consistent association between ICARD certification and the outcomes of PCI procedures. Although there was a significantly higher risk of mortality and emergency coronary artery bypass grafting in patients treated by non-ICARD-certified physicians, the risks of vascular complications and bleeding were similar. Our findings suggest that ICARD certification status alone is not a strong predictor of patient outcomes and indicate a need to enhance the value of subspecialty certification. abstract_id: PUBMED:11112721 Certification and specialization: do they matter in the outcome of acute myocardial infarction? Purpose: To learn whether there are differences among certified and self-designated cardiologists, internists, and family practitioners in terms of the mortality of their patients with acute myocardial infarction (AMI). Method: Data on all patients admitted with AMI were collected for calendar year 1993 by the Pennsylvania Health Care Cost Containment Council and analyzed. Certified and self-designated family practitioners, internists, and cardiologists (n = 4,546) were compared with respect to the characteristics of their patients' illnesses. In addition, a regression model was fitted in which mortality was the dependent measure and the independent variables were the probability of death, hospital characteristics (location and the availability of advanced cardiac care), and physician characteristics (patient volume, years since graduation from medical school, specialty, and certification status). Results: On average, cardiologists treated more patients than did generalists, and their patients were less severely ill. In the regression analysis, all variables were statistically significant except the availability of advanced cardiac care. Holding all other variables constant, treatment by a certified physician was associated with a 15% reduction in mortality among patients with AMI. Conclusions: Lower patient mortality was associated with treatment by physicians who were cardiologists, cared for larger numbers of AMI patients, were closer to their graduation from medical school, and were certified. abstract_id: PUBMED:30149726 Specialty Board Certification Rate as an Outcome Metric for GME Training Institutions: A Relationship With Quality of Care. Educational outcome measures, known to be associated with the quality of care, are needed to support improvements in graduate medical education (GME). This retrospective observational study sought to determine whether there was a relationship between the specialty board certification rates of GME training institutions and the quality of care delivered by their graduates. It is based on 7 years of hospitalizations in Pennsylvania (N = 354,767) with diagnoses of acute myocardial infarction, congestive heart failure, gastrointestinal hemorrhage, or pneumonia. The 2,265 attending physicians were self-identified internists, and they completed their training in 59 institutions.
The percentage of board-certified physicians from each training institution, excluding the physician herself or himself, was calculated and an indicator of whether it exceeded 80% was created. This was analyzed against in-hospital mortality and length of stay, adjusted for patient/physician/hospital characteristics. There were significantly lower odds of mortality (adjusted odds ratio [OR] = 0.92, 95% CI [0.86, 0.98]) and log length of stay (adjusted OR = 0.98, 95% CI [0.94, 0.99]) when the attending physician trained in a residency program with an 80% or greater certification rate. The results suggest that specialty certification rates may be a useful educational outcome for residency training programs. abstract_id: PUBMED:16637823 Physician board certification and the care and outcomes of elderly patients with acute myocardial infarction. Background: Patients and purchasers prefer board-certified physicians, but whether these physicians provide better quality of care and outcomes for hospitalized patients is unclear. Objective: We evaluated whether care by board-certified physicians after acute myocardial infarction (AMI) was associated with higher use of clinical guideline-recommended therapies and lower 30-day mortality. Subjects And Methods: We examined 101,251 Medicare patients hospitalized for AMI in the United States and compared use of aspirin, beta-blockers, and 30-day mortality according to the attending physicians' board certification in family practice, internal medicine, or cardiology. Results: Board-certified family practitioners had slightly higher use of aspirin (admission: 51.1% vs 46.0%; discharge: 72.2% vs 63.9%) and beta-blockers (admission: 44.1% vs 37.1%; discharge: 46.2% vs 38.7%) than nonboard-certified family practitioners. There was a similar pattern in board-certified internists for aspirin (admission: 53.7% vs 49.6%; discharge: 78.2% vs 68.8%) and beta-blockers (admission: 48.9% vs 44.1%; discharge: 51.2% vs 47.1%). Board-certified cardiologists had higher use of aspirin compared with cardiologists certified in internal medicine only or without any board certification (admission: 61.3% vs 53.1% vs 52.1%; discharge: 82.2% vs 71.8% vs 71.5%) and beta-blockers (admission: 52.9% vs 49.6% vs 41.5%; discharge: 54.7% vs 50.6% vs 42.5%). In multivariate regression analyses, board certification was not associated with differences in 30-day mortality. Conclusions: Treatment by a board-certified physician was associated with modestly higher quality of care for AMI, but not differences in mortality. Regardless of board certification, all physicians had opportunities to improve quality of care for AMI. abstract_id: PUBMED:35470191 Associations between initial American Board of Internal Medicine certification and maintenance of certification status of attending physicians and in-hospital mortality of patients with acute myocardial infarction or congestive heart failure: a retrospective cohort study of hospitalisations in Pennsylvania, USA. Objective: To determine whether internists' initial specialty certification and the maintenance of that certification (MOC) is associated with lower in-hospital mortality for their patients with acute myocardial infarction (AMI) or congestive heart failure (CHF). Design: Retrospective cohort study of hospitalisations in Pennsylvania, USA, from 2012 to 2017. Setting: All hospitals in Pennsylvania. Participants: All 184,115 hospitalisations for primary diagnoses of AMI or CHF where the attending physician was a self-designated internist.
Primary Outcome Measure: In-hospital mortality. Results: Of the 2,575 physicians, 2,238 had initial certification and 820 were eligible for MOC. After controlling for patient demographics and clinical characteristics, hospital-level factors and physicians' demographic and medical school characteristics, both initial certification and MOC were associated with lower mortality. The adjusted OR for initial certification was 0.835 (95% CI 0.756 to 0.922; p&lt;0.001). Patients cared for by physicians with initial certification had a 15.87% decrease in mortality compared with those cared for by non-certified physicians (mortality rate difference of 5.09 per 1000 patients; 95% CI 2.12 to 8.05; p&lt;0.001). The adjusted OR for MOC was 0.804 (95% CI 0.697 to 0.926; p=0.003). Patients cared for by physicians who completed MOC had an 18.91% decrease in mortality compared with those cared for by MOC-lapsed physicians (mortality rate difference of 6.22 per 1000 patients; 95% CI 2.0 to 10.4; p=0.004). Conclusions: Initial certification was associated with lower mortality for AMI or CHF. Moreover, for patients whose physicians had initial certification, an additional advantage was associated with its maintenance. abstract_id: PUBMED:24615598 The effect of certification and accreditation on quality management in 4 clinical services in 73 European hospitals. Objective: To investigate the relationship between ISO 9001 certification, healthcare accreditation and quality management in European hospitals. Design: A mixed-method, multilevel cross-sectional design in seven countries. External teams assessed clinical services on the use of quality management systems, illustrated by four clinical pathways. Setting And Participants: Seventy-three acute care hospitals with a total of 291 services managing acute myocardial infarction (AMI), hip fracture, stroke and obstetric deliveries, in Czech Republic, France, Germany, Poland, Portugal, Spain and Turkey. Main Outcome Measure: Four composite measures of quality and safety [specialized expertise and responsibility (SER), evidence-based organization of pathways (EBOP), patient safety strategies (PSS) and clinical review (CR)] applied to four pathways. Results: Accreditation in isolation showed benefits in AMI and stroke more than in deliveries and hip fracture; the greatest significant association was with CR in stroke. Certification in isolation showed little benefit in AMI but had a more positive association with the other conditions; the greatest significant association was in PSS with stroke. The combination of accreditation and certification showed the least benefit in EBOP, but significant benefits in SER (AMI), in PSS (AMI, hip fracture and stroke) and in CR (AMI and stroke). Conclusions: Accreditation and certification are positively associated with clinical leadership, systems for patient safety and clinical review, but not with clinical practice. Both systems promote structures and processes, which support patient safety and clinical organization but have limited effect on the delivery of evidence-based patient care. Further analysis of DUQuE data will explore the association of certification and accreditation with clinical outcomes. abstract_id: PUBMED:15325922 Impact of interventionalist volume, experience, and board certification on coronary angioplasty outcomes in the era of stenting. It has been suggested that percutaneous coronary intervention (PCI) by high-volume operators may be associated with better outcomes.
However, the relation between operator and outcome is confounded by hospital caseloads of PCI, with busier hospitals generally having better outcomes. We assessed the effect of operator characteristics (volume of PCI, years in practice, and board certification status) on contemporary outcomes of PCI in a busy center with high-volume operators. Between 1999 and 2001, 12,293 PCIs were performed at our center by 28 interventionalists. Patients' clinical risk was assessed with the previously validated Beaumont PCI Risk Score. Operators were classified as producing low, medium, or high volume (tertiles of annual PCI volume ≤92, 93 to 140, or &gt;140, respectively), as having less, medium, or great experience (tertiles of years in practice ≤8, 9 to 14, or &gt;14 years, respectively), and as board certified (68%) or not. In-hospital death rate and a composite end point (death, coronary artery bypass graft surgery, myocardial infarction, or stroke) occurred in 0.99% and 2.59% of patients, respectively. Operator volume, experience, and board certification showed no univariate or multivariate relation with the study end points. The Beaumont PCI Risk Score showed a strong independent relation with in-hospital death rate (adjusted odds ratio 1.37, 95% confidence interval 1.31 to 1.43, p &lt;0.0001) and composite end point (odds ratio 1.19, 95% confidence interval 1.16 to 1.22, p &lt;0.0001). We conclude that, in contemporary PCI practice at a large center with high-volume operators, in-hospital outcomes are not affected by operator volume, experience, or board certification. Rather, patients' clinical risk score is the overriding determinant of clinical outcomes. Our findings emphasize the power of a well-organized high-volume system to minimize the impact of operator factors on outcomes of PCI. abstract_id: PUBMED:19444406 Acute thoracic pain: Chest Pain Unit - the certification campaign of the German Society of Cardiology. The Chest Pain Unit (CPU) Task Force of the German Society of Cardiology established detailed prerequisites for a CPU certification program to evaluate CPUs across the country. For this reason, a consensus document including criteria for CPUs was developed and published in October 2008. The aim of this effort is to ensure a network of specialized centers that meet or exceed quality-of-care measures in order to improve the standard of care of patients with acute thoracic pain. After application and a formal checkup of the institution, the minimum requirements are assessed by an expert committee of the German Society of Cardiology according to presubmitted documentation of the care processes for patients with acute thoracic pain. Components of certification include characteristic locations, equipment, diagnostic and therapeutic strategies, collaborations, staff education, and organization. Certification specifically implies algorithms for ST segment elevation myocardial infarction, non-ST segment elevation myocardial infarction, unstable angina, stable angina, hypertensive crisis, acute pulmonary embolism, acute aortic syndrome, cardiogenic shock, and resuscitation. Availability of a catheter laboratory ready for use within the facility is mandatory. The CPU and the cath lab are obliged to be available 24 h per day over 365 days per year. After successful documentation review, a certification audit team reviews the facility's application, infrastructure, patient care, and each of the requirements according to the consensus document on site and makes recommendations to the expert committee.
Certification is finally awarded by the expert committee of the German Society of Cardiology to those CPUs which fulfill the dedicated requirements and successfully run through the complete certification process. Within this process, CPUs can plan and organize the delivery of care in a systematic manner, and the differentiation between minimum requirements and best practice allows further developments and innovations. abstract_id: PUBMED:30044257 The Impact of Formal Training and Certification on the Relationship Between Volume and Outcomes in Percutaneous Coronary Interventions. Background: Little data are available on the impact of formal training and certification on the relationship between volumes and outcome in percutaneous coronary interventions (PCIs). The objective of this report is to study the relationship between PCI volume and outcome for a formally trained interventional cardiologist who is certified by the American Board of Internal Medicine - Interventional Cardiology subspecialty board. Methods: The operator practiced at 3 different annual PCI volumes over a 15-year period (2000-2014): &lt;50 PCI/yr (years 2000-2006; n = 179), 50-100 PCI/yr (years 2007-2010; n = 256), and &gt;100 PCI/yr (years 2011-2014; n = 427). Angiographic and procedural success rates were compared between the 3 volume groups, as well as in-hospital cardiovascular events (death, recurrent myocardial infarction, repeat PCI, stroke, or coronary artery bypass surgery). Results: The in-hospital mortality rate throughout the study period was 0.8% and did not differ significantly among the 3 volume groups. There was also no significant difference among the 3 groups with respect to recurrent myocardial infarction or repeat PCI. There was a slightly higher rate of same-stay elective coronary artery bypass grafting in the early low-volume period compared with the other 2 groups (2.2% vs. 0.8% vs. 0.2%; P = 0.04). The overall angiographic and procedural success rates were 97.3% and 96.5%, and they were not significantly different among the 3 groups. Conclusions: Our study shows that the angiographic and procedural success rates of PCI, as well as the in-hospital mortality, do not seem to be dependent on the annual volume for formally trained and certified interventional cardiologists. Answer: Certification and specialization appear to have an impact on the outcome of acute myocardial infarction (AMI) based on the evidence from several studies. A study in Taiwan found that hospitals with disease-specific care (DSC) certification for coronary care showed significant improvement in in-hospital mortality rates for AMI during and after certification periods compared to hospitals without such certification. This suggests that DSC certification may lead to better clinical outcomes for AMI, potentially through improvements in healthcare processes (PUBMED:31834024). Another study indicated that treatment by certified physicians was associated with a 15% reduction in mortality among patients with AMI. This study suggests that certification status, along with other factors such as being a cardiologist, managing a larger number of AMI patients, and being closer to graduation from medical school, can be associated with lower patient mortality (PUBMED:11112721). Furthermore, a retrospective cohort study in Pennsylvania, USA, found that initial certification and maintenance of certification (MOC) by internists were associated with lower in-hospital mortality for patients with AMI or congestive heart failure.
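As a rough consistency check on that study's effect size (a back-of-envelope calculation, with the baseline mortality inferred from the abstract's own figures rather than stated there): a 15.87% relative decrease combined with an absolute difference of 5.09 per 1000 implies a mortality of about $5.09/0.1587 \approx 32.1$ per 1000 for patients of non-certified physicians and about $27.0$ per 1000 for patients of certified physicians, so that $$\mathrm{OR} \approx \frac{0.0270/(1-0.0270)}{0.0321/(1-0.0321)} \approx 0.84,$$ in line with the reported adjusted OR of 0.835 (PUBMED:35470191).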
Patients cared for by physicians with initial certification had a significant decrease in mortality, and an additional advantage was associated with maintaining that certification (PUBMED:35470191). However, the relationship between certification and outcomes is not always straightforward. One study found that while board-certified physicians had higher use of guideline-recommended therapies for AMI, board certification was not associated with differences in 30-day mortality (PUBMED:16637823). Another study did not observe a consistent association between interventional cardiology certification and the outcomes of percutaneous coronary intervention (PCI) procedures, although there was a higher risk of mortality and emergency coronary artery bypass grafting in patients treated by non-certified physicians (PUBMED:26384518). In summary, certification and specialization in cardiology and related fields are generally associated with improved healthcare performance indicators and clinical outcomes for AMI. However, the impact of certification may vary depending on the specific context and other contributing factors.
Instruction: Is chronic total coronary occlusion a risk factor for long-term outcome after minimally invasive bypass grafting of the left anterior descending artery? Abstracts: abstract_id: PUBMED:20417767 Is chronic total coronary occlusion a risk factor for long-term outcome after minimally invasive bypass grafting of the left anterior descending artery? Background: Chronic total occlusion (CTO) of coronary vessels is still a challenge for percutaneous coronary intervention and recent data show unfavorable long-term results compared with medical therapy. It is unclear whether CTO is also a negative predictor for long-term outcome in minimally invasive bypass grafting. Methods: From 1996 to 2007 minimally invasive surgical revascularization of the left internal mammary artery to the left anterior descending artery (LAD) was performed in 1,800 patients. Demographic data, risk factors, perioperative outcome, and annual follow-up were obtained from all patients. Estimated survival and freedom from major adverse cardiac and cerebrovascular events or recurrence of angina were calculated with log-rank tests, and Cox regression analysis was used to identify independent risk factors, for patients with (420 patients) and without (1,380 patients) CTO of the LAD. Results: Revascularization of the LAD could be completed in all but one patient (99.8% success rate with CTO). At 5 years estimated overall survival was 90.5% (95% confidence interval [CI] 85.8 to 95.5) with CTO and 90.4% (95% CI 85.8 to 95.1) without CTO (p = 0.91). Freedom from major adverse cardiac and cerebrovascular events and angina with or without CTO at 5 years was 83.2% (95% CI 77.6 to 88.8) and 85.5% (95% CI 82.6 to 88.1), respectively (p = 0.64). Chronic occlusion of the target vessel and other preoperative factors were not identified as risk factors for major adverse cardiac and cerebrovascular events during follow-up. Conclusions: As opposed to percutaneous coronary intervention, minimally invasive bypass grafting of a totally occluded LAD is almost always possible and chronic occlusion is not a negative predictor for short- and long-term outcome. Minimally invasive bypass grafting of the LAD should be considered the treatment of choice for chronically occluded left anterior descending arteries. abstract_id: PUBMED:10220678 Early experience with minimally invasive direct coronary artery bypass grafting with the internal thoracic artery. Objective: Minimally invasive direct coronary artery bypass is performed under direct vision without sternotomy or cardiopulmonary bypass. The technique can be used in both primary and reoperative cases by employing the internal thoracic artery to perform arterial revascularization of the anterior surface of the heart. Methods: Patients were selected who had significant coronary artery disease limited to 1 or 2 coronary distributions on the anterior surface of the heart. Coronary target vessels were grafted with the internal thoracic artery through a small anterior thoracotomy. After partial heparinization the anastomosis was facilitated by local coronary occlusion and handheld stabilization. Results: Between August 1994 and July 1997, 162 patients underwent minimally invasive direct coronary artery bypass grafting with the internal thoracic artery.
The left and right internal thoracic arteries were used for grafting of the left anterior descending artery in 142 patients (88%), the proximal right coronary artery in 7 patients (4%), existing saphenous vein grafts in 5 patients (3%), and diagonal branches in 2 patients (1%). Sequential grafting with the left internal thoracic artery was performed in 2 patients (1%) and bilateral internal thoracic artery grafting was performed in 4 patients (3%). Eight patients (4.9%) died within 30 days after the operation, 3 of cardiac causes. Seven additional patients died during the follow-up period. Nine patients (5.6%) required reintervention for graft stenosis or occlusion during follow-up. Of 141 patients seen 2 or more weeks after the operation, 135 (96%) had resolution of their anginal symptoms at a mean follow-up of 12 months (range 0-31 months). Conclusions: Anterior minimally invasive direct coronary artery bypass grafting with the internal thoracic artery avoids the risks of repeated sternotomy, aortic manipulation, and cardiopulmonary bypass. There was a low rate of reintervention, and patients had excellent resolution of anginal symptoms. Postoperative length of stay was comparatively short, and continued follow-up will be essential to evaluate long-term graft patency and patient survival. abstract_id: PUBMED:21619999 Successful combined minimally invasive direct coronary artery bypass and transapical aortic valve implantation. Transapical aortic valve implantation is indicated in high-risk patients with aortic stenosis and peripheral vascular disease requiring aortic valve replacement. Minimally invasive direct coronary artery bypass grafting is also a valid, minimally invasive option for myocardial revascularization in patients with critical stenosis on the anterior descending coronary artery. Both procedures are performed through a left minithoracotomy, without cardiopulmonary bypass, aortic cross-clamping, and cardioplegic arrest. We describe a successful combined transapical aortic valve implantation and minimally invasive direct coronary bypass in a high-risk patient with left anterior descending coronary artery occlusion and severe aortic valve stenosis. abstract_id: PUBMED:9567045 Pitfall of minimally invasive direct coronary artery bypass. Six carefully selected patients underwent minimally invasive direct coronary artery bypass grafting (MIDCAB). While monitoring left ventricular function with transesophageal echocardiography, MIDCAB was done by performing a small left thoracotomy through the fourth intercostal space, dissection of the left internal thoracic artery without thoracoscopy, ischemic preconditioning, and grafting of the internal thoracic artery to the left anterior descending coronary artery with 8-0 polypropylene continuous suture. A home-made cardiac stabilizer and Visuflow enabled us to perform precise suturing of the internal thoracic artery. The patency of all grafts was confirmed by early transthoracic Doppler echocardiography and selective angiography. A new stenosis of the coronary artery distal to the anastomosis was detected, probably due to coronary snaring, in one patient. The anastomosis sites were confined to the distal segments of the left anterior descending coronary artery in MIDCAB patients. The optimal anastomosis site may be missed in patients with proximal left anterior descending artery disease.
An experimental study of myocardial tissue oxygen saturation using near infrared spectroscopy showed that two cycles of coronary occlusion and reperfusion provided satisfactory effects of ischemic preconditioning. Measurement of the myocardial tissue oxygen saturation may be helpful for confirming effective ischemic preconditioning and a safe coronary occlusion during MIDCAB. Although MIDCAB is an attractive procedure, we should consider the accuracy of anastomosis, the risk of possible incomplete revascularization, the indications, and long-term results. abstract_id: PUBMED:10220684 On-line assessment of regional ventricular wall motion by transesophageal echocardiography with color kinesis during minimally invasive coronary artery bypass grafting. Objective: Our objective was to determine the changes in regional ventricular wall motion during minimally invasive direct coronary artery bypass grafting by color kinesis using transesophageal echocardiography. Methods: Minimally invasive coronary artery bypass grafting was performed in 34 patients, during which transesophageal echocardiography was used. Thirteen patients had isolated disease of the left anterior descending artery. Regional ventricular wall motion was analyzed by color kinesis with the SONOS 2500 transesophageal echocardiograph (Hewlett-Packard Co, Andover, Mass). On-line assessment of regional wall motion was continued during the operation. Results: Wall motion abnormalities during ischemia were present in 4 cases, left ventricular mid-anterior hypokinesis in 3 cases, and left ventricular apical-lateral hypokinesis in 1 case. In all cases, wall motion was maintained after bypass. In patients with total coronary occlusion, changes in wall motion did not occur during anastomosis. Conclusions: Color kinesis allowed us to evaluate the change in regional ventricular wall motion induced by myocardial ischemia during minimally invasive coronary artery bypass grafting both objectively and quantitatively. abstract_id: PUBMED:24618056 Impact of coronary chronic total occlusions on long-term mortality in patients undergoing coronary artery bypass grafting. Objectives: The presence of a coronary chronic total occlusion (CTO) is a common consideration in favour of surgical revascularization. However, studies have shown that not all patients undergoing coronary artery bypass grafting (CABG) have a bypass graft placed on the CTO vessel. The aim of this study was to determine the prevalence of CTO among patients referred for CABG and the significance of incomplete CTO revascularization in these patients. Methods: The study included 405 consecutive patients undergoing CABG during a 2-year period. Clinical, echocardiographic and angiographic data were collected. Determination of whether or not a CTO was bypassed was made by correlating data from the surgical reports and preprocedural angiograms. The primary end point of this study was 5-year all-cause mortality. Results: Two hundred and twenty-one CTOs were found in 174 patients: 132 patients (76%) had 1 CTO; 37 (21%) had 2 CTOs and 5 (3%) had 3 CTOs. Of the 221 CTOs, 191 (86%) were bypassed. All left anterior descending (LAD) CTOs were grafted; however, 12% of left circumflex and 22% of right coronary artery CTOs did not receive bypass grafts. Incomplete CTO revascularization was associated with older age, more comorbidities, including stroke, renal impairment and lower ejection fraction. However, incomplete CTO revascularization was not associated with increased 5-year mortality.
Conclusions: Coronary CTOs are a common finding in patients referred for bypass surgery. The presence of a CTO is not independently associated with an adverse long-term outcome. While most CTOs are successfully bypassed, failure to revascularize a non-LAD CTO is not associated with adverse long-term outcome. abstract_id: PUBMED:9567035 Eighty cases of minimally invasive direct coronary artery bypass grafting. Between March 1996 and November 1997, 80 patients with a mean age of 70 years (45-89) underwent minimally invasive direct coronary artery bypass grafting via anterior minithoracotomy or subxiphoid incision with left internal thoracic artery and right epigastric artery using local coronary occlusion on a beating heart. Cardiac-related hospital mortality was 2.5% (2/80). Routine angiographic assessment of anastomotic patency showed an overall patency rate of 94.6%, but demonstrated severe stenosis at the anastomotic site in 8 patients. Further study is required to establish the efficacy of minimally invasive direct coronary artery bypass grafting and combination therapy with PTCA. abstract_id: PUBMED:9436550 Minimally invasive direct coronary artery bypass grafting: two-year clinical experience. Background: Interest in minimally invasive coronary artery bypass grafting has been increasing. Methods: From April 1994 through December 1996, 199 patients (age, 36 to 93 years) underwent minimally invasive coronary artery bypass grafting through minithoracotomy, subxiphoid, and lateral thoracotomy incisions, with internal mammary artery, gastroepiploic artery, and composite grafts placed using local coronary artery occlusion. Results: The conversion rate to sternotomy was 7% (14/199). Preoperative risk factors included unstable angina (n = 83), reoperative coronary artery bypass grafting (n = 54), low ejection fraction (n = 53), congestive heart failure (n = 44), renal insufficiency (n = 25), chronic obstructive pulmonary disease (n = 36), cerebrovascular accident (n = 22), and diffuse vascular disease (n = 47). Morbidity included wound infections (n = 5), reoperation for management of bleeding (n = 6) and acute graft occlusion (n = 2), perioperative stroke (n = 1), atrial fibrillation (n = 14), and perioperative myocardial infarction (n = 7). The operative mortality was 3.8% (7/185). The number of grafts placed in 185 patients was as follows: single, 156; double, 28; and triple, 1. Early (less than 36 hours) angiography and Doppler flow assessment of the coronary anastomoses in 85% of the patients showed that 92% were patent. Routine use of mechanical stabilization of the coronary artery since April 1996 was found to be associated with an increase in the patency rate of the left internal mammary artery-left anterior descending coronary artery anastomosis to 97%, versus 89% (p = 0.055) associated with conventional immobilization techniques. Of the 148 patients followed up beyond 1 month (range, 1 to 32 months; mean, 9.2 ± 7.4 months) postoperatively, 3 have died (3 to 7 months), and of the 145 survivors the rate of freedom from cardiac-related events (percutaneous transluminal coronary angioplasty, reoperation, readmission for recurrent angina, and congestive heart failure) was 93%. Conclusions: The minimally invasive coronary artery bypass grafting operation is safe and effective. Regional cardiac wall mechanical immobilization enhances the early graft patency and must be considered an essential part of this operation.
abstract_id: PUBMED:10369640 Clinical experience with minimally invasive reoperative coronary bypass surgery. Objective: To minimize the risk of standard and reoperative coronary artery bypass, we developed a minimally invasive approach. In this study we have evaluated the effectiveness of this technique. Method: Between April 1994 and September 1995, 12 men and 6 women, aged 55-84 years (mean, 69 years) with chronic stable angina (4) and recent post-myocardial infarction unstable angina (14), with left ventricular ejection fractions ranging from 17% to 60% (mean 37%), underwent reoperative coronary artery bypass grafting using 7-cm mini-left and right anterior thoracotomy and subxiphoid incisions. Coronary artery anastomoses were carried out on beating hearts with local coronary occlusion. Ischemic preconditioning, beta and calcium channel blockers, and the maintenance of mean arterial pressure at 75-80 mm Hg were used as adjuncts for myocardial protection. The internal mammary artery was isolated under direct vision up to the second rib with excision of the fourth costal cartilage. Coronary artery target sites were the left anterior descending in 12, right coronary artery in 4, obtuse marginal in 3, posterior descending in 1 and diagonal branch in 1 patient. Arterial grafts (mammary, right gastroepiploic, radial), either as single or composite grafts, were used liberally. Preoperative risk factors included congestive heart failure (7), chronic renal insufficiency (5), second reoperation (2), third reoperation (1), cerebrovascular disease (5), prior angioplasty (8) and preoperative intra-aortic balloon pumping in two patients. Results: There was no perioperative mortality, and morbidity was minimal. Twelve patients underwent patency study of the grafts 48-72 h postoperatively. Ten of the twelve grafts were patent; one internal mammary artery graft to the left anterior descending coronary artery (&lt;1.5 mm) early in our series was occluded and one additional left internal mammary graft had a kink several centimeters away from the anastomosis, which was successfully opened by angioplasty. At a mean follow-up interval of 8 months all 16 surviving patients are in functional class I or II and all of them remain free of angina. Conclusion: In selected patients reoperative coronary artery bypass grafting can be performed with this minimally invasive approach with a low perioperative morbidity and mortality rate and satisfactory early graft patency rate with good symptomatic improvement. abstract_id: PUBMED:26936837 Entrapment of guidewire in left anterior descending artery: Surgical management. Entrapment of a guidewire during coronary artery interventions is rare and requires prompt treatment. A 52-year-old man underwent a primary percutaneous transluminal coronary angioplasty to the left anterior descending artery. A fractured guidewire was retained in the distal left anterior descending artery, which caused left ventricular dysfunction and total occlusion of the left anterior descending artery. He underwent endarterectomy with removal of the guidewire and bypass grafting. He had a normal postoperative period. The modalities for management of an entrapped guidewire are discussed. Answer: Chronic total coronary occlusion (CTO) is not a risk factor for long-term outcome after minimally invasive bypass grafting of the left anterior descending artery (LAD).
A study that included 1,800 patients who underwent minimally invasive surgical revascularization of the LAD with the left internal mammary artery from 1996 to 2007 found that the presence of CTO in the LAD did not negatively impact the estimated survival or freedom from major adverse cardiac and cerebrovascular events and angina at 5 years. The success rate of revascularization in patients with CTO was 99.8%, and chronic occlusion of the target vessel was not identified as a risk factor for major adverse cardiac and cerebrovascular events during follow-up. The study concluded that minimally invasive bypass grafting of a totally occluded LAD is almost always possible and that CTO is not a negative predictor for short- and long-term outcomes, suggesting that minimally invasive bypass grafting should be considered the treatment of choice for chronically occluded LADs (PUBMED:20417767).
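The headline comparison in that study rests on log-rank tests of 5-year survival curves. The Python sketch below reproduces the analysis pattern on simulated data calibrated to the reported roughly 90% 5-year survival in both groups; it uses the lifelines package, and the data, seed, and scale parameter are assumptions for illustration, not the study's records.

import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Exponential survival with scale 47.5 years gives exp(-5/47.5) ~ 0.90,
# i.e. about 90% survival at 5 years, as reported for both groups.
t_cto = np.minimum(rng.exponential(scale=47.5, size=420), 5.0)
t_no_cto = np.minimum(rng.exponential(scale=47.5, size=1380), 5.0)
e_cto = t_cto < 5.0          # death observed before 5-year censoring
e_no_cto = t_no_cto < 5.0

res = logrank_test(t_cto, t_no_cto,
                   event_observed_A=e_cto, event_observed_B=e_no_cto)
print(f"log-rank p = {res.p_value:.2f}")  # large p: no detectable difference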
Instruction: Does sex moderate the relationship between anxiety and pain? Abstracts: abstract_id: PUBMED:23346964 Does sex moderate the relationship between anxiety and pain? Objectives: Sex differences exist in the relationship between anxiety and pain, although findings are mixed. One reason could be that a number of different anxiety measures have been used. Therefore, this study aimed to identify the core components within commonly used pain anxiety measures, and see whether these components are differentially related to sensation and pain thresholds in men and women. Design, Main Outcome Measures: One hundred and eighty-nine healthy adults (119 female) completed the Fear of Pain Questionnaire, Pain Catastrophising Scale, Pain Anxiety Symptoms Scale, Anxiety Sensitivity Index-3 and the Depression Anxiety Stress Scale. Thermal sensation and pain thresholds, mechanical sensation and pressure pain thresholds were also collected. Results: A Principal Components Analysis of anxiety measures revealed three constructs: general distress, cognitive intrusion and fear of pain from injury/insult. Sex did not moderate the relationship between these anxiety constructs and sensation/pain thresholds. However, a significant main effect of sex was found to predict thermal pain thresholds. Conclusion: Preliminary indications suggest that pain anxiety dimensions can be reduced to three core constructs, and used to examine pain sensation. However, sex did not moderate this relationship. Further research is required to establish the extent and strength of sex differences in the relationship between anxiety and pain. abstract_id: PUBMED:28968238 Sex Differences in the Psychophysical Response to Contact Heat in Moderate Cognitive Impairment Alzheimer's Disease: A Cross-Sectional Brief Report. Background: People with Alzheimer's disease (AD) report pain less frequently and receive less pain medication than people without AD. Recent studies have begun to elucidate how pain may be altered in those with AD. However, potential sex differences in pain responsiveness have never been explored in these patients. It is unclear whether sex differences found in prior studies of healthy young and older individuals extend to people with AD. Objective: The purpose of this study was to examine sex differences in the psychophysical response to experimental thermal pain in people with AD. Methods: Cross-sectional analysis of 14 male and 14 female age-matched (≥65 years of age, median = 74) and AD severity-matched (Mini-Mental State Exam score &lt;24, median = 16) communicative people who completed thermal psychophysics. Results: There was a statistically significant main effect of sex for both temperature and unpleasantness ratings that persisted after controlling for average and current pain (mixed-effects general linear model: temperature: p = 0.004, unpleasantness: p &lt; 0.001). Females reported sensing mild pain and moderate pain percepts at markedly lower temperatures than did males (mild: Cohen's d = 0.72, p = 0.051, moderate: Cohen's d = 0.80, p = 0.036). By contrast, males rated mild and moderate thermal pain stimuli as more unpleasant than did females (mild: Cohen's d = 0.80, p = 0.072, moderate: Cohen's d = 1.32, p = 0.006). There were no statistically significant correlations of temperature with perceived unpleasantness for mild or moderate pain (rs = 0.29 and rs = 0.20 respectively, p &gt; 0.05).
Conclusions: Results suggest experimental pain-related sex differences persist in older adults with AD in a different manner than those previously demonstrated in cognitively intact older adults. These findings could potentially aid in developing targeted pain management approaches in this vulnerable population. Further studies are warranted to replicate the findings from this pilot work. abstract_id: PUBMED:34740794 The role of negative emotions in sex differences in pain sensitivity. Pain perception varies widely among individuals due to varying degrees of biological, psychological, and social factors. Notably, sex differences in pain sensitivity have been consistently observed in various experimental and clinical investigations. However, the neuropsychological mechanism underlying sex differences in pain sensitivity remains unclear. To address this issue, we quantified pain sensitivity (i.e., pain threshold and tolerance) using the cold pressor test and negative emotions (i.e., pain-related fear, pain-related anxiety, trait anxiety, and depression) using well-established questionnaires and collected magnetic resonance imaging (MRI) data (i.e., high-resolution T1 structural images and resting-state functional images) from 450 healthy subjects. We observed that, as compared to males, females exhibited lower pain threshold and tolerance. Notably, sex differences in pain sensitivity were mediated by pain-related fear and anxiety. Specifically, pain-related fear and anxiety were the complementary mediators of the relationship between sex and pain threshold, and they were the indirect-only mediators of the relationship between sex and pain tolerance. In addition, structural MRI data revealed that the amygdala subnuclei (i.e., the lateral and basal nuclei in the left hemisphere) volumes were the complementary mediators of the relationship between sex and pain-related fear, which further influenced pain sensitivity. Altogether, our results provided a comprehensive picture of how negative emotions (especially pain-related negative emotions) and related brain structures (especially the amygdala) contribute to sex differences in pain sensitivity. These results deepen our understanding of the neuropsychological underpinnings of sex differences in pain sensitivity, which is important for tailoring personalized pain treatment according to sex and the level of pain-related negative emotions in patients with painful conditions. abstract_id: PUBMED:35616460 Guidelines in Practice: Moderate Sedation and Analgesia. Moderate sedation and analgesia (MSA) can help patients experience less anxiety and discomfort, tolerate procedures that do not require general anesthesia, and maintain the ability to respond to verbal commands. Nurses administer MSA in a variety of clinical areas, and facility leaders may have difficulty creating a single standard of care for this task. Completion of a presedation assessment that includes the patient in the decision-making process is an important aspect of care. When administering MSA, nurses should have immediate unrestricted patient access and no competing responsibilities that could distract them from monitoring and assessing the patient. Nurses should complete education and competency verification activities before administering MSA.
AORN recently revised the "Guideline for care of the patient receiving moderate sedation/analgesia," and this article addresses the standard of care, the presedation assessment, patient monitoring, and competency; it also includes scenarios describing specific concerns in two patient care areas. abstract_id: PUBMED:16885016 Evidence for sex differences in the relationships of pain, mood, and disability. Unlabelled: Disability demonstrates strong univariate associations with pain and negative mood. These relationships are more complex at the multivariate level and might be further complicated by sex differences. We investigated sex differences in the relationships of pain and negative mood to overall disability and to disability in specific functional domains. One hundred ninety-seven consecutive patients with low back, myofascial, neck, arthritis, and fibromyalgia pain were recruited from university pain clinics and completed measures of disability and negative mood. Overall disability and disability in voluntary activities were significantly associated with pain and negative mood (factor score) for both sexes. Significant sex differences emerged in the strength of the disability-mood relationship, with women evincing a stronger relationship. Disability in obligatory activities was also significantly related to pain and negative mood for both sexes; however, there were no sex differences in the strength of these relationships. Mediation analyses indicated that, in men, negative mood partially mediated the relationship between pain and both overall disability and disability in voluntary activities; mediation was not supported for disability in obligatory activities. In women, negative mood fully mediated the relationship between pain and all 3 types of disability. These data suggest that disability is more directly related to pain in men. In women, the effect of pain on disability appears to operate through negative mood. Perspective: Results of this study demonstrate that sex differences exist in the relationships of pain, mood, and disability. Men and women might thus benefit from treatment interventions that differentially target these variables. abstract_id: PUBMED:38149036 Sex differences in the association between smoking and central sensitization: A cross-sectional study. Introduction: Despite the acknowledged interconnection between smoking and pain, research on the relationship between smoking and central sensitization (CS) is scarce; this pain mechanism has attracted recent research attention. Considering potential sex differences, this cross-sectional study aimed to investigate the association between smoking and CS. Methods: Overall, 415 adult participants from an outpatient clinic underwent evaluation. The analysis focused on determining the relationship between smoking status and CS by differentiating between sexes. Data were collected on smoking presence or absence (independent variable) and CS (dependent variable) for each sex, with age, education level, drinking history, depression, and anxiety as covariates. CS was evaluated using the Central Sensitization Inventory. Following a descriptive analysis of the study population's characteristics, logistic regression analysis was employed to assess the relationships. Results: The average participant age was 42.3 years, with 59% being women. Among women, a significant association was found between smoking status and higher CS severity (AOR=3.21; 95% CI 1.29-7.99, p&lt;0.01), after accounting for confounding variables. 
Conversely, no significant association was observed for men (AOR=1.50; 95% CI 0.63-3.60, p=0.36). Interaction by sex on the relationship between smoking and CS was not statistically significant (p=0.23). Conclusions: This study suggests a potential association between smoking and CS in women, whereas no conclusive relationship was observed among men. These findings indicate the necessity of considering CS when examining the relationship between smoking and pain. abstract_id: PUBMED:34349555 Moderate to Severe Osteoarthritis Pain and Its Impact on Patients in the United States: A National Survey. Purpose: Osteoarthritis (OA) is one of the most common causes of chronic pain and a leading cause of disability in the US. The objective of this study was to examine the clinical and economic burden of OA by pain severity. Patients And Methods: We used nationally representative survey data. Adults ≥18 years with self-reported physician-diagnosed OA and experiencing OA pain were included in the study. OA pain severity was measured using the Short Form McGill Pain Questionnaire Visual Analog Scale (SF-MPQ-VAS). Data were collected for demographics, clinical characteristics, health-related quality of life (HRQoL), productivity, OA treatment, adherence to pain medication, and healthcare resource utilization. Univariate analysis was performed to examine differences between respondents with moderate-to-severe OA pain vs those with mild OA pain. Results: Higher proportions of respondents with moderate-to-severe OA pain (n=3798) compared with mild OA pain (n=2038) were female (69.4% vs 57.3%), &lt;65 years of age (54.8% vs 43.4%), and not employed (70.6% vs 64.5%). Respondents with moderate-to-severe OA pain experienced OA pain daily (80.8% vs 48.8%), were obese (53.0% vs 40.5%), had more comorbidities (sleep disturbance, insomnia, depression, and anxiety), and reported significantly poorer health status and HRQoL, and greater productivity and activity impairment (all P&lt;0.05). Moderate-to-severe OA pain respondents were prescribed significantly more pain medications than mild OA pain respondents (41.0% vs 17.0%) and had higher adherence (75.9% vs 64.1%) yet were less satisfied with their pain medications (all P&lt;0.001). Outpatient and emergency room visits, and hospitalizations in the 6 months prior to the survey were significantly higher in moderate-to-severe OA pain respondents vs those with mild OA pain (all P&lt;0.05). Conclusion: Patient and clinical burden was significantly greater in moderate-to-severe OA pain respondents vs mild OA pain respondents and may inform decision-making for appropriate resource allocation and effective management strategies that target specific subgroups. abstract_id: PUBMED:27076175 Developmental trajectories of paediatric headache - sex-specific analyses and predictors. Background: Headache is the most common pain disorder in children and adolescents and is associated with diverse dysfunctions and psychological symptoms. Several studies have documented sex-specific differences in headache frequency. To date, no study has examined sex-specific patterns of change in paediatric headache across time while including pain-related somatic and (socio-)psychological predictors. Method: Latent Class Growth Analysis (LCGA) was used to identify different trajectory classes of headache across four annual time points in a population-based sample (n = 3,227; mean age 11.34 years; 51.2% girls).
In multinomial logistic regression analyses the influence of several predictors on the class membership was examined. Results: For girls, a four-class model was identified as the best-fitting model. While the majority of girls reported no (30.5%) or moderate headache frequencies (32.5%) across time, one class with a high level of headache days (20.8%) and a class with an increasing headache frequency across time (16.2%) were identified. For boys a two-class model with a 'no headache class' (48.6%) and 'moderate headache class' (51.4%) showed the best model fit. Regarding logistic regression analyses, migraine and parental headache proved to be stable predictors across sexes. Depression/anxiety was a significant predictor for all pain classes in girls. Life events, dysfunctional stress coping and school burden were also able to differentiate at least between some classes in both sexes. Conclusions: The identified trajectories reflect sex-specific differences in paediatric headache, as seen in the number and type of classes extracted. The documented risk factors can deliver ideas for preventive actions and considerations for treatment programmes. abstract_id: PUBMED:25146012 An analysis of moderate sedation protocols used in dental specialty programs: a retrospective observational study. Introduction: Pain and anxiety control is critical in dental practice. Moderate sedation is a useful adjunct in managing a variety of conditions that make it difficult or impossible for some people to undergo certain dental procedures. The purpose of this study was to analyze the sedation protocols used in 3 dental specialty programs at the Case Western Reserve University School of Dental Medicine, Cleveland, OH. Methods: A retrospective analysis was performed using dental school records of patients receiving moderate sedation in the graduate endodontic, periodontic, and oral surgery programs from January 1, 2010, to December 31, 2012. Information was gathered and the data compiled regarding the reasons for sedation, age, sex, pertinent medical conditions, American Society of Anesthesiologists physical status classifications, routes of administration, drugs, dosages, failures, complications, and other information that was recorded. Results: The reasons for the use of moderate sedation were anxiety (54%), local anesthesia failures (15%), fear of needles (15%), severe gag reflex (8%), and claustrophobia with the rubber dam (8%). The most common medical conditions were hypertension (17%), asthma (15%), and bipolar disorder (8%). Most patients were classified as American Society of Anesthesiologists class II. More women (63.1%) were treated than men (36.9%). The mean age was 45 years. Monitoring and drugs varied among the programs. The most common tooth treated in the endodontic program was the mandibular molar. Conclusions: There are differences in the moderate sedation protocols used in the endodontic, periodontic, and oral surgery programs regarding monitoring, drugs used, and record keeping. abstract_id: PUBMED:34620202 The impact of Post-COVID-Syndrome on functioning - results from a community survey in patients after mild and moderate SARS-CoV-2-infections in Germany. Background: In COVID-19 survivors, a relatively high number of long-term symptoms has been observed. Besides their impact on quality of life, these symptoms (now called Post-COVID-Syndrome) may have an impact on functioning and may also hinder participation in social life in affected people.
However, little is known about the development of such a syndrome in patients with mild and moderate COVID-19 who did not need hospitalization or intensive care. Methods: A cross-sectional study in 1027 patients with mild or moderate COVID-19 was performed in two communities in Bavaria, Germany. The Rehabilitation-Needs-Survey (RehabNeS), including the Short Form 36 Health Survey (SF-36) on health-related quality of life, was used. Descriptive statistics were calculated. Results: In all, 97.5% of patients reported one symptom in the infection stage, such as fatigue, respiratory problems, limitations of the senses of taste and smell, fear and anxiety and other symptoms. In this time period, 84.1% of the participants experienced activity limitations and participation restrictions such as carrying out daily routines, handling stress, getting household tasks done, caring for/supporting others, and relaxing and leisure concerns. In all, 61.9% of participants reported persisting symptoms more than 3 months after infection. These were fatigue, sleep disturbances, respiratory problems, pain, fear and anxiety, and restrictions in movement; 49% of the participants reported activity limitations and participation restrictions. Predominantly, these were handling stress, carrying out daily routines, looking after one's health, relaxing and leisure activities and doing housework. The impacts on quality of life and vocational performance were rather low. Conclusion: The results show that long-term symptoms after mild and moderate COVID-19 are common and lead to limitations of activities and participation. However, it seems that in most cases they are not severe and do not lead to frequent or serious issues with quality of life or work ability. Answer: The relationship between anxiety and pain in relation to sex differences has been explored in various studies, but the findings are not entirely consistent. According to the research abstracts provided, some studies have found that sex does not moderate the relationship between anxiety constructs and sensation/pain thresholds (PUBMED:23346964), while others have identified sex-specific differences in pain sensitivity and responses to pain. For instance, one study found that sex differences in pain sensitivity were mediated by pain-related fear and anxiety, with females exhibiting lower pain threshold and tolerance compared to males (PUBMED:34740794). Another study reported that females with Alzheimer's disease sensed mild and moderate pain at lower temperatures than males, but males rated the same stimuli as more unpleasant than females (PUBMED:28968238). This suggests that experimental pain-related sex differences persist in older adults with AD in a different manner than those previously demonstrated in cognitively intact older adults. Additionally, evidence for sex differences in the relationships of pain, mood, and disability has been found, with women showing a stronger relationship between disability and mood compared to men (PUBMED:16885016). Furthermore, a study on smoking and central sensitization found a significant association between smoking status and higher central sensitization severity in women, but not in men (PUBMED:38149036). Overall, while some studies do not find sex to be a moderating factor between anxiety and pain (PUBMED:23346964), others provide evidence that sex differences do exist in pain sensitivity and responses, and these differences are influenced by negative emotions such as fear and anxiety (PUBMED:34740794).
These findings suggest that sex may play a role in the relationship between anxiety and pain, but further research is required to establish the extent and strength of these differences.
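Moderation (interaction) tests of the kind cited above, such as the non-significant smoking-by-sex interaction in PUBMED:38149036 (p=0.23), are typically implemented as a product term in a logistic regression. The sketch below illustrates the idea on simulated data; the variable names, codings, effect sizes, and data are assumptions for demonstration only, not the original study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
sex = rng.integers(0, 2, n)            # 0 = male, 1 = female (assumed coding)
smoking = rng.integers(0, 2, n)        # 0 = non-smoker, 1 = smoker
logit_p = -1.0 + 0.7 * smoking * sex   # simulated effect present only in women
cs = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # central sensitization indicator

df = pd.DataFrame({"cs": cs, "smoking": smoking, "sex": sex})
fit = smf.logit("cs ~ smoking * sex", data=df).fit(disp=False)
print(np.exp(fit.params["smoking:sex"]),  # odds ratio for the interaction term
      fit.pvalues["smoking:sex"])         # analogue of the reported interaction p-value
```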
Instruction: Is pregnancy associated with severe dengue? Abstracts: abstract_id: PUBMED:30902046 Factors associated with the development of Congenital Zika Syndrome: a case-control study. Background: We aim to investigate possible maternal- and pregnancy-related factors associated with the development of Congenital Zika Syndrome (CZS) in children of mothers with probable gestational infection. Methods: In this case-control study, we recruited mother-infant pairs between May 2015 and October 2017 in a pediatric infectious disease clinic in Rio de Janeiro. Inclusion criteria required either that the mother reported Zika infection symptoms during pregnancy or that the infant presented with clinical or imaging features of the CZS. Exclusion criteria included detection of an alternative cause for the patient's presentation or negative polymerase chain reaction assays for Zika in all specimens tested within 12 days from the beginning of maternal symptoms. Infants with CZS (CDC definition) were selected as cases and infants without CZS, but with probable maternal Zika virus infection during pregnancy, were selected as controls. Maternal and pregnancy-related information was collected and its relationship to the presence of congenital anomalies due to CZS was assessed by Fisher exact or Mann-Whitney test. Results: Out of the 42 included neonates, 24 (57.1%) were diagnosed with CZS (cases). The mean maternal age at birth was 21 years. The early occurrence of maternal symptoms during pregnancy was the only variable associated with CZS (odds ratio = 0.87, 95% CI: 0.78-0.97). Cases' mothers presented symptoms up to the 25th week of gestational age (GA), while controls' mothers presented up to the 36th week of GA. Income; illicit drug, alcohol, or tobacco use during pregnancy; and other infections during pregnancy (including previous dengue infection) were not associated with CZS. Conclusions: Our study corroborates the hypothesis that Zika virus infection earlier in pregnancy is a risk factor for the occurrence of congenital anomalies in the fetus. abstract_id: PUBMED:33379281 Zika Virus Infection in Tourists Travelling to Thailand: Case Series Report. Thailand is a popular tourist destination where Zika virus (ZIKV) transmission is currently active. To our knowledge, there are no reports of ZIKV infection imported from Thailand and affecting children. Here, we describe the clinical and microbiological findings in three cases of vector-borne ZIKV infection: an 11-year-old boy, a 2-year-old girl, and her pregnant mother, this last case leading to the prenatal exposure of her second baby to ZIKV in the second trimester of pregnancy. All patients were diagnosed after traveling to Thailand between September 2019 and January 2020. No complications were detected in any patient at follow-up, and the prenatally exposed fetus showed no abnormalities during intensive antenatal health care monitoring. On postnatal study, there were no clinical signs or microbiological findings of mother-to-child ZIKV transmission. ZIKV IgG was initially positive, but seroreversion occurred at 4 months of life. This report describes the clinical and serological evolution of vector-borne ZIKV infection occurring in dengue-naïve tourists returning from Thailand. The World Health Organization currently recommends that pre-travel advice to prevent arbovirus infection should be maintained in travelers to Southeast Asia.
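For case-control designs such as the CZS study above, the headline measure is an odds ratio with a Wald confidence interval. A minimal sketch of that computation follows; the 2x2 counts are invented for illustration and are not taken from any of the included abstracts.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

print(odds_ratio_ci(18, 6, 10, 8))  # hypothetical counts -> OR = 2.4 with 95% CI
```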
abstract_id: PUBMED:29738826 Co-infection with Zika and Chikungunya viruses associated with fetal death-A case report. We describe a case of fetal death associated with a recent infection by Chikungunya virus (CHIKV) in a Brazilian pregnant woman (positive RT-PCR in blood and placenta). Zika virus (ZIKV) infection during pregnancy was also identified, based on a positive RT-PCR in a fetal kidney specimen. The maternal infection caused by the ZIKV was asymptomatic, and the CHIKV infection had a classical clinical presentation. The fetus had no apparent anomalies, but her weight was between the 3rd and 10th percentile for the gestational age. This is the second case report of congenital arboviral co-infection and the first followed by antepartum fetal death. abstract_id: PUBMED:33395432 Congenital abnormalities associated with Zika virus infection-Dengue as potential co-factor? A systematic review. Zika virus (ZIKV) emerged in Brazil during 2013-2014, causing an epidemic of previously unknown congenital abnormalities. The frequency of severe congenital abnormalities after maternal ZIKV infection revealed an unexplained geographic variability, especially between the Northeast and the rest of Brazil. Several reasons for this variability have been discussed. Prior immunity against Dengue virus (DENV) affecting the course of ZIKV infection seems to be the most likely explanation. Here we summarise the current evidence regarding this prominent co-factor to potentially explain the geographic variability. This systematic review followed the PRISMA guidelines. The search was conducted up to May 15th, 2020, focussing on immunological interactions between Zika virus and previous Dengue virus infections as a potential teratogenic effect on the foetus. Eight out of 339 screened studies reported on the association between ZIKV, prior DENV infection and microcephaly, mostly focusing on antibody-dependent enhancement (ADE) as a potential pathomechanism. Prior DENV infection was associated with enhancement of ZIKV infection and increased neurovirulence in only one included in vitro study. Interestingly, the seven in vivo studies exhibited a heterogeneous picture, with three studies showing a protective effect of prior DENV infections and the others no effect at all. According to several studies, socio-economic factors are associated with increased risk for microcephaly. Very few studies addressed the question of unexplained variability of infection-related microcephaly. Many studies focussed on ADE as a mechanism without measuring microcephaly as an endpoint. Interestingly, three of the included studies reported a protective effect of prior DENV infection against microcephaly. This systematic review strengthens the hypothesis that immune priming after recent DENV infection is the crucial factor for determining protection or enhancement activity. It is of high importance that the currently ongoing prospective studies include a harmonised assessment of the potential candidate co-factors. abstract_id: PUBMED:33898725 Data quality and arbovirus infection associated factors in pregnant and non-pregnant women of childbearing age in Brazil: A surveillance database analysis. The dengue surveillance system in Brazil has registered changes in the disease's morbidity and mortality profile over successive epidemics. Vulnerable groups, such as pregnant women, have been particularly hard hit.
This study assessed the quality of notifications of dengue cases among pregnant women and non-pregnant women of childbearing age in Brazil, in addition to discussing the factors associated with arbovirus infection in the group of pregnant women. We carried out a retrospective study of cases registered in the national arbovirus surveillance system between 2007 and 2017. The indicator for assessing quality was incompleteness. Logistic regression was used to analyze the association between dengue during pregnancy and sociodemographic, epidemiological, clinical, and laboratory variables. The incompleteness of the data in the notification form for dengue cases in women of childbearing age and pregnant women indicates a significant loss of information. Dengue was shown to be positively associated with Social Determinants of Health in both groups, with more severe effects among pregnant women. The incompleteness of the data can limit the quality of information from the notification system and the national assessment of the situation of the disease in women of childbearing age and pregnant women. abstract_id: PUBMED:38392915 Zika Virus-A Reemerging Neurotropic Arbovirus Associated with Adverse Pregnancy Outcomes and Neuropathogenesis. Zika virus (ZIKV) is a reemerging flavivirus that is primarily spread through bites from infected mosquitoes. It was first discovered in 1947 in sentinel monkeys in Uganda and has since been the cause of several outbreaks, primarily in tropical and subtropical areas. Unlike earlier outbreaks, the 2015-2016 epidemic in Brazil was characterized by the emergence of neurovirulent ZIKV strains that could be sexually and perinatally transmitted, leading to Congenital Zika Syndrome (CZS) in newborns, and Guillain-Barré Syndrome (GBS) along with encephalitis and meningitis in adults. The immune response elicited by ZIKV infection is highly effective and characterized by the induction of both ZIKV-specific neutralizing antibodies and robust effector CD8+ T cell responses. However, the structural similarities between ZIKV and Dengue virus (DENV) lead to the induction of cross-reactive immune responses that could potentially enhance subsequent DENV infection, which imposes a constraint on the development of a highly efficacious ZIKV vaccine. The isolation and characterization of antibodies capable of cross-neutralizing both ZIKV and DENV, along with cross-reactive CD8+ T cell responses, suggest that vaccine immunogens can be designed to overcome these constraints. Here we review the structural characteristics of ZIKV along with the evidence of neuropathogenesis associated with ZIKV infection and the complex nature of the immune response that is elicited by ZIKV infection. abstract_id: PUBMED:23437522 Sequence of viral infection associated with pregnancy in a dengue outbreak in Santiago de Cuba in 2006. Introduction: several dengue outbreaks have taken place in Santiago de Cuba province in the last few years, in which pregnant women have been involved. Objectives: to determine the immunity and to describe the role of dengue infection and its sequence. Methods: an observational and descriptive study was conducted to characterize dengue immunity in mothers and children after 10 and 12 months of birth and to determine the influence of certain viral infection sequences in pregnant women who suffered this disease during the dengue 3 epidemics in Santiago de Cuba.
To this end, serum samples from 25 females who tested positive for dengue 3 and from children born to them, sampled at 10 and 12 months after childbirth, were studied. IgG titers and viral infection sequences were determined and analyzed according to the World Health Organization dengue classification criteria. Results: the children did not present with the antibodies and the viral infection sequences associated with their mothers; in order of frequency, the same percentage was observed for DEN2/DEN3 and DEN1/DEN2/DEN3 (21.74%), but a lower percentage for DEN1/DEN3 (17.39%). Conclusions: the children did not develop humoral immunity (IgG) despite some manifestations inherent to the disease. The secondary infections prompted the most serious forms of the disease. abstract_id: PUBMED:27618421 Encephalomyelitis Associated With Dengue Fever. N/A abstract_id: PUBMED:29133154 Adverse birth outcomes associated with Zika virus exposure during pregnancy in São José do Rio Preto, Brazil. Objectives: We aimed to report the first 54 cases of pregnant women infected by Zika virus (ZIKV) and their virologic and clinical outcomes, as well as their newborns' outcomes, in 2016, after the emergence of ZIKV in dengue-endemic areas of São Paulo, Brazil. Methods: This descriptive study was performed from February to October 2016 on 54 quantitative real-time PCR ZIKV-positive pregnant women identified by the public health authority of São José do Rio Preto, São Paulo, Brazil. The women were followed and had clinical and epidemiologic data collected before and after birth. Adverse outcomes in newborns were analysed and reported. Urine or blood samples from newborns were collected to identify ZIKV infection by reverse transcription PCR (RT-PCR). Results: A total of 216 acute Zika-suspected pregnant women were identified, and 54 had the diagnosis confirmed by RT-PCR. None of the 54 women miscarried. Among the 54 newborns, 15 exhibited adverse outcomes at birth. The highest number of ZIKV infections occurred during the second and third trimesters. No cases of microcephaly were reported, though a broad clinical spectrum of outcomes, including lenticulostriate vasculopathy, subependymal cysts, and auditory and ophthalmologic disorders, was identified. ZIKV RNA was detected in 18 of 51 newborns tested and in eight of 15 newborns with adverse outcomes. Conclusions: Although other studies have associated many newborn outcomes with ZIKV infection during pregnancy, these same adverse outcomes were rare or nonexistent in this study. The clinical presentation of the newborns we studied was mild compared to other reports, suggesting that there is significant heterogeneity in congenital Zika infection. abstract_id: PUBMED:34588365 The Factors Associated with Prevention and Control Practices against Zika Virus Infection among Pregnant Women in Malaysia, a Dengue-Endemic Country. The aim of this study was to analyze the prevention and control practices against Zika virus (ZIKV) infection among pregnant women in Malaysia. A cross-sectional study was conducted among pregnant women who received antenatal care. Only 7.8% reported that they abstained from sex or used barrier methods during sexual intercourse to prevent ZIKV transmission. The odds of abstaining from sex or using barrier methods were significantly higher among the Chinese ethnic group than the Malay ethnic group (odds ratio [OR] = 4.569; 95% confidence interval [CI]: 2.696-7.743).
A total ZIKV knowledge score of 12-24 was also significantly associated with higher odds of abstaining from sex or using barrier methods than a score of 0-11 (OR = 2.819; 95% CI: 1.682-4.725). In total, 38.6% of women had a higher mean total score for all the current mosquito control practices than they did before hearing of ZIKV. Participants living in bungalows or village houses (OR = 2.006; 95% CI: 1.023-3.936) and low-rise houses (OR = 1.566; 95% CI: 1.109-2.212) were more likely to have a higher mean total score of current mosquito control practices than the practices before hearing of ZIKV compared to those in high-rise houses. The lack of ZIKV knowledge and control practices provides insight for the pre-emptive and prompt development of health education systems for pregnant women during antenatal care visits. Answer: The provided abstracts do not directly address the association between pregnancy and severe dengue outcomes. However, some of the abstracts discuss the impact of arboviruses, including dengue, on pregnant women and their fetuses, as well as the quality of surveillance data for dengue among pregnant women. For instance, the study in PUBMED:33898725 discusses the dengue surveillance system in Brazil and notes that vulnerable groups, such as pregnant women, have been particularly hard hit by dengue. The study aimed to assess the quality of notifications of dengue cases among pregnant women and non-pregnant women of childbearing age, and it found that the incompleteness of the data in the notification form for dengue cases indicates a significant loss of information. This suggests that while pregnant women are recognized as a vulnerable group for dengue, the quality of data collection may not be sufficient to accurately assess the association between pregnancy and severe dengue outcomes. Another study in PUBMED:23437522 investigated the immunity and the role of dengue infection and its sequence in pregnant women during a dengue outbreak. The study found that secondary infections prompted the most serious forms of the disease, which could imply that pregnant women with secondary dengue infections may experience more severe outcomes, although this is not explicitly stated in the abstract. While the abstracts provided do not offer a definitive answer to the question of whether pregnancy is associated with severe dengue, they do highlight the importance of considering pregnant women as a vulnerable population in the context of dengue outbreaks and the need for better data to understand the risks and outcomes for this group.
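The "incompleteness" indicator described in PUBMED:33898725 is, in essence, the share of missing values per notification-form field. A minimal pandas sketch of such a check is shown below; the field names and records are hypothetical, not the actual Brazilian surveillance schema.

```python
import pandas as pd

# Hypothetical notification records; None marks an unfilled form field.
records = pd.DataFrame({
    "age":      [24, 31, None, 28],
    "pregnant": ["yes", None, "no", None],
    "serotype": [None, "DENV-2", None, None],
})

# Incompleteness: percent of missing entries per field.
incompleteness = records.isna().mean().mul(100).round(1)
print(incompleteness)
```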
Instruction: Does antenatal care attendance prevent anemia in pregnancy at term? Abstracts: abstract_id: PUBMED:25772912 Does antenatal care attendance prevent anemia in pregnancy at term? Background: Anemia in pregnancy is one of the public health problems in the developed and developing world. If uncontrolled, it is a major indirect cause of maternal and perinatal morbidity and mortality. This is worst in settings with poor prenatal practices. Quality prenatal interventions are therefore expected to prevent or ameliorate this disorder in pregnancy. The Nigerian scientific literature is rich in data on anemia in pregnancy, but few studies address the influence of prenatal care on maternal anemia. This study, therefore, sought to appraise the role of antenatal care (ANC) services in the prevention of anemia in pregnancy at term in Nigerian women. Objectives: The aim was to estimate the prevalence of anemia at the first antenatal visit and determine whether antenatal attendance prevents anemia at term among prenatal Nigerian women; to measure the hematocrit levels at booking and at term, respectively, and compare the proportion anemic at booking with the proportion anemic at term. Materials And Methods: A retrospective cross-sectional comparative study of 3442 prenatal women in a mission hospital in South-South Nigeria from 2009 to 2013. Venous blood hematocrit was estimated from each woman at booking and at term, and the prevalence of anemia for the two periods was compared. Results: There were 1205 subjects with hematocrit of below 33% at booking, an anemia prevalence of 32.2% at booking in this population. At term or delivery, 736 (21.4%; odds ratio [OR] = 2.3, P < 0.0001) of the 1052 subjects that fulfilled the study criteria had their anemia corrected, a 69.9% prevention, while 316 (9.2%; OR = 0.43, P < 0.0001) persisted despite their antenatal attendance. The subjects were similar in most of the confounding factors, such as parity, social class, mean age, body mass index and gestational age at delivery (P values all > 0.05). Conclusion: The prevalence of anemia in pregnancy is still high in our setting. Quality ANC appeared to be a valuable preventive intervention that should be made widely available, accessible and affordable to all pregnant women. abstract_id: PUBMED:33430935 eRegCom-Quality Improvement Dashboard for healthcare providers and Targeted Client Communication to pregnant women using data from an electronic health registry to improve attendance and quality of antenatal care: study protocol for a multi-arm cluster randomized trial. Background: This trial evaluates interventions that utilize data entered at point-of-care in the Palestinian maternal and child eRegistry to generate Quality Improvement Dashboards (QID) for healthcare providers and Targeted Client Communication (TCC) via short message service (SMS) to clients. The aim is to assess the effectiveness of the automated communication strategies from the eRegistry on improving attendance and quality of care for pregnant women. Methods: This four-arm cluster randomized controlled trial will be conducted in the West Bank and the Gaza Strip, Palestine, and includes 138 clusters (primary healthcare clinics) enrolling from 45 to 3000 pregnancies per year. The intervention tools are the QID and the TCC via SMS, automated from the eRegistry built on the District Health Information Software 2 (DHIS2) Tracker.
The primary outcomes are appropriate screening and management of anemia, hypertension, and diabetes during pregnancy and timely attendance at antenatal care. Primary analysis, at the individual level taking the design effect of the clustering into account, will be done as intention-to-treat. Discussion: This trial, embedded in the implementation of the eRegistry in Palestine, will inform the use of digital health interventions as a health systems strengthening approach. Trial Registration: ISRCTN Registry, ISRCTN10520687. Registered on 18 October 2018. abstract_id: PUBMED:36304821 Adequacy of antenatal care services utilisation and its effect on anaemia in pregnancy. Anaemia in pregnancy remains a critical public health concern in many countries, including Ghana, and it poses severe consequences in the short to long term for women and their unborn babies. Although antenatal care (ANC) is largely provided for pregnant women, the extent to which its utilisation protects against anaemia in pregnancy remains largely understudied. The study assessed the adequacy of ANC services utilisation and its effect on anaemia among pregnant women in the Wa Municipality of Ghana. A facility-based cross-sectional survey was conducted. Probability proportionate to size sampling and systematic random sampling were used to select the facilities and 353 respondents. While 80.2% of the pregnant women reported having received a sufficient number of the ANC services provided, the prevalence of overall ANC adequacy was only 44.2%. After adjusting for potential confounders, pregnant women who could not achieve adequate ANC attendance were 2.3 times more likely to be anaemic in the third trimester of gestation (AOR = 2.26; 95% CI 1.05, 4.89), compared to their counterparts who maintained adequate ANC attendance. Adequate ANC attendance was a consistent and significant predictor of anaemia in pregnancy in the third trimester. Health and nutrition education on the need for early initiation of ANC attendance and support for the consumption of diversified diets are two possible interventions that can help contain anaemia in pregnancy. abstract_id: PUBMED:21432081 Improved perinatal health through qualified antenatal care in urban Phnom Penh, Cambodia. Objectives: The aim of this study is to examine the utilities of qualified antenatal care with comprehensive health education in Phnom Penh for the health of mothers and infants during the perinatal and postpartum periods. Attention was given to the existing socioeconomic disparities among women in this urban area, and the utilities were discussed irrespective of socioeconomic status. Methods: A total of 436 pregnant women in an urban area in Phnom Penh were selected using a complete survey in randomly sampled villages and were followed up. Participating in antenatal care with comprehensive health education at least three times was regarded as the use of "qualified antenatal care" during pregnancy. In this study, we investigated the independent associations of the use of qualified antenatal care with the following outcome variables after adjustment for the influence of socioeconomic variables: postpartum maternal health knowledge, postpartum maternal anemia, low birth weight, and infant immunization. Results: Of the 314 subjects who completed the follow-up examination, 66.8% used qualified antenatal care during pregnancy. The use of qualified antenatal care was positively associated with postpartum maternal health knowledge (OR=2.38, 95% CI: 1.12-5.05)
and reductions in the incidences of postpartum anemia (OR=0.22, 95% CI: 0.05-0.95) and low birth weight (OR=0.05, 95% CI: 0.01-0.39) after adjustment for the influence of socioeconomic status. The infants born to mothers who used qualified antenatal care had significantly higher coverage of BCG, DPT(1), and DPT(3) immunizations (P<0.001, P<0.001, and P<0.01, respectively), independent of their socioeconomic conditions. Conclusion: This study shows the solid utility of qualified antenatal care in Phnom Penh for perinatal health. abstract_id: PUBMED:36434534 Measuring the quality of antenatal care in a context of high utilisation: evidence from Telangana, India. Background: Antenatal care coverage has dramatically increased in many low- and middle-income settings, including in the state of Telangana, India. However, there is increasing evidence of shortfalls in the quality of care women receive during their pregnancies. This study aims to examine dimensions of antenatal care quality in Telangana, India, using four primary and secondary data sources. Methods: Data from two secondary statewide data sources (National Family Health Survey (NFHS-5), 2019-21; Health Management Information System (HMIS), 2019-20) and two primary data sources (a facility survey in 19 primary health centres and sub-centres in selected districts of Telangana; and observations of 36 antenatal care consultations at these facilities) were descriptively analysed. Results: NFHS-5 data showed about 73% of women in Telangana received all six assessed antenatal care components during pregnancy. HMIS data showed high coverage of antenatal care visits but differences in levels of screening, with high coverage of haemoglobin tests for anaemia but low coverage of testing for gestational diabetes and syphilis. The facility survey found missing equipment for several key antenatal care services. Antenatal care observations found blood pressure measurement and physical examinations had high coverage and were generally performed correctly. There were substantial deficiencies in symptom checking and communication between the woman and provider. Women were asked if they had any questions in 22% of consultations. Only one woman was asked about her mental health. Counselling of women on at least one of the ten items relating to birth preparedness and on at least one of six danger signs occurred in 58% and 36% of consultations, respectively. Conclusion: Despite high coverage of antenatal care services and some essential maternal and foetal assessments, substantial quality gaps remained, particularly in communication between healthcare providers and pregnant women and in availability of key services. Progress towards achieving high quality in both content and experience of antenatal care requires addressing service gaps and developing better measures to capture and improve women's experiences of care. abstract_id: PUBMED:34255723 An Electronic Registry for Improving the Quality of Antenatal Care in Rural Bangladesh (eRegMat): Protocol for a Cluster Randomized Controlled Trial. Background: Digital health interventions (DHIs) can alleviate several barriers to achieving better maternal and child health. The World Health Organization's guideline recommendations for DHIs emphasize the need to integrate multiple DHIs for maximizing impact.
The complex health system of Bangladesh provides a unique setting for evaluating and understanding the role of an electronic registry (eRegistry) for antenatal care, with multiple integrated DHIs for strengthening the health system as well as improving the quality and utilization of the public health care system. Objective: The aim of this study is to assess the effect of an eRegistry with DHIs compared with a simple digital data entry tool without DHIs in the community and frontline health facilities. Methods: The eRegMat is a cluster-randomized controlled trial conducted in the Matlab North and Matlab South subdistricts in the Chandpur district, Bangladesh, where health facilities are currently using the eRegistry for digital tracking of the health status of pregnant women longitudinally. The intervention arm received 3 superimposed data-driven DHIs: health worker clinical decision support, health worker feedback dashboards with action items, and targeted client communication to pregnant women. The primary outcomes are appropriate screening as well as management of hypertension during pregnancy and timely antenatal care attendance. The secondary outcomes include morbidity and mortality in the perinatal period as well as timely first antenatal care visit; successful referrals for anemia, diabetes, or hypertension during pregnancy; and facility delivery. Results: The eRegistry and DHIs were co-designed with end users between 2016 and 2018. The eRegistry was implemented in the study area in July 2018. Recruitment for the trial started in October 2018 and ended in June 2020, followed by an 8-month follow-up period to capture outcome data until February 2021. Trial results will be available for publication in June 2021. Conclusions: This trial allows the simultaneous assessment of multiple integrated DHIs for strengthening the health system and aims to provide evidence for its implementation. The study design and outcomes are geared toward informing the living review process of the guidelines for implementing DHIs. Trial Registration: ISRCTN Registry ISRCTN69491836; https://www.isrctn.com/ISRCTN69491836. International Registered Report Identifier (irrid): DERR1-10.2196/26918. abstract_id: PUBMED:2572482 A study of antenatal care at village level in rural Tanzania. Antenatal care is an acknowledged measure for the reduction of maternal and perinatal mortality. In the rural village of Ilula, Tanzania, the possible impact of antenatal care on mortality was studied longitudinally on the basis of the 707 women who delivered during the study period. Ninety-five percent of the antenatal records were available. Anemia, malaria and anticipated obstetric problems were the most frequent reasons for interventions. Among the women from the area who were delivered in hospital, 90% had been referred there. No relationship was found between the number of antenatal visits and the pregnancy outcome, but perinatal mortality was correlated with low birth weight. Even with a mean attendance rate of six visits and full coverage by antenatal care, maternal and perinatal mortality remained high. abstract_id: PUBMED:31692871 Anaemia at antenatal care initiation and associated factors among pregnant women in West Gonja District, Ghana: a cross-sectional study. Introduction: Anaemia in pregnancy remains a critical public health concern in many African settings, but its determinants are not clear.
The purpose of this study was to assess anaemia at antenatal care initiation and associated factors among pregnant women in a local district of Ghana. Methods: A facility-based cross-sectional survey was conducted. A total of 378 pregnant women attending antenatal care at two health facilities were surveyed. Data on haemoglobin level and on helminth and malaria infection status at first antenatal care registration were extracted from the antenatal record booklets of each pregnant woman. Questionnaires were then used to collect data on socio-demographic and dietary variables. Binary and multivariate logistic regression analyses were done to assess factors associated with anaemia. Results: The prevalence of anaemia was 56%, with mild anaemia being the most common form (31.0%). Anaemia prevalence was highest (73.2%) among respondents aged 15-19 years. Factors that were significantly and independently associated with the odds of anaemia in pregnancy after controlling for potential confounders were early (within the first trimester) antenatal care initiation (AOR=5.01; 95% CI=1.41-17.76; p=0.013) and consumption of eggs three or more times a week (AOR=0.30; 95% CI=0.15-0.81; P=0.014). Conclusion: Health facility and community-based preconception and conception care interventions must not only aim to educate women and community members about the importance of early ANC initiation, a balanced diet, and protein- and iron-rich food sources that may reduce anaemia, but must also engage community leaders and men to address food taboos and cultural prohibitions that negatively affect pregnant women. abstract_id: PUBMED:35252905 A systematic review and narrative synthesis of antenatal interventions to improve maternal and neonatal health in Nepal. Background: Maternal and neonatal mortality rates remain high in many economically underdeveloped countries, including Nepal, and good quality antenatal care can reduce adverse pregnancy outcomes. However, identifying how to best improve antenatal care can be challenging. Objective: To identify the interventions that have been investigated in the antenatal period in Nepal for maternal or neonatal benefit. We wanted to understand their scale, location, cost, and effectiveness. Study Design: Online bibliographic databases (Cochrane Central, MEDLINE, Embase, CINAHL Plus, British Nursing Index, PsycInfo, Allied and Complementary Medicine) and trial registries (ClinicalTrials.gov and the World Health Organization Clinical Trials Registry Platform) were searched from their inception until May 24, 2020. We included all studies reporting any maternal or neonatal outcome after an intervention in the antenatal period. We screened the studies and extracted the data in duplicate. A meta-analysis was not possible because of the heterogeneity of the interventions and outcomes, so we performed a narrative synthesis of the included studies. Results: A total of 25 studies met our inclusion criteria. These studies showed a variety of approaches toward improving antenatal care (eg, educational programs, incentive schemes, micronutrient supplementation) in different settings (home, community, or hospital-based) and with a wide variety of outcomes. Less than a quarter of the studies were randomized controlled trials, and many were single-site or reported only short-term outcomes. All studies reported having made a positive impact on antenatal care in some way, but only 3 provided a cost-benefit analysis to support implementation. None of these studies focused on the most remote communities in Nepal.
Conclusion: Our systematic review found good quality evidence that micronutrient supplementation and educational interventions can bring important clinical benefits. Iron and folic acid supplementation significantly reduces neonatal mortality and maternal anemia, whereas birth preparedness classes increase the uptake of antenatal and postnatal care, compliance with micronutrient supplementation, and awareness of the danger signs in pregnancy. abstract_id: PUBMED:31489004 Knowledge about the importance of antenatal care among females of child bearing age living in a suburban community of Lahore. Background And Objectives: During the past few decades, females have been making a conscious decision to seek antenatal checkups from skilled health care providers, owing to improved education, which has played a vital role in enhancing their awareness of the importance of this comprehensive health service. The objective was to assess the perception of females of childbearing age living in the suburban community of Kot-Lakhodare, Lahore, about the importance of antenatal care during their last pregnancy. Methods: Females of reproductive age (15-49 years) living in Kot-Lakhodare were enrolled. Through a structured questionnaire, using a convenience sampling technique, 1224 females of childbearing age were interviewed in a cross-sectional survey. The study was completed between January and August 2016. Questions pertaining to their sociodemographic characteristics, perception of the importance of antenatal care services, and comorbid conditions during the last pregnancy were asked. Data were analyzed using SPSS version 21. Ethical approval, both institutional and individual, was duly obtained. Results: The mean age was 32 ± 7.8 years, with education up to primary level. Three-quarters of the females were unemployed, with a monthly income of less than 25,000 rupees. The 869 females (73%) who perceived antenatal checkups as important during their last pregnancy had no history of anemia, hypertension, diabetes, or abortion (p<0.05). These females, who attended antenatal checkups, delivered uneventfully by the normal vaginal route, preferred government and private hospitals for delivery, and were vaccinated against tetanus toxoid (p<0.05). Conclusion: The females of childbearing age had adequate knowledge about the importance of antenatal care during their last pregnancy, which played an important role in the prevention of comorbid conditions such as anemia, hypertension, and diabetes, and of the risk of abortion. Moreover, they perceived the importance of institutionalized deliveries and tetanus toxoid vaccination coverage during pregnancy. Answer: Antenatal care (ANC) attendance has been associated with the prevention of anemia in pregnancy at term. A study conducted in South-South Nigeria found that at the first antenatal visit, the prevalence of anemia was 32.2%, but at term or delivery, 69.9% of the subjects who had attended antenatal care had their anemia corrected, indicating that quality ANC is a valuable preventive intervention (PUBMED:25772912). Similarly, a study in the Wa Municipality of Ghana showed that pregnant women who did not achieve adequate ANC attendance were 2.3 times more likely to be anemic in the third trimester compared to those who maintained adequate ANC attendance, suggesting that adequate ANC attendance is a significant predictor of anemia prevention in pregnancy (PUBMED:36304821).
In Phnom Penh, Cambodia, the use of qualified antenatal care was positively associated with reductions in the incidences of postpartum anemia, indicating the benefits of comprehensive health education and ANC (PUBMED:21432081). Moreover, a study in West Gonja District, Ghana, found that early antenatal care initiation and consumption of eggs three or more times a week significantly reduced the odds of anemia in pregnancy (PUBMED:31692871). However, it is important to note that while ANC attendance has been shown to be beneficial in preventing anemia, the quality of care received during ANC visits is also crucial. A study in Telangana, India, highlighted that despite high coverage of ANC services, there were substantial quality gaps, particularly in communication between healthcare providers and pregnant women and in the availability of key services (PUBMED:36434534). This suggests that simply attending ANC may not be sufficient; the content and experience of care provided are also important factors in preventing anemia at term. In summary, ANC attendance, particularly when it includes quality care and comprehensive health education, appears to be effective in preventing anemia in pregnancy at term.
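As a consistency check on the Nigerian figures cited above (PUBMED:25772912), note that the abstract mixes two denominators: the full booking cohort (n = 3442) and the anemic subgroup followed to term (n = 1052). The short sketch below re-derives the reported percentages from the published counts to make those denominators explicit; no new data are introduced.

```python
# Counts taken directly from the abstract of PUBMED:25772912.
booked, followed = 3442, 1052
corrected, persisted = 736, 316

print(round(100 * corrected / followed, 1))  # 70.0 -> the "69.9% prevention" (subgroup denominator)
print(round(100 * corrected / booked, 1))    # 21.4 -> corrected cases as % of the full cohort
print(round(100 * persisted / booked, 1))    # 9.2  -> persistent cases as % of the full cohort
```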
Instruction: Does early functional outcome predict 1-year mortality in elderly patients with hip fracture? Abstracts: abstract_id: PUBMED:30976999 Long-term functional outcome after a low-energy hip fracture in elderly patients. Background: The incidence of hip fractures is increasing. Elderly patients with a hip fracture frequently present with comorbidities, which are associated with higher mortality rates. Clinical studies regarding long-term functional outcome and mortality in hip fractures are rare. The aim of this study was to analyse the functional outcome and the mortality rate after a follow-up of 5 years in elderly patients with a hip fracture. Materials And Methods: This combined retrospective and cross-sectional study included patients aged 65 years or older with a low-energy hip fracture who underwent surgery in the Maastricht University Medical Center+, the Netherlands. Data such as demographics and mortality rates were retrospectively collected, and functional outcome (i.e. mobility, pain, housing conditions and quality of life) was assessed by a questionnaire. Results: Two hundred and sixteen patients were included in this study (mean age 82.2 years, SD 7.5). No significant differences were found in pain before the hip fracture and after the 1-year and 5-year follow-ups. Long-term functional outcome deteriorated after a hip fracture, with a significant increase in the use of walking aids (p < 0.001), a significant decrease in the proportion of patients living in a private home (p < 0.001), and a low physical quality of life (SF-12 PCS = 27.1). The mortality incidences at the 30-day, 1-year and 5-year follow-ups were 7.9%, 37.0% and 69.4%, respectively. Conclusion: Long-term functional outcome in elderly patients with hip fractures significantly deteriorated, with an increased dependency for mobility and housing conditions and a decreased physical quality of life. In addition, hip fractures are associated with high mortality rates at the 5-year follow-up. Level Of Evidence: Level III, a retrospective cohort study. abstract_id: PUBMED:23546850 Does early functional outcome predict 1-year mortality in elderly patients with hip fracture? Background: Hip fractures in the elderly are followed by considerable risk of functional decline and mortality. Questions/purposes: The purposes of this study were to (1) explore predictive factors of functional level at discharge, (2) evaluate 1-year mortality after hip fracture compared with that of the general population, and (3) evaluate the effect of early functional outcome on 1-year mortality in patients operated on for hip fractures. Methods: A total of 228 consecutive patients (average age, 77.6 ± 7.4 years) with hip fractures who met the inclusion criteria were enrolled in an open, prospective, observational cohort study. Functional level at discharge was measured with the motor Functional Independence Measure (FIM) score, which is the most widely accepted functional assessment measure in use in the rehabilitation community. Mortality rates in the study population were calculated in absolute numbers and as the standardized mortality ratio. Multivariate regression analysis was used to explore predictive factors for motor FIM score at discharge and for 1-year mortality, adjusted for important baseline variables. Results: Age, health status, cognitive level, preinjury functional level, and pressure sores after hip fracture surgery were independently related to lower discharge motor FIM scores. At 1-year follow-up, 57 patients (25%; 43 women and 14 men) had died.
The 1-year mortality rate after hip fracture was 31% in our population versus 7% in the general population for men, and 23% versus 5% for women 65 years or older. The 1-year standardized mortality rate was 341.3 (95% CI, 162.5-520.1) for men and 301.6 (95% CI, 212.4-391.8) for women, respectively. The all-cause mortality rate observed in this group was higher in all age groups and in both sexes when compared with the all-cause age-adjusted mortality of the general population. Motor FIM score at discharge was the only independent predictor of 1-year mortality after hip fracture. Conclusions: Functional level at discharge is the main determinant of long-term mortality in patients with hip fracture. Motor FIM score at discharge is a reliable predictor of mortality and can be recommended for clinical use. abstract_id: PUBMED:33936949 Mortality profile after 2 years of hip fractures in elderly patients treated with early surgery. Background: In the geriatric age group, hip fractures have become a major public health hazard. Due to this high occurrence, there is a need to develop standardized, effective, and multidisciplinary management for treatment. These elderly patients have excess mortality that can extend well beyond the recovery period. Early surgery after hip fractures has led to a notable reduction in mortality rates; still, mortality remains considerably higher than for other fractures. Methods: 266 patients aged >65 years who were operated on within 72 hours for hip fractures in a tertiary-level health care centre were included. They were evaluated with X-rays, and the grade of Singh's index was noted. Mortality rates and the factors associated with them, such as age, sex, and comorbidities (using the Charlson comorbidity index, CCI), were evaluated after a 2-year follow-up. Results: The overall 2-year mortality reported in our study population was 11.2%. This was broadly lower than the rates reported in most other studies. It was 6.3% in females as compared to 18.1% in males. While it was reported to be only 6% at 65-74 years of age, it was 25% in patients who were 85 years and above. 76.6% of the patients had a Singh's index of grade ≤3, indicating osteoporosis. The patients with a low Charlson score showed only 4.2% mortality, while those with a high Charlson score showed 25.5% mortality. Conclusion: Mortality among elderly patients after early surgery for osteoporotic hip fractures remains substantial. The factors for improvement in long-term survival post-hip fracture may include changing treatment patterns, increasing life expectancy and early surgery. Increase in age, male sex, and high CCI scores were major risk factors for mortality after hip fractures in a 2-year follow-up period.
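The standardized mortality ratio (SMR) reported in PUBMED:23546850 (341.3 for men, 95% CI 162.5-520.1) can be reproduced with the usual normal-approximation formula. In the sketch below, the 14 observed male deaths come from the abstract, while the expected count (about 4.10) is back-derived from the reported SMR and is therefore an assumption rather than a published figure.

```python
import math

def smr(observed, expected, z=1.96):
    """SMR x 100 with a normal-approximation CI (Poisson SE on the observed count)."""
    ratio = 100 * observed / expected
    se = 100 * math.sqrt(observed) / expected
    return round(ratio, 1), (round(ratio - z * se, 1), round(ratio + z * se, 1))

print(smr(observed=14, expected=4.10))  # approximately (341.5, (162.6, 520.3))
```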
Methods The data of 1,015 patients over 65 years of age with femoral neck and intertrochanteric fractures admitted between January 2009 and January 2020 were retrospectively reviewed. A total of 763 patients who met the inclusion criteria were included in the study. Our study was designed to include 110 (14.4%) patients in Group 1, who were determined to have died within three months after the diagnosis of hip fracture, and 653 (85.6%) patients in Group 2, who were determined not to have died within one year after the trauma. Age, gender, comorbid diseases, American Society of Anesthesiologists (ASA) score, type of anesthesia, operation time, type of implant used, time until surgery, and some biochemical blood values were compared between the two groups. Our data were analyzed statistically using the IBM Statistical Product and Service Solutions (SPSS) software for Windows, v. 25.0 (IBM SPSS Statistics for Windows, Armonk, NY). Results Of all of the patients, 370 (48.5%) were female and 393 (51.5%) were male. The patients who survived had an average age of 76.08, while the patients who died had an average age of 80.57; the mean age was significantly higher in the patients who died. High creatinine, alanine aminotransferase (ALT), and lactate dehydrogenase (LDH) values and low albumin values were found to be associated with mortality. Conclusion It has been determined that advanced age, delayed operation time, high ASA score, and the number of comorbid diseases are associated with mortality in elderly patients with hip fractures, and biomarkers, such as creatinine, ALT, and LDH, can be used as markers for early mortality. As more studies of a similar nature accumulate, it will be possible to calculate a systematic risk map for mortality in elderly patients with a proximal femur fracture. abstract_id: PUBMED:29929867 Hip fracture in the elderly patient: Prognostic factors for mortality and functional recovery at one year. Objective: The aim of this study is to identify the risk factors for mortality and functional recovery in elderly patients admitted to hospital with a hip fracture. Materials And Methods: Longitudinal prospective study in patients 80 years old or more, and in patients between 75 and 79 in residential home care with a hip fracture and with a past medical history of dementia or followed up by the Geriatric Unit. A total of 359 patients were included, and the demographic data, previous functional status, comorbidity, type of fracture, and dementia were recorded. The data collected during admission included time to surgery, delirium, functional recovery, length of stay, placement at discharge, and mortality. Patients were followed up for one year, and details were collected on placement at the end of follow-up, functional recovery, medical complications, and mortality. Results: The baseline characteristics of the patients with a strong association with mortality after a hip fracture were old age (> 92 years), medical complications delaying surgery (HR 2.17; 95% CI 1.27-3.73), diagnosis of dementia (HR 1.78; 95% CI 1.15-2.75), or heart failure (HR 1.75; 95% CI 1.12-2.75). The fitted multivariable regression models showed that functional impairment before the hip fracture or lack of functional recovery was associated with higher mortality, and patients with increased age, delirium, dementia, and previous functional impairment showed worse functional recovery.
Conclusion: In elderly patients with a hip fracture, increased age, comorbidity, and poorer previous functional status are associated with mortality. Functional recovery prognosis will depend on age, previous functional status, past medical history of dementia, and the presence of delirium during admission. abstract_id: PUBMED:34268353 Predictive factors associated with the clinical outcome of intertrochanteric hip fracture in high-risk elderly patients treated with total hip arthroplasty versus percutaneous external fixation. Background: Little is known regarding the survival and functional recovery of elderly intertrochanteric hip fracture (IHF) patients after total hip arthroplasty (THA) versus percutaneous external fixation (PEF). This study aims to analyze the prognostic factors of THA and PEF in elderly IHF patients. Methods: A total of 155 consecutive elderly patients (mean age of 80 years) diagnosed with IHF were retrospectively reviewed from our database between January 1, 2010, and December 31, 2018. The preoperative, intraoperative and postoperative covariates were analyzed in two independent surgical cohorts: THA and PEF. The main outcomes included the hip function score, all-cause mortality within 1 year after surgery, and overall survival. Covariables and their influence on independent outcomes were analyzed using multivariate regression models. Results: The median follow-up period was 5.1 years, and 6 patients were lost to follow-up. At the endpoint, 70 of 85 patients treated with THA and 37 of 70 patients treated with PEF survived, exhibiting mean Harris hip scores of 84.4 and 69.0, respectively. The Kaplan-Meier curves and log-rank tests showed no significant difference in overall survival. After adjusting for the covariates, the surgical mode was the only prognostic factor affecting hip function recovery, and two prognostic factors (leukocyte count and D-dimer) were correlated with 1-year all-cause mortality. Age at admission, fracture classification, D-dimer level and surgical mode were identified as prognostic factors affecting overall survival. After adjusting for the former three covariates, THA reduced the risk of death by 67.20% compared with PEF (HR 0.328, 95% CI 0.121-0.890). Conclusions: Despite the nonsignificant difference in 1-year all-cause mortality, THA demonstrated superior midterm survival and hip function recovery in elderly IHF patients compared with PEF. Predictive factors, including age at admission, fracture classification, D-dimer level and surgical mode, are associated with the overall survival of IHF in high-risk elderly patients. abstract_id: PUBMED:24954835 Do depressive symptoms on hospital admission impact early functional outcome in elderly patients with hip fracture? Background: Depression is the most common mood disorder in elderly people and one of the most prevalent comorbidities in older people with hip fracture. While several authors have confirmed that depressive symptoms assessed at a later stage after hip fracture impact functional outcome and mortality, the role of depressive symptoms identified at an earlier stage after hip fracture remains understudied. The aim of the present study was to determine if depressive symptoms assessed on hospital admission impact early functional outcome after hip fracture surgery. Methods: We studied 112 patients who underwent surgery for hip fracture during a 6-month period. Depressive symptoms were assessed using the 30-item Geriatric Depression Scale on admission to the acute setting.
Multidimensional assessment included sociodemographic characteristics, general health status, cognitive status, functional status prior to injury, and perioperative variables. The primary outcome measure was the motor Functional Independence Measure at discharge. Results: Adjusted multivariate regression analysis revealed that the presence of moderate to severe depressive symptoms (Geriatric Depression Scale ≥ 20), older age, and female gender were independently related to the motor Functional Independence Measure at discharge. Conclusion: Increasing levels of depressive symptoms in elderly hip fracture patients influence short-term functional outcome. We strongly support the introduction of routine assessment of this baseline comorbidity, especially in female patients. Failure to identify such patients is a missed opportunity for possible improvement of early functional outcome after hip fracture in the elderly. abstract_id: PUBMED:30217470 Poor nutritional status but not cognitive or functional impairment per se independently predict 1 year mortality in elderly patients with hip-fracture. Background & Aims: Hip fractures are strongly associated with mortality in the elderly. Studies investigating predisposing factors have suggested a negative impact of poor nutritional, cognitive and functional status on patient survival; however, their independent prognostic impact as well as their interactions remain undefined. This study aimed to determine whether poor nutritional status independently predicts 1-year post-fracture mortality after adjusting for cognitive and functional status and for other clinically relevant covariates. Methods: 1211 surgically treated elderly (age ≥ 65) hip fracture patients consecutively admitted to the Orthopaedic Surgery Unit of the "Azienda Sanitaria Universitaria Integrata Trieste" (ASUITs), Cattinara Hospital, Trieste, Italy, and managed by a dedicated orthogeriatric team were studied. Pre-admission nutritional status was evaluated by the Mini Nutritional Assessment (MNA) questionnaire, cognitive status by the Short Portable Mental Status Questionnaire (SPMSQ), and functional status by the Activity of Daily Living (ADL) questionnaire. All other clinical data, including comorbidities, type of surgery, and post-operative complications (delirium, deep vein thrombosis, cardiovascular complications, infections, need for blood transfusions), were obtained from hospital clinical records and from the mortality registry. Results: Poor nutritional status (defined as MNA ≤23.5) and increased cognitive and functional impairment were all associated with 3-, 6-, and 12-month mortality (p < 0.001). Both cognitive and functional impairment were associated with poor nutritional status (p < 0.001). Logistic regression analysis demonstrated that the association between nutritional status and 3-, 6-, and 12-month mortality was independent of age, gender, comorbidities, type of surgery and post-operative complications, as well as of cognitive and functional impairment (p < 0.001). In contrast, the associations between mortality and cognitive and functional impairment were independent (p < 0.001) of demographic (age, gender) and clinical covariates but not of malnutrition. Kaplan-Meier analysis showed a lower mean survival time (p < 0.001) in patients with poor nutritional status compared with those well-nourished. Conclusions: In elderly hip fracture patients, poor nutritional status strongly predicts 1-year mortality, independently of demographic, functional, cognitive and clinical risk factors.
The negative prognostic impact of functional and cognitive impairment on mortality is mediated by their association with poor nutritional status. abstract_id: PUBMED:30944084 Mortality and functional independence one year after hip fracture surgery: extracapsular fracture versus intracapsular fracture. Objectives: Outcome in hip fracture patients tends to be poor, with an associated death rate of 20 to 33%. The primary aim of our monocentric retrospective study was to compare mortality rates one year after surgery in patients with extracapsular fracture versus patients with intracapsular fracture of the proximal femur. Our secondary aims were the evaluation of functional independence and the rate of institutionalization one year after surgery. Methods: We compared two groups of 100 patients. The first group had an average age of 83.2 years, and the patients underwent total hip replacement for intracapsular fracture. Patients in the second group, who underwent osteosynthesis for extracapsular fracture, were aged 83.6 years on average. Results: One year post-surgery, there was no significant difference in mortality between the two groups (23% for extracapsular fracture vs 22% for intracapsular fracture). The rate of independent walking was significantly better in the intracapsular fracture group (42.3% vs 27.3%, p=0.047), and the rate of institutionalization was significantly higher in the extracapsular fracture group (35.8% vs 17.3%, p=0.043). Conclusion: Elderly patients with hip fracture are prone to poor outcomes. When compared with osteosynthesis, total hip replacement does not lead to higher mortality rates, though it is a more complex surgery. Our findings raise questions regarding the treatment of extracapsular fracture and the choice between osteosynthesis and total hip replacement with reconstruction of the proximal femur. abstract_id: PUBMED:31256198 Altered seric levels of albumin, sodium and parathyroid hormone may predict early mortality following hip fracture surgery in the elderly. Purpose: To analyse a wide set of routine laboratory parameters at admission to predict mortality within 30 post-operative days in elderly patients with hip fracture, as well as to calculate the critical values of those biomarkers. Method: Data from 994 patients older than 65 years with hip fracture were analysed, of whom 89 (8.2%) died within 30 post-operative days. Variables described in the literature with potential influence on early mortality were collected, including demographics, fracture type, American Society of Anesthesiologists score, Charlson's comorbidity index, and pre-operative Hodkinson's mental test and the Katz index for activities of daily living. In addition, an exhaustive collection of biomarkers from routine blood testing at admission was performed. Critical levels of biomarkers were calculated by the method of area under the ROC curve. Results: At admission, the early mortality group had a significantly higher Charlson index (p = 0.001) and a lower Katz index (p = 0.001). Surgical delay was also significantly longer in that group (p = 0.001). In univariate analyses, serum concentrations at admission of total protein (p = 0.004), albumin (p = 0.001), sodium (p = 0.001), and parathyroid hormone (PTH) (p = 0.001) were significantly different between the two groups. In multivariate analysis, serum albumin < 2.9 g/dL (p = 0.013), sodium < 127 mEq/L (p = 0.035) and PTH > 65 pg/mL (p = 0.005) were predictors of early mortality.
The three biomarkers together accounted for 67% of the variability in early mortality. Conclusion: The combination of altered admission serum levels of albumin, sodium, and PTH was a predictor of early mortality following hip fracture surgery in elderly patients. Answer: Yes, early functional outcome does predict 1-year mortality in elderly patients with hip fracture. According to the study by PUBMED:23546850, functional level at discharge is the main determinant of long-term mortality in patients with hip fracture. The motor Functional Independence Measure (FIM) score at discharge was found to be the only independent predictor of 1-year mortality after hip fracture. This suggests that patients with lower motor FIM scores at discharge, indicating poorer functional status, have a higher risk of mortality within the first year following their hip fracture.
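Aside: the "critical values ... calculated by the method of area under ROC curve" mentioned in PUBMED:31256198 above refer to a standard cutoff-selection procedure. The sketch below illustrates one common variant, the Youden index, on synthetic data; the distributions, sample sizes, and variable names are assumptions for demonstration only, not the study's data.

```python
# Illustrative sketch (not from the cited study): deriving a "critical value"
# for an admission biomarker via the ROC / Youden-index method referenced in
# PUBMED:31256198. All data below are synthetic.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic admission albumin levels (g/dL): non-survivors tend to run lower.
albumin_survivors = rng.normal(3.6, 0.5, 900)
albumin_deceased = rng.normal(2.8, 0.5, 90)

levels = np.concatenate([albumin_survivors, albumin_deceased])
died = np.concatenate([np.zeros(900), np.ones(90)])

# Lower albumin predicts death, so score = -albumin makes "higher = riskier".
fpr, tpr, thresholds = roc_curve(died, -levels)
youden_j = tpr - fpr
critical_value = -thresholds[np.argmax(youden_j)]  # back to the g/dL scale

print(f"AUC: {roc_auc_score(died, -levels):.2f}")
print(f"Critical albumin cutoff: {critical_value:.2f} g/dL")
```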
Instruction: Routine chest radiography of children with cancer hospitalized for fever and neutropenia: is it really necessary? Abstracts: abstract_id: PUBMED:9305718 Routine chest radiography of children with cancer hospitalized for fever and neutropenia: is it really necessary? Background: Although there have been two reports suggesting that it is not necessary to obtain chest radiographs of all children with cancer who are hospitalized for fever and neutropenia, this practice continues. Methods: Fifty-four children with cancer who were hospitalized for 108 episodes of fever and neutropenia were followed prospectively. Data on their respiratory signs and symptoms were collected on admission and throughout their hospital course. Chest radiographs were obtained at the discretion of the pediatric oncology attending physician and were interpreted by a pediatric radiologist. Results: Pneumonia was documented by chest radiograph in 4 of the 108 episodes (3.7%) of fever and neutropenia. In 10 of the 108 episodes, the children had abnormal respiratory findings; this group included the 4 children with pneumonia documented by chest X-ray examination. None of the children with normal respiratory findings hospitalized for the remaining 98 episodes had pneumonia. Chest radiographs were not obtained for 40 of the 108 episodes of fever and neutropenia. None of the children with these 40 episodes had respiratory abnormalities and all recovered without a problem. Chest radiographs were obtained for the remaining 68 episodes of fever and neutropenia. Of the four children in this group with pneumonia documented by chest X-ray, two were diagnosed on admission, and another two whose initial radiographs were normal developed pneumonia later in their hospital course. There were no differences in age, absolute neutrophil count, temperature at presentation, or type of malignancy between the children who had chest radiographs and the children who did not. Conclusions: Pneumonia is an uncommon cause of infection in children with cancer hospitalized for fever and neutropenia. Therefore, the authors believe it is not necessary to obtain a chest radiograph in children with no respiratory abnormalities who are hospitalized for fever and neutropenia. abstract_id: PUBMED:15266405 Is routine chest radiography necessary for the initial evaluation of fever in neutropenic children with cancer? Background: The yield of routine chest radiography (CXR) as part of the initial management of febrile neutropenic pediatric oncology patients is questionable. Procedure: We retrospectively analyzed the clinical records of neutropenic (absolute neutrophil count ≤ 0.5 × 10⁹/L) children with cancer, admitted with oral temperature ≥ 38 °C to our institution, between January 2001 and October 2002. Following admission, patients received tobramycin plus either piperacillin or ticarcillin-clavulanic acid. Admission routine CXRs were reviewed. Clinical and radiological features were compared with the discharge diagnosis. Age, underlying disease, and the presence of pulmonary symptoms or signs were studied as possible predictors of CXR findings related to pneumonia. Results: In total, 88 patients experienced 170 episodes of fever. A routine admission CXR was obtained for 157 of the episodes. Radiologists found 20 (12.7%) abnormal CXRs (6 with a segmental or lobar consolidation considered to represent pneumonia).
In addition, two patients with abnormal admission CXR developed lobar consolidation on a repeat film later in their hospital course. There were no differences in age or type of underlying disease between children with or without pneumonia. Respiratory symptoms were initially present in 58 cases. Seven (12%) had pneumonia. Among the 99 asymptomatic cases, only one (1%) patient had pneumonia (P = 0.0041). This child had a positive blood culture for P. aeruginosa at the time of admission. None of the children had initial therapy modified on the basis of radiologic findings. Conclusion: In this study, pneumonia was an unusual cause of fever (5%), especially in the absence of respiratory signs or symptoms (1%). Admission CXR should be reserved for the neutropenic pediatric oncology patient presenting with fever and abnormal respiratory findings. abstract_id: PUBMED:22278307 Diagnostic value of routine chest radiography in febrile, neutropenic children for early detection of pneumonia and mould infections. Background: Despite recent studies failing to demonstrate the value of routine chest radiography (CXR) in the initial evaluation of the febrile neutropenic patient with cancer, this screening test is advocated by some experts. We evaluated the benefits of CXR for early diagnosis of pulmonary infection at St. Jude Children's Research Hospital (SJCRH) with emphasis on early recognition of mould infections. Patients And Methods: We reviewed the courses of 200 consecutive febrile neutropenic pediatric patients to determine if routine CXR at initial evaluation was useful in the identification of clinically occult pneumonia. We also reviewed all cases of proven or probable mould infections from the opening of SJCRH in 1962 until 1998, when routine CXR was no longer practiced in our institution, to identify cases that were first recognized by routine CXR. Results: Of 200 febrile neutropenic patients, pulmonary abnormalities consistent with pneumonia were detected by routine CXR in only five patients without pulmonary signs or symptoms. In only one case was a change in management considered. Of the 70 patients with pulmonary mould infection identified from 1962 to 1998, routine CXR was performed in 45 patients at the onset of a febrile, neutropenic episode in which a mould infection was diagnosed. Routine CXR was pivotal in the recognition of the mould infection in only two cases over this 36-year period. Conclusion: CXR is warranted in the evaluation of the newly febrile neutropenic pediatric oncology patient only when respiratory signs or symptoms are present. abstract_id: PUBMED:3183701 Use of routine chest radiography in the evaluation of fever in neutropenic pediatric oncology patients. Evaluation of febrile episodes in children who have become neutropenic during treatment for malignant disease has traditionally included radiography of the chest. It has been our impression that the yield of such examination is low. To test this hypothesis, we reviewed all chest radiographs (CXRs) obtained in the above setting in our institution over the last 3 years. These radiographs were independently reviewed by two of us (R.C., J.F.). Sixty-one patients experienced 134 febrile neutropenic episodes for which a CXR was obtained. Only eight (6%) of these films revealed any abnormality. After careful review, it was apparent that four of these radiographs did not represent an infectious process. Thus, only four of 134 films (2.9%) indicated pulmonary infection as the probable cause of fever in the patient.
All four of these patients had prominent respiratory signs or symptoms. Of patients who were febrile but without pulmonary signs/symptoms, only one of 49 had an abnormal radiograph. We feel that such a low yield (at most 2%) calls into question the routine practice of obtaining a CXR in the febrile neutropenic child who is otherwise asymptomatic. abstract_id: PUBMED:1913490 The yield of routine chest radiography in children with cancer hospitalized for fever and neutropenia. A routine admission chest radiograph (CXR) in pediatric patients with cancer who are admitted to the hospital for fever and neutropenia has been advised because the signs and symptoms of pneumonia may be absent. The authors studied 131 consecutive patient admissions for fever and neutropenia to evaluate the diagnostic yield of routine CXR. All patients had a complete history, physical examination, complete blood count, blood culture, urinalysis, urine culture, and CXR. Patients routinely started ceftazidime monotherapy. Results of the CXR were correlated with the presence or absence of signs and symptoms of respiratory disease. Of 128 CXR results, 26 (20%) were abnormal (13 with known malignant disease, 2 with atelectasis, 3 with peribronchial cuffing, and 8 with pneumonia [6%]). Three patients with pneumonia were asymptomatic. Therefore, only 3 of 128 patients (2.3%) had pneumonia on CXR not suspected by physical examination. None would have had initial therapy modified based on the CXR finding alone. The authors concluded that the incidence of pneumonia in a child with fever and neutropenia is low and that routine CXR at diagnostic evaluation is unnecessary in the asymptomatic ambulatory patient. abstract_id: PUBMED:14602135 Routine radiography does not have a role in the diagnostic evaluation of ambulatory adult febrile neutropenic cancer patients. Cancer patients treated with chemotherapy are susceptible to bacterial infections. When an adult patient presents with febrile neutropenia, standard diagnostic care includes physical examination, laboratory diagnostics, chest X-ray (CXR) and sinus radiography. However, the yield of routine radiography in the diagnostic evaluation of ambulatory adult febrile neutropenic patients with normal findings at their physical examination is questionable. Two CXRs and one sinus X-ray were obtained in 109 and 106 febrile neutropenic episodes after chemotherapy in ambulatory adult patients who had no clinical signs suggesting pulmonary infection or sinusitis. We found that in only two of 109 (1.8%; 95% Confidence Interval (CI): 0.3-5.8%) febrile neutropenic episodes without clinical signs of new pulmonary disease, the CXR showed a consolidation suggesting pneumonia. In addition, in five of 88 (5.7%; 95% CI: 2.2-12.0%) febrile episodes in asymptomatic patients, sinus X-ray suggested sinusitis. In none of these seven episodes was a change of antibiotic therapy necessary. In the absence of clinical signs indicating pneumonia or sinusitis, the yield of CXR and sinus radiography in ambulatory adult cancer patients presenting with febrile neutropenia is minimal; CXR and sinus radiography should no longer be performed on a routine basis. abstract_id: PUBMED:15481080 Is routine chest radiography necessary for the initial evaluation of fever in neutropenic children with cancer? N/A abstract_id: PUBMED:9305699 Routine chest radiography for pediatric oncology patients with febrile neutropenia: is it really necessary? 
N/A abstract_id: PUBMED:22050289 Systematic review and meta-analysis of the value of clinical features to exclude radiographic pneumonia in febrile neutropenic episodes in children and young people. Introduction: Children and young people who present with febrile neutropenia (FNP) secondary to malignancies or their treatment frequently do not undergo routine chest radiography. With shorter courses of antibiotic therapy, failure to recognise pneumonia and consequent under-treatment could produce significant problems. Methods: The review was conducted to determine the value of the absence of clinical features of lower respiratory tract infection in excluding radiographic pneumonia at presentation of FNP, using Centre for Reviews and Dissemination methods. It was registered with the HTA Registry of systematic reviews, CRD32009100453. Ten bibliographic databases, conference proceedings, reference lists and citations were searched. Cohort studies which compared clinical examination to radiographic findings were included. Results were summarised by random-effects meta-analysis. Results: Four studies were included. Synthesis of the three higher-quality studies gave imprecise estimates of the average sensitivity (75%; 95% CI 52% to 89%) and average specificity (69%; 95% CI 57% to 78%) for clinical examination in the detection of radiographic pneumonia. If the prevalence of pneumonia is 5%, these estimates produce a negative predictive value of 98% (95% CI 96% to 99%). Alternatively, there remains a 1.9% probability of pneumonia (95% CI 0.7% to 4.2%). Conclusion: Signs and symptoms of lower respiratory infection have only moderate sensitivity and specificity for pneumonia; the low prevalence of the condition justifies the routine withholding of chest radiographs. However, for those with a predisposition to pneumonia, or re-presenting after a short course of antibiotic therapy, a chest X-ray should be performed despite an absence of signs. abstract_id: PUBMED:33175714 The utility of chest X-ray vs. computed tomography in febrile neutropenia patients presenting to the emergency department. Introduction: Pulmonary infections are not uncommon in patients with febrile neutropenia. Physicians agree on performing a chest X-ray (CXR) for all febrile neutropenic patients presenting with respiratory signs/symptoms; opinion is divided, however, when it comes to asymptomatic febrile neutropenic patients (i.e., those without respiratory signs/symptoms). A superior alternative to CXR is computed tomography (CT). CT, in comparison to CXR, has been shown to have better sensitivity in detecting pulmonary foci. The aim of our study is to compare the diagnostic performance of CT and CXR in febrile neutropenic patients presenting to the emergency department, regardless of their clinical presentation. We are also interested in the predictors of pneumonia on chest imaging. Methodology: This is a retrospective cohort study conducted on febrile neutropenic adult cancer patients presenting to the emergency department of the American University of Beirut Medical Center. Results: 11.4% of 263 patients had pneumonia, although 27.7% had respiratory signs/symptoms. 17.1% of those who were symptomatic and underwent CXR were found to have pneumonia. 41.7% of those who were symptomatic and underwent CT were found to have pneumonia. 30% had negative findings on CXR but pneumonia on CT. Conclusion: Patients with positive findings of pneumonia on chest imaging mainly had solid tumors, profound neutropenia, a higher CCI and a longer LOS.
The presence of respiratory signs is the main predictor of positive findings of pneumonia on chest imaging. CT is superior to CXR in detecting pulmonary foci in the population studied. Answer: The necessity of routine chest radiography for children with cancer hospitalized for fever and neutropenia has been questioned by several studies. The findings suggest that pneumonia is an uncommon cause of infection in these patients, and the routine use of chest radiographs may not be necessary, especially in the absence of respiratory signs or symptoms. A prospective study followed 54 children hospitalized for 108 episodes of fever and neutropenia and found that pneumonia was documented in only 4 episodes (3.7%). None of the children with normal respiratory findings had pneumonia, leading the authors to conclude that chest radiographs are not necessary for children with no respiratory abnormalities who are hospitalized for fever and neutropenia (PUBMED:9305718). Another study retrospectively analyzed 170 episodes of fever in neutropenic children with cancer and found that pneumonia was an unusual cause of fever (5%), particularly in the absence of respiratory signs or symptoms (1%). The study concluded that admission chest radiography should be reserved for patients presenting with fever and abnormal respiratory findings (PUBMED:15266405). A review at St. Jude Children's Research Hospital evaluated the benefits of routine chest radiography for early diagnosis of pulmonary infection, with emphasis on early recognition of mould infections. The study concluded that chest radiography is warranted only when respiratory signs or symptoms are present (PUBMED:22278307). Similarly, another study reviewed chest radiographs obtained for febrile neutropenic episodes and found a very low yield (2.9%) for indicating pulmonary infection as the probable cause of fever. The study suggested that routine chest radiography is not necessary for asymptomatic febrile neutropenic children (PUBMED:3183701). A study evaluating the diagnostic yield of routine chest radiography in children with cancer hospitalized for fever and neutropenia found that only 2.3% of patients had pneumonia on chest radiography not suspected by physical examination, and none would have had initial therapy modified based on the radiographic findings alone. This led to the conclusion that routine chest radiography at diagnostic evaluation is unnecessary in asymptomatic patients (PUBMED:1913490).
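The negative-predictive-value figure quoted in PUBMED:22050289 above follows directly from Bayes' rule applied to the pooled sensitivity, pooled specificity, and an assumed prevalence. A minimal worked check using the review's point estimates (confidence intervals ignored):

```python
# Worked check of the NPV arithmetic in PUBMED:22050289 (point estimates only).
sensitivity = 0.75   # clinical examination detecting radiographic pneumonia
specificity = 0.69
prevalence = 0.05    # assumed pneumonia prevalence in febrile neutropenia

true_negative = specificity * (1 - prevalence)
false_negative = (1 - sensitivity) * prevalence
npv = true_negative / (true_negative + false_negative)

print(f"NPV: {npv:.1%}")                          # ~98.1%, as reported
print(f"Post-negative-test risk: {1 - npv:.1%}")  # ~1.9%, as reported
```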
Instruction: Is dermoscopy (epiluminescence microscopy) useful for the diagnosis of melanoma? Abstracts: abstract_id: PUBMED:9001863 Dermoscopy (epiluminescence microscopy) of pigmented skin lesions. Current status and evolving trends. Dermoscopy (epiluminescence microscopy) is a noninvasive technique that is designed for in vivo microscopic examination of pigmented skin lesions, particularly for the early recognition of malignant melanoma. Since its introduction, dermoscopy technique has undergone extensive improvements; the instruments have become more readily available; and the diagnostic indications, benefits, and limitations have been better delineated. This article offers a concise review of the technique of dermoscopy, assesses the current status, and makes some predictions for future applications. abstract_id: PUBMED:18547883 Digital epiluminescence dermoscopy for pigmented cutaneous lesions, primary care physicians, and telediagnosis: a useful tool? Background: Digital epiluminescence dermoscopy is a relatively recent tool, based on the acquisition of high-definition digital images, for the diagnosis of pigmented cutaneous lesions. Purpose: To verify the usefulness of digital dermoscopy in detecting pigmented lesions with features which may lead to suspicion of malignancy, when the examination is carried out by primary care physicians (PCP), not expert in that kind of diagnosis. Another target was an appraisal of the effectiveness and safety of telediagnosis based on epiluminescence digital dermoscopy on pigmented lesions. Methods: Digital images from some peripheral centres (235 lesions) have been forwarded in real time to the reference centre (Unit of Plastic Surgery, University of Siena, Italy), with a double judgement by each primary care physician ('benign' or 'suspicious of malignancy') on the basis of anamnesis and clinical examination at first step, and dermoscopy as second step. The image analysis carried out from the reference centre identified every lesion examined as 'to be controlled' (219 lesions) or 'to be removed' (16 lesions). Results: Regarding the patients with dermoscopic examination (197 subjects, 235 lesions), the investigation reduced the number of lesions suspected of malignancy from 68 to 29 after the first dermoscopy, and from 29 to 16 after the re-examination of the image by the central unit researchers. Fourteen lesions suspected of malignancy when examined in the peripheral centres were then evaluated as benign by the central unit researchers, while one lesion, judged as benign at first (always labelled as 'benign' by the PCP), was then revealed as a dysplastic naevus. Conclusion: Digital dermoscopy can be enhanced by telediagnosis, which provides a better control of cutaneous pigmented lesions in the peripheral areas, thus reducing the number of consultations in specialised centres. abstract_id: PUBMED:11594860 Is dermoscopy (epiluminescence microscopy) useful for the diagnosis of melanoma? Results of a meta-analysis using techniques adapted to the evaluation of diagnostic tests. Objective: To assess, by means of meta-analysis techniques for diagnostic tests, the accuracy of dermoscopic (also known as dermatoscopy and epiluminescence microscopy) diagnosis of melanoma performed by experienced observers vs. naked-eye clinical examination. Data Sources: MEDLINE, EMBASE, PASCAL-BIOMED, and BIUM databases were screened through May 31, 2000, without any language restrictions. 
Study Selection: Original studies were selected when the following criteria were met: spectrum of lesions well described, histologic findings as the standard criterion, and calculated or calculable sensitivity and specificity. Eight of 672 retrieved references were retained. Data Extraction: Three investigators extracted data. In case of disagreement, consensus was obtained. Summary receiver operating characteristic curve analysis was used to describe the central tendency of the studies, and to compare dermoscopy and clinical examination. Data Synthesis: Selected studies represented 328 melanomas, mostly less than 0.76 mm thick, and 1865 mostly melanocytic benign pigmented skin lesions. For dermoscopic diagnosis of melanoma, the sensitivity and specificity ranges were 0.75 to 0.96 and 0.79 to 0.98, respectively. Dermoscopy had significantly higher discriminating power than clinical examination, with respective estimated odds ratios of 76 (95% confidence interval, 25-223) and 16 (95% confidence interval, 9-31) (P = .008), and respective estimated positive likelihood ratios of 9 (95% confidence interval, 5.6-19.0) and 3.7 (95% confidence interval, 2.8-5.3). The roles of the number of lesions analyzed, the percentage of melanoma lesions, the instrument used, and the dermoscopic criteria used in each study could not be established. Conclusion: For experienced users, dermoscopy is more accurate than clinical examination for the diagnosis of melanoma in a pigmented skin lesion. abstract_id: PUBMED:15121210 Demonstration of residual perifollicular pigmentation in localized vitiligo--a reverse and novel application of digital epiluminescence dermoscopy. Digital epiluminescence dermoscopy (microscopy) is usually employed to examine melanomas and other pigmented lesions. We report its reverse application in assisting the early diagnosis of a depigmentation condition, localized vitiligo. A pattern of depigmentation with residual reservoirs of perifollicular pigment is clearly visualized. This pattern is not seen in other disorders of depigmentation. Such a pattern signifies focally active or repigmenting vitiligo and thus clearly serves as a useful guide in cases where there is doubt about the possible diagnosis. Further studies are warranted to affirm the specificity and applicability of our observations. abstract_id: PUBMED:8214386 Histopathologic correlates of structures seen on dermoscopy (epiluminescence microscopy). Dermoscopy (epiluminescence microscopy) is an in vivo technique that enables the clinician to visualize a variety of structures in pigmented cutaneous lesions that are not discernible by naked-eye examination. To identify the histologic correlates of these structures, a series of 71 pigmented neoplasms was documented photographically with and without dermoscopy. These lesions then underwent total excision and careful step-sectioning so that the resulting histologic slides could be correlated with the dermoscopic photographs. The histologic correlates of the pigment network, brown globules, black dots, blotches, hypopigmented areas, white areas, grey-blue areas, and whitish veil are identified. The structures seen under dermoscopy have specific histologic correlates. Understanding these histopathologic correlates will allow clinicians to better evaluate the dermoscopic features of pigmented lesions. abstract_id: PUBMED:36831472 Long-Term Sequential Digital Dermoscopy of Low-Risk Patients May Not Improve Early Diagnosis of Melanoma Compared to Periodical Handheld Dermoscopy.
Sequential digital dermoscopy (SDD) enables the diagnosis of a subgroup of slow-growing melanomas that lack suspicious features at baseline examination but exhibit detectable change on follow-up. The combined use of total-body photography and SDD is recommended in high-risk subjects by current guidelines. To establish the usefulness of SDD for low-risk individuals, we conducted a retrospective study using electronic medical records of low-risk patients with a histopathological diagnosis of cutaneous melanoma between 1 January 2016 and 31 December 2019, who had been referred and monitored for long-term follow-up of clinically suspicious melanocytic nevi. We sought to compare the distribution of "early" cutaneous melanoma, defined as melanoma in situ and pT1a melanoma, between SDD and periodical handheld dermoscopy in low-risk patients. A total of 621 melanomas were diagnosed in a four-year timespan; 471 melanomas were diagnosed by handheld dermoscopy and 150 by digital dermoscopy. Breslow tumor thickness was significantly higher for melanomas diagnosed by handheld compared to digital dermoscopy (0.56 ± 1.53 vs. 0.26 ± 0.84 mm, p = 0.030), with a significantly different distribution of pT stages between the two dermoscopic techniques. However, no significant difference was found with respect to the distribution of pT stages, mean Breslow tumor thickness, ulceration, and prevalence of associated melanocytic nevus in tumors diagnosed on periodical handheld dermoscopy compared to SDD. Our results confirm that periodical dermoscopic examination enables the diagnosis of cutaneous melanoma at an earlier stage compared to first-time examination, as this was associated in our patients with better prognostic features. However, in our long-term monitoring of low-risk subjects, Breslow tumor thickness and pT stage distribution did not differ between handheld periodical dermoscopy and SDD. abstract_id: PUBMED:12780722 Cutaneous endometriosis: non-invasive analysis by epiluminescence microscopy. The clinical appearance of cutaneous endometriosis can share some features with malignant melanoma, thus representing a possible cause for concern in both patient and clinician. In recent years, the use of epiluminescence microscopy (ELM, dermoscopy) has proved useful in improving the accuracy of diagnosis of pigmented skin lesions. The purpose of this study was to analyse the dermoscopic features of cutaneous endometriosis with histopathological correlation. We studied a case which showed homogeneous reddish pigmentation, regularly distributed. Within this typical pigmentation there were small red globular structures, but more defined and of a deeper hue, which we called 'red atolls'. ELM thus revealed a distinctive pattern in cutaneous endometriosis. abstract_id: PUBMED:34284940 Bowen's disease of the penile shaft presenting as a pigmented macule: dermoscopy, reflectance confocal microscopy and histopathological correlation. The penile localization of pigmented Bowen's disease has been rarely reported and has been mostly related to human papillomavirus infection. Early diagnosis and treatment are important to prevent progression to invasive squamous cell carcinoma. However, diagnosis can be challenging because it may be difficult to distinguish from melanoma, even using dermoscopy. Reflectance confocal microscopy may be useful in suggesting the bedside diagnosis before the histopathological confirmation.
A case of penile pigmented Bowen's disease is described along with its dermoscopy and reflectance confocal microscopy findings and their correlation with histopathology. abstract_id: PUBMED:9159055 Diagnosis of pigmented skin lesions aided by epiluminescence microscopy. Early diagnosis of superficial melanoma (Clark Level I, II) remains the best approach to reduce the death rate from this malignant neoplasm. Today's well-informed patients understand the need to have changing moles evaluated. However, many benign pigmented lesions that undergo changes in appearance do not require excision. Epiluminescence microscopy can facilitate the differential diagnosis of cutaneous pigmented lesions and help determine which of these require biopsy. abstract_id: PUBMED:24134084 Dermoscopy for the family physician. Noninvasive in vivo imaging techniques have become an important diagnostic aid for skin cancer detection. Dermoscopy, also known as dermatoscopy, epiluminescence microscopy, incident light microscopy, or skin surface microscopy, has been shown to increase the clinician's diagnostic accuracy when evaluating cutaneous neoplasms. A handheld instrument called a dermatoscope or dermoscope, which has a transilluminating light source and standard magnifying optics, is used to perform dermoscopy. The dermatoscope facilitates the visualization of subsurface skin structures that are not visible to the unaided eye. The main purpose for using dermoscopy is to help correctly identify lesions that have a high likelihood of being malignant (i.e., melanoma or basal cell carcinoma) and to assist in differentiating them from benign lesions clinically mimicking these cancers. Colors and structures visible with dermoscopy are required for generating a correct diagnosis. Routinely using dermoscopy and recognizing the presence of atypical pigment network, blue-white color, and dermoscopic asymmetry will likely improve the observer's sensitivity for detecting pigmented basal cell carcinoma and melanoma. A two-step algorithm based on a seven-level criterion ladder is the foundation for dermoscopic evaluation of skin lesions. The first step of the algorithm is intended to help physicians differentiate melanocytic lesions from the following nonmelanocytic lesions: dermatofibroma, basal cell carcinoma, seborrheic keratosis, and hemangioma. The second step is intended to help physicians differentiate nevi from melanoma using one of several scoring systems. From a management perspective, the two-step algorithm is intended to guide the decision-making process on whether to perform a biopsy, or to refer or reassure the patient. Answer: Yes, dermoscopy, also known as epiluminescence microscopy, is useful for the diagnosis of melanoma. A meta-analysis using techniques adapted to the evaluation of diagnostic tests found that dermoscopy performed by experienced observers has significantly higher discriminating power than naked-eye clinical examination for the diagnosis of melanoma in pigmented skin lesions. The sensitivity and specificity ranges for dermoscopic diagnosis of melanoma were 0.75 to 0.96 and 0.79 to 0.98, respectively, indicating that for experienced users, dermoscopy is more accurate than clinical examination (PUBMED:11594860). Furthermore, dermoscopy is a noninvasive technique designed for in vivo microscopic examination of pigmented skin lesions, particularly for the early recognition of malignant melanoma. 
The technique has undergone extensive improvements, and its diagnostic indications, benefits, and limitations have been better delineated over time (PUBMED:9001863). The use of digital epiluminescence dermoscopy, which involves the acquisition of high-definition digital images, has also been shown to be useful. It can help primary care physicians, who are not experts in diagnosing pigmented lesions, to detect lesions with features that may lead to suspicion of malignancy. Telediagnosis based on digital dermoscopy has been effective and safe, providing better control of cutaneous pigmented lesions in peripheral areas and reducing the number of consultations in specialized centers (PUBMED:18547883). In summary, dermoscopy is a valuable tool for the diagnosis of melanoma, offering a higher accuracy compared to clinical examination alone, and can be enhanced by digital techniques and telediagnosis to improve access and efficiency in melanoma detection.
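As background to the accuracy figures in PUBMED:11594860 above: positive likelihood ratios and diagnostic odds ratios are simple functions of sensitivity and specificity, and a likelihood ratio updates a pre-test probability through the odds form of Bayes' rule. The sketch below uses an illustrative operating point inside the reported ranges (sensitivity 0.75-0.96, specificity 0.79-0.98); it is not the meta-analysis computation itself, and the 10% pre-test probability is an arbitrary example.

```python
# Illustrative sketch: how the likelihood ratios quoted in PUBMED:11594860
# relate to sensitivity/specificity, and what they imply post-test.
def positive_lr(sens, spec):
    """LR+ = P(test+|disease) / P(test+|no disease)."""
    return sens / (1 - spec)

def diagnostic_odds_ratio(sens, spec):
    return (sens / (1 - sens)) * (spec / (1 - spec))

def post_test_probability(pre_test_prob, lr):
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Example operating point inside the reported ranges.
sens, spec = 0.90, 0.90
print(f"LR+ = {positive_lr(sens, spec):.1f}")            # 9.0, matching the reported LR+ of ~9
print(f"DOR = {diagnostic_odds_ratio(sens, spec):.0f}")  # 81, same order as the reported 76

# A positive dermoscopic exam raises a 10% pre-test melanoma probability to ~50%.
print(f"Post-test probability: {post_test_probability(0.10, positive_lr(sens, spec)):.0%}")
```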
Instruction: Chronic disease detection and access: does access improve detection, or does detection make access more difficult? Abstracts: abstract_id: PUBMED:22546593 Chronic disease detection and access: does access improve detection, or does detection make access more difficult? Background: The recorded detection of chronic disease by practices is generally lower than the prevalence predicted by population surveys. Aim: To determine whether patient-reported access to general practice predicts the recorded detection rates of chronic diseases in that setting. Design And Setting: A cross-sectional study involving 146 general practices in Leicestershire and Rutland, England. Method: The numbers of patients recorded as having chronic disease (coronary heart disease, chronic obstructive pulmonary disease, hypertension, diabetes) were obtained from Quality and Outcomes Framework (QOF) practice disease registers for 2008-2009. Characteristics of practice populations (deprivation, age, sex, ethnicity, proportion reporting poor health, practice turnover, list size) and practice performance (achievement of QOF disease indicators, patient experience of being able to consult a doctor within 2 working days and book an appointment >2 days in advance) were included in regression models. Results: Patient characteristics (deprivation, age, poor health) and practice characteristics (list size, turnover, QOF achievement) were associated with recorded detection of more than one of the chronic diseases. Practices in which patients were more likely to report being able to book appointments had reduced recording rates of chronic disease. Being able to consult a doctor within 2 days was not associated with levels of recorded chronic disease. Conclusion: Practices with high levels of deprivation and older patients have increased rates of recorded chronic disease. As the number of patients recorded with chronic disease increased, the capacity of practices to meet patients' requests for appointments in advance declined. The capacity of some practices to detect and manage chronic disease may need improving. abstract_id: PUBMED:35458941 Development and Validation of a Digital Image Processing-Based Pill Detection Tool for an Oral Medication Self-Monitoring System. Long-term adherence to medication is of critical importance for the successful management of chronic diseases. Objective tools to track oral medication adherence are either lacking, expensive, difficult to access, or require additional equipment. To improve medication adherence, cheap and easily accessible objective tools able to track compliance levels are necessary. A tool to monitor pill intake that can be implemented in mobile health solutions without the need for additional devices was developed. We propose a pill intake detection tool that uses digital image processing to analyze images of a blister to detect the presence of pills. The tool uses the Circular Hough Transform as a feature extraction technique and is therefore primarily useful for the detection of pills with a round shape. The tool comprises two steps: first, registration of a full blister and storage of reference values in a local database; second, detection and classification of taken and remaining pills in similar blisters to determine the actual number of untaken pills. In the registration of round pills in full blisters, 100% of pills in gray blisters or blisters with a transparent cover were successfully detected.
In the counting of untaken pills in partially opened blisters, 95.2% of remaining and 95.1% of taken pills were detected in gray blisters, while 88.2% of remaining and 80.8% of taken pills were detected in blisters with a transparent cover. The proposed tool provides promising results for the detection of round pills. However, the classification of taken and remaining pills needs to be further improved, in particular for the detection of pills with non-oval shapes. abstract_id: PUBMED:20148148 The detection and treatment of depression in the physically ill. Depression and chronic physical illness are in a reciprocal relationship with one another: not only do many chronic illnesses cause higher rates of depression, but depression has been shown to antedate some chronic physical illnesses. Depression associated with physical illness is less well detected than depression occurring on its own, and various ways of improving both the detection and treatment of depression accompanying physical illness are described. This paper is in four parts, the first dealing with the evidence for depression having a special relationship with physical disorders, the second dealing with the detection of depression in physically ill patients, the third with the treatment of depression, and the fourth describing the advantages of treating depression among physically ill patients. abstract_id: PUBMED:37630131 Integrated Plastic Microfluidic Device for Heavy Metal Ion Detection. The presence of heavy metal ions in soil, air and water constitutes an important global environmental threat, as these ions accumulate throughout the food chain, contributing to the rise of chronic diseases, including, amongst others, cancer and kidney failure. To date, many efforts have been made for their detection, but there is still a need for the development of sensitive, low-cost, and portable devices able to conduct on-site detection of heavy metal ions. In this work, we combine microfluidic technology and electrochemical sensing in a plastic chip for the selective detection of heavy metal ions utilizing DNAzymes immobilized between platinum nanoparticles (PtNPs), demonstrating a reliable portable solution for water pollution monitoring. For the realization of the microfluidic-based heavy metal ion detection device, a fast and easy-to-implement fabrication method based on the photolithography of dry photosensitive layers is proposed. As a proof of concept, we demonstrate the detection of Pb2+ ions using the prototype microfluidic device. abstract_id: PUBMED:36291805 Label-Free Surface Enhanced Raman Spectroscopy for Cancer Detection. Blood is a vital reservoir housing numerous disease-related metabolites and cellular components. Thus, it is also of interest for cancer diagnosis. Surface-enhanced Raman spectroscopy (SERS) is widely used for molecular detection due to its very high sensitivity and multiplexing properties. Its real potential for cancer diagnosis is not yet clear. In this study, using silver nanoparticles (AgNPs) as substrates, a number of experimental parameters and scenarios were tested to disclose the potential of this technique for cancer diagnosis. The discrimination of serum samples from cancer patients, healthy individuals and patients with chronic diseases was successfully demonstrated with over 90% diagnostic accuracy. Moreover, the SERS spectra of the blood serum samples obtained from cancer patients before and after tumor removal were compared.
It was found that the spectral pattern for serum from cancer patients evolved into the spectral pattern observed with serum from healthy individuals after the removal of tumors. The data strongly suggest that the technique has tremendous potential for cancer detection and screening, bringing the possibility of early detection to the table. abstract_id: PUBMED:30746003 Difficult Vascular Access Anatomy Associated with Decreased Success of Revascularization in Emergent Thrombectomy. Background: Thrombectomy has become established as a successful treatment strategy for ischemic stroke, and consequently, more patients are undergoing this procedure. Due to comorbid conditions, chronic disease states, and advanced age, many patients have anatomy which complicates revascularization, specifically difficult aortic arch anatomy, tortuous common and internal carotid artery anatomy, or both. Methods: In the present study, these unfavorable anatomic parameters were analyzed for 53 patients undergoing acute thrombectomy for ischemic stroke. Statistical analysis was performed and the outcome TICI scores were compared. Twenty-six of the patients analyzed had features of difficult femoral access. Results: Difficult arch anatomy was associated with unsuccessful revascularization (p = 0.03, Fisher's exact test), with only 53% of patients with this feature having favorable TICI scores. Difficult common carotid access was also associated with unsuccessful revascularization (p = 0.004, Fisher's exact test), with 38% success. There was a trend toward significance for unsuccessful revascularization for difficult internal carotid artery access (p = 0.06, Fisher's exact test). Conclusion: Any combination of the aforementioned anatomic parameters was associated with decreased success of treatment, and this was an independent predictor in multivariate analysis (p = 0.009). As difficult access anatomy is commonly encountered in patients undergoing emergent thrombectomy, it is important for the treating physician to be prepared and to adapt access strategies to increase the likelihood of successful revascularization. abstract_id: PUBMED:34099088 The Promise of Disease Detection Dogs in Pandemic Response: Lessons Learned From COVID-19. One of the lessons learned from the coronavirus disease 2019 (COVID-19) pandemic is the utility of an early, flexible, and rapidly deployable disease screening and detection response. The largely uncontrolled spread of the pandemic in the United States exposed a range of planning and implementation shortcomings; had such capabilities been in place before the pandemic emerged, the trajectory may have changed. Disease screening by detection dogs shows great promise as a noninvasive, efficient, and cost-effective screening method for COVID-19 infection. We explore evidence of their use in infectious and chronic diseases; the training, oversight, and resources required for implementation; and potential uses in various settings. Disease detection dogs may contribute to the response to current and future public health pandemics; however, further research is needed to extend our knowledge and measurement of their effectiveness and feasibility as a public health intervention tool, and efforts are needed to ensure public and political support. abstract_id: PUBMED:31449759 Caries Detection with Near-Infrared Transillumination Using Deep Learning. Dental caries is the most prevalent chronic condition worldwide. Early detection can significantly improve treatment outcomes and reduce the need for invasive procedures.
Recently, near-infrared transillumination (TI) imaging has been shown to be effective for the detection of early-stage lesions. In this work, we present a deep learning model for the automated detection and localization of dental lesions in TI images. Our method is based on a convolutional neural network (CNN) trained on a semantic segmentation task. We use various strategies to mitigate issues related to training data scarcity, class imbalance, and overfitting. With only 185 training samples, our model achieved an overall mean intersection-over-union (IOU) score of 72.7% on a 5-class segmentation task and specifically IOU scores of 49.5% and 49.0% for proximal and occlusal carious lesions, respectively. In addition, we constructed a simplified task, in which regions of interest were evaluated for the binary presence or absence of carious lesions. For this task, our model achieved an area under the receiver operating characteristic curve of 83.6% and 85.6% for occlusal and proximal lesions, respectively. Our work demonstrates that a deep learning approach for the analysis of dental images holds promise for increasing the speed and accuracy of caries detection, supporting the diagnoses of dental practitioners, and improving patient outcomes. abstract_id: PUBMED:36945112 Evaluation of workplace hypertension preventative and detection service in a Ghanaian University. Objectives: This study sought to evaluate the effectiveness of a pharmacist-led hypertension screening, preventative and detection service at the workplace. Methods: This was a prospective study conducted among staff at the Kwame Nkrumah University of Science and Technology from September 2019 to September 2020. Staff were screened for hypertension and interviewed via a structured questionnaire to gather data on their lifestyle practices and risk of hypertension. Prehypertensive individuals were educated and followed up for 6 months, and all participants who had blood pressure consistently above 140/90 mmHg (hypertension) were referred to the University Hospital. Key Findings: Out of 162 participants screened, 19 (11.7%) were classified as stage 1 hypertensive, 5 (3.1%) as stage 2 hypertensive and 74 (45.7%) as prehypertensive. The commonest modifiable risk factors identified were a body mass index > 25 kg/m² (99, 61.1%) and physical inactivity (97, 59.9%). Eleven (61%) out of 18 participants referred to the physician were confirmed hypertensive and prescribed medications. After a 6-month follow-up, there was a reduction in the mean systolic and diastolic blood pressures (P < 0.05), and 47 (63.5%) out of 74 participants initially classified as prehypertensive had their blood pressures within the normal range. Conclusion: Workplace preventative and detection services can effectively lead to the identification of risk factors, promotion of lifestyle changes and detection of hypertension. Such services should be integrated into workplace systems to aid the prevention and detection of chronic conditions such as hypertension. abstract_id: PUBMED:38268711 Efficient and generalizable cross-patient epileptic seizure detection through a spiking neural network. Introduction: Epilepsy is a global chronic disease that brings pain and inconvenience to patients, and an electroencephalogram (EEG) is the main analytical tool. For clinical aid that can be applied to any patient, an automatic cross-patient epilepsy seizure detection algorithm is of great significance.
Spiking neural networks (SNNs) are modeled on biological neurons and are energy-efficient on neuromorphic hardware, so they can be expected to better handle brain signals and benefit real-world, low-power applications. However, automatic epilepsy seizure detection rarely considers SNNs. Methods: In this article, we have explored SNNs for cross-patient seizure detection and discovered that SNNs can achieve state-of-the-art performance comparable to, or even better than, artificial neural networks (ANNs). We propose an EEG-based spiking neural network (EESNN) with a recurrent spiking convolution structure, which may better take advantage of the temporal and biological characteristics of EEG signals. Results: We extensively evaluate the performance of different SNN structures, training methods, and time settings, which builds a solid basis for the understanding and evaluation of SNNs in seizure detection. Moreover, we show that our EESNN model can achieve an energy reduction of several orders of magnitude compared with ANNs, according to theoretical estimates. Discussion: These results show the potential for building high-performance, low-power neuromorphic systems for seizure detection and also broaden the real-world application scenarios of SNNs. Answer: The relationship between access to healthcare and the detection of chronic diseases is complex. On one hand, patient-reported access to general practice does not necessarily predict the recorded detection rates of chronic diseases. In fact, practices where patients were more likely to report being able to book appointments had reduced recording rates of chronic disease (PUBMED:22546593). This suggests that while access to healthcare is crucial, it does not automatically lead to higher detection rates of chronic diseases. On the other hand, the detection of chronic diseases can impact access to healthcare services. As the number of patients recorded with chronic disease increased, the capacity of practices to meet patients' requests for appointments in advance declined (PUBMED:22546593). This indicates that the detection of chronic diseases might strain healthcare resources, potentially making access more difficult for patients. Moreover, the detection of chronic diseases is influenced by various factors, including the presence of comorbid conditions, chronic disease states, and advanced age, which can complicate revascularization due to difficult vascular access anatomy (PUBMED:30746003). Additionally, depression, a common comorbidity in physically ill patients, is less well detected when it accompanies physical illness (PUBMED:20148148). This further complicates the relationship between access and detection. In the context of specific chronic conditions like epilepsy, advancements in technology such as spiking neural networks (SNNs) for cross-patient seizure detection show promise for improving detection while potentially reducing energy consumption and costs (PUBMED:38268711). Similarly, the development of tools like a digital image processing-based pill detection tool (PUBMED:35458941) and integrated microfluidic devices for heavy metal ion detection (PUBMED:37630131) can facilitate the monitoring and management of chronic diseases. In summary, while access to healthcare is essential for the detection of chronic diseases, increased detection rates can strain healthcare resources, potentially making access more challenging.
Technological advancements are contributing to improved detection methods, which may help to balance the relationship between access and detection of chronic diseases.
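For readers unfamiliar with the Circular Hough Transform step underlying the pill detection tool in PUBMED:35458941, the following is a minimal, hypothetical sketch using OpenCV. The file name and every parameter value are assumptions chosen for illustration; the published tool's actual pipeline (reference registration, glare handling, classification thresholds) is not reproduced here.

```python
# Minimal sketch (assumptions throughout) of the circle-detection step behind
# a pill counting tool like the one in PUBMED:35458941.
import cv2
import numpy as np

image = cv2.imread("blister.jpg")                  # hypothetical input photo
assert image is not None, "blister.jpg not found"
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                     # suppress glare and noise

circles = cv2.HoughCircles(
    gray,
    cv2.HOUGH_GRADIENT,
    dp=1,          # accumulator resolution (same as input image)
    minDist=40,    # minimum distance between pill centres, in pixels
    param1=100,    # upper Canny edge threshold
    param2=30,     # accumulator threshold: lower = more (possibly false) circles
    minRadius=15,
    maxRadius=40,
)

if circles is not None:
    detected = np.uint16(np.around(circles[0]))    # (N, 3) array of x, y, radius
    print(f"Detected {len(detected)} round pills")
    # Comparing this count against a stored full-blister reference would give
    # the number of pills taken, as in the tool's second (classification) step.
else:
    print("No round pills detected")
```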
Instruction: Fissured fibrous cap of vulnerable carotid plaques and symptomaticity: are they correlated? Abstracts: abstract_id: PUBMED:24157737 Correlation between fissured fibrous cap and contrast enhancement: preliminary results with the use of CTA and histologic validation. Background And Purpose: Previous studies demonstrated that carotid plaques analyzed by CTA can show contrast plaque enhancement. The purpose of this preliminary work was to evaluate the possible association between the fissured fibrous cap and contrast plaque enhancement. Materials And Methods: Forty-seven consecutive (men = 25; average age = 66.8 ± 9 years) symptomatic patients studied by use of a multidetector row CT scanner were prospectively analyzed. CTA was performed before and after contrast administration, and radiation doses were recorded; analysis of contrast plaque enhancement was performed. Patients underwent carotid endarterectomy en bloc; histologic sections were prepared and evaluated for fissured fibrous cap and microvessel attenuation. The Mann-Whitney test was performed to evaluate the differences between the 2 groups. A multiple logistic regression analysis was performed to assess the effect of fissured fibrous cap and microvessel attenuation on contrast plaque enhancement. A receiver operating characteristic curve and the area under the curve were also calculated. Results: Twelve patients had a fissured fibrous cap. In 92% (11/12) of fissured fibrous cap-positive plaques, we found contrast plaque enhancement, whereas contrast plaque enhancement was found in 69% (24/35) of the plaques without fissured fibrous cap. The Mann-Whitney test showed a statistically significant difference between the contrast enhancement in plaques with fissured fibrous cap (Hounsfield units = 22.6) and without fissured fibrous cap (Hounsfield units = 12.9) (P = .011). On regression analysis, both fissured fibrous cap and neovascularization were associated with contrast plaque enhancement (P = .0366 and P = .0001). The receiver operating characteristic curve confirmed an association between fissured fibrous cap and contrast plaque enhancement with an area under the curve of 0.749 (P = .005). Conclusions: The presence of fissured fibrous cap is associated with contrast plaque enhancement. Histologic analysis showed that the presence of fissured fibrous cap is associated with larger contrast plaque enhancement than in plaques without fissured fibrous cap. abstract_id: PUBMED:24995056 Evaluation of Fibrous Cap Rupture of Atherosclerotic Carotid Plaque with Thin-Slice Source Images of Time-of-Flight MR Angiography. Objective: To investigate the ability of source images of time-of-flight magnetic resonance angiography (TOF-MRA) to detect fibrous cap rupture of atherosclerotic carotid plaques. Materials And Methods: From the database of radiological information in our hospital, 35 patients who underwent carotid MR imaging and subsequent carotid endarterectomy within 2 weeks were included in this retrospective study. MR imaging included thin-slice time-of-flight MR angiography and black-blood T1- and T2-weighted imaging. Sensitivity, specificity and accuracy were calculated for the detection of fibrous cap rupture with source images of TOF-MRA. The Cohen k coefficient was also calculated to quantify the degree of concordance of source images of TOF-MRA with histopathological data.
Results: Sensitivity, specificity and accuracy in the detection of fibrous cap rupture were 90% (95% CI: 81-98), 69% (95% CI: 56-82) and 79% (95% CI: 71-87), with a k value of 0.59. The false positives (n = 15) were caused by partial-volume averaging between fibrous cap and lumen at the shoulder of carotid plaque. The false negatives (n = 5) were underestimated as partial thinning of fibrous cap. Conclusion: Source images of TOF-MRA can be useful in the detection of fibrous cap rupture with high sensitivity, but further technical improvement is needed to overcome shortcomings causing image degradation. abstract_id: PUBMED:37210966 BP-Net: Boundary and perfusion feature guided dual-modality ultrasound video analysis network for fibrous cap integrity assessment. Ultrasonography is one of the main imaging methods for monitoring and diagnosing atherosclerosis due to its non-invasiveness and low cost. Automatic differentiation of carotid plaque fibrous cap integrity by using multi-modal ultrasound videos has significant diagnostic and prognostic value for cardiovascular and cerebrovascular disease patients. However, the task faces several challenges, including high variation in plaque location and shape, the absence of an analysis mechanism focusing on the fibrous cap, and the lack of an effective mechanism to capture the relevance among multi-modal data for feature fusion and selection. To overcome these challenges, we propose a new target boundary and perfusion feature guided video analysis network (BP-Net) based on conventional B-mode ultrasound and contrast-enhanced ultrasound videos for assessing the integrity of the fibrous cap. Based on our previously proposed plaque auto-tracking network, in our BP-Net we further introduce a plaque edge attention module and a reverse mechanism to focus the dual video analysis on the fibrous cap of plaques. Moreover, to fully explore the rich information on the fibrous cap and inside/outside of the plaque, we propose a feature fusion module for B-mode and contrast video to filter out the most valuable features for fibrous cap integrity assessment. Finally, multi-head convolution attention is proposed and embedded into a transformer-based network, which captures semantic features and global context information to obtain an accurate evaluation of fibrous cap integrity. The experimental results demonstrate that the proposed method has high accuracy and generalizability, with an accuracy of 92.35% and an AUC of 0.935, which outperforms state-of-the-art deep-learning-based methods. A series of comprehensive ablation studies suggest the effectiveness of each proposed component and show great potential in clinical application. abstract_id: PUBMED:19218797 Fissured fibrous cap of vulnerable carotid plaques and symptomaticity: are they correlated? Preliminary results by using multi-detector-row CT angiography. Purpose: Carotid artery plaque with a disrupted fibrous cap is characterized by a higher tendency to rupture, resulting in a higher rate of transient ischemic attack and stroke. The purpose of our study was to evaluate whether there is a statistically significant correlation between the presence of fissured fibrous cap (FFC) (assessed by using multi-detector-row CT angiography (MDCTA)) and ipsilateral symptomaticity.
Material And Methods: 147 patients (105 males, 42 females; mean age 63 years, range 37-84) with a stenosis of at least 50% or a plaque alteration at sonography were retrospectively studied by using a multi-detector-row CT (MDCT) scanner, yielding a total of 294 carotid arteries. Detection of FFC was performed, and the findings were statistically correlated with previously registered data on patients' symptomaticity. Each examination was assessed independently by two readers and interobserver agreement was calculated. Results: Among the 147 patients included in the study group, 15 were excluded because of inadequate image quality. In the 132 remaining patients, for a total of 264 carotids assessed, 30 FFCs were detected by using MDCTA and overall there were 36 symptomatic patients (12 ipsilateral symptomatic patients with FFC). A statistical correlation between the presence of FFC and symptomaticity (p = 0.0032) was found. The kappa value between readers was 0.781. Conclusions: MDCT may depict FFC and the results of our study suggest that FFC may be used as an indicator for prediction of potential cerebrovascular pathology. The interobserver agreement obtained was good. abstract_id: PUBMED:15309350 MRI-derived measurements of fibrous-cap and lipid-core thickness: the potential for identifying vulnerable carotid plaques in vivo. Vulnerable plaques have thin fibrous caps overlying large necrotic lipid cores. Recent studies have shown that high-resolution MR imaging can identify these components. We set out to determine whether in vivo high-resolution MRI could quantify this aspect of the vulnerable plaque. Forty consecutive patients scheduled for carotid endarterectomy underwent pre-operative in vivo multi-sequence MR imaging of the carotid artery. Individual plaque constituents were characterised on MR images. Fibrous-cap and lipid-core thickness was measured on MRI and histology images. Bland-Altman plots were generated to determine the level of agreement between the two methods. Multi-sequence MRI identified 133 corresponding MR and histology slices. Plaque calcification or haemorrhage was seen in 47 of these slices. MR- and histology-derived fibrous-cap/lipid-core thickness ratios showed strong agreement, with a mean difference between MR and histology ratios of 0.02 (±0.04). The intra-class correlation coefficient between two readers for measurements was 0.87 (95% confidence interval, 0.73 and 0.93). Multi-sequence, high-resolution MR imaging accurately quantified the relative thickness of fibrous-cap and lipid-core components of carotid atheromatous plaques. This may prove to be a useful tool to characterise vulnerable plaques in vivo. abstract_id: PUBMED:32601933 Evaluation of Carotid Plaque Rupture and Neovascularization by Contrast-Enhanced Ultrasound Imaging: an Exploratory Study Based on Histopathology. A significant portion of ischemic strokes is attributable to emboli caused by fibrous cap rupture of vulnerable plaque, with intraplaque neovascularization as a significant triggering factor for plaque vulnerability. Contrast-enhanced ultrasound (CEUS) could offer detailed information on the plaque surface and intraplaque microvasculature. This study aims to comprehensively assess the value of CEUS for the detection of plaque rupture and neovascularization in histologically verified plaques that had been removed from the patients who had undergone carotid endarterectomy (CEA).
Fifty-one consecutive subjects (mean age, 67.0 ± 6.5 years; 43 [84.3%] men) scheduled for CEA were recruited. Standard ultrasound and CEUS were performed prior to surgery. Based on the direction of the contrast agents that diffuse within the plaques, plaques were divided into "inside-out" direction (contrast agents diffuse from the artery lumen towards the inside of the plaque) and non-inside-out direction. Plaque enhancement was assessed by using a semi-quantitative grading scale (grade 1: no enhancement; grade 2: moderate enhancement; grade 3: extensive enhancement). Plaques were evaluated for histopathologic characteristics according to the Oxford Plaque Study (OPS) standard postoperatively. Intraplaque neovascularization, as manifested by the appearance of CD34-positive microvessels, was characterized in terms of microvessel density (MVD), microvessel area (MVA), and microvessel shape (MVS). In 51 plaques, the sensitivity, specificity, positive, and negative predictive values of contrast agent inside-out direction diffusion for the detection of plaque fibrous cap rupture were 87.5%, 92.6%, 91.3%, and 89.3%, respectively. The incidence of cap rupture was significantly higher in contrast agent inside-out direction diffusion than non-inside-out direction diffusion (73.9% vs 25.0%, p < 0.001), and inside-out direction diffusion exhibited a higher frequency of vulnerable plaques (OPS grades 3-4) (95.7% vs 53.6%, p = 0.001). Multivariate logistic regression analysis revealed the contrast agent inside-out direction diffusion as an independent correlate to plaque rupture (OR 8.5, 95% CI 2.4-30.1, p = 0.001). With increasing plaque enhancement, plaque MVD (p < 0.001), plaque MVA (p = 0.012), and the percentage of highly irregular-shaped microvessels increased (p < 0.001). Contrast agent inside-out direction diffusion could indicate plaque rupture. The increase in plaque enhancement paralleled more numerous, larger, and more irregular-shaped microvessels, which may suggest an increased risk of plaque vulnerability. abstract_id: PUBMED:27168846 Assessment of vulnerable and unstable carotid atherosclerotic plaques on endarterectomy specimens. The types of lesion instability responsible for the majority of acute coronary events frequently include plaque disruption and plaque erosion with superimposed thrombosis. The term 'vulnerable plaque' is used to describe atherosclerotic (ATS) plaques that are particularly prone to rupture and susceptible to thrombus formation, such as the thin-cap fibroatheroma (TCFA). The aim of the present study was to assess the morphological and histological differences between plaques that are unstable and those that are vulnerable to instability. Carotid artery endarterectomy specimens were obtained from 26 patients with carotid artery stenosis, consisting of 20 men and 6 women (age range, 35-80 years). Histological and morphometric methods were used to visualize and characterize the ATS plaques. Among the 26 carotid ATS plaques, 23% were stable, 23% were unstable and 54% were vulnerable. With regard to morphometric characteristics, the following mean values were obtained for the TCFA and unstable plaques, respectively: fibrous cap thickness, 21.91 and 11.66 µm; proportion of necrotic core area in the total plaque area, 25.90 and 22.03%; and the proportion of inflammatory area in the total plaque area, 8.41 and 3.04%. No plaque calcification was observed in any of them.
Since ATS coronary artery disease is considerably widespread and fatal, it is crucial to further study ATS lesions to obtain an improved understanding of the nature of vulnerable and unstable plaques. The methods used to detect plaque size, necrotic core area and fibrous cap thickness are considered to be particularly useful for identifying vulnerable and unstable plaques. abstract_id: PUBMED:18786671 Effects of varied lipid core volume and fibrous cap thickness on stress distribution in carotid arterial plaques. The rupture of atherosclerotic plaques is known to be associated with the stresses that act on or within the arterial wall. The extreme wall tensile stress is usually recognized as a primary trigger for the rupture of the plaque. The present study used one-way fluid-structure interaction simulation to investigate the impact of fibrous cap thickness and lipid core volume on the wall tensile stress values and distributions on the fibrous cap. Von Mises stress was employed to represent the wall tensile stress (VWTS). A total of 13 carotid bifurcation cases were derived from a base geometry, with varied combinations of fibrous cap thickness and lipid core volume in the plaque. Values of maximum VWTS and a stress value of VWTS_90, which represents the 90% cut-off value in the cumulative histogram of VWTS at the computational nodes on the luminal surface of the fibrous cap, were used to assess the risk of plaque rupture for each case. Both parameters are capable of separating the simulation cases into vulnerable and more stable plaque groups, while VWTS_90 is more robust for plaque rupture risk assessment. The results show that the stress level on the fibrous cap is much more sensitive to changes in the fibrous cap thickness than to the lipid core volume. A slight decrease of cap thickness can cause a significant increase of stress. For all simulation cases, high VWTS appears at the fibrous cap near the lipid core (plaque shoulder) regions. abstract_id: PUBMED:18194456 Matrix vesicles in the fibrous cap of atherosclerotic plaque: possible contribution to plaque rupture. Plaque rupture is the most common type of plaque complication and leads to acute ischaemic events such as myocardial infarction and stroke. Calcification has been suggested as a possible indicator of plaque instability. Although the role of matrix vesicles in the initial stages of arterial calcification has been recognized, no studies have yet been carried out to examine a possible role of matrix vesicles in plaque destabilization. Tissue specimens selected for the present study represented carotid specimens obtained from patients undergoing carotid endarterectomy. Serial frozen cross-sections of the tissue specimens were cut and mounted on glass slides. The thickness of the fibrous cap (FCT) in each advanced atherosclerotic lesion, containing a well developed lipid/necrotic core, was measured at its narrowest sites in sets of serial sections. According to established criteria, atherosclerotic plaque specimens were histologically subdivided into two groups: vulnerable plaques with thin fibrous caps (FCT <100 µm) and presumably stable plaques, in which fibrous caps were thicker than 100 µm. Twenty-four carotid plaques (12 vulnerable and 12 presumably stable plaques) were collected for the present analysis of matrix vesicles in fibrous caps. In order to provide a sufficient number of representative areas from each plaque, laser capture microdissection (LCM) was carried out.
The quantification of matrix vesicles in ultrathin sections of vulnerable and stable plaques revealed that the numbers of matrix vesicles were significantly higher in fibrous caps of vulnerable plaques than those in stable plaques (8.908 ± 0.544 versus 6.208 ± 0.467 matrix vesicles per 1.92 µm2 standard area; P = 0.0002). Electron microscopy combined with X-ray elemental microanalysis showed that some matrix vesicles in atherosclerotic plaques were undergoing calcification and were characterized by a high content of calcium and phosphorus. The percentage of calcified matrix vesicles/microcalcifications was significantly higher in fibrous caps in vulnerable plaques compared with that in stable plaques (6.705 ± 0.436 versus 5.322 ± 0.494; P = 0.0474). The findings reinforce the view that the texture of the extracellular matrix in the thinning fibrous cap of atherosclerotic plaque is altered and this might contribute to plaque destabilization. abstract_id: PUBMED:16574926 How critical is fibrous cap thickness to carotid plaque stability? A flow-plaque interaction model. Background And Purpose: Acute cerebral ischemic events are associated with rupture of vulnerable carotid atheroma and subsequent thrombosis. Factors such as luminal stenosis and fibrous cap thickness have been thought to be important risk factors for plaque rupture. We used a flow-structure interaction model to simulate the interaction between blood flow and atheromatous plaque to evaluate the effect of the degree of luminal stenosis and fibrous cap thickness on plaque vulnerability. Methods: A coupled nonlinear time-dependent model with a flow-plaque interaction simulation was used to perform flow and stress/strain analysis in a stenotic carotid artery model. The stress distribution within the plaque and the flow conditions within the vessel were calculated for every case when varying the fibrous cap thickness from 0.1 to 2 mm and the degree of luminal stenosis from 10% to 95%. A rupture stress of 300 kPa was chosen to indicate a high risk of plaque rupture. A 1-sample t test was used to compare plaque stresses with the rupture stress. Results: High stress concentrations were found in the plaques in arteries with >70% degree of stenosis. Plaque stresses in arteries with 30% to 70% stenosis increased exponentially as fibrous cap thickness decreased. A decrease of fibrous cap thickness from 0.4 to 0.2 mm resulted in an increase of plaque stress from 141 to 409 kPa in a 40% degree stenotic artery. Conclusions: There is an increase in plaque stress in arteries with a thin fibrous cap. The presence of a moderate carotid stenosis (30% to 70%) with a thin fibrous cap indicates a high risk for plaque rupture. Patients in the future may be risk stratified by measuring both fibrous cap thickness and luminal stenosis. Answer: Yes, there is a correlation between fissured fibrous cap of vulnerable carotid plaques and symptomaticity. A study that evaluated the presence of fissured fibrous cap (FFC) using multi-detector-row CT angiography (MDCTA) found a statistically significant correlation between the presence of FFC and symptomaticity (p = 0.0032). The study included 147 patients with at least 50% stenosis or plaque alteration at sonography, and among the 132 remaining patients after excluding those with inadequate image quality, 30 FFCs were detected. There were 36 symptomatic patients, with 12 ipsilateral symptomatic patients having FFC.
The results suggest that FFC may be used as an indicator for predicting potential cerebrovascular pathology, and interobserver agreement for its detection was good (kappa = 0.781) (PUBMED:19218797).
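Editor's note: the sensitivity, specificity, and predictive values quoted throughout this entry all come from a 2x2 table of imaging findings versus histologic ground truth. The short Python sketch below shows that calculation. The counts are not reported in the abstracts; they were back-solved from the percentages in the CEUS study (51 plaques; PUBMED:32601933), so read them as an illustrative consistency check rather than published data.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # ruptured caps correctly detected
        "specificity": tn / (tn + fp),  # intact caps correctly ruled out
        "PPV": tp / (tp + fp),          # positive predictive value
        "NPV": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts consistent with the reported 87.5% / 92.6% / 91.3% / 89.3%:
for name, value in diagnostic_metrics(tp=21, fp=2, fn=3, tn=25).items():
    print(f"{name}: {value:.1%}")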
Instruction: Do patients in Dutch nursing homes have more pressure ulcers than patients in German nursing homes? Abstracts: abstract_id: PUBMED:23628407 Do patients in Dutch nursing homes have more pressure ulcers than patients in German nursing homes? A prospective multicenter cohort study. Objectives: To investigate whether the incidence of pressure ulcers in nursing homes in the Netherlands and Germany differs and, if so, to identify resident-related risk factors, nursing-related interventions, and structural factors associated with pressure ulcer development in nursing home residents. Design: A prospective multicenter cohort study. Setting: Ten nursing homes in the Netherlands and 11 nursing homes in Germany (around Berlin and Brandenburg). Participants: A total of 547 newly admitted nursing home residents, of whom 240 were Dutch and 307 were German. Residents had an expected length of stay of 12 weeks or longer. Measurements: Data were collected for each resident over a 12-week period and included resident characteristics (e.g., demographics, medical history, Braden scale scores, nutritional factors), pressure ulcer prevention and treatment characteristics, staffing ratios and other structural nursing home characteristics, and outcome (pressure ulcer development during the study). Data were obtained by trained research assistants. Results: A significantly higher pressure ulcer incidence rate was found for the Dutch nursing homes (33.3%) compared with the German nursing homes (14.3%). Six factors that explain the difference in pressure ulcer incidence rates were identified: dementia, analgesics use, the use of transfer aids, repositioning the residents, the availability of a tissue viability nurse on the ward, and regular internal quality controls in the nursing home. Conclusion: The pressure ulcer incidence was significantly higher in Dutch nursing homes than in German nursing homes. Factors related to residents, nursing care and structure explain this difference in incidence rates. Continuous attention to pressure ulcer care is important for all health care settings and countries, but Dutch nursing homes especially should pay more attention to repositioning residents, the necessity and correct use of transfer aids, the necessity of analgesics use, the tasks of the tissue viability nurse, and the performance of regular internal quality controls. abstract_id: PUBMED:23844636 Knowledge and use of pressure ulcer preventive measures in nursing homes: a comparison of Dutch and German nursing staff. Aims And Objectives: To examine the knowledge and use of pressure ulcer preventive measures among nursing staff in Dutch and German nursing homes. Background: Studies in the Netherlands and Germany have shown a large discrepancy in pressure ulcer prevalence rates among nursing homes in both countries and concluded that some of this variance could be explained by differences in pressure ulcer prevention. Design: A cross-sectional questionnaire survey nested in a prospective multicenter cohort study. Methods: A questionnaire was distributed to nursing staff employed in 10 Dutch nursing homes (n = 600) and 11 German nursing homes (n = 578). Data were collected in January 2009. Results: The response rate was 75.7% in the Netherlands (n = 454) and 48.4% in Germany (n = 283). Knowledge about useful pressure ulcer preventive measures was moderate in both countries, while nonuseful preventive measures were poorly known.
On average, only 19.2% (the Netherlands) and 24.6% (Germany) of preventive measures were judged correctly as nonuseful. The same pattern could be seen with regard to the use of preventive measures, because nonuseful preventive measures were still commonly used according to the respondents. Conclusions: The results indicate that the respondents' knowledge and use of pressure ulcer preventive measures could be improved in both countries, especially for nonuseful measures. Changes and improvements can be achieved by providing sufficient education and refresher courses for nurses and nursing assistants employed within Dutch and German nursing homes. Relevance To Clinical Practice: Recurring education about pressure ulcer prevention is required among nursing staff employed in Dutch and German nursing homes, particularly in relation to the use of ineffective and outdated preventive measures. Obstacles regarding the implementation of preventive measures should be addressed to achieve a change in practice. abstract_id: PUBMED:20586840 Evaluation of the dissemination and implementation of pressure ulcer guidelines in Dutch nursing homes. Rationale, Aims And Objectives: Annual national prevalence surveys have been conducted in the Netherlands over the past 10 years and have revealed high prevalence rates in Dutch nursing homes. Pressure ulcer guideline implementation is one of the factors that can influence prevalence rates. Previous research has shown that these guidelines are often only partly implemented in Dutch nursing homes. Reasons for this lack of pressure ulcer guideline implementation are not known. Therefore, the aim of this study is to investigate the current situation regarding pressure ulcer guideline dissemination and implementation in Dutch nursing homes. Methods: Semi-structured interviews were conducted in eight nursing homes in the Netherlands from January to December 2008. In each nursing home, interviews were held with eight persons. Results: The implementation of pressure ulcer guidelines was lacking in some of the nursing homes. Risk assessment scales were often not used in practice, repositioning schemes were not always available and, when they were, they were often not used in practice. Knowledge about guideline recommendations was also lacking and pressure ulcer education was inadequate. Barriers to applying guideline recommendations in practice were mostly related to personnel and communication. Conclusions: The implementation of pressure ulcer guidelines does not seem to be successful in all nursing homes and needs more attention. Barriers mentioned by the interviewees in applying guideline recommendations need to be addressed. Providing adequate education for nursing home staff and increasing attention for pressure ulcer care can be the first steps in improving the implementation of pressure ulcer guidelines. abstract_id: PUBMED:21505937 Pressure ulcers in German nursing homes: frequencies, grades, and origins. Background: The occurrence of pressure ulcers in long-term care facilities is regarded as a nursing-sensitive indicator of care. The aim of this study was to measure the frequency, categories, and points of origin of pressure ulcers in German nursing homes. Methods And Sample: In spring 2010, a nationwide prevalence study was conducted in 52 nursing homes (n=3610 residents). According to a standardized study protocol, trained nurses collected data about pressure ulcer risk and pressure ulcers. Results: The prevalence of pressure ulcers was 3.9% (95% CI 3.3-4.6).
Excluding skin redness, the proportion of pressure ulcers of nursing home origin was 1.2% (95% CI 0.9-1.6). Risk-adjusted (adjusted for immobility) results showed no statistically significant differences between institutions. Conclusion: Compared to international figures, the prevalence of pressure ulcers in German nursing homes is very low. abstract_id: PUBMED:19634526 Pressure ulcer risk and pressure ulcer prevalence in German hospitals and nursing homes. In the spring of 2008, the "Institut für Medizin-/Pflegepädagogik und Pflegewissenschaft der Charité - Universitätsmedizin Berlin" conducted a nationwide prevalence study for the seventh time. Among other things, data were collected concerning pressure ulcer risk and pressure ulcer prevalence in German nursing homes and hospitals. 3345 residents from 37 nursing homes and 3391 patients from 19 hospitals were included in this study. Altogether, 3192 of them were at risk for pressure ulcers. In nursing homes, the percentage of persons at risk for pressure ulcer development was 62.5 percent, in hospitals 39.4 percent. 297 persons from the at-risk group had at least one pressure ulcer. Pressure ulcer prevalence was 12.7 percent in hospitals and 7.3 percent in nursing homes. Concerning pressure ulcer risk and pressure ulcer prevalence, there were considerable differences between individual hospitals and departments. Even with comparable risk groups, differences in pressure ulcer prevalence were found. abstract_id: PUBMED:26059629 Current Dermatologic Care in Dutch Nursing Homes and Possible Improvements: A Nationwide Survey. Objectives: To assess the provision and need of dermatologic care among Dutch nursing home patients and to obtain recommendations for improvement. Design: Cross-sectional nationwide survey. Setting: All 173 nursing home organizations in the Netherlands. Participants: Physicians working in nursing homes. Measurements: Web-based questionnaire concerning the burden of skin diseases in nursing home patients, diagnostic procedures and therapy, collaboration with dermatologists, physicians' level of education, and suggestions for improvement. Results: A total of 126 (72.8%) nursing home organizations, with 1133 associated physicians, participated in our study and received the questionnaire. A total of 347 physicians (30.6%) completed the questionnaire. Almost all respondents (99.4%) were recently confronted with skin diseases, mostly (pressure) ulcers, eczema, and fungal infections. Diagnostic and treatment options were limited because of a lack of availability and experience of the physicians. More live consultation of dermatologists was suggested as being important to improve dermatologic care. Other suggestions were better education, more usage of telemedicine applications, and better availability of diagnostic and/or treatment procedures like cryotherapy. Conclusion: Physicians in nursing homes are frequently confronted with skin diseases. Several changes in organization of care and education are expected to improve dermatologic care in nursing home patients. abstract_id: PUBMED:33648440 Physiotherapy in nursing homes. A qualitative study of physiotherapists' views and experiences. Background: There are distinct differences in the implementation of physiotherapeutic care in nursing homes. Both nationally and internationally, staffing levels of physiotherapy differ significantly between and within nursing homes.
Since legislation or guidelines that specify the parameters of physiotherapy required in nursing homes are lacking, it is unknown how physiotherapists currently estimate the usefulness and necessity of physiotherapy in individual situations in long-term care. The purpose of this study was to describe how physiotherapists actually work, and how they want to work, in daily practice in Dutch nursing homes. Methods: We performed a qualitative study with an online questionnaire. We asked 72 physiotherapists working in Dutch nursing homes to describe as accurately as possible usual care in nine different cases in long-term care. Furthermore, we asked them to describe their role in the prevention and treatment of a number of indicators that measure the quality of care in nursing homes. Two reviewers thematically analysed the answers to the questionnaires. Results: Forty-six physiotherapists returned the questionnaire. Physiotherapy services include active exercise therapy aimed at improving mobility and movement dysfunctions; advising on the prevention and management of falls, pressure ulcers, incontinence, malnutrition and sarcopenia, overweight, physical restraints, intertrigo, chronic wounds, behavioural and psychological symptoms in dementia, and physical inactivity; and ergonomic and behavioural training. The way and extent in which physiotherapists are involved in the various care and functional problems differ and depend on organisational and personal factors such as the organisation's policy, type of ward, time pressure, staffing level, collaboration with other members of the multidisciplinary team, or lack of knowledge. Conclusion: Physiotherapists in nursing homes are involved in the prevention and management of different care situations and functional problems. The way in which they are involved differs between physiotherapists. Aiming for more uniformity seems necessary. A shared vision can help physiotherapists to work more consistently and will strengthen their position in nursing homes. abstract_id: PUBMED:11548463 The outcomes of a restraint reduction program in nursing homes. One of the problems in nursing home care in Taiwan is resident restraint, including physical and chemical restraints. This pre-experimental study was conducted to investigate whether a restraint reduction program could reduce the prevalence of restraint in nursing homes. Three registered nursing homes were randomly selected from nursing homes in the Kaohsiung area. Staff and residents of these nursing homes were educated in restraint alternatives, balance training and managing behavior problems in one month of interventions. In the three days before and after the intervention, the prevalence of restraint use, falls, and pressure sores, balance reactions, frequency of agitation, use of psychotropic drugs, and the restraint knowledge of the nursing staff were measured. After the restraint reduction program, the prevalence of restraint and the frequency of resident agitation decreased significantly. The prevalence of falls and pressure sores of residents was not changed significantly. The restraint knowledge of the nursing staff significantly increased after the restraint reduction program. The information from this study led to a better strategy to reduce restraint for the elderly in nursing homes. The results could also provide a model to improve the quality of care in nursing homes in Taiwan.
abstract_id: PUBMED:19551618 Pressure ulcer prevalence in German nursing homes and hospitals: what role does the National Nursing Expert Standard Prevention of Pressure Ulcer play? Aim Of The Study: The aim of this study was to investigate the relationship between the use of the National Nursing Expert Standard Pressure Ulcer Prevention and the pressure ulcer prevalence in German nursing homes and hospitals. Methods: Data were collected within two nationwide surveys conducted by the Department of Nursing Science of the Charité, Berlin, Germany. The surveys, designed as cross-sectional prevalence studies, serve as an investigation of the frequency of clinically relevant nursing phenomena, i.e., pressure ulcers. Prevalence per facility in the at-risk group was explored by a ranking procedure of the 95 nursing homes and hospitals. The facilities were divided into two groups according to whether they used the German Expert Standard to develop the local protocol or not. Results: The pressure ulcer prevalence of the at-risk group ranged from 0% to 24.6% in nursing homes and from 7% to 40% in hospitals. In about 40% of the hospitals and nursing homes, the local protocol of pressure ulcer prevention was based on the German Expert Standard. The ranking figure indicates that there is no statistically significant relation between Expert Standard-based local protocols and the pressure ulcer prevalence in the at-risk group. Conclusion: A clear advantage of using the German Expert Standard compared with other sources cannot be shown with these data. However, uniform pressure ulcer prevention is an essential quality feature of nursing care. The degree of implementation and the consequent transfer of the recommendations to daily practice should be evaluated regularly. abstract_id: PUBMED:12110077 Profiling nursing homes using Bayesian hierarchical modeling. Objectives: New methods developed to improve the statistical basis of provider profiling may be particularly applicable to nursing homes. We examine the use of Bayesian hierarchical modeling in profiling nursing homes on their rate of pressure ulcer development. Design: Observational study using Minimum Data Set data from 1997 and 1998. Setting: A for-profit nursing home chain. Participants: Residents of 108 nursing homes who were without a pressure ulcer on an index assessment. Measurements: Nursing homes were compared on their performance on risk-adjusted rates of pressure ulcer development calculated using standard statistical techniques and Bayesian hierarchical modeling. Results: Bayesian estimates of nursing home performance differed considerably from rates calculated using standard statistical techniques. The range of risk-adjusted rates among nursing homes was 0% to 14.3% using standard methods and 1.0% to 4.8% using Bayesian analysis. Fifteen nursing homes were designated as outliers based on their z scores, and two were outliers using Bayesian modeling. Only one nursing home had greater than a 50% probability of having a true rate of ulcer development exceeding 4%. Conclusions: Bayesian hierarchical modeling can be successfully applied to the problem of profiling nursing homes. Results obtained from Bayesian modeling are different from those obtained using standard statistical techniques. The continued evaluation and application of this new methodology in nursing homes may ensure that consumers and providers have the most accurate information regarding performance.
Answer: Yes, patients in Dutch nursing homes have a significantly higher incidence of pressure ulcers compared to patients in German nursing homes. A prospective multicenter cohort study of 547 newly admitted residents found that the pressure ulcer incidence rate was 33.3% for Dutch nursing homes, whereas it was 14.3% for German nursing homes. Six factors related to residents, nursing care and structure explained the difference: dementia, analgesics use, the use of transfer aids, repositioning of residents, the availability of a tissue viability nurse on the ward, and regular internal quality controls in the nursing home (PUBMED:23628407).
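Editor's note: as a quick plausibility check on the headline comparison above, the hedged Python sketch below runs a two-proportion chi-square test on case counts approximated from the reported rates (33.3% of 240 Dutch residents is about 80 cases; 14.3% of 307 German residents is about 44 cases). The abstract does not give raw counts, so these are editorial estimates, not study data.

from scipy.stats import chi2_contingency

# Approximate counts implied by the incidence rates reported in PUBMED:23628407.
dutch_cases, dutch_n = 80, 240     # ~33.3% of 240 Dutch residents
german_cases, german_n = 44, 307   # ~14.3% of 307 German residents

table = [
    [dutch_cases, dutch_n - dutch_cases],
    [german_cases, german_n - german_cases],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.1e}")  # p is far below 0.05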
Instruction: Creation of a reference dataset of neck sizes in children: standardizing a potential new tool for prediction of obesity-associated diseases? Abstracts: abstract_id: PUBMED:24952386 Creation of a reference dataset of neck sizes in children: standardizing a potential new tool for prediction of obesity-associated diseases? Background: Neck circumference (NC) is an emerging marker of obesity and associated disease risk, but is challenging to use as a screening tool in children, as age- and sex-standardized cutoffs have not been determined. A population-based sample of NC in Canadian children was collected, and age- and sex-specific reference curves for NC were developed. Methods: NC, waist circumference (WC), weight and height were measured on participants aged 6-17 years in cycle 2 of the Canadian Health Measures Survey. Quantile regression of NC versus age in males and females was used to obtain NC percentiles. Linear regression was used to examine the association between NC, body mass index (BMI) and WC. NC was compared in healthy weight (BMI < 85th percentile) and overweight/obese (BMI > 85th percentile) subjects. Results: The sample included 936 females and 977 males. For all age and sex groups, NC was larger in overweight/obese children (p < 0.0001). For each additional unit of BMI, average NC in males was 0.49 cm higher and in females, 0.43 cm higher. For each additional cm of WC, average NC in males was 0.18 cm higher and in females, 0.17 cm higher. Conclusion: This study presents the first reference data on Canadian children's NC. The reference curves may have future clinical applicability in identifying children at risk of central obesity-associated conditions and thresholds associated with disease risk. abstract_id: PUBMED:36674360 Phase Angle as a Potential Screening Tool in Adults with Metabolic Diseases in Clinical Practice: A Systematic Review. Background: Phase angle (PhA) has been used as a mortality prognostic marker, but there are no studies about its possible use as a screening tool. Therefore, an assessment of the possible utility of PhA in clinical practice is required. The aim of this systematic review was to explore all recent available evidence on PhA and its possible utility as a screening tool in clinical practice in subjects with chronic metabolic diseases. Materials And Methods: This systematic review was performed and written as stated in the PRISMA 2020 guidelines. The search was conducted in PubMed, ScienceDirect and SciElo. In order to be considered eligible, within the entire search, only articles involving PhA and its utility in metabolic diseases were included. Results: PhA was associated with hyperuricemia and vitamin D deficiency in obese subjects, and with decreased cardiovascular risk and malnutrition in hospitalized patients. Conclusion: PhA may be a potential screening tool in clinical practice to evaluate different biomarkers, cardiovascular risk, and nutritional diagnosis in metabolic diseases in adults. abstract_id: PUBMED:28076630 Cutoffs and cardiovascular risk factors associated with neck circumference among community-dwelling elderly adults: a cross-sectional study. Context And Objective: In elderly people, measurement of several anthropometric parameters may present complications. Although neck circumference measurements seem to avoid these issues, the cutoffs and cardiovascular risk factors associated with this parameter among elderly people remain unknown.
This study was developed to identify the cutoff values and cardiovascular risk factors associated with neck circumference measurements among elderly people. Design And Setting: Cross-sectional study conducted in two community centers for elderly people. Methods: 435 elderly adults (371 women and 64 men) were recruited. These volunteers underwent morphological evaluations (body mass index and waist, hip, and neck circumferences) and hemodynamic evaluations (blood pressure values and heart rate). Receiver operating characteristic curve analyses were used to determine the predictive validity of cutoff values for neck circumference for identifying overweight/obesity. Multivariate analysis was used to identify cardiovascular risk factors associated with large neck circumference. Results: Cutoff values for neck circumference (men = 40.5 cm and women = 35.7 cm), for detection of obese older adults according to body mass index, were identified. After a second analysis, large neck circumference was shown to be associated with elevated body mass index in men, and with elevated body mass index, blood pressure values, and prevalence of type 2 diabetes and hypertension in women. Conclusion: The data indicate that neck circumference can be used as a screening tool to identify overweight/obesity in older people. Moreover, large neck circumference values may be associated with cardiovascular risk factors. abstract_id: PUBMED:34431474 Developing neck circumference growth reference charts for Pakistani children and adolescents using the lambda-mu-sigma and quantile regression method. Objective: Neck circumference (NC) is currently used as an emerging marker of obesity and its associated risks. But its use in clinical evaluations and for other epidemiological purposes requires sex- and age-specific standardised cut-offs, which are still scarce for the Pakistani paediatric population. We therefore developed sex- and age-specific growth reference charts for NC for Pakistani children and adolescents aged 2-18 years. Design: Cross-sectional multi-ethnic anthropometric survey (MEAS) study. Setting: Multan, Lahore, Rawalpindi and Islamabad. Participants: The dataset of 10 668 healthy Pakistani children and adolescents aged 2-18 years collected in MEAS was used. Age, sex and NC were taken as study variables. The lambda-mu-sigma (LMS) and quantile regression (QR) methods were applied to develop growth reference charts for NC. Results: The 5th, 10th, 25th, 50th, 75th, 90th and 95th smoothed percentile values of NC were presented. The centile values showed that neck size increased with age in both boys and girls. Between 8 and 14 years of age, girls were found to have larger NC than boys. A comparison of NC median (50th) percentile values with references from Iranian and Turkish populations reveals substantially lower NC percentiles in Pakistani children and adolescents compared to their peers in the reference population. Conclusion: The comparative results suggest that the use of NC references from developed countries is inadequate for Pakistani children. The small variability between empirical centiles and centiles obtained by the QR procedure suggests that growth charts can be constructed by QR as an alternative method. abstract_id: PUBMED:29037078 The accuracy of neck circumference for assessing overweight and obesity: a systematic review and meta-analysis. Context: Neck circumference (NC) has been suggested as an alternative measure to screen for excess body weight.
Objective: The aim of this study was to demonstrate the accuracy of neck circumference (NC) as a measure for assessing overweight and obesity in both sexes in different age groups. Methods: Detailed individual search strategies were developed for each of the following bibliographic databases: Cochrane, LILACS, PubMed/MEDLINE, Science Direct, Scopus and Web of Science. The QUADAS-2 checklist was used to assess the methodology of the studies included. Results: Thirty-eight assessments were performed in 11 articles according to age, sex and weight status. Using sensitivity and specificity, 27 assessments (71.0%) considered NC an accurate measure to diagnose overweight and obesity. The best sensitivity and specificity were found for the age >19 years (82.0%, 82.0%), female (80.0%, 73.0%), and obese (80.0%, 85.0%) categories. Conclusion: NC is an accurate tool for assessing overweight and obesity in males and females of different age groups and could be used to screen for excess body weight in routine medical practice or epidemiological studies. It is also believed that more studies will permit the creation of a reference dataset of NC cut-off values for world populations. abstract_id: PUBMED:20196929 Anthropometry as a tool for measuring malnutrition: impact of the new WHO growth standards and reference. Anthropometry is a useful tool, both for monitoring growth and for nutritional assessment. The publication by WHO of internationally agreed growth standards in 1983 facilitated comparative nutritional assessment and the grading of childhood malnutrition. New growth standards for children under 5 years and growth reference for children aged 5-19 years have recently (2006 and 2007) been published by WHO. Growth of children <5 years was recorded in a multi-centre growth reference study involving children from six countries, selected for optimal child-rearing practices (breastfeeding, non-smoking mothers). They therefore constitute a growth standard. Growth data for older children were drawn from existing US studies, and upward skewing was avoided by excluding overweight subjects. These constitute a reference. More indicators are now included to describe optimal early childhood growth and development, e.g. BMI for age and MUAC for age. The growth reference for older children (2007) focuses on linear growth and BMI; weight-for-age data are age-limited and weight-for-height is omitted. Differences in the 2006 growth pattern from the previous reference for children <5 are attributed to differences in infant feeding. The 2006 'reference infant' is at first heavier and taller than his/her 1983 counterpart, but is then lighter until the age of 5. Being taller in the 2nd year, he/she is less bulky (lighter for height) than the 1983 reference toddler. The spread of values for height and weight for height is narrower in the 2006 dataset, such that the lower limit of the normal range for both indices is set higher than in the previous dataset. This means that a child will be identified as moderately or severely underweight for height (severe acute malnutrition) at a greater weight for height than previously. The implications for services for malnourished children have been recognised and strategies devised. The emphasis on BMI-for-age as the indicator for thinness and obesity in older children is discussed. The complexity of calculations for health cadres without mathematical backgrounds or access to calculation software is also an issue.
It is possible that the required charts and tables may not be accessible to those working in traditional health/nutrition services, which are often poorly equipped. abstract_id: PUBMED:29204973 External Validation of a Tool Predicting 7-Year Risk of Developing Cardiovascular Disease, Type 2 Diabetes or Chronic Kidney Disease. Background: Chronic cardiometabolic diseases, including cardiovascular disease (CVD), type 2 diabetes (T2D) and chronic kidney disease (CKD), share many modifiable risk factors and can be prevented using combined prevention programs. Valid risk prediction tools are needed to accurately identify individuals at risk. Objective: We aimed to validate a previously developed non-invasive risk prediction tool for predicting the combined 7-year risk for chronic cardiometabolic diseases. Design: The previously developed tool is stratified for sex and contains the predictors age, BMI, waist circumference, use of antihypertensives, smoking, family history of myocardial infarction/stroke, and family history of diabetes. This tool was externally validated, evaluating model performance using the area under the receiver operating characteristic curve (AUC), which assesses discrimination, and Hosmer-Lemeshow goodness-of-fit (HL) statistics, which assess calibration. The intercept was recalibrated to improve calibration performance. Participants: The risk prediction tool was validated in 3544 participants from the Australian Diabetes, Obesity and Lifestyle Study (AusDiab). Key Results: Discrimination was acceptable, with an AUC of 0.78 (95% CI 0.75-0.81) in men and 0.78 (95% CI 0.74-0.81) in women. Calibration was poor (HL statistic: p < 0.001), but improved considerably after intercept recalibration. Examination of individual outcomes showed that in men, AUC was highest for CKD (0.85 [95% CI 0.78-0.91]) and lowest for T2D (0.69 [95% CI 0.65-0.74]). In women, AUC was highest for CVD (0.88 [95% CI 0.83-0.94]) and lowest for T2D (0.71 [95% CI 0.66-0.75]). Conclusions: Validation of our previously developed tool showed robust discriminative performance across populations. Model recalibration is recommended to account for different disease rates. Our risk prediction tool can be useful in large-scale prevention programs for identifying those in need of further risk profiling because of their increased risk for chronic cardiometabolic diseases. abstract_id: PUBMED:34069920 Percentile Reference Values for the Neck Circumference of Mexican Children. Neck circumference was studied for the first time in a pediatric population in 2010. Since then, various countries have proposed cutoff values to identify overweight, obesity, and metabolic syndrome. However, no reference values have been established for the Mexican child population. The aim of this study is to provide percentile reference values for the neck circumference of Mexican schoolchildren. Only normal-weight schoolchildren aged 6-11 years were included. Percentiles and growth charts were constructed based on the "Generalized Additive Model for Location, Scale and Shape" (GAMLSS). A total of 1059 schoolchildren (52.9% female) was evaluated. Weight, height, and BMI values were higher for males; however, this difference was not statistically significant. The 50th percentile for females was 24.6 cm at six years old and 28.25 cm at 11 years old, and for males, it was 25.75 cm and 28.76 cm, respectively. Both males and females displayed a pronounced increase in neck circumference between 10 and 11 years of age.
The greatest variability was found in the 11-year-old group, with an increase of 5.5 cm for males and 5.4 cm for females. This study presents the first reference values for neck circumference for a Mexican child population. abstract_id: PUBMED:29501245 Forty years of reference values for respiratory system impedance in adults: 1977-2017. Objective: To provide an evidence-based review of published data regarding normal range reference values and prediction equations for measurements of respiratory impedance using forced oscillation technique (FOT) and impulse oscillometry (IOs) in adults. Methods: A non-language-restricted search was performed using forced oscillation technique and impulse oscillometry as primary terms. Original research studies reporting respiratory system impedance reference values or prediction equations based on cohorts of ≥100 healthy adults were included. Publications cited in identified studies were also considered for inclusion. Results: Of 882 publications identified, 34 studies were included: 14 studies of FOT, 19 studies of IOs, and one study of both techniques. Nineteen studies provided prediction equations. Most reports were from Europe (n = 20) and Asia (n = 12) and included relatively small cohorts (median = 264 subjects). Across publications, there was marked variability in the performance and technique of impedance measurements. Height and sex emerged as major contributors to available prediction equations. The contribution of weight was more pronounced at the obese end of the weight spectrum. The contribution of age was less clear, and the elderly were largely under-represented. Ethnicity likely plays a role, but was under-reported in the currently available literature. Inclusion of current and former smokers in some studies further confounds the results. Conclusions: Currently available literature providing reference values and prediction equations for respiratory impedance measurements in adults is limited. Until larger-scale standardized studies are available, the choice of prediction equations should be based on datasets that best represent the target patient population and modality in use within each pulmonary physiology laboratory. abstract_id: PUBMED:38472968 Dual-Energy CT Iodine Uptake of Head and Neck: Definition of Reference Values in a Big Data Cohort. Background: Despite a considerable amount of literature on dual-energy CT (DECT) iodine uptake of the head and neck, the physiologic iodine uptake of this region has not been defined yet. This study aims to establish reference values for the iodine uptake of healthy organs to facilitate clinical application. Methods: Consecutive venous DECT scans of the head and neck were reviewed, and unremarkable exams were included (n = 617). A total of 35 region-of-interest measurements were performed in 16 anatomical regions. Iodine uptake was compared among different organs/tissues and subgroup analysis was performed (male (n = 403) vs. female (n = 214); young (n = 207) vs. middle-aged (n = 206) vs. old (n = 204); and normal weight (n = 314) vs. overweight (n = 196) vs. obese (n = 107)). Results: Overall mean iodine uptake values ranged between 0.5 and 9.4 mg/mL. Women showed higher iodine concentrations in the cervical vessels and higher uptake for the parotid gland, masseter muscle, submandibular glands, sublingual glands, palatine tonsils, tongue body, thyroid gland, and the sternocleidomastoid muscle than men (p ≤ 0.04).
With increasing age, intravascular iodine concentrations increased as well as iodine uptake for cerebellum and thyroid gland, while values for the tongue and palatine tonsils were lower compared to younger subjects (p ≤ 0.03). Iodine concentrations for parotid glands and sternocleidomastoid muscles decreased with a higher BMI (p ≤ 0.004), while normal-weighted patients showed higher iodine values inside the jugular veins, other cervical glands, and tonsils versus patients with a higher BMI (p ≤ 0.04). Conclusion: Physiologic iodine uptake values of cervical organs and tissues show gender-, age-, and BMI-related differences, which should be considered in the clinical routine of head and neck DECT. Answer: The creation of a reference dataset of neck sizes in children is indeed being explored as a potential new tool for predicting obesity-associated diseases. Neck circumference (NC) has emerged as a marker of obesity and associated disease risk. However, its use as a screening tool in children has been challenging due to the lack of age- and sex-standardized cutoffs. To address this, a population-based sample of NC in Canadian children was collected, and age- and sex-specific reference curves for NC were developed. The study found that NC was larger in overweight/obese children across all age and sex groups, and there was a significant association between NC, body mass index (BMI), and waist circumference (WC) (PUBMED:24952386). Similarly, other studies have developed reference charts for NC in different populations. For instance, growth reference charts for NC were developed for Pakistani children and adolescents, showing that neck size increased with age in both boys and girls (PUBMED:34431474). Percentile reference values for NC were also established for Mexican children, providing a tool for identifying overweight and obesity in this population (PUBMED:34069920). The accuracy of NC as a measure for assessing overweight and obesity has been confirmed through systematic reviews and meta-analyses, indicating that NC is an accurate tool for assessing excess body weight in males and females of different age groups (PUBMED:29037078). Furthermore, NC has been associated with cardiovascular risk factors in community-dwelling elderly adults, suggesting its potential as a screening tool for identifying cardiovascular risk (PUBMED:28076630). Overall, these studies support the potential clinical applicability of NC measurements in identifying children and other age groups at risk of central obesity-associated conditions and thresholds associated with disease risk. However, it is important to note that the reference data and cutoffs may vary across different populations and should be used accordingly.
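Editor's note: to make the reference-curve methodology concrete, here is a minimal Python sketch of quantile regression, the approach used for the Canadian (PUBMED:24952386) and Pakistani (PUBMED:34431474) NC charts. The data are synthetic and the quadratic age term is an editorial assumption, not the published model specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for survey data: NC (cm) rising with age (years) plus noise.
rng = np.random.default_rng(0)
age = rng.uniform(6, 17, 500)
nc = 22.0 + 0.6 * age + rng.normal(0.0, 1.5, 500)
df = pd.DataFrame({"age": age, "nc": nc})

# One quantile-regression fit per reference percentile (NC ~ age + age^2).
at_age_10 = pd.DataFrame({"age": [10.0]})
for q in (0.05, 0.50, 0.95):
    fit = smf.quantreg("nc ~ age + I(age ** 2)", df).fit(q=q)
    print(f"P{int(q * 100):02d} NC at age 10: {fit.predict(at_age_10)[0]:.1f} cm")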
Instruction: Is low-risk hypertension fact or fiction? Abstracts: abstract_id: PUBMED:25620634 Antihypertensive therapy and the J-curve: fact or fiction? Hypertension is a major modifiable risk factor for cardiovascular morbidity and mortality. Despite more than five decades of hypertension treatment, there is still a lack of both evidence and clear consensus to answer a fundamental question: What is the optimal blood pressure target in patients with hypertension? Early epidemiologic studies suggested the notion of 'the lower the blood pressure, the better the outcomes'; however, others have demonstrated a J-curve phenomenon with worse outcomes at both low and very high blood pressures. Although the existence of such a J-curve remains a topic of debate, there is now increasing recognition of target organ heterogeneity wherein the optimal blood pressure depends on the target organ in question. For cardiac protection, the current body of evidence does not support a systolic blood pressure goal of lower than 130-140 mmHg. For cerebrovascular protection, however, lower blood pressure seems to be better, with a sustained reduction in events down to a systolic blood pressure of 110-120 mmHg. The J-curve phenomenon is therefore both fact and fiction based on the target organ in question. abstract_id: PUBMED:32207267 Prevalence and Current Management of Cardiovascular Risk Factors in Korean Adults Based on Fact Sheets. Korea is currently an aged society and is on the cusp of becoming a superaged society in a few years. The health burden of cardiovascular diseases increases with age, and the increasing prevalence of cardiovascular risk factors, such as obesity, hypertension, diabetes mellitus, and dyslipidemia, may be linked to increased population-level cardiovascular risk. In 2018, the prevalence of obesity in Korea was 35.7% (men, 45.4%; women, 26.5%) according to the Obesity Fact Sheet 2019, based on National Health Insurance Corporation medical checkup data. In 2016, the prevalence of diabetes was 14.4% in Koreans older than 30 years according to the Diabetes Fact Sheet published by the Korean Diabetes Association, based on data from the Korean National Health and Nutrition Examination Survey. The prevalence of hypertension in the total population of Korea in 2018 was 28.3% according to the Korean Hypertension Fact Sheet produced by the Korean Society of Hypertension. Lastly, the prevalence of dyslipidemia in 2018 was 40.5% according to the Dyslipidemia Fact Sheet published by the Korean Society of Lipid and Atherosclerosis. In this article, I would like to review the prevalence and current management of cardiovascular risk factors in Korea according to the fact sheets released by various associations. abstract_id: PUBMED:16053996 Is low-risk hypertension fact or fiction? Cardiovascular risk profile in the TROPHY study. Background: The Trial of Preventing Hypertension (TROPHY) Study is designed to establish whether treating high normal blood pressure with a low-dose angiotensin receptor blocker, candesartan cilexetil, for 2 years reduces the rate of progression to hypertension compared with placebo treatment over a 4-year observation period. We are presenting the baseline cardiovascular risk factor profile of the 809 subjects randomized in the TROPHY Study.
The risk factors in this analysis were as follows: cholesterol ≥200 mg/dL; LDL-cholesterol ≥160 mg/dL; HDL-cholesterol ≤40 mg/dL (for men), ≤50 mg/dL (for women); triglycerides ≥150 mg/dL; body mass index ≥25 kg/m² (overweight and obese); fasting insulin ≥20 mU/mL; heart rate ≥80 beats/min; hematocrit ≥43.5% (men) and ≥41.2% (women). Methods: The TROPHY Study is a 4-year randomized, placebo-controlled, multicenter clinical trial of 809 subjects with high normal blood pressure (BP), which is currently in progress. Results: The participants of the TROPHY study (mean age 49±8.1 years) with high normal BP (mean 134±4/85±4 mm Hg) had additional cardiovascular risk factors. Of the group, 96% had at least one, 81% had two or more, and 13% had five or more additional risk factors. Conclusions: Our data from individuals with high normal BP clearly suggest that the risk of cardiovascular disease begins to rise before the diagnosis of hypertension is evident. The overall risk in such subjects reflects both the rising BP and other concurring factors. It appears that truly low-risk hypertension only rarely exists. abstract_id: PUBMED:21424347 Diabetic cardiomyopathy--fact or fiction? Epidemiologic as well as clinical studies confirm the close link between diabetes mellitus and heart failure. Diabetic cardiomyopathy (DCM) is still a poorly understood "entity", however, with several contributing pathogenetic factors which lead in different stages of diabetes to characteristic clinical phenotypes. Hyperglycemia with a shift from glucose metabolism to increased beta-oxidation and consecutive free fatty acid damage (lipotoxicity) to the myocardium, insulin resistance, renin-angiotensin-aldosterone system (RAAS) activation, altered calcium homeostasis, and structural changes from the natural collagen network to a stiffer matrix due to advanced glycation endproduct (AGE) formation, hypertrophy, and fibrosis contribute to the respective clinical phenotypes of DCM. We propose the following classification of cardiomyopathy in diabetic patients: a) Diastolic heart failure with normal ejection fraction (HFNEF) in diabetic patients, often associated with hypertrophy without relevant hypertension; relevant coronary artery disease (CAD), valvular disease, and uncontrolled hypertension are not present. This is referred to as stage 1 DCM. b) Systolic and diastolic heart failure with dilatation and reduced ejection fraction (HFREF) in diabetic patients, excluding relevant CAD, valvular disease, and uncontrolled hypertension, as stage 2 DCM. c) Systolic and/or diastolic heart failure in diabetic patients with small vessel disease (microvascular disease) and/or microbial infection and/or inflammation and/or hypertension but without CAD as stage 3 DCM. d) If heart failure may also be attributed to infarction or ischemia and remodeling in addition to stage 3 DCM, the term should be heart failure in diabetes or stage 4 DCM. These clinical phenotypes of diabetic cardiomyopathy can be separated by biomarkers, non-invasive (echocardiography, cardiac magnetic resonance imaging) and invasive imaging methods (levocardiography, coronary angiography) and further analysed by endomyocardial biopsy for concomitant viral infection. The role of specific diabetic drivers to the clinical phenotypes, to macro- and microangiopathy, as well as accompanying risk factors or confounders, e.g.
hypertension, autoimmune factors, or inflammation with or without viral persistence, needs to be identified in each individual patient separately. Thus hyperglycemia, hyperinsulinemia, and insulin resistance as well as lipotoxicity by free fatty acids (FFAs) are the factors responsible for diabetic cardiomyopathy. In stage 1 and 2 DCM, diabetic cardiomyopathy is clearly a fact. However, precise determination of the degree to which the various underlying pathogenetic processes are responsible for the overall heart failure phenotype remains a fiction. abstract_id: PUBMED:35809553 Psychometric evaluation of the Persian version of the Heart Disease Fact Questionnaire (HDFQ) in people with diabetes in Iran. Background And Aims: Public health and clinic-based educational strategies are desperately needed to stem the tide of death from heart disease among people with diabetes in low and middle-income countries. This study translated the Heart Disease Fact Questionnaire into Persian and evaluated its reliability and validity for use in Iran. Methods: Using rigorous translation methods, the 25-item scale was administered to Persian speakers with diabetes. The scale was evaluated for content validity, construct validity, and reliability. Results: Participants were 268 patients with diabetes with a mean age of 63.19 ± 16.61 years. The mean HDFQ score was 17.31 ± 5.11 (in the low range). Higher scores were associated with younger age, younger age of diabetes onset, higher education, higher employment position, family history of diabetes and hypertension, shorter diabetes duration, and adherence to home exercise regimens. Kuder-Richardson's reliability coefficient was good, i.e., 0.82. Confirmatory factor analysis showed that the factor loadings of all questions, except question number 25, were favorable, i.e., >0.3. Model fit indices were favorable: Chi-square statistic to degree of freedom ratio (χ²/df) = 1.82, Comparative fit index = 0.96, Tucker-Lewis Index = 0.96, and root mean square error = 0.06. Conclusion: After removing item #25, the Persian heart disease fact questionnaire has good validity and reliability and can be used to inform and evaluate clinical and public health educational programs aimed at decreasing risk for heart disease among Persian speakers with diabetes. abstract_id: PUBMED:8443932 Current hypertension management: separating fact from fiction. In medicine, as in other fields, myths or speculations may be repeated so often and so widely that they are perceived as fact. To some extent, this may have occurred with regard to the treatment of hypertension, especially concerning the use of diuretics and beta blockers and the significance of their metabolic effects. An analysis of the available data indicates that the use of diuretics and, to some extent, beta-adrenergic inhibitors will effectively lower blood pressure and reduce morbidity and mortality. Similar analyses strongly suggest that the metabolic changes induced by these agents may not be of major clinical importance. The widespread dissemination of theories and speculations designed to convince physicians to avoid their use may have been overdone. Scientific facts, not extrapolations of data, should be used to make treatment decisions. abstract_id: PUBMED:31909366 Obesity Fact Sheet in Korea, 2018: Data Focusing on Waist Circumference and Obesity-Related Comorbidities. Background: The global prevalence of obesity has increased steadily in recent years. Waist circumference (WC) reflects body composition better than body mass index.
The Korean Society for the Study of Obesity released the 2018 Obesity Fact Sheet to address the incidence of obesity-related comorbidities according to WC levels. Methods: Data from the Korean National Health Insurance Service health examination database from 2009 to 2016 were analyzed. Abdominal obesity was defined as a WC ≥90 cm in men and ≥85 cm in women. Incidence rates of comorbidities and all-cause mortality rates were calculated after standardizing by age and sex based on the 2010 census. Results: From 2009 to 2015, the incidence rates of type 2 diabetes mellitus, hypertension, myocardial infarction, and ischemic stroke increased both in men and women. Individuals with the lowest WC levels had the highest all-cause mortality rates followed by those with the highest WC levels in men, women, and the total population. The incidence rates of total cancer increased as WC levels escalated between 2009 and 2016. In men, the incidence rates of colorectal, prostate, and liver cancers increased as WC levels increased. The incidence rates of thyroid, colorectal, and stomach cancers increased as WC levels rose in women. In addition, medical expenses continuously increased as WC increased in both men and women. Conclusion: Based on the 2018 Obesity Fact Sheet, strategies for reducing abdominal obesity and its related comorbidities and medical expenses are a public health priority. abstract_id: PUBMED:22675158 Coarctation of the aorta and coronary artery disease: fact or fiction? Background: Aortic coarctation (CoA) is reported to predispose to coronary artery disease (CAD). However, our clinical observations do not support this premise. Our objectives were to describe the prevalence of CAD among adults with CoA and to determine whether CoA is an independent predictor of CAD or premature CAD. Methods And Results: The study population was derived from the Quebec Congenital Heart Disease Database. We compared patients with CoA and those with a ventricular septal defect, who are not known to be at increased risk of CAD. The prevalence of CAD in patients with CoA compared with those with ventricular septal defect was determined. We then used a nested case-control design to determine whether CoA independently predicted for the development of CAD. Of 756 patients with CoA who were alive in 2005, 37 had a history of CAD compared with 224 of 6481 patients with ventricular septal defect (4.9% versus 3.5%; P=0.04). Male sex (odds ratio [OR], 2.13; 95% confidence interval [CI], 1.62-2.80), hypertension (OR, 1.95; 95% CI, 1.44-2.64), diabetes mellitus (OR, 1.68; 95% CI, 1.09-2.58), age (OR per 10-year increase, 2.28; 95% CI, 2.09-2.48), and hyperlipidemia (OR, 11.58; 95% CI, 5.75-23.3) all independently predicted for the development of CAD. CoA did not independently predict for the development of CAD (OR, 1.04; 95% CI, 0.68-1.57) or premature CAD (OR for CoA versus ventricular septal defect, 1.44; 95% CI, 0.79-2.64) after adjustment for other factors. Conclusions: Although traditional cardiovascular risk factors independently predicted for the development of CAD, the diagnosis of CoA alone did not. Our findings suggest that cardiovascular outcomes of these patients may be improved with tight risk factor control. abstract_id: PUBMED:20972523 The J-shaped curve: fact or fiction? N/A abstract_id: PUBMED:16719843 Metabolic syndrome: fact or fiction. N/A Answer: Low-risk hypertension appears to be more fiction than fact.
The TROPHY study, which aimed to determine whether treating high normal blood pressure with an angiotensin receptor blocker could reduce the rate of progression to hypertension, found that individuals with high normal blood pressure had additional cardiovascular risk factors. Specifically, 96% of the participants had at least one additional risk factor, 81% had two or more, and 13% had five or more additional risk factors. This suggests that the risk of cardiovascular disease begins to rise before the diagnosis of hypertension is evident, indicating that truly low-risk hypertension only rarely exists (PUBMED:16053996). Therefore, the concept of low-risk hypertension is largely considered fiction, as most individuals with high normal blood pressure already exhibit a risk profile that predisposes them to cardiovascular disease.
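The TROPHY analysis above is, at bottom, a count of how many fixed thresholds a patient's labs and vitals cross. A minimal Python sketch of that counting logic is shown below, using the cutoffs listed in the abstract (PUBMED:16053996); the dictionary keys and the example patient are hypothetical, not part of the study data.

```python
# Sketch of the risk-factor count behind the TROPHY baseline profile
# (PUBMED:16053996). Thresholds follow the abstract; the patient
# record below is a made-up example, and field names are assumptions.

def count_trophy_risk_factors(p: dict) -> int:
    """Count additional cardiovascular risk factors per the TROPHY criteria."""
    male = p["sex"] == "M"
    checks = [
        p["cholesterol"] >= 200,                      # mg/dL
        p["ldl"] >= 160,                              # mg/dL
        p["hdl"] <= (40 if male else 50),             # mg/dL
        p["triglycerides"] >= 150,                    # mg/dL
        p["bmi"] >= 25,                               # kg/m^2
        p["fasting_insulin"] >= 20,                   # mU/mL
        p["heart_rate"] >= 80,                        # beats/min
        p["hematocrit"] >= (43.5 if male else 41.2),  # %
    ]
    return sum(checks)

patient = {"sex": "M", "cholesterol": 210, "ldl": 130, "hdl": 38,
           "triglycerides": 180, "bmi": 27.4, "fasting_insulin": 12,
           "heart_rate": 84, "hematocrit": 44.0}
print(count_trophy_risk_factors(patient))  # 6 risk factors for this example
```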
Instruction: Can one point-of-care glucose meter be used for all pediatric and adult hospital patients? Abstracts: abstract_id: PUBMED:21175272 Can one point-of-care glucose meter be used for all pediatric and adult hospital patients? Evaluation of three meters, including recently modified test strips. Background: Within hospitals, bedside blood glucose measurements are widely used for investigating suspected hyper- or hypoglycemia, monitoring diabetes, and adjusting glucose-lowering medication. Numerous point-of-care glucose meters are available, but for large hospitals using the same meter in all settings has practical and economic advantages. This investigation sought to identify a meter that was accurate, precise, and free from interferences, making it suitable for use across all ages and diseases. Methods: Lithium-heparinized whole blood was analyzed, under various conditions, on the HemoCue Glucose 201 (HemoCue AB, Ängelholm, Sweden), Accu-Chek Performa (Roche Diagnostics, Basel, Switzerland) (using the newly reformulated maltose-insensitive strips), and Optium (Abbott Diabetes, Alameda, CA, USA) glucose meters and compared with plasma glucose measurements on the Vitros 5,1 FS analyzer (Ortho Clinical Diagnostics, Neckargemünd, Germany). Results: Biases of 3.2%, -5.8%, and -8% were found with Accu-Chek, Optium, and HemoCue, respectively. Within-run imprecision was 2.5-5.8%. Between-run imprecision was 3.1-6.8%, with the Accu-Chek performing best. All meters measured down to 1.3 mmol/L with acceptable precision (coefficient of variation <14%). Varying hematocrits between 0.2 and 0.7 L/L affected the results of all meters. Interference at clinically relevant concentrations of galactose and possibly maltose was demonstrated with the Accu-Chek. Conclusions: All three meters are sufficiently accurate and precise for in-hospital use. Because of possible interference by galactosemia or high hematocrit, the Accu-Chek is not the safest option for neonatal use. Patients receiving high doses of maltose in therapeutic infusions may still be at risk of being falsely classified as euglycemic or hyperglycemic with the reformulated Accu-Chek strips, and clinical evaluation of these strips in patients receiving maltose-containing infusions is urgently needed. abstract_id: PUBMED:27451045 Point-of-Care Glucose and Ketone Monitoring. Early and rapid identification of hypo- and hyperglycemia as well as ketosis is essential for the practicing veterinarian, as these conditions can be life threatening and require emergent treatment. Point-of-care testing for both glucose and ketones is available for clinical use, and it is important for the veterinarian to understand the limitations and potential sources of error with these tests. This article discusses the devices used to monitor blood glucose, including portable blood glucose meters, point-of-care blood gas analyzers, and continuous glucose monitoring systems. Ketone monitoring options discussed include the nitroprusside reagent test strips and the 3-β-hydroxybutyrate ketone meter. abstract_id: PUBMED:25355711 Point-of-care blood glucose testing for diabetes care in hospitalized patients: an evidence-based review. Glycemic control in hospitalized patients with diabetes requires accurate near-patient glucose monitoring systems. In the past decade, point-of-care blood glucose monitoring devices have become the mainstay of near-patient glucose monitoring in hospitals across the world.
In this article, we focus on its history, accuracy, clinical use, and cost-effectiveness. Point-of-care devices have evolved from 1.2 kg instruments with no informatics to handheld, lightweight, portable devices with advanced connectivity features. Their accuracy, however, remains a subject of debate, and new standards for their approval have now been issued by both the International Organization for Standardization and the Clinical and Laboratory Standards Institute. While their cost-effectiveness remains to be proved, their clinical value for managing inpatients with diabetes remains unchallenged. This evidence-based review provides an overall view of their use in the hospital setting. abstract_id: PUBMED:10656736 Point-of-care glucose testing: effects of critical care variables, influence of reference instruments, and a modular glucose meter design. Objective: To assess the clinical performance of glucose meter systems when used with critically ill patients. Design: Two glucose meter systems (SureStepPro and Precision G) and a modular adaptation (Immediate Response Mobile Analysis-SureStepPro) were assessed clinically using arterial samples from critically ill patients. A biosensor-based analyzer (YSI 2700) and a hospital chemistry analyzer (Synchron CX-7) were the primary and secondary reference instruments, respectively. Patients And Setting: Two hundred forty-seven critical care patients at the University of California, Davis, Medical Center participated in this study. Outcome Measures: Error tolerances of ±15 mg/dL for glucose levels ≤100 mg/dL and ±15% for glucose levels >100 mg/dL were used to evaluate glucose meter performance; 95% of glucose meter measurements should fall within these tolerances. Results: Compared to the primary reference method, 98% to 100% of SureStepPro and 91% to 95% of Precision G measurements fell within the error tolerances. Paired differences of glucose measurements versus critical care variables (Po(2), pH, Pco(2), and hematocrit) were analyzed to determine the effects of these variables on meter measurements. Po(2) and Pco(2) decreased Precision G and SureStepPro measurements, respectively, but not enough to be clinically significant based on the error tolerance criteria. Hematocrit levels affected glucose measurements on both meter systems. Modular adaptation did not affect test strip performance. Conclusions: Glucose meter measurements correlated best with primary reference instrument measurements. Overall, both glucose meter systems showed acceptable performance for point-of-care testing. However, the effects of some critical care variables, especially low and high hematocrit values, could cause overestimated or underestimated glucose measurements. abstract_id: PUBMED:33848047 Agreement of blood glucose measured with glucose meter in arterial, central venous, and capillary samples in adult critically ill patients. Background: The measurement of blood glucose in critically ill patients is still performed in many ICUs with glucose meters and capillary samples. Several factors prevalent in these patients affect the accuracy of the results, which should therefore be interpreted with caution. A weak recommendation from the Surviving Sepsis Campaign (SSC) suggests the use of arterial blood rather than capillary blood for point-of-care testing using glucose meters.
Aims And Objectives: To analyse the agreement of glucose values measured by glucose meter in arterial, central venous, and capillary blood samples from critically ill patients, and to study potential confounding factors. Design: Prospective cross-sectional study in a general intensive care unit (ICU). Patients needing insulin treatment (subcutaneous or intravenous) and blood glucose control were included. Methods: Standardized collection of blood samples and measurement of glucose values with a glucometer. Agreement was studied by the Bland-Altman method, and stratified analysis of disagreement-survival plots was used to study the influence of haematocrit, pH range, SOFA score, capillary refilling time, intravenous insulin infusion, and lactic acid. Results: A total of 297 measurements from 54 patients were included. The mean arterial blood glucose was 150.42 mg/dL (range 31-345 mg/dL). In the graphical analysis, there was poor agreement of both capillary and central venous samples with arterial samples, but in opposite directions (underestimation for capillary and overestimation for central venous samples). Factors associated with a reduction in the agreement between arterial and capillary samples were elevated lactate, poor capillary refilling, and hemodynamic failure. Patients without hemodynamic compromise had acceptable agreement, with absolute differences of 16 mg/dL at a disagreement level of 10%. Conclusions: In critically ill patients, the measurement of blood glucose with a glucose meter should be performed with arterial samples whenever possible. Capillary samples do not accurately estimate arterial blood glucose values in patients with shock and/or vasoactive drugs and underestimate the values in the range of hypoglycemia. Venous samples are subject to error because of potential contamination. Relevance To Clinical Practice: This study adds support to the recommendation of using arterial blood rather than capillary or venous blood when using glucose meters in critically ill patients, especially in those with hemodynamic failure. abstract_id: PUBMED:22718643 Is there a suitable point-of-care glucose meter for tight glycemic control? Evaluation of one home-use and four hospital-use meters in an intensive care unit. Background: Implementation of tight glycemic control (TGC) and avoidance of hypoglycemia in intensive care unit (ICU) patients require frequent analysis of blood glucose. This can be achieved by accurate point-of-care (POC) hospital-use glucose meters. In this study, one home-use and four different hospital-use POC glucose meters were evaluated in critically ill ICU patients. Methods: All patients (n = 80) requiring TGC were included in this study. For each patient, three to six glucose measurements (n = 390) were performed. Blood glucose was determined by four hospital-use POC glucose meters (Roche Accu-Chek Inform II System, HemoCue Glu201DM, Nova StatStrip, and Abbott Precision Xceed Pro) and one home-use POC glucose meter (Menarini GlucoCard Memory PC). The criteria described in ISO 15197, the Dutch TNO quality guideline, and NACB/ADA-2011 were applied in the comparisons. Results: According to ISO 15197, the percentages of the measured values that fulfilled the criterion were 99.5% by Roche, 95.1% by HemoCue, 91.0% by Nova, 96.6% by Abbott, and 63.3% by Menarini. According to the TNO quality guideline, these percentages were 96.1%, 91.0%, 81.8%, 94.2%, and 47.7%, respectively.
Application of the NACB/ADA guideline resulted in percentages of 95.6%, 89.2%, 77.9%, 93.4%, and 45.4%, respectively. Conclusions: When ISO 15197 was applied, Roche, HemoCue, and Abbott fulfilled the criterion in this patient population, whereas Nova and Menarini did not. However, when the TNO quality guideline and the NACB/ADA 2011 guideline were applied, only Roche fulfilled the criteria. abstract_id: PUBMED:25172876 Accuracy of point-of-care blood glucose measurements in critically ill patients in shock. A widely used method of monitoring the glycemic status of ICU patients is the use of point-of-care (POC) monitoring devices. A possible limitation of this method is altered peripheral blood flow in patients in shock, which may result in over- or underestimation of their true glycemic status. This study aims to determine the accuracy of blood glucose measurements with a POC meter compared to laboratory methods in critically ill patients in shock. POC blood glucose was measured with a glucose-1-dehydrogenase-based reflectometric meter. The reference method was venous plasma glucose measured by a clinical chemistry analyzer (glucose oxidase-based). Outcomes assessed were concordance with ISO 15197:2003 minimum accuracy criteria for glucose meters, bias in glucose measurements obtained by the 2 methods using Bland-Altman analysis, and clinical accuracy through modified error grid analysis. A total of 186 paired glucose measurements were obtained. ISO 2003 accuracy criteria were met in 95.7% and 79.8% of POC glucose values in the normotensive and hypotensive groups, respectively. Mean bias for the normotensive group was -12.4 mg/dL, while mean bias in the hypotensive group was -34.9 mg/dL. POC glucose measurements within the target zone for clinical accuracy were 90.2% and 79.8% for the normotensive and hypotensive groups, respectively. POC blood glucose measurements were significantly less accurate in the hypotensive subgroup of ICU patients compared to the normotensive group. We recommend a lower threshold for confirming POC blood glucose with a central laboratory method when results are clinically incompatible. In light of recently updated accuracy standards, we also recommend alternative methods of glucose monitoring for the ICU population as a whole, regardless of blood pressure status. abstract_id: PUBMED:25216451 Sensitive point-of-care monitoring of cardiac biomarker myoglobin using aptamer and ubiquitous personal glucose meter. Myoglobin (Myo), which is one of the early markers to increase after acute myocardial infarction (AMI), plays a major role in the urgent diagnosis of cardiovascular diseases. Hence, monitoring of Myo at the point of care is fundamental. Here, a novel assay for sensitive and selective detection of Myo was introduced using a personal glucose meter (PGM) as readout. In the presence of Myo, the anti-Myo antibody immobilized on the surface of a polystyrene microplate could capture the target Myo. Then the selected aptamer against Myo, which was obtained using our screening process, was conjugated with invertase, and such aptamer-invertase conjugates bound to the immobilized Myo due to the Myo/aptamer interaction. Subsequently, the resulting "antibody-Myo-aptamer sandwich" complex containing invertase conjugates hydrolyzed sucrose into glucose, thus establishing a direct correlation between the Myo concentration and the amount of glucose measured by the PGM. By employing the enzyme amplification, as low as 50 pM Myo could be detected.
This assay also showed high selectivity for Myo and was successfully used for Myo detection in serum samples. This work may provide a simple but reliable tool for early diagnosis of AMI in the world, especially in developing countries. abstract_id: PUBMED:22768884 Intraoperative accuracy of a point-of-care glucose meter compared with simultaneous central laboratory measurements. Background: Concerns have been raised about the use of point-of-care (POC) glucose meters in the hospital setting. Accuracy has been questioned especially in critically ill patients. Although commonly used in intensive care units and operating rooms, POC meters were not approved by the Food and Drug Administration for such use. Data on POC glucose meter performance during anesthesia are lacking. We evaluated the accuracy of a POC meter in the intraoperative setting. Methods: We retrospectively reviewed 4,333 intraoperative records in which at least one intraoperative glucose was measured, using electronic medical records at a large academic hospital. We evaluated the accuracy of a POC glucose meter (ACCU-CHEK® Inform, Roche Pharmaceuticals) based on the 176 simultaneous central laboratory (CL) blood glucose (BG) measurements that were found (i.e., documented collection times within 5 minutes). Point-of-care and central lab BG differences were analyzed by Bland-Altman and revised error grid analysis (rEGA). Results: Mean POC BG was 163.4 ± 64.7 mg/dL [minimum (min) 48 mg/dL, maximum (max) 537 mg/dL] and mean CL BG was 162.6 ± 65.1 mg/dL (min 44 mg/dL, max 502 mg/dL). The mean absolute difference between POC and CL BG was 24.3 mg/dL. The mean absolute relative difference was 16.5% with a standard deviation of 26.4%. Point-of-care measurements showed a bias of 0.8 relative to the corresponding CL value, with a precision of 39.0 mg/dL. Forty (23%) POC BG values fell outside the Clinical and Laboratory Standards Institute guideline, and 3.4% of POC measurements fell in zones C and D of the rEGA plot. Conclusions: The tested POC glucose meter performed poorly compared to a CL analyzer intraoperatively. Perioperative clinicians should be aware of the limitations of specific POC glucose meters, and routine use of POC glucose meters as sole measurement devices in the intraoperative period should be carefully considered. abstract_id: PUBMED:8816600 Assessment of a critical limit protocol for point-of-care glucose testing. A critical limit protocol requiring that all point-of-care glucose meter readings >22.2 mmol/L (400 mg/dL) or <2.2 mmol/L (40 mg/dL) be immediately confirmed by the laboratory was assessed. A total of 193 (2%) of 9,523 glucose meter determinations (63 patients) were >22.2 or <2.2 mmol/L. One hundred twenty-two (63%) of critically high and low glucose readings were followed up, and 71 (37%) results were not. Seventy-seven percent (55 of 71) of results without follow-up were in patients with multiple glucose meter/central lab comparisons, suggesting that users may have thought it unnecessary to confirm such results. Split sample quality control specimens showed good correlation (r = 0.927) between glucose meter and central lab results, whereas correlation for follow-up glucose results was poorer (r = 0.793), perhaps reflecting the time delay in obtaining a lab sample. For follow-up results, only 18% of high/low critical limit glucose meter readings were confirmed by drawing a lab specimen within 10 minutes.
Fifty-eight percent were in 17 patients with multiple previous glucose meter readings, suggesting that users may have thought it less urgent to confirm a sequence of such results. Eleven follow-up results (9%) showed a >50% discordance between the glucose meter and central lab, with three (27%) glucose meter errors, emphasizing the need to confirm critically high/low glucose meter results to avoid potential errors. The critical limit protocol now requires that only the initial critically high/low glucose meter reading be confirmed by the lab and that these patients now be followed with lab values until glucose levels are between 5.6 and 16.7 mmol/L (100-300 mg/dL) before the glucose meter can again be used. Answer: Based on the provided abstracts, it is not advisable to use a single point-of-care glucose meter for all pediatric and adult hospital patients due to various factors that can affect the accuracy and precision of glucose measurements across different patient populations and clinical settings. One study found that while three evaluated glucose meters were sufficiently accurate and precise for in-hospital use, the Accu-Chek meter was not recommended for neonatal use due to possible interference from galactosemia or high hematocrit, and there were concerns about its use in patients receiving maltose-containing infusions (PUBMED:21175272). Another study highlighted the importance of understanding the limitations and potential sources of error with point-of-care glucose and ketone tests (PUBMED:27451045). The accuracy of point-of-care glucose monitoring devices has been a subject of debate, and while their clinical value for managing inpatients with diabetes is unchallenged, their cost-effectiveness remains to be proven (PUBMED:25355711). Critical care variables such as Po(2), pH, Pco(2), and hematocrit can affect glucose measurements, and hematocrit levels, in particular, can cause overestimated or underestimated glucose measurements (PUBMED:10656736). In critically ill patients, the measurement of blood glucose with a glucose meter should be performed with arterial samples whenever possible, as capillary samples do not accurately estimate arterial blood glucose values in patients with shock and/or vasoactive drugs (PUBMED:33848047). An evaluation of one home-use and four hospital-use point-of-care glucose meters in an ICU setting found that not all meters met the criteria for accuracy, with only one of the hospital-use meters meeting all applied criteria (PUBMED:22718643). Accuracy issues were also observed in critically ill patients in shock, where point-of-care blood glucose measurements were less accurate compared to normotensive patients (PUBMED:25172876). Intraoperative accuracy of a point-of-care glucose meter was found to be poor compared to central laboratory measurements, suggesting that perioperative clinicians should be cautious about relying solely on point-of-care glucose meters (PUBMED:22768884). In summary, while point-of-care glucose meters are valuable tools for managing glycemic control in hospitalized patients, their use should be tailored to specific patient populations and clinical scenarios, with careful consideration of factors that can affect measurement accuracy. It is not recommended to use a single point-of-care glucose meter universally for all pediatric and adult hospital patients.
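Several of the meter-evaluation abstracts above (e.g., PUBMED:10656736) share the same accuracy criterion: a reading passes if it is within ±15 mg/dL of the reference for reference values ≤100 mg/dL, or within ±15% otherwise, and at least 95% of paired readings must pass. A small Python sketch of that check follows; the paired readings are made up for illustration.

```python
# Sketch of the meter-accuracy criterion stated in the abstracts above:
# a reading is acceptable if it is within ±15 mg/dL of the reference
# for reference values <=100 mg/dL, or within ±15% otherwise, and
# >=95% of paired readings must pass. The pairs below are invented.

def within_tolerance(meter: float, reference: float) -> bool:
    """Apply the ±15 mg/dL / ±15% error tolerance to one paired reading."""
    if reference <= 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

def meets_criterion(pairs, required=0.95):
    """True if the required fraction of meter/reference pairs passes."""
    passed = sum(within_tolerance(m, r) for m, r in pairs)
    return passed / len(pairs) >= required

pairs = [(92, 88), (150, 160), (245, 210), (65, 58), (310, 295)]
print(meets_criterion(pairs))  # False: (245, 210) misses the ±15% band
```

The same tolerance structure underlies the ISO 15197 and NACB/ADA comparisons reported above, though each guideline sets its own bands and pass rates.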
Instruction: Patient satisfaction with a hospitalist procedure service: is bedside procedure teaching reassuring to patients? Abstracts: abstract_id: PUBMED:21480494 Patient satisfaction with a hospitalist procedure service: is bedside procedure teaching reassuring to patients? Background: In recent years, hospital medicine programs have adopted "procedure teams" that supervise residents in performing invasive bedside procedures. The effect of procedure teams on patient satisfaction is unknown. Objective: We sought to measure patient satisfaction with procedures performed by a hospitalist-supervised, intern-based procedure service (HPS) with a focus on patient perception of bedside communication. Design: This was a prospective survey. Methods: We surveyed all patients referred to the HPS for bedside thoracentesis, paracentesis, lumbar puncture, and arthrocentesis at a single academic medical center. Following each procedure, surveys were administered to English-speaking patients who could provide informed consent. Survey questions focused on patients' satisfaction with specific aspects of procedure performance as well as the quality and impact of communication with the patient and between members of the team. Results: Of 95 eligible patients, 65 (68%) completed the survey. Nearly all patients were satisfied or very satisfied with the overall experience (100%), explanation of informed consent (98%), pain control (92%), and expertise (95%) of physicians. The majority of patients were satisfied with procedure duration (88%) and in those with therapeutic procedures most (89%) were satisfied with improvement in symptoms. Hearing physicians discuss the procedure at the bedside was reassuring to most patients (84%), who felt this to be a normal part of doing a procedure (94%). Conclusions: Patients are highly satisfied with procedure performance by supervised trainees, and many patients were reassured by physician communication during the procedure. These results suggest that patient experience and teaching can be preserved with a hospitalist-supervised procedure service. abstract_id: PUBMED:35111475 The Impact of a New Internal Medicine Residency Program on Patient Satisfaction Scores for Teaching Hospitalist Faculty Compared to Non-teaching Hospitalist. Introduction: The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) is a national survey sent to patients to measure their inpatient experience. Graduate medical education programs may affect a sponsoring institution in various ways, but there has been little research into the effect of teaching hospitalist faculty on HCAHPS scores in a community-based hospital. The aim of the current study is to evaluate if the introduction of internal medicine resident physicians would affect the HCAHPS scores of patients admitted by hospitalist faculty physicians. Methods: This was a retrospective analysis of anonymous patient satisfaction survey data for internal medicine hospitalist teams from January 2019 to December 2019. Data were retrieved from the Press Ganey database. We compared two groups: teaching hospitalists (N = 12) and non-teaching hospitalists (N = 34). Data were divided into two time periods: January to June (pre-residents) and July to December (post-residents). Results: From January to June (pre-residents), 646 HCAHPS surveys were returned. For the post-resident cohort (July to December), a total of 487 surveys were returned. 
The "Recommend" domain, showed a significant improvement in the mean pre-resident to post-resident (57% to 69%; p = 0.0351). Conclusion: There was a significant increase in the mean rating of the "Recommend" hospital domain for the teaching hospitalists when compared to the non-teaching after the addition of a new internal medicine residency program. abstract_id: PUBMED:32212094 Impact of Hospitalist Team Structure on Patient-Reported Satisfaction with Physician Performance. Background: Patient experience is valuable because it reflects how patients perceive the care they receive within the healthcare system and is associated with clinical outcomes. Also, as part of the Hospital Value-Based Purchasing (HVBP) program, the Center for Medicare and Medicaid Services (CMS) rewards hospitals with financial incentives for patient experience as measured by the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. It is unclear how the addition of residents and advanced practice clinicians (APCs) to hospitalist-led inpatient teams affects patient satisfaction as measured by the HCAHPS and Press Ganey survey. Objective: To compare patient satisfaction with hospitalists on resident, APC, and solo hospitalist teams measured by HCAHPS and Press Ganey physician performance domain survey results. Design: Retrospective observational cohort study. Participants: All patients discharged from the Internal Medicine inpatient service between July 1, 2015, and July 1, 2018, who met HCAHPS survey eligibility criteria and completed a patient experience survey. Main Measures: HCAHPS and Press Ganey physician performance domain survey results. Key Results: No differences were observed in the selection of "top box" scores on the HCAHPS physician performance domain between resident, APC, and solo hospitalist teams. Adjusted Press Ganey physician performance domain survey results demonstrated significant differences between solo hospitalist and resident teams, with solo hospitalists having higher scores in three areas: time physician spent with you (4.58 vs. 4.38, p = 0.050); physician kept you informed (4.63 vs. 4.43, p = 0.047); and physician skill (4.80 vs. 4.63, p = 0.027). Solo hospitalists were perceived to have higher physician skill in comparison with hospitalist-APC teams (4.80 vs. 4.69, p = 0.042). Conclusion: While Press Ganey survey results suggest that patients have greater satisfaction with physicians on solo hospitalist teams, these differences were not observed on the HCAHPS physician performance survey domain, suggesting physician team structure does not impact HVBP incentive payments by CMS. abstract_id: PUBMED:34179412 Impact of Structured and Scheduled Family Meetings on Satisfaction in Patients Admitted to Hospitalist Service. Effective communication is key to patient satisfaction. Family meetings been shown to be effective in other settings such as critical care and palliative medicine. We evaluated the impact of scheduled and structured family meetings on patients admitted to the hospitalist service in terms of satisfaction with care delivery. More patients in the intervention group reported better understanding of their diagnosis, treatment plan, medications, and discharge plan. Based on these results, we advocate for structured and scheduled family meetings to be implemented as a communication tool for selected patients on the hospital medicine service to improve patient experience and satisfaction. 
abstract_id: PUBMED:26381606 Measuring patient experiences on hospitalist and teaching services: Patient responses to a 30-day postdischarge questionnaire. Background: Data comparing patient experiences between general medicine teaching and nonteaching hospitalist services are lacking. Objective: Evaluate hospitalized patients' experience on general medicine teaching and nonteaching hospitalist services by assessing patients' confidence in their ability to identify their physician(s) and understand their roles, and their ratings of coordination and overall care. Methods: Retrospective cohort analysis of general medicine teaching and nonteaching hospitalist services from 2007 to 2013 at an academic medical center. Patients were surveyed 30 days after hospital discharge regarding their confidence in their ability to identify their physician(s), understand the role of their physician(s), and their perceptions of coordination and overall care. A 3-level, mixed effects logistic regression was performed to ascertain the association between service type and patient-reported outcomes. Results: Data from 4591 general medicine teaching and 1811 nonteaching hospitalist service patients demonstrated that those cared for by the hospitalist service were more likely to report being able to identify their physician (50% vs 45%, P < 0.001), understand their role (54% vs 50%, P < 0.001), and rate greater satisfaction with coordination (68% vs 64%, P = 0.006) and overall care (73% vs 67%, P < 0.001). In regression models, the hospitalist service was associated with higher ratings in overall care (odds ratio [OR]: 1.33; 95% confidence interval [CI]: 1.15-1.47), even when hospitalists were the attendings on general medicine teaching services (OR: 1.17; 95% CI: 1.01-1.31). Conclusion: Patients on a nonteaching hospitalist service rated their overall care slightly better than patients on a general medicine teaching service. Team structure and complexity may play a role in this difference. abstract_id: PUBMED:34360394 Comparison of Patient Satisfaction in Inpatient Care Provided by Hospitalists and Nonhospitalists in South Korea. Background: A Korean hospitalist is a medical doctor in charge of inpatient care during hospital stays. The purpose of this study is to examine the patient satisfaction of hospitalist patients compared to non-hospitalist patients. Patient satisfaction is closely related to the outcome, quality, safety, and cost of care. Thus, seeking to achieve high patient satisfaction is essential in the inpatient care setting. Design, Setting, And Participants: This is a case-control study based on a patient satisfaction survey by the Korean Health Insurance Review and Assessment Service. We measured patients' satisfaction in physician accessibility, consultation and care service skills, and overall satisfaction through logistic regression analyses. A total of 3871 patients from 18 facilities responded to 18 questionnaires and had health insurance claim data. Results: Hospitalist patients presented higher satisfaction during the hospital stay compared to non-hospitalist patients. For example, as per accessibility, hospitalist patients could meet their attending physician more than twice a day (OR: 3.46, 95% CI: 2.82-4.24). Concerning consultation and care service skills, hospitalists' explanations of the condition and care plans were easy to understand (OR: 2.33, 95% CI: 1.89-2.88). Moreover, overall satisfaction was significantly higher (β: 0.431, p < 0.0001).
Subgroup analyses were conducted by medical division and region. Hospitalist patients in the surgical department and the rural area had greater patient satisfaction in all aspects of the survey than non-hospitalist patients. Conclusions: Hospitalists' patients showed higher satisfaction during the hospital stay. Our study discovered that hospitalists could provide high-quality care as they provide onsite care continuously from admission to discharge. abstract_id: PUBMED:35962604 Factors reducing psychological satisfaction after the Nuss procedure in pediatric patients. Purpose: We examined patient satisfaction with postoperative chest appearance after the Nuss procedure and analyzed the factors for postoperative low satisfaction. Methods: We retrospectively reviewed data of 133 patients who underwent the Nuss procedure from 2000 to 2016. Their medical records, X-rays, and computed tomography scans were evaluated. Haller index and concave rate were used as objective indices of the deformity. Questionnaires were used to evaluate satisfaction with the chest appearance on a linear scale including five markers (1: dissatisfaction, 5: satisfaction). The patients were divided into two groups: the low satisfaction (score = 1, 2) and the high satisfaction (score = 3-5). Results: The median age during the Nuss procedure was 7.6 (interquartile range, 5.8-12.8) years. Of the 133 patients, 65 replied, and the mean postoperative satisfaction score was 3.8 ± 0.2. Of the 65 respondents, 16 patients (24.6%) were classified into the low satisfaction group. Haller index and concave rate were significantly higher, and a history of previous chest operations was more frequent, in the low satisfaction group than in the high satisfaction group, although there was no significant intergroup difference in terms of the postoperative concave rate. Conclusions: Severe deformity and previous chest operation history were considered to be factors for low satisfaction. abstract_id: PUBMED:33577741 Implementation of an academic hospital medicine procedure service: 5-year experience. Objectives: Procedural complications are a common source of adverse events in hospitals, especially where bedside procedures are often performed by trainees. Medical procedure services (MPS) have been established to improve procedural education, ensure patient safety, and provide additional revenue for services that are typically referred. Prior descriptions of MPS have reported outcomes over one to two years. We aim to describe the implementation and 5-year outcomes of a hospitalist-run MPS. Methods: We identified all patients referred to our MPS for a procedure over the 5-year span between 2014 and 2018. We manually reviewed all charts for complications of paracentesis, thoracentesis, central venous catheterization, and lumbar punctures performed by the MPS in both inpatient and outpatient settings. Annual charges for these procedures were queried using Current Procedural Terminology (CPT) codes. Results: We identified 3,634 MPS procedures. Of these, ultrasound guidance was used in 3,224 (88.7%) and trainees performed 2,701 (74%). Complications identified included pneumothorax (3.7%, n = 16) for thoracentesis, post-dural puncture headache (13.9%, n = 100) and bleeding (0.1%, n = 1) for lumbar puncture, ascites leak for diagnostic (1.6%, n = 8) and large volume (3.7%, n = 56) paracentesis, and bleeding (3.5%, n = 16) for central venous catheter placement. Prior to initiation of the MPS, total annual procedural charges were $90,437.
After MPS implementation, charges increased to a mean of $787,352 annually in the last 4 years of the study period. Conclusions: Implementation of a hospitalist-run, academic MPS resulted in a large volume of procedures, high rate of trainee participation, low rates of complications, and significant increase in procedural charges over 5 years. Wider adoption of this model has the potential to further improve patient procedural care and trainee education. abstract_id: PUBMED:32642398 The Teaching Interaction Procedure as a Staff Training Tool. The teaching interaction procedure is an evidence-based procedure that has been utilized for the development of social skills. The teaching interaction procedure consists of labeling the targeted skill, providing a meaningful rationale for the importance of the skill, describing the steps of the targeted skill, modeling the skill, and providing feedback throughout the interaction. Although the teaching interaction procedure has been used to teach a variety of social skills to children and adolescents diagnosed with autism spectrum disorder (ASD) and other social and behavioral disorders, its use has not been evaluated for training staff. The purpose of this study was to evaluate the use of a teaching interaction procedure to teach 3 interventionists the skills to implement a teaching interaction procedure to target the development of social skills for children diagnosed with ASD. The results of a multiple-baseline design showed the teaching interaction procedure was effective at teaching all 3 interventionists how to implement a teaching interaction procedure. abstract_id: PUBMED:23288691 Hospitalist staffing and patient satisfaction in the national Medicare population. Background: Payers are increasingly holding hospitals accountable for patients' experiences with their care. This may conflict with another trend among US hospitals, greater hospitalist care, as hospitalists may have less familiarity with the history and preferences of their patients compared with primary-care physicians. Objective: Our objective was to better understand the relationship between hospitalist care and patients' experiences with their care. Design: This was a retrospective cohort study. Setting: The setting was 2,843 US acute-care hospitals (bottom tertile or "non-hospitalist" hospitals: median of 0% of general-medicine patients cared for by hospitalists; middle tertile or "mixed" hospitals: median of 39.5%; top tertile or "hospitalist" hospitals: median of 76.5%). Patients: The patients were 132,814 hospitalized Medicare beneficiaries cared for by a general medicine physician in 2009. Measurements: The measurements were hospitalist use, based on Medicare claims data, and patient satisfaction, as measured by the Hospital Consumer Assessment of Healthcare Providers and Systems survey. Results: "Hospitalist" hospitals had better performance on global measures of patient satisfaction than "mixed" or "non-hospitalist" hospitals (overall satisfaction: 65.6% vs 63.9% vs 63.9%, respectively, P value for difference < 0.001). Hospitalist hospitals performed better in 6 specific domains of care, with the largest difference in satisfaction with discharge compared with mixed or non-hospitalist hospitals (80.3% vs 79.1% vs 78.1%, P < 0.001). Hospitalist care was not associated with patient satisfaction in 2 domains of care: cleanliness of room and communication with physician.
Conclusion: For most measures of patient satisfaction, greater hospitalist care was associated with modestly better patient-centered care. Answer: Yes, patient satisfaction with a hospitalist procedure service that includes bedside procedure teaching is reassuring to patients. A study that surveyed patients who underwent procedures such as thoracentesis, paracentesis, lumbar puncture, and arthrocentesis by a hospitalist-supervised, intern-based procedure service found that nearly all patients were satisfied or very satisfied with the overall experience. Specifically, 84% of patients found hearing physicians discuss the procedure at the bedside to be reassuring, and 94% felt that this communication was a normal part of doing a procedure (PUBMED:21480494). This suggests that bedside teaching and communication during procedures can enhance patient reassurance and satisfaction.
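Survey comparisons like the HCAHPS "top box" analyses cited above reduce to comparing proportions of respondents who chose the best response category. The sketch below shows one conventional way to do that, a two-proportion z-test; the top-box counts are hypothetical, and only the survey denominators (646 pre-resident, 487 post-resident) are taken from the abstract (PUBMED:35111475).

```python
# Sketch of a "top box" comparison like the HCAHPS analyses cited
# above: the top-box rate is the share of respondents choosing the
# best response category, and two groups are compared with a
# two-proportion z-test. Top-box counts below are invented;
# denominators follow the abstract's reported survey returns.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided p-value for H0: equal top-box proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 368/487 post-resident vs 368/646 pre-resident top-box scores.
z, p = two_proportion_z(368, 487, 368, 646)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Published HCAHPS analyses typically add risk adjustment for patient mix on top of this raw comparison, which a sketch like this omits.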
Instruction: Does subtype matter? Abstracts: abstract_id: PUBMED:25377626 Greater loss of white matter integrity in postural instability and gait difficulty subtype of Parkinson's disease. Background: Patients with the postural instability and gait difficulty (PIGD) subtype of Parkinson disease (PD) are at a higher risk of dysfunction and are less responsive to dopamine replacement therapy. The PIGD subtype was found to largely associate with white matter lesions, but details of the diffusion changes within these lesions have not been fully investigated. Voxel-based analysis for diffusion tensor imaging data is one of the preferred measures to compare diffusion changes in each voxel in any part of the brain. Methods: PD patients with the PIGD (n=12) and non-PIGD subtypes (n=12) were recruited to compare diffusion differences in fractional anisotropy, axial diffusivity, and radial diffusivity with voxel-based analysis. Results: Significantly reduced fractional anisotropy in the bilateral superior longitudinal fasciculus, bilateral anterior corona radiata, and the left genu of the corpus callosum was shown in the PIGD subtype compared with the non-PIGD subtype. Increased radial diffusivity in the left superior longitudinal fasciculus was found in the PIGD subtype, with no statistical differences in axial diffusivity found. Conclusions: Our study confirms previous findings that white matter abnormalities were greater in the PIGD subtype than in the non-PIGD subtype. Additionally, our findings suggested: (1) compared with the non-PIGD subtype, loss of white matter integrity was greater in the PIGD subtype; (2) the bilateral superior longitudinal fasciculus may play a critical role in microstructural white matter abnormalities in the PIGD subtype; and (3) reduced white matter integrity in the PIGD subtype could be mainly attributed to demyelination rather than axonal loss. abstract_id: PUBMED:34333322 Larger thalamus correlated with inattentive severity in the inattentive subtype of ADHD without comorbidity. Previous studies of brain structural abnormalities in attention-deficit/hyperactivity disorder (ADHD) samples have scarcely excluded comorbidity or analyzed samples by subtype. This study aimed to identify neuroanatomical alterations related to diagnosis and subtype in ADHD participants without comorbidity. In our cross-sectional analysis, we used T1-weighted structural MRI images of individuals from the ADHD-200 database. After strict exclusion, 121 age-matched children with uncomorbid ADHD (54 with ADHD-inattentive [iADHD] and 67 with ADHD-combined [cADHD]) and 265 typically developing control subjects (TDC) were included in the current investigation. The established method of voxel-based morphometry (VBM8) was used to assess global brain volume and regional grey matter volume (GM). Our results showed that the ADHD patients had more regional GM in the bilateral thalamus relative to the controls. Post hoc analysis revealed that the regional GM increase was linked only to the iADHD subtype, in the right thalamus and precentral gyrus. In addition, the right thalamus volume was positively related to inattentive severity in the iADHD group. There were no group differences in global volume. Our results provide preliminary evidence that cerebral structural alterations are tied to uncomorbid ADHD and are predominantly attributable to the iADHD subtype. Furthermore, the volume of the right thalamus may be relevant to inattentive symptoms in iADHD, possibly related to a lack of inhibition of irrelevant sensory input.
abstract_id: PUBMED:33328969 Voxel-Based Meta-Analysis of Gray Matter Abnormalities in Multiple System Atrophy. Purpose: This study aimed to identify consistent gray matter volume (GMV) changes in the two subtypes of multiple system atrophy (MSA), the parkinsonism subtype (MSA-P) and the cerebellar subtype (MSA-C), by conducting a voxel-wise meta-analysis of whole brain voxel-based morphometry (VBM) studies. Method: VBM studies comparing MSA-P or MSA-C and healthy controls (HCs) were systematically searched for in PubMed, Embase, and Web of Science, covering studies published from 1974 to 20 October 2020. A quantitative meta-analysis of VBM studies on MSA-P or MSA-C was performed separately using the effect size-based signed differential mapping (ES-SDM) method. A complementary analysis was conducted using the Seed-based d Mapping with Permutation of Subject Images (SDM-PSI) method, which allows a familywise error rate (FWE) correction for multiple comparisons of the results, for further validation of the results. Results: Ten studies were included in the meta-analysis of the MSA-P subtype, comprising 136 MSA-P patients and 211 HCs. Five studies were included in the meta-analysis of the MSA-C subtype, comprising 89 MSA-C patients and 134 HCs. Cerebellum atrophy was detected in both MSA-P and MSA-C, whereas basal ganglia atrophy was only detected in MSA-P. Cerebral cortex atrophy was detected in both subtypes, with predominant impairment of the superior temporal gyrus, inferior frontal gyrus, temporal pole, insula, and amygdala in MSA-P and predominant impairment of the superior temporal gyrus, middle temporal gyrus, fusiform gyrus, and lingual gyrus in MSA-C. Most of these results survived the FWE correction in the complementary analysis, except for the bilateral amygdala and the left caudate nucleus in MSA-P, and the right superior temporal gyrus and the right middle temporal gyrus in MSA-C. These findings remained robust in the jackknife sensitivity analysis, and no significant heterogeneity was detected. Conclusion: The different pattern of brain atrophy between MSA-P and MSA-C detected in the current study was in line with clinical manifestations and provided evidence for the pathophysiology of the two subtypes of MSA. abstract_id: PUBMED:31795223 HIV-1 Latency and Latency Reversal: Does Subtype Matter? Cells that are latently infected with HIV-1 preclude an HIV-1 cure, as antiretroviral therapy does not target this latent population. HIV-1 is highly genetically diverse, with over 10 subtypes and numerous recombinant forms circulating worldwide. In spite of this vast diversity, much of our understanding of latency and latency reversal is based on subtype B viruses. As such, most of the development of cure strategies targeting HIV-1 is solely based on subtype B. It is currently assumed that subtype does not influence the establishment or reactivation of latent viruses. However, this has not been conclusively proven one way or the other. A better understanding of the factors that influence HIV-1 latency in all viral subtypes will help develop therapeutic strategies that can be applied worldwide. Here, we review the latest literature on subtype-specific factors that affect viral replication, pathogenesis, and, most importantly, latency and its reversal. abstract_id: PUBMED:26138235 Structural brain aberrations associated with the dissociative subtype of post-traumatic stress disorder.
Objective: One factor potentially contributing to the heterogeneity of previous results on structural grey matter alterations in adult participants suffering from post-traumatic stress disorder (PTSD) is the varying levels of dissociative symptomatology. The aim of this study was therefore to test whether the recently defined dissociative subtype of PTSD, marked by symptoms of depersonalization and derealization, is characterized by specific differences in volumetric brain morphology. Method: Whole-brain MRI data were acquired for 59 patients with PTSD. Voxel-based morphometry was carried out to test for group differences between patients classified as belonging (n = 15) vs. not belonging (n = 44) to the dissociative subtype of PTSD. The correlation between dissociation (depersonalization/derealization) severity and grey matter volume was computed. Results: Patients with PTSD classified as belonging to the dissociative subtype exhibited greater grey matter volume in the right precentral and fusiform gyri as well as less volume in the right inferior temporal gyrus. Greater dissociation severity was associated with greater volume in the right middle frontal gyrus. Conclusion: The results of this first whole-brain investigation of specific grey matter volume in dissociative subtype PTSD identified structural aberrations in regions subserving the processing and regulation of emotional arousal. These might constitute characteristic biomarkers for the dissociative subtype PTSD. abstract_id: PUBMED:30976169 Impact of HIV-1 subtype and Korean Red Ginseng on AIDS progression: comparison of subtype B and subtype D. Background: To date, no study has described disease progression in Asian patients infected with HIV-1 subtype D. Methods: To determine whether the disease progression differs in patients infected with subtypes D and B prior to starting combination antiretroviral therapy, the annual decline (AD) in CD4+ T cell counts over 96 ± 59 months was retrospectively analyzed in 163 patients and compared in subtypes D and B based on the nef gene. Results: CD4+ T cell AD was significantly higher in the six subtype D-infected patients than in the 157 subtype B-infected patients irrespective of Korean Red Ginseng (KRG) treatment (p < 0.001). Of these, two subtype D-infected patients and 116 subtype B-infected patients had taken KRG. AD was significantly lower in patients in the KRG-treated group than in those in the KRG-naïve group irrespective of subtype (p < 0.05). To control for the effect of KRG, patients not treated with KRG were analyzed, with AD found to be significantly greater in subtype D-infected patients than in subtype B-infected patients (p < 0.01). KRG treatment had a greater effect on AD in subtype D-infected patients than in subtype B-infected patients (4.5-fold vs. 1.6-fold). Mortality rates were significantly higher in both the 45 KRG-naïve (p < 0.001) and all 163 (p < 0.01) patients infected with subtype D than with subtype B. Conclusion: Subtype D infection is associated with a >2-fold higher risk of death and a 2.9-fold greater rate of progression than subtype B, regardless of KRG treatment. abstract_id: PUBMED:34784526 Postural and gait symptoms in de novo Parkinson's disease patients correlate with cholinergic white matter pathology.
Introduction: The postural instability gait difficulty motor subtype of patients with Parkinson's disease (PIGD-PD) has been associated with more severe cognitive pathology and a higher risk of dementia compared to the tremor-dominant subtype (TD-PD). Here, we investigated whether the microstructural integrity of the cholinergic projections from the nucleus basalis of Meynert (NBM) was different between these clinical subtypes. Methods: Diffusion-weighted imaging data of 98 newly-diagnosed unmedicated PD patients (44 TD-PD and 54 PIGD-PD subjects) and 10 healthy controls were analysed using diffusion tensor imaging, focusing on the white matter tracts associated with cholinergic projections from the NBM (NBM-WM) as the tract-of-interest. Quantitative tract-based and voxel-based analyses were performed using FA and MD as the estimates of white matter integrity. Results: Voxel-based analyses indicated significantly lower FA in the frontal part of the medial and lateral NBM-WM tract of both hemispheres of PIGD-PD compared to TD-PD. Relative to healthy controls, several clusters with significantly lower FA were observed in the frontolateral NBM-WM tract of both disease groups. Furthermore, significant correlations between the severity of the axial and gait impairment and NBM-WM FA and MD were found, which were partially mediated by the effect of NBM-WM state on subjects' attentional performance. Conclusions: The PIGD-PD subtype shows a loss of microstructural integrity of the NBM-WM tract, which suggests that a loss of cholinergic projections in this PD subtype is already present in de novo PD patients. abstract_id: PUBMED:29713293 Neurocognitive Impairments Are More Severe in the Binge-Eating/Purging Anorexia Nervosa Subtype Than in the Restricting Subtype. Objective: To evaluate cognitive function impairment in patients with anorexia nervosa (AN) of either the restricting (ANR) or binge-eating/purging (ANBP) subtype. Method: We administered the Japanese version of the MATRICS Consensus Cognitive Battery to 22 patients with ANR, 18 patients with ANBP, and 69 healthy control subjects. Our participants were selected from among the patients at the Kobe University Hospital and community residents. Results: Compared to the healthy controls, the ANR group had significantly lower visual learning and social cognition scores, and the ANBP group had significantly lower processing speed, attention/vigilance, visual learning, reasoning/problem-solving, and social cognition scores. Compared to the ANR group, the ANBP group had significantly lower attention/vigilance scores. Discussion: The AN subtypes differed in cognitive function impairments. Participants with ANBP, which is associated with higher mortality rates than ANR, exhibited greater impairment severities, especially in the attention/vigilance domain, confirming the presence of impairments in continuous concentration. This may relate to impulsivity, an ANBP characteristic reported in personality research. Future studies can further clarify the cognitive impairments of each subtype by addressing each subtype's cognitive functions and personality characteristics. abstract_id: PUBMED:26115789 Discrete Global but No Focal Gray Matter Volume Reductions in Unmedicated Adult Patients With Attention-Deficit/Hyperactivity Disorder. Background: Gray matter reduction mainly in the anterior cingulate cortex, the basal ganglia, and the cerebellum has been reported in attention-deficit/hyperactivity disorder (ADHD).
Yet, respective data remain contradictory and inconclusive. To clarify if structural alterations in these brain areas can be verified in a large cohort of adult patients and if a history of stimulant medication has an effect on brain structure, magnetic resonance imaging was performed in the context of a clinical trial on the efficacy of group psychotherapy, clinical management, methylphenidate, and placebo (Comparison of Methylphenidate and Psychotherapy in Adult ADHD Study Trial). Methods: Between January 2007 and August 2010, 1480 patients from seven study centers across Germany, aged 18 to 58, were prescreened; 518 were assessed for eligibility; 433 were randomized; and 187 were eligible for neuroimaging. The control group included 121 healthy volunteers. Structural magnetic resonance imaging data sets were acquired. Following strict quality control, 131 patient and 95 control data sets could be analyzed. All patients were unmedicated for at least 6 months. The established method of voxel-based morphometry (VBM8 segmentation and diffeomorphic anatomical registration through exponentiated lie normalization) was used to assess global and regional brain volumes. Results: Patients displayed subtle global cerebral volume reductions. There was no evidence of regional gray matter volume abnormalities. The inattentive ADHD subtype was linked to smaller volumes in the left dorsolateral prefrontal cortex. A history of previous medication did not modulate brain volumes. Conclusions: ADHD in adulthood is associated with global rather than regional volumetric abnormalities. Previous use of stimulant medication does not seem to modify subsequent brain volumes in a significant way. abstract_id: PUBMED:24916944 The association between children's ADHD subtype and parenting stress and parental symptoms. Objective. To investigate the association between two attention deficit/hyperactivity disorder (ADHD) subtypes (combined and inattentive) and parental stress in a Taiwanese population. Method. One hundred and nine children with ADHD were interviewed using the MINI Kid questionnaire. The children were divided into combined and inattentive subtypes. The level of parenting stress was measured with the self-report Parenting Stress Index and Parental Symptom Scale (Symptom Check List, SCL-90). The data were used to identify child and parental risk factors. Results. Combined subtype was highly associated with parental stress. Parents of children in this group were significantly younger, had a poorer understanding of ADHD, and had significantly higher levels of marital discord, parenting stress, parental symptoms, and life stress (all P<0.05). Multivariate analysis revealed that age of mother, child with comorbidity diagnosis, and parents with poor understanding of ADHD were significantly associated with the combined subtype relative to the inattentive subtype (all P<0.05). Conclusion. The combined subtype of ADHD is associated with more parental psychopathology and stress than the inattentive subtype, and the presence of the combined subtype may indicate that such a child is at greater risk than the inattentive subtype for comorbid conditions such as oppositional defiant disorder and conduct disorder. Answer: Yes, subtype does matter in various neurological and psychiatric conditions as evidenced by differences in clinical manifestations, brain structure alterations, and treatment responses associated with different subtypes of the same disorder.
For instance, in Parkinson's disease (PD), the postural instability and gait difficulty (PIGD) subtype is associated with greater white matter abnormalities and less responsiveness to dopamine replacement therapy compared to the non-PIGD subtype. Voxel-based analysis of diffusion tensor imaging data revealed significantly reduced fractional anisotropy in specific brain regions in the PIGD subtype, suggesting a greater loss of white matter integrity, which could be mainly attributed to demyelination rather than axonal loss (PUBMED:25377626). In attention-deficit/hyperactivity disorder (ADHD), structural MRI studies have shown that the inattentive subtype (iADHD) is associated with a larger thalamus, which correlates with the severity of inattentive symptoms. This suggests that cerebral structural alterations are specific to the iADHD subtype and may be related to a lack of inhibition of irrelevant sensory input (PUBMED:34333322). Multiple system atrophy (MSA) also exhibits subtype-specific gray matter volume changes, with cerebellum atrophy detected in both parkinsonism (MSA-P) and cerebellar (MSA-C) subtypes, but basal ganglia atrophy only in MSA-P. Different patterns of cerebral cortex atrophy were observed between the two subtypes, aligning with their clinical manifestations (PUBMED:33328969). In HIV-1, the influence of viral subtype on latency and latency reversal is an area of active research. Most studies and cure strategies have focused on subtype B, but it is not conclusively proven whether subtype influences the establishment or reactivation of latent viruses. Understanding the impact of different subtypes is crucial for developing globally applicable therapeutic strategies (PUBMED:31795223). The dissociative subtype of post-traumatic stress disorder (PTSD) is characterized by specific volumetric brain morphology differences, such as greater grey matter volume in certain regions, compared to PTSD without dissociative symptoms. These structural aberrations might serve as biomarkers for the dissociative subtype (PUBMED:26138235).
Instruction: Normal coronary angiograms: financial victory from the brink of clinical defeat? Abstracts: abstract_id: PUBMED:26209809 Exogenous testosterone in women enhances and inhibits competitive decision-making depending on victory-defeat experience and trait dominance. The present experiment tested the causal impact of testosterone on human competitive decision-making. According to prevailing theories about testosterone's role in social behavior, testosterone should directly boost competitive decisions. But recent correlational evidence suggests that testosterone's behavioral effects may depend on specific aspects of the context and person relevant to social status (win-lose context and trait dominance). We tested the causal influence of testosterone on competitive decisions by combining hormone administration with measures of trait dominance and a newly developed social competition task in which the victory-defeat context was experimentally manipulated, in a sample of 54 female participants. Consistent with the hypothesis that testosterone has context- and person-dependent effects on competitive behavior, testosterone increased competitive decisions after victory only among high-dominant individuals but testosterone decreased competitive decisions after defeat across all participants. These results suggest that testosterone flexibly modulates competitive decision-making depending on prior social experience and dominance motivation in the service of enhancing social status. abstract_id: PUBMED:8697169 Normal coronary angiograms: financial victory from the brink of clinical defeat? Objective: To examine the hypothesis that, in patients undergoing coronary angiography for suspected ischaemic heart disease, a normal angiographic result is associated with a fall in consumption of health care resources following the angiogram. Design: Retrospective cost-benefit analysis comparing the 12 month periods before and after coronary angiography. Setting: Tertiary cardiac referral centre. Subjects: 69 consecutive patients investigated in the financial year 1991-92 whose angiograms were normal. Main Outcome Measures: Drug and hospital admission costs in the 12 month periods before and after angiography; urgent and elective consultations with the general practitioner in that time. Results: The mean cost of care per patient in the year before investigation was 656.89 pounds. A highly significant fall in all indices of resource consumption was observed in the year following investigation, the mean resulting difference in the cost of care being 35.15 pounds per month. The cost of coronary angiography would, if this fall were maintained, be recouped in a mean time of 18 months. Conclusions: Patients suspected on clinical grounds to have coronary atherosclerosis who are found at angiography to have normal coronary arteries are heavy consumers of health care resources. Early investigation for these patients is safe and has beneficial resource consequences in the medium term. abstract_id: PUBMED:36092047 Impact of victory and defeat on the perceived stress and autonomic regulation of professional eSports athletes. Competitive sports involve physiological, technical and psychological skills, which directly influence individuals' performance. This study aims to investigate the levels of perceived stress and Heart Rate Variability (HRV) before and after matches with victory and defeat in professional eSports athletes.
Our hypothesis was that the winners would have better autonomic and stress responses after the match, thus corroborating the literature on neurocardiac connections. Fifty male eSport players were selected from 10 different Brazilian teams. The experiment was carried out in 2 sessions. Firstly, after signing the informed consent form, 24 h before the game, anthropometric data, physical activity levels, and time of expertise were recorded only for sample characterization and the players were familiarized with the perceived stress scale-10 (PSS-10) and the HRV measurements. Secondly, players completed the PSS-10 and a 10-min resting HRV recording at 60 and 30 min before the game (i.e., baseline time) and 10 min after the end of the game. Overall, concerning the PSS-10, our findings show that the victory group (VG) had significantly reduced scores in post-game time compared to baseline (BL) and pre-game times, while the defeat group (DG) had significantly increased scores in post-game time compared to BL and pre-game times. Regarding HRV, our results demonstrate that VG had a significant increase in RR, SDNN, rMSSD, pNN50 and HF, and a significant decrease in LF and LF/HF, while DG had a significant decrease in RR, SDNN, rMSSD and HF, and a significant increase in LF and LF/HF. It was observed that VG had better HRV responses (greater parasympathetic activation) as well as lower levels of perceived stress, while DG had worse HRV responses (greater sympathetic activation) and higher levels of perceived stress. abstract_id: PUBMED:30416426 Things Become Appealing When I Win: Neural Evidence of the Influence of Competition Outcomes on Brand Preference. Against the background of an increasingly competitive market environment, the current study aimed to investigate whether and how victory and defeat, as two critical factors in competition outcomes, would affect consumers' preference for unfamiliar brands. In the experiment, participants' status of victory or defeat was induced by a pseudo-online game, followed by a main task of brand preference rating. Using the precise and intuitive attributes of neuroscientific techniques, we adopted event-related potentials to analyze brain activity precisely during brand information processing when individuals experienced victory or defeat. Behavioral data showed that individuals had a stronger preference for unfamiliar brands in victory trials than in defeat trials, even if the brand was completely unrelated to the competition; this indicated a transfer of valence. Three emotion-related event-related potential components, N1, P2 and late positive potentials, were elicited more negatively in victory trials than in defeat trials, indicating the existence of incidental emotions induced by victory or defeat. No significant correlation was found between any pair of ERP components and preference scores. These results suggest that the experience of victory and defeat can evoke corresponding incidental emotions without awareness, and further affect the individual's preference for unfamiliar brands. Therefore, playing a game before presenting brand information might help promote the brand by inducing a good impression of the brand in consumers. abstract_id: PUBMED:36389559 Corrigendum: Impact of victory and defeat on the perceived stress and autonomic regulation of professional eSports athletes. [This corrects the article DOI: 10.3389/fpsyg.2022.987149.].
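The eSports abstract above reports time-domain HRV indices (SDNN, rMSSD, pNN50) alongside frequency-domain ones (LF, HF, LF/HF). As a minimal illustration of how the time-domain indices are defined, the Python sketch below computes them from a short series of RR intervals; the interval values are synthetic stand-ins, not data from the study.

    import numpy as np

    # Synthetic RR intervals in milliseconds -- illustrative values only.
    rr = np.array([812, 790, 845, 830, 900, 860, 795, 880, 870, 825], dtype=float)

    sdnn = rr.std(ddof=1)                        # SDNN: std dev of all RR intervals
    diffs = np.diff(rr)                          # successive RR differences
    rmssd = np.sqrt(np.mean(diffs ** 2))         # rMSSD: root mean square of diffs
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50)  # pNN50: % of diffs exceeding 50 ms

    print(f"SDNN = {sdnn:.1f} ms, rMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.1f} %")

Higher SDNN and rMSSD broadly index parasympathetic activity, which is why the victory group's increases in these measures are read as greater vagal tone.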
abstract_id: PUBMED:24897662 The effect of prior victory or defeat in the same site as that of subsequent encounter on the determination of dyadic dominance in the domestic hen. We examined the effect of prior victory or defeat in the same site as that of a subsequent encounter on the outcome of dyadic encounters of domestic hens by placing them in two situations. In the first set of dyads, two unacquainted hens having experienced prior victory were introduced in the site where one had experienced victory. In the second set, two unacquainted hens having experienced defeat were introduced in the site where one had recently lost. Results indicate that victories are equally shared between individuals with prior victory experiences, while familiarity with the meeting site did not give any advantage. However, hens having previously lost were disadvantaged when the encounter occurred in the same site as that of their prior defeat. This demonstrates that previous social experience in a site is more important for the outcome of subsequent encounters for losers than winners. Losers seem to associate the site with the stressful effect of losing or being more easily dominated. abstract_id: PUBMED:22429747 Effects of victory and defeat on testosterone and cortisol response to competition: evidence for same response patterns in men and women. In this study, we report evidence from sport competition that is consistent with the biosocial model of status and dominance. Results show that testosterone levels rise and drop following victory and defeat in badminton players of both sexes, although at lower circulating levels in women. After losing the match, peak cortisol levels are observed in both sexes and correlational analyses indicate that defeat leads to rises in cortisol as well as to drops in testosterone, the percent change in hormone levels being almost identical in both sexes. In conclusion, results show the same pattern of hormonal responses to victory and defeat in men and women. abstract_id: PUBMED:32628033 From social status to emotions: Asymmetric contests predict emotional responses to victory and defeat. Social status plays a key role in expressing different emotions. However, little is known about which mechanisms underlie the variability of emotional responses that are linked to social hierarchy. Status instability, a natural characteristic of hierarchies, can help to untangle the status-emotion relationship. Therefore, we verified whether the emotional expressions of fighters could be predicted by the degree of asymmetry in their fighting abilities during a contest. Emotional expressions upon the announcement of victory or defeat were evaluated using three different methods: nonverbal behavior patterns, software-coded facial expressions, and raters' evaluation of athletes' emotional intensity (N = 824). Competition symmetry predicted contestants' emotional responses, especially happiness in victory and anger in defeat. Conversely, more asymmetric contests predicted expressions of sadness and shame upon defeat. The asymmetry in fighting abilities had no effect on athletes' expressions of pride. Our data confirmed that status instability may be crucial to explain the variety of emotional expressions in competitive contexts. abstract_id: PUBMED:15631583 Effects of implicit power motivation on men's and women's implicit learning and testosterone changes after social victory or defeat.
Two studies examined interactions of implicit power motivation and experimentally varied victory or defeat in a contest on implicit learning of a visuomotor sequence associated with the contest outcome and changes in testosterone and self-reported affect. In men and women, power motivation predicted enhanced learning (sequence-execution accuracy) after a victory and impaired learning after a defeat. In men, power motivation predicted testosterone increases among winners and decreases among losers, and testosterone decreases mediated the negative effect of power motivation on learning in losers. In women, power motivation predicted postcontest testosterone increases, particularly among losers. In both men and women, self-reported affective states were influenced only by contest outcome and were unrelated to participants' testosterone changes or implicit learning. abstract_id: PUBMED:28294145 Humans Conceptualize Victory and Defeat in Body Size. Size matters considerably for victory and defeat during competitive situations. Drawing on the embodied theory of cognition, we examined the reciprocal association between size and competition outcomes. To do so, we used the 'rock-paper-scissors game', whose outcome is not contingent on apparent physical size. In Experiment 1, participants were asked to judge whether the target gesture was a winning or a losing one. Compared to responses in the incompatible condition (small-winner and large-loser), those in the compatible condition (large-winner and small-loser) were quicker. In Experiment 2, we asked participants to adjust the size of gestures to correspond to gestures previously presented, and found that the winning gesture was estimated as much larger than the losing one. In line with our main hypothesis, size information can interfere with judgments about competition outcomes, and vice versa, even when the outcome is unrelated to body size. Answer: The study titled "Normal coronary angiograms: financial victory from the brink of clinical defeat?" (PUBMED:8697169) examined the financial implications of patients with suspected ischemic heart disease undergoing coronary angiography and receiving a normal angiogram result. The hypothesis was that a normal angiographic result would lead to a decrease in the consumption of health care resources following the angiogram. The study was a retrospective cost-benefit analysis comparing the 12 months before and after coronary angiography in 69 consecutive patients with normal angiograms. The results showed that there was a highly significant fall in all indices of resource consumption in the year following the investigation. The mean cost of care per patient before the investigation was 656.89 pounds, and there was a mean difference in the cost of care of 35.15 pounds per month after the investigation. The study concluded that patients who were clinically suspected to have coronary atherosclerosis but found to have normal coronary arteries at angiography were heavy consumers of health care resources. Early investigation for these patients was deemed safe and had beneficial resource consequences in the medium term, with the cost of coronary angiography being recouped in a mean time of 18 months. 
Therefore, the study suggests that while the clinical expectation of finding coronary atherosclerosis was not met (which could be seen as a clinical defeat), the outcome of a normal angiogram led to a significant reduction in health care resource consumption, which can be considered a financial victory for the health care system.
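The cost figures in PUBMED:8697169 imply a simple payback calculation: a mean saving of 35.15 pounds per month recouping the procedure cost in roughly 18 months implies a procedure cost of about 633 pounds. That cost is not quoted in the abstract and is back-calculated here; the short Python sketch below is just that arithmetic made explicit.

    # Consistency check on the figures reported in PUBMED:8697169.
    monthly_saving = 35.15    # pounds per month, from the abstract
    recoup_months = 18        # mean recoup time, from the abstract

    implied_procedure_cost = monthly_saving * recoup_months
    print(f"Implied angiography cost: {implied_procedure_cost:.2f} pounds")  # ~632.70

    # Conversely, for any assumed procedure cost the payback period would be:
    def payback_months(procedure_cost: float, saving_per_month: float) -> float:
        return procedure_cost / saving_per_month

    print(f"Payback for a 632.70-pound procedure: "
          f"{payback_months(632.70, 35.15):.1f} months")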
Instruction: Parenchymal density changes in acute pulmonary embolism: Can quantitative CT be a diagnostic tool? Abstracts: abstract_id: PUBMED:27855350 Parenchymal density changes in acute pulmonary embolism: Can quantitative CT be a diagnostic tool? A preliminary study. Purpose: To determine the ability of quantitative CT (QCT) to define parenchymal density changes in acute pulmonary embolism (PE). Material & Methods: Mean lung density (MLD) and percentage distribution values (PDV) were calculated in 34 patients suspected of PE using a software application based on computerized volumetric anatomical segmentation. Results: Total, left, and right MLD differed significantly between emboli-positive (n=23) and emboli-negative (n=11) groups (p<0.006, p<0.009, p<0.014). PDVs differed between groups (p<0.05) except for LUZ and RLZ. When PE was present in lobe and/or segment branches, PDVs were significantly lower except RUZ. Conclusion: QCT is a promising application for defining parenchymal density changes in PE, revealing the potential functional impact of emboli. This preliminary study suggests QCT could provide added value to CTPA in peripheral PE. abstract_id: PUBMED:32901353 Assessment of extra-parenchymal lung involvement in asymptomatic cancer patients with COVID-19 pneumonia detected on 18F-FDG PET-CT studies. Background: Lung involvement in patients with coronavirus disease 2019 (COVID-19) undergoing PET-CT has been previously reported. However, FDG uptake outside the lung parenchyma has been poorly characterized. We evaluated the extra-parenchymal lung involvement in asymptomatic cancer patients with COVID-19 pneumonia through 18F-FDG PET-CT. Methods: A total of 1079 oncologic 18F-FDG PET-CT studies were performed between February 2 and May 18, 2020. Confirmed COVID-19 pneumonia was defined as characteristic ground-glass bilateral CT infiltrates and positive genetic/serologic tests. Nonmetastatic extra-parenchymal lung PET-CT findings were evaluated through qualitative (visual), quantitative (measurements on CT), and semiquantitative (maximum standardized uptake value: SUVmax on PET) interpretation. Clinical data, blood tests, and PET-CT results were compared between patients with and without COVID-19 pneumonia. Results: A total of 23 18F-FDG PET-CT scans with pulmonary infiltrates suggestive of COVID-19 and available laboratory data were included: 14 positive (cases) and 9 negative (controls) for COVID-19 infection, representing a low prevalence of COVID-19 pneumonia (1.3%). Serum lactate dehydrogenase and D-dimers tended to be increased in COVID-19 cases. Extra-parenchymal lung findings were found in 42.9% of patients with COVID-19, most frequently as mediastinal and hilar nodes with 18F-FDG uptake (35.7%), followed by incidental pulmonary embolism in two patients (14.3%). In the control group, extra-pulmonary findings were observed in a single patient (11.1%) with 18F-FDG uptake located in mediastinal, hilar, and cervical nodes. Nasopharyngeal and hepatic SUVmax were similar in both groups. Conclusion: In cancer patients with asymptomatic COVID-19 pneumonia, 18F-FDG PET-CT findings are more frequently limited to thoracic structures, suggesting that an early and silent distant involvement is very rare. Pulmonary embolism is a frequent and potentially severe finding, raising special concern. PET-CT can provide new pathogenic insights about this novel disease.
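The QCT abstract above (PUBMED:27855350) derives mean lung density and zonal percentage distribution values from a volumetric segmentation. As a rough sketch of the idea, and not the study's actual software pipeline, the Python snippet below computes a mean lung density in Hounsfield units over a synthetic masked volume and splits it along the cranio-caudal axis, loosely mirroring the upper/lower-zone comparisons (RUZ, RLZ, ...) in the abstract; the volume, mask, and zone split are all illustrative assumptions.

    import numpy as np

    # Toy CT volume in Hounsfield units plus a boolean lung mask, standing in
    # for the paper's computerized volumetric segmentation. Values are synthetic.
    rng = np.random.default_rng(0)
    hu = rng.normal(-750, 120, size=(40, 64, 64))   # (slices, rows, cols)
    lung = hu < -400                                # crude illustrative mask

    mld = hu[lung].mean()                           # mean lung density, in HU
    print(f"MLD = {mld:.1f} HU")

    # Illustrative zonal split along the cranio-caudal (slice) axis.
    upper, lower = np.split(np.arange(hu.shape[0]), 2)
    for name, idx in [("upper", upper), ("lower", lower)]:
        zone = hu[idx][lung[idx]]
        print(f"{name}-zone MLD = {zone.mean():.1f} HU ({zone.size} voxels)")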
abstract_id: PUBMED:33799729 Acute Pulmonary Embolism Severity Assessment Evaluated with Dual Energy CT Perfusion Compared to Conventional CT Angiographic Measurements. The purpose of the study was to investigate whether Dual Energy CT (DECT) can be used as a diagnostic tool to assess the severity of acute pulmonary embolism (PE) by correlating parenchymal perfusion defect volume, obstruction score and right ventricular-to-left ventricular (RV/LV) diameter ratio using CT angiography (CTA) and DECT perfusion imaging. A total of 43 patients who underwent CTA and DECT perfusion imaging with clinical suspicion of acute PE were retrospectively included in the study. In total, 25 of these patients had acute PE findings on CTA. DECT-assessed perfusion defect volumes (PDvol) were quantified automatically and semiautomatically. Two CTA-based methods for risk assessment in patients with acute PE were evaluated: the RV/LV diameter ratio and the Modified Miller obstruction score. Automatic PDvol had a weak correlation (r = 0.47, p = 0.02) and semiautomatic PDvol (r = 0.68, p < 0.001) had a moderate correlation with the obstruction score in patients with confirmed acute PE, while only semiautomatic PDvol (r = 0.43, p = 0.03) had a weak correlation with the RV/LV diameter ratio. Our data indicate that PDvol assessed by the DECT software technique may be a helpful tool to assess the severity of acute PE when compared to obstruction score and RV/LV diameter ratio. abstract_id: PUBMED:32420413 CTPA with a conventional CT at 100 kVp vs. a spectral-detector CT at 120 kVp: Comparison of radiation exposure, diagnostic performance and image quality. Purpose: To compare CT pulmonary angiographies (CTPAs) as well as phantom scans obtained at 100 kVp with a conventional CT (C-CT) to virtual monochromatic images (VMI) obtained with a spectral detector CT (SD-CT) at equivalent dose levels as well as to compare the radiation exposure of both systems. Material And Methods: In total, 2110 patients with suspected pulmonary embolism (PE) were examined with both systems. For each system (C-CT and SD-CT), imaging data of 30 patients with the same mean CT dose index (4.85 mGy) was used for the reader study. C-CT was performed with 100 kVp and SD-CT was performed with 120 kVp; for SD-CT, virtual monochromatic images (VMI) with 40, 60 and 70 keV were calculated. All datasets were evaluated by three blinded radiologists regarding image quality, diagnostic confidence and diagnostic performance (sensitivity, specificity). Contrast-to-noise ratio (CNR) for different iodine concentrations was evaluated in a phantom study. Results: CNR was significantly higher with VMI at 40 keV compared to all other datasets. Subjective image quality as well as sensitivity and specificity showed the highest values with VMI at 60 keV and 70 keV. Here, a significant difference versus 100 kVp (C-CT) was found for image quality. The highest sensitivity was found using VMI at 60 keV with a sensitivity of more than 97 % for all localizations of PE. For diagnostic confidence and subjective contrast, the highest values were found with VMI at 40 keV. Conclusion: Higher levels of diagnostic performance and image quality were achieved for CTPAs with SD-CT compared to C-CT given similar dose levels. In the clinical setting, SD-CT may be the modality of choice as additional spectral information can be obtained.
abstract_id: PUBMED:37124638 Artificial Intelligence Tool for Detection and Worklist Prioritization Reduces Time to Diagnosis of Incidental Pulmonary Embolism at CT. Purpose: To evaluate the diagnostic efficacy of artificial intelligence (AI) software in detecting incidental pulmonary embolism (IPE) at CT and shorten the time to diagnosis with use of radiologist reading worklist prioritization. Materials and Methods: In this study with historical controls and prospective evaluation, regulatory-cleared AI software was evaluated to prioritize IPE on routine chest CT scans with intravenous contrast agent in adult oncology patients. Diagnostic accuracy metrics were calculated, and temporal end points, including detection and notification times (DNTs), were assessed during three time periods (April 2019 to September 2020): routine workflow without AI, human triage without AI, and worklist prioritization with AI. Results: In total, 11 736 CT scans in 6447 oncology patients (mean age, 63 years ± 12 [SD]; 3367 men) were included. Prevalence of IPE was 1.3% (51 of 3837 scans), 1.4% (54 of 3920 scans), and 1.0% (38 of 3979 scans) for the respective time periods. The AI software detected 131 true-positive, 12 false-negative, 31 false-positive, and 11 559 true-negative results, achieving 91.6% sensitivity, 99.7% specificity, 99.9% negative predictive value, and 80.9% positive predictive value. During prospective evaluation, AI-based worklist prioritization reduced the median DNT for IPE-positive examinations to 87 minutes (vs routine workflow of 7714 minutes and human triage of 4973 minutes). Radiologists' missed rate of IPE was significantly reduced from 44.8% (47 of 105 scans) without AI to 2.6% (one of 38 scans) when assisted by the AI tool (P < .001). Conclusion: AI-assisted workflow prioritization of IPE on routine CT scans in oncology patients showed high diagnostic accuracy and significantly shortened the time to diagnosis in a setting with a backlog of examinations. abstract_id: PUBMED:34679538 Diagnosis of Pulmonary Embolism in Unenhanced Dual Energy CT Using an Electron Density Image. Dual-energy computed tomography (CT) is a promising tool, providing both anatomical information and material properties. Using spectral information such as iodine mapping and virtual monoenergetic reconstruction, dual-energy CT showed added value over pulmonary CT angiography in the diagnosis of pulmonary embolism. However, the role of non-contrast-enhanced dual energy CT in pulmonary embolism has never been reported. Here, we report a case of acute pulmonary embolism detected on an electron density image from an unenhanced dual-energy CT using a dual-layer detector system. abstract_id: PUBMED:23555409 Diagnostic Imaging of Pulmonary Thromboembolism by Multidetector-row CT. For diagnosis of pulmonary thromboembolism, multidetector-row computed tomography (CT) is a minimally invasive imaging technique that can be performed rapidly with high sensitivity and specificity, and has been increasingly employed as the imaging modality of first choice for this disease.
Since deep vein thrombosis in the legs, which is important as a thrombus source, can be evaluated immediately after the diagnosis of pulmonary thromboembolism, this diagnostic method is considered to provide important information when deciding on a comprehensive therapeutic strategy for this disease. abstract_id: PUBMED:26932279 Diagnostic Yield of Pulmonary CT Angiography in the Evaluation of Pulmonary Embolisms Treated at the Puerto Rico Medical Center from 2008 to 2012. Objective: The objective of this study was to determine the diagnostic yield of pulmonary CT angiography (PCTA) in the evaluation of pulmonary embolisms treated at the Puerto Rico Medical Center from 2008 to 2012. Methods: A total of 1,004 CT angiograms were reviewed in the evaluation of pulmonary embolisms. Patient records covering 2008 to 2012 were obtained from the picture archiving and communication system (PACS) of the Puerto Rico Medical Center. Follow-up studies and those of pediatric patients were excluded from the study. The results were recorded as either positive or negative for pulmonary embolism, according to the final report rendered by board-certified radiologists. Results: Of the 1,004 patient records reviewed, 964 were included in the study. Forty-six out of the total studies reviewed were positive, while a total of 918 studies were negative. A mean diagnostic yield of 4.8% (SD = 0.63) was obtained. Conclusion: At the Puerto Rico Medical Center, the mean diagnostic yield in the evaluation of pulmonary embolism using PCTA was 4.8%, which is in concordance with those of several previous studies, all of which had similar low yields. New diagnostic algorithms for efficiently employing PCTA for the evaluation of pulmonary embolisms are discussed herein. abstract_id: PUBMED:3392253 Computed tomography of pulmonary thromboembolism and infarction. Computed tomographic findings in 18 patients with pulmonary thromboembolism are retrospectively reviewed. In the majority of patients, thromboembolism was not suspected clinically. The CT findings can be divided into two groups: vascular and parenchymal changes. The most frequent vascular finding is an intraluminal filling defect or defects due to thrombus. The most frequent parenchymal finding is a triangular (wedge-shaped) pleural-based soft tissue attenuation lesion. Although CT is not a primary diagnostic tool in the evaluation of pulmonary thromboembolism, CT may be helpful in the diagnosis of pulmonary embolism when evaluating an undiagnosed parenchymal density. abstract_id: PUBMED:38286507 The Diagnostic Accuracy of the Hounsfield Unit Value in Pulmonary Embolism. Objective: Pulmonary embolism (PE) is a vascular disease. Computed tomography pulmonary angiography (CTPA) is the radiological imaging technique used to diagnose PE. In this study, we aimed to demonstrate the diagnostic accuracy of the Hounsfield Unit (HU) value for PE based on the hypothesis that acute thrombosis causes an increase in HU value on computed tomography (CT). Methods: This research was a single-center, retrospective study. Patients presenting to the emergency department (ED) diagnosed with PE on CTPA were enrolled as the study group. In addition, patients admitted to the same emergency department who were not diagnosed with PE and had non-contrast CT scans were included as the control group. A receiver operating characteristic (ROC) curve was produced to assess the diagnostic accuracy of HU values in predicting PE.
Results: The study population (N=74) consisted of a study group (N=46) and a control group (N=28). The sensitivity and specificity of the HU value for predicting PE on thoracic CT were found to be 61.5% and 96.4% at a value of 54.8 (area under the curve (AUC): 0.690) for the right main pulmonary artery; 65.0% and 96.4% at a value of 55.9 (AUC: 0.736) for the left main pulmonary artery; 44.4% and 96.4% at a value of 62.7 (AUC: 0.615) for the right interlobar artery; and 60.0% and 92.9% at a value of 56.7 (AUC: 0.736) for the left interlobar artery. Conclusion: HU values may exhibit high diagnostic specificity on CT for thrombi up to the interlobar level. An HU value exceeding 54.8 up to the interlobar level may raise suspicion of the presence of PE. Answer: Yes, quantitative CT (QCT) can be a diagnostic tool for parenchymal density changes in acute pulmonary embolism (PE). A preliminary study has shown that QCT can define parenchymal density changes in PE, revealing the potential functional impact of emboli. In this study, mean lung density (MLD) and percentage distribution values (PDV) were calculated using a software application based on computerized volumetric anatomical segmentation. The results indicated that total, left, and right MLD differed significantly between emboli-positive and emboli-negative groups, and PDVs differed between groups except for certain zones. When PE was present in lobe and/or segment branches, PDVs were significantly lower except for the right upper zone. This suggests that QCT could provide added value to CT pulmonary angiography (CTPA) in the diagnosis of peripheral PE (PUBMED:27855350). Additionally, Dual Energy CT (DECT) has been investigated for its ability to assess the severity of acute PE by correlating parenchymal perfusion defect volume, obstruction score, and right ventricular-to-left ventricular (RV/LV) diameter ratio using CT angiography and DECT perfusion imaging. The study found that perfusion defect volume assessed by the DECT software technique may be a helpful tool to assess the severity of acute PE when compared to obstruction score and RV/LV diameter ratio (PUBMED:33799729). Therefore, these studies support the potential of QCT and DECT as diagnostic tools for assessing parenchymal density changes and the severity of acute PE.
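The diagnostic accuracy figures quoted for the AI triage tool (PUBMED:37124638) follow directly from its confusion-matrix counts. The short Python check below uses exactly the counts reported in that abstract and reproduces the published percentages.

    # Confusion-matrix counts from PUBMED:37124638.
    tp, fn, fp, tn = 131, 12, 31, 11559

    sensitivity = tp / (tp + fn)   # 131/143      -> 91.6%
    specificity = tn / (tn + fp)   # 11559/11590  -> 99.7%
    ppv = tp / (tp + fp)           # 131/162      -> 80.9%
    npv = tn / (tn + fn)           # 11559/11571  -> 99.9%

    print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
          f"PPV={ppv:.1%}, NPV={npv:.1%}")

Note how the low IPE prevalence (~1%) drives the gap between the high NPV and the more modest PPV.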
Instruction: Sweat conductivity: an accurate diagnostic test for cystic fibrosis? Abstracts: abstract_id: PUBMED:24485874 Sweat conductivity: an accurate diagnostic test for cystic fibrosis? Background: The sweat chloride test is the gold standard test for cystic fibrosis (CF) diagnosis. Sweat conductivity is widely used although still considered a screening test. Methods: This was a prospective, cross-sectional, diagnostic study conducted at the laboratory of the Instituto da Criança of the Hospital das Clínicas, São Paulo, Brazil. Sweat chloride (quantitative pilocarpine iontophoresis) and sweat conductivity tests were simultaneously performed in patients referred for a sweat test between March 2007 and October 2008. Conductivity and chloride cut-off values used to rule out or diagnose CF were <75 and ≥90 mmol/L and <60 and ≥60 mmol/L, respectively. The ROC curve method was used to calculate the sensitivity, specificity, positive (PPV) and negative predictive value (NPV), as well as the respective 95% confidence intervals and to calculate the area under the curve for both tests. The kappa coefficient was used to evaluate agreement between the tests. Results: Both tests were performed in 738 children, and CF was ruled out in 714 subjects; the median sweat chloride and conductivity values were 11 and 25 mmol/L in these populations, respectively. Twenty-four patients who had received a diagnosis of CF presented median sweat chloride and conductivity values of 87 and 103 mmol/L, respectively. Conductivity values above 90 mmol/L had 83.3% sensitivity, 99.7% specificity, 90.9% PPV and 99.4% NPV to diagnose CF. The best conductivity cut-off value to exclude CF was <75 mmol/L. Good agreement was observed between the tests (kappa: 0.934). Conclusions: The sweat conductivity test yielded a high degree of diagnostic accuracy and it showed good agreement with sweat chloride. We suggest that it should play a role as a diagnostic test for CF in the near future. abstract_id: PUBMED:31744807 How to perform and interpret the sweat test. Cystic fibrosis (CF) is the most common life-threatening autosomal-recessive disease affecting Caucasians in the western world. The sweat test is the main diagnostic test for CF. It is indicated as part of the clinical assessment for infants that have been picked up by the national neonatal screening programme. It may also be requested where clinical suspicion of a diagnosis of CF exists despite normal screening results. This article outlines the physiological basis behind sweat testing and the technical aspects of performing the test. Indications for performing the test are also considered. The article aims to provide clinicians with a guide to interpretation of results. abstract_id: PUBMED:30609259 Comparison of two sweat test systems for the diagnosis of cystic fibrosis in newborns. Objectives: In the national newborn screening programme for CF in Switzerland, we compared the performance of two sweat test methods by investigating the feasibility and diagnostic performance of the Macroduct® collection method (with chloride measurement) and Nanoduct® test (measuring conductivity) for diagnosing CF. Study-design: We included all newborns with a positive screening result between 2011 and 2015 who were referred to a CF-centre for sweat testing. In the CF-centre, a Macroduct and Nanoduct sweat test were performed simultaneously. If sweat test results were positive or borderline, a DNA analysis was performed. Final diagnosis was based on genetic mutations.
Results: Over 5 years, 445 children were screened positive and in 413 (114 with CF) at least one sweat test was performed (median age at first test, 22 days); both tests were performed in 371 children. A sweat test result was more often available with the Nanoduct compared to the Macroduct (79 vs 60%, P < 0.001). The Nanoduct was as sensitive as the Macroduct in identifying newborns with CF (sensitivity 98 vs 99%) but less specific (specificity 79 vs 93%; P-value comparing ROC curves = 0.033). Conclusions: This national multicentre study revealed high failure rates for Macroduct and Nanoduct in newborns in real-life practice. While this needs to be addressed, our results suggested that performing the Nanoduct in addition to the Macroduct might speed up the diagnostic process because it more often yields valid results with comparable diagnostic performance. The addition of the Nanoduct sweat test can therefore help to reduce the stressful time of uncertainty for parents and to start appropriate treatment earlier. abstract_id: PUBMED:596925 Limitations of diagnostic value of the sweat test. The sweat test, even if carried out by an experienced technician, sometimes lacks reproducibility owing, presumably, to physiological variations (patient's diet, temperature, and other factors at present unrecognized). Some patients are particularly prone to exhibit this variability and in them a single sweat test is almost valueless. The aldosterone status is believed to be responsible for a reciprocal relationship between sweat sodium and potassium concentrations: tests done on 8 patients show that a high sweat potassium is associated with a correspondingly lower sodium, a circumstance which must be borne in mind when interpreting a patient's sweat sodium. Of 30 patients presenting with a variety of symptoms compatible with a diagnosis of cystic fibrosis and with sweat sodium ranging from 50 to 75 mEq/L (50-75 mmol/L), only 4 have proved to have cystic fibrosis after several years of observations; 13 have later been diagnosed as having asthma. The problem of the 'grey area' of uncertainty is aggravated by the heterozygous state which is also associated with a sweat sodium in this range. Repeated sweat tests are indicated if the sweat sodium lies within the 'grey area', and the diagnostic importance accorded the test should diminish as the sodium value approaches this area. The diagnosis of cystic fibrosis must remain in doubt unless there is strong supportive clinical evidence. abstract_id: PUBMED:23056867 Comparison of classic sweat test and crystallization test in diagnosis of cystic fibrosis. Objective: Sweat chloride measurement is considered a standard diagnostic tool for cystic fibrosis (CF). This study was performed to compare sweat chloride values obtained by quantitative pilocarpine iontophoresis (classic test) with sweat crystallization detected by direct observation of a drop of perspiration under light microscopy in patients with and without CF. Methods: The tests using both techniques were performed simultaneously in patients with and without CF. A cutoff value of ≥60 mmol/L chloride concentration for the classic sweat test was considered diagnostic of CF. In the crystallization method, observation of typical dendritic forms of salt crystals under light microscopy was interpreted as positive. Findings: Sixty patients suspected of CF (31 males and 29 females) with age range of 9 months to 2 years underwent the sweat test using both techniques.
Median sweat chloride values were 26.13±10.85 mmol/L in the group with a negative sweat test and 72.76±12.78 mmol/L in the group with a positive sweat test, respectively. All the patients who had a positive sweat test by the classic method showed typical dendritic forms of salt crystals in the sweat crystallization test, giving the test 100% sensitivity (95% CI: 93.1-100). Only one of the 31 subjects with negative results on the classic sweat test had a positive result on the crystallization sweat test, giving the test 96.7% specificity (95% CI: 92.9-100). The time needed to perform the crystallization test was significantly shorter than for the classic method, and its cost was also lower. Conclusion: There was a good correspondence between the two studied sweat test methods. These results suggest the sweat crystallization test as an alternative for detecting CF with high sensitivity and specificity. abstract_id: PUBMED:28017620 Biological variability of the sweat chloride in diagnostic sweat tests: A retrospective analysis. Background: The sweat test is the current gold standard for the diagnosis of cystic fibrosis (CF). CF is unlikely when sweat chloride (Clsw) is lower than 30 mmol/L, Clsw >60 mmol/L is suggestive of CF, with values between 30 and 60 mmol/L considered intermediate. To correctly interpret a sweat chloride value, the biological variability of the sweat chloride has to be known. Methods: Sweat tests performed in two centers using the classic Gibson and Cooke method were retrospectively reviewed (n=5904). Within-test variability of Clsw was measured by comparing results from the right and left arm collected on the same day. Between-test variability was calculated from subjects with sweat tests performed on more than one occasion. Results: Within-test variability of Clsw calculated in 1022 subjects was low with differences between -3.2 (p5) and +3.6 mmol/L (p95). Results from left and right arm were classified differently in only 3 subjects. Between-test variability of Clsw in 197 subjects was larger, with differences between -18.2 mmol/L (p5) and +14.1 mmol/L (p95) between repeat tests. Changes in diagnostic conclusion were seen in 55/197 subjects, the most frequent being a change from the indeterminate to the 'CF unlikely' range (48/102). Conclusion: Variability of sweat chloride is substantial, with frequent changes in diagnostic conclusion, especially in the intermediate range. abstract_id: PUBMED:24862724 A new method of sweat testing: the CF Quantum® sweat test. Background: Conventional methods of sweat testing are time consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test. This study tests the diagnostic accuracy and analytic validity of the CFQT. Methods: Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and Bland-Altman plot. Sensitivity and specificity were calculated as well as the means and coefficient of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference between the CFQT and conventional sweat testing. Results: The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97-0.99).
The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94-100%) and 96% (95% confidence interval: 89-99%), respectively. In one center of this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher in the CFQT method (16.5%) compared to conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. Conclusions: The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. abstract_id: PUBMED:31859647 Review of the sweat test indications in a Brussels' cystic fibrosis reference center. The sweat test is the gold standard for the diagnosis of cystic fibrosis (CF). The aim of our study was to identify the indications leading to the performance of a sweat test and those that led to the diagnosis of CF. Methodology: We collected data on all sweat tests performed between 1 March 2008 and 28 February 2015. They were analyzed following Rosenstein diagnostic criteria (1998): clinical manifestations suggesting CF, positive neonatal screening (≥ 1 positive assay of immunoreactive trypsin) or familial history of CF. Results: We reviewed 1,208 sweat tests over this period. Patients were aged from 13 days to 79 years. Indications were: clinical events (94.0%), a positive neonatal screening (3.7%) and a family history (2.3%). Of the 20 newly diagnosed patients, a positive neonatal screening was the main indication for the sweat test (55%). A positive neonatal screening (p<0.0001), a family history (p<0.0001) and pulmonary signs associated with digestive signs (p=0.004) were more frequently found in these patients. Conclusion: Sweat test indications are mostly clinical and mainly pulmonary. This study confirms that a sweat test should be performed in case of pulmonary manifestations suggesting CF, especially if these are associated with digestive manifestations. abstract_id: PUBMED:35063396 Standards of care guidance for sweat testing; phase two of the ECFS quality improvement programme. More than five decades after the introduction of the quantitative pilocarpine iontophoresis technique, surveys still highlight inconsistencies in the performance and reporting of sweat tests in Europe. The sweat test remains key for the Cystic Fibrosis (CF) diagnostic pathway for all age groups, as it reflects the basic pathophysiological defect in the sweat gland. It is also critical following newborn screening as a confirmatory diagnostic step. Despite its importance, sweat test quality is variable whether performed in the laboratory or as a point-of-care test. The ECFS DNWG aims to improve sweat test performance, taking into account the barriers and issues identified in the European survey, the previous step in the ECFS sweat test project. This manuscript proposes a grading of sweat test guidance from "acceptable" to "optimal", aiming to pragmatically improve quality while taking into account local situations, especially in resource-limited settings. abstract_id: PUBMED:34017807 Sweat Testing and Recent Advances. Cystic fibrosis (CF) is the most common fatal genetic disease of the Caucasian population.
Sweat testing is the principal diagnostic test for CF, and it is used for the evaluation of infants with positive CF newborn screening (NBS) and in patients with clinical findings suggesting CF. This article describes the classical sweat test method in detail and also provides an overview of recent advances. Answer: Sweat conductivity has been evaluated as a diagnostic test for cystic fibrosis (CF) and has shown promise in terms of diagnostic accuracy. A study conducted at the Instituto da Criança of the Hospital das Clínicas in São Paulo, Brazil, found that sweat conductivity values above 90 mmol/L had 83.3% sensitivity, 99.7% specificity, 90.9% positive predictive value, and 99.4% negative predictive value for diagnosing CF. The study suggested that a conductivity cut-off value of less than 75 mmol/L was the best to exclude CF. The results indicated good agreement between sweat conductivity and the traditional sweat chloride test, with a kappa coefficient of 0.934, suggesting that sweat conductivity could play a role as a diagnostic test for CF in the near future (PUBMED:24485874). In Switzerland, a comparison of the Macroduct® collection method (measuring chloride) and the Nanoduct® test (measuring conductivity) for diagnosing CF in newborns revealed that the Nanoduct was equally sensitive but less specific than the Macroduct. The study suggested that adding the Nanoduct test could speed up the diagnostic process because it more often yields valid results with comparable diagnostic performance (PUBMED:30609259). However, it is important to note that the sweat test, whether measuring chloride or conductivity, can sometimes lack reproducibility due to physiological variations, and some patients may exhibit variability in their results. This variability can affect the interpretation of the test, especially when sweat sodium values lie within the 'grey area' (PUBMED:596925). Overall, while sweat conductivity has shown high diagnostic accuracy and good agreement with the traditional sweat chloride test, it is essential to consider the limitations and variability associated with sweat testing when interpreting results. Sweat conductivity is a valuable tool in the diagnostic process for CF, but it should be used in conjunction with other clinical assessments and diagnostic methods (PUBMED:24485874; PUBMED:30609259; PUBMED:596925).
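The accuracy figures from the São Paulo study (PUBMED:24485874) can likewise be reproduced from a 2x2 table. The counts used in the Python sketch below (TP=20, FN=4, FP=2, TN=712) are back-calculated from the reported percentages and cohort sizes (24 CF, 714 non-CF); they are an inference, not figures quoted directly in the abstract.

    # Back-calculated 2x2 counts for conductivity >= 90 mmol/L (PUBMED:24485874).
    tp, fn, fp, tn = 20, 4, 2, 712

    print(f"sensitivity = {tp / (tp + fn):.1%}")   # 20/24   -> 83.3%
    print(f"specificity = {tn / (tn + fp):.1%}")   # 712/714 -> 99.7%
    print(f"PPV         = {tp / (tp + fp):.1%}")   # 20/22   -> 90.9%
    print(f"NPV         = {tn / (tn + fn):.1%}")   # 712/716 -> 99.4%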
Instruction: Does esophageal function vary at the striated and smooth muscle segments in functional chest pain? Abstracts: abstract_id: PUBMED:12358233 Does esophageal function vary at the striated and smooth muscle segments in functional chest pain? Objective: Hypersensitivity of the esophageal wall may contribute to the pathogenesis of functional chest pain. Whether the hypersensitivity is more uniformly distributed along the esophageal wall or is segmental is not known. Methods: Graded balloon distentions were performed randomly at the smooth muscle as well as at the striated muscle portions of the esophagus in 20 patients with functional chest pain and in 15 healthy volunteers, using impedance planimetry. Sensory thresholds and cross-sectional area were examined in relation to the esophageal wall tension, and the results were compared between two levels as well as the two groups of subjects. Results: During balloon distention, 17 (85%) patients reported typical chest pain, 11 (55%) at both levels, four (20%) at the smooth muscle level, and two (10%) at the striated muscle level only. The sensory thresholds for perception, discomfort, or pain were lower in patients than in controls (p < 0.05). The cross-sectional area and the esophageal wall stiffness at the smooth muscle level were lower than those obtained at the striated muscle level both in controls and in patients (p < 0.01). The wall tension at which moderate discomfort and pain were reported was lower in patients than controls (p < 0.05). Conclusions: Although in most patients the esophagus is uniformly hypersensitive, in some either the smooth muscle or the striated muscle segment can be hypersensitive. If considering balloon distention at only one level, we recommend balloon placement at 10 cm above the lower esophageal sphincter because of a higher yield of hypersensitivity. abstract_id: PUBMED:25394785 Exaggerated smooth muscle contraction segments on esophageal high-resolution manometry: prevalence and clinical relevance. Background: Two smooth muscle contraction segments (S2, S3) on esophageal high-resolution manometry (HRM) demonstrate varying contraction vigor in symptomatic patients. Significance of isolated exaggerated smooth muscle contraction remains unclear. Methods: High-resolution manometry studies were reviewed in 272 consecutive patients (56.4 ± 0.8 years, 62% F) and compared to 21 healthy controls (27.6 ± 0.6 years, 52% F), using HRM tools (distal contractile integral, DCI; distal latency, DL; integrated relaxation pressure, IRP), Chicago Classification (CC) and multiple rapid swallows (MRS). Segments were designated merged when the trough between S2 and S3 was ≥150 mmHg, and exaggerated S3 when peak S3 amplitude was ≥150 mmHg without merging with S2. Presenting symptoms and global symptom severity (on 100 mm visual analog scale) were recorded. Prevalence of merged and exaggerated segments was determined, and characteristics compared to symptomatic patients with normal HRM, and to healthy controls. Key Results: Merged segments were identified in 5.6%, and exaggerated S3 in another 12.5%, but only 17-50% had a CC diagnosis; one healthy control had merged segments. DCI with wet swallows was similar in cohorts with merged and exaggerated segments (p = 0.7), significantly higher than symptomatic patients with normal HRM and healthy controls (p ≤ 0.003 for each comparison).
Incomplete inhibition and prominent DCI augmentation on MRS (p ≤ 0.01), and presenting symptoms (chest pain and dysphagia, p = 0.04) characterized exaggerated segments, but not demographics or symptom burden. Conclusions & Inferences: Merged esophageal smooth muscle segments and exaggerated S3 may represent hypermotility phenomena from abnormal inhibition and/or excitation, and are not uniformly identified by the CC algorithm. abstract_id: PUBMED:7901108 Measurement of human esophageal tone in vivo. Background: Conventional perfused manometry has led to extensive study of phasic contractile activity in the human esophagus, but little is known about esophageal tonic activity. The aims of this study were to assess esophageal smooth and striated muscle tone and the effect of a smooth muscle relaxant (amyl nitrite, 0.3 mL inhalation) on this tone. Methods: Using a computerized isobaric recording system (barostat), esophageal tonic activity in 13 healthy subjects was recorded. Two parameters were analyzed: compliance and resistance to initial stretch (resting tone). Results: The smooth muscle esophagus was significantly more compliant but presented a greater resistance to initial stretch than the striated muscle section. Amyl nitrite affected only the smooth muscle section, significantly increasing compliance and decreasing the resistance to initial stretch. Significant chest pain and/or discomfort occurred only during striated muscle esophagus distension (10 of the 13 subjects at 25 mm Hg distending pressure). Conclusions: Active tone is present in the smooth muscle esophagus and can be modulated by a smooth muscle relaxant. Compliance and resting tone differ between the smooth and striated muscle segments of the esophagus. Assessment of tone in patients with esophageal motor disorders and noncardiac chest pain should provide further insights into these disorders. abstract_id: PUBMED:17999648 Oesophageal tone and sensation in the transition zone between proximal striated and distal smooth muscle oesophagus. Previous studies have shown that the proximal striated muscle oesophagus is less compliant and more sensitive than the distal smooth muscle oesophagus. Conventional and high resolution manometry described a transition zone between striated and smooth muscle oesophagus. We aimed to evaluate oesophageal tone and sensitivity at the transition zone of the oesophagus in healthy volunteers. In 18 subjects (seven men, mean age: 28 years) an oesophageal barostat study was performed. Tone and sensitivity were assessed using stepwise isobaric distensions with the balloon located at the transition zone and at the distal oesophagus in random order. To study the effect induced on the transition zone by a previous distension at the distal oesophagus and vice versa, an identical protocol was repeated after 7 days with inverted order. Initial distension of a region is referred to as 'naïf' distension and distension of a region following the distension of the other segment as 'primed' distension. Assessment of three oesophageal symptoms (chest pain, heartburn and 'other') was obtained at the end of every distension step. Compliance was significantly higher in the transition zone than in the distal oesophagus (1.47 ± 0.14 vs 1.09 ± 0.09 mL/mmHg, P = 0.03) after 'naïf' distensions. This difference was not observed during 'primed' distensions. Higher sensitivity at the transition zone level was found in 11/18 (61%) subjects compared to 6/18 (33%, P < 0.05) at the smooth muscle oesophagus.
Chest pain and the 'other' symptom were more often induced by distension of the transition zone, whereas heartburn was equally triggered by distension of either region. The transition zone is more compliant and more sensitive than the smooth muscle oesophagus. abstract_id: PUBMED:24766344 Esophageal mucosal mast cell infiltration and changes in segmental smooth muscle contraction in noncardiac chest pain. Mast cells release potent mediators that alter enteric nerve and smooth muscle functions and may contribute to the pathogenesis of functional gastrointestinal disorders. The goal of this study was to determine if mucosal mast cell infiltration was associated with smooth muscle segmental changes in esophageal contraction. All patients with noncardiac chest pain (NCCP) were divided into two groups consisting of patients with non-erosive reflux disease or functional chest pain (FCP) according to the results of ambulatory 24-hour esophageal pH monitoring and high-resolution manometry. Pressure-volume (PV) was calculated by multiplying the length of the esophageal segment, duration of the contraction, and mean pressure over the entire space-time box (P mean). Quantification of mast cells was performed in five consecutive nonoverlapping immunostained sections. Spearman correlation analysis showed that the distal segment PV correlated with the mast cell count in all of the patients combined and in patients with FCP, with correlation coefficients of 0.509 and 0.436, respectively (P = 0.004 and P = 0.042). Similar findings were observed for the segmental ratio of distal to proximal smooth muscle PV in all patients and in patients with FCP (correlation coefficients 0.566; P = 0.001 and correlation coefficients 0.525; P = 0.012, respectively). Mucosal mast cell infiltration was associated with distal esophageal contraction as a key pathophysiologic factor of NCCP. abstract_id: PUBMED:12452399 Esophageal striated muscle contractions in patients with gastroesophageal reflux symptoms. Although there are studies showing that the amplitude of contraction in the distal esophageal body may be lower in gastroesophageal reflux (GER) disease than in asymptomatic subjects, there are no data about proximal striated muscle contraction in this disease. We studied the esophageal contraction 2 or 3 cm below the upper esophageal sphincter in response to swallowing a 5-ml bolus of water in 122 consecutive patients submitted to esophageal manometry who complained of heartburn and acid regurgitation. Sixty-nine had esophagitis seen at endoscopy. Thirty-three also complained of dysphagia. No patients had esophageal stenosis, esophageal motility abnormalities in the distal esophagus, chest pain, or extraesophageal manifestations of GER. We also studied 20 patients with systemic sclerosis (SSc), a disease with no involvement of striated muscle. When we measured the amplitude, duration, and area under the curve (AUC) of the proximal esophageal contraction, we did not find any differences (P > 0.05) between patients with esophagitis (N = 69) or without esophagitis (N = 53), with dysphagia (N = 33) or without dysphagia (N = 89), with mild (N = 55) or severe (N = 14) esophagitis, or younger than 40 years (N = 45) or older than 60 years (N = 19). There was also no difference between patients with GER symptoms and patients with SSc (P > 0.05). We conclude that patients with GER symptoms with or without esophagitis and with or without dysphagia have similar esophageal striated muscle contractions.
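The pressure-volume (PV) metric in the mast-cell study above is defined in its abstract as a plain product: esophageal segment length × contraction duration × mean pressure over the space-time box. A minimal sketch of that calculation, and of the distal-to-proximal segmental ratio it reports, follows; the function name and example values are illustrative, not taken from the study.

```python
def pressure_volume(length_cm: float, duration_s: float, mean_pressure_mmhg: float) -> float:
    """PV for a smooth muscle segment on high-resolution manometry:
    segment length x contraction duration x mean pressure over the space-time box."""
    return length_cm * duration_s * mean_pressure_mmhg

# Hypothetical values for the two smooth muscle segments:
pv_distal = pressure_volume(length_cm=8.0, duration_s=3.5, mean_pressure_mmhg=70.0)
pv_proximal = pressure_volume(length_cm=5.0, duration_s=2.5, mean_pressure_mmhg=55.0)
print(pv_distal / pv_proximal)  # segmental ratio, as correlated with mast cell counts
```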
abstract_id: PUBMED:30069979 Sustained esophageal longitudinal smooth muscle contraction may not be a cause of noncardiac chest pain. Background: The etiology of noncardiac chest pain (NCCP) is poorly understood. Some evidence suggests that it may be related to sustained esophageal contractions (SECs) of longitudinal smooth muscle. This study attempts to evaluate whether SECs play a role in symptom production in NCCP patients. Methods: This was a prospective double-blind study comparing NCCP patients to healthy controls. Subjects underwent high-resolution esophageal manometry followed by infusions of normal saline and 0.1 N hydrochloric acid into the esophagus. Pain intensity was recorded during each minute of the infusion using a visual analog scale between 0 and 10. Two blinded investigators measured the esophageal length at the end of the saline and acid infusion periods, as well as the point at which esophageal shortening began, using the computer-based manometry software. Key Results: Seventeen NCCP patients and 16 controls completed the study. 64% of study subjects demonstrated esophageal shortening in response to acid infusion, with mean shortening of 0.4 ± 0.54 cm. The mean decrease in esophageal length with acid was similar between the groups (1.9% ± 2.6% for NCCP patients vs 1.7% ± 2.4% for controls, P = .82). There was no correlation between pain onset and esophageal shortening. Conclusions And Inferences: NCCP patients did not appear to have an exaggerated esophageal shortening response to intraluminal acid. In addition, there was poor temporal correlation between esophageal shortening and symptoms. Thus, acid-induced SECs may not play a significant role in pain production in NCCP patients. abstract_id: PUBMED:17266691 Prevalence of increased esophageal muscle thickness in patients with esophageal symptoms. Background: Patients with achalasia, diffuse esophageal spasm (DES), and nutcracker esophagus have a thicker muscularis propria than normal subjects. The goal of our study was to determine the prevalence of increased muscle thickness in a group of unselected patients referred to the esophageal function laboratory for evaluation of their symptoms. Methods: We studied 40 normal subjects and 94 consecutive patients. Manometry and ultrasound images were recorded concurrently, using a special custom-built catheter. Esophageal muscle thickness and muscle cross-sectional area were measured at 2 and 10 cm above the lower esophageal sphincter (LES). Patients were assigned a manometric diagnosis, and a determination was made as to whether they had increased muscle thickness and muscle cross-sectional area. Results: Nearly all patients with well-defined spastic motor disorders, i.e., achalasia, DES, and nutcracker esophagus, revealed (a) an increase in the muscle thickness/cross-sectional area, (b) an increase in esophageal muscle thickness/cross-sectional area was also seen, albeit at a lower prevalence rate, in patients with less well-characterized manometric abnormalities, i.e., hypertensive LES, impaired LES relaxation, and ineffective esophageal motility, and (c) 24% of patients with esophageal symptoms but normal manometry were also found to have an increase in muscle thickness/cross-sectional area. Dysphagia was more likely, and heartburn less likely, in patients with increased muscle thickness, but there were no differences in chest pain and regurgitation symptoms between the groups.
Conclusion: We describe, for the first time, increased muscle thickness in patients with esophageal symptoms and normal manometry. We suggest that increased esophageal muscle thickness is likely to be an important marker of esophageal motor dysfunction. abstract_id: PUBMED:22413883 Segmental changes in smooth muscle contraction as a predictive factor of the response to high-dose proton pump inhibitor treatment in patients with functional chest pain. Background And Aims: High-dose proton pump inhibitor (PPI) treatment leads to relatively little symptomatic improvement in patients with functional chest pain (FCP). This study aimed to evaluate the use of smooth muscle segmental changes in esophageal contraction, as measured by topographical plots of high resolution manometry (HRM), as predictive factors of the response to high-dose PPI treatment in FCP patients. Methods: Thirty patients diagnosed with FCP were treated with rabeprazole 20 mg twice daily for 2 weeks and classified as positive and negative responders based on symptom intensity score. HRM topographical plots were analyzed for segment lengths, maximal wave amplitudes, and pressure volumes of the proximal and distal smooth muscle segments. Results: A positive response was observed in 23.3% of the patients. While the pressure volume of the proximal segment was significantly higher in the positive responders than the negative responders (900.4 ± 91.5 mm Hg/cm per s vs. 780.5 ± 133.3 mm Hg/cm per s, P = 0.017), the pressure volume of the distal segment was significantly lower in the positive responders (1914.0 ± 159.8 mm Hg/cm per s vs. 2140.5 ± 276.2 mm Hg/cm per s, P = 0.014). A prominent shift in pressure volume to the distal segment was observed in the negative responders compared with the positive responders (segmental ratio of pressure volume (SRPV): 2.9 ± 0.5 vs. 2.1 ± 0.1, P < 0.001), and 2.39 was found to be the SRPV that best differentiated positive and negative responders. Conclusions: A low SRPV was associated with a positive response to high-dose PPI treatment in patients with FCP. abstract_id: PUBMED:31953604 Minor esophageal functional disorders: are they relevant? Purpose Of Review: High resolution esophageal manometry (HRM) has expanded understanding of esophageal motor function. The Chicago Classification scheme has allowed systematic categorization of the myriad of manometric parameters identified during HRM. Multichannel intraluminal impedance pH has enhanced ambulatory reflux monitoring through complete assessment of esophageal content transit. However, the clinical implications of identified minor esophageal functional disorders remain unclear. Recent Findings: Esophagogastric junction outlet obstruction is defined by esophagogastric junction obstruction with preserved peristalsis and may be managed expectantly, or in a manner similar to achalasia. Hypercontractile esophagus has been associated with dysphagia and non-cardiac chest pain, but the clinical significance is unclear as a majority of patients will improve without specific therapy. Additionally, these findings may be confounded by chronic opiate use. Ineffective esophageal motility is characterized by diminished esophageal contraction amplitude, potentially causing dysphagia and GERD. However, this is commonly identified in asymptomatic volunteers and may represent a normal variant. The multiple rapid swallow sequence can assess esophageal contraction reserve, which may predict post-fundoplication dysphagia.
The post-swallow induced peristaltic wave can serve as a surrogate of gastric refluxate clearance, providing important prognostic value. However, the associated time burden and lack of alternative therapeutic options limit its clinical utility. Summary: Minor esophageal functional disorders provide new therapeutic targets for symptomatic patients. However, these findings have inconsistent associations with symptoms and poorly defined therapeutic options. Minor esophageal function disorders should not be interpreted in isolation, with management decisions accounting for clinical, endoscopic, and radiographic factors in addition. Answer: Yes, esophageal function does vary at the striated and smooth muscle segments in functional chest pain. Studies have shown that there are differences in sensory thresholds, compliance, and wall stiffness between the striated and smooth muscle portions of the esophagus in patients with functional chest pain. For instance, during balloon distention, a higher percentage of patients reported typical chest pain at both levels, but some reported it only at the smooth muscle level or the striated muscle level. Sensory thresholds for perception, discomfort, or pain were lower in patients than in controls, indicating hypersensitivity. The cross-sectional area and the esophageal wall stiffness at the smooth muscle level were lower than those at the striated muscle level both in controls and in patients. The wall tension at which moderate discomfort and pain were reported was also lower in patients than controls (PUBMED:12358233). Additionally, studies have found that the smooth muscle esophagus is more compliant but presents a greater resistance to initial stretch than the striated muscle section. Amyl nitrite, a smooth muscle relaxant, significantly increased compliance and decreased the resistance to initial stretch in the smooth muscle section (PUBMED:7901108). Furthermore, the transition zone between proximal striated and distal smooth muscle esophagus has been shown to be more compliant and more sensitive than the distal smooth muscle esophagus (PUBMED:17999648). Moreover, esophageal mucosal mast cell infiltration has been associated with changes in segmental smooth muscle contraction in noncardiac chest pain, suggesting a link between inflammation and esophageal motor function (PUBMED:24766344). High-resolution manometry studies have also revealed that exaggerated smooth muscle contraction segments may represent hypermotility phenomena from abnormal inhibition and/or excitation, which are not uniformly identified by the Chicago Classification algorithm (PUBMED:25394785). In summary, the esophageal function in functional chest pain does vary between the striated and smooth muscle segments, with evidence of segmental hypersensitivity, differences in compliance and wall stiffness, and associations with mast cell infiltration and hypermotility phenomena.
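As context for the balloon-distention results summarized in this answer: impedance planimetry reports luminal cross-sectional area at a known distending pressure, and wall tension is then commonly approximated with Laplace's law for a thin-walled cylinder. The abstracts do not state the exact computation used in the cited study, so the sketch below is only the standard approximation, with illustrative inputs.

```python
import math

def wall_tension(pressure_mmhg: float, csa_mm2: float) -> float:
    """Laplace's law for a thin-walled cylinder: T = P * r, with the radius
    recovered from the measured luminal cross-sectional area (CSA = pi * r^2)."""
    radius_mm = math.sqrt(csa_mm2 / math.pi)
    return pressure_mmhg * radius_mm  # tension in mmHg*mm

# Hypothetical distension step: 30 mmHg at a measured CSA of 200 mm^2
print(wall_tension(pressure_mmhg=30.0, csa_mm2=200.0))
```

This relationship also explains why a lower pain threshold can be expressed either as pressure, cross-sectional area, or tension: for a given segment, the three scale together.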
Instruction: Does reporting estimated glomerular filtration rate affect ordering of timed urine collections? Abstracts: abstract_id: PUBMED:19301454 Does reporting estimated glomerular filtration rate affect ordering of timed urine collections? Background: The National Kidney Foundation recommends reporting estimates of glomerular filtration rate (eGFR) rather than timed urine collections. When 2 of the 3 major hospitals in our region began reporting eGFR at different times, we recognized a natural experiment. Methods: We conducted a retrospective, observational study at the 3 major hospitals in Chattanooga, Tennessee. Data were collected on the frequency of timed urines during a 4½-year period. Regression analysis was used to study the association of the rate of ordering timed urines, adjusted for other factors. Results: There was a marked drop in the rate of ordering of timed urines at all 3 hospitals, from a mean of 21.8 per 1000 admissions in the first 4 quarter years of the study to 10.9 in the last 4 (15th-18th) quarter years. The drop began before the reporting of eGFR at any hospital. The reporting of eGFR had a small effect (-1.7 per 1000 hospital admissions) that was not statistically significant (P = 0.15). Conclusions: There has been a marked drop in the ordering of timed urines in our region. The decline began before the reporting of eGFR at 2 of the hospitals and therefore is attributable to other factors. abstract_id: PUBMED:17942776 New Jersey's experience: mandatory estimated glomerular filtration rate reporting. The passage of legislation in New Jersey mandating the calculation and reporting by clinical laboratories of the estimated glomerular filtration rate whenever a serum creatinine test is performed resulted in a flurry of activity by laboratories to bring their facilities into compliance. After guidance provided by the Department of Health and Senior Services in November 2005 regarding legislative intent, New Jersey's clinical laboratories, including more than 80 acute care hospital laboratories, successfully implemented estimated glomerular filtration rate reporting by July 2006. This reporting, however, was not achieved without controversy and logistical barriers. Despite these issues, the initial feedback from physicians in response to receiving estimated glomerular filtration rate values on test reports as mandated by state law has been largely favorable. With more than 3.5 million estimated glomerular filtration rate values reported to the department by a sampling of large independent (n = 3), physician office (n = 4), and hospital (n = 11) laboratories, average estimated glomerular filtration rate values were as follows: 79% of physician office and independent laboratory estimated glomerular filtration rate values were ≥60 ml/min per 1.73 m2, and 2% were <30 ml/min per 1.73 m2; by comparison, 66% and 11% of hospital values were ≥60 and <30 ml/min per 1.73 m2, respectively. Additional studies are necessary to determine whether the intent of the legislation to "aid health professionals in the early diagnosis of kidney disease," thereby resulting in improved treatment outcomes, is achieved. abstract_id: PUBMED:22336989 Reporting of the estimated glomerular filtration rate decreased creatinine clearance testing. The Kidney Disease Outcomes Quality Initiative (K/DOQI) guidelines suggest that clinicians use estimated glomerular filtration rate (eGFR) measurements and minimize the use of timed urine creatinine clearance collection.
The intent of this change was to improve recognition of chronic kidney disease. Here we used time-series modeling and intervention analyses to determine the effect of publication of the K/DOQI guidelines and the introduction of widespread eGFR reporting with prompts on physician ordering of 24-h urine collection for creatinine clearance. In this setting, clinical practice guidelines did not influence creatinine clearance testing; however, the direct introduction of eGFR reporting with prompts into physician workflow resulted in a sudden and significant 23.5% decrease in creatinine clearance collection over the 43 months analyzed. Thus, eGFR reporting with prompts may have produced a clinical practice change because it is integrated directly into physician workflow. Changing physician practice patterns may require more than publishing guidelines; rather, it is more likely to occur through educational and structural changes to practice. abstract_id: PUBMED:21500987 The impact of reporting estimated glomerular filtration rate. The 'Kidney Disease Outcomes Quality Initiative' guidelines recommend laboratory reporting of a calculated estimated glomerular filtration rate (eGFR). The United Kingdom and several states already mandate reporting eGFR for every laboratory serum creatinine (sCr) measurement. In our study, we evaluated the impact of reporting eGFR on the management of hospitalized patients. We reviewed the medical records for 2000 patients, 1000 pre- and 1000 post-reporting eGFR. We excluded patients with a previous diagnosis of chronic kidney disease, acute kidney failure, and end-stage renal disease. We analyzed the subgroup of patients with eGFR <60 and sCr <1.5 mg/dL. We did not notice an increase in the number of renal consults, or in the ordering of laboratory or imaging studies to evaluate chronic kidney disease. The prescription habits did not change for nephrotoxic medications (nonsteroidal anti-inflammatory drugs and aminoglycosides). We did not find any change in the percentage of patients who received hydration for a radiological contrast study or the use of N-acetylcysteine. In conclusion, reporting eGFR did not improve the renal management of hospitalized patients. abstract_id: PUBMED:16254731 Timed-urine collections for renal clearance studies. The purpose of this study was to describe the reproducibility of timed-urine collections for renal clearance studies and the effect that variations in urine collection have on measurement of glomerular filtration rate (GFR). Data from 222 cimetidine clearance studies (GFR-Cim) were obtained from 32 pediatric renal patients over a period of 8 years. There were three to 18 studies per child, aged 4.8 years to 21 years at the time of a study. The urinary creatinine excretion rate is measured during supervised urine collection periods. The creatinine excretion rates in each child were compared to obtain data on the reproducibility of the urine collections. The coefficient of variation (CV) of the creatinine excretion rate is approximately 10% in both children and adults. The variation in GFR to be expected during repeated renal clearance studies in subjects with stable GFR, using voided urine collections, was similar in children and adults, with a CV of 12% to 14%. abstract_id: PUBMED:35830833 Monitoring residual kidney function in haemodialysis patients using timed urine collections: validation of the use of estimated blood results to calculate GFR. Objective.
With growing recognition of the benefits of preserving residual kidney function (RKF) and use of incremental treatment regimes, the incentive to measure residual clearance in haemodialysis patients is increasing. Interdialytic urine collections used to monitor RKF in research studies are considered impractical in routine care, partly due to the requirement for blood samples before and after the collection. Plasma solute levels can be estimated if patients are in 'steady state', where urea and creatinine concentrations increase at a constant rate between dialysis sessions and are reduced by a constant ratio at each session. Validation of the steady state assumption would allow development of simplified protocols for urine collections in HD patients. Approach. Equations were derived for estimating plasma urea and creatinine at the start or end of the interdialytic interval for patients in steady state. Data collected during the BISTRO study were used to assess the agreement between measured and estimated plasma levels and the effect of using estimated levels on the calculated glomerular filtration rate (GFR). Main results. The mean difference between GFR calculated with estimated plasma levels for the HD session after the collection and a full set of measured levels was 2.0% (95% limits of agreement -10.7% to +14.7%, N = 316). Where plasma levels for the session before the collection were estimated, the mean difference was 1.2% (limits of agreement -10.3% to +7.9%, N = 275). Significance. Using estimated levels for one session led to a clinically significant difference in the calculated GFR for less than 3% of the collections studied. This indicates that the steady state assumption can be used to estimate solute levels when determining GFR from timed urine collections. A pragmatic approach to monitoring RKF in HD would be for patients to collect for approximately 24 h before routine bloods are taken. abstract_id: PUBMED:29371760 The relationship between vitamin D and estimated glomerular filtration rate and urine microalbumin/creatinine ratio in Korean adults. The present study was conducted to assess the association between 25-hydroxyvitamin D [25(OH)D], estimated glomerular filtration rate (eGFR) and urine microalbumin/creatinine ratio (uACR) in Korean adults. Data on 4,948 adults aged ≥20 years from the Korean National Health and Nutrition Examination Survey V-3 (2012) were analyzed. After adjusting for the related variables (except age), the odds ratios (ORs) of vitamin D deficiency with the normal group as a reference were significantly higher in the decreased eGFR plus elevated uACR group [3.089 (95% CI, 1.722-5.544)], but not in the elevated uACR group [1.247 (95% CI, 0.986-1.577)] or the decreased eGFR group [1.303 (95% CI, 0.789-2.152)]. However, when further adjusting for age, the ORs of vitamin D deficiency with the normal group as a reference were significantly higher in the elevated uACR group [1.312 (95% CI, 1.035-1.662)], the decreased eGFR group [1.761 (95% CI, 1.062-2.919)] and the decreased eGFR plus elevated uACR group [3.549 (95% CI, 1.975-6.365)]. In conclusion, vitamin D deficiency was positively associated with elevated uACR and decreased eGFR. In addition, vitamin D level decreased greatly when decreased eGFR and elevated uACR appeared simultaneously. abstract_id: PUBMED:23329851 Prediction of glomerular filtration rate in cancer patients by an equation for Japanese estimated glomerular filtration rate.
Background: Assessment of renal function is important for safe cancer chemotherapy, and eligibility criteria for clinical trials often include creatinine clearance. However, creatinine clearance overestimates glomerular filtration rate, and various new formulae have been proposed to estimate glomerular filtration rate. Because these were developed mostly in patients with chronic kidney disease, we evaluated their validity in cancer patients without kidney disease. Methods: Glomerular filtration rate was measured by inulin clearance in 45 Japanese cancer patients, and compared with creatinine clearance measured by 24-h urine collection as well as that estimated by the Cockcroft-Gault formula, Japanese estimated glomerular filtration rate developed in chronic kidney disease patients, the Modification of Diet in Renal Disease study equation and the Chronic Kidney Disease Epidemiology Collaboration equation. The Modification of Diet in Renal Disease study and Chronic Kidney Disease Epidemiology Collaboration equations were adjusted for the Japanese population by multiplying by 0.808 and 0.813, respectively. Results: The mean inulin clearance was 79.2 ± 18.7 ml/min/1.73 m2. Bias values to estimate glomerular filtration rate for Japanese estimated glomerular filtration rate, the Cockcroft-Gault formula, creatinine clearance measured by 24-h urine collection, the 0.808 × Modification of Diet in Renal Disease study equation and the 0.813 × Chronic Kidney Disease Epidemiology Collaboration equation were 0.94, 9.75, 29.67, 5.26 and -0.92 ml/min/1.73 m2, respectively. Precision (root-mean square error) was 14.7, 22.4, 39.8, 16.0 and 14.1 ml/min, respectively. Of the scatter plots of inulin clearance versus each estimation formula, the Japanese estimated glomerular filtration rate correlated most accurately with actual measured inulin clearance. Conclusion: The Japanese estimated glomerular filtration rate and the 0.813 × Chronic Kidney Disease Epidemiology Collaboration equation estimated glomerular filtration rate with lower bias and higher precision than the other formulae. We therefore propose Japanese estimated glomerular filtration rate for the estimation of glomerular filtration rate in Japanese cancer patients. abstract_id: PUBMED:3453695 Difficulties in estimating glomerular filtration rate in the elderly. Estimates of glomerular filtration rate are generally obtained by measuring or estimating endogenous creatinine clearance. However, it may sometimes be difficult to obtain the necessary urine collections. Most of 19 healthy, reliable elderly outpatients were found unable to provide satisfactory 24-hour urine collections. To judge whether formulas estimating creatinine clearance from serum creatinine levels are reliable, we also compared 24-hour creatinine clearances measured in 50 inpatients with values calculated by the Cockcroft-Gault equation. Only a moderate correlation was found, which may be unacceptable in the clinical situations for which the equation is used, such as drug dosing. For reasons including uncertainties in the validity of predictive formulas and unreliability of urine collections, we conclude that no acceptable method now exists for bedside estimation of glomerular filtration rate and that drug levels should be measured whenever possible in elderly patients and in those with renal insufficiency. abstract_id: PUBMED:26793755 Estimated glomerular filtration rate by serum creatinine or standardized cystatin C in Japanese patients with Graves' disease.
Glomerular filtration rate (eGFR) by serum creatinine (eGFRCr) or standardized cystatin C (eGFRCysC) was estimated in Japanese patients with Graves' disease (GD) of different sex. Clinical samples were collected from patients with GD with normal renal function to accurately validate eGFRCr and eGFRCysC levels and evaluate how hyperthyroidism affects renal function. Levels of eGFRCr and eGFRCysC showed clinical usefulness in successfully treated euthyroid patients with GD regardless of sex. The article includes detailed experimental methods and data used in our analysis. The data relate to the "Paradoxical effect of thyroid function on the estimated glomerular filtration rate by serum creatinine or standardized cystatin C in Japanese Graves' disease patients" (Suzuki et al., 2015) [1]. Answer: The reporting of estimated glomerular filtration rate (eGFR) appears to have an impact on the ordering of timed urine collections, although the extent of this impact varies according to different studies. In Chattanooga, Tennessee, a retrospective observational study found a marked drop in the ordering of timed urines, but this decline began before the reporting of eGFR at two of the hospitals, suggesting that the decline was attributable to other factors (PUBMED:19301454). In contrast, another study using time-series modeling and intervention analyses found that the direct introduction of eGFR reporting with prompts into physician workflow resulted in a sudden and significant 23.5% decrease in creatinine clearance collection over the analyzed period (PUBMED:22336989). This suggests that integrating eGFR reporting directly into physician workflow can lead to a change in clinical practice. In New Jersey, where mandatory eGFR reporting was implemented, initial feedback from physicians was largely favorable, although additional studies were needed to determine if the legislation's intent to aid in the early diagnosis of kidney disease and improve treatment outcomes was achieved (PUBMED:17942776). Another study in the United Kingdom found that reporting eGFR did not improve the renal management of hospitalized patients, as there was no observed increase in renal consults, ordering of laboratory or imaging studies, or changes in prescription habits for nephrotoxic medications (PUBMED:21500987). Overall, the evidence suggests that reporting eGFR can affect the ordering of timed urine collections, but the impact may depend on how the reporting is implemented and integrated into clinical practice, as well as other contextual factors.
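The answer above contrasts eGFR reporting with timed urine collections. The underlying arithmetic for both approaches is standard: measured creatinine clearance comes from the classic UV/P formula applied to a timed collection, and the Cockcroft-Gault equation (discussed in the elderly-patient abstract) estimates clearance from serum creatinine alone. A sketch with the textbook formulas and illustrative inputs follows; none of the values are from the cited studies.

```python
def measured_crcl(urine_cr_mg_dl: float, urine_vol_ml: float,
                  serum_cr_mg_dl: float, minutes: float) -> float:
    """Creatinine clearance (mL/min) from a timed urine collection:
    CrCl = (U_cr * V) / (P_cr * t)."""
    return (urine_cr_mg_dl * urine_vol_ml) / (serum_cr_mg_dl * minutes)

def cockcroft_gault(age_y: float, weight_kg: float,
                    serum_cr_mg_dl: float, female: bool) -> float:
    """Cockcroft-Gault estimate of creatinine clearance (mL/min):
    ((140 - age) * weight) / (72 * SCr), times 0.85 for women."""
    crcl = ((140 - age_y) * weight_kg) / (72 * serum_cr_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical 24-h collection (1440 min): 1500 mL urine at 100 mg/dL,
# serum creatinine 1.0 mg/dL -> roughly 104 mL/min
print(measured_crcl(100.0, 1500.0, 1.0, 1440.0))
print(cockcroft_gault(age_y=70, weight_kg=72.0, serum_cr_mg_dl=1.0, female=False))
```

One caveat visible in the abstracts themselves: the timed-collection result is only as good as the collection (the elderly-outpatient study found most 24-hour collections unsatisfactory), while Cockcroft-Gault is an estimate that correlated only moderately with measured clearance in that cohort.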
Instruction: Is cooking at home associated with better diet quality or weight-loss intention? Abstracts: abstract_id: PUBMED:25399031 Is cooking at home associated with better diet quality or weight-loss intention? Objective: To examine national patterns in cooking frequency and diet quality among adults in the USA, overall and by weight-loss intention. Design: Analysis of cross-sectional 24 h dietary recall and interview data. Diet quality measures included total kilojoules per day, grams of fat, sugar and carbohydrates per day, fast-food meals per week, and frozen/pizza and ready-to-eat meals consumed in the past 30 d. Multivariable regression analysis was used to test associations between frequency of cooking dinner per week (low (0-1), medium (2-5) and high (6-7)), dietary outcomes and weight-loss intention. Setting: The 2007-2010 National Health and Nutrition Examination Survey. Subjects: Adults aged 20 years and over (n 9569). Results: In 2007-2010, 8% of adults lived in households in which someone cooked dinner 0-1 times/week and consumed, on an average day, 9627 total kilojoules, 86 g fat and 135 g sugar. Overall, compared with low cookers (0-1 times/week), a high frequency of cooking dinner (6-7 times/week) was associated with lower consumption of daily kilojoules (9054 v. 9627 kJ, P=0·002), fat (81 v. 86 g, P=0·016) and sugar (119 v. 135 g, P<0·001). Individuals trying to lose weight consumed fewer kilojoules than those not trying to lose weight, regardless of household cooking frequency (2111 v. 2281 kJ/d, P<0·006). Conclusions: Cooking dinner frequently at home is associated with consumption of a healthier diet whether or not one is trying to lose weight. Strategies are needed to encourage more cooking among the general population and help infrequent cookers better navigate the food environment outside the home. abstract_id: PUBMED:33260523 Cooking as a Health Behavior: Examining the Role of Cooking Classes in a Weight Loss Intervention. Americans are cooking fewer meals at home and eating more convenience foods prepared elsewhere. Cooking at home is associated with higher quality diets, while a reduction in cooking may be associated with increases in obesity and risk factors for chronic disease. The aims of this study were to examine cooking as an intervention for weight control in overweight and obese adults, and whether such an intervention increases participants' food agency and diet quality. Overweight and obese adults were randomized into one of two intervention conditions: active or demonstration. Both conditions received the same 24-week behavioral weight loss intervention, and bi-weekly cooking classes. The active condition prepared a weekly meal during a hands-on lesson, while the demonstration condition observed a chef prepare the same meal. The active condition lost significantly more weight at six months compared with the demonstration condition (7.3% vs. 4.5%). Both conditions saw significant improvements in food agency scores and Healthy Eating Index scores, though no significant differences were noted between groups. The addition of active cooking to a weight management intervention may improve weight loss outcomes, though benefits in diet quality and cooking behaviors may also be seen with the addition of a demonstration-only cooking intervention. abstract_id: PUBMED:36841438 Patterns of home cooking practices among participants in a behavioral weight loss program: A latent class analysis.
Cooking education is a popular approach to health promotion; however, the relationship between specific cooking practices, diet and weight loss is not well understood. The goal of this study was to 1) evaluate the relationship between cooking practices, dietary behaviors, and weight loss after a weight loss intervention and 2) identify patterns of cooking practices and their implications for weight loss. Using a quasi-experimental, single-arm cohort study design, we analyzed data from 249 adults with overweight/obesity who were participating in a weight loss program. Participants self-reported demographics, height and weight, and diet and physical activity behaviors. The Health Cooking Questionnaire 2 (HCQ2) was used to collect information on cooking practices post-intervention. The HCQ2 responses were used to generate Healthy Cooking Index (HCI) scores, a summative measure of cooking practices with the potential to influence health. Latent Class Analysis (LCA) was utilized to define distinct patterns of cooking behaviors. Cooking patterns and HCI scores were examined relative to participant demographics, dietary behaviors, and weight loss. HCI scores post-intervention were positively associated with age, weight loss, and favorable dietary behaviors in this study. The LCA revealed three distinct patterns of cooking behavior (Red Meat Simple, Vegetarian Simple, Health & Taste Enhancing). The Red Meat Simple cooking pattern was associated with less weight loss compared to other patterns. The findings of this study set the foundation for more research on cooking education as a method for improving weight loss outcomes in the context of behavioral interventions. abstract_id: PUBMED:25963602 Reduction in food away from home is associated with improved child relative weight and body composition outcomes and this relation is mediated by changes in diet quality. Background: Reducing consumption of food away from home is often targeted during pediatric obesity treatment, given the associations with weight status and gain. However, the effects of this dietary change on weight loss are unknown. Objective: Our aim was to evaluate associations between changes in dietary factors and child anthropometric outcomes after treatment. It is hypothesized that reduced consumption of food away from home will be associated with improved dietary intake and greater reductions in anthropometric outcomes (standardized body mass index [BMI] and percent body fat), and the relationship between food away from home and anthropometric outcomes will be mediated by improved child dietary intake. Design: We conducted a longitudinal evaluation of associations between dietary changes and child anthropometric outcomes. Child diet (three 24-hour recalls) and anthropometric data were collected at baseline and 16 weeks. Participants/setting: Participants were 170 overweight and obese children ages 7 to 11 years who completed a 16-week family-based behavioral weight-loss treatment as part of a larger multi-site randomized controlled trial conducted in two cohorts between 2010 and 2011 (clinical research trial). Intervention: Dietary treatment targets during family-based behavioral weight-loss treatment included improving diet quality and reducing food away from home. Main Outcome Measures: The main outcome measures in this study were child relative weight (standardized BMI) and body composition (percent body fat).
Statistical Analyses: We performed t tests and bootstrapped single-mediation analyses adjusting for relevant covariates. Results: As hypothesized, decreased food away from home was associated with improved diet quality and greater reductions in standardized BMI (P<0.05) and percent body fat (P<0.01). Associations between food away from home and anthropometric outcomes were mediated by changes in diet quality. Specifically, change in total energy intake and added sugars mediated the association between change in food away from home and standardized BMI, and change in overall diet quality, fiber, added sugars, and added fats mediated the association between change in food away from home and percent body fat. Including physical activity as a covariate did not significantly impact these findings. Conclusions: These results suggest that reducing food away from home can be an important behavioral target for affecting positive changes in both diet quality and anthropometric outcomes during treatment. abstract_id: PUBMED:37299388 Low Cooking Skills Are Associated with Overweight and Obesity in Undergraduates. Culinary skills are defined as the confidence, attitude, and the application of one's individual knowledge in performing culinary tasks, and their development may be associated with better diet quality and better health status. This study aimed to analyze the association between cooking skills, overweight, and obesity in undergraduates. This is a descriptive, observational, and cross-sectional study, with data collected between October 2020 and March 2021, with undergraduate students (n = 823) at the Federal University of Rio Grande do Norte. Participants answered the online Brazilian Cooking Skills and Healthy Eating Questionnaire Evaluation, BCSQ, which included socioeconomic information. Logistic regressions were used to assess the associations of cooking skills with overweight and obesity. Of the total students, 70.8% were female, with a median age of 23 (21-30) years; 43.6% had overweight or obesity; 48.8% were eutrophic; and 7.7% were underweight. Overweight and obesity were significantly associated with low levels of culinary self-efficacy and self-efficacy in the use of fruits, vegetables, and seasonings in the bivariate analysis. The logistic regressions showed that living with other people and eating out were associated with higher chances of overweight and obesity. Sharing the responsibility for preparing meals and a high self-efficacy in the use of fruits, vegetables, and seasonings were associated with lower chances of overweight/obesity. Overall, our study showed that overweight and obesity were associated with lower cooking skills in the studied undergraduates. Therefore, the study demonstrates that culinary skills can be explored in educational programs that aim to reduce overweight/obesity in students. abstract_id: PUBMED:32590984 Cooking skills related to potential benefits for dietary behaviors and weight status among older Japanese men and women: a cross-sectional study from the JAGES. Background: Poor cooking skills have been linked to unhealthy diets. However, limited research has examined associations of cooking skills with older adults' health outcomes. We examined whether cooking skills were associated with dietary behaviors and body weight among older people in Japan.
Methods: We used cross-sectional data from the 2016 Japan Gerontological Evaluation Study, a self-report, population-based questionnaire study of men (n = 9143) and women (n = 10,595) aged ≥65 years. The cooking skills scale, which comprises seven items with good reliability, was modified for use in Japan. We calculated adjusted relative risk ratios of unhealthy dietary behaviors (low frequency of home cooking, vegetable/fruit intake; high frequency of eating outside the home) using logistic or Poisson regression, and relative risk ratios of obesity and underweight using multinomial logistic regression. Results: Women had higher levels of cooking skills, compared with men. Women with a moderate to low level of cooking skills were 3.35 (95% confidence interval [CI]: 2.87-3.92) times more likely to have a lower frequency of home cooking and 1.61 (95% CI: 1.36-1.91) times more likely to have a lower frequency of vegetable/fruit intake, compared with women with a high level of cooking skills. Men with a low level of cooking skills were 2.56 (95% CI: 2.36-2.77) times more likely to have a lower frequency of home cooking and 1.43 (95% CI: 1.06-1.92) times more likely to be underweight, compared with men with a high level of cooking skills. Among men in charge of meals, those with a low level of cooking skills were 7.85 (95% CI: 6.04-10.21) times more likely to have a lower frequency of home cooking, 2.28 (95% CI: 1.36-3.82) times more likely to have a higher frequency of eating outside the home, and 2.79 (95% CI: 1.45-5.36) times more likely to be underweight, compared with men with a high level of cooking skills. Cooking skills were unassociated with obesity. Conclusions: A low level of cooking skills was associated with unhealthy dietary behaviors and underweight, especially among men in charge of meals. Research on improving cooking skills among older adults is needed. abstract_id: PUBMED:15277172 Weight-loss intention in the well-functioning, community-dwelling elderly: associations with diet quality, physical activity, and weight change. Background: Many older adults desire to lose weight, yet the proportion with a health-related weight-loss indication, weight-loss strategies, and success is unknown. Objective: We examined the associations of reported intention to lose weight with health-related indications for weight loss, diet quality, physical activity, and weight-loss success in well-functioning older adults. Design: This prospective, community-based cohort included 2708 elderly persons aged 70-79 y at baseline. We determined indication for weight loss by using the modified National Institutes of Health guidelines, diet quality by using the Healthy Eating Index, and weight-loss intention and physical activity by using questionnaires. Measured weight change over 1 y was assessed. Results: Twenty-seven percent of participants reported an intention to lose weight, and 67% of those participants had an indication for weight loss. Participants who reported a weight-loss intention were heavier than those who did not, had more depressive symptoms, and were more likely to be dissatisfied with their weight, regardless of weight-loss indication. Participants with an intention to lose weight reported better eating behaviors and a more active lifestyle than did participants without a weight-loss intention, independent of other health conditions. 
No significant difference in actual weight loss was found between participants intending and not intending to lose weight, regardless of indication for weight loss. Conclusions: Despite being associated with healthier behaviors, the intention to lose weight did not predict greater weight loss in this well-functioning elderly cohort. More attention needs to be focused on the necessity and efficacy of specific strategies for weight loss in older adults. abstract_id: PUBMED:37841737 Mediterranean diet is associated with better gastrointestinal health and quality of life, and less nutrient deficiency in children/adolescents with disabilities. Background: Children and adolescents with disabilities face various nutritional problems. This study aimed to examine dietary characteristics, nutritional status and problems, gastrointestinal health, and quality of life in children and adolescents with disabilities. Methods: This study included children and adolescents with disabilities aged 5-18 years (n = 1,991). We used the Mediterranean Diet Quality Index (KIDMED), the Gastrointestinal Symptom Rating Scale (GSRS), and the Pediatric Quality of Life Inventory (PedsQL) to assess diet characteristics, gastrointestinal problems, and life quality. We collected retrospective 24-h food records to assess energy and nutrient intakes. Results: The rate of stunting in children with disabilities varied between 16.5% and 19.8%. When comparing disability types, more children with physical disabilities were underweight (8.8% vs. 6.7%) and stunted (19.8% vs. 16.5%), while more children with intellectual disabilities were tall (7.9% vs. 5.5%) and overweight/obese (21.1% vs. 17.2%; p < 0.05). Wasting (9.3%) and overweight/obesity (23.8%) were more common in children with disabilities aged 5-7 years (p < 0.001). Eating problems such as loss of appetite, food refusal, food neophobia, and food selectivity were more common in children aged 5-7 years, and problems with fast eating and overeating were more common in adolescents aged 13-18 years (p < 0.05). Among children and adolescents with disabilities, the nutrients with inadequate intakes were vitamin E, vitamin B1, folate, potassium, calcium, and iron, while the nutrients with intakes above the requirements were proteins, carbohydrates, vitamins A, B2, B6, B12, and C, phosphorus, zinc, and sodium. Participants with good Mediterranean diet quality had higher energy and nutrient intakes and higher percentages of meeting nutrient requirements (p < 0.05). KIDMED scores were negatively correlated with GSRS total (r = -0.14, p < 0.001) and subcomponent scores (abdominal pain, diarrhea, reflux, indigestion, and constipation; p < 0.05), and significantly and positively correlated with PedsQL total (r = 0.12, p < 0.001). A one-unit increase in the GSRS score resulted in a 14.4 times decrease in the PedsQL score, and a one-unit increase in the KIDMED score resulted in a 10.8 times increase in the PedsQL score (p = 0.001). Conclusion: Overweight/obesity, stunting/wasting, nutritional problems, and deficiencies are common among disabled children and adolescents. The Mediterranean diet is associated with better quality of life and gastrointestinal health in children with disabilities. abstract_id: PUBMED:30569500 The effect of the food grade additive phosphate pre-treatment prior to the industrial cooking process in the quality of cooked peeled shrimp (Litopenaeus vannamei).
Background: Food-grade phosphates have been highlighted as a potential humectant agent to avoid large yield losses. Thus, the combined effect of phosphate with industrial cooking on shrimp quality was evaluated. Phosphate concentration (3% and 5%) and contact time (30 and 60 min) were evaluated as a pre-treatment to the industrial cooking of the peeled shrimp, in a shrimp processing plant. The process yield (%), physical quality [pH, color, texture, and water-holding capacity (WHC)] and chemical quality [moisture, protein, moisture/protein ratio, phosphate, and sodium] were also evaluated. Results: Our results showed that the food-grade phosphate treatment had a satisfactory effect on reduction of weight loss, increase of the WHC (after cooking and posterior thawing), increase of the M/P ratio, improvement of the texture, decrease in the meat coloring, and increase of the pH (0.3 to 0.5 units), but no effect on overall shrimp quality (cooking yield, texture, WHC). However, the residual phosphate and sodium contents increased proportionally with the concentration of and contact time with the additives, to slightly above the allowed legal phosphate limit. Conclusion: We observed that phosphate used as a pre-treatment to the cooking step in shrimp plant processing improves product quality; however, this treatment must be carried out in accordance with the current federal legislation in each country. © 2018 Society of Chemical Industry. abstract_id: PUBMED:27258146 Effects of four different cooking methods on some quality characteristics of low fat Inegol meatball enriched with flaxseed flour. The present study is concerned with the effects of four different cooking methods (grill, oven, pan and ohmic cooking) on physicochemical parameters (cooking yield, moisture retention, fat retention, color, texture), fatty acid composition and sensory characteristics of low fat Turkish traditional Inegol meatball. Flaxseed flour was used as a fat substitute in the production of meatballs. Meatball proximate composition was affected by the cooking methods mainly as a consequence of the weight losses. The highest cooking yield was found in samples cooked in the oven. Flaxseed flour contains a high amount of α-linolenic acid, and ohmic cooking seems to be the best cooking method in terms of retaining this fatty acid in meatballs enriched with flaxseed flour. However, ohmic-cooked meatball samples had a brighter surface color and harder texture in comparison with meatball samples cooked via traditional methods. There was no significant difference between the sensory evaluation scores of meatballs. Answer: Yes, cooking at home is associated with better diet quality and can be related to weight-loss intention. Analysis of cross-sectional data from the National Health and Nutrition Examination Survey indicated that a high frequency of cooking dinner at home (6-7 times/week) was associated with lower consumption of daily kilojoules, fat, and sugar compared to those who cooked dinner less frequently (0-1 times/week). This suggests that individuals who cook dinner frequently at home consume a healthier diet, regardless of whether they are trying to lose weight or not (PUBMED:25399031).
Moreover, a study examining the role of cooking classes in a weight loss intervention found that overweight and obese adults who actively participated in cooking their meals during hands-on lessons lost significantly more weight after six months compared to those who only observed a chef prepare the same meals. Both groups, however, saw improvements in food agency scores and Healthy Eating Index scores, indicating benefits in diet quality and cooking behaviors (PUBMED:33260523). Additionally, a study using latent class analysis to evaluate the relationship between cooking practices, dietary behaviors, and weight loss after a weight loss intervention found that certain cooking patterns were associated with more favorable dietary behaviors and weight loss. Specifically, the "Health & Taste Enhancing" cooking pattern was associated with greater weight loss compared to the "Red Meat Simple" cooking pattern (PUBMED:36841438). In summary, cooking at home is positively associated with better diet quality and can support weight loss intentions, with active participation in cooking potentially enhancing weight loss outcomes (PUBMED:25399031; PUBMED:33260523; PUBMED:36841438).
Instruction: Is idiopathic chronic pancreatitis an autoimmune disease? Abstracts: abstract_id: PUBMED:18081217 Review of idiopathic pancreatitis. Recent advances in the understanding of pancreatitis and advances in technology have uncovered the veils of idiopathic pancreatitis to a point where a thorough history and judicious use of diagnostic techniques elucidate the cause in over 80% of cases. This review examines the multitude of etiologies of what were once labeled idiopathic pancreatitis and provides the current evidence on each. This review begins with a background review of the current epidemiology of idiopathic pancreatitis prior to discussion of various etiologies. Etiologies of medications, infections, toxins, autoimmune disorders, vascular causes, and anatomic and functional causes are explored in detail. We conclude with management of true idiopathic pancreatitis and a summary of the various etiologic agents. Throughout this review, areas of controversy are highlighted. abstract_id: PUBMED:18206816 Idiopathic chronic pancreatitis. Idiopathic pancreatitis is diagnosed in up to 25% of patients with chronic pancreatitis by exclusion of other potential causes including rare ones. It has been shown that idiopathic pancreatitis comprises two clinically distinct entities characterised as early-onset and late-onset disease and that the natural courses of both forms differ from that of alcoholic chronic pancreatitis. Due to considerable progress in our understanding of hereditary and autoimmune mechanisms for development of chronic pancreatitis, a specific aetiology of chronic pancreatitis can be determined in an increasing proportion of cases. Nevertheless, the aetiopathogenesis of idiopathic chronic pancreatitis frequently remains obscure. This review focuses on the pathogenetic relevance of various endogenous and exogenous (co-)factors for the manifestation and the natural course of the disease. Moreover, it presents a multifactorial model for understanding the development of idiopathic chronic pancreatitis. abstract_id: PUBMED:31041651 Idiopathic acute pancreatitis: a review on etiology and diagnostic work-up. Acute pancreatitis (AP) is a common disease associated with a substantial medical and financial burden, and with an incidence across Europe ranging from 4.6 to 100 per 100,000 population. Although most cases of AP are caused by gallstones or alcohol abuse, several other causes may be responsible for acute inflammation of the pancreatic gland. Correctly diagnosing AP etiology is a crucial step in the diagnostic and therapeutic work-up of patients to prescribe the most appropriate therapy and to prevent recurrent attacks leading to the development of chronic pancreatitis. Despite the improvement of diagnostic technologies, and the availability of endoscopic ultrasound and sophisticated radiological imaging techniques, the etiology of AP remains unclear in ~10-30% of patients and is defined as idiopathic AP (IAP). The present review aims to describe all the conditions underlying an initially diagnosed IAP and the investigations to consider during diagnostic work-up in patients with non-alcoholic non-biliary pancreatitis. abstract_id: PUBMED:17083400 Idiopathic retroperitoneal fibrosis associated with IgG4-positive-plasmacyte infiltrations and idiopathic chronic pancreatitis. Idiopathic retroperitoneal fibrosis (IRPF) is an inflammatory fibrosclerosing condition, leading to renal failure by obstruction of the ureters.
Idiopathic chronic pancreatitis associated with marked inflammatory infiltrates has recently been referred to as autoimmune pancreatitis (AIP), and infiltrating plasmacytes carrying immunoglobulin-gamma type 4 (IgG4) are relevant to its pathogenesis. The case is described herein of IRPF associated with subclinical pancreatitis that was most probably AIP in a 70-year-old man. Biopsy specimens of the retroperitoneal pseudotumor revealed a marked lymphoplasmacytic infiltration with dense fibrosis. Infiltrating plasma cells were immunoreactive for anti-IgG4 antibodies. Subsequent systemic examinations showed an extremely elevated serum IgG4 level and pancreatitis concordant with AIP. Following oral steroid administration, the serum IgG4 level normalized, although the appearance of the pseudotumor did not alter. Some AIP cases have been associated with idiopathic fibrosclerosing disorders including IRPF, but histological evidence of IgG4-related IRPF has rarely been provided. abstract_id: PUBMED:4062468 Idiopathic retroperitoneal fibrosis and primary biliary cirrhosis. A new association? We encountered a case of primary biliary cirrhosis in a nonalcoholic man who had been operated on for idiopathic retroperitoneal fibrosis 20 years previously. Chronic pancreatitis was also detected on endoscopic retrograde examination. After several episodes of digestive bleeding due to ruptured esophageal varices, the patient died of massive hemorrhage. Postmortem examination showed stage 3 primary biliary cirrhosis and a thick retroperitoneal fibrous plaque, consisting of densely fibrotic areas of collagen with rare vessels and mononuclear cells. We suggest that idiopathic retroperitoneal fibrosis may be a new autoimmune disorder associated with primary biliary cirrhosis and that primary biliary cirrhosis is a potential cause of portal hypertension, cholestasis, or both in the course of idiopathic retroperitoneal fibrosis. abstract_id: PUBMED:16234029 Is idiopathic chronic pancreatitis an autoimmune disease? Background & Aims: The proportion of patients with idiopathic chronic pancreatitis (ICP) that have an autoimmune origin is unknown. Three forms of ICP have been described: pseudotumoral, duct-destructive, and usual chronic pancreatitis. The aim of this study was to identify autoimmune stigmata in the 3 forms. Methods: All patients who underwent exploration for ICP were included. The following data were recorded: examination by an internal medicine specialist, autoantibodies and immunoglobulin screening, and pancreatic duct imaging. Results: Sixty patients were included (pseudotumoral, n = 11; duct-destructive, n = 27; usual, n = 22). There were no significant differences among the 3 types with regard to sex ratio, age, frequency of acute pancreatitis, or obstructive jaundice. Pancreatic calcifications were seen only in the usual form (81%; P = .0001). Autoimmune disease was present in 10 patients: ulcerative colitis in 5 patients, primary sclerosing cholangitis in 2 patients, and Sjögren's syndrome, Hashimoto's thyroiditis, and Graves' disease in 1 patient each. Autoimmune diseases were not more frequent in patients with pseudotumoral (36%) or duct-destructive (19%) forms than in those with the usual form (5%, P = .06). Immunoglobulin G4 levels were increased in 2 of 6 in the pseudotumoral, 1 of 9 in the duct-destructive, and 0 of 12 patients in the usual group. Combining clinical and biochemical autoimmune parameters, 24 patients (40%) had at least 1 autoimmune marker or disease.
Conclusions: Clinical or biochemical autoimmune stigmata are present in 40% of patients with ICP. Autoimmune mechanisms may be frequent in idiopathic pancreatitis. abstract_id: PUBMED:16163054 Systemic extrapancreatic lesions associated with autoimmune pancreatitis. Objectives: Autoimmune pancreatitis (AIP) is often associated with systemic extrapancreatic lesions. We studied 31 cases of AIP to clarify the diversity of associated systemic extrapancreatic lesions and the differences between AIP with and without systemic extrapancreatic lesions. Methods: The clinical features and courses were compared by age, sex, and blood chemistry between those with and without systemic extrapancreatic lesions. In addition, we reviewed the available literature on systemic extrapancreatic lesions with AIP. Results: Seven of the 31 cases of AIP had associated systemic extrapancreatic lesions, which were diagnosed simultaneously with AIP; however, 1 case presenting with various extrapancreatic lesions was diagnosed independently of the AIP lesion. Patients with systemic extrapancreatic lesions needed maintenance steroid therapy for AIP in 4 cases and systemic extrapancreatic lesions in 2 cases; the ratio of cases requiring maintenance steroid therapy was significantly higher among those with systemic extrapancreatic lesions (6/8) than those without (7/23). There were no significant differences between groups with regard to age, sex, extent of narrowing of the main pancreatic duct, and enlargement of the pancreas. Gamma-globulin, IgG, and IgG4 levels were significantly higher in patients with AIP with systemic extrapancreatic lesions than those without. The systemic extrapancreatic lesions associated with AIP found in the literature were Sjögren syndrome, ulcerative colitis, retroperitoneal fibrosis, sialadenitis, thyroiditis, and idiopathic thrombocytopenic purpura. Conclusions: The results of this study suggest that, when encountering a case of AIP with elevated levels of gamma-globulins, IgG, and IgG4, an effort should be made to detect other systemic extrapancreatic abnormalities and initiate steroid administration. abstract_id: PUBMED:28978971 Editorial: Autoimmune Pancreatitis in Children: Is This a New Subtype of Disease or Early-Onset Idiopathic Duct-Centric Chronic Pancreatitis? The term autoimmune pancreatitis (AIP) encompasses two distinct steroid-responsive pancreatitides, type 1 AIP and idiopathic duct-centric pancreatitis (IDCP) (or type 2 AIP). The current study describes cases of both AIP subtypes in a pediatric population. A comparison of the clinical profile of the described cohort with published data strongly suggests the majority of patients in the current cohort had IDCP. Since relapse rates in IDCP are low and long-term maintenance therapy is not required for IDCP, this has implications for prognosis and therapy. However, longer follow-up is needed to more accurately determine if onset during childhood leads to a different disease course. abstract_id: PUBMED:25770706 Recent Advances in Autoimmune Pancreatitis. Autoimmune pancreatitis (AIP) is a form of chronic pancreatitis that is characterized clinically by frequent presentation with obstructive jaundice, histologically by a dense lymphoplasmacytic infiltrate with fibrosis, and therapeutically by a dramatic response to corticosteroid therapy. Two distinct diseases, type 1 and type 2 AIP, share these features.
However, these 2 diseases have unique pancreatic histopathologic patterns and differ significantly in their demographic profiles, clinical presentation, and natural history. Recognizing the popular and long-standing association of the term "AIP" with what is now called "type 1 AIP," we suggest using "AIP" solely for type 1 AIP and to acknowledge its own distinct disease status by using "idiopathic duct-centric chronic pancreatitis" (IDCP) for type 2 AIP. AIP is the pancreatic manifestation of immunoglobulin G4-related disease (IgG4-RD). The etiopathogenesis of AIP and IgG4-RD is largely unknown. However, the remarkable effectiveness of B-cell depletion therapy with rituximab in patients with AIP and IgG4-RD highlights the crucial role of B cells in its pathogenesis. IDCP is less commonly recognized, and little is known about its pathogenesis. IDCP has no biomarker but is associated with inflammatory bowel disease in ~25% of patients. Recently, the international consensus diagnostic criteria for AIP identified combinations of features that are diagnostic of both diseases. Both AIP and IDCP are corticosteroid responsive; however, relapses are common in AIP and rare in IDCP. Therefore, maintenance therapy with either an immunomodulator (eg, azathioprine, 6-mercaptopurine, or mycophenolate mofetil) or rituximab is often necessary for patients with AIP. Long-term survival is excellent for both patients with AIP and patients with IDCP. abstract_id: PUBMED:24242980 Role of endoscopic ultrasonography in patients with first episode of idiopathic acute pancreatitis. Background/aims: Acute pancreatitis (AP) evades an etiological diagnosis in up to 10% to 30% of patients. This group, i.e. idiopathic acute pancreatitis (IAP), is prone to a high recurrence (up to 70%). Endoscopic ultrasound (EUS) is promising, but there are limited data from India on elucidating the cause of IAP. This observational study was designed to study the role of EUS after the first episode of IAP. Methods: All patients diagnosed with a first episode of IAP were included in the study and taken up for EUS examination after 6 weeks. Patients with conditions known to predispose to or precipitate AP, like alcohol binge, drugs, metabolic or autoimmune conditions, or even a positive family history, were excluded from the study. Result: A total of 51 patients were included. EUS positivity was found in 29 (56.9%) patients. It included common bile duct (CBD) calculus in 5 (9.8%), CBD sludge in 4 (7.8%), gallbladder calculus in 2 (3.9%), gallbladder sludge in 2 (3.9%), and chronic pancreatitis (CP) in 16 (31.4%) patients. Fourteen patients had a normal study and eight patients had indeterminate CP. Conclusions: EUS is safe and has a reasonable diagnostic yield in patients with first episode of IAP. CP and biliary lithiasis are the most frequent positive findings on EUS. EUS should be included in the diagnostic protocol after the first episode of IAP, rather than waiting for recurrent episodes. Answer: The question of whether idiopathic chronic pancreatitis (ICP) is an autoimmune disease has been a subject of investigation. According to the abstract from PUBMED:16234029, clinical or biochemical autoimmune stigmata are present in 40% of patients with ICP, suggesting that autoimmune mechanisms may be frequent in idiopathic pancreatitis.
This study identified autoimmune stigmata in three forms of ICP: pseudotumoral, duct-destructive, and usual chronic pancreatitis, and found that autoimmune diseases were present in a subset of patients, including those with ulcerative colitis, primary sclerosing cholangitis, Sjögren's syndrome, Hashimoto's thyroiditis, and Graves' disease. Furthermore, PUBMED:17083400 describes a case of idiopathic retroperitoneal fibrosis (IRPF) associated with subclinical pancreatitis that was most probably autoimmune pancreatitis (AIP) in a patient with elevated serum IgG4 levels, which normalized following oral steroid administration. This suggests a link between ICP and autoimmune conditions, as IgG4-related disease is known to be an autoimmune condition. Additionally, PUBMED:16163054 discusses systemic extrapancreatic lesions associated with AIP, indicating that AIP is often associated with systemic extrapancreatic autoimmune conditions such as Sjögren syndrome, ulcerative colitis, retroperitoneal fibrosis, sialadenitis, thyroiditis, and idiopathic thrombocytopenic purpura. The abstract from PUBMED:25770706 distinguishes between two types of AIP: type 1, which is the pancreatic manifestation of IgG4-related disease, and type 2, referred to as idiopathic duct-centric chronic pancreatitis (IDCP) or type 2 AIP. This distinction further supports the notion that at least a subset of ICP cases may have an autoimmune origin. In summary, while not all cases of idiopathic chronic pancreatitis are autoimmune, a significant proportion of ICP cases exhibit autoimmune features, and there is evidence to suggest that autoimmune mechanisms play a role in the pathogenesis of some forms of ICP.
Instruction: Short-term recovery from alcohol abuse or dependence: any evidence of a relationship with treatment use in a general population sample? Abstracts: abstract_id: PUBMED:15939710 Short-term recovery from alcohol abuse or dependence: any evidence of a relationship with treatment use in a general population sample? Aims: To test whether survey respondents who report alcohol misuse in the past year are more likely to be abstinent or binge-free in the past 30 days if they have used treatment, than if they have not. Methods: Data from the 2002 US National Survey on Drug Use and Health were analysed. Results: A total of 5730 respondents scored positive for alcohol abuse or dependence in the preceding year. Fewer than 10% had used any treatment for alcohol or drugs in this period, but this was associated with a 10% increase in past-month abstinence and past-month binge-free drinking compared with respondents who did not access treatment. Such an apparent short-term recovery appeared greater in those whose treatment had been received in a formal treatment setting, a doctor's office, or in self-help groups than in the emergency room or in prison. Conclusions: Even if part of the association between treatment and recent abstinence and non-binge drinking was causal, indicating that treatment has some impact, it is a pathway chosen only by the minority. abstract_id: PUBMED:36442440 Relationship of negative emotionality, NIAAA recovery, and 3- and 6-month drinking outcomes among adults in treatment for alcohol use disorder. Background: The National Institute on Alcohol Abuse and Alcoholism (NIAAA) recently released a new definition of recovery from alcohol use disorder (AUD). A patient is considered recovered if they are remitted from DSM-5 AUD and report cessation of heavy drinking. The NIAAA has also recently proposed the Addictions Neuroclinical Assessment (ANA) to guide treatment research. Negative emotionality is one of three domains of the ANA and theory proposes that AUD is maintained by negative reinforcement via the relief of negative affect. The purpose of the current study was to examine: (1) the relationship of end-of-treatment negative emotionality and NIAAA recovery, and (2) the ability of NIAAA recovery at the end of treatment to predict three- and six-month drinking outcomes. Method: At baseline and end-of-treatment, women and men (n = 181) in treatment for AUD completed measures of negative emotionality and drinking and were assessed for DSM-5 AUD diagnostic criteria. At three and six months post-treatment, drinking was re-assessed. Results: 22.5% (n = 24) of participants met full criteria for NIAAA recovery at end-of-treatment. Lower levels of end-of-treatment negative emotionality were associated with increased odds of achieving NIAAA recovery. Meeting NIAAA recovery predicted greater percent days abstinent (PDA) and lower percent heavy drinking days (PHDD) at 3 months, but not at 6 months post-treatment. Conclusions: This study is among the first to report a relationship between the negative emotionality domain of the ANA and NIAAA recovery. Results underscore the importance of addressing negative emotionality in treatment. Findings also suggest that NIAAA recovery predicts positive short-term drinking outcomes. abstract_id: PUBMED:31088561 Prevalence of prescribed benzodiazepine long-term use in the French general population according to sociodemographic and clinical factors: findings from the CONSTANCES cohort.
Background: Data are lacking regarding the prevalence of benzodiazepine long-term use in the general population. Our aim was to examine the prevalence of prescribed benzodiazepine long-term use (BLTU) according to sociodemographic and clinical factors in the French general population. Methods: Data came from 4686 men and 4849 women included in 2015 in the French population-based CONSTANCES cohort. BLTU was examined using drug reimbursement administrative registries from 2009 to 2015. Analyses were weighted to provide results representative of the French general population covered by the general health insurance scheme. Weighted prevalence of BLTU and weighted Odds Ratios (OR) of having BLTU were computed with their 95% Confidence Interval (95% CI) according to age, education level, occupational status, occupational grade, household income, marital status, alcohol use disorder risk and depressive symptoms. All the analyses were stratified for gender. Results: Weighted prevalences of BLTU were 2.8% (95% CI: 2.3-3.4) and 3.8% (95% CI: 3.3-4.5) in men and women, respectively. Compared to men, women had an increased risk of having benzodiazepine long-term use with OR = 1.34 (95% CI = 1.02-1.76). Aging, low education, not being at work, low occupational grade, low income, being alone and depressive state were associated with increased risks of having BLTU. Conclusions: BLTU is widespread in the French general population; however, this issue may particularly concern vulnerable subgroups. These findings may help in raising attention on this public health burden as well as targeting specific at-risk subgroups in preventive intervention. abstract_id: PUBMED:32070907 Short-term neuropsychological recovery in alcohol use disorder: A retrospective clinical study. Background: Neuropsychological impairments found in recently detoxified patients with alcohol use disorder (AUD) can limit the benefit of psychosocial treatments and increase the risk of relapse. These neuropsychological deficits are reversible with abstinence. The aim of this retrospective clinical study was to investigate whether a short-term stay as inpatients in a convalescent home enables neuropsychological deficits observed in recently detoxified AUD patients to recover and even performance to return to normal. Methods: Neuropsychological data were collected in 84 AUD patients. Five neuropsychological components were assessed before and after a three-week stay in a convalescent home offering multidisciplinary support. Baseline and follow-up performance were compared in the entire group of patients and in subgroups defined by the nature and intensity of the therapy (OCCASIONAL: occasional occupational and physical therapy; INTENSIVE: intensive occupational and physical therapy and neuropsychological training). Results: In the entire group of patients, neuropsychological performance significantly improved between baseline and follow-up for all 5 components and even returned to a normal level for 4 of them. The ratio of patients with impaired performance was significantly lower at follow-up than baseline examination for 3 components in the INTENSIVE group only. Conclusion: Recently detoxified AUD patients with cognitive deficits benefit from a short-term stay in an environment ensuring sobriety and healthy nutrition. Cognitive recovery may be enhanced by intensive care including neuropsychological training.
Alcohol programs could be postponed in patients with cognitive deficits in order to offer psychosocial treatment when patients are cognitively able to benefit from it. abstract_id: PUBMED:18276016 Antidepressant utilisation patterns and determinants of short-term and non-psychiatric use in the Finnish general adult population. Background: The aim was to study utilisation patterns and determinants of antidepressant use in the general population >30 years, especially short-term use or use not related to known psychiatric morbidity. Methods: Participants from a cross-sectional population-based Finnish Health 2000 Study (2000-2001) were linked with the National Prescription Register and National Care Register for Health Care. Within a representative sample (N=7112) of the adult population (>30 years), 12-month DSM-IV depressive, anxiety, and alcohol use disorders were assessed with the M-CIDI. Utilisation patterns of antidepressants were categorised into short-term, intermittent and continuous use. Factors predicting short-term use or use not related to known psychiatric morbidity were investigated. Results: Of Finnish adults, 7.1% had used antidepressants in 2000, of whom two-thirds reported a physician-diagnosed mental disorder; a third (35%) had major depressive or anxiety disorder during the previous 12 months. In terms of utilisation pattern, 43% were long-term users, 32% intermittent users and 26% short-term users. Short-term use was related to care by a general practitioner and having no known mental disorder. A quarter of all users had no known psychiatric morbidity. This type of user was most common among the older age groups, and inversely related to being single, on disability pension and using mental health services. Limitations: Not all psychiatric indications for antidepressant use could be explored. Conclusions: Depression remains the main indication for antidepressant use. About a quarter of users had no known psychiatric indication and the indication remained unclear. Short-term and non-psychiatric use are more commonly prescribed for the elderly. abstract_id: PUBMED:32934575 Concurrent use of addictive substances among alcohol drinkers: Prevalence and problems in a Swedish general population sample. Aims: To examine concurrent use of addictive substances among alcohol drinkers in the Swedish general population and to assess to what extent this increases the risk of alcohol problems. Methods: Data were retrieved from a nationally representative survey from 2013 on use of and problems related to alcohol, tobacco, illicit drugs and non-prescribed use of analgesics and sedatives with 15,576 respondents. Alcohol users were divided into different groups on the basis of frequency of drinking overall and binge drinking. Tobacco use was measured in terms of daily use and use of illicit drugs and non-prescribed use of analgesics and sedatives were measured in terms of last 12 months prevalence. A dichotomous indicator of a DSM-IV dependence or abuse diagnosis was used. Logistic regression models were estimated to examine the relationship between various patterns of drinking in combination with other substance use and risk of alcohol abuse and/or dependence. Results: People who drink alcohol in Sweden were more likely to use other addictive substances than non-drinkers and such concurrent use becomes more common the more alcohol is consumed. Alcohol drinkers using other substances have a higher prevalence of alcohol abuse and dependence at all frequencies of drinking.
Multivariate models controlling for sex, age and drinking frequency found that an elevated risk of harm remained for drinkers using addictive substances other than snuff. Conclusion: A large group of drinkers in the Swedish general population have an accumulation of risks as a result of using both alcohol and other addictive substances. Concurrent use of cigarettes, illicit drugs and non-prescribed use of analgesics and sedatives adds an independent risk of alcohol abuse/dependence in this group in addition to their drinking. The findings point at the importance of taking multiple substance-use patterns into account when combating drinking problems. Screening for concurrent use of other addictive substances could help healthcare providers to identify patients in need of treatment for alcohol problems. abstract_id: PUBMED:26175209 Partial K-Complex Recovery Following Short-Term Abstinence in Individuals with Alcohol Use Disorder. Background: The K-complex (KC) is a brain potential characteristic of nonrapid eye movement (NREM) sleep resulting from the synchronous activity of a large population of neurons and hypothesized to reflect brain integrity. KC amplitude is lower in individuals with alcohol use disorder (AUD) compared with age-matched controls, but its recovery with short-term abstinence has not been studied. Therefore, we investigated whether the KC shows significant recovery over the first 4 months of abstinence in individuals with AUD. Methods: A total of 16 recently abstinent AUD individuals (46.6 ± 9.3 years) and 13 gender- and age-matched healthy controls (41.6 ± 8.3 years) were studied on 3 occasions: the Initial session was within 1 month of the AUD individuals' last drink, then 1 and 3 months later. Overnight electroencephalogram was recorded while participants were presented with tones during stage 2 NREM sleep to elicit KCs. Results: At the Initial session, AUD participants showed significantly lower KC amplitude and incidence compared with controls. In the AUD individuals, KC amplitude increased significantly from the Initial to the 1-month session. KC incidence showed a marginally significant increase. Neither KC amplitude nor incidence changed from the 1-month to the 3-month session. No changes in KC amplitude or incidence across sessions were observed in the control group. Conclusions: Our results demonstrate partial KC recovery during the first 2 months of abstinence. This recovery is consistent with the time course of structural brain recovery in abstinent AUD individuals demonstrated by recent neuroimaging results. abstract_id: PUBMED:12957346 Short-term alcohol and drug treatment outcomes predict long-term outcome. Introduction: Although addiction is recognized as a chronic, relapsing condition, few treatment studies, and none in a commercially insured managed care population, have measured long-term outcomes. We examined the relationship of 6-month treatment outcomes to abstinence 5 years post-treatment, and whether the predictors of abstinence at 5 years were different for those who were, and were not, abstinent at 6 months. Methods: The sample (N=784) is from an outpatient (day hospital and traditional outpatient) managed care chemical dependency program. Subjects were interviewed at baseline, 6 months, and 5 years. Logistic regression analysis was used to assess which individual, treatment and extra-treatment characteristics predicted alcohol and drug abstinence at 5 years. Results: Abstinence at 6 months was an important predictor of abstinence at 5 years.
Among those abstinent at 6 months, predictors of abstinence at 5 years were older age, being female, 12-step meeting attendance, and recovery-oriented social networks. Among those not abstinent at 6 months, being alcohol dependent rather than drug dependent, 12-step meeting attendance, treatment readmission, and recovery-oriented social networks predicted abstinence at 5 years. Conclusion: Our findings demonstrate a clear association between short-term and long-term treatment success. In addition, these results strongly support the importance of recovery-oriented social networks for those with good short-term outcomes, and the beneficial impact of readmission for those not initially successful in treatment. abstract_id: PUBMED:33224697 Sex and Gender Effects in Recovery From Alcohol Use Disorder. The current article provides a brief summary of biopsychosocial gender differences in alcohol use disorder (AUD), then reviews existing literature on gender differences in treatment access, retention, outcomes, and longer-term recovery. Among psychotherapies for AUD, there is support for the efficacy of providing female-specific treatment, and for female-only treatment settings but only when female-specific treatment is included. However, despite mandates from the National Institutes of Health to do so, there is little work thus far that directly compares genders on outcomes of specific psychotherapies or pharmacotherapies for AUD. Although existing research has mixed findings on sex and gender differences in overall outcomes, there are more consistent findings suggesting different mechanisms of behavior change among men and women in AUD treatment and long-term recovery. Thus, more work is needed that attends to gender and sex differences, including planning studies that are structured to examine not only gender-differentiated outcomes in treatment response, but equally important, differences in treatment access and attendance as well as differences in mechanisms of change in drinking behavior. abstract_id: PUBMED:31923562 Cigarette smoking history is associated with poorer recovery in multiple neurocognitive domains following treatment for an alcohol use disorder. Cigarette smoking is associated with neurocognitive dysfunction in various populations, including those seeking treatment for an alcohol use disorder (AUD). This study compared the rate and extent of recovery on measures of processing speed, executive functions, general intelligence, visuospatial skills and working memory in treatment-seeking alcohol-dependent individuals (ALC) who were never-smokers (nvsALC), former smokers (fsALC), and active smokers (asALC), over approximately 8 months of abstinence from alcohol. Methods: ALC participants were evaluated at approximately 1 month of abstinence (AP1; n = 132) and reassessed after 8 months of sobriety (AP2; n = 54). Never-smoking controls (CON; n = 33) completed a baseline and follow-up (n = 19) assessment approximately 9 months later. Domains evaluated were executive functions, general intelligence, processing speed, visuospatial skills and working memory; a domain composite was formed from the arithmetic average of the foregoing domains. nvsALC showed greater improvement than fsALC, asALC and CON on most domains over the AP1-AP2 interval. fsALC demonstrated greater recovery than asALC on all domains except visuospatial skills; fsALC also showed greater improvements than CON on general intelligence, working memory and domain composite.
asALC did not show significant improvement on any domain over the AP1-AP2 interval. At 8 months of abstinence, asALC were inferior to CON and nvsALC on multiple domains, fsALC performed worse than nvsALC on several domains, but nvsALC were not different from CON on any domain. Our results provide robust evidence that smoking status influenced the rate and extent of neurocognitive recovery between 1 and 8 months of abstinence in this ALC cohort. Chronic smoking in AUD likely contributes to the considerable heterogeneity observed in neurocognitive recovery during extended abstinence. The findings provide additional strong support for the benefits of smoking cessation and the increasing clinical movement to offer smoking cessation resources concurrent with treatment for AUD. Answer: Yes, there is evidence of a relationship between treatment use and short-term recovery from alcohol abuse or dependence in a general population sample. Analysis of data from the 2002 US National Survey on Drug Use and Health indicated that, among respondents who reported alcohol misuse in the past year, use of any treatment for alcohol or drugs in this period was associated with a 10% increase in past-month abstinence and past-month binge-free drinking compared with respondents who did not access treatment (PUBMED:15939710).
Instruction: Teamwork training with nursing and medical students: does the method matter? Abstracts: abstract_id: PUBMED:20427311 Teamwork training with nursing and medical students: does the method matter? Results of an interinstitutional, interdisciplinary collaboration. Objectives: The authors conducted a randomised controlled trial of four pedagogical methods commonly used to deliver teamwork training and measured the effects of each method on the acquisition of student teamwork knowledge, skills, and attitudes. Methods: The authors recruited 203 senior nursing students and 235 fourth-year medical students (total N = 438) from two major universities for a 1-day interdisciplinary teamwork training course. All participants received a didactic lecture and then were randomly assigned to one of four educational methods didactic (control), audience response didactic, role play and human patient simulation. Student performance was assessed for teamwork attitudes, knowledge and skills using: (a) a 36-item teamwork attitudes instrument (CHIRP), (b) a 12-item teamwork knowledge test, (c) a 10-item standardised patient (SP) evaluation of student teamwork skills performance and (d) a 20-item modification of items from the Mayo High Performance Teamwork Scale (MHPTS). Results: All four cohorts demonstrated an improvement in attitudes (F(1,370) = 48.7, p = 0.001) and knowledge (F(1,353) = 87.3, p = 0.001) pre- to post-test. No educational modality appeared superior for attitude (F(3,370) = 0.325, p = 0.808) or knowledge (F(3,353) = 0.382, p = 0.766) acquisition. No modality demonstrated a significant change in teamwork skills (F(3,18) = 2.12, p = 0.134). Conclusions: Each of the four modalities demonstrated significantly improved teamwork knowledge and attitudes, but no modality was demonstrated to be superior. Institutions should feel free to utilise educational modalities, which are best supported by their resources to deliver interdisciplinary teamwork training. abstract_id: PUBMED:34146464 Developing teamwork skills in baccalaureate nursing students: impact of TeamSTEPPS® training and simulation. Objectives: Examine the impact of TeamSTEPPS® training and simulation experiences on student knowledge and teamwork attitudes in a baccalaureate-nursing program. Methods: This study used a quasi-experimental, pre-test, post-test design. The intervention included a workshop followed by 2 days of simulation experiences. Participants included a total of 46 nursing students. Instruments included the TeamSTEPPS learning benchmark and the Teamwork Attitudes Questionnaire (T-TAQ). Results: Scores on the learning benchmark increased following the intervention. In addition, changes in subscores of teamwork strategies, leadership, situation monitoring, and mutual support on the T-TAQ indicate an improvement in student attitudes toward teamwork. Conclusions: Incorporating TeamSTEPPS® strategies into undergraduate education can be effective in increasing student knowledge and improving attitudes toward interdisciplinary teamwork. abstract_id: PUBMED:29287747 Teaching nurses teamwork: Integrative review of competency-based team training in nursing education. Widespread demands for high reliability healthcare teamwork have given rise to many educational initiatives aimed at building team competence. Most effort has focused on interprofessional team training however; Registered Nursing teams comprise the largest human resource delivering direct patient care in hospitals. 
Nurses also influence many other health team outcomes, yet little is known about the team training curricula they receive, and furthermore what specific factors help translate teamwork competency to nursing practice. The aim of this review is to critically analyse empirical published work reporting on teamwork education interventions in nursing, and identify key educational considerations enabling teamwork competency in this group. CINAHL, Web of Science, Academic Search Complete, and ERIC databases were searched and detailed inclusion-exclusion criteria applied. Studies (n = 19) were selected and evaluated using established qualitative-quantitative appraisal tools and a systematic constant comparative approach. Nursing teamwork knowledge is rooted in High Reliability Teams theory and Crew or Crisis Resource Management sources. Constructivist pedagogy is used to teach, practice, and refine teamwork competency. Nursing teamwork assessment is complex, involving integrated yet individualized determinations of knowledge, skills, and attitudes. Future initiatives need to consider frontline leadership, supportive followership and skilled communication emphasis. Collective stakeholder support is required to translate teamwork competency into nursing practice. abstract_id: PUBMED:27432367 Teamwork of clinical teachers in postgraduate medical training. Teamwork among clinical teachers is essential for continuous improvement of postgraduate medical training. This thesis deconstructs teamwork in four studies, mostly based on qualitative research approaches; one study utilizes mixed methods. We found that clinical teachers do train residents, but individually rather than as a team. The programme directors as leaders focus more on teaching activities than on the collective ambition and mutual engagement of clinical teachers. During the teaching meetings, mistakes and conflicts are mainly discussed in a general sense and are often neither directed at the individual nor result-oriented. A valid evaluation instrument is constructed to improve teamwork. abstract_id: PUBMED:35515744 TeamSTEPPS online simulation: expanding access to teamwork training for medical students. Background: The Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) programme is an evidence-based approach to teamwork training. In-person education is not always feasible for medical student education. The aim of this study was to evaluate the impact of online, interactive TeamSTEPPS simulation versus an in-person simulation on medical students' TeamSTEPPS knowledge and attitudes. Methods: Fourth-year medical students self-selected into an in-person or online training designed to teach and evaluate teamwork skills. In-person participants received didactic sessions, team-based medical simulations and facilitated debriefing sessions. The online group received an equivalent online didactic session and participated in an interactive software-based simulation with immediate, personalised performance-based feedback and scripted debriefing. Both trainings used three iterations of a case of septic shock, each with increasing medical complexity. Participants completed a demographic survey, a preintervention/postintervention TeamSTEPPS Benchmarks test and a retrospective preintervention/postintervention TeamSTEPPS teamwork attitudes questionnaire. Data were analysed using descriptive statistics and repeated measures analysis of variance.
Results: Thirty-one students (18 in-person, 13 online) completed preintervention/postintervention surveys, tests and questionnaires. Gender, age and exposure to interprofessional education, teamwork training and games were similar between groups. There were no statistical differences in preintervention knowledge or teamwork attitude scores between in-person and online groups. Postintervention knowledge scores increased significantly from baseline (+2.0%, p=0.047), and these gains did not differ significantly based on whether participants received in-person versus online training (+1.5% vs +2.9%; p=0.49). Teamwork attitudes scores also showed a statistically significant increase with training (+0.9, p<0.01) with no difference in the effect of training by group (+0.8 vs +1.0; p=0.64). Conclusions: Graduating medical students who received in-person and online teamwork training showed similar increases in TeamSTEPPS knowledge and attitudes. Online simulations may be used to teach and reinforce team communication skills when in-person, interprofessional simulations are not feasible. abstract_id: PUBMED:37159110 Bachelor of nursing students' experiences of a longitudinal team training intervention and the use of teamwork skills in clinical practice - A qualitative descriptive study. Aims: To describe nursing students' experiences of a TeamSTEPPS® longitudinal team training program and the application of teamwork skills in clinical practice. Design: A descriptive qualitative design. Methods: Overall, 22 nursing students participated in six online focus group interviews after attending a TeamSTEPPS® team training program from their first semester. The data were audio-recorded, transcribed and analysed using inductive content analysis and reported following the COREQ guidelines. The focus group interviews took place in the students' fifth semester. Results: The main category "Learning teamwork is not an event; it's a journey" emerged from 3 generic categories and 12 subcategories. The participants reported that grasping the relevance of team training and the use of teamwork skills takes time. Utilizing these skills improved their awareness of being a team member and facilitated learning. Conclusion: Team training raised the participants' awareness of teamwork as an essential component of being a professional nurse. Additionally, understanding the complexity of teamwork takes time. abstract_id: PUBMED:27552977 Development of a self-assessment teamwork tool for use by medical and nursing students. Background: Teamwork training is an essential component of health professional student education. A valid and reliable teamwork self-assessment tool could assist students to identify desirable teamwork behaviours with the potential to promote learning about effective teamwork. The aim of this study was to develop and evaluate a self-assessment teamwork tool for health professional students for use in the context of emergency response to a mass casualty. Methods: The authors modified a previously published teamwork instrument designed for experienced critical care teams for use with medical and nursing students involved in mass casualty simulations. The 17-item questionnaire was administered to students immediately following the simulations. These scores were used to explore the psychometric properties of the tool, using Exploratory and Confirmatory Factor Analysis. Results: 202 (128 medical and 74 nursing) students completed the self-assessment teamwork tool for students.
Exploratory factor analysis revealed 2 factors (5 items - Teamwork coordination and communication; 4 items - Information sharing and support) and these were justified with confirmatory factor analysis. Internal consistency was 0.823 for Teamwork coordination and communication, and 0.812 for Information sharing and support. Conclusions: These data provide evidence to support the validity and reliability of the self-assessment teamwork tool for students. This self-assessment tool could be of value to health professional students following team training activities to help them identify the attributes of effective teamwork. abstract_id: PUBMED:34715563 Bachelor of nursing students' attitudes toward teamwork in healthcare: The impact of implementing a teamSTEPPS® team training program - A longitudinal, quasi-experimental study. Background: Teamwork skills are essential to the quality of care and patient safety; nevertheless, team training is limited in Bachelor of Nursing degree programs in Norway. Objectives: The objective of this study was to explore the impact of implementing a TeamSTEPPS® team training intervention on Bachelor of Nursing students' attitudes toward teamwork in health care. Design: A longitudinal quasi-experimental design with pre- and posttests was used. Settings: One intervention group and one control group were recruited from two campuses at a Norwegian university offering a Bachelor of Nursing degree. Participants: Subjects were recruited from a population of 423 students. Methods: For 26 months, the intervention group was exposed to the TeamSTEPPS® team training program with various learning activities to enhance teamwork skills. The intervention group and the control group responded to the Norwegian version of the TeamSTEPPS® Teamwork Attitude Questionnaire (T-TAQ) before the intervention (T0), after ten months (T1), and after 24 months (T2). Students who participated in surveys T0 and T1 were defined as Sample 1, and students who participated in surveys T0 and T2 were defined as Sample 2. The data were analyzed with parametric and nonparametric statistics. Results: At T0 there was a significant difference between the intervention and control group. The intervention group showed a significant positive change in the Total T-TAQ score from T0 to T1 and from T0 to T2. The change in mean score differed significantly between the intervention and control group in favor of the intervention group. Conclusions: This study showed that a team training program improved Bachelor of Nursing students' attitudes toward teamwork. Therefore, we recommend that the TeamSTEPPS® team training program be implemented in Bachelor of Nursing programs to facilitate a culture of teamwork. abstract_id: PUBMED:36485016 Teamwork Training With a Multiplayer Game in Health Care: Content Analysis of the Teamwork Principles Applied. Background: In health care, teamwork skills are critical for patient safety; therefore, great emphasis is placed on training these skills. Given that training is increasingly designed in a blended way, serious games may offer an efficient method of preparing face-to-face simulation training of these procedural skills. Objective: This study aimed to investigate the teamwork principles that were used during gameplay by medical students and teamwork experts. Findings can improve our understanding of the potential of serious games for training these complex skills. Methods: We investigated a web-based multiplayer game designed for training students' interprofessional teamwork skills.
During gameplay, 4 players in different roles (physician, nurse, medical student, and student nurse) had to share information, prioritize tasks, and decide on next steps to take in web-based patient scenarios, using one-to-one and team chats. We performed a qualitative study (content analysis) on these chats with 144 fifth-year medical students and 24 health care teamwork experts (as a benchmark study) playing the game in groups of 4. Game chat data from 2 scenarios were analyzed. For the analysis, a deductive approach was used, starting with a conceptual framework based on Crew Resource Management principles, including shared situational awareness, decision-making, communication, team management, and debriefing. Results: Most teamwork principles were used during gameplay: shared situational awareness, decision-making (eg, re-evaluation), communication (eg, closed loop), and team management (eg, distributing the workload). Among students, these principles were often used on a basic level. Among experts, teamwork principles were used with more open forms of speaking up and more justification of decisions. Some specific Crew Resource Management principles were less observed among both groups, for example, prevention of fixation errors and use of cognitive aids. Both groups showed relatively superficial debriefing reflections. Conclusions: Playing a multiplayer game for interprofessional teamwork appears to facilitate the application of teamwork principles by students in all important teamwork domains on a basic level. Expert players applied similar teamwork principles on a moderately high complexity level. Some teamwork principles were less observed among both students and expert groups, probably owing to the artifacts of the game environment (eg, chatting instead of talking). A multiplayer game for teamwork training can elicit the application of important, basic teamwork principles, both among novices and experts, and provides them with a flexible, accessible, and engaging learning environment. This may create time for exercising more complex skills during face-to-face training. abstract_id: PUBMED:34809644 Using the 'huddle' to enhance interprofessional teamwork among nursing students through a podcast: a qualitative and exploratory pilot study. Background: Interprofessional teamwork is crucial for fostering healthcare performance and for minimizing adverse events. The daily huddle is an important arena for interprofessional interaction and communication between nurses and physicians in hospitals. Although strongly rooted in clinical practice, the huddle does not seem to be a prioritized area in nursing education programs. Taking part in a huddle is traditionally something nursing students learn in their clinical studies. Therefore, there is a need for learning tools that can provide nursing students with quality-assured training that can improve their preparation for interprofessional teamwork and strengthen the link between the educational institution and the field of practice. In this study, we have developed and tested a podcast to increase nursing students' competence in interprofessional teamwork when participating in huddles. The aim of the pilot study was to explore nursing students' experiences with utilizing a huddle-focused podcast as a learning tool during their clinical practice studies in the hospital. Method: This qualitative and exploratory pilot study used focus group interviews.
Eleven third-year nursing students who had listened to the podcast during their practical studies at a medical hospital ward were included. The interviews were subjected to content analysis. Result: The analysis identified four categories that resonated across all participants in the focus group interviews: 1. understanding one's own role in the huddle; 2. being encouraged to speak up; 3. using the huddle as a flexible learning tool; and 4. being authentic but not always realistic. Conclusion: Findings indicate that the huddle-focused podcast seems to be valuable for nursing students learning about interprofessional teamwork. The podcast seemed especially useful in helping the students to understand their own role and to speak up in the huddle meetings. The positive experiences with the flexibility of the podcast learning tool are promising for use in other educational settings. Answer: Based on the abstracts provided, it appears that the method of delivering teamwork training to nursing and medical students may not be the most critical factor in improving teamwork knowledge and attitudes. A randomized controlled trial comparing four pedagogical methods (didactic, audience response didactic, role play, and human patient simulation) found that all methods significantly improved teamwork knowledge and attitudes, but no single modality was superior (PUBMED:20427311). Similarly, an online, interactive TeamSTEPPS simulation was found to be as effective as an in-person simulation in increasing medical students' TeamSTEPPS knowledge and attitudes (PUBMED:35515744). Furthermore, incorporating TeamSTEPPS® strategies into undergraduate education has been shown to be effective in increasing student knowledge and improving attitudes toward interdisciplinary teamwork (PUBMED:34146464). A longitudinal team training intervention using TeamSTEPPS® also raised nursing students' awareness of teamwork as an essential component of professional nursing (PUBMED:37159110). Additionally, implementing a TeamSTEPPS® team training program improved Bachelor of Nursing students' attitudes toward teamwork (PUBMED:34715563). The use of a podcast as a learning tool for nursing students to participate in huddles, an important aspect of interprofessional teamwork, was also found to be valuable in helping students understand their role and speak up in team meetings (PUBMED:34809644). Moreover, a multiplayer game designed for training interprofessional teamwork skills facilitated the application of teamwork principles among students and experts (PUBMED:36485016). In conclusion, while the method of delivering teamwork training may vary, the key takeaway is that various educational modalities can be effective in improving teamwork knowledge and attitudes among nursing and medical students. Institutions can choose the method that best fits their resources and objectives, as the evidence does not point to a single superior method (PUBMED:20427311).
Instruction: An epistemic community comes and goes? Abstracts: abstract_id: PUBMED:19236697 An epistemic community comes and goes? Local and national expressions of heart health promotion in Canada. Background: The objective of this study is to examine the existence and shape of epistemic communities for (heart) health promotion at the international, national, provincial and regional levels in Canada. An epistemic community may be defined as a network of experts with an authoritative claim to policy-relevant knowledge in their area of expertise. Methods: An interpretive policy analysis was employed using 60 documents (48 provincial, 8 national and 4 international) and 66 interviews (from 5 Canadian provinces). These data were entered into NUD*IST, a qualitative software analysis package, to assist in the development of codes and themes. These codes form the basis of the results. Results: A scientific and policy epistemic community was identified at the international and Canadian federal levels. Provincially and regionally, the community is present as an idea but its implementation varies between jurisdictions. Conclusion: The importance of economic, political and cultural factors shapes the presence and shape of the epistemic community in different jurisdictions. The community waxes and wanes but appears robust. abstract_id: PUBMED:37360965 Epistemic Health, Epistemic Immunity and Epistemic Inoculation. This paper introduces three new concepts: epistemic health, epistemic immunity, and epistemic inoculation. Epistemic health is a measure of how well an entity (e.g. person, community, nation) is functioning with regard to various epistemic goods or ideals. It is constituted by many different factors (e.g. possessing true beliefs, being disposed to make reliable inferences), is improved or degraded by many different things (e.g. research funding, social trust), and many different kinds of inquiry are relevant to its study. Epistemic immunity is the robustness with which an entity is resistant to performing certain kinds of epistemic activity, such as questioning certain ideas, believing certain sources, or making certain inferences. Epistemic inoculation occurs when social, political or cultural processes cause an entity to become immune to engaging in certain epistemic activities. After outlining each of these concepts, we close by considering some of the risks associated with attempts to improve others' epistemic health. abstract_id: PUBMED:29792116 The epistemic culture in an online citizen science project: Programs, antiprograms and epistemic subjects. In the past decade, some areas of science have begun turning to masses of online volunteers through open calls for generating and classifying very large sets of data. The purpose of this study is to investigate the epistemic culture of a large-scale online citizen science project, the Galaxy Zoo, that turns to volunteers for the classification of images of galaxies. For this task, we chose to apply the concepts of programs and antiprograms to examine the 'essential tensions' that arise in relation to the mobilizing values of a citizen science project and the epistemic subjects and cultures that are enacted by its volunteers. Our premise is that these tensions reveal central features of the epistemic subjects and distributed cognition of epistemic cultures in these large-scale citizen science projects. abstract_id: PUBMED:34363398 Counterstorytelling as Epistemic Justice: Decolonial Community-based Praxis from the Global South.
In this paper, we present community-anchored counterstorytelling as a form of epistemic justice. We, the Miya Community Research Collective, engage in counterstorytelling as a means of resisting and disrupting dehumanization of Miya communities in Northeast India. Miya communities have a long history of dispossession and struggle, from forced displacement by British colonial rulers in the early 19th century to the present, where they face imminent threats of statelessness. Against this backdrop, we theorize "in the flesh" to interrogate knowledges and representations systematically deployed to dispossess Miya people. Simultaneously, we uplift stories and endeavors that (re)humanize Miya people, creating/claiming cultural, knowledge, and political spaces that center peoples' struggles and resistance. Across these stories, we offer counterstorytelling as a powerful mode of recentering knowledges from the margins: a decolonial alternative to neoliberal epistemes that maintain institutions/universities as centers of knowledge production. abstract_id: PUBMED:32599412 Epistemic virtues and data-driven dreams: On sameness and difference in the epistemic cultures of data science and psychiatry. Data science and psychiatry have diverse epistemic cultures that come together in data-driven initiatives (e.g., big data, machine learning). The literature on these initiatives seems to either downplay or overemphasize epistemic differences between the fields. In this paper, we study the convergence and divergence of the epistemic cultures of data science and psychiatry. This approach is more likely to capture where and how the cultures differ and gives insights into how practitioners from both fields find ways to work together despite their differences. We introduce the notions of "epistemic virtues" to focus on epistemic differences ethnographically, and "trading zones" to concentrate on how differences are negotiated. This leads us to the following research question: how are epistemic differences negotiated by data science and psychiatry practitioners in a hospital-based data-driven initiative? Our results are based on an ethnographic study in which we observed a Dutch psychiatric hospital department developing prediction models of patient outcomes based on machine learning techniques (September 2017 - February 2018). Many epistemic virtues needed to be negotiated, such as completeness or selectivity in data inclusion. These differences were traded locally and temporarily, stimulated by shared epistemic virtues (such as a systematic approach), boundary objects and socialization processes. Trading became difficult when virtues were too diverse, differences were enlarged by storytelling and parties did not have the time or capacity to learn about the other. In the discussion, we argue that our combined theoretical framework offers a fresh way to study how cooperation between diverse practitioners goes and where it can be improved. We make a call for bringing epistemic differences into the open as this makes a grounded discussion possible about the added value of data-driven initiatives and the role they can play in healthcare. abstract_id: PUBMED:32726473 Disentangling the process of epistemic change: The role of epistemic volition. Background: Many interventions on epistemic beliefs (i.e., individual beliefs about knowledge and knowing) are based on Bendixen and Rule's Integrative Model for Personal Epistemology Development. Empirically, however, the model is still insufficiently validated.
This is especially true for its epistemic volition component, a will or desire to actively change one's beliefs. Aims: To experimentally scrutinize the role of epistemic volition, we investigated (incremental) effects on epistemic change of an epistemic volition intervention. Sample: 412 psychology students enrolled at German universities completed the study. Methods: We employed a randomized pre-post design with three experimental groups that differed in the administered epistemic volition and resolvable controversies interventions. The purpose of the latter was to initiate an epistemic change process, thereby laying the foundation for the epistemic volition intervention. Both data collection and interventions were conducted online. In addition to self-report measures, we applied a complementary source evaluation task to analyse epistemic change. Results: Even though we found small- to medium-sized changes in epistemic beliefs, these changes did not differ between experimental conditions. Exploratory analyses suggested, however, that source evaluation task performance might have been promoted by the epistemic volition intervention and that, across experimental groups, manipulation check measures on both interventions interacted positively. Conclusion: Ultimately, we failed to separate the effects that our epistemic volition intervention had on epistemic change from those of the resolvable controversies intervention. Nonetheless, our study makes some strong contributions to, and interconnects, the growing bodies of research on epistemic change and multiple source use.
In epistemology, the concept of knowledge is of distinctive interest. This fact is also reflected in the discussion of epistemic value, which focuses to a large extent on the value problem of knowledge. This discussion suggests that knowledge has an outstanding value among epistemic standings because its value exceeds the value of its constitutive parts. I will argue that the value of knowledge is not outstanding by presenting epistemic standings of checking, transferring knowledge, and proving in court, whose values exceed the value of knowledge in certain contexts. Moreover, the values of these other epistemic standings do not always rely on the value of knowledge. In terms of value, knowledge is not an outstanding epistemic concept. Hence, in terms of value we cannot find support for the privileged position that knowledge enjoys in epistemology. abstract_id: PUBMED:31258205 Epistemic relativism, scepticism, pluralism. There are a number of debates that are relevant to questions concerning objectivity in science. One of the eldest, and still one of the most intensely fought, is the debate over epistemic relativism. All forms of epistemic relativism commit themselves to the view that it is impossible to show in a neutral, non-question-begging way that one "epistemic system", that is, one interconnected set of epistemic standards, is epistemically superior to (all) others. I shall call this view "No-metajustification". No-metajustification is commonly taken to deny the objectivity of standards. In this paper I shall discuss two currently popular attempts to attack "No-metajustification". The first attempt attacks no-metajustification by challenging a particular strategy of arguing in its defence: this strategy involves the ancient Pyrrhonian "Problem of the Criterion". The second attempt to refute No-metajustification targets its metaphysical underpinning: to wit, the claim that there are, or could be, several fundamentally different and irreconcilable epistemic systems. I shall call this assumption "Pluralism". I shall address three questions with respect to these attempts to refute epistemic relativism by attacking no-metajustification: (i) Can the epistemic relativist rely on the Problem of the Criterion in support of No-metajustification? (ii) Is a combination of Chisholmian "particularism" (i.e. the insistence that we know lots of things) and epistemic naturalism an effective weapon against No-metajustification? And (iii) is Pluralism a defensible assumption? Answer: The concept of an epistemic community, as a network of experts with an authoritative claim to policy-relevant knowledge in their area of expertise, has been observed to have varying degrees of presence and influence across different levels and jurisdictions. In the context of heart health promotion in Canada, a scientific and policy epistemic community was identified at the international and Canadian federal levels. However, at the provincial and regional levels, the presence and implementation of this community varied, influenced by economic, political, and cultural factors. Despite these variations, the epistemic community for heart health promotion appears to be robust, suggesting that it can wax and wane but still maintain a significant influence over time (PUBMED:19236697). The resilience and adaptability of epistemic communities can be further understood through the concepts of epistemic health, epistemic immunity, and epistemic inoculation.
Epistemic health measures how well an entity functions with regard to various epistemic goods or ideals, such as possessing true beliefs or being disposed to make reliable inferences. Epistemic immunity refers to the robustness with which an entity resists certain kinds of epistemic activities, and epistemic inoculation occurs when social, political, or cultural processes cause an entity to become immune to engaging in certain epistemic activities. These concepts highlight the dynamic nature of epistemic communities and their susceptibility to change due to various influences (PUBMED:37360965). In summary, an epistemic community can indeed come and go, influenced by a range of factors that shape its presence and effectiveness. The community's robustness and the interplay of epistemic health, immunity, and inoculation suggest that while the influence of an epistemic community may fluctuate, it can persist and adapt to changing circumstances.
Instruction: Physical activity for bone health in inactive teenage girls: is a supervised, teacher-led program or self-led program best? Abstracts: abstract_id: PUBMED:16982385 Physical activity for bone health in inactive teenage girls: is a supervised, teacher-led program or self-led program best? Purpose: To investigate the effect of a six-month teacher-led osteogenic physical activity program, vs. a self-led activity program, on ultrasound measurements of bone in inactive teenage girls. Methods: Ninety sedentary girls [mean (SD) age 16.3 (0.6) years] were identified from 300 assessed for physical activity across five schools in southeast Ireland. Schools were matched and randomly assigned to a teacher-led physical activity (TLPA) program, a self-led physical activity (SLPA) program, or a control group. Broadband ultrasound attenuation (BUA), speed of sound (SOS), and os calcis stiffness index (OCSI) were measured using a portable ultrasound machine. Anthropometry, aerobic fitness, calcium intake, and physical activity were assessed, and focus groups held one month after program completion. Descriptive statistics, paired t-tests, and analysis of variance were used to analyze the data. Results: Both intervention groups demonstrated significant improvements (p < .05) in BUA, SOS, OCSI and aerobic fitness, i.e., TLPA: +14.9%, +21.9%, +15.9%, and +8.5%, respectively, and SLPA: +10.6%, +30.3%, +15.6%, and +5.1%, respectively, with no change in controls. Differences between intervention groups and controls were significant for BUA and OCSI (p < .05). TLPA and SLPA groups engaged in an average of 4.5 and 3.4 hours/week of physical activity, respectively, over the intervention period. The SLPA group continued to exercise after the intervention had ceased, whereas the TLPA group did not. Conclusions: Previously inactive teenage girls can adhere to an osteogenic activity program whether supervised or directing their own activity. Longer-term, sustainable initiatives with this age group are needed and might focus on developing personal skills for physical activity. abstract_id: PUBMED:30407077 Evaluation of the Effectiveness of a 3-Year, Teacher-Led Healthy Lifestyle Program on Eating Behaviors Among Adolescents Living in Day School Hostels in Malaysia. Background: Independence gained during adolescence may be associated with unhealthy eating behaviors. Although malnutrition among adolescents is evident, studies on eating behaviors among adolescents are scarce. Objective: To determine the effectiveness of a teacher-led Healthy Lifestyle Program on eating behaviors among adolescents in Malaysia. Methods: This was a cluster randomized controlled trial (conducted in 2012 to 2014), with 100 schools randomly selected from 721 schools, then assigned to 50 intervention schools and 50 control schools. A Healthy Eating and Be Active among Teens (HEBAT) module was developed for pretrained teachers to deliver a Healthy Lifestyle Program on eating behaviors among adolescents. Eating behaviors of the respondents were determined using the Eating Behaviors Questionnaire. Linear Mixed Model analysis and χ² tests were used to determine within- and between-group effects of studied variables. Results: A total of 4277 respondents participated in this study, with 2635 samples involved in the final analysis, comprised of 921 intervention and 1714 control respondents. There were 32.4% (36.4%) males and 67.6% (63.6%) females in the intervention (control) group.
Mean age was comparable between the groups (intervention = 12.98 years; control = 12.97 years). The majority of respondents skipped meals at baseline (intervention = 74.7%; control = 79.5%). After the program, intervention respondents had higher consumption frequency of lunch, dinner, and mid-morning snack but a lower consumption frequency of late-evening snack and meal skipping behaviors than their control counterparts. Conclusion: The teacher-led Healthy Lifestyle Program was effective in reducing meal-skipping behaviors among Malaysian adolescents. abstract_id: PUBMED:37575452 Factors associated with implemented teacher-led movement and physical activity in early childhood education and care. Movement and physical activity (MoPA) is critical for children's development and health. This study aimed to explore early childhood education and care (ECEC) educators' reported frequency of implemented gross motor and physical activities (MoPA) among children in ECEC, as well as the educators' reported personal physical activity (PA) levels in leisure time. A cross-sectional survey was performed in 68 preschools in southern Sweden. Data were obtained from questionnaires completed by 359 ECEC educators. The participation rate was 61%. About two-thirds offered MoPA once a week or less often, while one quarter offered MoPA at least every other day. Educators who reported personal PA three times or more per week offered MoPA for the children at least every other day to a higher extent (37%) compared to colleagues who reported personal PA once or twice a week (26%) or colleagues who reported that they were never or seldom active (18%) (p = 0.034). The results from multiple logistic regression analysis showed that reported implemented MoPA among children in ECEC was significantly associated with the educators' perceptions that free play improved children's gross motor skills (OR 2.7), the educators' perceptions of needed curricular guidelines for MoPA (OR 2.1), the educators' own leisure PA level (OR 2.0) and the educators' perceptions that adequate gross motor skills were not learned at home (OR 0.4). Teacher-led MoPA occurs sparingly during the preschool day and the teachers believe that the children get sufficient MoPA in free play. The children are expected to develop their motor skills to a sufficient extent during the short moments of offered outdoor play. Teachers who are physically active in their leisure time seem to offer gross motor training for the children to a higher extent than less active or inactive colleagues. abstract_id: PUBMED:33573594 Peer-led exercise program for ageing adults to improve physical functions - a randomized trial. Background: A peer-led exercise program is one way to empower people sharing similar characteristics to encourage others to be active, but there is a lack of evidence that these programs have physical function and other benefits when delivered to ageing adults. Methods: This randomized controlled trial lasting 12 weeks proposed a peer-led exercise program offered to 31 adults aged 50 and above, twice a week, by a trained leader of the same age from March to May 2019. The program was offered for free with limited space and equipment. Valid tests of physical function (e.g., 30-s chair stand, 6-min walk test) were used to assess the functional benefits. Psychosocial outcomes were assessed using self-reported questionnaires and metabolic outcomes via a fasted blood draw.
Results: A significant difference was found between pre- and post-values in most physical function tests in the intervention group (all p < 0.05). When adjusted for potential confounders, the intervention group was associated with a significantly greater improvement on the chair stand test (β = .26; p < 0.001; r² = 0.26), the arm curl (β = .29; p < 0.001; r² = 0.49), as well as the 6-min walk test (β = -.14; p < 0.001; r² = 0.62) compared with the control group. Using a repeated-measures generalized linear model, the interaction between the changes and the group was significant for all three tests. Benefits were also observed for participants' stress level and perceived health in the intervention group compared to the control. Finally, no significant difference was observed between groups for metabolic health. Conclusions: The current work suggests that a 12-week peer-led exercise program can improve physical function for adults aged 50 and above. Trial Registration: NCT03799952 (ClinicalTrials.gov) 12/20/2018. abstract_id: PUBMED:32690397 An evaluation of the facilitator training to implement 'Taking charge of my life and health', a peer-led group program to promote self-care and patient empowerment in Veteran participants. Objective: We developed a peer-led group program for Veterans called Taking Charge of My Life and Health (TCMLH) that emphasizes patient education, goal setting, shared decision making, and whole person care. Our aim was to conduct an evaluation of a facilitator training course to deliver TCMLH in VA sites. Methods: Repeated measures ANOVA models were used to examine change over three timepoints (pre-test, post-test, and two-month follow-up) in outcomes of attitudes, knowledge, skills, and self-efficacy related to patient empowerment, skills acquisition, self-care strategies, and curriculum facilitation. Qualitative data analysis of participant feedback was used to identify potential training adaptations and barriers to TCMLH delivery. Results: Our sample comprised 70 trainees who completed all three assessments. Participants reported high levels of training satisfaction, quality, and utility, and sustained improvements in knowledge of Whole Health, self-efficacy for group facilitation, and self-efficacy for using Whole Health concepts and tools. Implementation barriers included challenges related to group management and site logistics. Conclusion: The facilitator training course improved knowledge and self-efficacy associated with successful peer-led program delivery and identified opportunities to improve the training course and TCMLH dissemination. Practice Implications: Findings provide insights on the design and implementation of training models to support peer-led programs. abstract_id: PUBMED:24097927 Comparison between peer-led and teacher-led education in tuberculosis prevention in rural middle schools in Chongqing, China. The aim of this study was to investigate the efficacy of tuberculosis (TB) education through a comparison of peer-led and teacher-led methods of education about TB prevention among middle school students in rural Chongqing, China. A preintervention and postintervention questionnaire survey was conducted in 2 different middle school student groups to measure changes in knowledge, attitude, and practice (KAP) status of those students before and after each TB education program. Of 1265 students participating in the preintervention survey, 1176 completed the postintervention survey.
KAP scores of both the peer-led and teacher-led groups improved after intervention by as much as 2 times relative both to preintervention scores and to those of the control group (P < .01). KAP scores at immediate evaluation were higher than those at long-term evaluation in the teacher-led education group (P < .01). The teacher-led group had a larger improvement than the peer-led group in practice scores (P < .01) in the immediate effect evaluation. abstract_id: PUBMED:33129630 Effect of nurse-led program on the exercise behavior of coronary artery patients: Pender's Health Promotion Model. Objective: To determine the effect of a nurse-led program based on Pender's Health Promotion Model on the exercise behaviors of coronary artery patients. Methods: The two-arm parallel, single-blind, randomized controlled trial was conducted with a total of 62 patients, intervention (n = 32) and control group (n = 30). The intervention group received a nurse-led program based on Pender's Health Promotion Model, while the routine follow-ups of the control group continued. Health perception, perceived exercise self-efficacy, perceived exercise benefits/barriers, exercise-related effect, and exercise frequency and time were assessed at baseline and at the 4th, 8th, and 12th weeks. The data were evaluated by frequency, percentage, median, mean and standard deviation, chi-square, Friedman, and Mann-Whitney U tests. Results: Health perception (62.6 ± 9.5; median: 67.0; p < 0.001), perceived exercise benefit (105.8 ± 7.4; median: 107.0; p < 0.001), perceived exercise self-efficacy (71.2 ± 5.4; median: 71.5; p < 0.05), exercise-related effect (31.6 ± 6.0; median: 34.0; p < 0.05), exercise frequency (4.8 ± 2.2; median: 6.0 days/week; p < 0.05) and time (105.9 ± 53.6; median: 130.0 min/week; p < 0.05) were higher, and perceived barriers (43.1 ± 3.9; median: 42.0; p < 0.001) were lower, in the intervention group at the 12th week. Conclusions: The nurse-led program has been shown to increase exercise behavior in the intervention group. Practice Implications: Since it enables patients to gain and maintain exercise behavior, the model is recommended for integration into clinical practice. abstract_id: PUBMED:33530598 A Nurse-Led Education Program for Pneumoconiosis Caregivers at the Community Level. Pneumoconiosis is an irreversible chronic disease. With functional limitations and an inability to work, pneumoconiosis patients require support from family caregivers. However, the needs of pneumoconiosis caregivers have been neglected. This study aimed to evaluate the effectiveness of a nurse-led education program, which involved four weekly 90-min workshops led by an experienced nurse and guided by Orem's self-care deficit theory. A single-group, repeated-measure study design was adopted. Caregivers' mental health (Hospital Anxiety and Depression Scale, HADS, four single items for stress, worriedness, tiredness, and insufficient support), caregiving burdens (caregiving burden scale, CBS), and unmet direct support and enabling needs (Carer Support Needs Assessment Tool, CSNAT) were measured at baseline (T0), immediately after (T1), and one month after intervention (T2); 49, 41, and 28 female participants completed the T0, T1, and T2 measurements. Mean age was 65.9 years old (SD 10.08) with a range between 37 and 85 years old. The program improved the caregivers' mental wellbeing, and reduced their caregiving burdens and their unmet support and enabling needs, both immediately (T1) and one month after the intervention (T2).
In particular, the intervention improved the caregivers' mental wellbeing significantly, specifically depression symptoms, stress, and tiredness immediately after the intervention, and reduced most of their unmet support needs and unmet enabling needs one month after the intervention. This was the first nurse-led program for pneumoconiosis caregivers and should serve as a foundation for further studies to test the program with robust designs. abstract_id: PUBMED:33788603 Examining the Impact of a Peer-Led Group Program for Veteran Engagement and Well-Being. Objectives: Veterans often suffer from multiple chronic illnesses, including mental health disorders, diabetes, obesity, and cardiovascular disease. The improvement of engagement in their own health care is critical for enhanced well-being and overall health. Peer-led group programs may be an important tool to provide support and skill development. We conducted a pilot study to explore the impact of a peer-led group-based program that teaches Veterans to become empowered to engage in their own health and well-being through mindful awareness practices, self-care strategies, and setting life goals. Design: Surveys were collected before and immediately after participation in the Taking Charge of My Life and Health (TCMLH) peer-led group program. Settings/location: Sessions were held in non-clinical settings within a VA medical center in the Midwest. Subjects: Our sample comprised 48 Veteran participants who were enrolled in TCMLH and completed a pre-test and post-test survey. Intervention: TCMLH is a 9-week peer-led group program with an established curriculum that leverages the power of peer support to improve patient engagement, empowerment, health, and well-being among Veterans through Whole Health concepts, tools, and strategies. Programs were led by 1 of 12 trained Veteran peer facilitators. Outcome measures: Program impact on Veteran well-being was assessed by pre-post measures, including the Patient Activation Measure (PAM), the Perceived Stress Scale (PSS), the Patient-Reported Outcomes Measurement Information System Scale (PROMIS-10), the Perceived Health Competency Scale (PHCS), and the Life Engagement Test (LET). Results: There was a significant decrease in perceived stress (PSS score). Significant improvements were also seen in mental health and quality of life (PROMIS-10), participant accordance with the statement "I have a lot of reasons for living" (LET), and patient engagement (PAM score). Conclusions: As the Whole Health movement expands, both in VA and elsewhere, our findings suggest that guiding patients in an exploration of their personal values and life goals can help in key areas of patient engagement and mental and physical health outcomes. Further study is warranted, and expansion of the TCMLH program will allow for a more rigorous evaluation with a larger sample size. abstract_id: PUBMED:36895141 Acceptability of a peer-led self-management program for people living with chronic obstructive pulmonary disease in regional Southern Tasmania in Australia: A qualitative study. Objectives: People living with chronic obstructive pulmonary disease (COPD) in regional communities experience a higher disease burden and have poorer access to support services. This study sought to investigate the acceptability of a peer-led self-management program (SMP) in regional Tasmania, Australia.
Methods: This descriptive qualitative study, underpinned by interpretivism, used semi-structured one-to-one interviews to gather data to explore COPD patients' views of peer-led SMPs. Purposeful sampling recruited a sample of 8 women and 2 men. Data were analysed using a thematic approach. Results: The three final themes, 'Normality and Living with the disease', a 'Platform for sharing' and 'Communication mismatch', suggest that peer-led SMPs could offer an opportunity to share experiences. The themes also suggest that COPD often manifested as a deviation from 'normal life'. Communication was often felt to be ambiguous, leading to tension between the health experts and people living with the condition. Discussion: Peer-led SMPs have the potential to provide the much-needed support for people living with COPD in regional communities. This will ensure that they are empowered to live with the condition with dignity and respect. Benefits of exchanging ideas and socialisation should not be ignored and may enhance sustainability of SMPs. Answer: The study described in abstract PUBMED:16982385 investigated the effect of a six-month teacher-led osteogenic physical activity program versus a self-led activity program on bone health in inactive teenage girls. The results showed that both intervention groups, the teacher-led physical activity (TLPA) program and the self-led physical activity (SLPA) program, demonstrated significant improvements in ultrasound measurements of bone, aerobic fitness, and hours per week of physical activity. However, after the intervention ceased, the SLPA group continued to exercise, whereas the TLPA group did not. This suggests that while both supervised and self-led programs can be effective in the short term, self-led programs may have better sustainability in promoting long-term physical activity among inactive teenage girls. Therefore, in terms of long-term adherence and continued physical activity, a self-led program might be considered better for bone health in inactive teenage girls.
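The trial statistics in PUBMED:16982385 follow a common pattern: paired t-tests for within-group pre/post change and analysis of variance for the between-group comparison. A minimal sketch of that pattern is given below; all numbers are seeded synthetic stand-ins, not the trial's measurements, and only the TLPA/SLPA/control labels are taken from the abstract.

```python
# Sketch of the paired t-test + one-way ANOVA pattern on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # participants per group (the trial enrolled 90 girls in 3 groups)
groups = ["TLPA", "SLPA", "control"]
mean_gain = np.array([0.149, 0.106, 0.0])  # illustrative % change in BUA

pre = rng.normal(100.0, 10.0, size=(3, n))                     # baseline BUA
post = pre * (1.0 + mean_gain[:, None]) + rng.normal(0.0, 3.0, size=(3, n))

for name, before, after in zip(groups, pre, post):
    t_stat, p_val = stats.ttest_rel(after, before)             # within-group paired t-test
    print(f"{name}: mean change {np.mean(after - before):+.1f}, p = {p_val:.3g}")

# One-way ANOVA on the change scores across the three groups
f_stat, p_anova = stats.f_oneway(*(post - pre))
print(f"ANOVA across groups: F = {f_stat:.1f}, p = {p_anova:.3g}")
```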
Instruction: Diagnostic evaluation for patients with ischemic stroke: are there sex differences? Abstracts: abstract_id: PUBMED:32106749 Differences in Diagnostic Evaluation in Women and Men After Acute Ischemic Stroke. Background Sex differences have been found in stroke risk factors, incidence, treatment, and outcomes. There are conflicting data on whether diagnostic evaluation for stroke may differ between men and women. Methods and Results We performed a retrospective cohort study using inpatient and outpatient claims between 2008 and 2016 from a nationally representative 5% sample of Medicare beneficiaries. We included patients ≥65 years old and hospitalized with ischemic stroke, defined by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) and ICD-10-CM diagnosis codes. Logistic regression was used to determine the association between female sex and the odds of diagnostic testing and specialist evaluation, adjusted for age, race, and number of Charlson comorbidities. Among 78,822 patients with acute ischemic stroke, 58.3% (95% CI, 57.9-58.6%) were women. Female sex was associated with decreased odds of intracranial vessel imaging (odds ratio [OR]: 0.94; 95% CI, 0.91-0.97), extracranial vessel imaging (OR: 0.89; 95% CI, 0.86-0.92), heart-rhythm monitoring (OR: 0.92; 95% CI, 0.87-0.98), echocardiography (OR: 0.92; 95% CI, 0.89-0.95), evaluation by a neurologist (OR: 0.94; 95% CI, 0.91-0.97), and evaluation by a vascular neurologist (OR: 0.94; 95% CI, 0.90-0.97), after adjustment for age, race, and comorbidities. These findings were unchanged in separate sensitivity analyses excluding patients who died during the index hospitalization or were discharged to hospice and excluding patients with atrial fibrillation diagnosed before their index stroke. Conclusions In a nationally representative cohort of Medicare beneficiaries, we found that women with acute ischemic stroke were less likely to be evaluated by stroke specialists and less likely to undergo standard diagnostic testing compared with men. abstract_id: PUBMED:19295208 Diagnostic evaluation for patients with ischemic stroke: are there sex differences? Background And Purpose: Differences in the management of women and men with acute coronary symptoms are well documented, but relatively little is known about practices for patients with ischemic stroke. We sought to determine whether there are sex-associated differences in the utilization of diagnostic tests for ischemic stroke patients treated at academic hospitals in the United States. Methods: Medical records were abstracted for consecutive ischemic stroke patients admitted to 32 US academic medical centers from January through June, 2004, as part of the University HealthSystem Consortium Ischemic Stroke Benchmarking Project. We compared the utilization rates of diagnostic tests including neuroimaging (CT or MRI), electrocardiogram (ECG), ultrasound of the carotid arteries, and echocardiography (transthoracic or transesophageal) for women and men. Multivariate logistic regression was used to test for sex differences with adjustment for potential confounders. Results: The study included 1,256 ischemic stroke patients (611 women; 645 men; mean age 66.6 ± 14.6 years; 56% white).
There were no differences between women and men in the use of neuroimaging (odds ratio, OR = 1.37; 95% confidence interval, CI = 0.58-3.24), ECG (OR = 1.00, 95% CI = 0.70-1.44), carotid artery ultrasound (OR = 0.93, 95% CI = 0.72-1.21) or echocardiography (OR = 0.70, 95% CI = 0.70-1.22). The results were similar after covariate adjustment. Conclusions: Women and men admitted to US academic hospitals receive comparable diagnostic evaluations, even after adjusting for sociodemographic and clinical factors. abstract_id: PUBMED:10757831 Diagnostic evaluation of stroke. Diagnostic testing in patients with ischemic stroke serves many purposes, including confirmation of the diagnosis and providing clues as to possible causes. Evaluation of the cerebral vasculature, the heart, the blood coagulation system, and selected other diagnostic tests may point to a mechanism of stroke which helps determine treatment and prognosis. With the recent advent of acute interventions for ischemic stroke, diagnostic testing is now an important component in the emergency management of stroke. In this article, the authors will review the standard approach to diagnostic testing for patients with ischemic stroke or transient ischemic attack, and new developments in neuro-imaging and their use in acute stroke assessment. abstract_id: PUBMED:22645703 Racial differences by ischemic stroke subtype: a comprehensive diagnostic approach. Background. Previous studies have suggested that black populations have more small-vessel and fewer cardioembolic strokes. We sought to analyze racial differences in ischemic stroke subtype employing a comprehensive diagnostic workup with magnetic resonance imaging (MRI)-based evaluation including diffusion-weighted imaging (DWI). Methods. 350 acute ischemic stroke patients admitted to an urban hospital with standardized comprehensive diagnostic evaluations were retrospectively analyzed. Ischemic stroke subtype was determined by three Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification systems. Results. We found similar proportions of cardioembolic and lacunar strokes in the black and white cohort. The only subtype category with a significant difference by race was "stroke of other etiology," more common in whites. Black stroke patients were more likely to have an incomplete evaluation, but this did not reach significance. Conclusions. We found similar proportions by race of cardioembolic and lacunar strokes when employing a full diagnostic evaluation including DWI MRI. The relatively high rate of cardioembolism may have been underappreciated in black stroke patients when employing a CT approach to stroke subtype diagnosis. Further research is required to better understand the racial differences in frequency of "stroke of other etiology" and explore disparities in the extent of diagnostic evaluations. abstract_id: PUBMED:19164792 Gender-related differences in diagnostic evaluation and outcome of ischemic stroke in Poland. Background And Purpose: We compared the diagnostic evaluation and outcome of ischemic stroke between men and women in a large cohort of Polish patients. Methods: Our study included 1488 consecutive patients (755 women and 733 men) with ischemic stroke, treated in a single stroke unit between January 2002 and August 2007. We analyzed demographic factors, major risk factors for stroke, severity of neurological deficit on admission, diagnostic work-up performed during the hospital stay, and outcome on discharge.
Results: Women were older than men (70.9 ± 13.7 vs 66.2 ± 12.7 years; P < 0.001) and had greater neurological deficit on admission (median NIHSS score: 7 [3-13] vs 5 [3-10]; P < 0.001). They were also less likely to obtain good recovery on discharge (39.2% vs 49.9%; P < 0.001). Carotid ultrasound and echocardiography were performed more often in men (77.2% vs 68.7% and 52.4% vs 46.5%, respectively; P < 0.05). Lesser neurological deficit on admission, younger age, and lack of history of myocardial infarction or previous stroke, but not gender, were independent predictors of full diagnostic work-up. Conclusions: Gender does not influence the adequate diagnostic evaluation of ischemic stroke as an independent factor. abstract_id: PUBMED:38108258 Hospital-Level Variability in Reporting of Ischemic Stroke Subtypes and Supporting Diagnostic Evaluation in GWTG-Stroke Registry. Background: Secondary prevention of ischemic stroke (IS) requires adequate diagnostic evaluation to identify the likely etiologic subtype. We describe hospital-level variability in diagnostic testing and IS subtyping in a large nationwide registry. Methods And Results: We used the GWTG-Stroke (Get With The Guidelines-Stroke) registry to identify patients hospitalized with a diagnosis of acute IS at 1906 hospitals between January 1, 2016, and September 30, 2017. We compared the documentation rates and presence of risk factors, diagnostic testing, achievement/quality measures, and outcomes between patients with and without reported IS subtype. Recording of diagnostic evaluation was optional in all IS subtypes except cryptogenic, where it was required. Of 607,563 patients with IS, etiologic IS subtype was documented in 57.4% and missing in 42.6%. Both the rate of missing stroke pathogenesis and the proportion of cryptogenic strokes were highly variable across hospitals. Patients missing stroke pathogenesis less frequently had documentation of risk factors, evidence-based interventions, or discharge to home. The reported rates of major diagnostic testing, including echocardiography, carotid and intracranial vascular imaging, and short-term cardiac monitoring were <50% in patients with documented IS pathogenesis, although these variables were missing in >40% of patients. Long-term cardiac rhythm monitoring was rarely reported, even in cryptogenic stroke. Conclusions: Reporting of IS etiologic subtype and supporting diagnostic testing was low overall, with high rates of missing optional data. Improvement in the capture of these data elements is needed to identify opportunities for quality improvement in the diagnostic evaluation and secondary prevention of stroke.
We tested for ischemic stroke on thirty patients using both methods. For each patient the procedure produced an assessment of severity as an ordered set of three numbers in the interval [0, 1]. We measured the difference in diagnosis between the sequential and parallel diagnostic algorithms. The computations reveal systematic differences: The sequential procedure tends to under-diagnose and excludes any measure of interaction between pathologic elements. abstract_id: PUBMED:24699492 Cost and utility in the diagnostic evaluation of stroke. The diagnostic evaluation in a patient presenting with acute stroke has several purposes depending on the clinical circumstances. These include identifying stroke mimics, differentiating ischemic stroke from intracerebral hemorrhage in the acute setting, clarifying stroke localization, and determining the stroke mechanism to guide secondary prevention. The neurologist needs to be aware of the cost implications of different approaches to the diagnostic evaluation. abstract_id: PUBMED:16186523 Gender comparisons of diagnostic evaluation for ischemic stroke patients. Background: Sixty-two percent of all stroke deaths in the United States occur in women. We compared diagnostic evaluations by gender in ischemic stroke patients in a biethnic, population-based study. Methods: A random sample of patients with ischemic stroke identified between 2000 and 2002 by BASIC (Brain Attack Surveillance in Corpus Christi Project) were selected for this study (n = 381). Gender differences in the use of stroke diagnostic tests were assessed. Separate multivariable logistic regression models predicting diagnostic test use were constructed, adjusted for age, ethnicity, hypertension, atrial fibrillation, diabetes, history of stroke, coronary artery disease, having a primary care provider, discharge disposition, modified Rankin Scale score at discharge, and insurance status. Results: The study population consisted of 161 men and 220 women. Median age was 74.3 years. The respective proportions of males and females receiving any carotid artery evaluation were 71% and 62%; brain MRI, 43% and 41%; echocardiography, 57% and 48%; and EKG, 90% and 86%. Multivariable logistic models found that women were less likely to undergo echocardiography (odds ratio [OR] 0.64, CI: 0.42 to 0.98) and carotid evaluation (OR 0.57, CI: 0.36 to 0.91). There was no association of ischemic stroke subtype and gender to explain these results (p = 0.76). Conclusions: Despite controlling for explanatory variables, women with stroke were less likely to receive standard diagnostic tests vs men. Intervention is needed to increase access to quality stroke care for women. abstract_id: PUBMED:33184228 Sex Differences in Diagnosis and Diagnostic Revision of Suspected Minor Cerebral Ischemic Events. Objective: To describe sex differences in the presentation, diagnosis, and revision of diagnosis after early brain MRI in patients who present with acute transient or minor neurologic events. Methods: We performed a secondary analysis of a prospective multicenter cohort study of patients referred to neurology between 2010 and 2016 with a possible cerebrovascular event and evaluated with brain MRI within 8 days of symptom onset. Investigators documented the characteristics of the event, initial diagnosis, and final diagnosis. We used multivariable logistic regression analyses to evaluate the association between sex and outcomes. 
Results: Among 1,028 patients (51% women, median age 63 years), more women than men reported headaches and fewer reported chest pain, but there were no sex differences in other accompanying symptoms. Women were more likely than men to be initially diagnosed with stroke mimic (54% of women vs 42% of men, adjusted odds ratio (OR) 1.60, 95% confidence interval [CI] 1.24-2.07), and women were overall less likely to have ischemia on MRI (10% vs 17%, OR 0.52, 95% CI 0.36-0.76). Among 496 patients initially diagnosed with mimic, women were less likely than men to have their diagnosis revised to minor stroke or TIA (13% vs 20%, OR 0.53, 95% CI 0.32-0.88) but were equally likely to have acute ischemia on MRI (5% vs 8%, OR 0.56, 95% CI 0.26-1.21). Conclusions: Stroke mimic was more frequently diagnosed in women than men, but diagnostic revisions were common in both. Early brain MRI is a useful addition to clinical evaluation in diagnosing transient or minor neurologic events. Answer: The diagnostic evaluation for ischemic stroke appears to show sex differences, with several studies indicating that women may be less likely to receive certain diagnostic tests compared to men. A retrospective cohort study using Medicare claims data found that women with acute ischemic stroke were less likely to undergo standard diagnostic testing such as intracranial and extracranial vessel imaging, heart-rhythm monitoring, echocardiography, and evaluations by neurologists and vascular neurologists, even after adjusting for age, race, and comorbidities (PUBMED:32106749). Another study from Poland also reported that women were less likely to receive carotid ultrasound and echocardiography compared to men (PUBMED:19164792). However, a study from the United States involving academic hospitals found no significant differences between women and men in the use of diagnostic tests such as neuroimaging, ECG, carotid artery ultrasound, or echocardiography after adjusting for sociodemographic and clinical factors (PUBMED:19295208). Similarly, another study suggested that gender does not independently influence the adequacy of diagnostic evaluation for ischemic stroke (PUBMED:16186523). Despite these conflicting findings, it is clear that there is variability in the reporting of ischemic stroke subtypes and supporting diagnostic evaluation across hospitals, with a significant proportion of patients missing documentation of risk factors and diagnostic testing (PUBMED:38108258). Additionally, women with stroke were found to be less likely to receive standard diagnostic tests compared to men in a biethnic, population-based study, even after controlling for various explanatory variables (PUBMED:16186523). In summary, while some studies suggest that women may receive fewer diagnostic evaluations compared to men, other research indicates that diagnostic evaluations for ischemic stroke do not differ significantly by sex. The variability in findings may be influenced by factors such as hospital practices, geographic location, and study methodologies.
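Several of the abstracts above (e.g., PUBMED:32106749, PUBMED:16186523, PUBMED:33184228) report adjusted odds ratios from multivariable logistic regression. A minimal sketch of how such an estimate is obtained follows; the data frame, variable names (female, age, comorbidities), and coefficients are all invented for illustration, not taken from any registry.

```python
# Sketch: adjusted OR with 95% CI from a multivariable logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age": rng.normal(75.0, 7.0, n),
    "comorbidities": rng.poisson(2, n),
})
# Synthetic outcome: odds of receiving echocardiography, with a small
# negative coefficient on female sex built in for illustration.
lin = 0.5 - 0.08 * df["female"] - 0.01 * (df["age"] - 75) - 0.05 * df["comorbidities"]
df["echo"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

fit = smf.logit("echo ~ female + age + comorbidities", data=df).fit(disp=False)
or_female = np.exp(fit.params["female"])
ci_low, ci_high = np.exp(fit.conf_int().loc["female"])
print(f"adjusted OR for female sex: {or_female:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

Exponentiating the logistic coefficient and its confidence bounds is all an "adjusted OR (95% CI)" is; the adjustment comes from the other covariates included in the model formula.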
Instruction: Healthcare utilization and expenditures for chronic and acute conditions in Georgia: does benefit package design matter? Abstracts: abstract_id: PUBMED:25889249 Healthcare utilization and expenditures for chronic and acute conditions in Georgia: does benefit package design matter? Background: In 2007 the Georgian government introduced a full state-subsidized Medical Insurance Program for the Poor (MIP) to provide better financial protection and improved access for socially and financially disadvantaged citizens. Studies evaluating MIP have noted its positive impact on financial protection, but found only a marginal impact on improved access. To better assess whether the effect of MIP varies according to different conditions, and to identify areas for improvement, we explored whether MIP differently affects utilization and costs among chronic patients compared to those with acute health needs. Methods: Data were collected from two cross-sectional nationally representative household surveys conducted in 2007 and in 2010 that examined health care utilization rates and expenditures. Approximately 3,200 households were interviewed from each wave of both studies using a standardized survey questionnaire. Differences in health care utilization and expenditures between chronic and acute patients with and without MIP insurance were evaluated, using coarsened exact matching techniques. Results: Among patients with chronic illnesses, MIP did not affect either health service utilization or expenditures for outpatient drugs and reduction in provider fees. For patients with acute illnesses MIP increased the odds (OR = 1.47) that they would use health services. MIP was also associated with a 20.16 GEL reduction in provider fees for those with acute illnesses (p = 0.003) and a 15.14 GEL reduction in outpatient drug expenditure (p = 0.013). Among those reporting a chronic illness with an acute episode during the 30 days prior to the interview, MIP reduced expenditures on provider fees (B = -20.02 GEL) with marginal statistical significance. Conclusions: Our findings suggest that the MIP may have improved utilization and reduced costs incurred by patients with acute health needs, while chronic patients marginally benefit only during exacerbation of their illnesses. This suggests that the MIP did not adequately address the needs of the aging Georgian population where chronic illnesses are prevalent. Increasing MIP benefits, particularly for patients with chronic illnesses, should receive priority attention if universal coverage objectives are to be achieved. abstract_id: PUBMED:36327642 The association of multiple chronic conditions and healthcare expenditures among adults with epilepsy in the United States. Rationale: Epilepsy is a frequent neurologic condition with important financial strains on the US healthcare system. The co-occurrence of multiple chronic conditions (MCC) may have additional financial repercussions on this patient population. We aimed to assess the association of coexisting chronic conditions on healthcare expenditures among adult patients with epilepsy. Methods: We identified a total of 1,942,413 adults (≥18 years) with epilepsy using the clinical classification code 83 from the MEPS-HC (Medical Expenditure Panel Survey Household Component) database between 2003 and 2014. Chronic conditions were selected using the clinical classification system (ccs), and categorized into 0, 1, or ≥2 chronic conditions in addition to epilepsy.
We computed unadjusted healthcare expenditures per year and per individual (total direct healthcare expenditure, inpatient expenditure, outpatient expenditure, prescription medication expenditure, emergency room visit expenditure, home healthcare expenditure and other) by number of chronic conditions. We applied a two-part model with probit (probability of zero vs non-zero cost) and generalized linear model (GLM) gamma family and log link (for cost greater than zero) to examine the independent association between chronic conditions, and annual expenditures per individual, generating incremental costs with 0 chronic conditions as reference. Results: Over half of the patients with epilepsy had at least two chronic conditions (CC). Yearly, for each patient with one and two chronic conditions, unadjusted total healthcare expenditures were two times ($10,202; 95% CI $6,551-$13,853) to nearly three times ($21,277; 95% CI $12,971-$25,583) higher than those with no chronic conditions ($6,177; 95% CI $4,895-$7,459), respectively. In general, healthcare expenditures increased with the number of chronic conditions for pre-specified cost categories. The incremental (adjusted) total healthcare expenditure increased with the number of chronic conditions (1 CC vs 0 CC: $3,238; 95% CI $524-$5,851; p-value = 0.015 and ≥2 CC vs 0 CC: $8,145; 95% CI $5,935-$10,895; p-value < 0.001). In general, for all cost categories, incremental healthcare expenditures increased with the number of chronic conditions with the largest increment noted between those with 2 CC and those with 0 CC for inpatient ($2,025; 95% CI $867-$3,183), outpatient ($2,141; 95% CI $1,321-$2,962), and medication ($1,852; 95% CI $1,393-$2,310). Conclusion: Chronic conditions are frequent among adult patients with epilepsy and are associated with a dose-response increase in healthcare expenditure, a difference driven by inpatient, outpatient, and medication prescription expenditures. Greater coordination of epilepsy care accounting for the presence of multiple chronic conditions may help lower the cost of epilepsy. abstract_id: PUBMED:25912631 Healthcare costs of acute and chronic tonsillar conditions in the pediatric population in the United States. Objective: To determine the prevalence and healthcare costs associated with the diagnosis and treatment of acute and chronic tonsillar conditions (ACT) in children. Design: Cross-sectional analysis of the 2006, 2008, and 2010 Medical Expenditure Panel Surveys. Methods: Pediatric patients (age < 18 years) were examined from the above-mentioned database. From the linked medical conditions file, cases with a diagnosis of ACT were extracted. Ambulatory visit rates, prescription refills, and ambulatory healthcare costs were then compared between children with and without a diagnosis of ACT and acute versus chronic tonsillitis, with multivariate adjustment for age, sex, ethnicity, region, insurance coverage and comorbid conditions (e.g., asthma and otitis media). Results: A total of 74.3 million children (mean age 8.55 years, 51% male) were sampled (raw N = 28,873). Of these, 804,229 children (1.1 ± 0.1%) were diagnosed with ACT annually (mean age 7.24 years, 49.1% male); 64.6 ± 2.0% had acute tonsillitis diagnoses and 35.4 ± 2.0% suffered from chronic tonsillitis.
Children with ACT incurred an additional 2.3 office visits and 2.1 prescription fills (both p < 0.001) annually compared with those without ACT, adjusting for demographic variables and medical comorbidities, but did not have an increase in emergency department visits (p = 0.123). Children with acute tonsillar diagnoses carried total healthcare expenditures of $1303 ± 390 annually versus $2401 ± 618 for those with chronic tonsillitis (p = 0.193). ACT was associated with an incremental increase in total healthcare expense of $1685 per child, annually (p < 0.001). Conclusion: The diagnosis of ACT confers a significant incremental healthcare utilization and healthcare cost burden on children, parents and the healthcare system. With its prevalence in the United States, pediatric tonsillitis accounts for approximately $1.355 billion in incremental healthcare expense and is a significant healthcare utilization concern. Level Of Evidence: 2C. abstract_id: PUBMED:36868387 Association between multimorbidity trajectories, healthcare utilization, and health expenditures among middle-aged and older adults: China Health and Retirement Longitudinal Study. Background: To identify the latent groups of multimorbidity trajectories among middle-aged and older adults and examine their associations with healthcare utilization and health expenditures. Methods: We included adults aged ≥45 years who participated in the China Health and Retirement Longitudinal Study from 2011 to 2015 and were without multimorbidities (<2 chronic conditions) at baseline. Multimorbidity trajectories underlying 13 chronic conditions were identified using group-based multi-trajectory modeling based on the latent dimensions. Healthcare utilization included outpatient care, inpatient care, and unmet healthcare needs. Health expenditures included healthcare costs and catastrophic health expenditures (CHE). Random-effects logistic regression, random-effects negative binomial regression, and generalized linear regression models were used to examine the association between multimorbidity trajectories, healthcare utilization, and health expenditures. Results: Of the 5548 participants, 2407 developed multimorbidities during follow-up. Three trajectory groups were identified among those with new-onset multimorbidity according to the increasing dimensions of chronic diseases: "digestive-arthritic" (N = 1377, 57.21%), "cardiometabolic/brain" (N = 834, 34.65%), and "respiratory/digestive-arthritic" (N = 196, 8.14%). All trajectory groups had a significantly increased risk of outpatient care, inpatient care, unmet healthcare needs, and higher healthcare costs than those without multimorbidities. Notably, participants in the "digestive-arthritic" trajectory group had a significantly increased risk of incurring CHE (OR = 1.70, 95% CI: 1.03-2.81). Limitations: Chronic conditions were assessed using self-reported measures. Conclusions: The growing burden of multimorbidity, especially multimorbidities of digestive and arthritic diseases, was associated with a significantly increased risk of healthcare utilization and health expenditures. The findings may help in planning future healthcare and managing multimorbidity more effectively. abstract_id: PUBMED:32313672 Association of body mass index and osteoarthritis with healthcare expenditures and utilization. Objective: Osteoarthritis is highly prevalent and, on aggregate, is one of the largest contributors to US spending on hospital-based health care.
This study sought to examine body mass index (BMI)-related variation in the association of osteoarthritis with healthcare utilization and expenditures. Methods: This is a retrospective study using administrative insurance claims linked to electronic health records. Study patients were aged ≥ 18 years with ≥1 BMI measurement recorded in 2014, with the first (index) BMI ≥ 25 kg/m². Study outcomes and covariates were measured during a 1-year evaluation period spanning 6 months before and after index. Multivariable regression analyses examined the association of BMI with osteoarthritis prevalence, and the combined associations of osteoarthritis and BMI with osteoarthritis-related medication utilization, all-cause hospitalization, and healthcare expenditures. Results: A total of 256,459 patients (median age = 56 y) met study eligibility criteria; 14.8% (38,050) had osteoarthritis. In multivariable analyses, the adjusted prevalence of osteoarthritis increased with increasing BMI (12.7% in patients who were overweight [25.0-29.9 kg/m²] to 21.9% in patients with class III obesity [BMI ≥ 40 kg/m²], P < .001). Among patients with osteoarthritis, increasing BMI (from overweight to class III obesity) was associated with increased (all P < .01): utilization rates for analgesic medications (41.5-53.5%); rates of all-cause hospitalization (26.3%-32.0%); and total healthcare expenditures ($18,204-$23,372). Conclusion: The prevalence and economic burden of osteoarthritis grow with increasing BMI; primary prevention of weight-related osteoarthritis and secondary weight management may help to alleviate this burden. abstract_id: PUBMED:33606274 Contemporary Incremental Healthcare Costs for Chronic Rhinosinusitis in the United States. Objective/hypothesis: Determine contemporary incremental increases in healthcare expenditures and utilization associated with chronic rhinosinusitis (CRS). Study Design: Cross-sectional analysis of national health care survey data. Methods: Patients reporting a diagnosis of CRS were extracted from the 2018 Medical Expenditure Panel Survey medical conditions file and linked to the consolidated expenditures file. CRS patients were then compared to non-CRS patients determining differences in healthcare utilization for office visits, emergency facility visits, and prescriptions filled as well as differences in total healthcare costs, office-based costs, prescription medication costs, and self-expenditures using demographically and comorbidity adjusted multivariate models. Results were compared to 2007, adjusted for inflation. Results: An estimated 7.28 ± 0.36 million adult patients reported CRS in 2018 (3.0 ± 0.1% of the adult U.S. population). The additional incremental healthcare utilizations associated with CRS relative to non-CRS patients for office visits, emergency facility visits, and number of prescriptions filled were 4.2 ± 0.6, 0.10 ± 0.03, and 6.0 ± 0.9, respectively (all P ≤ .003). Similarly, additional incremental healthcare expenditures associated with CRS for total health care expenses, office-based visit expenditures, prescription expenditures, and self-expenditures were $1,983 ± 569, $772 ± 139, $678 ± 213, and $68 ± 17, respectively (all P ≤ .002). Increases in total (+$1,062) and office-based expenditures (+$360) compared to 2007 were significant. Conclusion: CRS continues to be associated with a substantial incremental increase in healthcare utilization and expenditures. These expenditures have significantly outpaced the increases expected from inflation.
The national healthcare costs of CRS have increased to an estimated $14.4 billion per year. Level Of Evidence: 3 Laryngoscope, 131:2169-2172, 2021. abstract_id: PUBMED:27875247 Comorbidity prevalence, healthcare utilization, and expenditures of Medicaid enrolled adults with autism spectrum disorders. A retrospective data analysis using 2000-2008 three-state Medicaid Analytic eXtract data was conducted to examine the prevalence and association of comorbidities (psychiatric and non-psychiatric) with healthcare utilization and expenditures of fee-for-service enrolled adults (22-64 years) with and without autism spectrum disorders (International Classification of Diseases, Ninth Revision-clinical modification code: 299.xx). Autism spectrum disorder cases were 1:3 matched to no autism spectrum disorder controls by age, gender, and race using propensity scores. Study outcomes were all-cause healthcare utilization (outpatient office visits, inpatient hospitalizations, emergency room, and prescription drug use) and associated healthcare expenditures. Bivariate analyses (chi-square tests and t-tests), multinomial logistic regressions (healthcare utilization), and generalized linear models with gamma distribution (expenditures) were used. Adults with autism spectrum disorders (n = 1772) had significantly higher rates of psychiatric comorbidity (81%), epilepsy (22%), infections (22%), skin disorders (21%), and hearing impairments (18%). Adults with autism spectrum disorders had higher mean annual outpatient office visits (ASD: 32 vs no-ASD: 8) and prescription drug use claims (ASD: 51 vs no-ASD: 24) as well as higher mean annual expenditures for outpatient office visits (ASD: US$4,375 vs no-ASD: US$824), emergency room use (ASD: US$15,929 vs no-ASD: US$2,598), prescription drug use (ASD: US$6,067 vs no-ASD: US$3,144), and total expenditures (ASD: US$13,700 vs no-ASD: US$8,560). The presence of a psychiatric and a non-psychiatric comorbidity among adults with autism spectrum disorders increased the annual total expenditures by US$4,952 and US$5,084, respectively. abstract_id: PUBMED:25047785 Maternal depressive symptoms and healthcare expenditures for publicly insured children with chronic health conditions. This study estimated the prevalence of maternal depressive symptoms and tested associations between maternal depressive symptoms and healthcare utilization and expenditures among United States publicly insured children with chronic health conditions (CCHC). A total of 6,060 publicly insured CCHC from the 2004-2009 Medical Expenditure Panel Surveys were analyzed using negative binomial models to compare healthcare utilization for CCHC of mothers with and without depressive symptoms. Annual healthcare expenditures for both groups were compared using a two-part model with a logistic regression and generalized linear model. The prevalence of depressive symptoms among mothers with CCHC was 19 %. There were no differences in annual healthcare utilization for CCHC of mothers with and without depressive symptoms. Maternal depressive symptoms were associated with greater odds of ED expenditures [odds ratio (OR) 1.26; 95 % CI 1.03-1.54] and lesser odds of dental expenditures (OR 0.81; 95 % CI 0.66-0.98) and total expenditures (OR 0.71; 95 % CI 0.51-0.98). Children of symptomatic mothers had lower predicted outpatient expenditures and higher predicted expenditures for total health, prescription medications, dental care; and office based, inpatient and ED visits.
Mothers with CCHC were more likely to report depressive symptoms than were mothers of children without chronic health conditions. There were few differences in annual healthcare utilization and expenditures between CCHC of mothers with and without depressive symptoms. However, having a mother with depressive symptoms was associated with higher ED expenditures and higher predicted healthcare expenditures in a population of children who comprise over three-fourths of the top decile of Medicaid spending. abstract_id: PUBMED:31727348 The impact of internal locus of control on healthcare utilization, expenditures, and health status across older adult income levels. Our objectives were to 1) determine the prevalence of locus of control (LOC) dimensions stratified by older adult income levels; 2) characterize internal LOC attributes within income subgroups; and 3) investigate LOC associations with healthcare utilization and expenditures, self-rated health, and functionality. The survey sample was identified from adults aged ≥65 years with diagnosed pain conditions. Internal LOC characteristics were determined from logistic regressions; outcomes were regression-adjusted. Among respondents, the prevalence of internal LOC for low (N = 554), medium (N = 1,394) and high income (N = 2,040) was 27%, 30% and 30%, respectively. Internal LOC was associated with high resilience, less stress, exercise and less opioid use across income levels. Among lower-income respondents, internal LOC was additionally associated with diverse social networks, physical therapy and less drug use. Those with high internal LOC generally had lower healthcare utilization and expenditures and better self-rated health and functionality. Internal LOC is a powerful positive resource associated with better health outcomes, especially influential for lower income. abstract_id: PUBMED:26654582 Effects of comorbid health conditions on healthcare expenditures among people with severe mental illness. Background And Aims: Little is known about the incremental cost burden associated with treating comorbid health conditions among people with severe mental illness (SMI). This study compares the extent to which each individual medical condition increases healthcare expenditures between people with SMI and people without mental illness. Methods: Data were obtained from the 2011 Medical Expenditure Panel Survey (MEPS; N = 17,764). Mental illness and physical health conditions were identified through ICD-9 codes. Guided by Andersen's behavioral model of health services utilization, generalized linear models were conducted. Results: Total healthcare expenditures among individuals with SMI were approximately 3.3 times greater than expenditures by individuals without mental illness ($11,399 vs. $3,449, respectively). Each additional physical health condition increased the total healthcare expenditure by 17.4% for individuals with SMI compared to the 44.8% increase for individuals without mental illness. Conclusions: The cost effect of having additional health conditions on the total healthcare expenditures among individuals with SMI is smaller than for individuals without mental illness. Whether this is due to limited access to healthcare for the medical problems or better coordination between medical and mental health providers, which reduces duplicated medical procedures or visits, requires future investigation.
Answer: The study conducted in Georgia, as reported in abstract PUBMED:25889249, investigated the impact of the Medical Insurance Program for the Poor (MIP) on healthcare utilization and expenditures for chronic and acute conditions. The findings indicated that MIP did not significantly affect health service utilization or expenditures for outpatient drugs and provider fees among patients with chronic illnesses. However, for patients with acute illnesses, MIP increased the likelihood of using health services and was associated with reductions in provider fees and outpatient drug expenditure. Notably, for those reporting a chronic illness with an acute episode, MIP marginally reduced expenditures on provider fees. The study concluded that while MIP may have improved utilization and reduced costs for patients with acute health needs, it did not adequately address the needs of the aging population with chronic illnesses. The authors suggest that increasing MIP benefits for chronic patients should be a priority to achieve universal coverage objectives. This study's findings align with the broader literature that indicates healthcare expenditures tend to increase with the presence of chronic conditions. For instance, in the United States, adults with epilepsy and multiple chronic conditions (MCC) face higher healthcare expenditures, with costs increasing in a dose-response manner with the number of chronic conditions (PUBMED:36327642). Similarly, pediatric patients with acute and chronic tonsillar conditions (ACT) incur significant incremental healthcare utilization and costs (PUBMED:25912631). Moreover, multimorbidity trajectories among middle-aged and older adults in China are associated with increased healthcare utilization and expenditures (PUBMED:36868387). The association between body mass index (BMI) and osteoarthritis with healthcare expenditures and utilization further illustrates the economic burden of chronic conditions, with the prevalence and economic burden of osteoarthritis growing with increasing BMI (PUBMED:32313672). Chronic rhinosinusitis (CRS) is another example, with substantial incremental increases in healthcare utilization and expenditures (PUBMED:33606274). Additionally, adults with autism spectrum disorders (ASD) and comorbidities have higher healthcare utilization and expenditures compared to those without ASD (PUBMED:27875247). Maternal depressive symptoms among publicly insured children with chronic health conditions (CCHC) are associated with greater emergency department expenditures and higher predicted healthcare expenditures (PUBMED:25047785). Furthermore, an internal locus of control among older adults is associated with lower healthcare utilization and expenditures, better self-rated health, and functionality across income levels (PUBMED:31727348). Lastly, the study on the effects of comorbid health conditions on healthcare expenditures among people with severe mental illness (SMI) found that while each additional physical health condition increased total healthcare expenditure, the increase was smaller for individuals with SMI compared to those without mental illness (PUBMED:26654582). In summary, the design of the benefit package, such as Georgia's MIP, does matter in healthcare utilization and expenditures, particularly for acute conditions.
However, chronic conditions consistently lead to higher healthcare costs, and benefit designs may need to be tailored to better address the needs of individuals with chronic illnesses to ensure both improved access to care and financial protection.
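The two-part expenditure model described in the epilepsy cost abstract above (a probit for any spending versus none, then a gamma GLM with log link for positive spending) is easy to sketch. Below is a minimal, hedged illustration in Python with statsmodels: the data are simulated and the column names (cc1, cc2plus, cost) are invented for the example, not the study's actual variables.

```python
# Minimal sketch of a two-part healthcare-cost model:
# part 1 = probit for P(cost > 0), part 2 = gamma GLM (log link) for cost | cost > 0.
# All data below are simulated; column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
cc = rng.choice([0, 1, 2], size=n, p=[0.45, 0.25, 0.30])  # 0, 1, or >=2 chronic conditions
df = pd.DataFrame({"cc1": (cc == 1).astype(float), "cc2plus": (cc == 2).astype(float)})
X = sm.add_constant(df[["cc1", "cc2plus"]])

# Simulate: more conditions -> higher chance of any spending and higher spending
p_any = 1 / (1 + np.exp(-(-0.2 + 0.6 * df["cc1"] + 1.0 * df["cc2plus"])))
has_cost = rng.random(n) < p_any
mu = np.exp(8.5 + 0.45 * df["cc1"] + 0.9 * df["cc2plus"])
cost = np.where(has_cost, rng.gamma(shape=2.0, scale=mu / 2.0), 0.0)

# Part 1: probit on the probability of non-zero cost
part1 = sm.Probit(has_cost.astype(float), X).fit(disp=False)

# Part 2: gamma GLM with log link, fit only on the positive costs
pos = cost > 0
part2 = sm.GLM(cost[pos], X[pos],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Unconditional expected cost per group: E[cost] = P(cost > 0) * E[cost | cost > 0]
grid = sm.add_constant(pd.DataFrame({"cc1": [0.0, 1.0, 0.0], "cc2plus": [0.0, 0.0, 1.0]}),
                       has_constant="add")
expected = part1.predict(grid) * part2.predict(grid)
print(expected)  # incremental cost of k conditions = expected[k] - expected[0]
```

Incremental (adjusted) costs, as reported in the abstract, then correspond to differences between these predicted expected costs and the zero-condition reference.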
Instruction: Could craniometric measurements explain the growth of the superior sagittal sinus? Abstracts: abstract_id: PUBMED:23548853 Could craniometric measurements explain the growth of the superior sagittal sinus? Objective: The objective of this study was to relate demographic variables and craniometric measures with measurements of the superior sagittal sinus (SSS) at different points along the path of the SSS. The findings were then discussed with regards to theories of skull growth. Methods: We studied 33 skulls with known demographic characteristics and measured various craniometric parameters and distances related to the specific dimensions of the SSS. These data were statistically analyzed, and the results are presented (a toy correlation sketch appears after this entry's answer). Results: Of the 33 cadaver samples, 16 were female and 17 were male, aged between 28 and 87 years at the time of death. The cross-sectional area of the SSS measured at the coronal suture was positively correlated with the biauricular length. In addition, when measured 1.5 cm above the torcula, the cross-sectional area of the SSS was negatively correlated with the distance between the medial epicanthi. Conclusions: The relationships found may indicate that the growth of the SSS is proportional to the activity of each segment of the SSS that occurs along its path. abstract_id: PUBMED:33093986 Relationship of superior sagittal sinus with sagittal midline: A surgical application. Background: The interhemispheric approach is widely used for the surgical management of midline tumors and vascular lesions in and around the third ventricle. Complete exposure of the superior sagittal sinus to obtain adequate working space for midline lesions is difficult because of the risk of inadvertent injury to the sinus and bridging veins, which may cause serious neurological deficits. Understanding the SSS neuroanatomy and its relationships with external surgical landmarks helps avoid such complications. The objective of this study is to accurately describe the position of the SSS and its displacement in relation to the sagittal midline on magnetic resonance imaging. Methods: A retrospective, cross-sectional, observational study was performed. Magnetic resonance images of 76 adult patients with no pathological findings were analyzed. Measurements were performed at four landmarks: halfway between nasion and bregma, bregma, halfway between bregma and lambda, and lambda. The width of the superior sagittal sinus and its displacement relative to the sagittal midline were assessed at these landmarks. Results: The mean width of the superior sagittal sinus at halfway between nasion and bregma, bregma, halfway between bregma and lambda, and lambda was 5.62 ± 2.5, 6.5 ± 2.8, 7.4 ± 3.2, and 8.5 ± 2.1 mm, respectively, without gender discrepancy. The mean displacement from the midline at those landmarks showed a statistically significant difference to the right side among sexes. Conclusion: In this study, we demonstrate that the sagittal midline may approximate the external location of the superior sagittal sinus. Our data showed that in the majority of cases, the superior sagittal sinus is displaced to the right of the sagittal midline by as much as 16.3 mm. The data we obtained provide useful information suggesting that neurosurgeons should use a safety margin when performing burr holes and drilling at the sagittal midline. abstract_id: PUBMED:34211880 Penetrating Injury of Superior Sagittal Sinus. Penetrating injury of the superior sagittal sinus (SSS) is very rare yet serious and can lead to morbidity and mortality.
Complications such as bleeding, thrombosis, and infection are possible and should be anticipated. We report the case of a 3-year-old boy with a penetrating injury caused by a nail at the middle third of the SSS. The patient underwent surgery for extraction of the nail and sinus repair, and received antibiotic treatment during the hospital stay. He was neurologically intact and recovered completely. Comprehensive treatment combining surgical and medical management is important in achieving the best possible outcome. abstract_id: PUBMED:35433489 Anatomical Study of Arachnoid Granulation in Superior Sagittal Sinus Correlated to Growth Patterns of Meningiomas. Meningiomas in the parasagittal region are formed by arachnoidal cells disseminated among arachnoid granulations. The purpose of this study was to characterize the morphology of the chordae willisii and arachnoid granulations (AGs) found in the superior sagittal sinus. This study used 20 anatomical specimens. Rigid endoscopes were introduced via the torcular Herophili into the sinus lumen. The morphological features of the arachnoid granulations and chordae willisii were analyzed and then assessed by elastic fiber stains, Masson's stains, and imaging analysis. Three types of arachnoid granulations were present in the examined sinuses. Imaging analysis counted 365 arachnoid granulations in the examined sinuses, averaging 1.36 ± 2.58 per sinus. Types I, II, and III made up 20.27%, 45.20%, and 34.52% of 268 patients, respectively. Microscopy of chordae willisii transverse sections indicated the existence of both single-layered and multiple-layered dural sinus walls. The dural sinus wall was thickest in the superior sagittal sinus. The thickness of the longitudinal lamellae was significantly greater than that of the trabeculae. This study reveals the anatomical differences between arachnoid granulations in the superior sagittal sinus. The classification of arachnoid granulations enables surgeons to predict growth patterns preoperatively and then safely achieve the optimal range of parasagittal meningioma resection. abstract_id: PUBMED:34336509 Parietal Encephalocele With Fenestrated Superior Sagittal Sinus and Persistent Falcine Sinus. We present a case of a newborn with a fenestrated superior sagittal sinus and persistent falcine sinus with a parietal encephalocele. The patient was born full-term without any associated pregnancy complications other than meconium-stained amniotic fluid at delivery. Following delivery, MRI brain demonstrated a midline parietal encephalocele, persistent falcine sinus, fenestration of the superior sagittal sinus at the level of the encephalocele, subependymal heterotopia, and a thick tectum. The patient underwent resection and repair on day 2 of life. MRI performed at 15 weeks of life showed a mild increase in the size of the lateral ventricles. The patient did not require a ventriculoperitoneal shunt. This is a novel case that provides a valuable contribution to the existing body of literature about congenital encephalocele associated with persistent falcine sinus, fenestrated superior sagittal sinus, subependymal heterotopia, and thick tectum. abstract_id: PUBMED:24981181 Surgical treatment of parasagittal and falcine meningiomas invading the superior sagittal sinus. Objective: We present our experience with surgery of parasagittal and falcine meningiomas invading the superior sagittal sinus, with special consideration of the surgical complications and the incidence of tumour recurrence.
Materials And Methods: The analysis included 37 patients with parasagittal and falcine meningiomas invading the superior sagittal sinus. In 13 cases, the sinus was ligated and resected together with the tumour. In 14 cases, the sinus was entered with the goal of tumour resection and the sinus was reconstructed, while in 10 patients the sinus was not entered and the remaining residual tumour was observed for growth. Results: Of the 13 patients who underwent radical resection of the tumour and the invaded part of the sinus, 9 developed haemodynamic complications: venous infarction (4), significant brain oedema (3) and hypoperfusion syndrome (2). Two of the 14 patients who underwent resection of the tumour from the lumen of the superior sagittal sinus with subsequent sinus repair developed venous infarction after surgery. Among the 27 patients who underwent radical tumour excision, long-term follow-up revealed recurrence in 2 patients. There were no significant haemodynamic complications in any of the 10 cases in which residual tumour was left in the superior sagittal sinus after surgery. In this group, 3 cases were subjected to early post-operative radiotherapy and local recurrence was observed in 4 patients. Conclusions: The aggressive surgical treatment of meningiomas infiltrating the superior sagittal sinus is associated with a high surgical risk. The incidence of recurrence of these tumours increases significantly in the case of non-radical excision of the tumour. abstract_id: PUBMED:15286895 A case of superior sagittal sinus thrombosis after closed head injury Superior sagittal sinus thrombosis (SSST) is a rare entity, most often arising from infections, dehydration, and hematologic disorders. Development of this condition secondary to trauma is extremely rare. In this report, a 13-year-old boy who developed SSST following a closed head injury is presented. Imaging studies showed SSST caused by a depressed skull fracture. Neurologic examination of the patient was normal other than bilateral papilledema. He was treated with antiedematous and anticonvulsant drugs. Magnetic resonance venography obtained eight months after the diagnosis showed an unoccluded superior sagittal sinus, and neurologic examination findings were also normal. abstract_id: PUBMED:31309022 Superior Sagittal Sinus: A Review of the History, Surgical Considerations, and Pathology. A systematic PubMed and Google Scholar search for studies related to the anatomy, history, surgical approaches, complications, and diseases of the superior sagittal sinus was performed. The purpose of this review is to elucidate some of the more recent advances in our understanding of this structure. One of the earliest anatomical landmarks to be described, the superior sagittal sinus (SSS, sinus sagittalis superior (Latin); "sagittalis" Latin for 'arrow' and "sinus" Latin for 'recess, bend, or bay') has been defined and redefined by the likes of Vesalius and Cushing. A review of the various methods of approaching pathology of the SSS is discussed, as well as the historical discovery of these methods. Disease states that were emphasized include invasion of the SSS by meningioma, as well as thrombosis and vascular malformations. abstract_id: PUBMED:36375801 Microsurgery of Meningiomas Involving the Superior Sagittal Sinus. Meningiomas involving major dural sinuses can be difficult to resect without proper handling of the sinus. In young patients, a gross total resection should be attempted when feasible.
A 24-year-old man presented with headaches, progressive left-sided weakness, and partial motor seizures. He was found to have a parasagittal meningioma in front of the motor cortex that invaded the superior sagittal sinus (SSS). The sinus was still patent, and the walls were preserved. Thus, a gross total resection was achieved with primary suturing of the sinus, followed by reinforcement with an AnastoClip GC. Videos 1-3 detail the separation of the tumor from the convexity veins and the cortex, removal of the tumor from the SSS, and reconstruction of the SSS. Gross total resection of meningiomas involving the SSS is achievable without sacrificing the sinus when planned correctly and methodically. The patient consented to the procedure and to the publication of these images. abstract_id: PUBMED:36897700 Infectious thrombosis of the superior sagittal sinus with subarachnoid hemorrhage: A case report. Rationale: Cerebral venous sinus thrombosis (CVST) represents 0.5% to 1% of all strokes. CVST can cause headaches, epilepsy, and subarachnoid hemorrhage (SAH). CVST is easily misdiagnosed because of the variety and non-specificity of its symptoms. Herein, we report a case of infectious thrombosis of the superior sagittal sinus with SAH. Patient Concerns: A 34-year-old man presented to our hospital with a 4-hour history of sudden and persistent headache and dizziness with tonic convulsions of the limbs. Computed tomography revealed SAH with edema. Enhanced magnetic resonance imaging showed an irregular filling defect in the superior sagittal sinus. Diagnoses: The final diagnosis was hemorrhagic superior sagittal sinus thrombosis and secondary epilepsy. Interventions: He was treated with antibiotics, antiepileptic drugs, rehydration fluids, and intravenous dehydration therapy. Outcomes: After treatment, the seizures did not recur and the symptoms were relieved. One month after the antibiotic treatment, the muscle strength of the patient's right extremity was restored to level 5, and there was no recurrence of his neurological symptoms. Lessons: We describe a case of infectious thrombosis of the superior sagittal sinus manifesting as SAH, which is easily misdiagnosed, especially when patients present with an infection. Clinicians must therefore take care during the diagnosis and selection of the treatment strategy. Answer: Yes, craniometric measurements could potentially explain the growth of the superior sagittal sinus (SSS). A study that investigated the relationship between demographic variables, craniometric measures, and the dimensions of the SSS found that the cross-sectional area of the SSS measured at the coronal suture was positively correlated with the biauricular length. Additionally, the cross-sectional area of the SSS measured 1.5 cm above the torcula was negatively correlated with the distance between the medial epicanthi. These relationships suggest that the growth of the SSS may be proportional to the activity of each segment of the SSS that occurs along its path (PUBMED:23548853).
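The statistical core of the craniometric study above is a set of bivariate correlations between skull distances and SSS cross-sectional areas. A minimal sketch in Python follows; the measurement arrays are fabricated toy values chosen only to illustrate the computation, not data from the 33-skull series.

```python
# Toy illustration of correlating a craniometric distance with the SSS
# cross-sectional area at a landmark. Values are fabricated, not study data.
import numpy as np
from scipy import stats

biauricular_length_mm = np.array([118.0, 121.0, 125.0, 130.0, 133.0, 137.0, 140.0, 144.0])
sss_area_coronal_mm2 = np.array([28.0, 30.0, 33.0, 35.0, 38.0, 40.0, 44.0, 47.0])

r, p = stats.pearsonr(biauricular_length_mm, sss_area_coronal_mm2)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # a positive r mirrors the reported association

# With small anatomical samples, a rank-based check is a common safeguard
rho, p_rho = stats.spearmanr(biauricular_length_mm, sss_area_coronal_mm2)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```

With only 33 skulls in the original series, a rank-based Spearman check of this kind guards against a few extreme measurements driving the Pearson coefficient.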
Instruction: Comparison between laparoscopic and open radical nephroureterectomy in a contemporary group of patients: are recurrence and disease-specific survival associated with surgical technique? Abstracts: abstract_id: PUBMED:37994335 Comparison of survival outcomes between laparoscopic versus open radical nephroureterectomy in upper tract urothelial cancer patients: Experiences of a tertiary care single center. Objectives: To test for differences in overall and recurrence-free survival between laparoscopic and open surgical approaches in patients undergoing radical nephroureterectomy (RNU) for upper tract urothelial carcinoma (UTUC). Materials And Methods: We retrospectively identified patients treated for UTUC from 2010 to 2020 from our institutional database. Patients undergoing laparoscopic or open RNU with no suspicion of metastasis (cM0) formed the current study population. Patients with suspected metastases at diagnosis (cM1) or those undergoing other surgical treatments were excluded. Tabulation was performed according to the laparoscopic versus open surgical approach. Kaplan-Meier plots were used to test for differences in overall and recurrence-free survival with regard to the surgical approach. Furthermore, separate Kaplan-Meier plots were used to test the effect of preoperative ureterorenoscopy on overall and recurrence-free survival within the overall study cohort. Results: Of the 59 patients who underwent nephroureterectomy, 29% (n = 17) underwent laparoscopic nephroureterectomy, whereas 71% (n = 42) underwent open nephroureterectomy. Patient and tumor characteristics were comparable between groups (p ≥ 0.2). The median overall survival was 93 months in the laparoscopic nephroureterectomy group and 73 months in the open nephroureterectomy group (p = 0.5). The median recurrence-free survival did not differ between open and laparoscopic nephroureterectomies (73 months for both groups; p = 0.9). Furthermore, the median overall and recurrence-free survival rates did not differ between patients treated with and without preoperative ureterorenoscopy. Conclusions: The results of this retrospective, single-center study showed that overall and recurrence-free survival rates did not differ between patients with UTUC treated with laparoscopic and open RNU. Furthermore, preoperative ureterorenoscopy before RNU was not associated with higher overall or recurrence-free survival rates. abstract_id: PUBMED:26497823 Laparoscopic radical nephroureterectomy is associated with worse survival outcomes than open radical nephroureterectomy in patients with locally advanced upper tract urothelial carcinoma. Purpose: To compare survival outcomes between laparoscopic radical nephroureterectomy (LRNU) and open radical nephroureterectomy (ORNU) in upper urinary tract urothelial carcinoma (UTUC) patients. Methods: We retrospectively analyzed the data of 371 UTUC patients who underwent ORNU (n = 271) or LRNU (n = 100) between 1992 and 2012. The survival outcomes included intravesical recurrence (IVR)-free survival, overall survival (OS), and cancer-specific survival (CSS). The Kaplan-Meier method and log-rank test were used to estimate and compare survival curves between groups. Factors associated with survival outcomes were evaluated using univariable and multivariable Cox proportional hazard models (a code sketch of this type of survival comparison appears after this entry's answer). Results: The three-year IVR-free survival rates were similar between the ORNU and LRNU groups (59.9 and 61.7%, p = 0.267).
However, the LRNU group showed worse five-year OS (59.1 vs. 75.2%, p = 0.027) and CSS (66.1 vs. 80.2%, p = 0.015) rates than the ORNU group. In particular, on stratifying the study cohort by pathological stages, significant differences in OS (p = 0.007) and CSS (p = 0.005) between the surgical approaches were observed only in locally advanced disease (pT3/T4). In multivariable analysis, LRNU was an independent predictor of worse OS (p = 0.001) and CSS (p = 0.006) than ORNU. Likewise, in multivariable analysis in patients with pT3/T4 stage, LRNU was significantly associated with worse OS (hazard ratio [HR] 2.59, p = 0.001) and CSS (HR 2.50, p = 0.005). Conclusions: Our data suggest that in UTUC patients, LRNU, compared to ORNU, is generally associated with unfavorable OS and CSS results. In particular, LRNU should be performed in locally advanced UTUC patients only after careful consideration of its impact on patient survival. abstract_id: PUBMED:29435873 Oncologic outcomes for open and laparoscopic radical nephroureterectomy in patients with upper tract urothelial carcinoma. Background: The oncologic benefits of laparoscopic radical nephroureterectomy (LNU) are unclear. We aimed to evaluate the impact of the surgical approach for radical nephroureterectomy on oncologic outcomes in patients with locally advanced upper tract urothelial carcinoma (UTUC). Methods: Of 426 patients who underwent radical nephroureterectomy at five medical centers between February 1995 and February 2017, we retrospectively investigated oncological outcomes in 229 with locally advanced UTUC (stages cT3-4 and/or cN+). The surgical approach was classified as open nephroureterectomy (ONU) or LNU, and oncologic outcomes, including intravesical recurrence-free survival (RFS), visceral RFS, cancer-specific survival (CSS), and overall survival (OS), were compared between the groups. Inverse probability of treatment weighting (IPTW)-adjusted Cox regression analyses were performed to evaluate the impact of LNU on the prognosis. Results: Of the 229 patients, 48 (21%) underwent LNU. There were significant differences in patient backgrounds, including preoperative renal function, lymph-node involvement, lymphovascular invasion, and surgical margins, between the groups. Before the background adjustment, intravesical RFS, visceral RFS, CSS, and OS were significantly inferior in the ONU group compared with the LNU group. However, in the IPTW-adjusted Cox regression analysis, no significant differences were observed in intravesical RFS (hazard ratio [HR], 0.65; P = 0.476), visceral RFS (HR, 0.46; P = 0.109), CSS (HR, 0.48; P = 0.233), and OS (HR, 0.40; P = 0.147). Conclusion: Surgical approaches were not independently associated with prognosis in patients with locally advanced UTUC. abstract_id: PUBMED:20724065 Comparison between laparoscopic and open radical nephroureterectomy in a contemporary group of patients: are recurrence and disease-specific survival associated with surgical technique? Background: Open radical nephroureterectomy (ORN) is the current standard of care for upper tract urothelial carcinoma (UTUC), but laparoscopic radical nephroureterectomy (LRN) is emerging as a minimally invasive alternative. Questions remain regarding the oncologic safety of LRN and its relative equivalence to ORN. Objective: Our aim was to compare recurrence-free and disease-specific survival between ORN and LRN.
Design, Setting, And Participants: We retrospectively analyzed data from 324 consecutive patients treated with radical nephroureterectomy (RN) between 1995 and 2008 at a major cancer center. Patients with previous invasive bladder cancer or contralateral UTUC were excluded. Descriptive data are provided for 112 patients who underwent ORN from 1995 to 2001 (pre-LRN era). Comparative analyses were restricted to patients who underwent ORN (n=109) or LRN (n=53) from 2002 to 2008. Median follow-up for patients without disease recurrence was 23 mo. Intervention: All patients underwent RN. Measurements: Recurrence was categorized as bladder-only recurrence or any recurrence (bladder, contralateral kidney, operative site, regional lymph nodes, or distant metastasis). Recurrence-free probabilities were estimated using Kaplan-Meier methods. A multivariable Cox model was used to evaluate the association between surgical approach and disease recurrence. The probability of disease-specific death was estimated using the cumulative incidence function. Results And Limitations: Clinical and pathologic characteristics were similar for all patients. The recurrence-free probabilities were similar between ORN and LRN (2-yr estimates: 38% and 42%, respectively; p=0.9 by log-rank test). On multivariable analysis, the surgical approach was not significantly associated with disease recurrence (hazard ratio [HR]: 0.88 for LRN vs ORN; 95% confidence interval [CI], 0.57-1.38; p=0.6). There was no significant difference in bladder-only recurrence (HR: 0.78 for LRN vs ORN; 95% CI, 0.46-1.34; p=0.4) or disease-specific mortality (p=0.9). This study is limited by its retrospective nature. Conclusions: Based on the results of this retrospective study, no evidence indicates that oncologic control is compromised for patients treated with LRN in comparison with ORN. abstract_id: PUBMED:24324088 Factors predictive of oncological outcome after nephroureterectomy: comparison between laparoscopic and open procedures. Background: Although laparoscopic radical nephroureterectomy is the standard treatment for localized upper urinary tract urothelial carcinoma, open radical nephroureterectomy has been reported to have a different rate of intravesical recurrence. Patients And Methods: Intravesical recurrence-free, progression-free, and overall survival rates among patients undergoing open and laparoscopic radical nephroureterectomy from 2002 to 2013 were analyzed. Results: Although no single factor predicted intravesical recurrence-free survival, a past history of bladder cancer or grade 3 disease was related to a poorer intravesical recurrence-free survival rate in patients treated with laparoscopic radical nephroureterectomy. Moreover, the novel risk classification proposed on the basis of our data clearly showed better progression-free survival and overall survival, as well as intravesical recurrence-free survival, in patients treated with laparoscopic radical nephroureterectomy. Conclusion: The findings reported here may help urologists predict oncological outcomes and plan follow-up schedules after laparoscopic radical nephroureterectomy. abstract_id: PUBMED:34634058 A retrospective multicenter comparison of conditional cancer-specific survival between laparoscopic and open radical nephroureterectomy in locally advanced upper tract urothelial carcinoma. Background: Upper urinary tract urothelial carcinomas are relatively rare and have a cancer-specific survival rate of 20%-30%.
The current gold standard treatment for nonmetastatic high-grade urinary tract urothelial carcinoma is radical nephroureterectomy with bladder cuff resection. Objective: This study aimed to compare conditional cancer-specific survival between open radical nephroureterectomy and laparoscopic radical nephroureterectomy in patients with nonmetastatic stage pT3-4 or TxN(+) locally advanced urinary tract urothelial carcinoma from five tertiary centers. Methods: The medical records of 723 patients were retrospectively reviewed. The patients had locally advanced and nodal staged tumors and had undergone open radical nephroureterectomy (n = 388) or laparoscopic radical nephroureterectomy (n = 260) at five tertiary Korean institutions between January 2000 and December 2012. To control for heterogeneous baseline differences between the two modalities, propensity score matching and subgroup analysis were conducted. Conditional survival analysis was also conducted to determine survival outcomes and to overcome differences in follow-up duration between the groups. Results: During the median 50.8-month follow-up, 255 deaths occurred. In univariate analysis, significant factors affecting cancer-specific survival (e.g., age, history of bladder cancer, American Society of Anesthesiologists score, pathological N stage, and presence of lymphovascular invasion and carcinoma in situ) differed in each subsequent year. The cancer-specific survival between patients treated with open radical nephroureterectomy and laparoscopic radical nephroureterectomy was not different between patients with and without a history of bladder cancer. After adjusting for baseline differences between the two groups by using propensity score matching, both groups still had no significant differences in cancer-specific survival. Conclusion: The two surgical modalities showed no significant differences in the 5-year cancer-specific survival in patients with locally advanced urinary tract urothelial carcinoma. abstract_id: PUBMED:28522928 Systematic review of open versus laparoscopic versus robot-assisted nephroureterectomy. Upper tract urothelial carcinoma is a relatively uncommon malignancy. The gold standard treatment for this type of neoplasm is an open radical nephroureterectomy with excision of the bladder cuff. This systematic review compares the perioperative and oncologic outcomes for the open surgical method with the alternative surgical management options of laparoscopic nephroureterectomy and robot-assisted nephroureterectomy (RANU). MEDLINE, EMBASE, PubMed, and Cochrane Library databases were searched using a sensitive search strategy. Article inclusion was then assessed by review of abstracts, and full papers were read if more detail was required. In all, 50 eligible studies were identified that looked at perioperative and oncologic outcomes. The range for estimated blood loss when examining observational studies was 296 to 696 mL for open nephroureterectomy (ONU), 130 to 479 mL for laparoscopic nephroureterectomy (LNU), and 50 to 248 mL for RANU. The one randomized controlled trial identified reported estimated blood loss and length of stay results in which LNU was shown to be superior to ONU (P < .001). No statistical significance was found, however, following adjustment for confounding variables. Although statistically insignificant results were found when examining outcomes of RANU studies, they were promising and comparable with LNU and ONU with regard to oncologic outcomes.
Results show that laparoscopic techniques are superior to ONU in perioperative results, and the longer-term oncologic outcomes look comparable. There is, however, a paucity of quality evidence regarding ONU, LNU, and RANU; data that address RANU outcomes are particularly scarce. As the robotic field within urology advances, it is hoped that this technique will be investigated further using gold standard research methods. abstract_id: PUBMED:34485377 Open Nephroureterectomy Compared to Laparoscopic in Upper Urinary Tract Urothelial Carcinoma: A Meta-Analysis. Background: In this meta-analysis, we will focus on evaluating the effects of open nephroureterectomy compared with laparoscopic nephroureterectomy on postoperative results in upper urinary tract urothelial carcinoma subjects. Methods: A systematic literature search up to January 2021 was performed, and 36 studies included 23,013 subjects with upper urinary tract urothelial carcinoma at the start of the study; of them, 8,178 underwent laparoscopic nephroureterectomy and 14,835 underwent open nephroureterectomy. The included studies reported on relationships between the efficacy and safety of open nephroureterectomy compared with laparoscopic nephroureterectomy in the treatment of upper urinary tract urothelial carcinoma. We calculated the odds ratio (OR) or the mean difference (MD) with 95% CIs to evaluate the efficacy and safety of open nephroureterectomy compared with laparoscopic nephroureterectomy in the treatment of upper urinary tract urothelial carcinoma using the dichotomous or continuous method with a random or fixed-effect model. Results: Laparoscopic nephroureterectomy in subjects with upper urinary tract urothelial carcinoma was significantly related to longer operation time (MD, 43.90; 95% CI, 20.91-66.90, p < 0.001), shorter hospital stay (MD, -1.71; 95% CI, -2.42 to -1.00, p < 0.001), lower blood loss (MD, -133.82; 95% CI, -220.92 to -46.73, p = 0.003), lower transfusion need (OR, 0.56; 95% CI, 0.47-0.67, p < 0.001), and lower overall complication (OR, 0.79; 95% CI, 0.70-0.90, p < 0.001) compared with open nephroureterectomy. However, no significant difference was found between laparoscopic nephroureterectomy and open nephroureterectomy in subjects with upper urinary tract urothelial carcinoma in 2-5 years recurrence-free survival (OR, 0.90; 95% CI, 0.69-1.18, p = 0.46), 2-5 years cancer-specific survival (OR, 0.94; 95% CI, 0.69-1.28, p = 0.68), and 2-5 years overall survival (OR, 1.31; 95% CI, 0.91-1.87, p = 0.15). Conclusion: Laparoscopic nephroureterectomy in subjects with upper urinary tract urothelial carcinoma may have a longer operation time, shorter hospital stay, and lower blood loss, transfusion need, and overall complication rate compared to open nephroureterectomy. Further studies are required to validate these findings. abstract_id: PUBMED:36381160 Perioperative and oncological outcomes of laparoscopic and open radical nephroureterectomy for locally advanced upper tract urothelial carcinoma: a single-center cohort study. Introduction: Open radical nephroureterectomy (ONU) is the standard of care for the treatment of upper tract urothelial carcinoma (UTUC), but laparoscopic radical nephroureterectomy (LNU) is increasingly being used due to better perioperative outcomes. However, its oncological safety remains controversial, in particular for advanced disease. We aimed to compare perioperative and oncological outcomes between surgical approaches in locally advanced UTUC (≥pT3 and/or pN+).
Material And Methods: This study was a retrospective analysis of all 48 patients who underwent radical nephroureterectomy for advanced UTUC between 2006 and 2020 in our center. Perioperative data were compared between groups. Bladder tumor-free survival (BTFS), metastasis-free survival (MFS) and cancer-specific survival (CSS) were estimated using Kaplan-Meier curves and compared with the log-rank test. A multivariable Cox regression model was used to evaluate their association with the surgical approach. Results: Clinical and pathological characteristics were similar between groups. LNU had lower blood loss (p = 0.031), need for transfusion (p = 0.013) and length of hospital stay (p < 0.001), with similar operative time (p = 0.860). LNU was associated with better MFS (hazard ratio [HR]: 0.43, 95% confidence interval [CI] 0.20-0.93, p = 0.033) and CSS (HR: 0.42, 95% CI 0.19-0.94, p = 0.036). Median time to cancer death was 41 months for LNU and 12 months for ONU (log-rank p = 0.029). BTFS was similar between groups (HR: 0.60, 95% CI 0.17-2.11, p = 0.427). In the multivariable Cox regression model, the surgical approach was not significantly associated with MFS (p = 0.202), CSS (p = 0.149) or BTFS (p = 0.586). Conclusions: In our cohort of advanced UTUC, LNU did not result in inferior oncological control compared to ONU. The minimally invasive approach conferred an advantage in perioperative outcomes. abstract_id: PUBMED:23148712 Comparison of oncological outcomes for open and laparoscopic radical nephroureterectomy: results from the Canadian Upper Tract Collaboration. Unlabelled: WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD? Open radical nephroureterectomy (ORNU) with excision of the ipsilateral bladder cuff is a standard treatment for upper tract urothelial carcinoma (UTUC). However, over the past decade laparoscopic RNU (LRNU) has emerged as a minimally invasive surgical alternative. Data comparing the oncological efficacy of ORNU and LRNU have reported mixed results, and the equivalence of these surgical techniques has not yet been established. We found that surgical approach was not independently associated with overall or disease-specific survival; however, there was a trend toward an independent association between LRNU and poorer recurrence-free survival (RFS). To our knowledge, this is the first large, multi-institutional analysis to show a trend toward inferior RFS in patients with UTUC treated with LRNU. Objective: To examine the association between surgical approach for radical nephroureterectomy (RNU) and clinical outcomes in a large, multi-institutional cohort, as there are limited data comparing the oncological efficacy of open RNU (ORNU) and laparoscopic RNU (LRNU) for upper urinary tract urothelial carcinoma (UTUC). Patients And Methods: Institutional RNU databases containing detailed information on patients with UTUC treated between 1994 and 2009 were obtained from 10 academic centres in Canada. Data were collected on 1029 patients and combined into a relational database formatted with patient characteristics, pathological characteristics, and survival status. Surgical approach was classified as ORNU (n = 403) or LRNU (n = 446). The clinical outcomes were overall survival (OS), disease-specific survival (DSS), and recurrence-free survival (RFS). The Kaplan-Meier method and Cox proportional regression analysis were used to analyse survival data. Results: Data were evaluable for 849 of 1029 (82.5%) patients. The median (interquartile range) follow-up duration was 2.2 (0.6-5.0) years.
The predicted 5-year OS (67% vs 68%, log-rank P = 0.19) and DSS (73% vs 76%, log-rank P = 0.32) rates did not differ between the ORNU and LRNU groups; however, there was a trend toward an improved predicted 5-year RFS rate in the ORNU group (43% vs 33%, log-rank P = 0.06). Multivariable Cox proportional regression analysis showed that surgical approach was not significantly associated with OS (hazard ratio [HR] 0.89, 95% confidence interval [CI] 0.63-1.27, P = 0.52) or DSS (HR 0.90, 95% CI 0.60-1.37, P = 0.64); however, there was a trend toward an independent association between surgical approach and RFS (HR 1.24, 95% CI 0.98-1.57, P = 0.08). Conclusion: Surgical approach was not independently associated with OS or DSS but there was a trend toward an independent association between LRNU and poorer RFS. Further prospective evaluation is needed. Answer: The comparison between laparoscopic radical nephroureterectomy (LRNU) and open radical nephroureterectomy (ORNU) for upper tract urothelial carcinoma (UTUC) has been a subject of investigation in several studies, with varying conclusions regarding recurrence and disease-specific survival associated with the surgical technique. Some studies have found no significant difference in overall and recurrence-free survival rates between patients treated with LRNU and ORNU. For instance, a retrospective single-center study did not find differences in overall and recurrence-free survival rates between laparoscopic and open approaches, nor was preoperative ureterorenoscopy associated with higher survival rates (PUBMED:37994335). Similarly, a retrospective analysis of patients with advanced UTUC showed that the laparoscopic approach did not result in inferior oncological control compared to the open approach, and it offered an advantage in perioperative outcomes (PUBMED:36381160). However, other studies have reported that LRNU may be associated with worse survival outcomes compared to ORNU, particularly in patients with locally advanced disease. One study found that LRNU was associated with worse five-year overall survival (OS) and cancer-specific survival (CSS) rates than ORNU, especially in patients with locally advanced disease (pT3/T4), and LRNU was an independent predictor of worse OS and CSS (PUBMED:26497823). Another study suggested that LRNU should be performed with careful consideration of its impact on patient survival in locally advanced UTUC patients (PUBMED:34634058). A meta-analysis comparing open nephroureterectomy with laparoscopic nephroureterectomy found no significant differences in 2-5 year recurrence-free survival, cancer-specific survival, and overall survival, suggesting that laparoscopic nephroureterectomy may have perioperative advantages without compromising long-term oncological outcomes (PUBMED:34485377). In contrast, a multi-institutional analysis indicated a trend toward inferior recurrence-free survival in patients treated with LRNU, although surgical approach was not independently associated with overall or disease-specific survival (PUBMED:23148712). Overall, the evidence suggests that while laparoscopic nephroureterectomy may offer perioperative benefits, its impact on long-term oncological outcomes, particularly recurrence and disease-specific survival, may vary depending on factors such as disease stage and patient characteristics.
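Several abstracts in this entry share one analytic skeleton: Kaplan-Meier curves compared with a log-rank test, a Cox proportional hazards model for adjusted hazard ratios, and, in the IPTW-adjusted study, inverse probability of treatment weighting to balance the two surgical groups. The sketch below reproduces that skeleton on simulated data using the lifelines and scikit-learn libraries; every column name and effect size is an assumption made for illustration, not a value from these studies.

```python
# Sketch: KM + log-rank comparison of LRNU vs ORNU, then an IPTW-weighted Cox model.
# Data are simulated; column names (lap, age, t3, time, event) are illustrative.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
lap = rng.integers(0, 2, n)                     # 1 = laparoscopic, 0 = open
age = rng.normal(68, 8, n)
t3 = rng.integers(0, 2, n)                      # locally advanced indicator
time = rng.exponential(60 / (1 + 0.4 * t3), n)  # months to event or censoring
event = (rng.random(n) < 0.6).astype(int)       # 1 = recurrence/death observed
df = pd.DataFrame({"lap": lap, "age": age, "t3": t3, "time": time, "event": event})

# Kaplan-Meier estimates by surgical approach, plus a log-rank test
km = KaplanMeierFitter()
for grp, label in [(1, "LRNU"), (0, "ORNU")]:
    sub = df[df["lap"] == grp]
    km.fit(sub["time"], sub["event"], label=label)
    print(label, "median survival (months):", km.median_survival_time_)
lr = logrank_test(df.loc[df.lap == 1, "time"], df.loc[df.lap == 0, "time"],
                  event_observed_A=df.loc[df.lap == 1, "event"],
                  event_observed_B=df.loc[df.lap == 0, "event"])
print("log-rank p =", lr.p_value)

# IPTW: propensity of receiving LRNU from measured confounders -> inverse-probability weights
ps = LogisticRegression().fit(df[["age", "t3"]], df["lap"]).predict_proba(df[["age", "t3"]])[:, 1]
df["iptw"] = np.where(df["lap"] == 1, 1 / ps, 1 / (1 - ps))

# Weighted Cox model; robust (sandwich) variance is advisable with non-integer weights
cph = CoxPHFitter()
cph.fit(df[["time", "event", "lap", "iptw"]], duration_col="time",
        event_col="event", weights_col="iptw", robust=True)
print(cph.summary[["exp(coef)", "p"]])  # hazard ratio for laparoscopic vs open
```

The same scaffold extends to propensity score matching (as in the conditional-survival study) by pairing patients on the estimated propensity instead of weighting by it.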
Instruction: Should patients with a pre-operative prostate-specific antigen greater than 15 ng/ml be offered radical prostatectomy? Abstracts: abstract_id: PUBMED:15732232 Should patients with a pre-operative prostate-specific antigen greater than 15 ng/ml be offered radical prostatectomy? Background: Patients with prostate cancer with a pre-operative prostate-specific antigen (PSA) >15 ng/ml who undergo radical retropubic prostatectomy (RRP) generally do not have a good outcome, yet may have organ-confined cancer and should be offered the option of surgery. Aim: To assess the outcome of patients who underwent RRP with a pre-operative PSA >15 ng/ml. Methods: Thirty-four patients with a mean pre-operative PSA of 25.46 ng/ml (15.03-76.6) and a mean Gleason score of 6.4 (5-9) were assessed. Results: Two groups were identified. Group I: 41% (14/34) had no biochemical recurrence at a mean follow-up of 58 months (30-106). Mean PSA: 18.8 ng/ml (15.03-25.84). Mean Gleason score: 6.1 (5-7). Clinical stage: T1c in 80%. No patient had seminal vesicle or lymph node involvement. Group II: 59% (20/34) had biochemical recurrence or died (3) of their disease at a mean follow-up of 66 months (36-98). Mean PSA: 28.9 ng/ml (15.28-76.6). Mean Gleason score: 6.7 (5-9). Clinical stage: T1c in 25%. Eleven patients had seminal vesicle involvement (8) or positive lymph nodes (3) or both (2). Conclusion: RRP seems feasible in patients whose pre-operative PSA is between 15 and 25 ng/ml with stage T1c, Gleason score ≤7 and a negative lymph node frozen section. abstract_id: PUBMED:12416005 Diagnostic value of ultrasound-guided anastomotic biopsies in patients with high PSA (≥0.4 ng/ml) after radical prostatectomy Objectives: The authors report their experience on the use of a high number of biopsies for the diagnosis of a vesicourethral anastomosis tumor recurrence in patients who underwent radical prostatectomy with a PSA elevation. Methods: Sixty-five patients with PSA ≥0.4 ng/ml after radical prostatectomy received 6 to 8 transrectal ultrasound (TRUS) guided biopsies of the vesicourethral anastomosis. Results: The biopsy scheme with 6 random anastomotic biopsies plus additional biopsies through TRUS-detectable lesions was able to diagnose a local recurrence in more than 60% of the cases. In the presence of a post-operative PSA < 1.0 ng/ml and in the absence of ultrasound-detectable or palpable lesions, a local neoplastic recurrence was detected in 58% of the cases. In the presence of palpable or ultrasound-visible lesions, the detection rate increased to 80% of the cases. abstract_id: PUBMED:10231935 Radical perineal prostatectomy without lymphadenectomy. Patients with cT1-2, G1-2, PSA ≤10 ng/ml prostate carcinoma Pelvic lymphadenectomy in patients with organ-confined prostate cancer (PCa) is of no therapeutic value and is questionable in many patients because of the low incidence of metastases. 49 patients with ≤cT2b, G1-2, PSA ≤10 ng/ml underwent laparoscopic pelvic lymphadenectomy and radical perineal prostatectomy. Only 1 patient (2%) had microscopic metastases, which were missed on frozen section. Based on our own results and those reported in the literature, we then performed radical perineal prostatectomy without lymphadenectomy in patients with this constellation (n = 32). The differences present in both groups concerning complication rate and morbidity are due to the laparoscopic lymphadenectomy and the learning curve in perineal prostatectomy.
abstract_id: PUBMED:14711982 Radical prostatectomy for prostate cancer patients with prostate-specific antigen >20 ng/ml. Objective: Prostate cancer patients with prostate-specific antigen (PSA) >20 ng/ml are at high risk of progression after radical prostatectomy. Comparison has seldom been made between the outcomes of patients with PSA 20.1-50 ng/ml and those with PSA >50 ng/ml after radical prostatectomy. We retrospectively analyzed the outcomes of these two groups. Methods: From 1993 to 2002, 60 prostate cancer patients receiving radical prostatectomy were enrolled in this study. Thirty-seven patients with PSA 20.1-50 ng/ml were assigned to Group I. Twenty-three patients with PSA >50 ng/ml were assigned to Group II. Preoperatively, Group II had greater PSA and PSA density than Group I (P < 0.0001). Group II had a higher biopsy Gleason score and clinical stage than Group I (P < 0.05). Pathological categories and outcomes of both groups were compared. Results: Group II had a higher Gleason score and tumor volume than Group I (P < 0.05). The incidence of organ-confined disease was 29.7% in Group I and 0% in Group II (P < 0.05). Group II had a higher incidence of extracapsular tumor extension, positive surgical margin and lymph node involvement than Group I (P < 0.05). The incidence of postoperative PSA >0.01 ng/ml and PSA failure were higher in Group II than Group I (P < 0.05). Need for adjuvant treatment and death from prostate cancer were similar in both groups. Conclusion: Patients with PSA >50 ng/ml had a poorer prognosis than patients with PSA 20.1-50 ng/ml. Those with PSA >50 ng/ml had shorter PSA failure-free survival than those with PSA 20.1-50 ng/ml (P = 0.004). Classification of high-risk prostate cancer patients into two sub-groups with PSA 20.1-50 ng/ml and PSA >50 ng/ml should be considered. abstract_id: PUBMED:16387412 Pre-operative percent free PSA predicts clinical outcomes in patients treated with radical prostatectomy with total PSA levels below 10 ng/ml. Introduction: To evaluate the association of total prostate specific antigen (T-PSA) and percent free PSA (%F-PSA) with prostate cancer outcomes in patients treated with radical prostatectomy (RP). Methods: Pre-operative serum levels of T-PSA and F-PSA were prospectively measured in 402 consecutive patients treated with RP for clinically localized prostate cancer who had T-PSA levels below 10 ng/ml. Results: T-PSA was not associated with any prostate cancer characteristics or outcomes. Lower %F-PSA was significantly associated with higher percent positive biopsy cores, extracapsular extension, seminal vesicle involvement, lympho-vascular invasion, perineural invasion, positive surgical margins, and higher pathologic Gleason sum. When adjusted for the effects of standard pre-operative features, lower %F-PSA significantly predicted non-organ-confined disease, seminal vesicle involvement, lympho-vascular invasion, and biochemical progression. %F-PSA did not retain its association with biochemical progression after adjusting for the effects of standard post-operative features. Based on data from 22 patients with biochemical progression, lower %F-PSA was correlated with shorter T-PSA doubling time after biochemical progression (rho = 0.681, p = 0.010). %F-PSA was lower in patients who failed salvage radiation therapy (p = 0.031) and in patients who developed distant cancer metastases compared to patients who did not (p < 0.001).
Conclusions: Pre-operative T-PSA is not associated with prostate cancer outcomes after RP when levels are below 10 ng/ml. In contrast, pre-operative %F-PSA is associated with adverse pathologic features, biochemical progression, and features of aggressive disease progression in patients treated with RP who have T-PSA levels below 10 ng/ml. %F-PSA may improve pre-operative models for predicting the clinical outcomes of patients diagnosed with prostate cancer. abstract_id: PUBMED:17698095 Importance of tumor location in patients with high preoperative prostate specific antigen levels (greater than 20 ng/ml) treated with radical prostatectomy. Purpose: We investigated the effect of tumor location (anterior vs posterior) on pathological characteristics and biochemical-free survival in patients with a preoperative prostate specific antigen level of greater than 20 ng/ml undergoing radical prostatectomy, since transition zone tumors are known to present with higher prostate specific antigen levels. Materials And Methods: We retrospectively studied the records of 265 patients treated with radical prostatectomy between 1984 and 2005 who had preoperative prostate specific antigen levels greater than 20 ng/ml. Pathology reports were reviewed and tumor location (anterior vs posterior) was defined. Differences in clinicopathological characteristics and prostate specific antigen recurrence rates were examined. Results: Of 265 patients with a preoperative prostate specific antigen level of greater than 20 ng/ml who underwent radical prostatectomy, 50 (19%) had anterior tumors and 215 (81%) had posterior tumors. Patients with anterior tumors had lower clinical stage and less seminal vesicle involvement than patients with posterior tumors (p = 0.006 and <0.001, respectively). Although Kaplan-Meier analysis demonstrated significantly higher rates of 5-year biochemical recurrence-free survival for patients with anterior vs posterior tumors (63% vs 40%, p = 0.020), anterior tumor location was not an independent predictor of biochemical recurrence. Conclusions: Radical prostatectomy is a feasible treatment option in patients with a preoperative prostate specific antigen level of greater than 20 ng/ml. The 5-year biochemical-free survival rate was 47%. Although anterior tumor location was associated with favorable pathological features and improved biochemical-free survival, it was not an independent predictor of biochemical recurrence. Further studies are warranted to identify the patients with high preoperative prostate specific antigen levels most likely to have recurrence. abstract_id: PUBMED:35944100 Oncological and functional results after robot-assisted radical prostatectomy in high-risk prostate cancer patients. Background: Pentafecta is currently the standard in the comprehensive evaluation of patients undergoing radical prostatectomy. The objective of this study was to evaluate oncological and functional outcomes in patients with high-risk prostate cancer undergoing robot-assisted radical prostatectomy. Method: Descriptive, retrospective study of 20 cases with a diagnosis of high-risk prostate cancer. The high-risk group comprised patients with a prostate-specific antigen ≥20 ng/ml, a Gleason score ≥8, or clinical stage T2/T3, treated with a robotic approach. Results: Biochemical control was achieved within the first six weeks after surgery. 75% (n = 15) had negative surgical margins.
All patients (n = 20) were continent immediately after removal of the urinary catheter. Erectile function was preserved at 3, 6 and 12 months in 100% of the patients who underwent nerve preservation (n = 5), although with the use of a PDE inhibitor. Complications were reported in 10% (Clavien-Dindo I-II). Conclusions: Robot-assisted radical prostatectomy in patients with high-risk prostate cancer is considered an appropriate treatment option in selected patients. A different experimental design is needed to define the advantages or disadvantages of this approach, as well as to determine its role and application in clinical practice. abstract_id: PUBMED:22795501 Long-term oncological outcomes of men undergoing radical prostatectomy with preoperative prostate-specific antigen <2.5 ng/ml and 2.5-4 ng/ml. Objectives: Prostate-specific antigen (PSA) screening has increased the detection of small, organ-confined tumors, and studies suggest that these patients may have favorable outcomes following radical prostatectomy (RP). To date, there are limited data available on the outcomes of patients diagnosed with low PSA (≤4 ng/ml) who underwent RP. This study aimed to evaluate long-term oncological outcomes of patients undergoing RP with preoperative PSA <2.5 and 2.5-4 ng/ml compared with PSA 4.1-10 ng/ml. Materials And Methods: Data were analyzed from 3,621 men who underwent RP between 1988 and 2010 at our institution. Patients were stratified into 3 PSA groups: <2.5 ng/ml (n = 280), 2.5-4 ng/ml (n = 563), and 4.1-10 ng/ml (n = 2,778). Patient and disease characteristics were compared. Overall, biochemical disease-free (bDFS), and PCa-specific survivals were analyzed and compared between the groups. Multivariable analyses were conducted using a proportional hazards model. Results: Compared with the 4.1-10 ng/ml PSA group, Gleason score >7, extracapsular extension, and non-organ-confined disease were less common in patients with PSA ≤4 ng/ml (all P < 0.001). The incidence of organ-confined disease was similar between the PSA <2.5 and 2.5-4 ng/ml groups, while perineural invasion (P = 0.050) and Gleason score ≥7 (P = 0.026) were more common in the 2.5-4 ng/ml PSA group. Estimated 10-year overall and PCa-specific survivals were comparable across all PSA groups, whereas bDFS was significantly lower in the PSA 4.1-10 group (P < 0.001). bDFS was not statistically different between the PSA <2.5 and 2.5-4 groups (P = 0.300). Ten-year bDFS was 59.0%, 70.1%, and 76.4% in the PSA 4.1-10, 2.5-4, and <2.5 groups, respectively. For the PSA ≤4 ng/ml groups, age, race, margin status and pathologic stage, but not PSA, were independent predictors of bDFS, whereas age, pathologic Gleason score, and biochemical recurrence were associated with overall survival. Conclusions: Long-term oncological outcomes (overall, bDFS, and PCa-specific survival) of patients presenting with low PSA (≤4 ng/ml) were excellent in this study. Compared with PSA 4.1-10 ng/ml, patients presenting with PSA ≤4 ng/ml had better bDFS outcomes. However, there was no difference in long-term outcomes between PSA <2.5 and 2.5-4 ng/ml. abstract_id: PUBMED:19466429 Oncologic outcome after radical prostatectomy in men with PSA values above 20 ng/ml: a monocentric experience. Objective: To assess the cancer control afforded by radical prostatectomy (RP) in patients with prostate-specific antigen (PSA) values above 20 ng/ml.
Methods: We performed a retrospective review of prostate cancer patients who had initial PSA values above 20 ng/ml and were treated with surgery between 1995 and 2006. Biochemical recurrence was defined as a single rise in PSA levels over 0.2 ng/ml after surgery. Results: Overall, 41 patients were included. The mean age was 62 ± 6.43 years. The mean PSA was 27.39 ± 13.57 ng/ml (range 20.3-80). After pathological analysis, prostate cancer was organ-confined in 21 cases (51.2%) and locally advanced in 20 cases (48.8%). Positive surgical margins were detected in 36.5% of cases (n = 15). Five patients had lymph node involvement (12%). The mean prostate volume was 58 ± 28.9 cc. The mean length of follow-up after surgery was 94 ± 37 months. Median time to biochemical recurrence was 44.6 ± 22 months. The 5-year PSA-free survival rate was 53%. On univariate analysis, pathologic stage (p = 0.016), biopsy and pathological Gleason scores (p = 0.013; p = 0.02) and positive margins (p = 0.04) were associated with recurrence. Overall, 24 patients (58.5%) experienced a biochemical recurrence. Only margin status and pathological Gleason score were significant in multivariate analysis (p < 0.05). Conclusion: RP can be recommended as a viable primary treatment option in selected cases within the high-risk cohort of patients with pre-operative PSA values above 20 ng/ml. However, the modalities of adjuvant treatment following RP remain to be defined in patients likely to have an unfavourable course. abstract_id: PUBMED:21675092 Radical prostatectomy outcome when performed with PSA above 20 ng/ml. Many centres currently do not offer radical prostatectomy (RP) to men with high-risk localised prostate cancer due to concerns regarding poor outcome, despite evidence to the contrary. We identified 18 men undergoing RP with serum PSA >20 ng/ml (high-risk by the National Comprehensive Cancer Network definition) and a minimum follow-up of 12 years (mean 13.5). Mean preoperative PSA was 37.0 ng/ml (range 21.1-94.0). Prostatectomy pathology reported extracapsular disease in 16 (88.9%), positive surgical margins in 15 (83%) and positive pelvic lymph nodes in 5 (27.8%). Overall and cancer-specific survival were 83.3% and 88.2% at 5 years, and 72% and 76.5% at 10 years, respectively. With complete follow-up, 11 (61.1%) are alive, and 5 (27.8%) avoided any adjuvant therapy. Complete continence (defined as no involuntary urine leakage and no use of pads) was achieved in 60%, with partial continence in the remainder. We conclude that surgery for this aggressive variant of localised prostate cancer can result in a satisfactory outcome. Answer: Based on the information provided in the abstracts, it appears that offering radical prostatectomy to patients with a pre-operative prostate-specific antigen (PSA) greater than 15 ng/ml can be considered, but with careful patient selection and consideration of various factors. One study suggests that radical retropubic prostatectomy (RRP) is feasible in patients with pre-operative PSA levels between 15 and 25 ng/ml, especially if they have stage T1c, a Gleason score ≤ 7, and negative lymph node involvement. In this study, 41% of such patients had no biochemical recurrence with a mean follow-up of 58 months (PUBMED:15732232). Another study indicates that patients with PSA >20 ng/ml are at high risk of progression after radical prostatectomy, but it also suggests that outcomes may vary within this group.
Patients with PSA levels of 20.1-50 ng/ml had a better prognosis than those with PSA >50 ng/ml (PUBMED:14711982). Furthermore, a study on patients with high preoperative PSA levels (>20 ng/ml) treated with radical prostatectomy found that while anterior tumor location was associated with favorable pathological features and improved biochemical-free survival, it was not an independent predictor of biochemical recurrence (PUBMED:17698095). Additionally, a study on high-risk prostate cancer patients undergoing robot-assisted radical prostatectomy reported satisfactory oncological and functional outcomes, suggesting that radical prostatectomy can be an appropriate treatment option in selected high-risk patients (PUBMED:35944100). Lastly, a study assessing the cancer control afforded by radical prostatectomy in patients with initial PSA values above 20 ng/ml concluded that RP could be recommended as a viable primary treatment option in selected high-risk cases (PUBMED:19466429). In conclusion, while radical prostatectomy may not be suitable for all patients with a pre-operative PSA greater than 15 ng/ml, it can be offered to selected patients with careful consideration of their clinical stage, Gleason score, lymph node involvement, and other relevant factors. It is important to discuss the potential risks and benefits with the patient, as well as the likelihood of requiring additional treatments post-surgery.
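Note on methodology: several of the abstracts above (e.g., PUBMED:14711982, PUBMED:19466429, PUBMED:22795501) rest on Kaplan-Meier estimates of biochemical recurrence-free survival compared across preoperative PSA strata. The following Python sketch illustrates how such a comparison is typically computed with the lifelines library; it is a minimal illustration on invented follow-up data, not the authors' actual analysis, and the group sizes, time scales, and variable names are assumptions.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)

# Invented months from surgery to biochemical recurrence (PSA > 0.2 ng/ml) or censoring.
months_group1 = rng.exponential(scale=60.0, size=37)  # e.g., PSA 20.1-50 ng/ml stratum
months_group2 = rng.exponential(scale=30.0, size=23)  # e.g., PSA > 50 ng/ml stratum
# Event indicator: 1 = recurrence observed, 0 = censored at last follow-up.
events_group1 = rng.integers(0, 2, size=37)
events_group2 = rng.integers(0, 2, size=23)

kmf = KaplanMeierFitter()
kmf.fit(months_group1, event_observed=events_group1, label="PSA 20.1-50 ng/ml")
print("Median recurrence-free time, group 1:", kmf.median_survival_time_)
kmf.fit(months_group2, event_observed=events_group2, label="PSA > 50 ng/ml")
print("Median recurrence-free time, group 2:", kmf.median_survival_time_)

# Log-rank test for a difference in recurrence-free survival between the strata.
result = logrank_test(months_group1, months_group2,
                      event_observed_A=events_group1,
                      event_observed_B=events_group2)
print("Log-rank p-value:", round(result.p_value, 3))

A full analysis of the kind reported in PUBMED:22795501 would additionally fit a Cox proportional hazards model to adjust the PSA effect for covariates such as age, margin status, and pathologic stage.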
Instruction: The admissions process in a graduate-entry dental school: can we predict academic performance? Abstracts: abstract_id: PUBMED:23348481 The admissions process in a graduate-entry dental school: can we predict academic performance? Aim: To assess the association between admissions performance and subsequent academic achievement within a graduate-entry dental school. Methods: The study was conducted at the University of Aberdeen Dental School. UCAS forms for course applicants were reviewed and assigned a pre-admission score (PAS) and a tariff given for the UCAS personal statement (UCAS). Individuals ranked highest were invited to attend multiple mini-interviews (MMI), which were scored. Data were correlated with academic performance reported as the University Common Assessment Scale (0-20). Comparisons were also made between the first degree and subsequent educational achievement. Statistics: Data were analysed by multiple linear regression, Pearson correlation and unstacked ANOVA (IBM SPSS Statistics 19). Results: Data were obtained for 75 students (F: 50; M: 25). A correlation between performance at MMI and CAS scores was identified (r = 0.180, p = 0.001, df = 538). A correlation was also noted between each student's first degree and the CAS scores (F = 4.08, p = 0.001, df = 9). Conclusions: This study suggests that candidate performance at MMI might be a stronger predictor of academic and clinical performance of graduate-entry dental students compared to other pre-interview selection criteria. The first degree for such a programme also appears to be significant. abstract_id: PUBMED:35212103 Impact of role conflicts and self-efficacy on academic performance of graduate-entry healthcare students: A lagged study. Graduate entry healthcare students experience many challenges during their academic journey. The impact of these challenges needs to be considered to support students through their training and education. In this study, we examined the impact of experiencing these role conflicts (at the outset of the academic year), for example, family and caring responsibilities, activities with family/friends, and daily tasks/chores, on the academic performance (at the end of the academic year) of graduate-entry healthcare students. We also investigated the potential of students' self-efficacy for learning to mitigate the extent to which such role conflicts impact academic performance. Findings demonstrate that the more graduate entry healthcare students experienced conflicts between their life responsibilities and their academic responsibilities, the worse their academic performance was across the year. This negative relationship was somewhat mitigated by high self-efficacy for learning. The practical implications of our research suggest the need to provide specific mitigation strategies to support healthcare students regarding conflicts between their life/family responsibilities and their academic work. abstract_id: PUBMED:27543503 Comparison of graduate-entry and direct school leaver student performance on an applied dental knowledge test. Aims: To compare the academic performance of graduate-entry and direct school leavers in an undergraduate dental programme. Methods: This study examined the results of students in applied dental knowledge (ADK) progress tests conducted during two academic years. A mixed model analysis of variance (ANOVA) was conducted to compare the performance of graduate-entry and direct school leavers.
ADK was treated as a repeated measures variable, and the outcome variable of interest was the percentage score on the ADK. Results: The results show statistically significant main effects for ADK [F(1,113) = 61.58, P < 0.001, ηp² = 0.35], Cohort [F(1,113) = 88.57, P < 0.001, ηp² = 0.44] and Entry [F(1,113) = 11.31, P = 0.001, ηp² = 0.09]. That is, students do better on each subsequent test (main effect of ADK), students in later years of the programme perform better than those in earlier years (main effect of cohort), and graduate-entry students outperform direct school leavers. Conclusions: This is the first study to explore the differences in the academic performance of graduate-entry and direct school leavers in an undergraduate dental programme. The results show that the academic performance of graduate-entry students was better than that of direct school leavers in years 2 and 3. Further research is required to compare the performance of students longitudinally across the entire duration of undergraduate dental programmes and to evaluate whether this difference persists throughout. abstract_id: PUBMED:35914850 A comparison of the academic performance of graduate entry and undergraduate entry pharmacy students at the course exit level. Introduction: Graduate entry (GE) pharmacy students are trained in a shorter timeframe than undergraduate entry (UE) students. This study compares the academic performance of GE and UE pharmacy students at the course exit point. Methods: A retrospective analysis of final exam grades in written and objective structured clinical examination (OSCE) assessments was performed between GE and UE students from three graduating cohorts. The final written examination contained clinical case study questions, whereas the OSCE involved role play with simulated patients or doctors. Statistical analyses were performed by t-test and one-way analysis of variance at the .05 significance level, together with Pearson's correlation coefficient. Results: No significant difference in academic performance was seen between GE and UE groups at course exit (P > .05). There was a trend for GE students to perform marginally better in the OSCE than UE students. Females showed better performance in verbal communication than males. GE males showed significantly lower empathy scores than all other groups. No significant difference was seen in problem-solving scores amongst all groups. Both UE and GE groups scored significantly better in written examinations than in the OSCE. Conclusions: Graduate entry pharmacy students from an accelerated learning pathway and UE students performed similarly at the course exit point, providing empirical support for the non-traditional graduate entry pathway as a viable option. abstract_id: PUBMED:30858276 Elements of Undergraduate Education Related to Students' Academic Performance in the First Year of Dental School. The aim of this study was to improve understanding of predictors of student success in dental school. A total of 178 student records from the Classes of 2015 and 2016 at a U.S. dental school were reviewed for this retrospective study. The records assessed included admissions files with such elements as scores on the Dental Admission Test (DAT), participation in a pipeline program, and undergraduate transcripts; academic records from the first term of dental school (class rank, course remediation, and withdrawal/dismissal from dental school); and National Board Dental Examination (NBDE) Part I results.
The results showed that the DAT Perceptual Ability Test was positively related to performance in the first term of dental school (p = 0.030). The DAT Academic Average (p < 0.0001) and participation in a pipeline program (p = 0.006) were found to be predictors of performance in the lower 25% of the class by end-of-first-term rank. Taking organic chemistry in a summer term during undergraduate study was identified as a predictor variable for dismissal, withdrawal, or entry into a decompressed curriculum (p = 0.025). Although this analysis found that traditional predictors of academic success in dental school were associated with strong academic performance in the study sample, it also provided a more complex assessment of factors that may be associated with students who struggle in the first year. As the vast majority of students in this sample successfully completed dental school, the results were intended not to inform admissions criteria but rather to help academic and student affairs officers identify at-risk students in order to offer timely intervention. abstract_id: PUBMED:26114703 Predictive value of the admissions process and the UK Clinical Aptitude Test in a graduate-entry dental school. Aim: To assess the association between admissions performance and the UK Clinical Aptitude Test (UKCAT), and subsequent achievement within a graduate-entry dental school. Method: The study was conducted at the University of Aberdeen Dental School between 2010 and 2014. Student demographics, pre-admission scores (PAS), Universities and Colleges Admissions Service (UCAS) tariffs, multiple mini-interview (MMI) grades, and UKCAT scores and percentiles were correlated with academic performance reported as the University Common Assessment Scale (0-20). Statistics: Data were analysed by Pearson correlation and multiple regression (IBM SPSS Statistics 21). Results: Data were obtained for 71 students (F: 44; M: 27). Student age, MMI, UKCAT scores and UKCAT percentiles demonstrated a correlation with CAS scores (r² = 0.119, P = 0.001; r² = 0.136, P = 0.001; r² = 0.077, P = 0.019; and r² = 0.118, P = 0.001, respectively). Conclusions: This study suggests that student age, candidate performance at MMI and the UKCAT might be predictors of academic achievement for graduate-entry dental students. abstract_id: PUBMED:33815023 Predictors of academic integrity in undergraduate and graduate-entry masters occupational therapy students. Background: Academic integrity is viewed as honest and responsible scholarship and the moral code of academia. Reported incidences of academic dishonesty among health professional students are widespread and may be an indicator of future unprofessional behaviour in the workplace. Aim: This study investigated the potential predictors of academic integrity in undergraduate and graduate-entry masters occupational therapy students. Method: Occupational therapy students from five universities (n = 701 participants; 609 undergraduates; 92 graduate-entry masters) were recruited. Data were collected via a two-part self-report questionnaire that included six standardised scales: the Academic Dishonesty Scale; Academic Dishonesty in the Classroom Setting Scale; Academic Dishonesty in the Clinical/Practice Education Setting Scale; Moral Development Scale for Professionals; Academic Dishonesty Tendency Scale; and Perceived Academic Sources of Stress. Data analysis involved multi-linear regression analyses with bootstrapping.
Result: Significant predictors of academic integrity in occupational therapy students included age, gender, grade point average, public meaning, moral practice, general tendency towards cheating, tendency towards dishonesty in the conduct and reporting of research findings, tendency towards not providing appropriate references and acknowledgements, and pressures to perform well academically. Conclusion: These findings will assist educators in identifying vulnerable students potentially prone to academic integrity infringements and in implementing proactive strategies with them. Further studies are recommended to explore further predictors of students' academic integrity. abstract_id: PUBMED:25813133 Transfer students' personality types and their academic performance in a graduate-entry dental school. Purpose: The study was designed to identify the personality types represented among transfer students at Seoul National University School of Dentistry (SNU SD) and to examine which types were associated with stronger academic performance. Methods: Among 40 students who transferred to SNU SD in 2004, 15 voluntarily completed the Myers-Briggs Type Indicator (MBTI; GS form); it was then tested whether their MBTI types were related to their final grades. In addition, 32 of the 50 students enrolled through the traditional pre-dental system served as a control group. Results: ISTJ was the most common type both among the transfer students and among the other, traditionally admitted dental students who excelled academically. A clear majority of transfer students were Introverted (67%), Sensing (80%), Thinking (86%), and Judging (80%), with the S-J pattern being statistically significant. Conclusion: SNU SD has been rebuilding its programme around student- and outcome-centered dental education to bring it up to global standards. Understanding the personality types of dental students from different academic backgrounds, and how to support their learning according to individual personality patterns, is therefore a crucial part of that process. abstract_id: PUBMED:26901809 Does a selection interview predict year 1 performance in dental school? It is important for dental schools to select students who will complete their degree and progress on to become the dentists of the future. The process should be transparent, fair and ethical and utilise selection tools that select appropriate students. The interview is an integral part of UK dental schools' student selection procedures. Objective: This study was undertaken in order to determine whether different interview methods (Cardiff with a multiple mini interview and Newcastle with a more traditional interview process), along with other components used in selection, predicted academic performance in students. Methods: The admissions selection data for two dental schools (Cardiff and Newcastle) were collected and analysed alongside student performance in academic examinations in Year 1 of the respective schools. Correlation statistics were used to determine whether selection tools had any relevance to academic performance once students were admitted to their respective universities. Results: Data were available for a total of 177 students (77 Cardiff and 100 Newcastle).
Examination performance did not correlate with admission interview scores at either school; however, UKCAT score was linked to poor academic performance. Discussion: Although interview methodology does not appear to correlate with academic performance, it remains an integral and very necessary part of the admissions process. Ultimately, schools need to be comfortable with their admissions procedures in attracting and selecting the calibre of students they desire. abstract_id: PUBMED:31865837 Rethinking the Admissions Interview: Piloting Multiple Mini-Interviews in a Graduate Psychology Program. Health profession programs routinely utilize traditional interviews in admissions as a means of assessing important non-academic characteristics (e.g., critical thinking, interpersonal skills, judgment) of candidates. However, the reliability and validity of traditional interviews is highly questionable. Given this, multiple health profession programs (e.g., medicine, nursing, pharmacy, physical therapy) have implemented multiple mini-interviews as an alternative for assessing non-academic characteristics. This paper describes the development and implementation of multiple mini-interviews in the admissions process for a doctoral clinical psychology program, one of the health professions yet to use multiple mini-interviews. This paper also examines the feasibility and acceptability of the multiple mini-interviews in this program. Results of a mixed-method survey of all 120 candidates who participated in admissions days are presented along with discussion of factors associated with satisfaction and dissatisfaction. Recommendations for program refinement and application to other graduate psychology programs for improved admissions processes are discussed. Answer: The admissions process in a graduate-entry dental school involves various components that aim to predict academic performance. Studies have shown that certain elements of the admissions process can indeed be correlated with subsequent academic achievement. One study conducted at the University of Aberdeen Dental School found that candidate performance at multiple mini-interviews (MMI) might be a stronger predictor of academic and clinical performance of graduate-entry dental students compared to other pre-interview selection criteria. Additionally, the first degree of the student also appears to be significant in predicting academic success (PUBMED:23348481). Another study at the same institution suggested that student age, MMI performance, and the UK Clinical Aptitude Test (UKCAT) scores might be predictors of academic achievement for graduate-entry dental students (PUBMED:26114703). A comparison of graduate-entry and direct school leaver student performance on an applied dental knowledge test revealed that graduate-entry students outperformed direct school leavers in years 2 and 3 of the program, suggesting that graduate-entry students may have an academic advantage (PUBMED:27543503). However, it is important to note that the admissions interview, whether a traditional interview or MMI, does not always correlate with academic performance once students are admitted to their respective universities (PUBMED:26901809). This indicates that while certain aspects of the admissions process can predict academic performance, they are not foolproof indicators.
Moreover, the impact of role conflicts and self-efficacy on academic performance has been studied, showing that graduate-entry healthcare students who experience conflicts between their life responsibilities and their academic responsibilities tend to perform worse academically. However, high self-efficacy for learning can mitigate this negative relationship to some extent (PUBMED:35212103). In summary, while certain elements of the admissions process, such as MMI performance and UKCAT scores, can predict academic performance in graduate-entry dental school, they are not the sole determinants. Other factors, including the student's first degree, age, and self-efficacy, as well as the potential for role conflicts, also play a role in academic success.
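Note on methodology: the Aberdeen studies above (PUBMED:23348481, PUBMED:26114703) report bivariate Pearson correlations between admissions measures and the 0-20 Common Assessment Scale, followed by multiple regression. The Python sketch below illustrates that two-step analysis on an invented cohort; the column names, effect sizes, and data are assumptions made for illustration and do not reproduce the published datasets.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 71  # cohort size reported in PUBMED:26114703

# Invented admissions data; none of these values come from the papers.
df = pd.DataFrame({
    "age": rng.normal(25, 3, n),        # age at entry
    "mmi": rng.normal(70, 10, n),       # multiple mini-interview score
    "ukcat": rng.normal(2500, 300, n),  # UKCAT total score
})
# Toy outcome on the 0-20 University Common Assessment Scale (CAS).
df["cas"] = (10 + 0.05 * (df["mmi"] - 70)
             + 0.002 * (df["ukcat"] - 2500)
             + rng.normal(0, 1.5, n))

# Step 1: bivariate Pearson correlations, as reported for MMI and UKCAT vs CAS.
for predictor in ["age", "mmi", "ukcat"]:
    r, p = pearsonr(df[predictor], df["cas"])
    print(f"{predictor}: r = {r:.3f} (r^2 = {r * r:.3f}), p = {p:.4f}")

# Step 2: multiple linear regression of CAS on all admissions measures at once.
X = sm.add_constant(df[["age", "mmi", "ukcat"]])
print(sm.OLS(df["cas"], X).fit().summary())

For context, the r² values reported in PUBMED:26114703 (roughly 0.08-0.14) correspond to modest correlations, consistent with the answer's caveat that these admissions measures are partial rather than foolproof predictors.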
Instruction: Do sexual health campaigns work? Abstracts: abstract_id: PUBMED:35151589 Research on the design and evaluation of sexual health prevention campaigns aimed at young people in Spain from 1987 to 2016 Introduction: New HIV diagnoses and sexually transmitted infections (STIs) continue to be a public health problem in Spain. Since the beginning of the HIV epidemic in our country, the Health Services have developed prevention campaigns on sexual and reproductive health. Several authors warn about the poor evaluation of these campaigns. Objective: To evaluate the design and evaluation strategies of the sexual health campaigns developed in Spain from 1987 to 2016. Methods: Observational epidemiological study based on a detailed retrospective collection of data on the design and evaluation of the sexual health campaigns, obtained from the National AIDS Plan, official agencies and the Health Services. Statistical analysis was performed using the UNAIDS indicator system. Results: Eighty-two campaigns have been developed since 1987, 27 of them aimed at young people. All of the campaigns aimed at young people addressed general information about HIV infection and the promotion of condom use; however, other issues around risky sexual behavior were virtually absent. The prevention of pregnancy in young people is present in less than 25% of campaigns. Very few planning and evaluation reports are available for these campaigns. Conclusion: The data indicate the need to improve actions aimed at sexually active young people, with actions that are better planned and evaluated against UNAIDS criteria and efficacy indicators. abstract_id: PUBMED:18402232 Ethics and efficacy in sexual health campaigns The effectiveness of the various sexual education campaigns carried out in Spain over the last 15 years has scarcely been analyzed. These campaigns have been aimed primarily at adolescents, and their declared purpose has been to promote so-called "safe sex" based solely on information about barrier methods. To clarify the efficacy of these campaigns, the present work retrospectively examined the epidemiological data provided by the Department of Health and the Spanish National Institute of Youth. As far as they can be measured, trends in the consequences of adolescent sexual practices seen in the healthcare setting (abortions and unintended pregnancies) were also analyzed, together with data from the National Registry of Epidemiological Surveillance on trends in sexually transmitted diseases. The analysis shows that among teenagers aged 15 to 19, the percentage of pregnancies ending in abortion rose progressively from 20% in 1990 to 44% in 2000, reaching 46.6% in 2003. These data track a progressive increase in the total number of abortions in Spain, which reached 13.7% in 2005. Consumption of postcoital pills by adolescents was also analyzed: it rose from 160,000 prescriptions in 2001 to nearly half a million units in 2005, meaning that demand for this resource tripled over five years without any stabilization in the number of new abortions per year. Reported sexually transmitted diseases show increases of 79% in syphilis infections and 45.8% in uncomplicated gonorrhoea.
In conclusion, from both an ethical perspective and the perspective of healthcare efficiency, the available data leave the validity of the "safe sex" campaigns in doubt. The refusal to include in these campaigns the promotion of abstinence in early adolescence, and the refusal to promote fidelity by limiting the number of sexual partners, seems justifiable only on ideological rather than health grounds, implying clear harm to the population at risk, quite apart from the already available data on the inefficacy of the campaigns previously carried out. abstract_id: PUBMED:35805742 A Cross-Sectional, Exploratory Study on the Impact of Night Shift Work on Midwives' Reproductive and Sexual Health. Background: Shift work is the basis of health care system functioning, but its non-standard schedules enforce abrupt changes in the timing of sleep and light-dark exposure and can contribute to an increased risk of various medical conditions, including reproductive and sexual health issues. The purpose of the study was to assess the impact of shift work with night shifts on midwives' reproductive and sexual health. Methods: This cross-sectional, exploratory study included 520 midwives. A descriptive questionnaire was distributed in person (414) and online (106) from July 2019 to May 2020. We used the standardized Female Sexual Function Index (PL-FSFI) questionnaire and proprietary research tools (covering demographic and social data and reproductive health). All statistical calculations were performed with the IBM SPSS 23 statistical package. Results: Shift work affects midwives' reproductive and sexual health. Midwives working night shifts are more likely to experience reproductive problems and sexual dysfunctions. The most pronounced differences are observed in the experience of infertility and the number of miscarriages. PL-FSFI results clearly showed the adverse impact of shift work, including night shifts, on various dimensions of sexual functioning. Conclusion: Shift work negatively affects reproductive and sexual health and leads to work-life conflict. It is necessary to develop procedures that minimize shift rotation and to implement work schedules that allow for recuperation or rest and ensure a proper family and social life. abstract_id: PUBMED:35796707 Perspectives on Cigarette Use, Vaping, and Antitobacco Campaigns Among Adolescent Sexual Minority Males and Gender Diverse Youth. Purpose: This qualitative study examined perceived benefits and drawbacks of smoking/vaping and attitudes toward antitobacco campaigns among adolescent sexual minority males and gender-diverse (ASMM/GD) youth. Methods: In July 2019, 215 U.S. ASMM/GD youth (mean age 16.78, 95.3% cisgender male, 60.0% racial/ethnic minority) answered questions about smoking/vaping behaviors, motivations for smoking/vaping, and attitudes toward antitobacco campaigns via an online survey. Data were analyzed with thematic analysis. Results: Overall, 17.2% of participants had smoked cigarettes, and 34.9% had vaped. Teens described psychological (e.g., stress relief), chemical (e.g., nicotine buzz), and social incentives (e.g., fitting in with peers) for smoking/vaping. Teens also reported concerns about physical health, costs, and self-image as drawbacks of smoking/vaping. Most considered antitobacco campaigns unrelatable and uninteresting, while others reported that campaigns reinforced their decisions to not smoke/vape.
Most participants wanted antitobacco campaigns to be tailored to the sexual and gender minority (SGM) community. Conclusions: These findings shed light on ASMM/GD youth's perspectives on smoking/vaping and antitobacco campaigns. Results suggest that equipping teens with skills to cope with minority stress and resist peer pressure could indirectly reduce smoking/vaping, and that SGM-inclusive campaigns may better reach SGM adolescents. abstract_id: PUBMED:35920340 Are Anti-Prostitution Advertising Campaigns Effective? An Experimental Study. Many governments invest public funds in communication interventions and campaigns against prostitution and sexual exploitation in an attempt to change attitudes toward prostitution and eventually decrease its consumption. Despite the considerable investment that public institutions have made in campaigns against prostitution and sexual slavery, no known empirical studies have evaluated the effectiveness of such campaigns on attitudes and behavioral change. The messages of these campaigns usually center on one of two thematic focuses: prostituted women who suffer exploitation, and male consumers of prostitution. The present study examines the impact of different anti-prostitution advertisements on attitudes among male participants (N = 155). Specifically, the experiment aims to test the differential effect of these two focuses, compared to a no-advertisement control condition, on social support for prostitution, negative and incorrect beliefs about prostitutes, and family values related to prostitution. The results show that compared with the no-advertisement control condition, advertisements focused on men who use prostitutes have a significant effect on social support toward prostitution and incorrect beliefs about prostitutes, whereas advertisements focused on female prostitutes have no effect. The results have practical implications for governments and councils regarding the efficacy of this kind of public communication campaign against prostitution consumption. abstract_id: PUBMED:31795573 Suggestions on environmental and health work from Health Environment Promotion Campaigns The Health Environment Promotion Campaigns (HEPCs) focus on the major environmental health issues and relevant factors of concern among the general public, and promote the achievement of the national health goal. Based on a summary and analysis of the background, key indicators, and specific actions in the different domains of the HEPCs, this paper proposes suggestions for implementing HEPCs scientifically in five areas, namely formulating implementation plans, establishing pilot areas, building comprehensive service platforms, improving residents' health literacy, and strengthening the development of protection technologies and standards. abstract_id: PUBMED:27332145 MSW student perceptions of sexual health as relevant to the profession: Do social work educational experiences matter? Many social work clients are at an increased risk for negative outcomes related to sexual behavior, including unwanted pregnancies and sexually transmitted infections (STIs). However, there is a dearth of literature on social work students' experiences with these topics in social work classrooms and their perceptions about the topic's relevance to their practice.
The purpose of this study is to explore how experiences with STIs and contraception as topics in social work education and practica relate to student perceptions of sexual health as a relevant topic for social work. Among a national sample of MSW students (N = 443), experiences with STIs and contraception as topics in practica were significantly related to perceptions of sexual health's relevance to social work. Findings and implications are discussed. abstract_id: PUBMED:35447680 Increasing Condom Use and STI Testing: Creating a Behaviourally Informed Sexual Healthcare Campaign Using the COM-B Model of Behaviour Change. Sexually transmitted infections (STIs) are a major public health challenge. Although theoretically informed public health campaigns are more effective for changing behaviour, there is little evidence of their use when campaigns are commissioned to the commercial sector. This study describes the application of the COM-B model to a sexual health campaign that brought together expertise from academics, sexual healthcare, and marketing and creative professionals. Insights were gathered following a review of the relevant academic literature. Barriers and facilitators to condom use and STI testing were explored with the use of the COM-B model and the Behaviour Change Wheel in a workshop attended by academics, behavioural scientists, healthcare experts and creative designers. Feedback on the creative execution of the campaign was obtained from healthcare experts and via surveys. Barriers to psychological capability, automatic and reflective motivation, and social opportunity were identified as targets for the campaign, and creative solutions to these barriers were collaboratively devised. The final sexual health campaign was rated positively in its ability to change attitudes and intentions regarding the use of condoms and STI testing. This study describes the application of the COM-B model of behaviour change to a public sexual health campaign that brought together academic, public and commercial sector expertise. The barriers and facilitators identified in this collaborative process represent potential targets for future public health communication campaigns. abstract_id: PUBMED:36767222 An Evaluation of Indoor Sex Workers' Sexual Health Access in Metro Vancouver: Applying an Occupational Health & Safety Lens in the Context of Criminalization. The criminalization of sex work has been consistently shown to undermine workers' Occupational Health and Safety (OHS), including sexual health. Drawing on the 'Guide to OHS in the New Zealand Sex Industry' (the Guide), we assessed barriers to sexual health best practices among indoor sex workers in Metro Vancouver, Canada, in the context of ongoing criminalization. Part of a longstanding community-based study, this analysis drew on 47 qualitative interviews (2017-2018) with indoor sex workers and third parties. Participants' narratives were analyzed drawing on a social determinants of health framework and on the Guide, with a specific focus on sexual health. Our findings suggest that sex workers and third parties utilize many sexual health strategies, including use of Personal Protective Equipment (PPE) and peer-driven sexual health education.
However, participant narratives demonstrate how structural factors such as criminalization, immigration, and stigma limit the accessibility of additional OHS best practices outlined in the Guide and beyond, including access to non-stigmatizing sexual health assessments and the distribution of diverse PPE by third parties. Our current study supports the need for full decriminalization of sex work, including im/migrant sex work, to allow for the uptake of OHS guidelines that support the wellbeing and autonomy of all sex workers. abstract_id: PUBMED:34389891 A Longitudinal Analysis of the Impact of Health Behavior Campaigns on HIV-Risk Behaviors and HIV Incidence in Eswatini. Between 2010 and 2015, Eswatini conducted mass media health behavior campaigns (HBCs) designed to avert new HIV infections. Using longitudinal data from the nationally representative Swaziland HIV Incidence Measurement Survey of 2011, we describe the impact of exposure to HBCs on selected HIV risk behaviors and HIV incidence among sexually active, HIV-negative adults (n = 11,232). Exposure to partner reduction HBCs was significantly associated with reporting fewer (i.e., 1 versus 2, or 2 versus ≥3) sexual partners in the prior 6 months at baseline among women (aOR = 3.02, 95% CI 1.38-6.62), and at both baseline and 6-month follow-up among men (aOR = 2.26, 95% CI 1.49-3.44 and aOR = 1.95, 95% CI 1.26-3.00, respectively). Despite these reported partner reductions, there was no association between HBC exposure and prospectively observed HIV seroconversions (n = 121). This analysis strengthens the evidence that HIV prevention at the population level requires integrated strategies. Answer: The effectiveness of sexual health campaigns is a topic of ongoing research and debate, and the research provided offers mixed findings on their impact. In Spain, sexual health prevention campaigns aimed at young people from 1987 to 2016 have been criticized for their poor evaluation and for rarely addressing issues beyond general information about HIV and condom promotion. The campaigns also rarely addressed the prevention of pregnancy in young people, and few planning and evaluation reports were available, indicating a need for improvement in campaign planning and evaluation (PUBMED:35151589). An analysis of sexual health campaigns in Spain over 15 years suggested that the campaigns, which focused on promoting "safe sex" through barrier methods, may not have been effective: the data showed increases in abortions, unintended pregnancies, and sexually transmitted diseases among adolescents, raising questions about these campaigns from both ethical and efficacy perspectives (PUBMED:18402232). A study on the impact of night shift work on midwives' reproductive and sexual health found that shift work, including night shifts, negatively affects reproductive and sexual health, suggesting that work schedules and conditions can influence sexual health outcomes (PUBMED:35805742). Research on adolescent sexual minority males and gender-diverse youth's perspectives on smoking/vaping and antitobacco campaigns found that most participants considered antitobacco campaigns unrelatable and uninteresting, although some reported that campaigns reinforced their decisions to not smoke/vape. This suggests that tailoring campaigns to specific communities may improve their effectiveness (PUBMED:35796707).
An experimental study on anti-prostitution advertising campaigns showed that advertisements focused on male consumers of prostitution had a significant effect on social support toward prostitution and incorrect beliefs about prostitutes, while those focused on female prostitutes had no effect. This indicates that the focus of a campaign can influence its effectiveness (PUBMED:35920340). The Health Environment Promotion Campaigns (HEPCs) suggest a scientific approach to implementing health campaigns, including sexual health, by considering factors such as implementation plans, pilot areas, service platforms, health literacy, and development of protection technologies and standards (PUBMED:31795573). Social work students' perceptions of sexual health as relevant to their profession were influenced by their educational experiences, indicating that education can shape attitudes toward the importance of sexual health in professional practice (PUBMED:27332145).
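Note on methodology: campaign-evaluation studies such as PUBMED:34389891 express exposure effects as adjusted odds ratios (aORs) from logistic regression. The following Python sketch shows how such aORs are typically computed with statsmodels; the dataset, covariates, and effect sizes are fabricated for illustration and are not the Eswatini survey data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 2000  # invented sample size, far smaller than the survey's n = 11,232

# Fabricated survey records: campaign exposure plus two arbitrary covariates.
df = pd.DataFrame({
    "hbc_exposed": rng.integers(0, 2, n),
    "age": rng.integers(18, 50, n),
    "urban": rng.integers(0, 2, n),
})
# Toy outcome: 1 = reported fewer sexual partners in the prior 6 months.
linpred = -1.0 + 0.8 * df["hbc_exposed"] + 0.01 * df["age"] + 0.2 * df["urban"]
df["fewer_partners"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

# Logistic regression of the outcome on exposure, adjusting for the covariates.
X = sm.add_constant(df[["hbc_exposed", "age", "urban"]])
fit = sm.Logit(df["fewer_partners"], X).fit(disp=0)

# Exponentiated coefficients are the adjusted odds ratios, with 95% CIs.
summary = pd.concat(
    [np.exp(fit.params).rename("aOR"),
     np.exp(fit.conf_int()).rename(columns={0: "CI low", 1: "CI high"})],
    axis=1)
print(summary)

An aOR above 1 on the exposure term, with a confidence interval excluding 1, corresponds to the kind of association the study reports between campaign exposure and partner reduction; as that study itself illustrates, such behavioral associations do not guarantee a downstream effect on HIV incidence.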